https://www.codecademy.com/courses/machine-learning/lessons/logistic-regression/exercises/log-odds
So far, we've learned that the equation for a logistic regression model looks like this:

$ln(\frac{p}{1-p}) = b_{0} + b_{1}x_{1} + b_{2}x_{2} +\cdots + b_{n}x_{n}$

Note that we've replaced y with the letter p because we are going to interpret it as a probability (e.g., the probability of a student passing the exam). The whole left-hand side of this equation is called the log-odds because it is the natural logarithm (ln) of the odds (p/(1-p)). The right-hand side of this equation looks exactly like regular linear regression!

In order to understand how this link function works, let's dig into the interpretation of log-odds a little more. The odds of an event occurring are:

$Odds = \frac{p}{1-p} = \frac{P(event\ occurring)}{P(event\ not\ occurring)}$

For example, suppose that the probability a student passes an exam is 0.7. That means the probability of failing is 1 - 0.7 = 0.3. Thus, the odds of passing are:

$Odds\ of\ passing = \frac{0.7}{0.3} = 2.\overline{33}$

This means that students are 2.33 times more likely to pass than to fail.

Odds can only be a positive number. When we take the natural log of the odds (the log-odds), we transform the odds from a positive value to a number between negative and positive infinity — which is exactly what we need! The logit function (log-odds) transforms a probability (a number between 0 and 1) into a continuous value that can be positive or negative.

### Instructions

1. Suppose that there is a 40% probability of rain today (p = 0.4). Calculate the odds of rain and save it as odds_of_rain. Note that the odds are less than 1 because the probability of rain is less than 0.5. Feel free to print odds_of_rain to see the results.
2. Use the odds that you calculated above to calculate the log odds of rain and save it as log_odds_of_rain. You can calculate the natural log of a value using the numpy.log() function. Note that the log odds are negative because the probability of rain was less than 0.5. Feel free to print log_odds_of_rain to see the results.
3. Suppose that there is a 90% probability that my train to work arrives on-time. Calculate the odds of my train being on-time and save it as odds_on_time. Note that the odds are greater than 1 because the probability is greater than 0.5. Feel free to print odds_on_time to see the results.
4. Use the odds that you calculated above to calculate the log odds of an on-time train and save it as log_odds_on_time. Note that the log odds are positive because the probability of an on-time train was greater than 0.5. Feel free to print log_odds_on_time to see the results.
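A sketch of what a solution to these four steps might look like in Python (the variable names follow the exercise; numpy is the only dependency):

```python
import numpy as np

# 1. Odds of rain: p = 0.4, so odds = p / (1 - p)
p_rain = 0.4
odds_of_rain = p_rain / (1 - p_rain)
print(odds_of_rain)  # about 0.667, less than 1 because p < 0.5

# 2. Log odds of rain: negative because the odds are below 1
log_odds_of_rain = np.log(odds_of_rain)
print(log_odds_of_rain)  # about -0.405

# 3. Odds of an on-time train: p = 0.9
p_on_time = 0.9
odds_on_time = p_on_time / (1 - p_on_time)
print(odds_on_time)  # 9.0, greater than 1 because p > 0.5

# 4. Log odds of an on-time train: positive because the odds exceed 1
log_odds_on_time = np.log(odds_on_time)
print(log_odds_on_time)  # about 2.197
```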
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 3, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8463050723075867, "perplexity": 421.82468636894197}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335326.48/warc/CC-MAIN-20220929065206-20220929095206-00717.warc.gz"}
http://koasas.kaist.ac.kr/handle/10203/27283
#### cDNA cloning and characterization of nucleus-confined hnRNA in HeLa cells

To understand the structure and physiological function of hnRNA, cDNA clones for the most typical and abundant hnRNA were isolated. Pure hnRNA was separated from ribosomal RNA and tRNA by immunoaffinity chromatography with the monoclonal antibody (4F4) against the hnRNP proteins. First-strand cDNA was synthesized using a random hexamer as a primer, and a cDNA library was constructed in the $\lambda$gt11 vector. Recombinant phage clones of the hnRNA were isolated from the cDNA library by plaque hybridization with $^{32}P$-labeled first-strand cDNA of the hnRNA as a probe. Strongly positive plaques were labeled as the S series and weakly positive plaques as the W series. It was assumed that the signal intensity and the frequency of the positive plaques were proportional to the abundance of the specific hnRNA. Most of the nucleus-confined hnRNA contains the Alu repeated sequence, on the basis of Southern blot hybridization and nucleotide sequence analysis. Comparison of the Alu-related sequences shows that the base substitutions in Alu repeated sequences are fairly random. However, 17 bp direct repeats (TTGCAGTGAGCCAAGAT) are well conserved. Another direct repeat was also present in their flanking regions. This suggests that Alu repeated sequences are the result of an insertion mechanism. The cDNA clone W16W hybridized to discrete hnRNA transcripts in the non-polyadenylated nuclear RNA fraction, but barely, if at all, to polyadenylated nuclear RNA and cytoplasmic mRNA. This suggests that the transcripts corresponding to W16W are confined to the nucleus and are not polyadenylated at their 3'-termini. It also suggests that the expression of the transcript could be controlled at the post-transcriptional or transport level rather than at the transcriptional or translational level. The size of the corresponding hnRNAs is estimated to be about 1.2-1.3 kb. The transcription of the transcript was sensitive to Actinomycin D treatment, arguing for an RNA polymerase II transcript. The transcript of W16W however remains re...

Byun, Si-Myung (변시명), researcher

Publisher: Korea Advanced Institute of Science and Technology (KAIST)
Issue Date: 1990
Identifier: 61468/325007 / 000835097
Language: eng
Description: Ph.D. thesis - KAIST, Department of Biotechnology, 1990.2, [ix, 141 p.]
Keywords: affinity chromatography
URI: http://hdl.handle.net/10203/27283 http://library.kaist.ac.kr/search/detail/view.do?bibCtrlNo=61468&flag=t
Appears in Collection: BS-Theses_Ph.D. (doctoral theses)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6997238993644714, "perplexity": 14182.577331877137}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886120573.0/warc/CC-MAIN-20170823132736-20170823152736-00451.warc.gz"}
https://epiga.episciences.org/4944
## Beauville, Arnaud - Limits of the trivial bundle on a curve

epiga:4454 - Épijournal de Géométrie Algébrique, November 1, 2018, Volume 2 - https://doi.org/10.46298/epiga.2018.volume2.4454

Limits of the trivial bundle on a curve

Authors: Beauville, Arnaud

We attempt to describe the rank 2 vector bundles on a curve C which are specializations of the trivial bundle. We obtain a complete classification when C is Brill-Noether generic or when it is hyperelliptic; in both cases all limit vector bundles are decomposable. We give examples of indecomposable limit bundles for some special curves.

Volume: Volume 2
Published on: November 1, 2018
Submitted on: April 24, 2018
Keywords: Mathematics - Algebraic Geometry
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8913848400115967, "perplexity": 1565.6747372570842}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057476.6/warc/CC-MAIN-20210410181215-20210410211215-00311.warc.gz"}
https://www.gnu.org/software/gnuastro/manual/html_node/MakeCatalog-general-settings.html
## GNU Astronomy Utilities

#### 7.3.5.2 MakeCatalog general settings

Some of the columns require particular settings (for example the zero point magnitude for measuring magnitudes); the options in this section can be used for such configurations.

-z FLT
--zeropoint=FLT
The zero point magnitude for the input image, see Flux Brightness and magnitude.

-E
--skysubtracted
If the image has already been sky subtracted by another program, then you need to notify MakeCatalog through this option. Note that this is only relevant when the signal-to-noise ratio is to be calculated.

-T FLT
--threshold=FLT
For all the columns, only consider pixels that are above a given relative threshold. Symbolizing the value of this option as $$T$$, the Sky value for a pixel at $$(i,j)$$ as $$\mu_{ij}$$, and its standard deviation as $$\sigma_{ij}$$, that pixel will only be used if its value ($$B_{ij}$$) satisfies the condition $$B_{ij}>\mu_{ij}+{T}\sigma_{ij}$$. The only calculation that will not be affected is the average river value (--riverave), since river values are used as a reference. A commented row will be added to the header of the output catalog that prints the given value; since this is a very important issue, it starts with **IMPORTANT**.

NoiseChisel will detect very diffuse signal, which is useful in most cases where the aggregate properties of the detections are desired, since there is signal there (with the desired certainty). However, in some cases only the properties of the peaks of the objects/clumps are desired, for example when attempting to separate stars from galaxies: the peaks are the major target and the diffuse regions only act to complicate the separation. With this option, MakeCatalog will simply ignore any pixel below the relative threshold.

This option is not mandatory, so if it isn't given (after reading the command-line and all configuration files, see Configuration files), MakeCatalog will still operate. However, if it has a value in any lower-level configuration file and you want to ignore that value for this particular run or in a higher-level configuration file, then set it to NaN, for example --threshold=nan. Gnuastro uses the C library's strtod function to read floats, which is not case-sensitive in reading NaN values. But to be consistent, it is good practice to only use nan.

--nsigmag=FLT
The median standard deviation (from the standard deviation image) will be multiplied by the value to this option and its magnitude will be reported in the comments of the output catalog. This is a per-pixel value, not per object/clump: it is not measured over an area or aperture, like the common $$5\sigma$$ values that are commonly reported as a measure of depth or the upper-limit measurements (see Quantifying measurement limits).
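The threshold condition is easy to state in array terms. Below is a hypothetical numpy sketch of the per-pixel test described above (this is an illustration, not part of Gnuastro itself; the arrays stand in for the input image and its Sky measurements):

```python
import numpy as np

# Hypothetical arrays standing in for MakeCatalog's inputs:
# image values B, per-pixel Sky values mu, and Sky standard deviation sigma.
B = np.random.default_rng(0).normal(10.0, 2.0, size=(100, 100))
mu = np.full_like(B, 10.0)      # Sky value per pixel
sigma = np.full_like(B, 2.0)    # Sky standard deviation per pixel

T = 1.5  # relative threshold, as in --threshold=1.5

# A pixel is only used if B_ij > mu_ij + T * sigma_ij
use = B > mu + T * sigma
print(use.mean())  # fraction of pixels above the relative threshold
```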
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8802396655082703, "perplexity": 1296.080179493907}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823260.52/warc/CC-MAIN-20171019084246-20171019104246-00701.warc.gz"}
https://www.physicsforums.com/threads/feather-falling-on-mars.172656/
# Feather Falling on Mars

1. Jun 3, 2007

### Worzo

Firstly, this isn't my homework question. I was trying to answer another, broader question for a student, and it boiled down to this one. There's quite a subtle point here, I think, but I just can't grasp it.

Consider stable atmospheric conditions on Mars and Earth. A feather is dropped from a great height on both planets. Which planet gives the feather the higher terminal velocity? Data given is:
- Mars gravity = (1/3)g
- Earth atmosphere: 1000 mbar
- Mars atmosphere: 10 mbar

So terminal velocity goes as the square root of the gravitational force and the inverse square root of the viscosity. I can't work out how the viscosity changes with temperature and pressure. Gut feeling tells you that the weaker gravity (a third of Earth's) contributes to lowering the terminal velocity. However, doesn't the fact that the pressure is 100 times smaller contribute to the viscosity somehow? I remember proving in kinetic theory that viscosity is independent of pressure (except for high pressures), but does that hold here? I can't help thinking temperature has something to do with it as well. Any explanation/calculation of Terrestrial/Martian atmospheric viscosity would be most appreciated.

3. Jun 4, 2007

### DaveC426913

AFAIK, Mars' atmo is more akin to vacuum than it is to a real atmo.

4. Jun 5, 2007

### Worzo

That's what I thought, but I can't find any expression for how the viscosity changes at low pressure.
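(Not from the thread: the square-root scaling quoted in the first post is the quadratic-drag result, in which the gas density, rather than the viscosity, appears. A rough Python sketch under assumed round numbers: 1.2 kg/m^3 for Earth air and roughly 0.02 kg/m^3 for Mars' 10 mbar CO2 atmosphere, with made-up feather data.)

```python
import math

def terminal_velocity(m, g, rho, cd, area):
    """Terminal speed for quadratic drag: m*g = 0.5*rho*cd*area*v**2."""
    return math.sqrt(2 * m * g / (rho * cd * area))

m, cd, area = 1e-3, 1.0, 5e-3                # 1 g feather, assumed drag data
v_earth = terminal_velocity(m, 9.81, 1.2, cd, area)
v_mars = terminal_velocity(m, 9.81 / 3, 0.02, cd, area)  # (1/3)g, thin atmosphere
print(v_earth, v_mars)  # ~1.8 m/s vs ~8.1 m/s: higher on Mars despite weaker gravity
```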
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8856029510498047, "perplexity": 1435.1089477810795}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542323.80/warc/CC-MAIN-20161202170902-00217-ip-10-31-129-80.ec2.internal.warc.gz"}
https://gmatclub.com/forum/a-cylindrical-tank-has-a-base-with-a-circumference-of-105453.html
# A cylindrical tank has a base with a circumference of

Intern
Joined: 21 Jun 2010

A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

28 Nov 2010, 01:04

A cylindrical tank has a base with a circumference of $$4\sqrt{\pi\sqrt{3}}$$ meters and an equilateral triangle painted on the interior side of the base. A grain of sand is dropped into the tank, and has an equal probability of landing on any particular point on the base. If the probability of the grain of sand landing on the portion of the base outside the triangle is 3/4, what is the length of a side of the triangle?

A. $$\sqrt{2\sqrt{6}}$$
B. $$\frac{\sqrt{6\sqrt{6}}}{2}$$
C. $$\sqrt{2\sqrt{3}}$$
D. $$\sqrt{3}$$
E. $$2$$

[Reveal] Spoiler: OA

Last edited by Bunuel on 09 Jun 2013, 07:33, edited 2 times in total. Edited the question and added the OA.

Math Expert
Joined: 02 Sep 2009

Re: Probability Triangle (700 lvl Qn) [#permalink]

### Show Tags

28 Nov 2010, 01:43

bhushan288 wrote:
hi guys... can u help me out with this 1....

Hi bhushan288, and welcome to GMAT Club. Provide answer choices for PS questions. Make sure you type the question in exactly as it was stated from the source. Also: please post PS questions in the PS subforum: gmat-problem-solving-ps-140/ and DS questions in the DS subforum: gmat-data-sufficiency-ds-141/. No posting of PS/DS questions is allowed in the main Math forum. The original question is given above.
Given: $$circumference=4\sqrt{\pi\sqrt{3}}$$ and $$P(out)=\frac{3}{4}$$.

Now, as the probability of the grain of sand landing on the portion of the base outside the triangle is 3/4, the portion of the base (circle) outside the triangle must be 3/4 of the area of the base, and the triangle itself 1/4 of the area of the base.

Next: $$circumference=4\sqrt{\pi\sqrt{3}}=2\pi{r}$$ --> square both sides --> $$16\pi\sqrt{3}=4{\pi}^2{r}^2$$ --> $$4\sqrt{3}={\pi}{r}^2$$ --> $$area_{base}=\pi{r^2}=4\sqrt{3}$$;

The area of the equilateral triangle is 1/4 of the base: $$area_{equilateral}=\frac{1}{4}*4\sqrt{3}=\sqrt{3}$$. Also, the area of an equilateral triangle is $$area_{equilateral}=a^2*\frac{\sqrt{3}}{4}$$, where $$a$$ is the length of a side --> $$a^2*\frac{\sqrt{3}}{4}=\sqrt{3}$$ --> $$a=2$$.

Intern
Joined: 21 Jun 2010

Re: Probability Triangle (700 lvl Qn) [#permalink]

### Show Tags

28 Nov 2010, 01:51

Thanks a lot Bunuel...

Manager
Joined: 17 Aug 2010

Re: Probability Triangle (700 lvl Qn) [#permalink]

### Show Tags

16 Mar 2011, 06:13

[quote of the question and Bunuel's solution trimmed]

Hello Bunuel, your explanation is perfect. I just can't understand one thing: I know that the formula for the side of an equilateral triangle inscribed in a circle should be a = √3 * r, where r is the radius and a is the side of the triangle, but when using this formula I am not getting the right answer in the above example. What could be the problem? Is something wrong with the formula? Thanks
Math Forum Moderator
Joined: 20 Dec 2010

Re: Probability Triangle (700 lvl Qn) [#permalink]

### Show Tags

16 Mar 2011, 06:32

The triangle is not necessarily inscribed, because that is not mentioned in the question. It can be any equilateral triangle drawn within the base. The vertices of the triangle may not touch the circle.

TOEFL Forum Moderator
Joined: 16 Nov 2010
Location: United States (IN)

Re: Probability Triangle (700 lvl Qn) [#permalink]

### Show Tags

17 Mar 2011, 00:40

2*pi*r = 4(sqrt(pi sqrt(3)) => r = 4(sqrt(pi sqrt(3))/(2*pi) = 2(sqrt(pi sqrt(3))/pi
Area of circle C = pi * r^2 = 4 * pi * 1/(pi)^2 * pi * sqrt(3) = 4*sqrt(3)
A/C = 1/4 => A = sqrt(3)
Area of Triangle = sqrt(3) = sqrt(3)/4 * (side)^2
So side = sqrt(4) = 2

Senior Manager
Joined: 23 Oct 2010
Location: Azerbaijan

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

15 Apr 2013, 08:59

Could you please explain why we can't use this formula here? The radius of the circumscribed circle is R = a*\sqrt{3}/3 (math-triangles-87197.html)

Math Expert
Joined: 02 Sep 2009

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

16 Apr 2013, 02:27

Because we are told that the equilateral triangle is painted, not necessarily inscribed, on the interior side of the base.

Manager
Joined: 09 May 2013

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

09 Jun 2013, 07:10

Hi Bunuel, can you explain how you can consider P(out) as a fraction of the total base?

Posted from my mobile device

Math Expert
Joined: 02 Sep 2009

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

09 Jun 2013, 07:39

The bigger the area, the bigger the probability of a grain landing there. P(out) = 3/4 simply means that the portion of the base (circle) outside the triangle must be 3/4 of the area of the base.

Intern
Joined: 25 Apr 2013
Location: India

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

11 Jun 2013, 06:29

Hi,
Let P(E) = Probability of grain landing inside triangle = 1 - 3/4 = 1/4 --------(1)
Also P(E) = Area of Equilateral Triangle / Area of Base (i.e. Circle) ---------- (2)
Area (Triangle) = (3^1/2 / 4) * a^2
Area (Circle) = pi*r^2 = pi * (2(3^1/2/pi)^1/2)^2 = 4*3^1/2
By using 1 & 2, a = 4 (Ans.)
Thanks & Regards,
Prateek Sharma

Math Expert
Joined: 02 Sep 2009

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

11 Jun 2013, 07:35

pratsh123 wrote:
[quote trimmed]

a = 2. Check here: a-cylindrical-tank-has-a-base-with-a-circumference-of-105453.html#p824188

Manager
Joined: 26 Sep 2013

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

03 Nov 2013, 09:15

Is there any quicker way to do this one? I started down a similar path and was only about 1/4 done by the time I hit 2 minutes.

Intern
Joined: 11 Aug 2013

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

15 Nov 2013, 20:43

Hey guys, I feel like the answer to my problem is something super obvious, but why is the area of the triangle 1/4 of the base (from subtracting the probability 3/4 from 1), resulting in an area of 3? I got an area of 4, resulting from (Area of Circle)/(Area of Circle + Area of Triangle) = 3/4 (with Area of Circle = 12). I want to say that if I was given a problem asking for the probability of red balls when there are 12 red balls and 4 blue balls, I would say the probability is 12/(12+4) = 3/4. Thank you for the help... I'm slowly losing it with all these fractions, and positives and negatives, and less-thans and greater-thans, and "that" and "it" not having references, and primary purposes of passages...

Senior Manager
Joined: 03 Apr 2013

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

16 Nov 2013, 02:02

Hey Bunuel! Same answer, same approach! I have been posting answers to some questions but am unaware of how to post formulas in the standard form. Please give me a link so I can learn to do the same.

Math Expert
Joined: 02 Sep 2009

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

16 Nov 2013, 10:02

Hope it helps.
Math Expert
Joined: 02 Sep 2009

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

16 Nov 2013, 10:03

dwalker0219 wrote:
[quote trimmed]

Check here: a-cylindrical-tank-has-a-base-with-a-circumference-of-105453.html#p1234048

Senior Manager
Joined: 07 Apr 2012

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

26 Jun 2014, 13:57

Bunuel wrote:
[quote of the question and solution trimmed]

Hi Bunuel, I missed the part where the triangle = 1/4 of the circle. I did, however, get all the other results. So what I did was: (1 - Area of triangle)/Area of circle = 3/4. But I seem to get a different answer than you. Any idea why?
Senior Manager
Joined: 07 Apr 2012

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

27 Jul 2014, 06:11

Bunuel wrote:
... also the area of the equilateral triangle is $$area_{equilateral}=a^2*\frac{\sqrt{3}}{4}$$ ...

Hi Bunuel, is that equation for the area only for an equilateral triangle inscribed in a circle, or is it for any equilateral triangle painted within a circle?

Math Expert
Joined: 02 Sep 2009

Re: A cylindrical tank has a base with a circumference of [#permalink]

### Show Tags

27 Jul 2014, 14:27

$$area_{equilateral}=side^2*\frac{\sqrt{3}}{4}$$ is for ANY EQUILATERAL triangle. Check the Triangles chapter of our Math Book: math-triangles-87197.html
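(Not part of the thread: a quick numeric check of Bunuel's algebra in Python.)

```python
import math

# Circumference 4*sqrt(pi*sqrt(3)) gives the base area:
circumference = 4 * math.sqrt(math.pi * math.sqrt(3))
r = circumference / (2 * math.pi)
base_area = math.pi * r ** 2           # equals 4*sqrt(3)

# The triangle covers 1/4 of the base, so invert area = a^2*sqrt(3)/4:
triangle_area = base_area / 4
a = math.sqrt(4 * triangle_area / math.sqrt(3))
print(round(base_area, 6), round(a, 6))  # 6.928203 2.0 -> answer E
```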
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6462553143501282, "perplexity": 2189.3991464666933}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948594665.87/warc/CC-MAIN-20171217074303-20171217100303-00328.warc.gz"}
https://arxiv.org/abs/1203.6580
# Title: Search for events with large missing transverse momentum, jets, and at least two tau leptons in 7 TeV proton-proton collision data with the ATLAS detector

Abstract: A search for events with large missing transverse momentum, jets, and at least two tau leptons has been performed using 2 fb^-1 of proton-proton collision data at sqrt(s) = 7 TeV recorded with the ATLAS detector at the Large Hadron Collider. No excess above the Standard Model background expectation is observed and a 95% CL visible cross section upper limit for new phenomena is set. A 95% CL lower limit of 32 TeV is set on the GMSB breaking scale Lambda independent of tan(beta). These limits provide the most stringent tests to date in a large part of the considered parameter space.

Comments: 6 pages plus author list (19 pages total), 3 figures, revised author list, matches published PLB version
Subjects: High Energy Physics - Experiment (hep-ex)
Journal reference: Phys.Lett. B714 (2012) 180-196
DOI: 10.1016/j.physletb.2012.06.055
Report number: CERN-PH-EP-2012-054
Cite as: arXiv:1203.6580 [hep-ex] (or arXiv:1203.6580v2 [hep-ex] for this version)

## Submission history

From: Atlas Publications [view email]
[v1] Thu, 29 Mar 2012 16:35:14 UTC (247 KB)
[v2] Sat, 28 Jul 2012 11:30:28 UTC (243 KB)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9030207991600037, "perplexity": 4526.767488278136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875143646.38/warc/CC-MAIN-20200218085715-20200218115715-00351.warc.gz"}
http://hysafe.org/wiki/BRHS/MitigationMeasures
# Mitigation Measures

| Contributing author | Main contributions | Organisation | e-mail |
|---|---|---|---|
| Olav Hansen | Chapter coordinator, various contributions | GexCon | [email protected] |
| Vladimir Molkov | Venting guidelines example | UU | [email protected] |
| Miyahara | Protection wall example | Obayashi Corporation | [email protected] |
| Angunn Engebø | Emergency response | DNV | [email protected] |
| Andrzej Teodorczyk | Flame & detonation arresters, safe gap | WUT | [email protected] |
| Karl Verfondern | Liquid spill | FZJ | [email protected] |

When handling hydrogen there are usually a number of unwanted, potentially hazardous events that can take place with a certain frequency. The total sum of all consequences weighted by their frequency is normally referred to as the risk. This chapter will discuss various ways and methods that can potentially reduce the risk from unwanted events (i.e. a reduction of frequency and/or consequences). Consequences can include loss of life or injuries to people, damage to property, loss of reputation and more. The measurement unit for risk can be e.g. money, as all consequences may have an estimated price. Quite often, though, a risk assessment will focus on the potential for loss of life.

There are a number of possible unwanted events when handling hydrogen. Depending on the setting and surroundings, the hazard will vary strongly. While a significant leak of hydrogen gas may be harmless in an unconfined process plant scenario, because all gas rapidly disappears due to its buoyant nature, a much smaller leak may lead to a disaster if ignited inside a building. Examples of hazardous events are:

Pressurized pipeline or vessel: A major rupture may give strong shockwaves as well as significant loads due to dynamic pressure from the flow out of the pipeline. If ignited, a fire may produce heat loads and radiation. Significant leak rates may lead to severe explosion scenarios with pressure effects in case of delayed ignition.

Liquid hydrogen storage: If released, the low temperature of the hydrogen can cause damage to the surroundings. If a container is exposed to a fire, heating that is too rapid relative to the overpressure venting can lead to a BLEVE, with significant overpressures and, if ignited, a fireball with heat and radiation loads. Releases into water can result in rapid phase transition (RPT) explosions with associated overpressures. Liquid releases of hydrogen can also lead to significant release rates, and may in some circumstances show dense gas behaviour, which may lead to major fires or explosions with associated pressure and heat loads. Smaller releases may build up gas and lead to strong explosions inside confinements. In addition to smaller releases from hydrogen storage, transportation or equipment and utilities, such releases could come from batteries, nuclear radiation in water, electric arcs in oil, or waste treatment (metal-containing ash into water).

One major concern is usually the pressure effects; secondary effects such as projectiles and building collapse are generally more of a concern than the direct pressure effects on people. Consequences like explosion wind, fire heat loads as well as asphyxiation may also be important for the risk.

This section will aim at discussing and describing possible ways and methods to reduce the risk from unwanted events. It can sometimes be useful to separate between passive and active measures. A passive measure is already in place and activated when the unwanted incident takes place, whereas an active measure requires some kind of detection and activation before it is applied.
Due to the nature of hydrogen, with its wide flammability range and high reactivity, the use of active measures can be a challenge. In risk assessments one will normally also include a certain probability that the active system fails to activate. The measures discussed can either be applied to mitigate, control or prevent the event (a fire-triangle approach removing oxygen, ignition sources or hydrogen), or to protect people and equipment from the consequences of a given event. Some examples of protection measures are indicated below.

Dispersion process, limiting the amount of flammables:

• Confine the leak-exposed area either by a solid casing or by soft barriers (polyethylene sheets). This may limit the flammable cloud size, by physically limiting the cloud or reducing the momentum of a jet release.
• Reduce confinement near the leak-exposed area to allow buoyancy-driven dispersion to transport hydrogen away.
• Natural ventilation, forced ventilation or emergency ventilation to remove hydrogen.
• Removal of ignition sources to reduce the explosion frequency.
• Igniters (or continuous burners) to ensure that gas clouds are ignited before they grow too large, to limit consequences.
• Catalytic recombiners to remove unwanted hydrogen.
• Inert gas dilution after release but prior to ignition, reducing the reactivity.
• Fine water-mist dilution to reduce flammability, or sprinklers to improve mixing/dilution.
• Rapid injection of a dense hydrocarbon gas (e.g. butane) with much lower reactivity than hydrogen.
• Detection, to activate shut-down (ESD), pressure relief and safety measures, and to move people to a safe place.

Fire, limiting fire loads and consequences:

• Proper design against heat loads.
• Passive fire protection to protect equipment and increase the time before escalation.
• Sprinkler systems and water deluge to cool equipment and control flames.
• Inert gas systems or fine water mist to dilute oxygen and reduce heat generation.
• Avoid feeding oxygen into the fire by proper confinement; limit ventilation.

Explosion, limiting pressure generation and consequences:

• Proper design against pressure loads, with particular focus on manned areas and control rooms, as well as structures that can cause escalation when failing.
• Explosion vents allowing the overpressure to be vented.
• Layout optimisation to limit turbulence generation.
• Water deluge or mist generation ahead of the flames, cooling the flame.
• Suppression systems quickly putting up an inert atmosphere (powder, inert gas, water mist or too-rich flammables) ahead of the flame.
• Flame isolation by fast-acting closing valves or flame arresters (Maximum Experimental Safe Gap, MESG).
• The use of large balloons to prevent flammable mixtures in certain regions, while still giving volume for gas expansion during an explosion. Similar "soft barriers" could be used to limit combustion near the ceiling (in flame-accelerating beams) or other places with significant congestion.
• Separation distances to avoid incidents escalating to other parts of the plant, or to protect neighbours.
• Absorbing/collapsing walls to reduce reflected shockwaves.
• Introduce heat-absorbing material, like porous elements made of thin aluminium foils or similar.

Since the list of possible scenarios is very long, this selection will not cover all possible ways of reducing risk. One very important thing to notice is that some of the measures may seem contradictory from a risk point of view, and it is not obvious whether the risk is reduced or increased. Examples are removal of ignition sources vs. ignition on purpose.
If gas clouds are always ignited while still small, the frequency of explosions may be increased, but the consequences are likely reduced, giving a hopefully acceptable risk. Another example is increased confinement, which can reduce the cloud size, but will often increase the pressure and the probability of unwanted consequences.

Most of the previous work on protection measures has focused on less reactive hydrocarbon gases or even dusts. Because the properties of hydrogen are very different (an order of magnitude lower minimum ignition energy, much wider flammability range, much higher burning velocity, more likely to detonate, more difficult to inert, and more), it is not obvious that these measures will do any good mitigating hydrogen. Important aspects are:

• The time available to activate the measure is shorter due to the higher reactivity of hydrogen.
• The required amount of inert or cooling material (gas, powder, aerosols or metal surfaces) is higher.
• The path to a DDT and detonation is shorter; turbulence from an active system may accelerate this, and inert aerosols or powders may have limited effect once a detonation is seen.

A further general problem with mitigation systems is that they are generally tested for idealized situations (an empty spherical vessel with central ignition), but then applied in real-life situations in which the geometry will influence their performance. It may therefore be necessary to focus more on preventive measures, to apply safety methods that exploit the buoyancy effects, and also to put more weight on creative passive ways to reduce risk. The latter can be e.g. "soft barrier" methods [Tam, 2000] to reduce the size of dangerous flammable clouds, to avoid flames burning into congested areas, and also to fill parts of the volume with inert balloons that will reduce the combustible volume but be compressed when overpressure builds up. A further discussion of such measures will be found in a later section.

Reference and sources

[Tam 2000] Tam V (2000), Barrier Method: An Alternative Approach to Gas Explosion Control, FABIG Newsletter, R372, The Steel Construction Institute, UK

## Explosion venting of equipment and buildings

### Introduction

Venting of deflagrations is recognized as the most widespread and cost-effective explosion mitigation strategy. The methods are based on the two following observations/assumptions:

• The less confinement a room has, the lower the general overpressure.
• The more reactive the gas, the more vent area is required for pressures to remain low.

The leading "Venting of Deflagration" guidelines from the USA, NFPA-68 [NFPA-68, 2002], trace their history back to a temporary explosion venting standard from 1945. NFPA-68 has been updated with input from various sources; much of this work was done in Europe, with very significant contributions from Germany [Bartknecht 1993]. Based on numerous experiments and analytical considerations, vent nomograms were developed for numerous dusts as well as some gases, including hydrogen.

When developing vent guidelines and nomograms, a number of assumptions, simplifications and limitations have to be defined. Since the flammables are categorized by reactivity, it is important to avoid situations where the flames get too turbulent, e.g. due to flame-accelerating objects inside the room, or because the length/diameter ratio is too large. For this reason such guidelines will normally require, in order to be valid, that there are no obstructions inside the room and that the aspect ratio does not exceed a maximum.
In this way, a significant part of the real-life scenarios to be protected will fall outside the limitations of such guidelines. Other situations which may be difficult to cover with simple analytical equations or nomograms include the use of vent ducts, connected vessels, layout (geometry/vent distribution), non-ideal conditions (elevated or reduced temperature, pressure and oxygen concentration) and more. In a recent effort to improve the venting guidelines and reduce the number of situations where they cannot be applied, a new European vent standard, prEN14994 [prEN14994 2004], has been developed. This has been available in a draft version since 2004.

In NFPA-68 relations exist for hydrogen, but only for strong enclosures and with no turbulence-generating obstructions. Similarly, prEN14994 can calculate relations for hydrogen, but only for situations "essentially free of turbulence generating obstructions", with aspect ratio L/D < 3, and only allowing a vessel strength of up to 2 bar. The possibility of using these standards and guidelines for the dimensioning of practical hydrogen applications may therefore be limited. The strict limitations when handling hydrogen are based on experimental observations; the presence of small objects or deviations from the required shape of the vessel may increase the severity of explosions dramatically. Experiments [Pförtner, 1985] have shown how the flame exiting from a vented vessel may experience a deflagration-to-detonation transition outside the vent, and [Dorofeev, 1995] showed that a detonation may be initiated inside the vent. In at least one of the experiments in the FLAME facility [Sherman, 1989], DDT and detonation flames inside the geometry may have been caused by lateral venting. For most situations with flammable gas either outside or inside a building/vessel, this may not be too much of a concern. More detailed information about the various standards and guidelines can be found by reading them.

Standards and guidelines will usually be based on a coarse description of a room/vessel and the important parameters. The detailed layout, vent position, geometry and likely ignition location may be poorly described. One should therefore expect that the guidelines will in most cases give a conservative estimate of the expected overpressure, if they are applicable at all. Computational Fluid Dynamics (CFD) has a better possibility to describe the actual situation, including the situations not covered by the guidelines. One should in general expect to be able to reduce conservatism when applying more advanced methods. From CFD it is also possible to obtain more details about the pressure loads, like duration, shape and distribution, and further how the venting will influence blast pressures and drag loads outside the vent openings. As the quality and applicability of CFD tools vary significantly, one should make sure that the CFD tool is properly validated against a wide array of relevant experiments, and also that validation-based user guidelines exist and are followed by the user.

Figure 1-99 Example of a vented hydrogen explosion from a GexCon small-scale channel with L/D about 4. In the NFPA-68 (2002) guidelines, vessels with 2 < L/D < 5 require a higher vent area than those with L/D < 2, and the guideline predicts a maximum explosion overpressure of 1.05 barg for the given experiment. Previous versions of NFPA-68 (e.g. the 1988 edition) use one relation for L/D < 5, and would predict only 0.50 barg.
As can be seen from the experimental pressure traces, an overpressure of around 0.8 barg was seen in the experiment.

Figure 1-99 Computational Fluid Dynamics can be useful when estimating required venting. Distributed venting, like the transverse venting shown above [Hansen, 2005], can be very efficient at keeping pressures low. CFD tools can take into account the detailed layout, including the shape of the vessel, the position and shape of vent openings, the presence of geometry and more. In addition to the maximum pressure, the shape and duration of the pressure pulse as well as its distribution in space can be found using CFD.

### Example of venting guideline: NFPA-68

The current edition of NFPA 68 (2002) includes the vent sizing correlation, which reflects results presented by Bartknecht [1993]. The test data used in support of the correlation covered a range of volumes from 1 to 60 m3 and four gases: methane, propane, city gas and hydrogen. Additional testing was also carried out to study the effect of increasing values of the vent relief pressure, Pstat. The result of all this work is summarized by the following formula:

$$A_v = \left\{\left(0.127\,\log_{10}K_G - 0.0567\right)P_{red}^{-0.582} + 0.1754\,P_{red}^{-0.572}\left(P_{stat} - 0.1\right)\right\}V^{2/3}$$

The range of applicability of the above equation is given by:

$$K_G \le 550\,bar{m \over s}$$
$$P_{stat} \le 0.5\,bar$$
$$P_{red} \le 2\,bar$$
$$P_{red} \ge P_{stat} + 0.05\,bar$$
$$1\,m^3 \le V \le 1000\,m^3$$

For elongated vessels (2 < L/D < 5) a correction to the vent area is indicated in the NFPA standard, calculated in accordance with a formula not reproduced here. More details will be found in NFPA-68 (2002).
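The correlation is simple enough to transcribe directly. A Python sketch (not from the source document) with the stated range-of-applicability checks:

```python
import math

def nfpa68_vent_area(k_g, p_red, p_stat, volume):
    """Vent area A_v (m^2) from the NFPA 68 (2002) gas venting correlation.

    k_g in bar*m/s, p_red and p_stat in bar, volume in m^3.
    Raises ValueError outside the stated range of applicability.
    """
    if not (k_g <= 550 and p_stat <= 0.5 and p_red <= 2
            and p_red >= p_stat + 0.05 and 1 <= volume <= 1000):
        raise ValueError("outside the correlation's range of applicability")
    return ((0.127 * math.log10(k_g) - 0.0567) * p_red ** -0.582
            + 0.1754 * p_red ** -0.572 * (p_stat - 0.1)) * volume ** (2.0 / 3.0)

# Example: K_G = 550 bar*m/s, P_red = 0.5 bar, P_stat = 0.1 bar, V = 10 m^3
print(nfpa68_vent_area(550, 0.5, 0.1, 10.0))  # about 2.0 m^2
```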
### Example of venting guideline: vent sizing of enclosures with hydrogen-air mixtures (V. Molkov)

Explosion venting is the most widespread and cost-effective deflagration mitigation technique. The design of explosion vents may be based on vent sizing correlations or on the application of computational fluid dynamics. In general, the vent sizing formulas of the NFPA 68 standard [1] and its European version EN 14994 [2] are not applicable to hydrogen because of its high $K_G$ index. Indeed, the vent sizing area formulas adopted by the NFPA and EN standards are only applicable for a value of $K_G$ less than or equal to 550 bar-m/sec. As shown in Figure C.1 of Annex C of NFPA 68, the $K_G$ index of hydrogen increases with volume. For instance, the $K_G$ index of hydrogen rises from 550 bar-m/sec for a volume of 0.005 m3 to 780 bar-m/sec for a volume of 10 m3. This simply means that the NFPA 68 vent sizing approach for hydrogen-air mixtures is not applicable for volumes larger than 5 L.

Examples of the comparison between the experimental data and the predictions by the innovative vent sizing technology [3] and NFPA 68 [1] are presented in Table 1. The predictions of NFPA 68 were calculated using the value $K_G$ = 550 bar-m/sec. Experimental configurations included a sphere, a cylinder and a tunnel. Hydrogen concentrations were in the range 10-30% by volume. The vent sizing of tunnels was as follows: the volume of the hydrogen-air mixture represents the enclosure volume, and the enclosure vent area is naturally equal to double the cross-sectional area of the tunnel.

Table 1. Comparison between experimental data and predictions by the vent sizing technology [3] and NFPA 68 [1].

| Test | [H2], vol.% | Shape | V, m3 | Ign.(a) | F New(b), m2 | %(c) | F NFPA(d), m2 | %(c) | F Exp(e), m2 | P_red New(b) | %(c) | P_red NFPA(d) | %(c) | P_red Exp(e) | Use of NFPA(f) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| K-10-45-C | 10 | Sphere | 6.85 | C | 0.2214 | 39 | 2.117 | 1232 | 0.1590 | 0.54 | 79 | 25.65 | 8457 | 0.300 | (-) |
| K-15-15-C | 15 | Sphere | 6.85 | C | 0.0753 | 326 | 0.493 | 2691 | 0.0177 | 5.34 | 46 | 1120 | 30418 | 3.670 | (-) |
| K-15-25-C | 15 | Sphere | 6.85 | C | 0.1002 | 104 | 0.524 | 969 | 0.0491 | 4.20 | 27 | 193.5 | 5764 | 3.300 | (-) |
| K-15-45-C | 15 | Sphere | 6.85 | C | 0.2378 | 50 | 0.682 | 329 | 0.1590 | 2.68 | 27 | 25.65 | 1121 | 2.100 | (-) |
| K-20-15-C | 20 | Sphere | 6.85 | C | 0.0536 | 203 | 0.410 | 2223 | 0.0177 | 6.14 | 22 | 1120 | 22067 | 5.030 | (-) |
| K-20-25-C | 20 | Sphere | 6.85 | C | 0.0819 | 67 | 0.435 | 787 | 0.0491 | 5.13 | 13 | 193.6 | 4155 | 4.550 | (-) |
| K-20-45-C | 20 | Sphere | 6.85 | C | 0.1643 | 3 | 0.491 | 209 | 0.1590 | 3.74 | 1 | 25.65 | 593 | 3.700 | (-) |
| P-1-C | 29.6 | Cyl | 0.95 | C | 0.2132 | 7 | 0.244 | 22 | 0.2000 | 1.35 | 8 | 1.753 | 40 | 1.250 | (-) |
| P-2-C | 29.6 | Cyl | 0.95 | C | 0.4176 | 39 | 0.49 | 63 | 0.3000 | 0.74 | 85 | 0.929 | 132 | 0.400 | (-) |
| SRI-30-F | 30 | Tunnel | 37.4 | F | 11.95 | 61.5 | 2.628 | -65 | 7.48 | 1.73 | 33 | 0.2159 | -83 | 1.300 | (-) |
| SRI-20-F | 20 | Tunnel | 37.4 | F | 11.82 | 58 | 5.6545 | -25 | 7.48 | 0.78 | 122 | 0.2159 | -38 | 0.280 | (-) |
| SRI-15-F | 15 | Tunnel | 37.4 | F | 7.55 | 1 | 7.210 | -4 | 7.48 | 0.23 | 0 | 0.2159 | -6 | 0.220 | (-) |

(a) C - central ignition; F - floor ignition. (b) New - innovative vent sizing technology. (c) % - deviation of the prediction from the corresponding experimental value, calculated by the formula $100 \times (A_{pred} - A_{exp})/A_{exp}$, where A is the reduced pressure or the vent area. (d) NFPA - NFPA 68 vent sizing. (e) Exp - experimental data. (f) Use of NFPA - applicability of the NFPA 68 equations; (-) in the last column refers to experimental conditions outside the specified range of applicability of the NFPA 68 equations.

From Table 1 it can be seen that the vent sizing technology predicts reasonably well both the vent area and the reduced pressure for different conditions, whereas the predictions of NFPA 68 significantly overestimate or even underestimate the experiments.

The procedure for calculating the vent area in an empty enclosure, or an enclosure with an insignificant influence of obstacles, is as follows:

1) Calculate the value of the dimensionless reduced explosion overpressure $\pi_{red} = p_{red}/p_{i}$ (see the nomenclature below).

2) Determine the value of the dimensionless static activation pressure $\pi_{V} = (p_{stat} + p_{i})/p_{i}$.

3) Calculate the value of the dimensionless pressure complex $\pi_{red}\pi_{V}^{2.5}$ based on the data from the two previous steps.

4) Calculate the value of the turbulent Bradley number $Br_t$ by the use of one of the following two equations, depending on the value of the above-mentioned dimensionless pressure complex $\pi_{red}\pi_{V}^{2.5}$:

$$\text{if } \pi_{red}\pi_{V}^{2.5} < 1: \quad \pi_{red}\pi_{V}^{2.5} = 5.65\,Br_t^{-2.5}$$

$$\text{if } \pi_{red}\pi_{V}^{2.5} > 1: \quad \pi_{red}\pi_{V}^{2.5} = 7.9 - 5.8\,Br_t^{0.5}$$

5) Using Figure 1, determine the appropriate values of the laminar burning velocity and the expansion ratio for the hydrogen-air mixture in question. For instance, for a stoichiometric hydrogen-air mixture at NTP, the following values can be used for the purpose of vent sizing: $E_{i} = 6.88$, $S_{u0} = 1.96$ m/s [7, 8].
The influence of the initial temperature on the laminar burning velocity can be extrapolated with the formula [9]

$$S_{ui} = S_{u0}\left(\frac{T}{298}\right)^{1.7}$$

where S_u0 is the laminar burning velocity at NTP (see Figure 1) and T is the initial temperature.

Figure 1: Laminar burning velocity and expansion ratio of hydrogen-air mixtures at NTP.

6) Determine the vent area by solving numerically the following transcendental equation (adjusting the area A until the right-hand side of the equation equals the left-hand side):

$$\frac{\mathrm{Br}_t\,\sqrt[3]{36\pi_0}\;V^{2/3}}{c_{ui}\sqrt{E_i/\gamma_u}} = \frac{A\left(1+\pi_V\right)^{0.4}\left[1 + 0.5\left(\frac{A}{V^{2/3}}\,\frac{c_{ui}}{S_{ui}\left(E_i-1\right)}\right)^{0.8}\right]^{0.4}}{\alpha\left(1 + 2V^{0.94}\right)^{0.4}S_{ui}\left(E_i-1\right)}$$

where

A – vent area of an explosion venting device, m2;
Br_t – turbulent Bradley number;
c_ui – speed of sound at initial conditions, m/s; c_ui = (γ_u R T_ui / M_ui)^0.5;
E_i – expansion ratio of combustion products, E_i = M_ui T_bi / (M_bi T_ui);
M – molecular mass, kg/mol;
p_i – initial absolute pressure, bar abs.;
p_red – reduced overpressure, bar gauge;
p_stat – static activation pressure, bar gauge;
R – universal gas constant, R = 8.31 J/K/mol;
S_ui – burning velocity at initial conditions, m/s;
V – enclosure volume, m3;
α – empirical coefficient of the correlation;
γ_u – specific heat ratio of the unburned mixture;
π_red – dimensionless maximum explosion overpressure (reduced pressure), π_red = p_red/p_i;
π_V – dimensionless static activation pressure, π_V = (p_stat + p_i)/p_i;
π_0 = 3.14.

The correlations have been calibrated to date against experimental data for hydrogen-air deflagrations within the following range of conditions: L/D ≤ 4.2; V ≤ 37.4 m3; 0.005 ≤ A/V^{2/3} ≤ 0.34; 0 kPa ≤ p_stat ≤ 13.5 kPa; p_i = 1 bar abs.; 0.22 ≤ π_red ≤ 5.
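The procedure above lends itself to a small numerical routine. The sketch below implements steps 1-6 as quoted, but several inputs are assumptions for illustration only: the empirical coefficient α is taken as 1.0 purely as a placeholder, and the sound speed and specific heat ratio are rough values for stoichiometric hydrogen-air at NTP. Any real design must use the calibrated coefficients and property data from [3].

```python
import math

# Illustrative sketch of the vent sizing procedure (steps 1-6).
# alpha, gamma_u and c_ui below are assumptions for demonstration only;
# real designs must use the calibrated values from Molkov et al. [3].

PI0 = 3.14

def turbulent_bradley_number(p_red, p_stat, p_i=1.0):
    """Steps 1-4: pressures in bar -> turbulent Bradley number Br_t."""
    pi_red = p_red / p_i                       # step 1
    pi_v = (p_stat + p_i) / p_i                # step 2
    cpx = pi_red * pi_v ** 2.5                 # step 3
    if cpx < 1.0:                              # step 4, low-pressure branch
        return (5.65 / cpx) ** (1.0 / 2.5)
    return ((7.9 - cpx) / 5.8) ** 4.0          # step 4, valid for cpx < 7.9

def vent_area(br_t, V, E_i=6.88, S_ui=1.96, gamma_u=1.4, c_ui=404.0,
              pi_v=1.1, alpha=1.0):
    """Step 6: solve the transcendental equation for A by bisection.

    E_i and S_ui are the stoichiometric H2-air values from step 5;
    gamma_u, c_ui and alpha are assumed placeholder values.
    pi_v must be consistent with the p_stat used for br_t.
    """
    lhs = br_t * (36.0 * PI0) ** (1.0 / 3.0) * V ** (2.0 / 3.0) \
          / (c_ui * math.sqrt(E_i / gamma_u))

    def rhs(A):
        mix = 1.0 + 0.5 * (A / V ** (2.0 / 3.0)
                           * c_ui / (S_ui * (E_i - 1.0))) ** 0.8
        return A * (1.0 + pi_v) ** 0.4 * mix ** 0.4 \
               / (alpha * (1.0 + 2.0 * V ** 0.94) ** 0.4 * S_ui * (E_i - 1.0))

    lo, hi = 1e-6, 100.0          # bracket in m^2 (widen if needed)
    for _ in range(80):           # plain bisection; rhs grows with A
        mid = 0.5 * (lo + hi)
        if rhs(mid) < lhs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

br_t = turbulent_bradley_number(p_red=0.5, p_stat=0.1)
print(vent_area(br_t, V=10.0))    # vent area in m^2 for the assumed case
```

With these placeholder inputs the routine returns a vent area of roughly 1 m2 for a 10 m3 enclosure; the point of the sketch is the structure of the calculation, not the numbers.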
Reference & sources

1. NFPA 68, 2007. Standard on Explosion Protection by Deflagration Venting, National Fire Protection Association, NFPA, 1 Batterymarch Park, Quincy, Massachusetts, USA 02169-7471.
2. EN 14994:2007. Gas Explosion Venting Protective Systems.
3. Molkov V., Verbecke F., Saffers J.B., Venting of uniform hydrogen-air deflagrations in enclosures and tunnels: vent sizing and prediction of overpressure, Proceedings of the 7th ISHPMIE, St. Petersburg, Russia, July 7-11, 2008.
4. Kumar R., Dewit W., Greig D. (1989). Vented explosion of hydrogen-air mixtures in a large volume. Combustion Science and Technology, 66, 251-266.
5. Pasman H.J., Groothuisen Th.M., Gooijer P.H. (1974). Design of Pressure Relief Vents, in "Loss Prevention and Safety Promotion in the Process Industries", Ed. Buschman C.H., Elsevier, New York, pp. 185-189.
6. Sato Y., Merilo E., Groethe M., Colton J., Chiba S., Iwabuchi H., Homogeneous hydrogen deflagrations in a sub-scale vehicle tunnel, Proceedings of the National Hydrogen Association Conference, Long Beach, CA, USA, March 12-16, 2006.
7. Lamoureux N., Djebaili-Chaumeix N., Paillard C.-E., Flame velocity determination for H2-air-He-CO2 mixtures using the spherical bomb, Experimental Thermal and Fluid Science, 2003.
8. Tse S.D., Zhu D.L., Law C.K., Morphology and burning rates of expanding spherical flames in H2/O2/inert mixtures up to 60 atmospheres, Proceedings of the 28th Symposium (International) on Combustion, pp. 1793-1800, Pittsburgh, PA: The Combustion Institute, 2000.
9. Babkin V.S., private communication, Institute of Chemical Kinetics and Combustion, Siberian Branch, Russian Academy of Sciences, Novosibirsk, Russia, 2003.

### Venting of equipment (vent ducting)

In the design of a venting system it is necessary to consider the hazards that can arise from the flame and hot combustion products discharged from the vent. These should be discharged into a safe area, away from where any personnel may be present, so that they do not cause damage to surrounding equipment. This can be a particular problem for vented equipment located inside a building. One way of overcoming the problem is to attach ducting to the vent, so that the discharge can be directed to a safe area, preferably outside the building.

The downside of using vent ducting is that it reduces the efficiency of the venting. The ducting increases the flow resistance, and there is the possibility of a secondary explosion of any unburnt gas initially discharged into the duct. The net effect is to reduce the flow through the vent, and this leads to an increase in the reduced explosion pressure. To minimise the reduction in vent efficiency, the ducting should be kept as short as possible, have no bends (or only large-radius bends), and have a cross-sectional area at least as great as that of the vent itself.

Meeting the above guidelines is not always practicable, and even when they are met it may still be necessary to increase the size of the vent to compensate for the reduced venting efficiency. Guidance on estimating the required increase in vent size is limited. The proposed European standard on gas explosion venting, and NFPA 68, on which the European standard is based, give formulas for estimating the increase in the reduced explosion pressure for ducts with lengths of less than 3 m and for ducts with lengths between 3 m and 6 m. For longer ducts it will be necessary to determine the effect of the duct by appropriate testing of the actual duct configuration. In the 2002 version of NFPA-68 there appears to be an error in the duct length formula: the duct length to be entered in the formula is not an absolute length but the ratio of length to duct diameter. This is to be corrected in the 2006 edition of NFPA-68.

Reference & sources

[NFPA 68, 2002] Guide for Venting of Deflagrations, National Fire Protection Association, NFPA, 1 Batterymarch Park, Quincy, Massachusetts, USA 02169-7471
[Bartknecht, 1993] Wolfgang Bartknecht, Explosionsschutz – Grundlagen und Anwendung, Springer Verlag, ISBN 3-540-55464-5
[prEN14994, 2004] Gas Explosion Venting Protective Systems [draft version]
[Molkov, 1999] Molkov V.V. (1999). Explosion safety engineering: NFPA 68 and improved vent sizing technology, Proceedings of Interflam'99, 8th International Fire Science Conference, Edinburgh Conference Centre, Scotland, UK, 29th June-1st July 1999, pp. 1129-1134.
[Molkov, 2003] Molkov V.V., Grigorash A.V., Eber R.M., Guidelines for venting of deflagrations in enclosures with inertial vent covers, FireSERT, University of Ulster, 2003, 41 p.
[Grigorash, 2004] Grigorash A., Eber R., Molkov V., Theoretical Model of Vented Gaseous Deflagrations in Enclosures with Inertial Vent Covers, Proceedings of the 4th International Seminar on Fire and Explosion Hazards, 8-12 September 2003, Londonderry, pp. 445-456, 2004.
[Dorofeev, 1995] Dorofeev S.B., Bezmelnitsin A.V., Sidorov V.P., 1995, Transition to detonation in vented hydrogen-air explosions, Comb. Flame, 103, 243-246.
[Pfortner, 1984] Pfortner H., Schneider H., Final Report for Interatom GmbH, Bergisch Gladbach, Germany, October 1984, Fraunhofer ICT Internal Report.
[Sherman, 1989] Sherman M.P., Tieszen S.R., Benedick W.B., FLAME Facility: The Effect of Obstacles and Transverse Venting on Flame Acceleration and Transition to Detonation for Hydrogen-Air Mixtures at Large Scale, Sandia National Laboratories, Albuquerque, NM 87185, USA, NUREG/CR-5275, SAND85-1264, R3, April 1989.
[Hansen, 2005] Hansen O.R., Renoult J., Sherman M.P., Tieszen S.R., 2005, Validation of FLACS-Hydrogen CFD Consequence Prediction Model Against Large Scale H2 Explosion Experiments in the FLAME Facility, Proceedings of the International Conference on Hydrogen Safety, Pisa, Italy, September 2005.

## Active inerting, suppression and isolation systems

A number of active mitigation methods are applied in industry to limit the consequences of accidental fires and explosions. In the following, some of these methods are described, with particular focus on their potential benefit for protection against hydrogen fire and explosion scenarios. Systems using water are discussed separately in the next section. The concept of constant inerting is also discussed, even though it cannot be considered an active method; the approach is, however, closely related to methods like rapid pre-ignition inerting or suppression. The method is also discussed elsewhere in this report, so only a brief description is given here.

### Constant inert gas dilution to prevent ignition and combustion

The typical approach is to dilute the atmosphere with a sufficient amount of inert gas to prevent ignition and combustion. In situations where human activity is not required, one may also replace all the air with inert gas. The inert gas will typically be N2, CO2, or special mixtures that allow human breathing but no combustion (of hydrocarbon gas at room temperature), such as Inergen™ (mainly Ar and N2, some CO2), Argonite™ (Ar, N2) or similar. The approach is typically applied in situations where the risk from accidental explosions or fire would otherwise be unacceptably high; examples are:

• The computer rooms of important installations, where a fire may destroy safety-critical control systems.
• Leak-exposed volumes where proper venting is difficult, like the turret of an FPSO.
• Gas turbine/compressor casings, with a high probability of both leaks and ignition.

Challenges with such systems are that they require proper control systems to maintain the intended dilution level. Good routines and safety systems may be required to limit the hazard to personnel, both from volumes 100% filled with inert gas and from possible malfunction of people-safe inert gas dilution systems.
Since flammability limits are much wider, and the dilution levels needed to obtain an inert atmosphere are much higher, for hydrogen than for natural gas, diluting to levels where humans can breathe but flames cannot propagate is more challenging when handling hydrogen. Table 1-4 compares the inerting levels for natural gas and hydrogen for some relevant inert gases. None of the inert gases most frequently applied for hydrocarbon gas while allowing the presence of people will be safe for hydrogen. Halons would be more efficient; however, the Montreal protocol ban on halons, due to their ozone depletion effect, removes this option. HFC gases such as HFC-236fa can be an option, but due to greenhouse gas effects (high Global Warming Potential) these agents are banned for fire protection use in some countries and subject to prohibitive environmental taxes in others. Since HFC-236fa has shown better performance than HFC-227ea, and is safe for people at higher concentrations, this gas could give a certain protection against hydrogen ignition and flame propagation. The solution is questionable, however, as ignition should still be expected for H2 concentrations in the range 10-20%. If inerting fails, HFC gases may under certain circumstances decompose or take part in the combustion, enhancing the pressure build-up, and the gases developed during combustion are toxic.

It should be noted that the values shown in Table 1-4 are for normal pressure and temperature, and that higher inerting levels will be required at, for instance, elevated temperatures [see www.safekinex.org].

Butane (C4H10) has also been added to Table 1-4, as another creative approach would be to add a sufficient amount of another flammable so that the total mixture becomes too fuel-rich to burn. It is expected that 8.5% butane (its UFL) mixed into the air could prevent any mixture with hydrogen at ambient temperature and pressure from becoming flammable. Courage is, however, required to apply this approach, as the mixture becomes flammable again once diluted with air. One should then weigh the possible benefit of reduced reactivity due to butane dilution of the hydrogen against the increased amount of flammable substance due to the added butane.

As a conclusion, good solutions for the protection of rooms with people present have not been identified. For rooms or situations with no people present, full inerting, for instance with nitrogen, can be applied. For industrial process flows containing pure hydrogen, purging with inert gas can also be performed prior to shut-down or start-up to avoid explosions.

Table 1-4: Efficiency, environmental impact and hazard to people for different inert gases; most of the data are extracted from [Isaksson, 1997] and apply for conditions near 25 °C and 1 atm.

1. Inerting avoids ignition, quenching stops combustion [Isaksson, 1997]
2. According to a report from [SFT, 2001]
3. 20-30% CO2 may give cramps and fainting in less than 1 minute
4. 50% nitrogen and 50% argon
5. 52% nitrogen, 40% argon and 8% CO2
6. Not Halon 1301, but MeBr [Zabetakis, 1965]
7. According to [US Patent 5615742], see Figure 5.8.1
8. Lowest [no] observed adverse effect level

Fig 1-14: These plots show the necessary inerting level (left) for hydrogen-air with added inert gases N2, H2O or CO2, as well as the assumed impact on the laminar burning velocity (right). The relations shown are those used in the CFD tool FLACS, and are based on [Zabetakis, 1965].
### Pre-ignition inert gas dilution

When the probability of accidental leaks is low, or there is a need for the presence of people, it may not be practical to keep a constantly inert or partially inert atmosphere. An alternative is then to activate inert gas dilution on leak detection, prior to ignition. The optimal choice of system varies with the scenario.

• Nitrogen or CO2 (or similar): These gases can be applied for release scenarios where the leak rate is small, i.e. where it takes minutes to build up a dangerous gas cloud. Since the required inerting level is very high (2-3 parts inert gas for every part air), it takes time to introduce the inert gas, and a ventilation system is needed to safely relieve the overpressure. Be aware that, with regard to explosion protection, an emergency ventilation system may be equally useful and less complicated; for fire prevention, an inert system has advantages.
• HFC gases: In situations where the leak rate is large, and protection is needed in seconds rather than minutes, HFC gases may be a good alternative. Due to environmental concerns, these should only be applied in situations with a very low leak frequency but potentially severe consequences; examples of application areas are airplanes and submarines. Some testing of such a system using HFC-236fa and HFC-227ea, focused on transformer protection, has been published [Hansen, 2002].

There are several challenges when applying pre-ignition inert gas dilution. One is to detect the problem and activate the system before dangerous pockets of flammable gas have built up. Personnel safety is another issue: the system must not be activated before people are safe. Further, the distribution of inert gas must be as even as possible, or give better protection where the flammables are found; if CO2 is injected into a dense gas layer near the floor while the leaked hydrogen creates a flammable cloud near the ceiling, the protection is limited. On the other hand, one should also be aware that the turbulence created when injecting the inert gas can make an explosion more severe if the cloud is ignited. A further issue to consider is the safe handling of the overpressure from injection systems, with outflow of potentially explosive mixtures.

### Explosion suppression and fast acting valves

In the powder handling industry, dust explosions can be a severe hazard. In many situations explosion suppression is used to quench flames, either inside a vessel or in the pipe connection between vessels, to prevent escalation into further vessels. An alternative to suppression (chemical isolation) in the pipes between vessels is explosion isolation by fast-acting valves closing the pipe mechanically. More information on suppression can be found in [Moore, 1996].

Applying similar methods to hydrogen flames may be possible, but will be much more challenging. While the turbulence from a suppression system alone may be sufficient to quench dust flames, the same turbulence will likely accelerate hydrogen flames. Applying suppression on hydrogen flame detection inside a room or vessel will likely make things worse, as the turbulence will strongly enhance the flame spread and no quenching can be expected. Further challenges are the short time window in which to detect and evenly distribute the agent, the influence of real geometries that may prevent an even mixing of the inert agent, and the evaporation time for, e.g., HFC gases (which are normally stored as liquids).
In work towards the protection of transformers, room suppression against hydrogen flames was tested [Hansen, 2002] with limited success. The chemical or mechanical isolation of hydrogen flames burning from one vessel towards the next should be a more realistic task. The challenge will still be to detect the flame and activate the suppression system or isolating valve fast enough: with fast deflagration or detonation mode flame propagation, the flame may travel 10-20 m in 10 ms. Success with such a concept therefore depends on early detection (before the flame enters the pipe to be isolated) and rapid activation of the measure. For chemical isolation (suppression), one must ensure that enough inert agent is injected for a sufficiently long period. One must be prepared for the flame entering the pipe with some delay after detection, so the suppression system must release enough suppressant to inert a sonic flow through the pipe at least until the flame has reached the barrier. Other issues to consider are to what extent a hydrogen detonation wave will manage to propagate through a chemical barrier in its early phases, and to what extent a plug of hot reaction products downstream of the chemical barrier can re-ignite gases in the second vessel. Mechanical isolation seems safer if it can be done fast enough; the challenge here will be to dimension the system to withstand a reflected detonation wave.

### Computation Tools

No calculation tool has the necessary functionality and models to precisely evaluate all the aspects discussed. The physics is complex, but a range of CFD tools can still be useful. The GexCon FLACS tool can be used to evaluate the transient distribution of inert gas, either from a suppression or an inerting system. Further, the influence of inert gas dilution on explosions and the effect of fast-acting valves can be predicted.

## Water based protection systems

Water is extensively used for fire and explosion protection. It has a high heat capacity (per unit mass) and heat of evaporation, it is easily available, safe and friendly to the environment, and it can be applied both as liquid droplets (efficient distribution) and as vapor. Examples of applications are:

• Water deluge activated to control fires and cool equipment (not always optimal to quench the flame if a leak is present).
• Water curtains used to influence the dispersion pattern or remove chemicals; they can also add heat in connection with cryogenic releases.
• Water deluge sometimes activated on hydrocarbon gas leak detection. The deluge increases the mixing/dilution of the cloud. If ignition takes place, the deluge initially increases the turbulence in the flame, but the expansion flow ahead of the flame thereafter breaks up the droplets, and the fine mist has an effect similar to an inert gas.
• Aerosols from the release of superheated water, used for explosion suppression in the powder industry; these can also be used for pre-ignition inerting of flammable mixtures [Hansen, 2002b; Hansen, 2002c].
• Presence of water vapor in nuclear accident scenarios, which reduces the flammability of hydrogen mixtures [Jones, 2006].

Different droplet sizes have different properties; this is discussed in the following.

Fine aerosol droplets [< 10 micron]: These are difficult to generate and distribute mechanically in large quantities. They can be produced when large droplets (0.5-1 mm) break up in the explosion wind ahead of deflagration flames; another alternative, for confined situations, is flashing of superheated water.
For explosion protection, the water mist must be of this size class to have a beneficial effect on the flame; larger droplets will not manage to evaporate in the reaction zone of the flame. Due to their size, these small aerosol droplets will follow the flow. If significant flow velocities are present in the accident scenario, they may be transported away by wind or by the convection flow from a fire, and have no beneficial effect. Explosion tests with such a fine aerosol system from Micromist Ltd. [Hansen, 2002b, 2002c] have shown that stoichiometric propane can be made inert, while a significant pressure reduction of 50-70% was achieved with hydrogen using 4 litre/m3 of water prior to ignition in a 50 m3 vessel with low congestion and a relatively small vent area. Compared to natural gas, the tests indicated that on the order of 3 times more water mist must be applied for hydrogen to achieve a similar relative pressure reduction.

Fine mist [30-200 micron]: These can be generated by commercial mist/fog nozzles. Due to their better ability to penetrate the flow, combined with a limited size giving fast evaporation, they may be useful for fire mitigation. For explosion protection this droplet size will have a limited or even negative effect: the turbulence from their distribution will accelerate flames, while the evaporation time scales are too long for deflagration flames. GexCon has performed hydrocarbon explosion tests using fog nozzles for mitigation; these resulted in an increase of pressure instead of a decrease. The reason was strong initial turbulence from the sprays, combined with limited mitigation because the droplets were too large for efficient evaporation but too small to undergo droplet break-up.

Droplets from sprinklers [400-1000 micron]: These can be generated by normal sprinklers at 3-7 bar water pressure. They may have a positive effect on large-scale fires, but may be less efficient for smaller fires compared to the previous category. For unconfined and partially confined explosions, these droplets may be very efficient. Due to their size, they are not much influenced by strong natural ventilation or by the buoyant convection flow from a fire. When an explosion starts, the sprays will initially accelerate the flames; very soon, however, the droplets are broken up into very fine mist particles by the forces from the expansion flow ahead of the flame. The fine mist is efficient against explosions, as the flame reaction zone is diluted with fine aerosol particles. The efficiency of such a system increases with scale, with the amount of water, with equipment congestion and with decreased confinement. For natural gas hazards on offshore installations, typical application rates are 10-25 litre/m2/min, depending on the area to be protected. For explosion protection, 10 litre/m2/min is not necessarily sufficient if the confinement is significant. For hydrogen the beneficial effect may be even harder to achieve; this is discussed in the next section.

Advantica [Catlin, 1993; Selby, 1998; Al-Hassan, 1998] and GexCon [van Wingerden, 1997, 1998 and 2000] have performed numerous tests with sprinkler systems to study explosion mitigation for natural gas. These have shown a very beneficial effect at large scale when confinement is low. With low congestion and high confinement, less good results are seen, and in some situations the use of water deluge may make the explosion consequences significantly more severe.
Despite a significant research effort on water mitigation of natural gas explosions, limited work has been done on hydrogen; the effect of inert water vapor on hydrogen flames is one exception. In the following it is discussed to what extent water can be used to improve hydrogen safety.

Water based systems and their effect on hydrogen safety

For a situation where accidental releases of hydrogen can take place, a sprinkler system with water could enhance mixing and prevent stratification effects. If the total amount of hydrogen that can leak is small compared to the room volume, this can be a good idea, as very reactive flammable clouds may be avoided. For larger releases, however, this may strongly increase the hazard, as a large homogeneous cloud at a dangerous concentration may form. A forced ventilation or fan system could have the same effect.

If there is a wish to add heat to the released gas to enhance the buoyancy of the cold plume, water curtains directly downwind of, or around, a cryogenic hydrogen spill dike could be of some help. It should be confirmed that no ignition hazard is introduced due to static electricity: static electricity from nozzle systems does not seem to be a problem for natural gas clouds exposed to deluge, but the minimum ignition energy of hydrogen is 10 times lower than that of propane.

Against fire, it is assumed that water can be applied to cool equipment exposed to radiation or flame impact, to cool the flames, and possibly also to set up a radiation shield where needed. Quite a lot of water vapor will normally be needed for the extinction of hydrogen flames, and turbulent jet flames may lift off with increased water vapor levels. Quenching hydrogen flames may be very difficult, and will seldom be a beneficial result in relatively confined situations, as an uncontrolled leak and a potential explosion may follow.

For explosion mitigation, an aerosol water system based on flashing of substantial amounts of superheated water (4 litre/m3 of water at 180 °C/10 bar) has been shown to reduce hydrogen explosion pressures significantly: more than a factor of two reduction in overpressure was achieved at 15-20% H2 concentrations [Hansen, 2002b]. More water is expected to improve the effect further, but the release of hot water may lead to a significant temperature increase and a certain overpressure at activation. The best effect is seen if the water is injected shortly before ignition. Suppression of hydrogen flames inside a room with such a system will likely not work, due to problems with activation time and turbulence from the release. In special situations such a system could still work, for instance when released into compartments that the flames have not yet reached. Steam (water vapor) would be expected to have a similar (or better) effect, but the distribution of significant amounts of steam takes time and builds up pressure.

Water sprinkler systems activated on release, prior to ignition, can be expected to have a mitigating effect on hydrogen explosions in certain situations. Significantly more water than applied for natural gas would be needed. Potential problems include the possibility that turbulence from the sprays may quickly accelerate the flames into DDT and detonation, after which the water sprinkler can no longer be expected to have a mitigating effect. The much lower minimum ignition energy of hydrogen compared to natural gas may also increase the likelihood of ignition from static electricity in connection with the water sprinkler systems.
The conclusion is that potential benefits from using water-based protection systems within hydrogen safety may exist. For protection against fire effects, traditional methods should be applicable. There are few good solutions at the moment for handling explosions, and more work is needed to identify and validate good systems. Further development and testing of the fine aerosol technology based on superheated water should be performed, and the potential benefits and problems of sprinkler systems should be investigated.

## Tools and methods

No calculation tool has the necessary functionality and models to precisely evaluate all the aspects discussed. Several CFD tools can be used to study the effect of deluge on dispersion. Some CFD tools have models for the effect of deluge on deflagration flames; these are mainly valid for natural gas. The GexCon FLACS tool has modified guidelines for hydrogen and deluge, but the experimental validation is limited.

Reference & sources

[Isaksson, 1997] Isaksson, S., Simonson, M. and Holmstedt, G., Sveriges Provnings- och Forskningsinstitut, SP report 1997-10
[SFT, 2001] Norwegian Pollution Control Authorities (SFT), SFT report 1754-2001
[Zabetakis, 1965] Zabetakis, M.G., "Flammability characteristics of combustible gases and vapours", Bureau of Mines, Bulletin 627, Washington, 1965
[Hansen, 2002] Hansen, O.R., Wilkins, B.A. and Wiik, A., "Suppression of secondary explosions in transformer rooms", J. Phys. IV France 12 (2002)
[Hansen, 2002b] Hansen, O.R., Wilkins, B.A. and Eckhoff, K., "Explosion protection in transformer rooms", ESMG symposium proceedings, Nurnberg, 8th-10th October 2002
[Hansen, 2002c] Hansen, O.R., Wilkins, B.A., Eckhoff, K., O'Connell, M. and Holen, J.K., Mitigation and prevention of hydrocarbon explosions by micromist water inerting, Conference proceedings Major Hazards Offshore, 2003, London, UK, ERA report 2003-548, ISBN 0 7008 0776 4
[Moore, 1996] Moore, P.E., "Suppression of Maize Dust Explosions", Industrial Dust Explosions, Symposium, Pittsburgh, Pennsylvania, June 1996
[Jones, 2006] Jones, S.J., Averill, A.F., Ingram, J.M., Holborn, P.G., Battersby, P., Nolan, P.F., Kempsell, I.D. and Wakem, M.J., Mitigation of Hydrogen-Air explosion mixtures using fine water mist sprays, Hazards XIX Conference proceedings, Manchester, UK, 27-30 March 2006
[Al-Hassan, 1998] Al-Hassan, T. and Johnson, D.M., "Gas explosions in large scale offshore module geometries: Overpressures, mitigation and repeatability", presented at OMAE-98, Lisbon, Portugal, 1998
[Selby, 1998] Selby, C. and Burgan, B., "Blast and fire engineering for topside structures, Phase 2, Final summary report", Steel Construction Institute, UK, Publication number 253, 1998
[van Wingerden, 1997] van Wingerden, K. and Linga, H., "New aspects of the effect of water spray on gas explosions in offshore rigs", Conference on Fire and Blast Engineering: Offshore Installations, ERA report 97-0994, London, 1997
[van Wingerden, 1998] van Wingerden, K., Hansen, O.R. and Lemousy, T., "Effect of deluge on explosions, FLACS simulations compared to full scale experiments", 7th Annual Conference on Offshore Installations, ERA report No 98-0958, ISBN 0-7008-0679-2, London, 1998
[van Wingerden, 2000] van Wingerden, K., "Mitigation of gas explosions using water deluge", Process Safety Progress, Volume 19, Issue 3, pp. 173-178, 2000
[Catlin, 1993] Catlin, C., Gregory, C.A.J., Johnson, D.M. and Walker, D.G., "Explosion mitigation in offshore modules by general area deluge", TransIChemE, vol. 71, Part B, 1993
## Passive systems

In this section, various passive methods and their potential influence on hydrogen safety are discussed. Passive measures include elements such as "inherently safe design" and "soft barriers", as well as certain protection measures that are constantly in place and thus require no maintenance. Because of the high reactivity of hydrogen, and the limited benefits expected from active measures, special consideration should be given to finding the optimal passive protection methods. For gas explosions, some best practice advice can be found in [Bjerketvedt, 1997]; see the examples in Figure XX.

Figure XX: Some illustrations from the Gas Explosion Handbook [Bjerketvedt, 1997] indicating best practice layouts for explosion exposed areas.

### Inherently safe design

The main focus here should be on avoiding significant flammable gas clouds, with some focus also on limiting overpressures if an explosion does take place. Both goals can be achieved by minimizing confinement (the optimal wall is no wall). The strong positive buoyancy of hydrogen should be exploited, and one should ensure that released hydrogen finds its way upwards without meeting too much confinement. In outdoor situations, this can be ensured by proper design of ceilings and covers. Large high-momentum leaks inside a process area may still generate significant cloud sizes. If this turns out to be a problem, methods can be applied to reduce the momentum of horizontal leaks, e.g. putting up vertical walls around the likely leak locations. By reducing the momentum of the leak, it will find its way upwards much sooner. This may reduce cloud sizes, but it increases the likelihood of small explosions, as more frequent smaller leaks may now generate flammable clouds; such a measure should therefore not be applied without a proper risk evaluation.

Another issue in the design is that different units should be separated, so that the gas cloud from one unit does not reach the next. In semi-confined situations, one should further ensure that natural ventilation, in combination with buoyancy effects, is as efficient as possible in preventing gas cloud build-up under different wind conditions. Again, the focus should be on designing the ceilings so that buoyant layers of gas find their way out of the vent openings. For a more confined situation, it will depend on the leak rate whether a low-momentum release (more stratification; beneficial for large amounts released, or if gas near the ceiling is quickly removed) or a high-momentum release (more mixing; beneficial provided the concentration can be held below, e.g., 8%) is preferable. A casing around the leak-exposed equipment can ensure a low-momentum leak. Similar effects may be achieved by applying weak barriers, like curtains; these may let some of the gas through, but may reduce the size of highly flammable gas clouds.

If a gas cloud is generated and ignites, the presence of large vent areas will usually be an advantage to limit explosion pressures. If the vent areas are well distributed, this may reduce the flame acceleration through the geometry and thus the severity of the explosion. A strong feedback from an external explosion into the chamber, increasing the turbulence and flame speeds, may also be less likely when the vents are distributed. In some situations it will be an advantage if the vent panels close after an explosion, to limit the access of oxygen for the following fire. The congestion level should also be made as low as possible, to limit turbulent flame speeds.
In areas exposed to hydrogen leaks, the region near the ceiling should be given particular attention, as the gas is likely to collect there. It may then be a good idea to limit the equipment density near the ceiling, to avoid equipment that will accelerate flames in that region. Significant support beams below the ceiling can be an advantage, in that they influence the shape of the gas cloud, but also a disadvantage, in that they accelerate flames. When designing such facilities, one should have a philosophy about this before deciding on the detailed layout.

It is not always straightforward to choose the optimal design based on the guidelines above. Several of the considerations will depend on the frequency and consequences of the various incident scenarios. If one design choice is taken, one should expect this to increase the frequency/consequences of certain incidents and reduce the frequency/consequences of others. When evaluating these issues, it is important to apply methods that take the complexity of the phenomena into consideration. If consequence tools are to be applied, this will in many situations mean CFD tools, as simplified guidelines will not capture the physics.

### Protection walls

One approach to protect sensitive equipment from explosion effects is to place some kind of barrier between the source of the explosion and the sensitive target. This is sometimes done in connection with the handling of explosives, and also in the chemical industry to protect the surroundings from high-pressure tanks with potentially unstable chemicals that may explode [Herrmann, 2005]. It is also sometimes used to deflect flames in connection with explosion venting, either to prevent people from being killed by fast vented flames or to protect buildings directly outside an explosion vent. As for many other mitigation measures, the design and optimization of a protection wall is not straightforward. Important design questions are:

Where to locate the wall?

The wall can either be located close to the source, to absorb the energy from the explosion or venting, or close to the target, to shield it from pressure waves. For a deflagration it is in general difficult to identify the exact position of the explosion source, and a source-near wall will usually not be a practical or cost-efficient mitigation measure. One exception is when there is a vent opening: in that case one may know where the energy comes from, and it is possible to design a protection wall. The alternative approach is to place a protection wall in front of the target. In order to achieve a good effect, one has to study the detailed interaction between the blast waves and the wall-and-building complex, and optimize the size and position based on such a study. It can be a challenging task to design a good protection wall, and in most cases it will be better to spend the same resources on strengthening the target building.

How large should the wall be?

This can be a difficult question to answer, as it depends on several parameters, including the position and volume of the source explosion relative to the object to be protected. For a geographically well-defined detonation or vessel burst that can be considered a point source, an optimization of the wall design may be possible; for a less well-defined source, significant conservatism will normally have to be included.

How strong must the wall be?
If the wall is located near the source, it has to be stronger than if it is located close to the target. In both cases, it should not generate projectiles as a result of the blast loads. If the incident is statistically rare, it may be acceptable for the wall to be damaged by the incident.

By studying such approaches with protection walls against blast waves, one will normally realize that the effect of shielding walls is usually limited. Parameter studies may also show that it is fully possible to make the blast loads worse, depending on the location and size of the shielding wall. This is partly because the pressure goes around the wall on all sides (above and to the sides), and these deflected pressure waves may meet again behind the wall. In the planes where the deflected waves meet, one may experience higher pressure loads than in the reference case with no wall. Another issue is that pressure waves arriving from a different angle, compared to the case with no protection wall, may be more dangerous, giving a stronger reflected pressure.

Figure X.X: Due to reflection effects, the pressure in front of a "protection object" may be significantly higher than the free-field blast pressure. Even behind an obstacle, interference may lead to overpressures higher than without the object present. The plot shows the enhancement factor for simulated pressure waves relative to the free-field blast; in this case the object may locally enhance the observed pressure behind the object by more than 30% (the effect depends on the strength of the shock waves).

In the following, an example of the testing and modeling of protection walls is given.

Forecast of blast wave propagation and of the impact force applied to the protective wall

If an explosion accident occurs, measures are needed to minimize the harm to materials and personnel in the surrounding area. For this purpose, the design conditions of a protective wall were investigated, in order to obtain a more efficient reduction of the blast wave, by calculating the blast wave propagation with a compressible fluid simulation for the postulated explosion accident. The benefit of installing a protective wall was examined based upon numerical simulations of blast wave propagation with BAAL, an open source code of Los Alamos National Laboratory. Fig. 1 shows the reduction of explosion overpressure achieved by various protective walls.

Fig. 1: Comparison of the reduction of explosion overpressure for various protective walls.

The values in Fig. 1 represent the reduction of explosion overpressure, calculated as the percentage of the explosion overpressure relative to that without a protective wall, at a distance of 10 meters downstream of the protective wall. Based upon the results shown in Fig. 1, it becomes clear from this simulation that the protective wall should have a width of at least 12 meters, and that the reduction of explosion overpressure depends greatly on the height of the protective wall but not on its configuration.

### Experimental evaluation and numerical simulation of the damage of surrounding structures by an explosion accident

Explosion experiments were carried out in order to evaluate the damage to surrounding reinforced concrete (RC) structures and to enable their structural design by numerical simulations (see Fig. 2).
Fig. 2: Experiment on hydrogen explosion and RC structure damage (left: experimental system; right: the moment of explosion).

For the experiment, a pre-mixed 37 m3 cloud of 30% hydrogen in air was detonated with the RC test pieces located 5 meters from the explosion center, and the response and damage of the test pieces were observed. There were 22 RC test pieces, with different heights, thicknesses, bar arrangements and steel ratios. Table 1 shows the results of the experiments, covering a broad range of conditions from the elastic stage to breaking.

Table 1: Experimental results on RC structure damage caused by hydrogen explosions.

The explosion results show that the response of the structures has a significant time lag behind the blast wave propagation. Because the crack traces show evidence of higher-order deformation modes, the damage to the RC structures is caused by a vibrational phenomenon that depends on their natural frequency (see Fig. 3).

Fig. 3: Typical displacement response.

Results obtained by coupling a blast wave analysis in AUTODYN with a response analysis in FINAL, structural analysis software developed by Obayashi Corporation, agree well with the experimental results (see Fig. 4). The phenomenon can therefore be simulated with the above-mentioned software.

Fig. 4: Comparison of simulation and experiment concerning the displacement response.

### Soft barriers

The concept of soft barriers for explosion mitigation was discussed in [Tam, 2000]. A soft barrier could be a polyethylene sheet preventing gas from entering regions where explosions could become more severe due to pressure piling or reflections. Another soft barrier could be a cover around a congested pipe bundle: a gas explosion will accelerate much less past one large "cylinder" than past a pipe bundle. A third example would be to fill the upper half of a room with balloons. A released gas will then only be able to fill half of the volume, and if it explodes, the combustion products can expand as the balloons are compressed, limiting the overpressure. If the balloons also fill the space between beams (repeated beams would normally accelerate the flames), the effect of such measures can be very significant.

The possibilities with such soft barriers are numerous. Another example could be a pattern of regular vertical curtains. Workers could easily walk through the curtains, so the limitations on normal work operations would be small. A high-momentum jet release, on the other hand, would soon lose its momentum and move upwards due to buoyancy, and the curtains would also limit the mixing of the gas. The flammable cloud size would then be limited (a small rich region, other lean regions, and some regions with no gas at all). Once an explosion starts, the soft barriers act as weak vent panels in all directions.

Fig. X.X: Two creative ways to reduce worst-case explosion consequences are illustrated. In the central picture, the volume exposed to flammable gas is reduced by introducing a false weak ceiling; in the right picture, balloons reduce the volume that can be occupied by flammable gas; these are compressed in case of pressure build-up and thus reduce the explosion consequences.

### Flame arresters

Flame quenching and quenching diameter

Cold walls quench the flame over a fairly long distance.
This observation led Sir Humphry Davy to the invention of the miners' safety lamp in 1815, and it has been used ever since in the construction of various explosion-proof equipment, including the flame arresters used to protect storage, distribution and chemical processing facilities containing flammable gases from fires and explosions. Typically the arresters are composed of metal plates with orifices, wire mesh screens, porous sintered metal elements, etc.

Flame quenching by walls can be due to cooling and to chemical effects, in particular the destruction of radical chain carriers. By testing mixtures of the same composition diluted in different proportions by argon and helium, which changes the ratio of diffusion coefficients and thermal conductivities of the mixtures without affecting the chemistry, it was proven that heat transfer is by far the dominating mechanism. Simple physical considerations then lead to the conclusion that the quenching distance d_q should be proportional to the flame thickness, which in turn is related to the laminar burning velocity S_L:

$$d_q \propto \frac{\lambda_u}{c_{P}\,\rho_u\,S_L} \qquad (1)$$

where λ is the thermal conductivity, c_P is the specific heat at constant pressure, ρ is the density (for an ideal gas ρ = pM̄/RT, with M̄ the average molecular weight and T the temperature), and the subscript u denotes the unburned state. The above relation is surprisingly exact; only the additional, typically weak, pressure dependence of S_L introduces some discrepancies. It is interesting to note that only about 22% of the heat generated by the flame per unit surface must be removed in order to quench the flame.

In some methods the flame is quenched using a circular tube, in which case one often speaks of the quenching diameter D0. In other methods it is convenient to quench the flame with a tube of slot-like cross-section, in which case one speaks of the quenching distance, referring to the width of the slot.

Fig 1: Quenching distance as a function of hydrogen concentration at various initial pressures.

In Fig. 1, quenching distances are plotted as a function of the hydrogen concentration in air at 300 K for various initial pressures, after Yang et al. [Yang, 2003]. Data of Lewis and von Elbe [Lewis, 1987] are also shown for comparison. The quenching distance has its minimum at about 30 vol.% hydrogen, i.e. practically at stoichiometry.

Other geometries give different quenching distances. The geometrical factor can be calculated from the requirement that the heat loss rate at which the flame is quenched is a constant, independent of the tube geometry. The geometrical quenching factors were studied by Berlad and Potter [Berlad, 1955], who proposed relations between the quenching diameter D0, the quenching distance D1, and quenching by a rectangular slit D2 with shorter side Dr and longer side b. Other geometries were also analysed. Although the predicted and observed values agreed well, systematic deviations were observed, which required empirical correction factors (typically of the order of 10%). The length of the quenching hole is unimportant: orifices in thin foils and in thick plates give the same results. Several investigators have looked for an effect of the nature of the wall on the quenching distance and found none, even when the walls were coated with special chain-breaking salts of various efficiencies.
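As a rough numerical illustration of relation (1), the sketch below estimates a quenching distance from the flame-thickness scaling. The proportionality constant and all gas properties used here are illustrative assumptions, not measured values; measured data such as those in Fig. 1 should always be preferred for design.

```python
# Rough estimate of the quenching distance from the flame-thickness
# scaling d_q ~ a * lambda_u / (c_P * rho_u * S_L) of relation (1).
# The constant a and all property values below are assumptions for
# illustration only; measured data (e.g. Fig. 1) take precedence.

GAS_CONSTANT = 8.31  # J/(K*mol)

def quenching_distance(lam, c_p, p, m_bar, t_u, s_l, a=30.0):
    """lam: W/(m*K); c_p: J/(kg*K); p: Pa; m_bar: kg/mol;
    t_u: K; s_l: m/s; a: assumed proportionality constant."""
    rho_u = p * m_bar / (GAS_CONSTANT * t_u)     # ideal-gas density
    flame_thickness = lam / (c_p * rho_u * s_l)  # thermal flame thickness
    return a * flame_thickness

# Stoichiometric hydrogen-air at roughly 1 atm and 300 K (assumed values).
print(quenching_distance(lam=0.05, c_p=1500.0, p=1.013e5,
                         m_bar=0.021, t_u=300.0, s_l=1.96))
```

With these assumed inputs the estimate comes out near 0.6 mm, which is at least of the same order as the minimum quenching distance for stoichiometric hydrogen-air read off Fig. 1.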
Maximum experimental safe gap (MESG)

Forced flow conditions, like those occurring during an explosion, make a difference, so the following problem is of importance. A mixture is ignited, or explodes, in a closed vessel, and the same mixture surrounds the vessel. What is then the maximum width of a slit in the vessel (sometimes referred to as the "maximum experimental safe gap", MESG) such that the flame cannot spread outside? Propagation of the flame under such conditions is a much more complex process, due to the domination of non-stationary and gas-dynamic phenomena. A landmark analysis of the problem was provided by Phillips [Phillips, 1963]. A full discussion is beyond the scope of this note; as an indication of the orders of magnitude, Table 1 gives the width of the "explosion-proof" slits and D1 for several mixtures, after Chomiak [Chomiak, 1990]. It is interesting to note that the MESG is a factor of two larger than the quenching distance at explosion pressures for most fuels, except acetylene, where it is less than half. This aspect of flame quenching is poorly understood and requires more work. These values relate to a stationary flame: if the gas flows in the direction of flame propagation, a smaller gap is needed to quench the flame, and conversely. If the gas velocity is high enough, a condition can occur in which a flame propagating against the flow is stabilized at the constriction and causes local overheating.

Table 1: Comparison of MESG and quenching distances for several mixtures [Chomiak, 1990]

Deflagration Flame Arresters

A flame arrester, or flame trap, is a device used to prevent the passage of a flame along a pipe or duct. A flame arrester is generally an assembly of narrow passages through which gas or vapour can flow, but which are too small to allow the passage of a flame. Flame arresters are generally distinguished as end-of-line or in-line arresters. There are three types of arresters:

• Type 1 – arresters with multiple small channels (planar sheet metal, crimped ribbon, wire gauze, perforated plate, perforated block, sintered metal, parallel plate, wire pack, packed bed);
• Type 2 – hydraulic devices;
• Type 3 – velocity flame stoppers.

The operation of type 1 arresters is generally treated in terms of the mechanisms of quenching and heat loss. Desirable properties of a flame arrester are: a high free cross-sectional area available for flow, low resistance to flow and freedom from blockage; a high capacity to absorb the heat of the flame; and the ability to withstand mechanical shock, including explosions. The design of a flame arrester depends on the combustion properties of the flammable mixture and on the function and location of the arrester. The size of the apertures through the arrester is determined by the quenching distance of the flammable mixture; the aperture diameter should be smaller than the quenching diameter by at least 50%. The performance of an arrester is affected by temperature: the quenching distance decreases as the temperature increases, being approximately inversely proportional to the square root of the absolute temperature.

Hydraulic, or liquid seal, arresters contain a liquid, usually water, which serves to break up the gas stream into bubbles and so prevents passage of the flame.

Velocity flame stoppers are arresters used in end-of-line applications. Their function is to prevent a flame passing from the downstream to the upstream side. The principle of their operation is to ensure that the velocity of the upstream gas passing through the arrester is sufficiently high to prevent a flame propagating through the arrester from the downstream side. The velocity necessary to prevent flashback through apertures larger than those that would give quenching is given by the equation [Hajek and Ludwig, 1960]:

$$u_T = 0.2015\,g_L D$$

where:
D – internal pipe diameter (m);
g_L – laminar velocity gradient (s⁻¹); it is a function of the gas and its concentration, and for hydrogen its maximum value is 10 000 s⁻¹;
u_T – turbulent flashback velocity (m/s).
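A quick worked example of the Hajek-Ludwig criterion follows; the pipe diameter chosen is an illustrative assumption.

```python
def turbulent_flashback_velocity(g_l, d):
    """u_T = 0.2015 * g_L * D  [Hajek and Ludwig, 1960].

    g_l: laminar velocity gradient in 1/s; d: internal pipe diameter in m.
    """
    return 0.2015 * g_l * d

# Hydrogen at its maximum laminar velocity gradient (10 000 1/s) in an
# assumed 0.1 m pipe: the upstream velocity must exceed about 200 m/s.
print(turbulent_flashback_velocity(g_l=10_000.0, d=0.1))  # 201.5 m/s
```

The very high laminar velocity gradient of hydrogen is what makes velocity flame stoppers demanding for hydrogen service: the required upstream velocity grows linearly with both g_L and the pipe diameter.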
More details on flame arresters, including the technology and a list of manufacturers, can be found in the book by Grossel [Grossel, 2002].

Several types of flame arresters have been tested for hydrogen service and found acceptable for quenching hydrogen-air and hydrogen-methane-air mixtures. Howard et al. [Howard, 1975] conducted experiments on three types of flame arresters for quenching fuel mixtures of hydrogen and methane with air. Tests were run at pressures of 0.02 and 0.08 MPa and feed gas temperatures of ambient, 423 K, 473 K and 523 K. In these experiments, only the velocity stopper was able to stop all flame propagation. Some crimped metal ribbon flame arresters have been tested for hydrogen service and can be used. Protego [Protego, 1993] has both deflagration and detonation flame arresters, ranging in size from 10 mm to 400 mm, approved in Germany for mixtures of hydrogen and air in all ranges of concentration. Enardo [Enardo, 2005] also has in-line flame arresters for hydrogen-air mixtures. NAO [NAO, 2005] has designed, successfully tested and provided a hydraulic flame arrester for hydrogen-air applications. Rao [Rao, 1980] also provides information on a hydraulic flame arrester that was designed and used successfully for hydrogen service in a nuclear power plant.

### Codes and standards

Flame arresters are the subject of a number of codes and standards in different countries. In the UK, BS 7244:1990 [BSI, 1990] covers the testing of arresters. In the USA, the Underwriters Laboratories standard UL 525 [UL, 1990] deals with construction and testing, and the American Petroleum Institute has issued API PB 2028 (2002) [API, 2002]. Germany has legally backed standards on the same aspects. The International Maritime Organization (IMO) also has requirements for flame arresters [IMO, 1984]. A new CEN European standard, EN 12874, was issued in 2001 [CEN, 2001]; this is a very comprehensive standard covering many aspects of flame arrester technology.

### Detonation arresters

None of the deflagration arrester designs can withstand a detonation; therefore detonation flame arresters were designed. Detonation arresters are devices designed to withstand and extinguish the high-speed, high-pressure flame front that characterizes a detonation propagating through a piping system. A detonation arrester must therefore be able to withstand the mechanical effects of the detonation shock wave while quenching the flame. Some designs have a "shock absorber" in front of the flame arresting element, to reduce both the high-pressure shock wave and the dynamic energy, and to split the flame front before it reaches the flame arrester element. Another design variation has what is called a "detonation momentum attenuator" (DMA) [Westech, 1989]. Detonations in piping have velocities of about 2000 m/s or greater, and in closed process vessels and equipment they can generate pressures from 20 to over 100 times the initial pressure. Detonation flame arresters are available for hydrogen in both unidirectional and bidirectional types.
When installed in a vent manifold system, the flame arresters on the tanks may be unidirectional or bidirectional, depending on the manufacturer's recommendations. They should preferably be installed in a vertical orientation, so that if liquid is present the arrester will drain; if they must be installed horizontally, they should be provided with drain connections. Most detonation arresters have crimped metal ribbon arrester elements, although expanded metal cartridges are also used. Arrester elements for detonation arresters are usually longer than those for deflagration arresters. Detonation flame arresters impose higher pressure drops than deflagration flame arresters due to heat transfer requirements, they are heavier because of structural requirements, and they are typically more expensive. Instantaneous impulse pressures caused by the shock waves of an overdriven detonation subject the arrester to forces of up to 34 000 kPa(g) at atmospheric initial pressure.

Volume filling of tanks with thin metal objects with a large surface area

The fact that surfaces cool a flame can also be exploited in a different way. If a potentially flammable volume, like the fuel tank of a fighter plane or a racing car, is packed with small elements built up from thin metal foils, this represents a very large surface area. The volume occupied by the metal objects may still be only of the order of a few percent, so the influence on the tank performance may be limited. A flame burning in this volume will then experience a very substantial heat loss, and may quench. Such methods have been applied in certain applications with hydrocarbon vapors of moderate reactivity. Since the quenching distance and MESG are one order of magnitude smaller for hydrogen than for typical hydrocarbons, the requirements for the fineness of the metal structures will be much higher: a 10 times shorter distance between cells requires 1000 times more cells in three dimensions. It should still be possible to benefit from such a method; even if the design allows the flames to burn, heat will be extracted from the burnt gases, which could reduce both the burning velocity and the terminal pressure. If the cells of the metal structure are too large, however, they could instead accelerate the flames. One example of a company manufacturing such a concept is [eXess, 2006].

Reference & sources

[Bjerketvedt, 1997] Bjerketvedt, D., Bakke, J.R. and van Wingerden, K., Gas Explosion Handbook, Journal of Hazardous Materials 52 (1997) 1-150
[Tam, 2000] Tam, V. (2000), Barrier Method: An Alternative Approach to Gas Explosion Control, FABIG Newsletter, R372, The Steel Construction Institute, UK
[Herrmann, 2005] Herrmann, D.D., Developing a sound basis for the design of vented explosion barricades in chemical processes, Process Safety Progress, Volume 24, Issue 1, pp. 52-58, March 2005
[Westech, 1989] Westech Industrial Ltd., 1989, Flame Arrester Seminar Notes, Westech Industrial Ltd., Calgary, Canada
[Yang, 2003] Yang, S.Y., Chung, S.H. and Kim, H.J., Effect of pressure on effectiveness of quenching meshes in transmitting hydrogen combustion, Nuclear Engineering and Design, 224 (2003) pp. 199-206. For additional data see also: Hong Seong-Wan, Shin Yong-Seung, Song Jin-Ho and Chang Soon-Heung, Performance test of quenching meshes for hydrogen control, Journal of Nuclear Science and Technology, 40 (2003) pp. 814-819
[Lewis, 1987] Lewis, B. and von Elbe, G., Combustion, Flames and Explosions of Gases, 3rd edition, Academic Press, New York, 1987
[Berlad, 1955] Berlad, A.L. and Potter, A.E.,
Fifth Symposium (International) on Combustion, Reinhold Publishing Corp., 1955, pp. 728-735
[Phillips, 1963] Phillips, H., On the transmission of explosion through a gap smaller than the quenching distance, Combust. Flame, 7 (1963) pp. 129-135
[Chomiak, 1990] Chomiak, J., Combustion: A Study in Theory, Fact and Application, Gordon and Breach Science Publishers, New York, 1990, p. 56
[CEN, 2001] CEN EN 12874:2001, Flame Arresters – Specifications, Operational Requirements and Test Procedures, European Committee for Standardization, Brussels, Belgium
[IMO, 1984] IMO (International Maritime Organization) MSC/Circ. 373, 1984, Standards for the Design, Testing and Locating of Devices to Prevent the Passage of Flame into Cargo Tanks in Oil Tankers, International Maritime Organization, London, England, UK
[UL, 1990] UL 525, Standard for Flame Arresters, 6th edition, Underwriters Laboratories, Inc., Northbrook, IL, 1994
[BSI, 1990] BSI (British Standards Institution) BS 7244:1990, Flame Arresters for General Use, British Standards Institution, London, England, UK
[API, 2002] API PB 2028, Flame Arresters in Piping Systems, 2nd ed., American Petroleum Institute, Washington, D.C., 2002
[Grossel, 2002] Grossel, S.S., Deflagration and Detonation Flame Arresters, American Institute of Chemical Engineers, New York, 2002
[Enardo, 2005] Enardo, Flame Arrester Technology, Technical Bulletin, 2005, www.enardo.com
[Protego, 1993] Protego, Special Catalogue of Protego Flame Arresters for Hydrogen Systems, Protego Publication No. NO770993, Braunschweiger Flammenfilter GmbH, Germany, 1993
[Howard, 1975] Howard, W.B., Rodenhorts, C.W., Small, G.E., Flame Arresters for Hydrogen Fuel-Air Mixtures, CEP Loss Prevention Manual, 9 (1975) 46-53
[Nao, 2005] NAO Inc., 2005, www.nao.com
[Rao, 1980] Rao, S.N., Dam, A.S., Maus, F.G., Detonation Flame Arrester Testing for Oyster Creek Nuclear Station, ANS/ENS Int. Conference on World Nuclear Energy, Washington, 1980
[eXess, 2006] eXess Engineering GmbH, http://www.exess.at/

## Emergency response

Emergency response methods available for a hydrogen "loss of containment" incident will to some extent be similar to those for loss of containment of other gaseous fuels. Active fire fighting is not as effective as for petrol or diesel, so more emphasis will have to be laid on extensive emergency response planning. The emergency response plan should reflect the foreseen major hazards and aim at minimizing the risk to people.

### Emergency response plan

General principles for emergency response planning may, in the absence of guidelines specific to hydrogen, be extracted from other areas where extensive emergency planning is seen as essential. Guidelines for emergency response for offshore installations are given in [ISO, 2000]. A basic principle is that emergency planning should be based on systematic identification of hazards, followed by evaluation and risk management. The initial step in emergency response planning would be the emergency response strategy, describing the general philosophy on how the organization, procedures, equipment, training and other measures are supposed to work together to deal with foreseeable incidents – even in the case of failure of an emergency response measure. For a hydrogen leak, a direct mitigation measure could for instance be deactivation of ignition sources upon gas detection, to prevent ignition. (Ignition source control is described in Ch. 5.6.6.)
This measure may not be effective, possibly even leading to ignition, and warning and escape procedures as well as egress routes will thus have to be part of the strategy. Moreover, as these measures all rely on the detection and communication of a hydrogen leak, detection (see Ch. 5.7.1) and communication should have a high reliability.

Communication is a key element in any emergency response plan. Effective communication will involve technical measures, organization, procedures and training adapted to each other and to the overall strategy. If communication fails, effective emergency response is not possible. Technical communication measures could initiate automatic actions, such as shutdown of the electrical power supply, or initiate an alarm or emergency ventilation, enabling manual (human) intervention or escape. Technical communication measures will also be needed for mobilization and communication within the emergency response organization and for mobilization of external resources. All of these measures will have to have a high reliability, and in cases where human action is intended (mobilization, intervention or escape), the recipient's ability to receive the message and discern the essential information must also be considered.

Effective emergency response will also require an organization intended and prepared for emergency response. The lines of communication should be well known and practiced, preferably the same as for daily operation. Emergency procedures, and especially the function and use of communication equipment, should be known and tested within the organization.

Escape/evacuation of people should be part of the initial planning of any new or modified installation. Escape routes are easy to implement at the design stage, but may be rather expensive or nearly impossible to implement if thought of too late. The principle of two escape routes from all areas regularly occupied by humans is laid down in most countries' building regulations and should also be applied to outdoor facilities such as refueling stations. Bearing in mind that refueling stations may be placed in congested areas and close to highly trafficked roads, this may not be straightforward to accomplish.

### Liquid spill

Liquid spill on water

Spill of liquid hydrogen on water may lead to Rapid Phase Transition (spontaneous and explosive boiling of the liquid hydrogen) due to the rather favorable heat transfer conditions and a practically unlimited reservoir of heat. The phenomenon is described by several sources, e.g. by [Hightower, 2004] for liquefied natural gas (LNG) on water, where the temperature difference is smaller than between liquid hydrogen and water. Emergency response in such a case should include warning boats in the area against sailing into the gas cloud. In some cases even car traffic may have to be stopped or re-directed. Warning other people in the area, especially downwind of the release, is also important, though the gas cloud may not last long enough for evacuation of people to be of any benefit.

Liquid spill on ground

Spill of liquid hydrogen on the ground can be expected to give less rapid evaporation than spill on water. The spread of liquid may be constrained, either by design of the storage facilities or by natural formations.
The best industry practice for storage of flammable liquids or condensed gases would be to lead liquid spills away from storage tanks (as well as temporarily parked transport tanks) by sloping ground (ditch) to a collection basin, minimizing the liquid surface and thus minimizing evaporation. Hydrogen pool fires are described in Ch. 3.1.8.6. Prevention of ignition would normally require a larger safety distance than the protection of people from a pool fire. Emergency response should encompass warning people in the area and re-routing traffic to prevent cars from driving into the gas cloud.

## Gaseous release

Gaseous releases and dispersion of released gases are described in Ch. 3.1.1 and 3.1.2. Guidelines for emergency response to gaseous releases can be found in offshore standards, e.g. from [ISO, 2000 and 1999]. Though hydrogen's properties are different from those of petroleum gases, there are also similarities: methane is buoyant in air, and methane releases are often seen as the most hazardous flammable gas releases on offshore installations because methane gas will not sink towards sea level. A number of general principles for danger limitation should be transferable to hydrogen releases:

• Fire and gas detection and alarm systems
• Escape of personnel to a safe place
• Emergency shut down (ESD) of process and of power supply to equipment not essential for safe shut down or emergency response
• Essential electrical equipment, e.g. emergency lighting, is EX certified

### Hydrogen fire

Hydrogen gas fires are described in Ch. 3.1.8.7. An ignited gas leakage is not easy to extinguish, and the principle normally applied is to protect the surroundings as far as possible from the effects of the fire and to prevent escalation. Guidelines for fire control and fire load protection can be found in [ISO, 1999]. The general principles are summarized below:

• Muster areas for escaped people should be protected from fire loads
• Active fire protection (fire water) may be used for cooling of equipment exposed to heat radiation
• Equipment that may be directly exposed to flame should also have passive fire protection

Reference & sources:

[ISO, 2000] ISO 15544:2000(E) Petroleum and natural gas industries – Offshore production installations – Requirements and guidelines for emergency response, 1st ed., 15.09.2000
[ISO, 1999] ISO 13702:1999(E) Petroleum and natural gas industries – Control and mitigation of fires on offshore production installations – Requirements and guidelines, 1st ed., 15.03.1999
[Hightower, 2004] Hightower, M., Gritzo, L., Luketa-Hanlin, A., Covan, J., Tieszen, S., Wellman, G., Irwin, M., Kaneshige, M., Melof, B., Morrow, C., Ragland, D., Guidance on Risk Analysis and Safety Implications of a Large Liquefied Natural Gas (LNG) Spill Over Water, SAND2004-6258, Sandia, Dec. 2004

## Safety distances

A safety distance is the required distance between the location of a gas leakage and the object to be protected, taking account of the evolving flammable atmosphere as well as of the heat and pressure wave resulting from a possible ignition. This separation distance is usually determined as a function of the quantity of hydrogen involved. It may be fixed on the basis of credible events and can be defined according to physically defined criteria, e.g., that the dose of thermal radiation or the peak overpressure has reached a certain threshold value. Distance requirements may be reduced by the use of barricades. A minimum safety distance is desirable for economic reasons.
The safety distance guidelines approach described in the following is simplified. Such simplified approaches may not be applicable in situations where confinement and congestion may collect gas and influence the flame acceleration. Under certain conditions LH2 releases may show dense gas behavior, and if such a dense cloud of cold hydrogen-air mixture enters a partly confined and congested region, one should not expect simplified safety distance guidelines to be valid. Another aspect is the risk of projectiles: even if the blast pressure hazard is acceptable at a certain safety distance, dangerous projectiles may be thrown much further. One major disadvantage of using simplified methods for safety distances is that the lack of a detailed description of the actual facility gives very limited credit to safety measures. One can therefore expect that the estimated safety distance is either significantly larger than necessary, or that the guidelines are generally non-conservative. Today, more refined methods exist that can take a larger number of parameters into account, in particular safety measures, and for most situations it should no longer be considered responsible to apply simplistic safety distance guidelines developed 30-50 years ago (in the pre-computer age).

In a study from 1960 [Zabetakis, 1960] investigating the vaporization of LH2 and the ignition of H2-air vapour clouds above LH2 pools, it was concluded that the quantity-distance relation valid at that time was very conservative. The new recommendation, shown in Fig. 5-x1 as a step function, is based on the assumption that the total content of an LH2 storage tank of up to 45 t or 640 m3 is released and ignited. The solid curves represent the estimated distances at which the thermal radiation dose reaches about 84 kJ/m2, a limit that is expected to produce flesh burns and ignite certain combustible materials. Curves are given for different humidity levels in the air, the severest case being zero water vapor content, meaning that an essential radiation heat sink is absent.

Fig. 5-x1: Quantity-distance recommendation for LH2 storage, from [Zabetakis, 1960]

A basic prerequisite is knowledge of the source term, which depends on the leak size and the thermodynamic conditions of the leaking substance. A problem is posed by non-quantifiable leakages, e.g., from cracks in welding seams. Quantity-distance relationships are usually different for people and for less demanding objects, e.g., adjacent storage tanks or working buildings, or are distinguished with respect to fireballs, shrapnel, structural response, or physiological effects (heat radiation). They may also differ for experimental and storage areas. A comparison of industrial storage standards for hydrogen, LNG, and gasoline is given in Fig. 5-x2 [Hord, 1978].

Fig. 5-x2: Industrial storage standards for H2, LNG, and gasoline in the USA, from [Hord, 1978]

The following two figures show the quantity-distance relationships for LH2 storage containers assuming no barricades. Fig. 5-x3 applies to the protection of personnel and inhabited buildings from hydrogen fire and from shrapnel in explosions. The respective separation distance between storage containers is given in Fig. 5-x4.
Fig. 5-x3: Quantity-distance relationship for the protection of personnel and inhabited buildings near liquid hydrogen storage containers in the USA, from [Hord, 1978]

Fig. 5-x4: Quantity-distance relationship for the protection of adjacent liquid hydrogen storage containers in the USA, from [Hord, 1978]

Design and operation of H2 and LH2 storage installations is regulated under the US OSHA (Occupational Safety and Health Administration) regulations as part of 29 CFR (Code of Federal Regulations). Here the minimum safety distance to be provided between the installation and people or property is defined as 15.3 m (50 ft) for gaseous H2 amounts > 425 Nm3. For LH2 tanks containing more than 2.27 m3 (600 gallons), the respective distance must be at least 23 m [US-DOT, 1997]. For hydrogen stored at US refueling stations, existing ASME pressure vessel standards apply, requiring various distances between the pressurized tanks and public facilities depending on the amount of fuel stored. Current safety distance restrictions are significant; if reduced separation distances are desired, the respective safety implications need to be investigated [Bevilaqua, 2001]. On-board hydrogen storage tanks are covered by US-DOT regulations, which appear to be reasonable in their present form [Bevilaqua, 2001].

In Japan, safety distances have to meet the "High Pressure Gas Safety Law" (see also Fig. 5-x6). It presently prescribes that the H2 pressure at filling stations be no higher than 40 MPa; the respective upper limit for vehicle tanks is 35 MPa. Activities are ongoing to shorten the presently valid safety distances for H2 refueling stations. The corresponding investigation includes H2 gas leakage experiments plus simulation calculations for demonstration purposes, as well as tests with ignition of the escaping gas and of the effect of barriers.

Safety zones around storage tanks for liquefied gases according to German law are described in Fig. 5-x5 for both above-ground and underground tank arrangements [Westfalen, 2001].

Fig. 5-x5: Safety zone arrangement for above-ground (top) and underground (bottom) storage tanks for liquefied gas, with RI = 1 m and RII = 3 m, from [Westfalen, 2001]

Fig. 5-x6 gives a comparison of minimum safety distances between LH2 storage systems and inhabited buildings as a function of LH2 mass, as fixed in codes and standards from different institutions and countries. The curves illustrate the varying degrees of conservatism of the institutions that generate safety criteria.

Fig. 5-x6: Safety distances (please note the scale change on the ordinate), from [Verfondern, 1999]. Curves 1 and 3 from [Edeskuty, 1979], 2 and 6 from [Japan Society for Safety Engineering], 4 from [Zabetakis, 1961], 5 from [Doehrn, 1984].

A formula for the safety distance is generally acknowledged to have the form

$R = k \cdot M^{1/3}$   (5-1)

where R is the safety distance in m and M the mass of the flammable substance in kg. The relation may be modified by damping parameters if some sort of protective measure is applied, e.g., a wall or earth coverage. The k-factor depends on the object to be protected (from German recommendations: 2.5 - 8 for a working building, 22 for a residential building, 200 for no damage) and on the type of substance. The above mass-distance relation, applying a k-factor of 8 in combination with an overpressure history to be sustained, has been used in the German legislation on the protection of nuclear power plants against external explosions [BMI, 1976].
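To make the cube-root scaling of equation (5-1) concrete, here is a minimal sketch; the k-factors are the German recommendations quoted above, while the flammable mass is an arbitrary illustrative value, not from the source:

```python
# Safety distance R = k * M**(1/3)  (equation 5-1)
# k-factors from the German recommendations quoted in the text.

K_FACTORS = {
    "working building": 8.0,       # upper end of the quoted 2.5 - 8 range
    "residential building": 22.0,
    "no damage": 200.0,
}

def safety_distance_m(mass_kg: float, k: float) -> float:
    """Safety distance in metres for a flammable mass M in kilograms."""
    return k * mass_kg ** (1.0 / 3.0)

mass_kg = 1000.0  # illustrative inventory
for target, k in K_FACTORS.items():
    print(f"{target:21s}: R = {safety_distance_m(mass_kg, k):7.1f} m")
```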
The BMI guideline applies to explosive substances which are handled in the neighborhood, such as production sites, waterways or trans-shipment places, railways, and roads. Explosive substances which are required for the plant operation are not included. In this guideline, a distinction is also made between different kinds of flammable masses. The distance between the NPP and locations where explosive substances are handled shall be calculated according to the following mass-distance relation:

$R = 8 \cdot L^{1/3}$   (5-2)

Furthermore, the safety distance has to obey a minimum of 100 m. If M is the maximum possible explosive inventory of a production facility, a storage tank, the biggest pipeline section between isolating equipment, or a transportation container in kg, then L is defined as the TNT equivalent in kg for explosive substances:

• 100 % of M for unsaturated hydrocarbons and non-liquefied gases;
• 50 % of M for gases liquefied under pressure;
• 10 % of M for gases liquefied at low temperatures;
• 0.3 % of M for combustible liquids with a flash point < 21 °C.

In terms of hydrogen, this is equivalent to a reduction of the k-factor from 8 m/kg^(1/3) down to 6.3 m/kg^(1/3) for gaseous H2 and to 3.7 m/kg^(1/3) for liquid H2, respectively.

In the USA, it is judged according to the US-NRC Regulatory Guide 1.91 that structures, systems, and components important to safety and designed for high wind loads are also capable of withstanding pressure peaks of at least 7 kPa resulting from explosions. No additional measures need to be taken if the equation

$R = 13 \cdot W^{1/3}$   (5-3)

is met, where R is the safety distance [m] from an exploding charge and W is the mass of TNT (equivalent) [kg] of the exploding material (see the solid line in Fig. 5-x7). For the LNG storage tank of the HTTR/SR system, the 400 m3 of LNG correspond to a mass of 169 tons of LNG, and this to a TNT equivalent of 1859 tons, which then translates into a safety distance of as much as 2.2 km. This approach appears to be unrealistic for the HTTR/SR system considering the fact that much larger stationary LNG tanks of up to 200,000 m3 (R ≈ 18 km) have been established worldwide. Aspects not taken into account here are the different explosive characters of a liquefied gas and a TNT explosive, the possibility of additional options offered by the 1.91 guideline, and finally the extreme unlikeliness of the total tank content to "explode", rather than assuming less conservative "design spills".

Fig. 5-x7: Safety distance as a function of the quantity of released liquefied gas according to the BMI guideline and the US Regulatory Guide 1.91, from [Verfondern, 2004]

## Knowledge gaps

With regard to mitigation of hydrogen explosions, the main knowledge gap may be the lack of identified useful methods for mitigation. Whereas numerous methods can be applied for hydrocarbon gas explosion mitigation, few of these will have a sufficiently beneficial effect on hydrogen flames. Due to the lack of good ways to mitigate hydrogen explosions, the main focus should be on preventing significant flammable clouds from building up in partially confined and congested areas.

One area where increased understanding could help to estimate the risk better is spontaneous ignition phenomena. If larger high-pressure hydrogen leaks would always ignite within fractions of a second, as seen in some jet release experiments by [Groethe, 2006], this would be important for the estimated risk and for risk reduction measures in such situations.
The implication would be that for such releases there is little point in working actively to minimize ignition sources (there is too little time for any action to be taken), and, fortunately, there is no risk of a very large gas cloud being generated. More work will be needed to understand these phenomena better.

It is unclear under what conditions (volume size, aspect ratios, obstructions, etc.) mitigation by explosion venting would be applicable for hydrogen. Available vent-sizing methods and guidelines have very limited applicability for hydrogen; more experimental data and analysis are necessary. Available guidelines on safety distances related to the siting of hydrogen facilities are controversial and do not provide clear input.

Water deluge is potentially a mitigation measure that could reduce flame speeds and explosion severity. This measure works very well for natural gas explosions, provided the degree of confinement of the gas cloud is limited. Potentially, there will also be situations where water deluge may mitigate hydrogen flames; this should be investigated experimentally at realistic scales.

One possibly very critical situation would be a massive release of liquid hydrogen on a warm day with low humidity. In such a situation the evaporated gas cloud may form a neutral or dense hydrogen-air cloud, which may represent a very significant hazard, in particular if it fills with obstacles or becomes partly confined. Typical obstacles could be a forest, a process plant, or industrial or domestic buildings. One possibility to mitigate this hazard would be to introduce sufficient heat into the cold evaporated hydrogen-air mixture for it to become more buoyant. This can for instance be done by water spray systems with small droplets to maximize the heat transfer. To increase the understanding of this hazard, it would be useful to see large-scale experiments that both demonstrate the possibility of generating a dense hydrogen-air mixture on a warm day with low humidity, and then repeat the experiment applying water sprays to add heat to the plume.

Another critical situation is the transport of significant amounts of hydrogen through tunnels. If significant leaks take place, or if the gas is released on purpose in an emergency situation, the confinement of the tunnel may make this a severe risk scenario. For situations with significant releases of hydrogen inside a tunnel, no good mitigation methods have been identified so far.

The best method for mitigation of risk is to build up a good understanding of the physics and to be able to model the various risk reduction methods available. With a CFD tool that can model the consequences of a given incident, as well as the consequences of mitigated incidents, one has the possibility to optimize the design and mitigation methods for the situations considered. When doing so, it is important not only to consider one particular incident, but to study the range of possible incidents, in order to estimate the overall effect of mitigation measures. Optimally, a probabilistic risk assessment could be carried out in which the effect of mitigation is assessed. This could, e.g., be along the lines recommended for Norwegian offshore installations [Norsok, 2001]. To follow this approach, a validated CFD tool will be required, which can model as much as possible of the phenomena and mitigation methods of interest.
Reference & sources:

[Bevilaqua, 2001] Bevilaqua Knight Inc., Bringing Fuel Cell Vehicles to Market, Scenarios and Challenges with Fuel Alternatives, Consultant Study Report, Hayward, USA (2001)
[BMI, 1976] BMI Bundesministerium des Innern, Bekanntmachung der Richtlinie für den Schutz von Kernkraftwerken gegen Druckwellen aus chemischen Reaktionen durch Auslegung der Kernkraftwerke hinsichtlich ihrer Festigkeit und induzierter Schwingungen sowie durch Sicherheitsabstaende, September 13, 1976
[Hord, 1978] Hord, J., How Safe is Hydrogen?, Symp. on Hydrogen for Energy Distribution, Chicago (1978)
[US-DOT, 1997] US-DOT, Department of Transportation, Clean Air Program, Use of Hydrogen to Power the Advanced Technology Transit Bus (ATTB): An Assessment, Report DOT-FTA-MA-26-0001-97-1 (1997)
[US-NRC, 1978] US-NRC, Evaluations of Explosions Postulated to Occur on Transportation Routes Near Nuclear Power Plants, Regulatory Guide 1.91, Revision 1, U.S. Nuclear Regulatory Commission (1978)
[Verfondern, 1999] Verfondern, K., Hydrogen as an Energy Carrier and its Production by Nuclear Power, Report IAEA-TECDOC-1085, International Atomic Energy Agency, Vienna, Austria (1999)
[Verfondern, 2004] Verfondern, K., Nishihara, T., Valuation of the Safety Concept of the Combined Nuclear/Chemical Complex for Hydrogen Production with HTTR, Report Juel-4135, Research Center Juelich, Germany (2004)
[Westfalen, 2001] Westfalen AG, Aufstellen oder Einlagern von Fluessiggas-Behaeltern, Muenster, Germany, Company Pamphlet (2001)
[Zabetakis, 1960] Zabetakis, M.G., Burgess, D.S., Research on the Hazards Associated with the Production and Handling of Liquid Hydrogen, Report WADD TR 60-141, Wright Air Development Division, Wright-Patterson Air Force Base, Ohio, USA (1960)
[Sherman, 1989] Sherman, M.P., Tieszen, S.R., Benedick, W.B., FLAME Facility: The Effect of Obstacles and Transverse Venting on Flame Acceleration and Transition to Detonation for Hydrogen-Air Mixtures at Large Scale, Sandia National Laboratories, Albuquerque, NM, USA, NUREG/CR-5275, SAND85-1264, R3, April 1989
[Norsok, 2001] NORSOK Z-013, Risk and Emergency Preparedness Analysis, Norsok standard, 2001. Available from Standard Norge, Postboks 242, N-1326 Lysaker, Norway
[Groethe, 2006] Groethe, M., Merilo, E., Colton, J., Chiba, S., Sato, Y., Iwabuchi, H., Large-scale Hydrogen Deflagrations and Detonations, International Journal of Hydrogen Energy, 31 (2006)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5412552952766418, "perplexity": 3509.386562426817}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710698.62/warc/CC-MAIN-20221129132340-20221129162340-00099.warc.gz"}
# Real-valued 2D Fourier series?

For a (well-behaved) one-dimensional function $f: [-\pi, \pi] \rightarrow \mathbb{R}$, we can use the Fourier series expansion to write
$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos(nx) + b_n\sin(nx) \right)$$
For a function of two variables, Wikipedia lists the formula
$$f(x,y) = \sum_{j,k \in \mathbb{Z}} c_{j,k} e^{ijx}e^{iky}$$
In this formula, $f$ is complex-valued. Is there a similar series representation for real-valued functions of two variables?

Substitute $e^{i\omega} = \cos\omega + i\sin\omega$ and $c_{j,k} = a_{j,k} + ib_{j,k}$ in the formula you get from Wikipedia, and look only at the real part of the result. The formula gets a bit unwieldy due to the 4 $\sin\cos$ combinations you get, but it works... – fgp Oct 12 '12 at 15:06

Yes! And these types of expansions occur in a variety of applications, e.g., solving the heat or wave equation on a rectangle with prescribed boundary and initial data. As a specific example, we can think of the following expansion as a two-dimensional Fourier sine series for $f(x,y)$ on $0<x<a$, $0<y<b$:
$$f(x,y)=\sum_{n=1}^\infty \sum_{m=1}^\infty c_{nm}\sin\left({n\pi\, x\over a}\right)\sin\left({m\pi\, y\over b}\right), \quad 0<x<a,\ 0<y<b,$$
where the coefficients (obtained from the same type of orthogonality argument as in the 1D case) are given by
\begin{align} c_{nm}&={\int_0^b \int_0^a f(x,y)\sin\left({n\pi\, x\over a}\right)\sin\left({m\pi\, y\over b}\right)\,dx\,dy\over \int_0^b \int_0^a \sin^2\left({n\pi\, x\over a}\right)\sin^2\left({m\pi\, y\over b}\right)\,dx\,dy}\\ &={4\over a b}\int_0^b \int_0^a f(x,y)\sin\left({n\pi\, x\over a}\right)\sin\left({m\pi\, y\over b}\right)\,dx\,dy, \quad n,m=1,2,3,\dots \end{align}
For example, the picture below shows (left) the surface
$$f(x,y)=30x y^2 (1-x)(1-y)\cos(10x)\cos(10y), \quad 0<x<1,\ 0<y<1,$$
and a plot of the two-dimensional Fourier sine series (right) of $f(x,y)$ for $n,m=1,\dots,5$.

Finally, keep in mind that we are not limited just to double sums of the form sine-sine. We could have any combination we like so long as they form a complete orthogonal family on the domain under discussion.

This representation seems to be valid only if the values of the function on the boundary of the rectangle are zero. – Beni Bogosel Nov 13 '13 at 11:59
Yes, that's why I said, "As a specific example..." The sine functions used there are the eigenfunctions obtained when solving the heat equation on a rectangle where zero boundary conditions are specified. – JohnD Nov 14 '13 at 0:47
What about the case where cosine terms are required? This answer is useless because it does not address the general case. – user3728501 Dec 21 '13 at 14:19

@JohnD Just a detail about the coefficients: the correct formula is
$$c_{n,m} = \frac{\int_{0}^a \int_0^b f(x,y)\sin(\frac{n\pi x}{a})\sin(\frac{m\pi y}{b})\,dx\,dy}{\int_{0}^a \int_0^b \sin^2(\frac{n\pi x}{a})\sin^2(\frac{m\pi y}{b})\,dx\,dy}$$
otherwise one gets the impression that the denominator is $1$. That's not true. Cheers!

Yes, it was a typo (I had left off the squares in the denominator). Fixed now.
– JohnD Mar 6 '13 at 4:48

By Euler's identity $e^{i\varphi}=\cos{\varphi}+i\sin{\varphi}$, the trigonometric Fourier series expansion
$$f(x) \sim \dfrac{a_0}{2} + \sum\limits_{n=1}^{\infty} \left( a_n \cos(nx) + b_n\sin(nx) \right)$$
may easily be transformed into the exponential form
$$f(x) \sim \sum\limits_{n=-\infty}^{\infty}{c_n{e^{inx}}},$$
and vice versa, where $c_k=\dfrac{1}{2\pi}\int\limits_0^{2\pi}f(x){e^{-ikx}} \, dx$ are the complex Fourier coefficients. The function $f$ may be real- or complex-valued.
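As a numerical sanity check of the two-dimensional sine-series coefficients derived in the first answer (this is an addition to the thread, assuming numpy is available), the sketch below approximates the integrals by a midpoint Riemann sum for the example surface and compares the $n,m=1,\dots,5$ partial sum with $f$ at a test point:

```python
import numpy as np

# Example surface from the answer: f(x,y) = 30*x*y^2*(1-x)*(1-y)*cos(10x)*cos(10y)
a = b = 1.0
f = lambda x, y: 30 * x * y**2 * (1 - x) * (1 - y) * np.cos(10 * x) * np.cos(10 * y)

# midpoint grid for a simple 2D Riemann sum
N = 400
x = (np.arange(N) + 0.5) * a / N
y = (np.arange(N) + 0.5) * b / N
X, Y = np.meshgrid(x, y, indexing="ij")
dA = (a / N) * (b / N)

def c(n, m):
    """Coefficient c_nm = (4/(a*b)) * integral of f*sin(n*pi*x/a)*sin(m*pi*y/b)."""
    return 4 / (a * b) * np.sum(f(X, Y) * np.sin(n * np.pi * X / a)
                                        * np.sin(m * np.pi * Y / b)) * dA

x0, y0 = 0.3, 0.7
partial = sum(c(n, m) * np.sin(n * np.pi * x0 / a) * np.sin(m * np.pi * y0 / b)
              for n in range(1, 6) for m in range(1, 6))
print(f"f(x0, y0)   = {f(x0, y0):+.5f}")
print(f"partial sum = {partial:+.5f}")   # close, as in the right-hand plot
```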
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9548519253730774, "perplexity": 556.8860676127272}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997878518.58/warc/CC-MAIN-20140722025758-00098-ip-10-33-131-23.ec2.internal.warc.gz"}
# Tooltip that works with all pdf readers

Is it possible to get tooltips into a LaTeX pdf? I am working on my CV. I would like to show some more information in my electronic version once you hover over a field with the mouse, i.e., when hovering with the mouse over "Education" I want a tooltip to show the subjects taken in undergrad. Is this possible, and if yes, would it work for all kinds of pdf readers?

Finally I came up with a \tooltip command that works across a small number of PDF viewers, among which is an Open-Source one. The tooltip command allows for TeX-formatted tip texts.

\tooltip[*[*[*[*]]]] [<link colour>]{<link text>} [<tip box colour>]{<tip text>} [<x-offset>,<y-offset>]

It comes in five variants:

%draggable tip box (e. g. https://tex.stackexchange.com/a/108998),
%visible on mouse-over, hidden on mouse-out
%draggable tip box, toggle visibility on mouse-over (a second wipe hides the tip)
%NON-draggable tip, visible on mouse-over, hidden on mouse-out
%NON-draggable tip, toggle visibility on mouse-over
%NON-draggable tip, toggle visibility on mouse-click

For Evince (open-source), only the command with 4 stars \tooltip**** is usable, because mouse-click is the only event Evince listens for. In Acrobat Reader, all variants are functional. The non-draggable variants (two and more stars) do not use any JavaScript. If hyperref is loaded, the colour of internal links (hyperref option linkcolor) is used as <link colour>.

In order to break longer <tip text> into multiple lines, wrap it in a \parbox of given width:

\tooltip{link text}{\parbox{0.5\linewidth}{... long tip text ...}}

\documentclass[a6paper,12pt]{scrbook}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% tooltips with LaTeX v. 2019/09/26
%
% \tooltip[*[*[*[*]]]] [<link colour>]{<link text>}
%   [<tip box colour>]{<tip text>} [<x-offset>,<y-offset>]
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% \tooltip     --> draggable tip, visible on mouse-over, hidden on mouse-out
%
% \tooltip*    --> draggable tip, toggle visibility on mouse-over
%
% \tooltip**   --> NON-draggable tip, visible on mouse-over, hidden on mouse-out
%
% \tooltip***  --> NON-draggable tip, toggle visibility on mouse-over
%
% \tooltip**** --> NON-draggable tip, toggle visibility on mouse-click (Evince!)
%
% Default link colour can be set with
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{pdfbase}[2017/03/16]
\usepackage{xparse,ocgbase}
\usepackage{xcolor,calc}
\usepackage{tikzpagenodes,linegoal}
\usetikzlibrary{calc}
\usepackage{tcolorbox}
\ExplSyntaxOn
\let\tpPdfAnnot\pbs_pdfannot:nnnn\let\tpPdfLastAnn\pbs_pdflastann:
\let\tpAppendToFields\pbs_appendtofields:n
\def\tpPdfXform{\pbs_pdfxform:nnnnn{1}{1}{}{}}
\let\tpPdfLastXform\pbs_pdflastxform:
\let\cListSet\clist_set:Nn\let\cListItem\clist_item:Nn
\ExplSyntaxOff
\makeatletter
\NewDocumentCommand{\tooltip}{%
  % NOTE: the argument specifier was lost in extraction; the following line is
  % a reconstruction (four optional stars, link colour, link text, tip box
  % colour, tip text, offsets) and the default values are assumptions
  ssssO{blue}mO{yellow!20}mO{0pt,0pt}%
}{{%
\leavevmode%
\IfBooleanT{#2}{%
  %for variants with two and more stars, put tip box on a PDF Layer (OCG)
  \ocgbase@new@ocg{tipOCG.\thetcnt}{%
    /Print<</PrintState/OFF>>/Export<</ExportState/OFF>>%
  }{false}%
  \xdef\tpTipOcg{\ocgbase@last@ocg}%
  %prevent simultaneous visibility of multiple non-draggable tooltips
}%
% NOTE: the opening of the link annotation was lost in extraction; the call
% below is a reconstruction using the dimensions of the link text
\tpPdfAnnot{\widthof{#6}}{\heightof{#6}}{\depthof{#6}}{%
  \IfBooleanTF{#4}{%
    /Subtype/Link/Border[0 0 0]/A <</S/SetOCGState/State [/Toggle \tpTipOcg]>>
  }{%
    /Subtype/Screen%
    /AA<<%
    \IfBooleanTF{#3}{%
      /E<</S/SetOCGState/State [/Toggle \tpTipOcg]>>%
    }{%
      \IfBooleanTF{#2}{%
        /E<</S/SetOCGState/State [/ON \tpTipOcg]>>%
        /X<</S/SetOCGState/State [/OFF \tpTipOcg]>>%
      }{
        \IfBooleanTF{#1}{%
          /E<</S/JavaScript/JS(%
            var fd=this.getField('tip.\thetcnt');%
            if(typeof(click\thetcnt)=='undefined'){%
              var click\thetcnt=false;%
              var fdor\thetcnt=fd.rect;var dragging\thetcnt=false;%
            }%
            if(fd.display==display.hidden){%
              fd.delay=true;fd.display=display.visible;fd.delay=false;%
            }else{%
              if(!click\thetcnt&&!dragging\thetcnt){fd.display=display.hidden;}%
              if(!dragging\thetcnt){click\thetcnt=false;}%
            }%
            this.dirty=false;%
          )>>%
        }{%
          /E<</S/JavaScript/JS(%
            var fd=this.getField('tip.\thetcnt');%
            if(typeof(click\thetcnt)=='undefined'){%
              var click\thetcnt=false;%
              var fdor\thetcnt=fd.rect;var dragging\thetcnt=false;%
            }%
            if(fd.display==display.hidden){%
              fd.delay=true;fd.display=display.visible;fd.delay=false;%
            }%
            this.dirty=false;%
          )>>%
          /X<</S/JavaScript/JS(%
            if(!click\thetcnt&&!dragging\thetcnt){fd.display=display.hidden;}%
            if(!dragging\thetcnt){click\thetcnt=false;}%
            this.dirty=false;%
          )>>%
        }%
        /U<</S/JavaScript/JS(click\thetcnt=true;this.dirty=false;)>>%
        /PC<</S/JavaScript/JS (%
          var fd=this.getField('tip.\thetcnt');%
          try{fd.rect=fdor\thetcnt;}catch(e){}%
          fd.display=display.hidden;this.dirty=false;%
        )>>%
        /PO<</S/JavaScript/JS(this.dirty=false;)>>%
      }%
    }%
    >>%
  }%
}{{\color{#5}#6}}%
\sbox\tiptext{%
  \IfBooleanT{#2}{%
    \ocgbase@oc@bdc{\tpTipOcg}\ocgbase@open@stack@push{\tpTipOcg}}%
  %\fcolorbox{black}{#7}{#8}%
  \tcbox[colframe=black,colback=#7,size=fbox,arc=1ex,sharp corners=southwest]{#8}%
  \IfBooleanT{#2}{\ocgbase@oc@emc\ocgbase@open@stack@pop\tpNull}%
}%
\cListSet\tpOffsets{#9}%
\edef\twd{\the\wd\tiptext}%
\edef\tht{\the\ht\tiptext}%
\edef\tdp{\the\dp\tiptext}%
\tipshift=0pt%
\IfBooleanTF{#2}{%
  %OCG-based (that is, all non-draggable) boxes should not extend beyond the
  %current column as they may get overlaid by text in the neighbouring column
  \setlength\whatsleft{\linegoal}%
}{%
  \measureremainder{\whatsleft}%
}%
\ifdim\whatsleft<\dimexpr\twd+\cListItem\tpOffsets{1}\relax%
  \setlength\tipshift{\whatsleft-\twd-\cListItem\tpOffsets{1}}\fi%
\IfBooleanF{#2}{\tpPdfXform{\tiptext}}%
\raisebox{\heightof{#6}+\tdp+\cListItem\tpOffsets{2}}[0pt][0pt]{%
  \makebox[0pt][l]{\hspace{\dimexpr\tipshift+\cListItem\tpOffsets{1}\relax}%
    \IfBooleanTF{#2}{\usebox{\tiptext}}{%
      \tpPdfAnnot{\twd}{\tht}{\tdp}{%
        /Subtype/Widget/FT/Btn/T (tip.\thetcnt)%
        /AP<</N \tpPdfLastXform>>%
        /MK<</TP 1/I
\tpPdfLastXform/IF<</S/A/FB true/A [0.0 0.0]>>>>%
        /Ff 65536/F 3%
        /AA <<%
          /U <<%
            /S/JavaScript/JS(%
              var fd=event.target;%
              var mX=this.mouseX;var mY=this.mouseY;%
              var drag=function(){%
                var nX=this.mouseX;var nY=this.mouseY;%
                var dX=nX-mX;var dY=nY-mY;%
                var fdr=fd.rect;%
                fdr[0]+=dX;fdr[1]+=dY;fdr[2]+=dX;fdr[3]+=dY;%
                fd.rect=fdr;mX=nX;mY=nY;%
              };%
              if(!dragging\thetcnt){%
                dragging\thetcnt=true;Int=app.setInterval("drag()",1);%
              }%
              else{app.clearInterval(Int);dragging\thetcnt=false;}%
              this.dirty=false;%
            )%
          >>%
        >>%
      }%
      \tpAppendToFields{\tpPdfLastAnn}%
    }%
}}%
\stepcounter{tcnt}%
}}
\makeatother
\newsavebox\tiptext\newcounter{tcnt}
\newlength{\whatsleft}\newlength{\tipshift}
\newcommand{\measureremainder}[1]{%
  \begin{tikzpicture}[overlay,remember picture]
    \path let \p0 = (0,0), \p1 = (current page.east) in
      [/utils/exec={\pgfmathsetlength#1{\x1-\x0}\global#1=#1}];
  \end{tikzpicture}%
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
Einstein's \tooltip****{formula}{$E=m c^2$} is well known.

Another famous formula is due to \tooltip****{Pythagoras}{$a^2+b^2=c^2$}.

This \tooltip{tip}{is visible only in AR} is draggable and shown on mouse-over.
\end{document}

• I copy/pasted the code above but the compilation failed. Does someone have an idea why? The complete log file is there. The first error message is: ! Undefined control sequence. \tpPdfXform ->\pbs_pdfxform:nnn {1}{1} l.133 Einstein's \tooltip**{formula}{$E=m c^2$} is well known. The control sequence at the end of the top line of your error message was never \def'ed. [...] – Gilles Bonnet Sep 8 '16 at 16:45
• The problem seems to be that my system is too old. see: chat.stackexchange.com/transcript/message/32189614#32189614 – Gilles Bonnet Sep 8 '16 at 17:45
• I found why it did not work. I received no complaint about a spelling error in the name of the file containing your macro. Now it is fixed. Thanks! – Mikaël Mayer Jul 10 '17 at 8:54
• Overleaf's package status is ancient. You would be better off having TeX Live or MiKTeX installed on your computer. – AlexG Jul 10 '19 at 15:20
• @copy It's Firefox's fault. – AlexG Jun 26 '20 at 12:29

Tooltips in PDF documents are generally possible. I do know two packages; one of them you tagged yourself for your question, the other one I added; click on these tags to see which questions were already asked on TeX.SX. But as Joseph Wright wrote in a comment: How PDF viewers show 'hover text' is down to them, not the source (LaTeX or otherwise). Thus the best you can hope for is to check with a set of viewers. I can tell you that SumatraPDF, a popular viewer for Windows, does not show any tooltip at all with "tooltip" in a strict meaning, but does show comments when you hover with the mouse over one of them. What is supported, and what not, can be seen by reading the documentation of the package pdfcomment. There is a command \pdfmarkupcomment that actually almost acts like a tooltip.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6149890422821045, "perplexity": 13211.622692143032}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178364932.30/warc/CC-MAIN-20210302221633-20210303011633-00129.warc.gz"}
## Polynomials in Wood

What has $1-x/2-6x^2+11x^3-7x^4+3/2\,x^5$ got to do with wood? Like you, until a few days ago I would have said "Probably nothing". Then I came across a chart where it relates to how the bending strength of wood changes depending on the number of knots, in a lovely book that I found at the local second-hand book shop during Samuel Hansen's recent visit to Fayetteville. The book is full of other equations and models, such as this one:

$N = \frac{PQ}{P \sin^n \theta + Q \cos^n \theta}$

which is then explored for several values of $n$. Some of the tables caught my eye just for the beautiful way that they present information. Finally, it's not just equations; there is also a collection of patterns, along with an intriguing chapter on Structural Design of Sandwich Construction (probably not what I am thinking about).

All this points out to me, once again, how mathematics can be a powerful tool to help study anything. I know that when it comes down to it this is really just the well-established link between mathematics and engineering, but, as a material, wood is so much more accessible and visceral than, say, concrete. For some, a book on wood might even answer the eternal question of "How am I going to use this?", but it does at least show that quintic polynomials really do come up in real situations!
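Both formulas are easy to play with numerically. In the sketch below the strength-ratio polynomial is the one quoted above, while the values of $P$, $Q$, $n$ and of $x$ are my own illustrative choices, not taken from the book:

```python
import numpy as np

def strength_ratio(x):
    """The quintic from the chart: 1 - x/2 - 6x^2 + 11x^3 - 7x^4 + (3/2)x^5."""
    return 1 - x / 2 - 6 * x**2 + 11 * x**3 - 7 * x**4 + 1.5 * x**5

def grain_angle_strength(P, Q, theta, n):
    """N = P*Q / (P*sin(theta)^n + Q*cos(theta)^n), as in the book."""
    return P * Q / (P * np.sin(theta)**n + Q * np.cos(theta)**n)

for x in (0.0, 0.1, 0.2, 0.3):
    print(f"x = {x:.1f} -> bending strength ratio {strength_ratio(x):.3f}")

theta = np.radians(30)          # illustrative grain angle
for n in (1.5, 2.0, 2.5):       # the book explores several values of n
    print(f"n = {n}: N = {grain_angle_strength(10.0, 3.0, theta, n):.2f}")
```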
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 2, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2566966116428375, "perplexity": 715.477766117713}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501174154.34/warc/CC-MAIN-20170219104614-00611-ip-10-171-10-108.ec2.internal.warc.gz"}
# Interplay between magnetism and superconductivity in EuFe2-xCoxAs2 studied by 57Fe and 151Eu Mössbauer spectroscopy

P. J. W. Moll et al., Physical Review B: Condensed Matter 84 (2011) 174503. DOI: 10.1103/PhysRevB.84.174503

ABSTRACT: The compound EuFe2-xCoxAs2 was investigated by means of 57Fe and 151Eu Mössbauer spectroscopy versus temperature (4.2-300 K) for x = 0 (parent), x = 0.34-0.39 (superconductor), and x = 0.58 (overdoped). It was found that the spin density wave (SDW) is suppressed by Co substitution; however, it survives in the region of superconductivity, and the iron spectra exhibit some nonmagnetic components in the superconducting region. Europium orders magnetically regardless of the cobalt concentration, with the spin reorientation from the a-axis in the parent compound toward the c-axis with increasing replacement of iron by cobalt. The reorientation takes place close to the a-c plane. Some trivalent europium appears in EuFe2-xCoxAs2 with substitution, due to the chemical pressure induced by the Co atoms, and it experiences some transferred hyperfine field from Eu2+. Iron experiences some transferred field due to the europium ordering in the substituted samples, in both the SDW and the nonmagnetic state, while the transferred field is undetectable in the parent compound. Superconductivity coexists with the 4f-europium magnetic order within the same volume. It seems that superconductivity has some filamentary character in EuFe2-xCoxAs2, and it is confined to the nonmagnetic component seen by the iron Mössbauer spectroscopy.

Related articles:

Article: Local structure and hyperfine interactions of 57Fe in NaFeAs studied by Mössbauer spectroscopy
ABSTRACT: Detailed 57Fe Mössbauer spectroscopy measurements on superconducting NaFeAs powder samples have been performed in the temperature range 13 K ≤ T < 300 K. The 57Fe spectra recorded in the paramagnetic range (T > TN ≈ 46 K) are discussed supposing that most of the Fe(2+) ions are located in distorted (FeAs4) tetrahedra of the NaFeAs phase, while an additional minor (<10%) component of the spectra corresponds to an impurity or intergrowth phase with a nominal composition near NaFe2As2. Our results reveal that the structural transition (TS ≈ 55 K) has a weak effect on the electronic structure of the iron ions, while at T ≤ TN the spectra show a continuous distribution of hyperfine fields HFe. The shape of these spectra is analyzed in terms of two models: (i) an incommensurate spin density wave modulation of the iron magnetic structure, (ii) formation of a microdomain structure or phase separation. It is shown that the hyperfine parameters obtained using these two methods have very similar values over the whole temperature range. Analysis of the temperature dependence HFe(T) with the Bean-Rodbell model leads to ζ = 1.16 ± 0.05, suggesting that the magnetic phase transition is first order in nature. A sharp evolution of the VZZ(T) and η(T) parameters of the full Hamiltonian of hyperfine interactions near T ≈ (TN, TS) is interpreted as a manifestation of the anisotropic electron redistribution between the dxz-, dyz- and dxy-orbitals of the iron ions.
Journal of Physics: Condensed Matter 25 (2013) 346003

Article: Electron spin resonance in iron pnictides
ABSTRACT: We report on electron spin resonance studies in Eu-based 122 superconductors where the Eu2+ ions serve as a probe of the normal and superconducting state.
In polycrystalline Eu0.5K0.5Fe2As2 the spin-lattice relaxation rate 1/T1^ESR obtained from the ESR linewidth exhibits a Korringa-like linear increase with increasing temperature above Tc, evidencing normal Fermi-liquid behavior. Below Tc the spin-lattice relaxation rate 1/T1^ESR follows a T^1.5 behavior without any appearance of a coherence peak. In superconducting EuFe2As1.8P0.2 single crystals we find a similar Korringa slope in the normal state and observe anisotropic spectra when measuring with the external field parallel and perpendicular to the c-axis. In addition, we will discuss the ESR properties of selected systems from the 1111 and 11 families.
Physical Review B: Condensed Matter 86 (2012)

Article: Interplay between spin density wave and superconductivity in '122' iron pnictides: 57Fe Mössbauer study
ABSTRACT: Iron-based superconductors Ba0.7Rb0.3Fe2As2 and CaFe1.92Co0.08As2 of the '122' family have been investigated by means of the 14.41-keV Mössbauer transition in 57Fe versus temperature, from room temperature down to 4.2 K. A comparison is made with the previously investigated parent compounds BaFe2As2 and CaFe2As2. It has been found that the Mössbauer spectra of these superconductors are composed of a magnetically split component, due to the development of the spin density wave (SDW), and a non-magnetic component surviving even at the lowest temperatures. The latter component is responsible for superconductivity. Hence, superconductivity occurs in only part of the sample, even though the sample is single phase. This phenomenon is caused by the slight variation of the dopant concentration across the sample (crystal). (2011)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8510969281196594, "perplexity": 4007.4124816760154}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548623.95/warc/CC-MAIN-20141224185908-00013-ip-10-231-17-201.ec2.internal.warc.gz"}
# Thread: hello to all! need help in number theory

1. ## hello to all! need help in number theory

Let p be a prime of the form 4k+3. Prove that either {(p-1)/2}! ≡ 1 (mod p) or {(p-1)/2}! ≡ -1 (mod p).

2. Originally Posted by jen_mojic
Let p be a prime of the form 4k+3. Prove that either {(p-1)/2}! ≡ 1 (mod p) or {(p-1)/2}! ≡ -1 (mod p).

By Wilson's theorem we have, $1\cdot 2\cdot 3 \cdot ... \cdot (p-1) \equiv -1(\bmod p)$. Now, $p-1 \equiv -1$, and $p-2\equiv -2$, and so on ... until the middle. Pairing each factor $j \le \tfrac{p-1}{2}$ with $p-j \equiv -j$ produces $(\tfrac{p-1}{2})!$ twice, together with $\tfrac{p-1}{2}$ minus signs. Thus, $(-1)^{(p-1)/2} \left[ (\tfrac{p-1}{2})! \right]^2 \equiv -1(\bmod p)$. However, $(-1)^{(p-1)/2} = -1$ since $p=4k+3$ makes $\tfrac{p-1}{2} = 2k+1$ odd. And therefore we have, $\left[(\tfrac{p-1}{2})!\right]^2 \equiv 1(\bmod p) \implies (\tfrac{p-1}{2})! \equiv \pm 1(\bmod p)$.

For example, let $p=7$, then, $1\cdot 2\cdot 3 \cdot 4\cdot 5 \cdot 6 \equiv -1(\bmod 7)$. Do the trick above, $1\cdot 2 \cdot 3 \cdot (-3) \cdot (-2)\cdot (-1) \equiv -1(\bmod 7)$. Thus, $(-1)^3 (3!)^2 \equiv -1(\bmod 7)$. And the rest follows.
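A quick brute-force check of the statement for small primes $p \equiv 3 \pmod 4$ (an addition, not from the original thread):

```python
# Verify that ((p-1)/2)! is congruent to +1 or -1 (mod p) for primes p = 4k+3.
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

for p in range(3, 200):
    if is_prime(p) and p % 4 == 3:
        h = 1
        for i in range(1, (p - 1) // 2 + 1):
            h = h * i % p
        assert h in (1, p - 1)          # p - 1 represents -1 (mod p)
        print(f"p = {p:3d}: ((p-1)/2)! = {'+1' if h == 1 else '-1'} (mod p)")
```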
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 11, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9919956922531128, "perplexity": 3085.402535804386}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988720000.45/warc/CC-MAIN-20161020183840-00377-ip-10-171-6-4.ec2.internal.warc.gz"}
# Darboux Vector

In differential geometry, especially the theory of space curves, the Darboux vector is the angular velocity vector of the Frenet frame of a space curve.[1] It is named after Gaston Darboux who discovered it.[2] It is also called the angular momentum vector, because it is directly proportional to angular momentum.

In terms of the Frenet-Serret apparatus, the Darboux vector ω can be expressed as[3]

$${\boldsymbol {\omega }}=\tau \mathbf {T} +\kappa \mathbf {B} \qquad \qquad (1)$$

and it has the following symmetrical properties:[2]

$${\boldsymbol {\omega }}\times \mathbf {T} =\mathbf {T'} ,$$
$${\boldsymbol {\omega }}\times \mathbf {N} =\mathbf {N'} ,$$
$${\boldsymbol {\omega }}\times \mathbf {B} =\mathbf {B'} ,$$

which can be derived from Equation (1) by means of the Frenet-Serret theorem (or vice versa).

Let a rigid object move along a regular curve described parametrically by β(t). This object has its own intrinsic coordinate system. As the object moves along the curve, let its intrinsic coordinate system keep itself aligned with the curve's Frenet frame. As it does so, the object's motion will be described by two vectors: a translation vector, and a rotation vector ω, which is an areal velocity vector: the Darboux vector. Note that this rotation is kinematic, rather than physical, because usually when a rigid object moves freely in space its rotation is independent of its translation. The exception would be if the object's rotation is physically constrained to align itself with the object's translation, as is the case with the cart of a roller coaster.

Consider the rigid object moving smoothly along the regular curve. Once the translation is "factored out", the object is seen to rotate the same way as its Frenet frame. The total rotation of the Frenet frame is the combination of the rotations of each of the three Frenet vectors:

$${\boldsymbol {\omega }}={\boldsymbol {\omega }}_{\mathbf {T} }+{\boldsymbol {\omega }}_{\mathbf {N} }+{\boldsymbol {\omega }}_{\mathbf {B} }.$$

Each Frenet vector moves about an "origin" which is the centre of the rigid object (pick some point within the object and call it its centre).
The areal velocity of the tangent vector is: $\boldsymbol{\omega}_{\mathbf{T}} = \lim_{\Delta t \rightarrow 0} \frac{\mathbf{T}(t) \times \mathbf{T}(t+\Delta t)}{2\,\Delta t} = \frac{\mathbf{T}(t) \times \mathbf{T}'(t)}{2}.$ Likewise, $\boldsymbol{\omega}_{\mathbf{N}} = \frac{1}{2}\,\mathbf{N}(t) \times \mathbf{N}'(t),$ $\boldsymbol{\omega}_{\mathbf{B}} = \frac{1}{2}\,\mathbf{B}(t) \times \mathbf{B}'(t).$ Now apply the Frenet-Serret theorem to find the areal velocity components: $\boldsymbol{\omega}_{\mathbf{T}} = \frac{1}{2}\mathbf{T} \times \mathbf{T}' = \frac{1}{2}\kappa\,\mathbf{T} \times \mathbf{N} = \frac{1}{2}\kappa \mathbf{B}$ $\boldsymbol{\omega}_{\mathbf{N}} = \frac{1}{2}\mathbf{N} \times \mathbf{N}' = \frac{1}{2}(-\kappa\,\mathbf{N} \times \mathbf{T} + \tau\,\mathbf{N} \times \mathbf{B}) = \frac{1}{2}(\kappa \mathbf{B} + \tau \mathbf{T})$ $\boldsymbol{\omega}_{\mathbf{B}} = \frac{1}{2}\mathbf{B} \times \mathbf{B}' = -\frac{1}{2}\tau\,\mathbf{B} \times \mathbf{N} = \frac{1}{2}\tau \mathbf{T}$ so that $\boldsymbol{\omega} = \frac{1}{2}\kappa \mathbf{B} + \frac{1}{2}(\kappa \mathbf{B} + \tau \mathbf{T}) + \frac{1}{2}\tau \mathbf{T} = \kappa \mathbf{B} + \tau \mathbf{T},$ as claimed. The Darboux vector provides a concise way of interpreting curvature κ and torsion τ geometrically: curvature is the measure of the rotation of the Frenet frame about the binormal unit vector, whereas torsion is the measure of the rotation of the Frenet frame about the tangent unit vector.[2] ## References 1. ^ Stoker, J. J. (2011), Differential Geometry, Pure and Applied Mathematics, 20, John Wiley & Sons, p. 62, ISBN 9781118165478. 2. ^ a b c Farouki, Rida T. (2008), Pythagorean-Hodograph Curves: Algebra and Geometry Inseparable, Geometry and Computing, 1, Springer, p. 181, ISBN 9783540733980. 3. ^ Oprea, John (2007), Differential Geometry and Its Applications, Mathematical Association of America Textbooks, MAA, p. 21, ISBN 9780883857489. This article uses material from the Wikipedia page on the Darboux vector. It is released under the Creative Commons Attribution-Share-Alike License 3.0.
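The algebra above is easy to check numerically. The following Python sketch (an illustration added here, not part of the original article; the helix parameters a and b and the tolerance are arbitrary choices) verifies for a circular helix, whose Frenet frame, curvature κ = a/(a²+b²), and torsion τ = b/(a²+b²) are known in closed form, that ω = τT + κB reproduces the arc-length derivatives of all three Frenet vectors:

```python
import numpy as np

# Circular helix r(t) = (a cos t, a sin t, b t); arbitrary radius/pitch.
a, b = 2.0, 1.0
c = np.hypot(a, b)                  # ds/dt, so d/ds = (1/c) d/dt
kappa, tau = a / c**2, b / c**2     # closed-form curvature and torsion

def frenet(t):
    """Frenet frame (T, N, B) of the helix at parameter t."""
    T = np.array([-a * np.sin(t), a * np.cos(t), b]) / c
    N = np.array([-np.cos(t), -np.sin(t), 0.0])
    return T, N, np.cross(T, N)

t, h = 0.7, 1e-6                    # test point and finite-difference step
omega = tau * frenet(t)[0] + kappa * frenet(t)[2]   # Darboux vector

# omega x V should equal V' = dV/ds for each V in {T, N, B}.
for V, Vp, Vm in zip(frenet(t), frenet(t + h), frenet(t - h)):
    dV_ds = (Vp - Vm) / (2 * h * c)
    assert np.allclose(np.cross(omega, V), dV_ds, atol=1e-6)

print("omega =", omega)             # (0, 0, 1/c): rotation about the helix axis
```

That ω comes out constant and parallel to the helix axis also illustrates the kinematic picture above: the Frenet frame of a helix spins at a uniform rate about the axis as the object advances along the curve.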
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 13, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7037326097488403, "perplexity": 823.2061078012432}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371611051.77/warc/CC-MAIN-20200405213008-20200406003508-00390.warc.gz"}
https://www.amherst.edu/help/moodle_help/help_4_instructors/rubric_grades
## With rubrics, why doesn't the gradebook show the correct rubric points? Valid for Moodle 2.8. Checked on 0/21/2015. Rubric grades are based on the minimum and maximum grades possible for each criterion. The calculation involves summing up the difference between the minimum grade possible and the grade given for each criterion, then dividing the result by the sum of the maximum number of points possible minus the sum of the minimum points possible. The resulting percentage is then applied to the number of points for the overall assignment. In the words of moodle.org's page on rubrics: The rubric normalized score (i.e., basically a percentage grade) is calculated as $G_s = \frac{\sum_{i=1}^N (g_i - min_i)}{\sum_{i=1}^N (max_i - min_i)}$ where $g_i \in \mathbb{N}$ is the number of points given to the i-th criterion, $min_i \in \mathbb{N}$ is the minimal possible number of points for the i-th criterion, $max_i \in \mathbb{N}$ is the maximal possible number of points for the i-th criterion and $N \in \mathbb{N}$ is the number of criteria in the rubric. An example of a single criterion might be: Overall quality of the paper, with the levels 5 - An excellent paper, 3 - A mediocre paper, 0 - A weak paper (the numbers represent the number of points). Example: let us have an assessment form with two criteria, which both have four levels 1, 2, 3, 4. The teacher chooses the level with 2 points for the first criterion and 3 points for the second criterion. Then the normalized score is: $G_s = \frac{(2 - 1) + (3 - 1)}{(4 - 1) + (4 - 1)} = \frac{3}{6} = 50\%$ Note that this calculation may differ from how you intuitively use a rubric. For example, if the teacher in the previous example chose both levels with 1 point, the plain sum would be 2 points. But that is actually the lowest possible score, so it maps to the grade 0 in Moodle. To avoid confusion, it is recommended to always include a level with 0 points in the rubric definition.
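For concreteness, the calculation above can be written in a few lines of Python (a sketch added for illustration, not part of the Moodle documentation; the function name and the (given, min, max) data layout are invented for this example):

```python
def rubric_grade(criteria, assignment_points):
    """Moodle-style rubric grade: normalized score times assignment points.

    criteria is a list of (given, minimum, maximum) point tuples,
    one tuple per rubric criterion.
    """
    earned = sum(given - lo for given, lo, hi in criteria)
    possible = sum(hi - lo for given, lo, hi in criteria)
    return assignment_points * earned / possible

# The worked example above: two criteria with levels 1..4, graded at
# 2 and 3 points, applied to a 100-point assignment -> 50.0 points.
print(rubric_grade([(2, 1, 4), (3, 1, 4)], 100))
```

Choosing both 1-point levels in this example returns 0.0, which is exactly the pitfall the note above warns about.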
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7180707454681396, "perplexity": 534.2838764933706}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860122501.61/warc/CC-MAIN-20160428161522-00029-ip-10-239-7-51.ec2.internal.warc.gz"}
https://export.arxiv.org/abs/1802.02806
math-ph # Title: 3D Current Algebra and Twisted K Theory Abstract: Equivariant twisted K theory classes on compact Lie groups $G$ can be realized as families of Fredholm operators acting in a tensor product of a fermionic Fock space and a representation space of a central extension of the loop algebra $LG$ using a supersymmetric Wess-Zumino-Witten model. The aim of the present article is to extend the construction to higher loop algebras using an abelian extension of a $3D$ current algebra. We have only partial success: Instead of true Fredholm operators we have formal algebraic expressions in terms of the generators of the current algebra and an infinite dimensional Clifford algebra. These give rise to sesquilinear forms in a Hilbert bundle which transform in the expected way with respect to $3D$ gauge transformations but do not define true Hilbert space operators. Comments: For the Ludvig Faddeev memorial volume Subjects: Mathematical Physics (math-ph); K-Theory and Homology (math.KT); Representation Theory (math.RT) DOI: 10.1142/S0129055X18400111 Cite as: arXiv:1802.02806 [math-ph] (or arXiv:1802.02806v1 [math-ph] for this version) ## Submission history From: Jouko Mickelsson [v1] Thu, 8 Feb 2018 11:01:11 GMT (9kb)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6454647183418274, "perplexity": 1076.510823029727}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623488540235.72/warc/CC-MAIN-20210623195636-20210623225636-00131.warc.gz"}
https://worldwidescience.org/topicpages/t/telescope+wide+field.html
#### Sample records for telescope wide field 1. Extreme multiplex spectroscopy at wide-field 4-m telescopes Science.gov (United States) Content, Robert; Shanks, Tom 2008-07-01 We describe the design and science case for a spectrograph for the prime focus of classical 4-m wide-field telescopes that can deliver at least 4000 MOS slits over a 1° field. This extreme multiplex capability means that 25000 galaxy redshifts can be measured in a single night, opening up the possibilities for large galaxy redshift surveys out to z~0.7 and beyond for the purpose of measuring the Baryon Acoustic Oscillation (BAO) scale and for many other science goals. The design features four cloned spectrographs and exploits the exclusive possibility of tiling the focal plane of wide-field 4-m telescopes with CCDs for multi-object spectroscopic purposes. In ~200-night projects, such spectrographs have the potential to make galaxy redshift surveys of ~6×10^6 galaxies over a wide redshift range and thus may provide a low-cost alternative to other survey routes such as WFMOS and SKA. Two of these extreme multiplex spectrographs are currently being designed for the AAT (NG1dF) and Calar Alto (XMS) 4-m class telescopes. NG2dF, a larger version for the AAT 2° field, would have 12 clones and at least 12000 slits. The clones use a transparent design including a grism in which all optics are smaller than the clone square subfield so that the clones can be tightly packed with small gaps between the contiguous fields. Only low-cost glasses are used; the variations in chromatic aberrations between bands are compensated by changing one or two of the lenses adjacent to the grism. The total weight and length are smaller with a few clones than with a single spectrograph, which makes it feasible to place the spectrograph at the prime focus. 2. Wide-Field InfraRed Survey Telescope WFIRST Science.gov (United States) Green, J.; Schechter, P.; Baltay, C.; Bean, R.; Bennett, D.; Brown, R.; Conselice, C.; Donahue, M.; Fan, X.; Rauscher, B.; 2012-01-01 In December 2010, NASA created a Science Definition Team (SDT) for WFIRST, the Wide Field Infra-Red Survey Telescope, recommended by the Astro 2010 Decadal Survey as the highest priority for a large space mission. The SDT was chartered to work with the WFIRST Project Office at GSFC and the Program Office at JPL to produce a Design Reference Mission (DRM) for WFIRST. Part of the original charge was to produce an interim design reference mission by mid-2011. That document was delivered to NASA and widely circulated within the astronomical community. In late 2011 the Astrophysics Division augmented its original charge, asking for two design reference missions. The first of these, DRM1, was to be a finalized version of the interim DRM, reducing overall mission costs where possible. The second of these, DRM2, was to identify and eliminate capabilities that overlapped with those of NASA's James Webb Space Telescope (henceforth JWST), ESA's Euclid mission, and the NSF's ground-based Large Synoptic Survey Telescope (henceforth LSST), and again to reduce overall mission cost, while staying faithful to NWNH. This report presents both DRM1 and DRM2. 3. Science.gov (United States) Goullioud, R.; Content, D. A.; Kuan, G. M.; Moore, J. D.; Chang, Z.; Sunada, E. T.; Villalvazo, J.; Hawk, J. P.; Armani, N. V.; Johnson, E. L.; Powell, C. A. 
2012-09-01 The Wide Field Infrared Survey Telescope (WFIRST) mission concept was ranked first in new space astrophysics missions by the Astro2010 Decadal Survey, incorporating the Joint Dark Energy Mission payload concept and multiple science white papers. This mission is based on a space telescope at L2 studying exoplanets [via gravitational microlensing], probing dark energy, and surveying the near infrared sky. Since the release of the Astro2010 Decadal Survey, the team has been working with the WFIRST Science Definition Team to refine mission and payload concepts. We present the current interim reference mission point design of the payload, based on the use of a 1.3m unobscured aperture three mirror anastigmat form, with focal imaging and slit-less spectroscopy science channels. We also present the first results of Structural/Thermal/Optical performance modeling of the telescope point design. 4. Wide-Field Imaging Telescope-0 (WIT0) with automatic observing system Science.gov (United States) Ji, Tae-Geun; Byeon, Seoyeon; Lee, Hye-In; Park, Woojin; Lee, Sang-Yun; Hwang, Sungyong; Choi, Changsu; Gibson, Coyne Andrew; Kuehne, John W.; Prochaska, Travis; Marshall, Jennifer L.; Im, Myungshin; Pak, Soojong 2018-01-01 We introduce Wide-Field Imaging Telescope-0 (WIT0), with an automatic observing system. It is developed for monitoring the variabilities of many sources at a time, e.g. young stellar objects and active galactic nuclei. It can also find the locations of transient sources such as a supernova or gamma-ray bursts. In 2017 February, we installed the wide-field 10-inch telescope (Takahashi CCA-250) as a piggyback system on the 30-inch telescope at the McDonald Observatory in Texas, US. The 10-inch telescope has a 2.35 × 2.35 deg field-of-view with a 4k × 4k CCD Camera (FLI ML16803). To improve the observational efficiency of the system, we developed a new automatic observing software, KAOS30 (KHU Automatic Observing Software for McDonald 30-inch telescope), which was developed by Visual C++ on the basis of a windows operating system. The software consists of four control packages: the Telescope Control Package (TCP), the Data Acquisition Package (DAP), the Auto Focus Package (AFP), and the Script Mode Package (SMP). Since it also supports the instruments that are using the ASCOM driver, the additional hardware installations become quite simplified. We commissioned KAOS30 in 2017 August and are in the process of testing. Based on the WIT0 experiences, we will extend KAOS30 to control multiple telescopes in future projects. 5. Design Evolution of the Wide Field Infrared Survey Telescope Using Astrophysics Focused Telescope Assets (WFIRST-AFTA) and Lessons Learned Science.gov (United States) Peabody, Hume L.; Peters, Carlton V.; Rodriguez-Ruiz, Juan E.; McDonald, Carson S.; Content, David A.; Jackson, Clifton E. 2015-01-01 The design of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) continues to evolve as each design cycle is analyzed. In 2012, two Hubble sized (2.4 m diameter) telescopes were donated to NASA from elsewhere in the Federal Government. NASA began investigating potential uses for these telescopes and identified WFIRST as a mission to benefit from these assets. With an updated, deeper, and sharper field of view than previous design iterations with a smaller telescope, the optical designs of the WFIRST instruments were updated and the mechanical and thermal designs evolved around the new optical layout. 
Beginning with Design Cycle 3, significant analysis efforts yielded a design and model that could be evaluated for Structural-Thermal-Optical-Performance (STOP) purposes for the Wide Field Imager (WFI) and provided the basis for evaluating the high-level observatory requirements. Development of the Cycle 3 thermal model provided some valuable analysis lessons learned and established best practices for future design cycles. However, the Cycle 3 design did include some major liens and evolving requirements which were addressed in the Cycle 4 Design. Some of the design changes are driven by requirements changes, while others are optimizations or solutions to liens from previous cycles. Again in Cycle 4, STOP analysis was performed and further insights into the overall design were gained leading to the Cycle 5 design effort currently underway. This paper seeks to capture the thermal design evolution, with focus on major design drivers, key decisions and their rationale, and lessons learned as the design evolved. 6. Innovative compact focal plane array for wide field vis and ir orbiting telescopes Science.gov (United States) Hugot, Emmanuel; Vives, Sébastien; Ferrari, Marc; Gaeremynck, Yann; Jahn, Wilfried 2017-11-01 The future generation of high angular resolution space telescopes will require breakthrough technologies to combine large diameters and large focal plane arrays with compactness and lightweight mirrors and structures. Considering the volume allocated within medium-size launchers, short focal lengths are mandatory, implying complex optical relays to obtain diffraction limited images on large focal planes. In this paper we present preliminary studies to obtain compact focal plane arrays (FPA) for Earth observations on low Earth orbits at high angular resolution. Based on the principle of image slicers, we present an optical concept to arrange a 1D FPA into a 2D FPA, allowing the use of 2D detector matrices. This solution is particularly attractive for IR imaging requiring a cryostat, whose volume could be considerably reduced, as could the complexity of the relay optics. Enabling the use of 2D matrices for such an application offers new possibilities. Recent developments on curved FPAs allow optimization without concerns about field curvature. This innovative approach also reduces the complexity of the telescope optical combination, specifically for fast telescopes. This paper will describe the concept and optical design of an F/5 - 1.5m telescope equipped with such an FPA, its performance, and the impact on the system, with a comparison to an equivalent 1.5m wide-field Korsch telescope. 7. SIMULTANEOUS EXOPLANET CHARACTERIZATION AND DEEP WIDE-FIELD IMAGING WITH A DIFFRACTIVE PUPIL TELESCOPE International Nuclear Information System (INIS) Guyon, Olivier; Eisner, Josh A.; Angel, Roger; Woolf, Neville J.; Bendek, Eduardo A.; Milster, Thomas D.; Ammons, S. Mark; Shao, Michael; Shaklan, Stuart; Levine, Marie; Nemati, Bijan; Martinache, Frantz; Pitman, Joe; Woodruff, Robert A.; Belikov, Ruslan 2013-01-01 High-precision astrometry can identify exoplanets and measure their orbits and masses while coronagraphic imaging enables detailed characterization of their physical properties and atmospheric compositions through spectroscopy. In a previous paper, we showed that a diffractive pupil telescope (DPT) in space can enable sub-μas accuracy astrometric measurements from wide-field images by creating faint but sharp diffraction spikes around the bright target star. 
The DPT allows simultaneous astrometric measurement and coronagraphic imaging, and we discuss and quantify in this paper the scientific benefits of this combination for exoplanet science investigations: identification of exoplanets with increased sensitivity and robustness, and ability to measure planetary masses to high accuracy. We show how using both measurements to identify planets and measure their masses offers greater sensitivity and provides more reliable measurements than possible with separate missions, and therefore results in a large gain in mission efficiency. The combined measurements reliably identify potentially habitable planets in multiple systems with a few observations, while astrometry or imaging alone would require many measurements over a long time baseline. In addition, the combined measurement allows direct determination of stellar masses to percent-level accuracy, using planets as test particles. We also show that the DPT maintains the full sensitivity of the telescope for deep wide-field imaging, and is therefore compatible with simultaneous scientific observations unrelated to exoplanets. We conclude that astrometry, coronagraphy, and deep wide-field imaging can be performed simultaneously on a single telescope without significant negative impact on the performance of any of the three techniques. 8. Micrometeoroid Impacts on the Hubble Space Telescope Wide Field and Planetary Camera 2: Larger Particles Science.gov (United States) Kearsley, A. T.; Grime, G. W.; Webb, R. P.; Jeynes, C.; Palitsin, V.; Colaux, J. L.; Ross, D. K.; Anz-Meador, P.; Liou, J. C.; Opiela, J.; 2014-01-01 The Wide Field and Planetary Camera 2 (WFPC2) was returned from the Hubble Space Telescope (HST) by shuttle mission STS-125 in 2009. In space for 16 years, the surface accumulated hundreds of impact features on the zinc orthotitanate paint, some penetrating through into underlying metal. Larger impacts were seen in photographs taken from within the shuttle orbiter during service missions, with spallation of paint in areas reaching 1.6 cm across, exposing alloy beneath. Here we describe larger impact shapes, the analysis of impactor composition, and the micrometeoroid (MM) types responsible. 9. Science.gov (United States) Gloesener, P.; Wolfs, F.; Lemagne, F.; Cola, M.; Flebus, C.; Blanchard, G.; Kirschner, V. 2017-11-01 Regarding Earth observation missions, it has become unnecessary to point out the importance of making available wide field of view optical instruments for the purpose of spectral imaging. Taking advantage of the pushbroom instrument concept with its linear field across the on-ground track, it is in particular relevant to consider front-end optical configurations that involve an all-reflective system presenting inherent and dedicated advantages such as achromaticity, unobscuration and compactness, while ensuring the required image quality over the whole field. The attractiveness of the concept must be balanced with respect to the state-of-the-art mirror manufacturing technologies as the need for fast, broadband and wide field systems increases the constraints put on the feasibility of each individual component. As part of an ESTEC contract, AMOS designed, manufactured and tested a breadboard of a four-mirror wide field telescope for typical Earth observation superspectral missions. 
The initial purpose of the development was to assess the feasibility of a telecentric spaceborne three-mirror system covering an unobscured rectangular field of view of 26 degrees across track (ACT) by 6 degrees along track (ALT) with an f-number of 3.5 and a focal length of 500 mm and presenting an overall image quality better than 100 nm RMS wavefront error within the whole field. 10. Micrometeoroid Impacts on the Hubble Space Telescope Wide Field and Planetary Camera 2: Smaller Particle Impacts Science.gov (United States) Ross, D. K.; Anz-Meador, P.; Liou, J.C.; Opiela, J.; Kearsley, A. T.; Grime, G.; Webb, R.; Jeynes, C.; Palitsin, V.; Colaux, J.; 2014-01-01 The radiator shield on the Wide Field and Planetary Camera 2 (WFPC2) was subject to optical inspection following return from the Hubble Space Telescope (HST) in 2009. The survey revealed over 600 impact features of > 300 micrometers diameter, from exposure in space for 16 years. Subsequently, an international collaborative programme of analysis was organized to determine the origin of hypervelocity particles responsible for the damage. Here we describe examples of the numerous smaller micrometeoroid (MM) impact features (< 700 micrometers diameter) which excavated zinc orthotitanate (ZOT) paint from the radiator surface, but did not incorporate material from underlying Al alloy; larger impacts are described by [3]. We discuss recognition and interpretation of impactor remains, and MM compositions found on WFPC2. 11. Science.gov (United States) Miyazaki, Satoshi 2015-08-01 Hyper Suprime-Cam (HSC) is a new wide field optical imaging camera built for the 8.2 m Subaru telescope. The field of view is 1.5 degrees in diameter and the nearly 50 cm image circle was paved by 116 fully depleted CCDs (2k x 4k 15 micron square pixels). To realize seeing-limited imaging at Mauna Kea, the specification on the overall instrument PSF is set as 0.32 arc-second (FWHM). This is crucial for our primary scientific objectives: weak gravitational lensing survey to probe dark matter distribution. We started building the camera in 2006 and had first light in 2012. The delivered image quality turned out to be mostly seeing limited as designed. We once observed a seeing size of 0.43 arc-second (median value over the field of view) in Y-band with 300 seconds exposure. Our 300-night observing proposal has been accepted. The program started in March 2014 and continues over 5 years. The wide survey plans to cover 1,400 square degrees with the limiting magnitude of i_AB = 26 (5 sigma, 2 arcsec aperture). General observer programs are carried out in parallel. In this talk, we will present the design and the actual performance of the camera as well as how we implement the massive (1.6 GByte/exposure) data management system. 12. Wide-Field InfraRed Survey Telescope (WFIRST) Slitless Spectrometer: Design, Prototype, and Results Science.gov (United States) Gong, Qian; Content, David; Dominguez, Margaret; Emmett, Thomas; Griesmann, Ulf; Hagopian, John; Kruk, Jeffrey; Marx, Catherine; Pasquale, Bert; Wallace, Thomas; 2016-01-01 The slitless spectrometer plays an important role in the Wide-Field InfraRed Survey Telescope (WFIRST) mission for the survey of emission-line galaxies. This will be an unprecedented very wide field, HST quality 3D survey of emission line galaxies. The concept of the compound grism as a slitless spectrometer has been presented previously. 
The presentation briefly discusses the challenges and solutions of the optical design, and recent specification updates, as well as a brief comparison between the prototype and the latest design. However, the emphasis of this paper is the progress of the grism prototype: the fabrication and test of the complicated diffractive optical elements and powered prism, as well as grism assembly alignment and testing. In particular, we describe how different tools and methods, such as IR phase-shift and wavelength-shift interferometry, were used to complete the element and assembly tests. The paper also presents very encouraging results from recent element and assembly tests. Finally, we briefly touch on the plan to test the spectral characteristics, such as spectral resolution and response. 13. NARCIS (Netherlands) Dalton, Gavin; Trager, Scott C.; Abrams, Don Carlos; Carter, David; Bonifacio, Piercarlo; Aguerri, J. Alfonso L.; MacIntosh, Mike; Evans, Chris; Lewis, Ian; Navarro, Ramon; Agocs, Tibor; Dee, Kevin; Rousset, Sophie; Tosh, Ian; Middleton, Kevin; Pragt, Johannes; Terrett, David; Brock, Matthew; Benn, Chris; Verheijen, Marc; Cano Infantes, Diego; Bevil, Craige; Steele, Iain; Mottram, Chris; Bates, Stuart; Gribbin, Francis J.; Rey, Jürg; Rodriguez, Luis Fernando; Delgado, Jose Miguel; Guinouard, Isabelle; Walton, Nic; Irwin, Michael J.; Jagourel, Pascal; Stuik, Remko; Gerlofsma, Gerrit; Roelfsma, Ronald; Skillen, Ian; Ridings, Andy; Balcells, Marc; Daban, Jean-Baptiste; Gouvret, Carole; Venema, Lars; Girard, Paul We present the preliminary design of the WEAVE next generation spectroscopy facility for the William Herschel Telescope (WHT), principally targeting optical ground-based follow up of upcoming ground-based (LOFAR) and space-based (Gaia) surveys. WEAVE is a multi-object and multi-IFU facility utilizing 14. Hubble Space Telescope Wide Field Planetary Camera 2 Observations of Neptune Science.gov (United States) 1995-01-01 15. Measuring metallicities with Hubble space telescope/wide-field camera 3 photometry Energy Technology Data Exchange (ETDEWEB) Ross, Teresa L.; Holtzman, Jon A. [Department of Astronomy, New Mexico State University, P.O. Box 30001, MSC 4500, Las Cruces, NM 88003-8001 (United States); Anthony-Twarog, Barbara J.; Twarog, Bruce [Department of Physics and Astronomy, University of Kansas, Lawrence, KS 66045-7582 (United States); Bond, Howard E. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Saha, Abhijit [National Optical Astronomy Observatory, P.O. Box 26732, Tucson, AZ 85726 (United States); Walker, Alistair, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Cerro Tololo Inter-American Observatory (CTIO), National Optical Astronomy Observatory, Casilla 603, La Serena (Chile) 2014-01-01 We quantified and calibrated the metallicity and temperature sensitivities of colors derived from nine Wide-Field Camera 3 filters on board the Hubble Space Telescope using Dartmouth isochrones and Kurucz atmosphere models. The theoretical isochrone colors were tested and calibrated against observations of five well-studied Galactic clusters, M92, NGC 6752, NGC 104, NGC 5927, and NGC 6791, all of which have spectroscopically determined metallicities spanning –2.30 < [Fe/H] < +0.4. 
We found empirical corrections to the Dartmouth isochrone grid for each of the following color-magnitude diagrams (CMDs): (F555W-F814W, F814W), (F336W-F555W, F814W), (F390M-F555W, F814W), and (F390W-F555W, F814W). Using empirical corrections, we tested the accuracy and spread of the photometric metallicities assigned from CMDs and color-color diagrams (which are necessary to break the age-metallicity degeneracy). Testing three color-color diagrams [(F336W-F555W),(F390M-F555W),(F390W-F555W), versus (F555W-F814W)], we found the colors (F390M-F555W) and (F390W-F555W) to be the best suited to measure photometric metallicities. The color (F390W-F555W) requires much less integration time, but generally produces wider metallicity distributions and, at very low metallicity, the metallicity distribution function (MDF) from (F390W-F555W) is ∼60% wider than that from (F390M-F555W). Using the calibrated isochrones, we recovered the overall cluster metallicity to within ∼0.1 dex in [Fe/H] when using CMDs (i.e., when the distance, reddening, and ages are approximately known). The measured MDF from color-color diagrams shows that this method measures metallicities of stellar clusters of unknown age and metallicity with an accuracy of ∼0.2-0.5 dex using F336W-F555W, ∼0.15-0.25 dex using F390M-F555W, and ∼0.2-0.4 dex with F390W-F555W, with the larger uncertainty pertaining to the lowest metallicity range. 16. KOALA, a wide-field 1000 element integral-field unit for the Anglo-Australian Telescope: assembly and commissioning Science.gov (United States) Zhelem, Ross; Brzeski, Jurek; Case, Scott; Churilov, Vladimir; Ellis, Simon; Farrell, Tony; Green, Andrew; Heng, Anthony; Horton, Anthony; Ireland, Michael; Jones, Damien; Klauser, Urs; Lawrence, Jon; Miziarski, Stan; Orr, David; Pai, Naveen; Staszak, Nick; Tims, Julia; Vuong, Minh; Waller, Lew; Xavier, Pascal 2014-07-01 The KOALA optical fibre feed for the AAOmega spectrograph has been commissioned at the Anglo-Australian Telescope. The instrument samples the reimaged telescope focal plane at two scales: 1.23 arcsec and 0.70 arcsec per image slicing hexagonal lenslet over a 49x27 and 28x15 arcsec field of view respectively. The integral field unit consists of 2D hexagonal and circular lenslet arrays coupling light into 1000 fibres with 100 micron core diameter. The fibre run is over 35m long connecting the telescope Cassegrain focus with the bench mounted spectrograph room where all fibres are reformatted into a one-dimensional slit. Design and assembly of the KOALA components, engineering challenges encountered, and commissioning results are discussed. 17. Active optics and modified-Rumsey wide-field telescopes: MINITRUST demonstrators with vase- and tulip-form mirrors Science.gov (United States) Lemaître, Gérard R.; Montiel, Pierre; Joulié, Patrice; Dohlen, Kjetil; Lanzoni, Patrick 2005-12-01 Wide-field astronomy requires the development of larger aperture telescopes. The optical properties of a three-mirror modified-Rumsey design provide significant advantages when compared to other telescope designs: (i) at any wavelength, the design has a flat field and is anastigmatic; (ii) the system is extremely compact, i.e., it is almost four times shorter than a Schmidt. 
Compared to the equally compact flat-field Ritchey-Chrétien with a doublet-lens corrector, as developed for the Sloan digital sky survey - and which requires the polishing of six optical surfaces - the proposed modified-Rumsey design requires only a two-surface polishing and provides a better imaging quality. All the mirrors are spheroids of the hyperboloid type. Starting from the classical Rumsey design, it is shown that the use of all eight available free parameters allows the simultaneous aspherization of the primary and tertiary mirrors by active optics methods from a single deformable substrate. The continuity conditions between the primary and the tertiary hyperbolizations are achieved by an intermediate narrow ring of constant thickness that is not optically used. After the polishing of a double vase form in a spherical shape, the primary-tertiary hyperbolizations are achieved by in situ stressing. The tulip-form secondary is hyperbolized by stress polishing. Other active optics alternatives are possible for a space telescope. The modified-Rumsey design is of interest for developing large space- and ground-based survey telescopes in UV, visible, or IR ranges, such as currently demonstrated with the construction of identical telescopes MINITRUST-1 and -2, f/5 - 2° field of view. Double-pass optical tests show diffraction-limited images. 18. Addressing Thermal Model Run Time Concerns of the Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) Science.gov (United States) Peabody, Hume; Guerrero, Sergio; Hawk, John; Rodriguez, Juan; McDonald, Carson; Jackson, Cliff 2016-01-01 The Wide Field Infrared Survey Telescope using Astrophysics Focused Telescope Assets (WFIRST-AFTA) utilizes an existing 2.4 m diameter Hubble sized telescope donated from elsewhere in the federal government for near-infrared sky surveys and Exoplanet searches to answer crucial questions about the universe and dark energy. The WFIRST design continues to increase in maturity, detail, and complexity with each design cycle leading to a Mission Concept Review and entrance to the Mission Formulation Phase. Each cycle has required a Structural-Thermal-Optical-Performance (STOP) analysis to ensure the design can meet the stringent pointing and stability requirements. As such, the models have also grown in size and complexity leading to increased model run time. This paper addresses efforts to reduce the run time while still maintaining sufficient accuracy for STOP analyses. A technique was developed to identify slews between observing orientations that were sufficiently different to warrant recalculation of the environmental fluxes to reduce the total number of radiation calculation points. The inclusion of a cryocooler fluid loop in the model also forced smaller time-steps than desired, which greatly increases the overall run time. The analysis of this fluid model required mitigation to drive the run time down by solving portions of the model at different time scales. Lastly, investigations were made into the impact of the removal of small radiation couplings on run time and accuracy. Use of these techniques allowed the models to produce meaningful results within reasonable run times to meet project schedule deadlines. 19. Impacts on the Hubble Space Telescope Wide Field and Planetary Camera 2: Experimental Simulation of Micrometeoroid Capture Science.gov (United States) Price, M. C.; Kearsley, A. T.; Wozniakiewicz, P. J.; Spratt, J.; Burchell, M. J.; Cole, M. 
J.; Anz-Meador, P.; Liou, J. C.; Ross, D. K.; Opiela, J.; 2014-01-01 Hypervelocity impact features have been recognized on painted surfaces returned from the Hubble Space Telescope (HST). Here we describe experiments that help us to understand their creation, and the preservation of micrometeoroid (MM) remnants. We simulated capture of silicate and sulfide minerals on the zinc orthotitanate (ZOT) paint and Al alloy plate of the Wide Field and Planetary Camera 2 (WFPC2) radiator, which was returned from HST after 16 years in low Earth orbit (LEO). Our results also allow us to validate analytical methods for identification of MM (and orbital debris) impacts in LEO. 20. Science.gov (United States) Kent, Stephen M. 2018-04-01 If the optical system of a telescope is perturbed from rotational symmetry, the Zernike wavefront aberration coefficients describing that system can be expressed as a function of position in the focal plane using spin-weighted Zernike polynomials. Methodologies are presented to derive these polynomials to arbitrary order. This methodology is applied to aberration patterns produced by a misaligned Ritchey–Chrétien telescope and to distortion patterns at the focal plane of the DESI optical corrector, where it is shown to provide a more efficient description of distortion than conventional expansions. 1. Hubble Space Telescope ACS wide-field photometry of the sombrero galaxy globular cluster system NARCIS (Netherlands) Spitler, L.; Larsen, S.S.; Strader, J.; Brodie, J.P.; Forbes, D.A.; Beasley, M.A. 2006-01-01 A detailed imaging analysis of the globular cluster (GC) system of the Sombrero galaxy (NGC 4594) has been accomplished using a six-image mosaic from the Hubble Space Telescope Advanced Camera for Surveys. The quality of the data is such that contamination by foreground stars and background galaxies 2. Wide-Field InfraRed Survey Telescope (WFIRST) Mission and Synergies with LISA and LIGO-Virgo Science.gov (United States) Gehrels, N.; Spergel, D. 2015-01-01 The Wide-Field InfraRed Survey Telescope (WFIRST) is a NASA space mission under study for launch in 2024. It has a 2.4 m telescope, a wide-field IR instrument operating in the 0.7 - 2.0 micron range and an exoplanet imaging coronagraph instrument operating in the 400 - 1000 nm range. The observatory will perform galaxy surveys over thousands of square degrees to J=27 AB for dark energy weak lensing and baryon acoustic oscillation measurements and will monitor a few square degrees for dark energy SN Ia studies. It will perform microlensing observations of the galactic bulge for an exoplanet census and direct imaging observations of nearby exoplanets with a pathfinder coronagraph. The mission will have a robust and well-funded guest observer program for 25% of the observing time. WFIRST will be a powerful tool for time domain astronomy and for coordinated observations with gravitational wave experiments. Gravitational wave events produced by mergers of nearby binary neutron stars (LIGO-Virgo) or extragalactic supermassive black hole binaries (LISA) will produce electromagnetic radiation that WFIRST can observe. 3. Micrometeoroid Impacts on the Hubble Space Telescope Wide Field and Planetary Camera 2: Ion Beam Analysis of Subtle Impactor Traces Science.gov (United States) Grime, G. W.; Webb, R. P.; Jeynes, C.; Palitsin, V. V.; Colaux, J. L.; Kearsley, A. T.; Ross, D. K.; Anz-Meador, P.; Liou, J. 
C.; Opiela, J.; 2014-01-01 Recognition of origin for particles responsible for impact damage on spacecraft such as the Hubble Space Telescope (HST) relies upon postflight analysis of returned materials. A unique opportunity arose in 2009 with collection of the Wide Field and Planetary Camera 2 (WFPC2) from HST by shuttle mission STS-125. A preliminary optical survey confirmed that there were hundreds of impact features on the radiator surface. Following extensive discussion between NASA, ESA, NHM and IBC, a collaborative research program was initiated, employing scanning electron microscopy (SEM) and ion beam analysis (IBA) to determine the nature of the impacting grains. Even though some WFPC2 impact features are large, and easily seen without the use of a microscope, impactor remnants may be hard to find. 4. Science.gov (United States) Kearsley, A. T.; Ross, D. K.; Anz-Meador, P.; Liou, J. C.; Opiela, J.; Grime, G. W.; Webb, R. P.; Jeynes, C.; Palitsin, V. V.; Colaux, J. L.; 2014-01-01 Postflight surveys of the Wide Field and Planetary Camera 2 (WFPC2) on the Hubble Space Telescope have located hundreds of features on the 2.2 by 0.8 m curved plate, evidence of hypervelocity impact by small particles during 16 years of exposure to space in low Earth orbit (LEO). The radiator has a 100 - 200 micron surface layer of white paint, overlying 4 mm thick Al alloy, which was not fully penetrated by any impact. Over 460 WFPC2 samples were extracted by coring at JSC. About half were sent to NHM in a collaborative program with NASA, ESA and IBC. The structural and compositional heterogeneity at micrometer scale required microanalysis by electron and ion beam microscopes to determine the nature of the impactors (artificial orbital debris, or natural micrometeoroids, MM). Examples of MM impacts are described elsewhere. Here we describe the development of novel electron beam analysis protocols, required to recognize the subtle traces of MM residues. 5. Science.gov (United States) Barnsley, R. M.; Steele, Iain A.; Smith, R. J.; Mawson, Neil R. 2014-07-01 The Small Telescopes Installed at the Liverpool Telescope (STILT) project has been in operation since March 2009, collecting data with three wide field unfiltered cameras: SkycamA, SkycamT and SkycamZ. To process the data, a pipeline was developed to automate source extraction, catalogue cross-matching, photometric calibration and database storage. In this paper, modifications and further developments to this pipeline will be discussed, including a complete refactor of the pipeline's codebase into Python, migration of the back-end database technology from MySQL to PostgreSQL, and changing the catalogue used for source cross-matching from USNO-B1 to APASS. In addition to this, details will be given relating to the development of a preliminary front-end to the source extracted database which will allow a user to perform common queries such as cone searches and light curve comparisons of catalogue and non-catalogue matched objects. Some next steps and future ideas for the project will also be presented. 6. 
Wide-Field Hubble Space Telescope Observations of the Globular Cluster System in NGC 1399* Science.gov (United States) Puzia, Thomas H.; Paolillo, Maurizio; Goudfrooij, Paul; Maccarone, Thomas J.; Fabbiano, Giuseppina; Angelini, Lorella 2014-01-01 We present a comprehensive high spatial resolution imaging study of globular clusters (GCs) in NGC 1399, the central giant elliptical cD galaxy in the Fornax galaxy cluster, conducted with the Advanced Camera for Surveys (ACS) aboard the Hubble Space Telescope (HST). Using a novel technique to construct drizzled point-spread function libraries for HST/ACS data, we accurately determine the fidelity of GC structural parameter measurements from detailed artificial star cluster experiments and show the superior robustness of the GC half-light radius, rh, compared with other GC structural parameters, such as King core and tidal radius. The measurement of rh for the major fraction of the NGC 1399 GC system reveals a trend of increasing rh versus galactocentric distance, Rgal, out to about 10 kpc and a flat relation beyond. This trend is very similar for blue and red GCs, which are found to have a mean size ratio of rh,red/rh,blue ≈ 0.82 ± 0.11 at all galactocentric radii from the core regions of the galaxy out to ~40 kpc. This suggests that the size difference between blue and red GCs is due to internal mechanisms related to the evolution of their constituent stellar populations. Modeling the mass density profile of NGC 1399 shows that additional external dynamical mechanisms are required to limit the GC size in the galaxy halo regions to rh ≈ 2 pc. We suggest that this may be realized by an exotic GC orbit distribution function, an extended dark matter halo, and/or tidal stress induced by the increased stochasticity in the dwarf halo substructure at larger galactocentric distances. We compare our results with the GC rh distribution functions in various galaxies and find that the fraction of extended GCs with rh ≥ 5 pc is systematically larger in late-type galaxies compared with GC systems in early-type galaxies. This is likely due to the dynamically more violent evolution of early-type galaxies. We match our GC rh measurements with radial velocity data from the literature and split the resulting sample at 7. KOALA: a wide-field 1000 element integral-field unit for the Anglo-Australian Telescope Science.gov (United States) Ellis, S. C.; Ireland, M.; Lawrence, J. S.; Tims, J.; Staszak, N.; Brzeski, J.; Parker, Q. A.; Sharp, R.; Bland-Hawthorn, J.; Case, S.; Colless, M.; Croom, S.; Couch, W.; De Marco, O.; Glazebrook, K.; Saunders, W.; Webster, R.; Zucker, D. B. 2012-09-01 KOALA, the Kilofibre Optimised Astronomical Lenslet Array, is a wide-field, high efficiency integral field unit being designed for use with the bench mounted AAOmega spectrograph on the AAT. KOALA will have 1000 fibres in a rectangular array with a selectable field of view of either 1390 or 430 sq. arcseconds with a spatial sampling of 1.25" or 0.7" respectively. To achieve this KOALA will use a telecentric double lenslet array with interchangeable fore-optics. The IFU will feed AAOmega via a 31m fibre run. The efficiency of KOALA is expected to be ≈ 52% at 3700 Å and ≈ 66% at 6563 Å with a throughput of > 52% over the entire wavelength range. 8. 
GRT-WF (Goddard Robotic Telescope Wide Field) Observations on Sprites to Study Correlations Between Sprites and TGFs Science.gov (United States) Watanabe, Ken; Hegley, Jakob; Vydra, Ekaterina; Sakamoto, Takanori; Okajima, Takashi; Gehrels, Neil 2015-08-01 It is believed that accelerated electrons are responsible for both Sprites and terrestrial gamma-ray flashes (TGFs). Although several theoretical explanations have been made, we still do not fully understand how TGFs are generated. Therefore, we search for correlations between Sprites and TGFs. We constructed a wide field optical camera system (GRT-WF) using off-the-shelf hardware in June 2011 at Florida Gulf Coast University (FGCU), Fort Myers, Florida, where high thunderstorm activity is observed during summer. Seven cameras have been set to point along azimuth directions to cover most of the visible sky. The field of view of each camera is ~40 x 60 deg. The events are captured automatically by off-the-shelf software. We have observed around five hundred Sprites in the past four years. We have compared these Sprites in time and location with the TGFs detected by the Fermi Gamma-ray Space Telescope LAT, as well as with other instruments. We discuss the preliminary results of our study. 9. Continuation of Search for Correlations Between Sprites and TGFs by Goddard Robotic Telescope Wide Field (GRT-WF) Science.gov (United States) Hegley, J. C.; Watanabe, K.; Sakamoto, T.; Schlitz, J. R.; Vydra, E.; Okajima, T.; Gehrels, N. 2015-12-01 It is believed that accelerated electrons are responsible for both Sprites and terrestrial gamma-ray flashes (TGFs). Although several theoretical explanations have been made, we still do not fully understand how TGFs are generated. Therefore, we search for correlations between Sprites and TGFs. We constructed a wide field optical camera system (GRT-WF) using off-the-shelf hardware in June 2011 at Florida Gulf Coast University (FGCU), Fort Myers, Florida, where high thunderstorm activity is observed during summer. Seven cameras have been set to point along azimuth directions to cover most of the visible sky. The field of view of each camera is ~40 x 60 deg. The events are captured automatically by off-the-shelf software. We have observed over five hundred Sprites in the past four years. We search for the temporal and spatial coincidence of these Sprites with the TGFs detected by the Fermi Gamma-ray Space Telescope and RHESSI. We discuss the preliminary results of our analysis, including new data detected since the last AGU Fall Meeting. 10. Active optics and the axisymmetric case: MINITRUST wide-field three-reflection telescopes with mirrors aspherized from tulip and vase forms Science.gov (United States) Lemaitre, Gerard R.; Montiel, Pierre; Joulie, Patrice; Dohlen, Kjetil; Lanzoni, Patrick 2004-09-01 Wide-field astronomy requires larger telescopes. Compared to the catadioptric Schmidt, the optical properties of a three mirror telescope provide significant advantages. (1) The flat field design is anastigmatic at any wavelength, (2) the system is extremely compact -- four times shorter than a Schmidt -- and, (3) compared to a Schmidt with refractive corrector -- requiring the polishing of three optical surfaces --, the presently proposed Modified-Rumsey design uses all eight available free parameters of a flat fielded anastigmatic three mirror telescope for mirrors generated by active optics methods.
Compared to a Rumsey design, these parameters include the additional slope continuity condition at the primary-tertiary link for in-situ stressing and aspherization from a common sphere. Then, active optics allows the polishing of only two spherical surfaces: the combined primary-tertiary mirror and the secondary mirror. All mirrors are spheroids of the hyperboloid type. This compact system is of interest for space and ground-based astronomy and allows to built larger wide-field telescopes such as demonstrated by the design and construction of identical telescopes MINITRUST-1 and -2, f/5 - 2° FOV, consisting of an in-situ stressed double vase form primary-tertiary and of a stress polished tulip form secondary. Optical tests of these telescopes, showing diffraction limited images, are presented. 11. Argus+: The Future of Wide-Field, Spectral-Line Imaging at 3-mm with the Green Bank Telescope Science.gov (United States) Maddalena, Ronald; Frayer, David; Lockman, Felix; O'Neil, Karen; White, Steven; Argus+ Collaboration 2018-01-01 The Robert C Byrd Green Bank Telescope has met its design goal of providing high-quality observations at 115 GHz. Observers also have access to the new, 16-pixel, 3-mm Argus receiver, which is providing high-dynamic range images over wide fields for the multitude of spectral lines between 85 and 115 GHz, including CO, 13CO, C18O, SiO, HCN, HCO+, HNC, N2H+, and CS. The small number of pixels in Argus limits its ability to map many of the most interesting objects whose extent exceeds many arc-minutes. The successful performance of Argus, and its modular design, demonstrates that receivers with many more pixels could be built for the GBT. A 12 x 12 array of the Argus design would have mapping speeds about nine times faster than Argus without suffering any degradation in performance for the outer pixels in the array. We present our plans to build the next-generation Argus instrument (Argus+) with 144-pixels, a footprint 5’x5’, and 7" resolution at 110 GHz. The project will be a collaboration between the Green Bank Observatory and university groups, who will supply key components. The key science drivers for Argus+ are studies of molecular filaments in the Milky Way, studies of molecular clouds in nearby galaxies, and the observations of rapidly evolving solar system objects. 12. The HST/WFC3 Quicklook Project: A User Interface to Hubble Space Telescope Wide Field Camera 3 Data Science.gov (United States) Bourque, Matthew; Bajaj, Varun; Bowers, Ariel; Dulude, Michael; Durbin, Meredith; Gosmeyer, Catherine; Gunning, Heather; Khandrika, Harish; Martlin, Catherine; Sunnquist, Ben; Viana, Alex 2017-06-01 The Hubble Space Telescope's Wide Field Camera 3 (WFC3) instrument, comprised of two detectors, UVIS (Ultraviolet-Visible) and IR (Infrared), has been acquiring ~ 50-100 images daily since its installation in 2009. The WFC3 Quicklook project provides a means for instrument analysts to store, calibrate, monitor, and interact with these data through the various Quicklook systems: (1) a ~ 175 TB filesystem, which stores the entire WFC3 archive on disk, (2) a MySQL database, which stores image header data, (3) a Python-based automation platform, which currently executes 22 unique calibration/monitoring scripts, (4) a Python-based code library, which provides system functionality such as logging, downloading tools, database connection objects, and filesystem management, and (5) a Python/Flask-based web interface to the Quicklook system. 
The Quicklook project has enabled large-scale WFC3 analyses and calibrations, such as the monitoring of the health and stability of the WFC3 instrument, the measurement of ~ 20 million WFC3/UVIS Point Spread Functions (PSFs), the creation of WFC3/IR persistence calibration products, and many others. 13. Mathematical Formalism for Designing Wide-Field X-Ray Telescopes: Mirror Nodal Positions and Detector Tilts Science.gov (United States) Elsner, R. F.; O'Dell, S. L.; Ramsey, B. D.; Weisskopf, M. C. 2011-01-01 We provide a mathematical formalism for optimizing the mirror nodal positions along the optical axis and the tilt of a commonly employed detector configuration at the focus of an X-ray telescope consisting of nested mirror shells with known mirror surface prescriptions. We adopt the spatial resolution averaged over the field-of-view as the figure of merit M. A more complete description appears in our paper in these proceedings. 14. Atmospheric Characterization of Five Hot Jupiters with the Wide Field Camera 3 on the Hubble Space Telescope Science.gov (United States) Ranjan, Sukrit; Charbonneau, David; Desert, Jean-Michel; Madhusudhan, Nikku; Deming, Drake; Wilkins, Ashlee; Mandell, Avi M. 2014-01-01 We probe the structure and composition of the atmospheres of five hot Jupiter exoplanets using the Hubble Space Telescope Wide Field Camera 3 (WFC3) instrument. We use the G141 grism (1.1-1.7 micrometers) to study TrES-2b, TrES-4b, and CoRoT-1b in transit; TrES-3b in secondary eclipse; and WASP-4b in both. This wavelength region includes a predicted absorption feature from water at 1.4 micrometers, which we expect to be nondegenerate with the other molecules that are likely to be abundant for hydrocarbon-poor (e.g., solar composition) hot Jupiter atmospheres. We divide our wavelength regions into 10 bins. For each bin we produce a spectrophotometric light curve spanning the time of transit or eclipse. We correct these light curves for instrumental systematics without reference to an instrument model. For our transmission spectra, our mean 1σ precision per bin corresponds to variations of 2.1, 2.8, and 3.0 atmospheric scale heights for TrES-2b, TrES-4b, and CoRoT-1b, respectively. We find featureless spectra for these three planets. We are unable to extract a robust transmission spectrum for WASP-4b. For our dayside emission spectra, our mean 1σ precision per bin corresponds to a planet-to-star flux ratio of 1.5 × 10^-4 and 2.1 × 10^-4 for WASP-4b and TrES-3b, respectively. We combine these estimates with previous broadband measurements and conclude that for both planets isothermal atmospheres are disfavored. We find no signs of features due to water. We confirm that WFC3 is suitable for studies of transiting exoplanets, but in staring mode, multi-visit campaigns are necessary to place strong constraints on water abundance. 15. Detector Control and Data Acquisition for the Wide-Field Infrared Survey Telescope (WFIRST) with a Custom ASIC Science.gov (United States) Smith, Brian S.; Loose, Markus; Alkire, Greg; Joshi, Atul; Kelly, Daniel; Siskind, Eric; Rossetti, Dino; Mah, Jonathan; Cheng, Edward; Miko, Laddawan; 2016-01-01 The Wide-Field Infrared Survey Telescope (WFIRST) will have the largest near-IR focal plane ever flown by NASA, a total of 18 4K x 4K devices. 
The project has adopted a system-level approach to detector control and data acquisition where 1) control and processing intelligence is pushed into components closer to the detector to maximize signal integrity, 2) functions are performed at the highest allowable temperatures, and 3) the electronics are designed to ensure that the intrinsic detector noise is the limiting factor for system performance. For WFIRST, the detector arrays operate at 90 to 100 K, the detector control and data acquisition functions are performed by a custom ASIC at 150 to 180 K, and the main data processing electronics are at the ambient temperature of the spacecraft, notionally approximately 300 K. The new ASIC is the main interface between the cryogenic detectors and the warm instrument electronics. Its single-chip design provides basic clocking for most types of hybrid detectors with CMOS ROICs. It includes a flexible but simple-to-program sequencer, with the option of microprocessor control for more elaborate readout schemes that may be data-dependent. All analog biases, digital clocks, and analog-to-digital conversion functions are incorporated and are connected to the nearby detectors with a short cable that can provide thermal isolation. The interface to the warm electronics is simple and robust through multiple LVDS channels. It also includes features that support parallel operation of multiple ASICs to control detectors that may have more capability or requirements than can be supported by a single chip. 16. Science.gov (United States) Hagopian, John; Armani, Nerses; Bartusek, Lisa; Casey, Tom; Content, Dave; Conturie, Yves; Gao, Guangjun; Jurling, Alden; Marx, Cathy; Marzouk, Joe; Pasquale, Bert; Smith, J. Scott; Tang, Hong; Whipple, Arthur 2017-08-01 The Wide-Field Infrared Survey Telescope (WFIRST) mission[1] is the top-ranked large space mission in the New Worlds, New Horizons (NWNH) Decadal Survey of Astronomy and Astrophysics. WFIRST will settle essential questions in both exoplanet and dark energy research and will advance topics ranging from galaxy evolution to the study of objects within the galaxy. The WFIRST mission uses a repurposed 2.4-m Forward Optical Telescope assembly (FOA), which, when completed with new aft optics, will be an Integrated Optical Assembly (IOA). WFIRST is equipped with a Wide Field Instrument (WFI) and a Coronagraph Instrument (CGI). An Instrument Carrier (IC) meters these payload elements together and to the spacecraft bus (S/C). A distributed ground system receives the data, uploads commands and software updates, and processes the data. After transition from the study phase, Pre-Phase-A (a.k.a., "Cycle 6") design to NASA Phase A formulation, a significant change to the IOA was initiated, including moving the tertiary mirror from the instrument package to a unified three-mirror anastigmat (TMA) placement that provides a wide 0.28-square-degree instrumented field of view to the Wide Field Instrument (WFI). In addition, separate relays from the primary and secondary mirror feed the Wide Field Instrument (WFI) and Coronagraph Instrument (CGI). During commissioning, the telescope is aligned using wavefront sensing with the WFI[2]. A parametric and Monte-Carlo analysis was performed, which determined that alignment compensation with the secondary mirror alone degraded performance in the other instruments. This led to the addition of a second compensator in the WFI optical train to alleviate this concern. 
17. A PANCHROMATIC CATALOG OF EARLY-TYPE GALAXIES AT INTERMEDIATE REDSHIFT IN THE HUBBLE SPACE TELESCOPE WIDE FIELD CAMERA 3 EARLY RELEASE SCIENCE FIELD International Nuclear Information System (INIS) Rutkowski, M. J.; Cohen, S. H.; Windhorst, R. A.; Kaviraj, S.; Crockett, R. M.; Silk, J.; O'Connell, R. W.; Hathi, N. P.; McCarthy, P. J.; Ryan, R. E. Jr.; Koekemoer, A.; Bond, H. E.; Yan, H.; Kimble, R. A.; Balick, B.; Calzetti, D.; Disney, M. J.; Dopita, M. A.; Frogel, J. A.; Hall, D. N. B. 2012-01-01 In the first of a series of forthcoming publications, we present a panchromatic catalog of 102 visually selected early-type galaxies (ETGs) from observations in the Early Release Science (ERS) program with the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) of the Great Observatories Origins Deep Survey-South (GOODS-S) field. Our ETGs span a large redshift range, 0.35 ≲ z ≲ 1.5, with each redshift spectroscopically confirmed by previously published surveys of the ERS field. By fitting stellar population spectral templates to the broadband photometry of the ETGs, we determine that the average masses of the ETGs are comparable to the characteristic stellar mass of massive galaxies, 10^11 < M_*[M_☉] < 10^12. By transforming the observed photometry into the Galaxy Evolution Explorer FUV and NUV, Johnson V, and Sloan Digital Sky Survey g' and r' bandpasses we identify a noteworthy diversity in the rest-frame UV-optical colors and find the mean rest-frame (FUV–V) = 3.5 and (NUV–V) = 3.3, with 1σ standard deviations ≈ 1.0. The blue rest-frame UV-optical colors observed for most of the ETGs are evidence for star formation during the preceding gigayear, but no systems exhibit UV-optical photometry consistent with major recent (≲50 Myr) starbursts. Future publications which address the diversity of stellar populations likely to be present in these ETGs, and the potential mechanisms by which recent star formation episodes are activated, are discussed.
18. Project overview and update on WEAVE: the next generation wide-field spectroscopy facility for the William Herschel Telescope NARCIS (Netherlands) Dalton, Gavin; Trager, Scott; Abrams, Don Carlos; Bonifacio, Piercarlo; López Aguerri, J. Alfonso; Middleton, Kevin; Benn, Chris; Dee, Kevin; Sayède, Frédéric; Lewis, Ian; Pragt, Johan; Pico, Sergio; Walton, Nic; Rey, Juerg; Allende Prieto, Carlos; Peñate, José; Lhome, Emilie; Agócs, Tibor; Alonso, José; Terrett, David; Brock, Matthew; Gilbert, James; Ridings, Andy; Guinouard, Isabelle; Verheijen, Marc A.W.; Tosh, Ian; Rogers, Kevin; Steele, Iain; Stuik, Remko; Tromp, Niels; Jasko, Attila; Kragt, Jan; Lesman, Dirk; Mottram, Chris; Bates, Stuart; Gribbin, Frank; Rodriguez, Luis Fernando; Delgado, José M.; Martin, Carlos; Cano, Diego; Navarro, Ramón; Irwin, Mike; Lewis, Jim; Gonzalez Solares, Eduardo; O'Mahony, Neil; Bianco, Andrea; Zurita, Christina; ter Horst, Rik; Molinari, Emilio; Lodi, Marcello; Guerra, José; Vallenari, Antonella; Baruffolo, Andrea We present an overview of and status report on the WEAVE next-generation spectroscopy facility for the William Herschel Telescope (WHT). WEAVE principally targets optical ground-based follow-up of upcoming ground-based (LOFAR) and space-based (Gaia) surveys. WEAVE is a multi-object and multi-IFU
19. Final design and progress of WEAVE: the next generation wide-field spectroscopy facility for the William Herschel Telescope NARCIS (Netherlands) Dalton, Gavin; Trager, Scott; Abrams, Don Carlos; Bonifacio, Piercarlo; Aguerri, J.
Alfonso L.; Middleton, Kevin; Benn, Chris; Dee, Kevin; Sayède, Frédéric; Lewis, Ian; Pragt, Johannes; Pico, Sergio; Walton, Nic; Rey, Juerg; Allende Prieto, Carlos; Peñate, José; Lhome, Emilie; Agócs, Tibor; Alonso, José; Terrett, David; Brock, Matthew; Gilbert, James; Schallig, Ellen; Ridings, Andy; Guinouard, Isabelle; Verheijen, Marc; Tosh, Ian; Rogers, Kevin; Lee, Martin; Steele, Iain; Stuik, Remko; Tromp, Niels; Jaskó, Attila; Carrasco, Esperanza; Farcas, Szigfrid; Kragt, Jan; Lesman, Dirk; Kroes, Gabby; Mottram, Chris; Bates, Stuart; Rodriguez, Luis Fernando; Gribbin, Frank; Delgado, José Miguel; Herreros, José Miguel; Martin, Carlos; Cano, Diego; Navarro, Ramon; Irwin, Mike; Lewis, Jim; Gonzalez Solares, Eduardo; Murphy, David; Worley, Clare; Bassom, Richard; O'Mahoney, Neil; Bianco, Andrea; Zurita, Christina; ter Horst, Rik; Molinari, Emilio; Lodi, Marcello; Guerra, José; Martin, Adrian; Vallenari, Antonella; Salasnich, Bernardo; Baruffolo, Andrea; Jin, Shoko; Hill, Vanessa; Smith, Dan; Drew, Janet; Poggianti, Bianca; Pieri, Mat; Dominquez Palmero, Lillian; Farina, Cecilia 2016-01-01 We present the Final Design of the WEAVE next-generation spectroscopy facility for the William Herschel Telescope (WHT), together with a status update on the details of manufacturing, integration and the overall project schedule now that all the major fabrication contracts are in place. We also
20. The Infrared Eye of the Wide-Field Camera 3 on the Hubble Space Telescope Reveals Multiple Main Sequences of Very Low Mass Stars in NGC 2808 Science.gov (United States) Milone, A. P.; Marino, A. F.; Cassisi, S.; Piotto, G.; Bedin, L. R.; Anderson, J.; Allard, F.; Aparicio, A.; Bellini, A.; Buonanno, R.; Monelli, M.; Pietrinferni, A. 2012-08-01 We use images taken with the infrared channel of the Wide Field Camera 3 on the Hubble Space Telescope to study the multiple main sequences (MSs) of NGC 2808. Below the turnoff, the red, the middle, and the blue MS, previously detected from visual-band photometry, are visible over an interval of about 3.5 F160W magnitudes. The three MSs merge together at the level of the MS bend. At fainter magnitudes, the MS again splits into two components containing ~65% and ~35% of stars, with the most-populated MS being the bluest one. Theoretical isochrones suggest that the latter is connected to the red MS discovered in the optical color-magnitude diagram (CMD) and hence corresponds to the first stellar generation, having primordial helium and enhanced carbon and oxygen abundances. The less-populated MS in the faint part of the near-IR CMD is helium-rich and poor in carbon and oxygen, and it can be associated with the middle and the blue MS of the optical CMD. The finding that the photometric signature of abundance anti-correlation is also present in fully convective MS stars reinforces the inference that they have a primordial origin. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS 5-26555.
1. A PANCHROMATIC CATALOG OF EARLY-TYPE GALAXIES AT INTERMEDIATE REDSHIFT IN THE HUBBLE SPACE TELESCOPE WIDE FIELD CAMERA 3 EARLY RELEASE SCIENCE FIELD Energy Technology Data Exchange (ETDEWEB) Rutkowski, M. J.; Cohen, S. H.; Windhorst, R. A. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287-1404 (United States); Kaviraj, S.; Crockett, R. M.; Silk, J.
[Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH (United Kingdom); O'Connell, R. W. [Department of Astronomy, University of Virginia, P.O. Box 3818, Charlottesville, VA 22903 (United States); Hathi, N. P.; McCarthy, P. J. [Observatories of the Carnegie Institute of Washington, Pasadena, CA 91101 (United States); Ryan, R. E. Jr.; Koekemoer, A.; Bond, H. E. [Space Telescope Science Institute, Baltimore, MD 21218 (United States); Yan, H. [Center for Cosmology and Astroparticle Physics, Ohio State University, Columbus, OH 43210 (United States); Kimble, R. A. [NASA-Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Balick, B. [Department of Astronomy, University of Washington, Seattle, WA 98195-1580 (United States); Calzetti, D. [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States); Disney, M. J. [School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Dopita, M. A. [Research School of Physics and Astronomy, The Australian National University, ACT 2611 (Australia); Frogel, J. A. [Astronomy Department, King Abdulaziz University, P.O. Box 80203, Jeddah (Saudi Arabia); Hall, D. N. B. [Institute for Astronomy, University of Hawaii, Honolulu, HI 96822 (United States); and others 2012-03-01 In the first of a series of forthcoming publications, we present a panchromatic catalog of 102 visually selected early-type galaxies (ETGs) from observations in the Early Release Science (ERS) program with the Wide Field Camera 3 (WFC3) on the Hubble Space Telescope (HST) of the Great Observatories Origins Deep Survey-South (GOODS-S) field. Our ETGs span a large redshift range, 0.35 ≲ z ≲ 1.5, with each redshift spectroscopically confirmed by previously published surveys of the ERS field. We combine our measured WFC3 ERS and Advanced Camera for Surveys (ACS) GOODS-S photometry to gain continuous sensitivity from the rest-frame far-UV to near-IR emission for each ETG. The superior spatial resolution of the HST over this panchromatic baseline allows us to classify the ETGs by their small-scale internal structures, as well as their local environment. By fitting stellar population spectral templates to the broadband photometry of the ETGs, we determine that the average masses of the ETGs are comparable to the characteristic stellar mass of massive galaxies, 10^11 < M_*[M_☉] < 10^12. By transforming the observed photometry into the Galaxy Evolution Explorer FUV and NUV, Johnson V, and Sloan Digital Sky Survey g' and r' bandpasses we identify a noteworthy diversity in the rest-frame UV-optical colors and find the mean rest-frame (FUV–V) = 3.5 and (NUV–V) = 3.3, with 1σ standard deviations ≈ 1.0. The blue rest-frame UV-optical colors observed for most of the ETGs are evidence for star formation during the preceding gigayear, but no systems exhibit UV-optical photometry consistent with major recent (≲50 Myr) starbursts. Future publications which address the diversity of stellar populations likely to be present in these ETGs, and the potential mechanisms by which recent star formation episodes are activated, are discussed.
2. The Star Formation Histories of Local Group Dwarf Galaxies. I. Hubble Space Telescope/Wide Field Planetary Camera 2 Observations Science.gov (United States) Weisz, Daniel R.; Dolphin, Andrew E.; Skillman, Evan D.; Holtzman, Jon; Gilbert, Karoline M.; Dalcanton, Julianne J.; Williams, Benjamin F.
2014-07-01 We present uniformly measured star formation histories (SFHs) of 40 Local Group (LG) dwarf galaxies based on color-magnitude diagram (CMD) analysis from archival Hubble Space Telescope imaging. We demonstrate that accurate SFHs can be recovered from CMDs that do not reach the oldest main sequence turn-off (MSTO), but emphasize that the oldest MSTO is critical for precisely constraining the earliest epochs of star formation. We find that: (1) the average lifetime SFHs of dwarf spheroidals (dSphs) can be approximated by an exponentially declining SFH with τ ~ 5 Gyr; (2) lower luminosity dSphs are less likely to have extended SFHs than more luminous dSphs; (3) the average SFHs of dwarf irregulars (dIrrs), transition dwarfs, and dwarf ellipticals can be approximated by the combination of an exponentially declining SFH (τ ~ 3-4 Gyr) for lookback ages >10-12 Gyr ago and a constant SFH thereafter; (4) the observed fraction of stellar mass formed prior to z = 2 ranges considerably (80% for galaxies with M < 10^5 M_☉ to 30% for galaxies with M > 10^7 M_☉) and is largely explained by environment; (5) the distinction between "ultra-faint" and "classical" dSphs is arbitrary; (6) LG dIrrs formed a significantly higher fraction of stellar mass prior to z = 2 than the Sloan Digital Sky Survey galaxies from Leitner and the SFHs from the abundance matching models of Behroozi et al. This may indicate higher than expected star formation efficiencies at early times in low mass galaxies. Finally, we provide all the SFHs in tabulated electronic format for use by the community. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
3. SUPERNOVA REMNANTS AND THE INTERSTELLAR MEDIUM OF M83: IMAGING AND PHOTOMETRY WITH THE WIDE FIELD CAMERA 3 ON THE HUBBLE SPACE TELESCOPE International Nuclear Information System (INIS) Dopita, Michael A.; Blair, William P.; Kuntz, Kip D.; Long, Knox S.; Mutchler, Max; Whitmore, Bradley C.; Bond, Howard E.; MacKenty, John; Balick, Bruce; Calzetti, Daniela; Carollo, Marcella; Disney, Michael; Frogel, Jay A.; O'Connell, Robert; Hall, Donald; Holtzman, Jon A.; Kimble, Randy A.; McCarthy, Patrick; Paresce, Francesco; Saha, Abhijit 2010-01-01 We present Wide Field Camera 3 images taken with the Hubble Space Telescope within a single field in the southern grand design star-forming galaxy M83. Based on their size, morphology, and photometry in continuum-subtracted Hα, [S II], Hβ, [O III], and [O II] filters, we have identified 60 supernova remnant (SNR) candidates, as well as a handful of young ejecta-dominated candidates. A catalog of these remnants, their sizes and, where possible, their Hα fluxes are given. Radiative ages and pre-shock densities are derived from those SNRs that have good photometry. The ages lie in the range log(τ_rad/yr) ≳ 2.62, the pre-shock densities n_0 are given in cm^-3, and we derive a lower mass limit of M_min = 16 +7/-5 M_☉. Finally, we give evidence for the likely detection of the remnant of the historical supernova, SN1968L.
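A quick numerical check of point (1) in the star-formation-history entry (2) above: for an exponentially declining SFH, SFR(t) ∝ exp(-t/τ), the stellar-mass fraction formed before a given epoch follows directly by integration. The cosmic ages below (a 13.8 Gyr universe with z = 2 at cosmic time ≈ 3.3 Gyr) are assumed round numbers, not values from the paper.

import numpy as np

def mass_fraction_before(t_cut, t_now=13.8, tau=5.0):
    """Fraction of an exp(-t/tau) SFH's stellar mass formed before cosmic time t_cut (Gyr)."""
    return (1.0 - np.exp(-t_cut / tau)) / (1.0 - np.exp(-t_now / tau))

for tau in (3.0, 5.0):
    frac = mass_fraction_before(3.3, tau=tau)
    print(f"tau = {tau:.0f} Gyr -> {frac:.0%} of the mass formed before z = 2")

With τ = 5 Gyr roughly half of the mass is in place by z = 2, so the 30%-80% range quoted in the abstract brackets this simple single-τ estimate on either side.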
4. Calibration Improvements for the Hubble Space Telescope Advanced Camera for Surveys Wide Field Channel: Post-Flash and Commanding Overheads Science.gov (United States) Miles, Nathan; Grogin, Norman; ACS Instrument Team 2018-01-01 The Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) post-flash calibration reference file currently suffers from an improper dark subtraction, resulting in a variety of image artifacts. In order to cure these artifacts, a new technique has been implemented where the total sum of the exposure time and flash duration for each image is held constant. The flash duration and exposure time are varied to produce two sets of images that are differenced to produce the new post-flash reference file. The first set all have long exposure times and short flash durations, while the second set has exactly the opposite. Next, using the newly generated post-flash reference file, we derive the commanding overheads associated with any ACS/WFC post-flashed observation. Whenever ACS/WFC receives commands, it takes a finite amount of time for the instrument to execute them; when commands are executed while the instrument is in ACCUM mode, additional dark current builds up and is added to the exposure. This additional dark current is not accounted for in the EXPTIME header keyword and therefore is not removed during the DARKCORR processing step in CALACS. By leveraging the stability of stable hot pixels and the new post-flash reference file, we analyze 1,273 post-flashed darks and extract the commanding overheads associated with ACS/WFC post-flashed data.
5. Exploring the NRO Opportunity for a Hubble-Sized Wide-Field Near-IR Space Telescope - New WFIRST Science.gov (United States) Dressler, Alan; Spergel, David; Mountain, Matt; Postman, Mark; Elliott, Erin; Bendek, Eduardo; Bennett, David; Dalcanton, Julianne; Gaudi, Scott; Gehrels, Neil 2013-01-01 We discuss scientific, technical, and programmatic issues related to the use of an NRO 2.4m telescope for the WFIRST initiative of the 2010 Decadal Survey. We show that this implementation of WFIRST, which we call "NEW WFIRST," would achieve the goals of the NWNH Decadal Survey for the WFIRST core programs of Dark Energy and Microlensing Planet Finding, with the crucial benefit of deeper and/or wider near-IR surveys for GO science and a potentially Hubble-like Guest Observer program. NEW WFIRST could also include a coronagraphic imager for direct detection of dust disks and planets around neighboring stars, a high-priority science and technology precursor for future ambitious programs to image Earth-like planets around neighboring stars.
6. Science.gov (United States) Sharakin, Sergey A.; Khrenov, Boris A.; Klimov, Pavel A.; Panasyuk, Mikhail I.; Potanin, Sergey A.; Yashin, Ivan V. 2012-09-01 The idea of ultrahigh-energy cosmic ray (UHECR) measurement from satellites was suggested by Linsley in 1981 and has since been developed into projects of cosmic-ray telescopes for the International Space Station (ISS): JEM-EUSO, to be installed on the Japanese experimental module, and KLYPVE, on the Russian ISS segment. A series of space-based detectors for measurements of background phenomena in those telescopes was developed in Russia (Universitetsky-Tatiana, Universitetsky-Tatiana-2, Chibis satellites). The satellite Lomonosov, with the UHECR detector TUS on board, will be launched in 2013. TUS contains a multi-channel photo receiver and a Fresnel-type mirror manufactured using a special multi-layer carbon-plastic technology at RSC "Energia". In this paper, one- and two-component optical systems with a 360 cm entrance diameter and a 400 cm focal distance for the wide-angle detector KLYPVE are studied. In the one-component case, using generalized Davies-Cotton systems (a Fresnel-type mirror with an ellipsoidal gross surface), it is possible to obtain an 8-10° field of view (FoV) with a focal spot smaller than the 15 x 15 mm pixel size.
In a two-component system (a parabolic mirror and a Fresnel lens mounted close to the photo receiver), it is possible to increase the FoV to 10-12° and significantly simplify the primary mirror construction.
7. The star formation histories of local group dwarf galaxies. I. Hubble space telescope/wide field planetary camera 2 observations Energy Technology Data Exchange (ETDEWEB) Weisz, Daniel R. [Department of Astronomy, University of California at Santa Cruz, 1156 High Street, Santa Cruz, CA 95064 (United States); Dolphin, Andrew E. [Raytheon Company, 1151 East Hermans Road, Tucson, AZ 85756 (United States); Skillman, Evan D. [Minnesota Institute for Astrophysics, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Holtzman, Jon [Department of Astronomy, New Mexico State University, Box 30001, 1320 Frenger Street, Las Cruces, NM 88003 (United States); Gilbert, Karoline M.; Dalcanton, Julianne J.; Williams, Benjamin F., E-mail: [email protected] [Department of Astronomy, University of Washington, Box 351580, Seattle, WA 98195 (United States) 2014-07-10 We present uniformly measured star formation histories (SFHs) of 40 Local Group (LG) dwarf galaxies based on color-magnitude diagram (CMD) analysis from archival Hubble Space Telescope imaging. We demonstrate that accurate SFHs can be recovered from CMDs that do not reach the oldest main sequence turn-off (MSTO), but emphasize that the oldest MSTO is critical for precisely constraining the earliest epochs of star formation. We find that: (1) the average lifetime SFHs of dwarf spheroidals (dSphs) can be approximated by an exponentially declining SFH with τ ∼ 5 Gyr; (2) lower luminosity dSphs are less likely to have extended SFHs than more luminous dSphs; (3) the average SFHs of dwarf irregulars (dIrrs), transition dwarfs, and dwarf ellipticals can be approximated by the combination of an exponentially declining SFH (τ ∼ 3-4 Gyr) for lookback ages >10-12 Gyr ago and a constant SFH thereafter; (4) the observed fraction of stellar mass formed prior to z = 2 ranges considerably (80% for galaxies with M < 10^5 M_☉ to 30% for galaxies with M > 10^7 M_☉) and is largely explained by environment; (5) the distinction between 'ultra-faint' and 'classical' dSphs is arbitrary; (6) LG dIrrs formed a significantly higher fraction of stellar mass prior to z = 2 than the Sloan Digital Sky Survey galaxies from Leitner and the SFHs from the abundance matching models of Behroozi et al. This may indicate higher than expected star formation efficiencies at early times in low mass galaxies. Finally, we provide all the SFHs in tabulated electronic format for use by the community.
8. THE LUMINOSITY, MASS, AND AGE DISTRIBUTIONS OF COMPACT STAR CLUSTERS IN M83 BASED ON HUBBLE SPACE TELESCOPE/WIDE FIELD CAMERA 3 OBSERVATIONS International Nuclear Information System (INIS) Chandar, Rupali; Whitmore, Bradley C.; Mutchler, Max; Bond, Howard; Kim, Hwihyun; Kaleida, Catherine; Calzetti, Daniela; Saha, Abhijit; O'Connell, Robert; Balick, Bruce; Carollo, Marcella; Disney, Michael; Dopita, Michael A.; Frogel, Jay A.; Hall, Donald; Holtzman, Jon A.; Kimble, Randy A.; McCarthy, Patrick; Paresce, Francesco; Silk, Joe 2010-01-01 The newly installed Wide Field Camera 3 (WFC3) on the Hubble Space Telescope has been used to obtain multi-band images of the nearby spiral galaxy M83.
These new observations are the deepest and highest resolution images ever taken of a grand-design spiral, particularly in the near-ultraviolet, and allow us to better differentiate compact star clusters from individual stars and to measure the luminosities of even faint clusters in the U band. We find that the luminosity function (LF) for clusters outside of the very crowded starburst nucleus can be approximated by a power law, dN/dL ∝ L^α, with α = -2.04 ± 0.08, down to M_V ∼ -5.5. We test the sensitivity of the LF to different selection techniques, filters, binning, and aperture correction determinations, and find that none of these contribute significantly to uncertainties in α. We estimate ages and masses for the clusters by comparing their measured UBVI, Hα colors with predictions from single stellar population models. The age distribution of the clusters can be approximated by a power law, dN/dτ ∝ τ^γ, with γ = -0.9 ± 0.2, for M ≳ few × 10^3 M_☉ and τ ≲ 10^8 yr. This indicates that clusters are disrupted quickly, with ∼80%-90% disrupted each decade in age over this time. The mass function of clusters over the same M-τ range is a power law, dN/dM ∝ M^β, with β = -1.94 ± 0.16, and does not have bends or show curvature at either high or low masses. Therefore, we do not find evidence for a physical upper mass limit, M_C, or for the earlier disruption of lower mass clusters when compared with higher mass clusters, i.e., mass-dependent disruption. We briefly discuss these implications for the formation and disruption of the clusters.
9. IOT Overview: Wide-Field Imaging Science.gov (United States) Selman, F. J. The Wide Field Imager (WFI) instrument at La Silla has been the workhorse of wide-field imaging instruments at ESO for several years. In this contribution I will summarize the issues relating to its productivity for the community, both in terms of the quality and the quantity of data that has come out of it. Although only surveys of limited scope have been completed using WFI, it is ESO's stepping-stone to the new generation of survey telescopes.
10. Time Series Data Visualization in World Wide Telescope Science.gov (United States) Fay, J. WorldWide Telescope provides a rich set of time series visualizations for both archival and real-time data. WWT consists of desktop tools for interactive, immersive visualization and HTML5 web-based controls that can be utilized in customized web pages. WWT supports a range of display options including full dome, power walls, stereo and virtual reality headsets.
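The disruption claim in the M83 cluster entry (8) above follows from one line of arithmetic: if clusters form at a roughly constant rate, an observed age distribution dN/dτ ∝ τ^γ means the surviving fraction drops by a factor of 10^γ per decade of age.

gamma = -0.9
surviving = 10 ** gamma                               # fraction surviving one decade in age
print(f"disrupted per decade: {1 - surviving:.0%}")   # ~87%, i.e. the quoted ~80%-90%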
11. The Hubble Space Telescope Frontier Fields Program Science.gov (United States) Koekemoer, Anton M.; Mack, Jennifer; Lotz, Jennifer M.; Borncamp, David; Khandrika, Harish G.; Lucas, Ray A.; Martlin, Catherine; Porterfield, Blair; Sunnquist, Ben; Anderson, Jay; Avila, Roberto J.; Barker, Elizabeth A.; Grogin, Norman A.; Gunning, Heather C.; Hilbert, Bryan; Ogaz, Sara; Robberto, Massimo; Sembach, Kenneth; Flanagan, Kathryn; Mountain, Matt 2017-08-01 The Hubble Space Telescope Frontier Fields program is a large Director's Discretionary program of 840 orbits, to obtain ultra-deep observations of six strong lensing clusters of galaxies, together with parallel deep blank fields, making use of the strong lensing amplification by these clusters of distant background galaxies to detect the faintest galaxies currently observable in the high-redshift universe. The entire program has now completed successfully for all 6 clusters, namely Abell 2744, Abell S1063, Abell 370, MACS J0416.1-2403, MACS J0717.5+3745 and MACS J1149.5+2223. Each of these was observed over two epochs, to a total depth of 140 orbits on the main cluster and an associated parallel field, obtaining images in ACS (F435W, F606W, F814W) and WFC3/IR (F105W, F125W, F140W, F160W) on both the main cluster and the parallel field in all cases. Full sets of high-level science products have been generated for all these clusters by the team at STScI, including cumulative-depth data releases during each epoch, as well as full-depth releases after the completion of each epoch. These products include all the full-depth distortion-corrected drizzled mosaics and associated products for each cluster, which are science-ready to facilitate the construction of lensing models as well as enabling a wide range of other science projects. Many improvements beyond default calibration for ACS and WFC3/IR are implemented in these data products, including corrections for persistence, time-variable sky, and low-level dark current residuals, as well as improvements in astrometric alignment to achieve milliarcsecond-level accuracy. The full set of resulting high-level science products and mosaics are publicly delivered to the community via the Mikulski Archive for Space Telescopes (MAST) to enable the widest scientific use of these data, as well as ensuring a public legacy dataset of the highest possible quality that is of lasting value to the entire community.
12. INFRARED TRANSMISSION SPECTROSCOPY OF THE EXOPLANETS HD 209458b AND XO-1b USING THE WIDE FIELD CAMERA-3 ON THE HUBBLE SPACE TELESCOPE Energy Technology Data Exchange (ETDEWEB) Deming, Drake; Wilkins, Ashlee [Department of Astronomy, University of Maryland, College Park, MD 20742 (United States); McCullough, Peter; Crouzet, Nicolas [Space Telescope Science Institute, Baltimore, MD 21218 (United States); Burrows, Adam [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544-1001 (United States); Fortney, Jonathan J. [Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Agol, Eric; Dobbs-Dixon, Ian [NASA Astrobiology Institute's Virtual Planetary Laboratory (United States); Madhusudhan, Nikku [Yale Center for Astronomy and Astrophysics, Yale University, New Haven, CT 06511 (United States); Desert, Jean-Michel; Knutson, Heather A.; Line, Michael [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States); Gilliland, Ronald L.
[Center for Exoplanets and Habitable Worlds, The Pennsylvania State University, University Park, PA 16802 (United States); Haynes, Korey [Department of Physics and Astronomy, George Mason University, Fairfax, VA 22030 (United States); Magic, Zazralt [Max-Planck-Institut fuer Astrophysik, D-85741 Garching (Germany); Mandell, Avi M.; Clampin, Mark [NASA's Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Ranjan, Sukrit; Charbonneau, David [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Seager, Sara, E-mail: [email protected] [Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); and others 2013-09-10 Exoplanetary transmission spectroscopy in the near-infrared using the Hubble Space Telescope (HST) NICMOS is currently ambiguous because different observational groups claim different results from the same data, depending on their analysis methodologies. Spatial scanning with HST/WFC3 provides an opportunity to resolve this ambiguity. We here report WFC3 spectroscopy of the giant planets HD 209458b and XO-1b in transit, using spatial scanning mode for maximum photon-collecting efficiency. We introduce an analysis technique that derives the exoplanetary transmission spectrum without the necessity of explicitly decorrelating instrumental effects, and achieves nearly photon-limited precision even at the high flux levels collected in spatial scan mode. Our errors are within 6% (XO-1) and 26% (HD 209458b) of the photon limit at a resolving power of λ/Δλ ≈ 70, and are better than 0.01% per spectral channel. Both planets exhibit water absorption of approximately 200 ppm at the water peak near 1.38 μm. Our result for XO-1b contradicts the much larger absorption derived from NICMOS spectroscopy. The weak water absorption we measure for HD 209458b is reminiscent of the weakness of sodium absorption in the first transmission spectroscopy of an exoplanet atmosphere by Charbonneau et al. Model atmospheres having uniformly distributed extra opacity of 0.012 cm^2 g^-1 account approximately for both our water measurement and the sodium absorption. Our results for HD 209458b support the picture advocated by Pont et al. in which weak molecular absorptions are superposed on a transmission spectrum that is dominated by continuous opacity due to haze and/or dust. However, the extra opacity needed for HD 209458b is grayer than for HD 189733b, with a weaker Rayleigh component.
13. WorldWide Telescope Ambassadors: A Year 3 Update OpenAIRE Udomprasert, Patricia S; Goodman, Alyssa A.; Wong, Curtis 2013-01-01 We give a brief overview of some key features of WorldWide Telescope and its Ambassadors Program, and we describe two goals for expanding the program in the coming year: scaling up training efforts; and developing "plug and play" Visualization Lab modules that teach key Earth and Space Science concepts to students while emphasizing important scientific processes and skills. We discuss several different ways that members of the astronomy education and outreach community can incorporate WWT-bas...
14. HST WIDE FIELD PLANETARY CAMERA 2 OBSERVATIONS OF MARS Data.gov (United States) National Aeronautics and Space Administration — The Hubble Space Telescope Wide Field Planetary Camera 2 data archive contains calibrated data of Mars observed between April 27, 1999 and September 4, 2001. These...
15. The Receiver System for the Ooty Wide Field Array The legacy Ooty Radio Telescope (ORT) is being reconfigured as a 264-element synthesis telescope, called the Ooty Wide Field Array (OWFA). Its antenna elements are the contiguous 1.92 m sections of the parabolic cylinder. It will operate in a 38-MHz frequency band centred at 326.5 MHz and will be equipped with a ...
16. A Physical Model-based Correction for Charge Traps in the Hubble Space Telescope's Wide Field Camera 3 Near-IR Detector and Its Applications to Transiting Exoplanets and Brown Dwarfs Energy Technology Data Exchange (ETDEWEB) Zhou, Yifan; Apai, Dániel; Schneider, Glenn [Department of Astronomy/Steward Observatory, The University of Arizona, 933 N. Cherry Avenue, Tucson, AZ 85721 (United States); Lew, Ben W. P., E-mail: [email protected] [Department of Planetary Science/Lunar and Planetary Laboratory, The University of Arizona, 1640 E. University Boulevard, Tucson, AZ 85718 (United States) 2017-06-01 The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, which limits its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach a thermal equilibrium. We show that the ramp-effect profiles can be explained and corrected with high fidelity using charge trapping theories. We also present a model for this process that can be used to predict and to correct charge trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (∼40 visits) of WFC3 observations and is shown to provide near photon noise limited corrections for observations made with both staring and scanning modes of transiting exoplanets, as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from the extension of this model if similar systematic profiles are observed.
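The charge-trapping picture in the WFC3 entry (16) above can be captured by first-order kinetics: traps capture electrons at a rate proportional to the illumination and the number of empty traps, and release them on a fixed timescale, so the trapped charge ramps up toward saturation. This is a minimal sketch of that idea with made-up parameter values; the paper's actual detector model and best-fit parameters differ.

import numpy as np

def trapped_charge(t, flux, n_traps=500.0, p_capture=1e-5, tau_release=1e4):
    """Closed-form solution of dn/dt = p*flux*(N - n) - n/tau with n(0) = 0."""
    k = p_capture * flux + 1.0 / tau_release
    n_eq = p_capture * flux * n_traps / k        # equilibrium trapped charge
    return n_eq * (1.0 - np.exp(-k * t))

t = np.linspace(0.0, 2700.0, 4)                  # roughly one HST orbit, in seconds
print(trapped_charge(t, flux=100.0))             # the flattening 'ramp' profile

Because the first orbit of a visit starts with empty traps while later orbits start partially filled, modeling and removing this trapped charge is what lets the first orbit reach the same precision as the rest of the visit.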
17. Michelson wide-field stellar interferometry: Principles and experimental verification NARCIS (Netherlands) Montilla, I.; Pereira, S.F.; Braat, J.J.M. 2005-01-01 A new interferometric technique for Michelson wide-field interferometry is presented that consists of a Michelson pupil-plane combination scheme in which a wide field of view can be achieved in one shot. This technique uses a stair-shaped mirror in the intermediate image plane of each telescope in
18. WorldWide Telescope in High School Astronomy Competitions Science.gov (United States) Constantin, Ana-Maria; Goodman, A. A.; Udomprasert, P. S. 2014-01-01 This project aims to improve astronomy education at the high school level, and to increase awareness in astronomy for pre-university students, on an international scale. In 2013, the WorldWide Telescope Ambassadors Program began a collaboration with the International Olympiad in Astronomy and Astrophysics (IOAA), which was held in the city of Volos, Greece in August 2013. Now at its VIIth edition, IOAA is the largest annual astronomy competition for high school students, and it consists of one team task and three individual ones - Theoretical, Data Analysis, and Observational. Each of the participating countries (35 in 2013, compared to 21 in 2007) is responsible for selecting up to five representative students for the International round. IOAA is meant to promote future collaborations between these students, and to encourage friendships inside a global scientific community. Ana-Maria Constantin, a current Harvard undergraduate student and a former medalist of IOAA, represented WorldWide Telescope Ambassadors in Greece by giving a talk on the advantages of using WWT as a tool for research and education. As a result, the President and the International Board of the Olympiad have expressed support for including WWT in the competition for future editions. WWTA is working with the Organizing Board for next year's competition in Romania, to include WWT as a testing tool. This poster will summarize key points from the WWTA presentation in Greece, present ideas for WWT-based activities in future IOAA competitions, and outline plans for new collaborations from representatives of Sri Lanka, Poland, Bangladesh, and Colombia. Given the positive feedback we have received after the presentation in Greece, we are also considering future implementations of WWT in summer research camps for high school students, such as the Summer Science Program.
19. DUST EXTINCTION FROM BALMER DECREMENTS OF STAR-FORMING GALAXIES AT 0.75 ≤ z ≤ 1.5 WITH HUBBLE SPACE TELESCOPE/WIDE-FIELD-CAMERA 3 SPECTROSCOPY FROM THE WFC3 INFRARED SPECTROSCOPIC PARALLEL SURVEY Energy Technology Data Exchange (ETDEWEB) Dominguez, A.; Siana, B.; Masters, D. [Department of Physics and Astronomy, University of California Riverside, Riverside, CA 92521 (United States); Henry, A. L.; Martin, C. L. [Department of Physics, University of California, Santa Barbara, CA 93106 (United States); Scarlata, C.; Bedregal, A. G. [Minnesota Institute for Astrophysics, University of Minnesota, Minneapolis, MN 55455 (United States); Malkan, M.; Ross, N. R. [Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, CA 90095 (United States); Atek, H.; Colbert, J. W. [Spitzer Science Center, Caltech, Pasadena, CA 91125 (United States); Teplitz, H. I.; Rafelski, M. [Infrared Processing and Analysis Center, Caltech, Pasadena, CA 91125 (United States); McCarthy, P.; Hathi, N. P.; Dressler, A. [Observatories of the Carnegie Institution for Science, Pasadena, CA 91101 (United States); Bunker, A., E-mail: [email protected] [Department of Physics, Oxford University, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom) 2013-02-15 Spectroscopic observations of Hα and Hβ emission lines of 128 star-forming galaxies in the redshift range 0.75 ≤ z ≤ 1.5 are presented. These data were taken with slitless spectroscopy using the G102 and G141 grisms of the Wide-Field-Camera 3 (WFC3) on board the Hubble Space Telescope as part of the WFC3 Infrared Spectroscopic Parallel survey. Interstellar dust extinction is measured from stacked spectra that cover the Balmer decrement (Hα/Hβ).
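A minimal sketch of the Balmer-decrement extinction estimate used in the survey entry (19), which continues below: compare the observed Hα/Hβ ratio to the Case B intrinsic value of 2.86 and convert the excess to E(B-V) with an attenuation curve. The Calzetti-like coefficients k_Hβ ≈ 4.60 and k_Hα ≈ 3.33 are values assumed here for illustration, not numbers quoted in the abstract.

import numpy as np

def ebv_from_balmer(ha_over_hb, r_intrinsic=2.86, k_hb=4.60, k_ha=3.33):
    """E(B-V) implied by an observed Hα/Hβ flux ratio."""
    return 2.5 / (k_hb - k_ha) * np.log10(ha_over_hb / r_intrinsic)

print(ebv_from_balmer(3.5))   # a mildly dusty galaxy -> E(B-V) ≈ 0.17 mag

An observed decrement equal to 2.86 maps to zero extinction, which is how the faintest stacks in this survey can be "consistent with no dust extinction."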
We present dust extinction as a function of Hα luminosity (down to 3 × 10^41 erg s^-1), galaxy stellar mass (reaching 4 × 10^8 M_☉), and rest-frame Hα equivalent width. The faintest galaxies are two times fainter in Hα luminosity than galaxies previously studied at z ≈ 1.5. An evolution is observed where galaxies of the same Hα luminosity have lower extinction at higher redshifts, whereas no evolution is found within our error bars with stellar mass. The lower Hα luminosity galaxies in our sample are found to be consistent with no dust extinction. We find an anti-correlation of the [O III] λ5007/Hα flux ratio as a function of luminosity where galaxies with L_Hα < 5 × 10^41 erg s^-1 are brighter in [O III] λ5007 than Hα. This trend is evident even after extinction correction, suggesting that the increased [O III] λ5007/Hα ratio in low-luminosity galaxies is likely due to lower metallicity and/or higher ionization parameters.
20. Hubble Space Telescope Wide Field Camera 3 Observations of Escaping Lyman Continuum Radiation from Galaxies and Weak AGN at Redshifts z ∼ 2.3-4.1 Science.gov (United States) Smith, Brent M.; Windhorst, Rogier A.; Jansen, Rolf A.; Cohen, Seth H.; Jiang, Linhua; Dijkstra, Mark; Koekemoer, Anton M.; Bielby, Richard; Inoue, Akio K.; MacKenty, John W.; O'Connell, Robert W.; Silk, Joseph I. 2018-02-01 We present observations of escaping Lyman Continuum (LyC) radiation from 34 massive star-forming galaxies (SFGs) and 12 weak AGN with reliably measured spectroscopic redshifts at z ≃ 2.3-4.1. We analyzed Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) mosaics of the Early Release Science (ERS) field in three UVIS filters to sample the rest-frame LyC over this redshift range. With our best current assessment of the WFC3 systematics, we provide 1σ upper limits for the average LyC emission of galaxies at ⟨z⟩ = 2.35, 2.75, and 3.60 to ∼28.5, 28.1, and 30.7 mag in image stacks of 11-15 galaxies in the WFC3/UVIS F225W, F275W, and F336W, respectively. The LyC flux of weak AGN at ⟨z⟩ = 2.62 and 3.32 is detected at 28.3 and 27.4 mag with S/Ns of ∼2.7 and 2.5 in F275W and F336W for stacks of 7 and 3 AGN, respectively, while AGN at ⟨z⟩ = 2.37 are constrained to ≳27.9 mag at 1σ in a stack of 2 AGN. The stacked AGN LyC light profiles are flatter than their corresponding non-ionizing UV continuum profiles out to radii of r ≲ 0.9″, which may indicate a radial dependence of porosity in the ISM. With synthetic stellar SEDs fit to UV continuum measurements longward of Lyα and IGM transmission models, we constrain the absolute LyC escape fractions to f_esc^abs ≃ 22 (+44/-22)% at ⟨z⟩ = 2.35 and ≲55% at ⟨z⟩ = 2.75 and 3.60, respectively. All available data for galaxies, including published work, suggest a more sudden increase of f_esc with redshift at z ≃ 2. Dust accumulating in (massive) galaxies over cosmic time correlates with increased H I column density, which may lead to reducing f_esc more suddenly at z ≲ 2. This may suggest that SFGs collectively contributed to maintaining cosmic reionization at redshifts z ≳ 2-4, while AGN likely dominated reionization at z ≲ 2.
1. Wide Field Instrument Adjutant Scientist Science.gov (United States) Spergel, David As Wide Field Instrument Adjutant Scientist, my goal will be to maximize the science capability of the mission in a cost-contained environment. I hope to work with HQ, the project, and the FSWG to assure mission success. I plan to play a leadership role in communicating the WFIRST science capabilities to the astronomy community, obtaining input from both science teams and the broader community to help derive performance requirements and calibration metrics. I plan to focus on developing the observing program for the deep fields and on using them to calibrate instrument performance and capabilities. I plan to organize workshops that will bring together WFIRST team members with astronomers working on LSST, Euclid, JWST, and the ELTs to maximize combined science return. I am also eager to explore the astrometric and stellar seismology capabilities of the instrument with a goal of maximizing science return without affecting science requirements.
2. World-Wide Effort Bringing ALMA Telescope Into Reality Science.gov (United States) 2008-02-01 In the thin, dry air of northern Chile's Atacama Desert, at an altitude of 16,500 feet, an amazing new telescope system is taking shape, on schedule to provide the world's astronomers with unprecedented views of the origins of stars, galaxies, and planets. The Atacama Large Millimeter/submillimeter Array (ALMA) will open an entirely new "window" on the Universe, allowing scientists to unravel longstanding and important astronomical mysteries. [Artist's concept of the completed ALMA. Credit: ALMA/ESO/NRAO/NAOJ] "Most of the photons in the Universe are in the wavelength range that ALMA will receive, and ALMA will give us our first high-resolution views at these wavelengths. This will be a tremendous advancement for astronomy and open one of our science's last frontiers," Anneila Sargent, a Caltech professor and ALMA Board member, told the American Association for the Advancement of Science at its meeting in Boston, Mass. The millimeter and submillimeter wavelength range lies between what is traditionally considered radio waves and infrared waves. ALMA, a system using up to 66 high-precision dish antennas working together, will provide astronomers with dramatically greater sensitivity, the ability to detect faint objects, and resolving power, the ability to see fine detail, than has ever before been available in this range. "This ambitious project is the product of an international collaboration that spans the globe," Sargent said. "ALMA truly will enable transformational science and providing this capability has required a massive, world-wide effort," she added. The ALMA project is a partnership between Europe, Japan and North America in cooperation with the Republic of Chile. ALMA is funded in Europe by ESO, in Japan by the National Institutes of Natural Sciences in cooperation with the Academia Sinica in Taiwan and in North America by the U.S. National Science Foundation in cooperation with the
3. HARMONI: A single-field wide-band integral-field spectrograph for the European ELT NARCIS (Netherlands) Thatte, Niranjan; Tecza, Mathias; Clarke, Fraser; Davies, Roger L.; Remillieux, Alban; Bacon, Roland; Lunney, David; Arribas, Santiago; Mediavilla, Evencio; Gago, Fernando; Bezawada, Naidu; Ferruit, Pierre; Fragoso, Ana; Freeman, David; Fuentes, Javier; Fusco, Thierry; Gallie, Angus; Garcia, Adolfo; Goodsall, Timothy; Gracia, Felix; Jarno, Aurelien; Kosmalski, Johan; Lynn, James; McLay, Stuart; Montgomery, David; Pecontal, Arlette; Schnetler, Hermine; Smith, Harry; Sosa, Dario; Battaglia, Giuseppina; Bowles, Neil; Colina, Luis; Emsellem, Eric; Garcia-Perez, Ana; Gladysz, Szymon; Hook, Isobel; Irwin, Patrick; Jarvis, Matt; Kennicutt, Robert; Levan, Andrew; Longmore, Andy; Magorrian, John; McCaughrean, Mark; Origlia, Livia; Rebolo, Rafael; Rigopoulou, Dimitra; Ryan, Sean; Swinbank, Mark; Tanvir, Nial; Tolstoy, Eline; Verma, Aprajita We describe the results of a Phase A study for a single field, wide band, near-infrared integral field spectrograph for the European Extremely Large Telescope (E-ELT). HARMONI, the High Angular Resolution Monolithic Optical & Near-infrared Integral field spectrograph, provides the E-ELT's core
4. A Galaxy Zoo - WorldWide Telescope Mashup: Expanding User Defined Exploration Science.gov (United States) Luebbert, Jarod; Sands, M.; Fay, J.; Smith, A.; Gay, P. L.; Galaxy Zoo Team 2010-01-01 We present a new way of exploring your favorite Galaxy Zoo galaxies within the context of the sky using Microsoft Research's WorldWide Telescope. Galaxy Zoo has a fantastic community that is eager to learn and contribute to science through morphological classifications of galaxies. WorldWide Telescope is an interactive observatory that allows users to explore the sky. WorldWide Telescope uses images from the world's best telescopes, including the galaxies of the Sloan Digital Sky Survey. WorldWide Telescope provides a fantastic sense of size and distance that is hard to experience in Galaxy Zoo. Creating tours from favorite galaxies directly from Galaxy Zoo aims to solve this dilemma. The incorporation of Galaxy Zoo and WorldWide Telescope provides a great resource for users to learn more about the galaxies they are classifying. Users can now explore the areas around certain galaxies and view information about that location from within WorldWide Telescope. Not only does this encourage self-motivated research, but once tours are created they can be shared with anyone. We hope this will help spread citizen science to different audiences via email, Facebook, and Twitter. Without the WorldWide Telescope team at Microsoft Research this project would not have been possible. Please go start exploring at http://wwt.galaxyzoo.org. This project was funded through the Microsoft Research Academic Program.
5. The development of WIFIS: a wide integral field infrared spectrograph Science.gov (United States) Sivanandam, Suresh; Chou, Richard C. Y.; Moon, Dae-Sik; Ma, Ke; Millar-Blanchaer, Maxwell; Eikenberry, Stephen S.; Chun, Moo-Young; Kim, Sang Chul; Raines, Steven N.; Eisner, Joshua 2012-09-01 We present the current results from the development of a wide integral field infrared spectrograph (WIFIS). WIFIS offers an unprecedented combination of etendue and spectral resolving power for seeing-limited, integral field observations in the 0.9-1.8 μm range and is most sensitive in the 0.9-1.35 μm range.
Its optical design consists of front-end re-imaging optics, an all-reflective, image-slicer-type integral field unit (IFU) called FISICA, and a long-slit grating spectrograph back-end that is coupled with a HAWAII 2RG focal plane array. The full wavelength range is achieved by selecting between two different gratings. By virtue of its re-imaging optics, the spectrograph is quite versatile and can be used at multiple telescopes. The size of its field of view is unrivalled by other similar spectrographs, offering a 4.5″ × 12″ integral field at a 10-meter-class telescope (or 20″ × 50″ at a 2.3-meter telescope). The use of WIFIS will be crucial in astronomical problems which require wide-field, two-dimensional spectroscopy, such as the study of merging galaxies at moderate redshift and nearby star/planet-forming regions and supernova remnants. We discuss the final optical design of WIFIS, and its predicted on-sky performance on two reference telescope platforms: the 2.3-m Steward Bok telescope and the 10.4-m Gran Telescopio Canarias. We also present the results from our laboratory characterization of FISICA. IFU properties such as magnification, field-mapping, and slit width along the entire slit length were measured by our tests. The construction and testing of WIFIS is expected to be completed by early 2013. We plan to commission the instrument at the 2.3-m Steward Bok telescope at Kitt Peak, USA in Spring 2013.
6. The LOFT wide field monitor DEFF Research Database (Denmark) Brandt, Søren; Hernanz, M.; Alvarez, L. 2012-01-01 LOFT (Large Observatory For x-ray Timing) is one of the four missions selected in 2011 for assessment study for the ESA M3 mission in the Cosmic Vision program, expected to be launched in 2024. The LOFT mission will carry two instruments with their prime sensitivity in the 2-30 keV range: a 10 m² class large area detector (LAD) with an effective area ~20 times larger than any previous mission, which will, by timing studies, be able to address fundamental questions about strong gravity in the vicinity of black holes and the equation of state of nuclear matter in neutron stars. The prime goal of the WFM will be to detect transient sources to be observed by the LAD. However, with its wide field of view and good energy...
7. A very wide band telescope for Planck using optical and radio frequency techniques Science.gov (United States) Fargant, Guy; Dubruel, Denis; Cornut, Myriam; Riti, Jean-Bernard; Passvogel, Thomas; de Maagt, Peter; Anderegg, Michel; Tauber, Jan 2017-11-01 Planck, associated with FIRST, is one of the ESA scientific missions belonging to the Horizon 2000 programme. It will be launched by an Ariane 5 in 2007. Planck aims at obtaining very accurate images of the Cosmic Microwave Background fluctuations, thanks to a spaceborne telescope featuring a wide wavelength range and an excellent control of straylight and thermal variations. The telescope is based on an off-axis Gregorian design consisting of two concave ellipsoidal mirrors with a 1.5-meter pupil, derived from radio-frequency antennas, but with a very wide spectral domain which ranges from the far infrared (350 μm) up to millimetric wavelengths (10 mm). Its field of view is large (10 degrees) owing to a high number of detectors in the focal plane. The short wavelength detectors (bolometers operating at 0.1 K) are located at the centre of the focal plane unit while the long wavelength ones (based on HEMT amplifier technology operating at 20 K) are located at the periphery.
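A back-of-envelope check on the Planck telescope entry (7) above: with a 1.5 m pupil, the diffraction-limited beam width θ ≈ 1.22 λ/D varies by a factor of roughly 30 across the stated 350 μm to 10 mm band. Purely illustrative; the real beams depend on the full off-axis Gregorian design and the feed optics.

import math

D = 1.5                                  # pupil diameter in metres
for lam in (350e-6, 10e-3):              # wavelengths in metres
    theta = 1.22 * lam / D               # diffraction-limited beam width (rad)
    print(f"lambda = {lam*1e3:g} mm -> beam ~ {math.degrees(theta)*60:.1f} arcmin")

This large spread in beam size across the band is part of why one telescope serving both bolometers and HEMT radiometers demands the system-level optical, radio-frequency, and thermal design described in the entry.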
The Planck telescope operates at a temperature below 60 K. This level is achieved in a passive way, i.e., using a cryogenic radiator. Furthermore, this radiator must accommodate a set of coolers dedicated to the focal plane unit, cooling one of the experiments down to 0.1 K. The Planck mission leads to very stringent requirements (straylight, thermal stability) that can only be met by designing the spacecraft at the system level, combining optical, radio-frequency, and thermal techniques to achieve the required performance.
8. Stray light field dependence for large astronomical space telescopes Science.gov (United States) Lightsey, Paul A.; Bowers, Charles W. 2017-09-01 aspect ratio of the tubular baffle length to PM diameter. Additional analysis has been done to examine the stray light implications for the fields near the image of a bright source. This near field stray light is shown to be dependent on the Bidirectional Reflectance Distribution Function (BRDF) characteristics of the mirrors in the optical train. The near field stray light contribution is dominated by those mirrors closer to the focal plane compared to the contributions from the PM and SM. Hence the near field stray light is independent of the exterior telescope baffle geometry. Contributions from self-emission from the telescope have been compared to natural background for telescopes operating at infrared wavelengths.
9. Michelson wide-field stellar interferometry NARCIS (Netherlands) Montilla, I. 2004-01-01 The main goal of this thesis is to develop a system to permit wide field operation of Michelson Interferometers. A wide field of view is very important in applications such as the observation of extended or multiple objects, the fringe acquisition and/or tracking on a nearby unresolved object, and
10. WFIRST: Astrometry with the Wide-Field Imager Science.gov (United States) Bellini, Andrea; WFIRST Astrometry Working Group 2018-01-01 The wide field of view and stable, sharp images delivered by WFIRST's Wide-Field Imager make it an excellent instrument for astrometry, one of five major discovery areas identified in the 2010 Decadal Survey. Compared to the Hubble Space Telescope, WFIRST's wider field of view with similar image quality will provide hundreds more astrometric targets per image as well as background galaxies and stars with precise positions in the Gaia catalog. In addition, WFIRST will operate in the infrared, a wavelength regime where the most precise astrometry has so far been achieved with adaptive optics images from large ground-based telescopes. WFIRST will provide at least a factor of three improvement in astrometry over the current state of the art in this wavelength range, while spanning a field of view thousands of times larger. WFIRST is thus poised to make major contributions to multiple science topics in which astrometry plays an important role, without major alterations to the planned mission or instrument. We summarize a few of the most compelling science cases where WFIRST astrometry could prove transformational.
11. The Receiver System for the Ooty Wide Field Array Science.gov (United States) Subrahmanya, C. R.; Prasad, P.; Girish, B. S.; Somashekar, R.; Manoharan, P. K.; Mittal, A. K. 2017-03-01 The legacy Ooty Radio Telescope (ORT) is being reconfigured as a 264-element synthesis telescope, called the Ooty Wide Field Array (OWFA). Its antenna elements are the contiguous 1.92 m sections of the parabolic cylinder.
It will operate in a 38-MHz frequency band centred at 326.5 MHz and will be equipped with a digital receiver including a 264-element spectral correlator with a spectral resolution of 48 kHz. OWFA is designed to retain the benefits of the legacy telescope (its equatorial mount, continuous 9-hour tracking ability, and large collecting area) while using modern digital techniques to enhance the instantaneous field of view by more than an order of magnitude. OWFA has unique advantages for contemporary investigations related to large scale structure, transient events and space weather watch. In this paper, we describe the RF subsystems, digitizers and fibre optic communication of OWFA and highlight some specific aspects of the system relevant for the observations planned during the initial operation.
12. Virtual Reality Astronomy Education Using AAS WorldWide Telescope and Oculus Rift Science.gov (United States) Weigel, A. David; Moraitis, Christina D. 2017-01-01 The Boyd E. Christenberry Planetarium at Samford University (Birmingham, AL) offers family-friendly, live, and interactive planetarium presentations that educate the public on topics from astronomy basics to current cutting-edge astronomical discoveries. With limited funding, it is not possible to provide state-of-the-art planetarium hardware for these community audiences. In a society in which many people, even young children, have access to high resolution smart phones and highly realistic video games, it is important to leverage cutting-edge technology to intrigue young and old minds alike. We use an Oculus Rift virtual reality headset running AAS WorldWide Telescope software to visualize 3D data in a fully immersive environment. We create interactive experiences and videos to highlight astronomical concepts and also to communicate the beauty of our universe. The ease of portability enables us to set up a Virtual Reality (VR) experience at various events, festivals, and even in classrooms to provide community outreach that a fixed planetarium cannot. This VR experience adds the "wow" factor that encourages children and adults to engage in our various planetarium events to learn more about astronomy and continue to explore the final frontier of space. These VR experiences encourage our college students to participate in our astronomy education, resulting in increased interest in STEM fields, particularly physics and math.
13. Mechanical setup for optical aperture synthesis for wide field imaging NARCIS (Netherlands) Giesen, P.T.M.; Ouwerkerk, B.R.; Brug, H. van; Dool, T.C. van den; Avoort, C. van der 2004-01-01 Homothetic mapping is a technique that combines the images from several telescopes so that they look as though they came from a single large telescope. This technique enables a much wider interferometric field of view than current techniques can provide. To investigate the feasibility, a
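Two numbers implied by the Ooty Wide Field Array receiver entries (11 and 15) above follow from quick arithmetic: the spectral channel count for a 38 MHz band sampled at 48 kHz resolution, and the number of baselines a 264-element correlator must form.

bandwidth_hz = 38e6
resolution_hz = 48e3
n_elements = 264

n_channels = bandwidth_hz / resolution_hz           # ~792 spectral channels
n_baselines = n_elements * (n_elements - 1) // 2    # 34716 cross-correlations
print(f"{n_channels:.0f} channels, {n_baselines} baselines")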
12. Virtual Reality Astronomy Education Using AAS WorldWide Telescope and Oculus Rift Science.gov (United States) Weigel, A. David; Moraitis, Christina D. 2017-01-01 The Boyd E. Christenberry Planetarium at Samford University (Birmingham, AL) offers family friendly, live, and interactive planetarium presentations that educate the public on topics from astronomy basics to current cutting edge astronomical discoveries. With limited funding, it is not possible to provide state of the art planetarium hardware for these community audiences. In a society in which many people, even young children, have access to high resolution smart phones and highly realistic video games, it is important to leverage cutting-edge technology to intrigue young and old minds alike. We use an Oculus Rift virtual reality headset running AAS WorldWide Telescope software to visualize 3D data in a fully immersive environment. We create interactive experiences and videos to highlight astronomical concepts and also to communicate the beauty of our universe. The ease of portability enables us to set up a Virtual Reality (VR) experience at various events, festivals, and even in classrooms to provide a community outreach that a fixed planetarium cannot. This VR experience adds the "wow" factor that encourages children and adults to engage in our various planetarium events to learn more about astronomy and continue to explore the final frontier of space. These VR experiences encourage our college students to participate in our astronomy education, resulting in increased interest in STEM fields, particularly physics and math. 13. Mechanical setup for optical aperture synthesis for wide field imaging NARCIS (Netherlands) Giesen, P.T.M.; Ouwerkerk, B.R.; Brug, H. van; Dool, T.C. van den; Avoort, C. van der 2004-01-01 Homothetic mapping is a technique that combines the images from several telescopes so that it looks as though they came from a single large telescope. This technique enables a much wider interferometric imaging field than current techniques can provide. To investigate the feasibility, a 14. Imaging spectrometer wide field catadioptric design Science.gov (United States) Chrisp, Michael P. [Danville, CA] 2008-08-19 A wide field catadioptric imaging spectrometer with an immersive diffraction grating that compensates optical distortions. The catadioptric design has zero Petzval field curvature. The imaging spectrometer comprises an entrance slit for transmitting light, a system with a catadioptric lens and a dioptric lens for receiving the light and directing the light, an immersion grating, and a detector array. The entrance slit, the system for receiving the light, the immersion grating, and the detector array are positioned wherein the entrance slit transmits light to the system for receiving the light and the system for receiving the light directs the light to the immersion grating and the immersion grating receives the light and directs the light through the system for receiving the light to the detector array. 15. Ground-based complex for detection and investigation of fast optical transients in wide field Science.gov (United States) 2008-07-01 To study short stochastic optical flares of different objects (GRBs, SNe, etc.) of unknown localizations as well as NEOs it is necessary to monitor large regions of sky with high time resolution. We developed a system which consists of a wide-field camera (FOV of 400-600 sq. deg.) using a TV-CCD with a time resolution of 0.13 s to record and classify optical transients, and a fast robotic telescope aimed at performing their spectroscopic and photometric investigation just after detection. Such a two-telescope complex, TORTOREM, combining the wide-field camera TORTORA and the robotic telescope REM, has operated since May 2006 at the La Silla ESO observatory. Some results of its operation, including the first fast-time-resolution study of an optical transient accompanying a GRB and the discovery of its fine time structure, are presented. Prospects for improving the complex efficiency are given. 16. WorldWide Telescope: A Newly Open Source Astronomy Visualization System Science.gov (United States) Fay, Jonathan; Roberts, Douglas A. 2016-01-01 After eight years of development by Microsoft Research, WorldWide Telescope (WWT) was made an open source project at the end of June 2015. WWT was motivated by the desire to put new surveys of objects, such as the Sloan Digital Sky Survey, in the context of the night sky. The development of WWT under Microsoft started with the creation of a Windows desktop client that is widely used in various education, outreach and research projects. Using this, users can explore the data built into WWT as well as data that is loaded in. Beyond exploration, WWT can be used to create tours that present various datasets in a narrative format. In the past two years, the team developed a collection of web controls, including an HTML5 web client, which contains much of the functionality of the Windows desktop client. The project under Microsoft has deep connections with several user communities such as education through the WWT Ambassadors program, http://wwtambassadors.org/ and with planetariums and museums such as the Adler Planetarium. WWT can also support research, including using WWT to visualize the Bones of the Milky Way and rich connections between WWT and the Astrophysics Data System (ADS, http://labs.adsabs.harvard.edu/adsabs/). One important new research connection is the use of WWT to create dynamic and potentially interactive supplements to journal articles, which were first created in 2015. Now WWT is an open source, community-led project. The source code is available on GitHub (https://github.com/WorldWideTelescope). There is significant developer documentation on the website (http://worldwidetelescope.org/Developers/) and an extensive developer workshop (http://wwtworkshops.org/?tribe_events=wwt-developer-workshop) took place in the fall of 2015. Now that WWT is open source, anyone with an interest in the project can contribute. As important as helping out with coding, the project needs people interested in documentation, testing, training and other roles.
17. Wide-Field Ultraviolet Spectrometer for Planetary Exospheres and Thermospheres Science.gov (United States) Fillingim, M. O.; Wishnow, E. H.; Miller, T.; Edelstein, J.; Lillis, R. J.; Korpela, E.; England, S.; Shourt, W. V.; Siegmund, O.; McPhate, J.; Courtade, S.; Curtis, D. W.; Deighan, J.; Chaffin, M.; Harmoul, A.; Almatroushi, H. R. 2016-12-01 Understanding the composition, structure, and variability of a planet's upper atmosphere - the exosphere and thermosphere - is essential for understanding how the upper atmosphere is coupled to the lower atmosphere, magnetosphere and near-space environment, and the Sun. Ultraviolet spectroscopy can directly observe emissions from constituents in the exosphere and thermosphere. From such observations, the structure, composition, and variability can be determined. We will present the preliminary design for a wide field ultraviolet imaging spectrometer for remote sensing of planetary atmospheres. The imaging spectrometer achieves an extremely large instantaneous 110 degree field of view with no moving scanning mirror. The imaging resolution is very appropriate for extended atmospheric emission studies, with a resolution of better than 0.3 degrees at the center to 0.4 degrees at the edges of the field. The spectral range covers 120-170 nm, encompassing emissions from H, O, C, N, CO, and N2, with an average spectral resolution of 1.5 nm. The instrument is composed of a 2-element wide-field telescope, a 3-element Offner spectrometer, and a sealed MCP detector system contained within a compact volume of about 40 x 25 x 20 cm. We will present the optical and mechanical design as well as the predicted optical performance. The wide instantaneous FOV simplifies instrument and spacecraft operations by removing the need for multiple scans (either from a scan mirror or spacecraft slews) to cover the regions of interest. This instrumentation can allow for two-dimensional spectral information to be built up with simple spacecraft operation or just using spacecraft motion. Applications to the terrestrial geocorona and thermosphere will be addressed as well as applications to the upper atmospheres of other planetary objects. 18. Space telescope phase B definition study. Volume 2A: Science instruments, f24 field camera Science.gov (United States) Grosso, R. P.; McCarthy, D. J. 1976-01-01 The analysis and design of the F/24 field camera for the space telescope are discussed. The camera was designed for application to the radial bay of the optical telescope assembly and has an on axis field of view of 3 arc-minutes by 3 arc-minutes. 19. Thick Disks in the Hubble Space Telescope Frontier Fields Energy Technology Data Exchange (ETDEWEB) Elmegreen, Bruce G. [IBM Research Division, T.J. Watson Research Center, 1101 Kitchawan Road, Yorktown Heights, NY 10598 (United States); Elmegreen, Debra Meloy; Tompkins, Brittany; Jenks, Leah G., E-mail: [email protected], E-mail: [email protected] [Department of Physics and Astronomy, Vassar College, Poughkeepsie, NY 12604 (United States) 2017-09-20 Thick disk evolution is studied using edge-on galaxies in two Hubble Space Telescope Frontier Field Parallels. The galaxies were separated into 72 clumpy types and 35 spiral types with bulges. Perpendicular light profiles in F435W, F606W, and F814W (B, V, and I) passbands were measured at 1 pixel intervals along the major axes and fitted to sech^2 functions convolved with the instrument line spread function (LSF).
The LSF was determined from the average point spread function of ∼20 stars in each passband and field, convolved with a line of uniform brightness to simulate disk blurring. A spread function for a clumpy disk was also used for comparison. The resulting scale heights were found to be proportional to galactic mass, with the average height for a 10^(10±0.5) M_⊙ galaxy at z = 2 ± 0.5 equal to 0.63 ± 0.24 kpc. This value is probably the result of a blend between thin and thick disk components that cannot be resolved. Evidence for such two-component structure is present in an inverse correlation between height and midplane surface brightness. Models suggest that the thick disk is observed best between the clumps, and there the average scale height is 1.06 ± 0.43 kpc for the same mass and redshift. A 0.63 ± 0.68 mag V − I color differential with height is also evidence for a mixture of thin and thick components.
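The profile-fitting step described in the thick-disk abstract above (sech^2 functions convolved with the instrument LSF) is compact enough to sketch. The snippet below is a minimal illustration on synthetic data and assumes a Gaussian LSF; the paper itself builds its LSF from stellar point spread functions, so this is a stand-in, not the authors' pipeline.

```python
# Sketch of the vertical-profile fit described above: a sech^2 disk
# profile convolved with a Gaussian line spread function (LSF).
# Illustrative only -- the paper derives its LSF from stellar PSFs.
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.optimize import curve_fit

def sech2_lsf(z, amp, z0, h, lsf_sigma_pix=2.0):
    """sech^2 profile of scale height h, blurred by a Gaussian LSF."""
    model = amp / np.cosh((z - z0) / h) ** 2
    return gaussian_filter1d(model, lsf_sigma_pix)

# Synthetic perpendicular light profile (positions in pixels)
rng = np.random.default_rng(3)
z = np.arange(-50.0, 50.0)
truth = sech2_lsf(z, amp=100.0, z0=0.0, h=8.0)
data = truth + rng.normal(0.0, 2.0, z.size)

# With a 3-element p0, curve_fit fits amp, z0 and h only
popt, pcov = curve_fit(sech2_lsf, z, data, p0=(80.0, 1.0, 5.0))
print("fitted scale height [pix]:", popt[2])
```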
20. A wide field of view plasma spectrometer Science.gov (United States) Skoug, R. M.; Funsten, H. O.; Möbius, E.; Harper, R. W.; Kihara, K. H.; Bower, J. S. 2016-07-01 We present a fundamentally new type of space plasma spectrometer, the wide field of view plasma spectrometer, whose field of view is > 1.25π ster using fewer resources than traditional methods. The enabling component is analogous to a pinhole camera with an electrostatic energy-angle filter at the image plane. Particle energy-per-charge is selected with a tunable bias voltage applied to the filter plate relative to the pinhole aperture plate. For a given bias voltage, charged particles from different directions are focused by different angles to different locations. Particles with appropriate locations and angles can transit the filter plate and are measured using a microchannel plate detector with a position-sensitive anode. Full energy and angle coverage are obtained using a single high-voltage power supply, resulting in considerable resource savings and allowing measurements at fast timescales. We present laboratory prototype measurements and simulations demonstrating the instrument concept and discuss optimizations of the instrument design for application to space measurements. 1. The Wide Field Imager instrument for Athena Science.gov (United States) Meidinger, Norbert; Barbera, Marco; Emberger, Valentin; Fürmetz, Maria; Manhart, Markus; Müller-Seidlitz, Johannes; Nandra, Kirpal; Plattner, Markus; Rau, Arne; Treberspurg, Wolfgang 2017-08-01 ESA's next large X-ray mission ATHENA is designed to address the Cosmic Vision science theme 'The Hot and Energetic Universe'. It will provide answers to the two key astrophysical questions of how ordinary matter assembles into the large-scale structures we see today and how black holes grow and shape the Universe. The ATHENA spacecraft will be equipped with two focal plane cameras, a Wide Field Imager (WFI) and an X-ray Integral Field Unit (X-IFU). The WFI instrument is optimized for state-of-the-art resolution spectroscopy over a large field of view of 40 amin x 40 amin and high count rates up to and beyond 1 Crab source intensity. The cryogenic X-IFU camera is designed for high-spectral resolution imaging. Both cameras alternately share a mirror system based on silicon pore optics with a focal length of 12 m and a large effective area of about 2 m² at an energy of 1 keV. Although the mission is still in phase A, i.e. studying the feasibility and developing the necessary technology, the definition and development of the instrumentation has already made significant progress. The herein described WFI focal plane camera covers the energy band from 0.2 keV to 15 keV with 450 μm thick fully depleted back-illuminated silicon active pixel sensors of DEPFET type. The spatial resolution will be provided by one million pixels, each with a size of 130 μm x 130 μm. The time resolution requirement for the WFI large detector array is 5 ms and for the WFI fast detector 80 μs. The large effective area of the mirror system will be complemented by a high quantum efficiency above 90% for medium and higher energies. The status of the various WFI subsystems to achieve this performance will be described and recent changes will be explained here. 2. Wide field and diffraction limited array camera for SIRTF International Nuclear Information System (INIS) Fazio, G.G.; Koch, D.G.; Melnick, G.J. 1986-01-01 The Infrared Array Camera for the Space Infrared Telescope Facility (SIRTF/IRAC) is capable of two-dimensional photometry in either a wide field or diffraction-limited mode over the wavelength interval from 2 to 30 microns. Three different two-dimensional direct readout (DRO) array detectors are being considered: Band 1-InSb or Si:In (2-5 microns) 128 x 128 pixels, Band 2-Si:Ga (5-18 microns) 64 x 64 pixels, and Band 3-Si:Sb (18-30 microns) 64 x 64 pixels. The hybrid DRO readout architecture has the advantages of low read noise, random pixel access with individual readout rates, and nondestructive readout. The scientific goals of IRAC are discussed, which are the basis for several important requirements and capabilities of the array camera: (1) diffraction-limited resolution from 2-30 microns, (2) use of the maximum unvignetted field of view of SIRTF, (3) simultaneous observations within the three infrared spectral bands, and (4) the capability for broad and narrow bandwidth spectral resolution. A strategy has been developed to minimize the total electronic and environmental noise sources to satisfy the scientific requirements. 3. CFHT's SkyProbe: True Atmospheric Attenuation Measurement in the Telescope Field Science.gov (United States) Cuillandre, J.-C.; Magnier, E. A.; Isani, S.; Sabin, D.; Knight, W.; Kras, S.; Lai, K. Developed at the Canada France Hawaii Telescope (CFHT), SkyProbe is a system that allows the direct measurement of the true attenuation by clouds. This measurement is performed approximately once per minute, directly on the field viewed by the telescope. It has been possible to make this system relatively inexpensively due to low cost CCD cameras available on the amateur market. A crucial addition to this hardware is the recent availability of a full-sky photometry catalog at the appropriate depth: the Tycho catalog from the Hipparcos mission. A very important element in the SkyProbe data set creation is the automatic data analysis pipeline, Elixir, developed at CFHT for the improved operation of the CFHT wide-field imagers CFH12K and MegaCam. SkyProbe's FITS images are processed in real time, and the pipeline output (a zero point attenuation) provides the current sky transmission to the observers and aids immediate decision making. These measurements are also attached to the archived data, adding a key tool for future use by other astronomers. Specific features of the detector, such as intra-pixel quantum efficiency variations, must be taken into consideration since the data are strongly undersampled.
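SkyProbe's core measurement, as described above, is a photometric zero point computed against a reference catalog; clouds show up as a depressed zero point. A toy version of that computation follows, with hypothetical magnitudes standing in for real Tycho matches and an assumed clear-sky zero point; the star-matching step and the Elixir pipeline details are elided.

```python
# Toy version of a SkyProbe-style zero-point/attenuation estimate:
# compare instrumental magnitudes of matched catalog stars with their
# catalog magnitudes; clouds depress the zero point by the attenuation.
# (All values below are hypothetical, not from the real pipeline.)
import numpy as np

def zero_point(catalog_mag, instrumental_mag):
    # Median is robust against mismatches and variable stars.
    return np.median(catalog_mag - instrumental_mag)

cat = np.array([8.1, 9.3, 10.2, 8.7])          # catalog magnitudes
inst = np.array([-14.6, -13.3, -12.5, -14.0])  # -2.5*log10(counts)

zp_now = zero_point(cat, inst)
zp_photometric = 22.9   # assumed clear-sky zero point for this setup
attenuation_mag = zp_photometric - zp_now
print(f"current extinction by clouds: {attenuation_mag:.2f} mag")
```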
4. WorldWide Telescope and Google Sky: New Technologies to Engage Students and the Public Science.gov (United States) Landsberg, R. H.; Subbarao, M. U.; Dettloff, L. 2010-08-01 New, visually rich, astronomical software environments coupled with large web-accessible data sets hold the promise of new and exciting ways to teach, collaborate, and explore the universe. These freeware tools provide contextual views of astronomical objects, real time access to multi-wavelength sky surveys, and, most importantly, the ability to incorporate new data and to produce user created content. This interactive panel examined the capabilities of Google Sky and WorldWide Telescope, and explored case studies of how these tools have been used to create compelling and participatory educational experiences in both formal (i.e., K-12 and undergraduate non-science majors classrooms) and informal (e.g., museum) settings. The overall goal of this session was to stimulate a discussion about future uses of these technologies. Substantial time was allotted for participants to create conceptual designs of learning experiences for use at their home institutions, with feedback provided by the panel members. Activities included technical discussions (e.g., mechanisms for incorporating new data and dissemination tools), exercises in narrative preparation, and a brainstorming session to identify potential future uses of these technologies. 5. Wide Area Wind Field Monitoring Status & Results Energy Technology Data Exchange (ETDEWEB) Alan Marchant; Jed Simmons 2011-09-30 Volume-scanning elastic lidar has been investigated as a means to derive 3D dynamic wind fields for characterization and monitoring of wind energy sites. An eye-safe volume-scanning lidar system was adapted for volume imaging of aerosol concentrations out to a range of 300 m. Reformatting of the lidar data as dynamic volume images was successfully demonstrated. A practical method for deriving 3D wind fields from dynamic volume imagery was identified and demonstrated. However, the natural phenomenology was found to provide insufficient aerosol features for reliable wind sensing. The results of this study may be applicable to wind field measurement using injected aerosol tracers. 6. Review The Ooty Wide Field Array ) studies of the propagation of plasma irregularities through the inner heliosphere and (3) blind surveys for .... The digital systems (both in the field as well as in the central building) have been installed and interconnected by optical fibre links. 7. Review The Ooty Wide Field Array on the upgrade, as well as on the expected science uses can be found in other papers in this special issue. ... the legacy system the phased output of all modules are .... 3.1 HI at z∼3.3. The HI 21-cm line is emerging as an important probe of both cosmological parameters as well as structure formation over a wide redshift ... 8. Compact multispectral and hyperspectral imagers based on a wide field of view TMA Science.gov (United States) Grabarnik, S.; Taccola, M.; Maresi, L.; Moreau, V.; de Vos, L.; Versluys, J.; Gubbels, G. 2017-11-01 Three mirror anastigmat (TMA) telescope designs [1] have been implemented in projects ranging from narrow field-of-view large instruments such as Quickbird (2° FOV) [2] to smaller telescopes such as the 12° FOV JSS developed for the RapidEye mission [3].
This telescope configuration had also been selected for the PROBA-V payload, the successor of Vegetation, a multispectral imager flown on Spot-4 and subsequently on Spot-5 French satellites for Earth Observation and defence. PROBA-V, a small PROBA-type satellite, will continue the acquisition of vegetation data after the lifetime of Spot-5 expires in 2012. The PROBA-V TMA optical design achieves a 34° FOV across track and makes use of highly aspherical mirrors. Such a telescope had become feasible due to the recently developed Single Point Diamond Turning fabrication technology. The telescope mirrors and structure are fabricated in aluminium and form an athermal optical system. This paper presents the development of the compact wide FOV TMA, its implementation in the PROBA-V multispectral imager, and reviews the optics fabrication technology that made this development possible. Furthermore, this TMA is being used in combination with a linear variable filter in a breadboard of a compact hyperspectral imager. Moreover, current technology allows miniaturization of the TMA, so it is possible to use a TMA-based hyperspectral imager on a CubeSat platform. 9. Pixel History for Advanced Camera for Surveys Wide Field Channel Science.gov (United States) Borncamp, D.; Grogin, N.; Bourque, M.; Ogaz, S. 2017-06-01 Excess thermal energy present in a Charge-Coupled Device (CCD) can result in additional electrical current. This excess charge is trapped within the silicon lattice structure of the CCD electronics. It can persist through multiple exposures and have an adverse effect on science performance of the detectors unless properly flagged and corrected for. The traditional way to correct for this extra charge is to take occasional long-exposure images with the camera shutter closed. These images, generally referred to as "dark" images, allow for the measurement of the thermal-electron contamination present in each pixel of the CCD lattice. This so-called "dark current" can then be subtracted from the science images by re-scaling the dark to the corresponding exposure times. Pixels that have signal above a certain threshold are traditionally marked as "hot" and flagged in the data quality array. Many users will discard these because of the extra current. However, these pixels need not be unusable: if we find them to be stable over an anneal period, we can properly subtract the charge, and the extra Poisson noise from this dark current can be propagated into the error arrays. Here we present the results of a pixel history study that analyzes every individual pixel of the Hubble Space Telescope's (HST) Advanced Camera for Surveys (ACS) Wide Field Channel (WFC) CCDs over time and allows pixels that were previously flagged as unusable to be brought back into the science image as a reliable pixel.
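The dark-correction scheme the ACS/WFC abstract describes (scale a long dark exposure to the science exposure time, subtract it, and flag pixels above a hot-pixel threshold) can be sketched in a few lines. The threshold and array shapes below are illustrative assumptions, not values from the ACS calibration pipeline.

```python
# Minimal sketch of the dark-correction scheme described above:
# scale a long "dark" exposure to the science exposure time, subtract
# it, and flag pixels whose dark current exceeds a hot-pixel threshold.
# (Array values and threshold are illustrative, not the ACS pipeline.)
import numpy as np

def dark_correct(science, dark, t_sci, t_dark, hot_thresh=0.08):
    """Return the dark-subtracted image and a boolean hot-pixel mask.

    science, dark : 2-D count arrays (electrons)
    t_sci, t_dark : exposure times in seconds
    hot_thresh    : dark-current limit in e-/s for flagging
    """
    dark_rate = dark / t_dark                # e-/s per pixel
    corrected = science - dark_rate * t_sci  # rescaled dark subtraction
    hot_mask = dark_rate > hot_thresh        # candidates for DQ flagging
    return corrected, hot_mask

rng = np.random.default_rng(0)
sci = rng.poisson(50.0, (64, 64)).astype(float)
drk = rng.poisson(5.0, (64, 64)).astype(float)
img, hot = dark_correct(sci, drk, t_sci=500.0, t_dark=1000.0)
print("hot pixels flagged:", int(hot.sum()))
```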
10. X-shooter, the new wide band intermediate resolution spectrograph at the ESO Very Large Telescope NARCIS (Netherlands) Vernet, J.; Dekker, H.; D'Odorico, S.; Kaper, L.; Kjaergaard, P.; Hammer, F.; Randich, S.; Zerbi, F.; Groot, P.J.; Hjorth, J.; Guinouard, I.; Navarro, R.; Adolfse, T.; Albers, P.W.; Amans, J.-P.; Andersen, J.J.; Andersen, M.I.; Binetruy, P.; Bristow, P.; Castillo, R.; Chemla, F.; Christensen, L.; Conconi, P.; Conzelmann, R.; Dam, J.; De Caprio, V.; de Ugarte Postigo, A.; Delabre, B.; Di Marcantonio, P.; Downing, M.; Elswijk, E.; Finger, G.; Fischer, G.; Flores, H.; François, P.; Goldoni, P.; Guglielmi, L.; Haigron, R.; Hanenburg, H.; Hendriks, I.; Horrobin, M.; Horville, D.; Jessen, N.C.; Kerber, F.; Kern, L.; Kiekebusch, M.; Kleszcz, P.; Klougart, J.; Kragt, J.; Larsen, H.H.; Lizon, J.-L.; Lucuix, C.; Mainieri, V.; Manuputy, R.; Martayan, C.; Mason, E.; Mazzoleni, R.; Michaelsen, N.; Modigliani, A.; Moehler, S.; Møller, P.; Norup Sørensen, A.; Nørregaard, P.; Péroux, C.; Patat, F.; Pena, E.; Pragt, J.; Reinero, C.; Rigal, F.; Riva, M.; Roelfsema, R.; Royer, F.; Sacco, G.; Santin, P.; Schoenmaker, T.; Spano, P.; Sweers, E.; ter Horst, R.; Tintori, M.; Tromp, N.; van Dael, P.; van Vliet, H.; Venema, L.; Vidali, M.; Vinther, J.; Vola, P.; Winters, R.; Wistisen, D.; Wulterkens, G.; Zacchei, A. 2011-01-01 X-shooter is the first 2nd generation instrument of the ESO Very Large Telescope (VLT). It is a very efficient, single-target, intermediate-resolution spectrograph that was installed at the Cassegrain focus of UT2 in 2009. The instrument covers, in a single exposure, the spectral range from 300 to 11. X-shooter, the new wide band intermediate resolution spectrograph at the ESO Very Large Telescope DEFF Research Database (Denmark) Vernet, J.; Dekker, H.; D'Odorico, S. 2011-01-01 X-shooter is the first 2nd generation instrument of the ESO Very Large Telescope (VLT). It is a very efficient, single-target, intermediate-resolution spectrograph that was installed at the Cassegrain focus of UT2 in 2009. The instrument covers, in a single exposure, the spectral range from 300 t... 12. WISH: Wide-field Imaging Surveyor for High-redshift Science.gov (United States) 2015-08-01 We introduce the concept and current status of the WISH project and discuss the science cases. WISH is a proposed space science mission for JAXA, which is dedicated to deep and wide-field near-infrared imaging surveys. The mission comprises a 1.5 m cooled telescope as well as an imager with a FoV of ~850 square arcmin. The main goal of WISH is to detect and study galaxies at z=8-15 in the earliest history of structure formation in the universe. The key feature is to conduct the WISH Ultra Deep Survey, which images in total 100 square degrees in 6 broad-band filters at 0.9-4.5 micron down to 28AB magnitude. While more than 10^5 galaxies at z=8-9 and 10^4 galaxies at z=11-12 will be detected, WISH-UDS is designed to constrain the UV luminosity function out to z=15. Depending on the models of the earliest evolution history, 1-1000 galaxies at z~15 (~100 galaxies for the moderate cases) will be detected. The UV spectral properties as well as the clustering properties of galaxies at z=8-15 can be studied as well; the UV slope can be measured up to z=15, and the stellar and dark-matter-halo masses can be obtained up to z=9. WISH UDS can provide excellent opportunities for studying SNe at high redshift. Up to ~7000 type Ia SNe at z>1 can be detected and the distance modulus can be constrained with a precision of 0.9-1.5% at z>1.5.
More than 100 Super Luminous SNe at z>6, and 10 SLSNe at z>10, can also be detected, allowing us to study the earliest history of massive star formation in the universe. The WISH imaging surveys, as well as WISHSpec, an optional parallel-operation simple IFU spectrograph, also provide unique opportunities in various astronomical fields. The WISH mission proposal was submitted to JAXA in February 2015 for the first down selection of the JAXA Large Strategic Science Mission, targeting a launch date in 2020-22. International collaborations including SAO (G. Fazio et al.), LAM (D. Burgarella et al.) and Canada (M. Sawicki et al.) are also actively coordinated. 13. Fields Of View Of X-Ray-Telescope Collimator Tubes Science.gov (United States) Safren, Harvey G. 1992-01-01 Results of a theoretical analysis conducted to determine the fields of view and irradiation patterns of collimator tubes of various shapes indicate that, while the background flux incident on the surface at the outlet of a tube is determined by the ratio between its diameter and length, the shape of the tube is important in screening out background radiation. 14. AAS Publishing: What Can WorldWide Telescope Do for You? Science.gov (United States) Kohler, Susanna 2016-01-01 During the 227th American Astronomical Society meeting last week in Kissimmee, the AAS announced the exciting news that it will become the new institutional home of Microsoft's WorldWide Telescope (WWT) astronomy software. WWT is a scriptable and interactive way of browsing the multi-wavelength sky as it is seen from Earth, and the universe as we would travel within it. WWT can be run either as a desktop app or from within an internet browser. And, of particular interest to researchers, it's an incredibly useful way to visualize and contextualize astronomical data. What does WWT's transition to the AAS as its new host mean? WWT was open-sourced by Microsoft Research last year, and hosting by the AAS will permit broad community involvement in the form of contribution of both code and guidance in WWT's further development. All of this begs the question: why might YOU want to use WWT? That depends on whether your goal is to use it for research, education, or just for fun. WWT for Research: If you thought WWT was just for education and outreach, think again! Here are just a few things you can do with WWT to advance your astronomical research: 1) Put surveys into context, on top of more than 40 different all-sky images, spanning the electromagnetic spectrum. 2) Perform literature searches from the sky. 3) Compare images and catalogs at different wavelengths, on-the-fly in seconds. 4) Show your own online data to the world, in an API that allows users to see it on the sky in their browsers. 5) Communicate to colleagues and learners about the sky using interactive tours of your data and ideas. An example of WWT used to perform astronomy research is the recently highlighted work on the bones of the Milky Way, in which the authors used WWT to overlay multiple data sets and visually identify and then search for infrared dark clouds along the predicted positions of Milky Way spiral arms. An example of WWT used to communicate research is given in this paper, wherein a link in the caption of a 15. Solar cooker effect test and temperature field simulation of radio telescope subreflector International Nuclear Information System (INIS) Chen, Deshen; Wang, Huajie; Qian, Hongliang; Zhang, Gang; Shen, Shizhao 2016-01-01 Highlights: • Solar cooker effect test of a telescope subreflector is conducted for the first time.
• The cause and temperature distribution regularities are analyzed contrastively. • Simulation methods are proposed using light beam segmentation and tracking methods. • The validity of the simulation methods is evaluated using the test results. - Abstract: The solar cooker effect can cause a local high temperature of the subreflector and can directly affect the working performance of the radio telescope. To study the daily temperature field and solar cooker effect of a subreflector, experimental studies are carried out with a 3-m-diameter radio telescope model for the first time. Initially, the solar temperature distribution rules, especially the solar cooker effect, are summarized according to the field test results under the most unfavorable conditions. Then, a numerical simulation for the solar temperature field of the subreflector is studied by light beam segmentation and tracking methods. Finally, the validity of the simulation methods is evaluated using the test results. The experimental studies prove that the solar cooker effect really exists and should not be overlooked. In addition, the simulation methods for the subreflector temperature field proposed in this paper are effective. The research methods and conclusions can provide valuable references for the thermal design, monitoring and control of similar high-precision radio telescopes. 16. High-resolution optical telescope for ultraviolet /UV/ radiation field Science.gov (United States) Karayan, W. W. 1979-01-01 Design techniques are discussed for all-reflecting optics from first-order system considerations and applications currently utilized in the field of astronomical optics. The solution of the Dall-Kirkham design problem is described, showing the advantage of inexpensive construction as compared with higher order surfaces. The design process reported here is an F/5 collecting system which mates directly with the spectrometer; it is capable of achieving the desired high resolution and sensitivity requirements. The theoretical limit of aberration tolerances is achieved with less than 1/8 of a wavelength at final focus (OPD). The design of a spectrometer for ultraviolet (UV) radiation and its mechanism is included in this study. 17. Wide-field high-performance geosynchronous imaging International Nuclear Information System (INIS) Wood, H. John; Jenstrom, Del; Wilson, Mark; Hinkal, Sanford; Kirchman, Frank 1998-01-01 The NASA Mission to Planet Earth (MTPE) Program and the National Oceanic and Atmospheric Administration (NOAA) are sponsoring the Advanced Geosynchronous Studies (AGS) to develop technologies and system concepts for Earth observation from geosynchronous orbit. This series of studies is intended to benefit both MTPE science and the NOAA GOES Program. Within the AGS program, advanced imager trade studies have investigated two candidate concepts for near-term advanced geosynchronous imagers. One concept uses a scan mirror to direct the line of sight from a 3-axis stabilized platform. Another eliminates the need for a scan mirror by using an agile spacecraft bus to scan the entire instrument. The purpose of this paper is to discuss the optical design trades and system issues encountered in evaluating the two scanning approaches. The imager design started with a look at first principles: what is the most efficient way to image the Earth in those numerous spectral bands of interest to MTPE scientists and NOAA weather forecasters? Optical design trades included rotating filter wheels and dispersive grating instruments.
The design converged on a bandpass filter instrument using four focal planes to cover the spectral range 0.45 to 13.0 micrometers. The first imager design uses a small agile spacecraft supporting an afocal optical telescope. Dichroic beamsplitters feed refractive objectives to four focal planes. The detectors are a series of long linear and rectangular arrays which are scanned in a raster fashion over the 17 degree Earth image. The use of the spacecraft attitude control system to raster the imager field-of-view (FOV) back and forth over the Earth eliminates the need for a scan mirror. However, the price paid is the significant energy and time required to reverse the spacecraft slew motions at the end of each scan line. Hence, it is desired to minimize the number of scan lines needed to cover the full Earth disk. This desire, coupled with the ground 18. Wide-Field, Deep UV Raman Hyperspectral Imager, Phase I Data.gov (United States) National Aeronautics and Space Administration — ChemImage Sensor Systems (CISS), teaming with the University of South Carolina, proposes a revolutionary wide-field Raman hyperspectral imaging system capable of... 19. The High-Speed and Wide-Field TORTORA Camera: description & results. Science.gov (United States) Greco, G.; Beskin, G.; Karpov, S.; Guarnieri, A.; Bartolini, C.; Bondar, S.; Piccioni, A.; Molinari, E. We present the description and the most significant results of the wide-field and ultra-fast TORTORA camera devoted to the investigation of rapid changes in light intensity in phenomena occurring within extremely short periods of time and randomly distributed over the sky. In particular, ground-based TORTORA observations synchronized with the gamma-ray BAT telescope on board the Swift satellite have made it possible to trace the optical burst time structure of the Naked-Eye GRB 080319B with an unprecedented level of accuracy. 20. Wide field monitoring of the X-ray sky using Rotation Modulation Collimators DEFF Research Database (Denmark) Lund, Niels; Brandt, Søren 1995-01-01 Wide field monitoring is of particular interest in X-ray astronomy due to the strong time-variability of most X-ray sources. Not only do the time profiles of the persistent sources contain characteristic signatures of the underlying physical systems, but, additionally, some of the most intriguing ... sources have long periods of quiescence in which they are almost undetectable as X-ray sources, interspersed with relatively brief periods of intense outbursts, where we have unique opportunities of studying dynamical effects in, for instance, the evolution of accretion discs. Another question for which ... by the grazing incidence telescope systems. The potential of RMCs as wide-field monitors has recently been demonstrated by the WATCH instruments on GRANAT and EURECA. It now appears likely that, for use on large, 3-axis stabilized spacecraft, a pinhole camera system may provide better sensitivity than... 1. A focal plane detector design for a wide-band Laue-lens telescope DEFF Research Database (Denmark) Caroli, E.; Auricchio, N.; Amati, L. 2005-01-01 The energy range above 60 keV is important for the study of many open problems in high energy astrophysics, such as the role of Inverse Compton with respect to synchrotron or thermal processes in GRBs, non-thermal mechanisms in SNRs, the study of the high energy cut-offs in AGN spectra ..., and the detection of nuclear and annihilation lines.
Recently the development of high energy Laue lenses with broad energy bandpasses from 60 to 600 keV has been proposed for a Hard X-ray focusing Telescope (HAXTEL) in order to study the X-ray continuum of celestial sources. The required focal plane detector ... should have high detection efficiency over the entire operative range, a spatial resolution of about 1 mm, an energy resolution of a few keV at 500 keV and a sensitivity to linear polarization. We describe a possible configuration of the focal plane detector based on several CdTe/CZT pixelated layers... 2. PERSPECTIVE: Toward a wide-field retinal prosthesis Science.gov (United States) Ameri, Hossein; Ratanapakorn, Tanapat; Ufer, Stefan; Eckhardt, Helmut; Humayun, Mark S.; Weiland, James D. 2009-06-01 The purpose of this paper is to present a wide field electrode array that may increase the field of vision in patients implanted with a retinal prosthesis. Mobility is often impaired in patients with low vision, particularly in those with peripheral visual loss. Studies on low vision patients as well as simulation studies on normally sighted individuals have indicated a strong correlation between the visual field and mobility. In addition, it has been shown that an increased visual field is associated with a significant improvement in visual acuity and object discrimination. Current electrode arrays implanted in animals or humans vary in size; however, the retinal area covered by the electrodes has a maximum projected visual field of about 10°. We have designed wide field electrode arrays that could potentially provide a visual field of 34°, which may significantly improve mobility. Tests performed on a mechanical eye model showed that it was possible to fix 10 mm wide flexible polyimide dummy electrode arrays onto the retina using a single retinal tack. They also showed that the arrays could conform to the inner curvature of the eye. Surgeries on an enucleated porcine eye model demonstrated feasibility of implantation of 10 mm wide arrays through a 5 mm eye wall incision.
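The 34° figure quoted above for a 10 mm array can be checked with a reduced-eye model in which roughly 17 mm separates the posterior nodal point from the retina; that nodal distance is a textbook assumption on my part, not a value from the paper.

```python
# Rough check of the ~34 degree visual field quoted for a 10 mm array.
# Assumes a reduced-eye model with ~17 mm from nodal point to retina;
# for a curved retina the small-arc approximation angle = s / r applies.
import math

nodal_distance_mm = 17.0   # assumed nodal-point-to-retina distance
array_width_mm = 10.0      # width of the wide-field electrode array

visual_field_deg = math.degrees(array_width_mm / nodal_distance_mm)
print(f"projected visual field: {visual_field_deg:.1f} deg")  # ~33.7 deg
```

The same arithmetic gives roughly 3.4° of visual field per millimetre of retina, which is also consistent with the ~10° field quoted for the existing, smaller arrays.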
3. Contextual Narrative as an Information Architecture for the WorldWide Telescope Science.gov (United States) Wong, C. 2008-05-01 The evolution of the world wide web has enabled access to information about almost any topic conceivable. However, access to information is only one component of learning and understanding. How do people initially engage with unfamiliar or uninteresting subjects, where they do not know enough even to ask a question? How do educators and communicators make a topic sufficiently compelling to pique curiosity and sustain enough interest to facilitate learning? 4. A new Recoil Proton Telescope for energy and fluence measurement of fast neutron fields Energy Technology Data Exchange (ETDEWEB) Lebreton, Lena; Bachaalany, Mario [IRSN / LMDN (Institut de Radioprotection et de Surete nucleaire / Laboratoire de Metrologie et de dosimetrie des neutrons), Cadarache Bat.159, 13115 Saint Paul-lez-Durance, (France); Husson, Daniel; Higueret, Stephane [IPHC / RaMsEs (Institut Pluridisciplinaire Hubert Curien / Radioprotection et Mesures Environnementales), 23 rue du loess - BP28, 67037 Strasbourg cedex 2, (France)] 2015-07-01 The spectrometer ATHENA (Accurate Telescope for High Energy Neutron metrology Applications) is being developed at the IRSN / LMDN (Institut de Radioprotection et de Surete nucleaire / Laboratoire de Metrologie et de dosimetrie des neutrons) and aims at characterizing the energy and fluence of fast neutron fields. The detector is a Recoil Proton Telescope and measures neutron fields in the range of 5 to 20 MeV. This telescope is intended to become a primary standard for both energy and fluence measurements. The neutron detection is achieved by a polyethylene radiator for n-p conversion, three 50 μm thick silicon sensors that use CMOS technology for the proton tracking, and a 3 mm thick silicon diode to measure the residual proton energy. This first prototype used CMOS sensors called MIMOSTAR, initially developed for heavy ion physics. The use of CMOS sensors and a silicon diode increases the intrinsic efficiency of the detector by a factor of ten compared with conventional designs. The first prototype has already been built, and the study was successful in terms of the energy and fluence measurements it delivered. For monoenergetic beams from 5 to 19 MeV, the telescope achieved an energy resolution between 5 and 11% and a fluence difference of 5-7% compared to other in-house standards. A second and final prototype of the detector is being designed. It will hold upgraded CMOS sensors called FastPixN. These CMOS sensors are expected to run 400 times faster than the older version and therefore give the telescope the ability to support neutron fluxes on the order of 10^7 to 10^8 cm^-2 s^-1. The first prototype's results showed that a 50 μm pixel size is enough for precise scattering-angle reconstruction. Simulations using MCNPX and GEANT4 are already in place for further improvements. A ΔE diode will replace the third CMOS sensor and will be installed right before the silicon diode for better recoil proton selection. The final prototype with 5. Advanced MOKE magnetometry in wide-field Kerr-microscopy Science.gov (United States) Soldatov, I. V.; Schäfer, R. 2017-10-01 The measurement of MOKE (Magneto-Optical Kerr Effect) magnetization loops in a wide-field Kerr microscope offers the advantage that the relevant domain images along the loop can be readily recorded. As the microscope's objective lens is exposed to the magnetic field, the loops are usually strongly distorted by non-linear Faraday rotations of the polarized light that occur in the objective lens and that are superimposed on the MOKE signal. In this paper, an experimental method, based on a motorized analyzer, is introduced which allows the Faraday contributions to be compensated, thus leading to pure MOKE loops. A wide field Kerr microscope, equipped with this technology, works well as a laser-based MOKE magnetometer, additionally offering domain images and thus providing the basis for loop interpretation. 6. Review FRB Event Rate Predictions for the Ooty Wide Field Array 2Centre for Theoretical Studies, Indian Institute of Technology, Kharagpur 721 302, India. 3National .... majorly upgraded version of the Ooty Radio Telescope, ... The symbols NA, ν, B, νc and FoV stand for number of elements, observational frequency, bandwidth, spectral channel width and field-of-view of the telescope. 7. Results of magnetic field measurements performed with the 6-m telescope. IV. Observations in 2010 Science.gov (United States) Romanyuk, I. I.; Semenko, E. A.; Kudryavtsev, D. O.; Moiseeva, A. V.; Yakunin, I. A. 2017-10-01 We present the results of measurements of magnetic fields, radial velocities and rotation velocities for 92 objects, mainly main-sequence chemically peculiar stars. Observations were performed at the 6-m BTA telescope using the Main Stellar Spectrograph with a Zeeman analyzer.
In 2010, twelve new magnetic stars were discovered: HD 17330, HD 29762, HD 49884, HD 54824, HD 89069, HD 96003, HD 113894, HD 118054, HD 135679, HD 138633, HD 138777, BD +53.1183. The presence of a field is suspected in HD 16705, HD 35379 and HD 35881. Observations of standard stars without a magnetic field confirm the absence of systematic errors which can introduce distortions into the measurements of the longitudinal field. The paper comments on the results of the investigation of each star. 8. Photometric redshifts for the CFHTLS T0004 deep and wide fields Science.gov (United States) Coupon, J.; Ilbert, O.; Kilbinger, M.; McCracken, H. J.; Mellier, Y.; Arnouts, S.; Bertin, E.; Hudelot, P.; Schultheis, M.; Le Fèvre, O.; Le Brun, V.; Guzzo, L.; Bardelli, S.; Zucca, E.; Bolzonella, M.; Garilli, B.; Zamorani, G.; Zanichelli, A.; Tresse, L.; Aussel, H. 2009-06-01 Aims: We compute photometric redshifts in the fourth public release of the Canada-France-Hawaii Telescope Legacy Survey. This unique multi-colour catalogue comprises u^*, g', r', i', z' photometry in four deep fields of 1 deg² each and 35 deg² distributed over three wide fields. Methods: We used a template-fitting method to compute photometric redshifts calibrated with a large catalogue of 16 983 high-quality spectroscopic redshifts from the VVDS-F02, VVDS-F22, DEEP2, and the zCOSMOS surveys. The method includes correction of systematic offsets, template adaptation, and the use of priors. We also separated stars from galaxies using both size and colour information. Results: Comparing with galaxy spectroscopic redshifts, we find a photometric redshift dispersion, σ_Δz/(1+z_s), of 0.028-0.030 and an outlier rate, |Δz| ≥ 0.15×(1+z_s), of 3-4% in the deep field at i'_AB < 24. In the wide fields, we find a dispersion of 0.037-0.039 and an outlier rate of 3-4% at i'_AB < 22.5. Beyond i'_AB = 22.5 in the wide fields the number of outliers rises from 5% to 10% at i'_AB < 23 and i'_AB < 24, respectively. For the wide sample the systematic redshift bias stays below 1% out to i'_AB < 22.5, whereas we find no significant bias in the deep fields. We investigated the effect of tile-to-tile photometric variations and demonstrated that the accuracy of our photometric redshifts is reduced by at most 21%. Application of our star-galaxy classifier reduced the contamination by stars in our catalogues from 60% to 8% at i'_AB < 22.5 in our field with the highest stellar density while keeping a complete galaxy sample. Our CFHTLS T0004 photometric redshifts are distributed to the community. Our release includes 592 891 (i'_AB < 22.5) and 244 701 (i'_AB < 24) reliable galaxy photometric redshifts in the wide and deep fields, respectively. Based on observations obtained with MegaPrime/MegaCam, a joint project of CFHT and CEA/DAPNIA, at the Canada-France-Hawaii Telescope (CFHT) which is
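The dispersion and outlier statistics quoted in the CFHTLS abstract above have simple definitions: the scatter of Δz/(1+z_s) and the fraction of objects with |Δz| ≥ 0.15×(1+z_s). The sketch below computes both on synthetic data; the robust NMAD estimator used here is common in photo-z work but may differ in detail from the paper's own definition of the dispersion.

```python
# Sketch of the photo-z quality metrics quoted above: the normalized
# dispersion of dz/(1+z_spec) and the outlier rate defined by
# |dz| >= 0.15*(1+z_spec). Synthetic arrays stand in for the real
# spectroscopic calibration catalogue.
import numpy as np

def photoz_metrics(z_phot, z_spec):
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    # NMAD: a robust stand-in for the dispersion, common in photo-z work
    sigma = 1.48 * np.median(np.abs(dz - np.median(dz)))
    outliers = np.abs(z_phot - z_spec) >= 0.15 * (1.0 + z_spec)
    return sigma, outliers.mean()

rng = np.random.default_rng(1)
z_spec = rng.uniform(0.2, 1.2, 5000)
z_phot = z_spec + 0.03 * (1 + z_spec) * rng.standard_normal(5000)

sigma, eta = photoz_metrics(z_phot, z_spec)
print(f"dispersion = {sigma:.3f}, outlier rate = {100 * eta:.1f}%")
```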
9. Wide-field surveys from the SNAP mission International Nuclear Information System (INIS) 2002-01-01 The Supernova/Acceleration Probe (SNAP) is a proposed space-borne observatory that will survey the sky with a wide-field optical/NIR imager. The images produced by SNAP will have an unprecedented combination of depth, solid-angle, angular resolution, and temporal sampling. Two 7.5 square-degree fields will be observed every four days over 16 months to a magnitude depth of AB = 27.7 in each of nine filters. Co-adding images over all epochs will give AB = 30.3 per filter. A 300 square-degree field will be surveyed with no repeat visits to AB = 28 per filter. The nine filters span 3500-17000 Å. Although the survey strategy is tailored for supernova and weak gravitational lensing observations, the resulting data support a broad range of auxiliary science programs. 10. Cross-Calibrating Sunspot Magnetic Field Strength Measurements from the McMath-Pierce Solar Telescope and the Dunn Solar Telescope Science.gov (United States) Watson, Fraser T.; Beck, Christian; Penn, Matthew J.; Tritschler, Alexandra; Pillet, Valentín Martinez; Livingston, William C. 2015-11-01 In this article we describe a recent effort to cross-calibrate data from an infrared detector at the McMath-Pierce Solar Telescope and the Facility InfraRed Spectropolarimeter (FIRS) at the Dunn Solar Telescope. A synoptic observation program at the McMath-Pierce has measured umbral magnetic field strengths since 1998, and this data set has recently been compared with umbral magnetic field observations from SOHO/MDI and SDO/HMI. To further improve on the data from McMath-Pierce, we compared the data with measurements taken at the Dunn Solar Telescope with far greater spectral resolution than has been possible with space instrumentation. To minimise potential disruption to the study, concurrent umbral measurements were made so that the relationship between the two datasets could be most accurately characterised. We find that there is a strong agreement between the umbral magnetic field strengths recorded by each instrument, and we reduced the FIRS data in two different ways to successfully test this correlation further. 11. WISE: The Wide-field Infrared Survey Explorer Science.gov (United States) Eisenhardt, Peter R.; Wright, E. L.; Benford, D.; Blain, A.; Cohen, M.; Cutri, R.; Gautier, T. N.; Jarrett, T.; Kirkpatrick, J. D.; Leisawitz, D.; Lonsdale, C.; Mainzer, A.; Mather, J.; McLean, I.; McMillan, R.; Mendez, B.; Padgett, D.; Ressler, M.; Skrutskie, M.; Stanford, S. A.; Walker, R. 2009-01-01 WISE will map the entire sky at 3.3, 4.7, 12 and 23 microns with sensitivities of 0.12, 0.16, 0.65, and 2.6 mJy. WISE will find the most luminous galaxies in the universe, the closest stars to the Sun, and detect most main belt asteroids larger than 3 km. WISE will be placed into a Sun-synchronous polar orbit on a Delta 7320-10 rocket, rotating at a constant rate while a scan mirror freezes the line of sight during each exposure, covering the sky in 6 months following a one month checkout. Orbit to orbit overlap provides 8 or more exposures at each location. The instrument, provided by the Space Dynamics Laboratory, includes an all-reflective aluminum telescope with a 40 cm primary built by SSG-Tinsley, a solid hydrogen cryostat built by Lockheed-Martin's Advanced Technology Center, and 1024x1024 pixel Si:As and HgCdTe arrays built by DRS and Teledyne. Dichroic beamsplitters allow simultaneous images in the four bands over a 47'x47' field of view with 5" resolution to be obtained every 11 seconds. Ball Aerospace is providing the spacecraft, including a 500 W fixed solar array, Li-ion battery, two star trackers, reaction wheels, and torque rods. The 50 GB per day of images are losslessly compressed, stored in flash memory, and downlinked at 100 Mbps four times per day using a fixed antenna and TDRSS satellites. The Infrared Processing and Analysis Center will process the data and deliver the image atlas and source catalog, with a preliminary release 6 months after the survey, and a final release 2 years after the survey.
JPL manages the project for UCLA PI Ned Wright, and conducts mission operations. Education and Public Outreach is provided by UC Berkeley's Space Science Laboratory. WISE hardware is presently being integrated and tested, with launch scheduled in November 2009. 12. THE FIRST ULTRA-COOL BROWN DWARF DISCOVERED BY THE WIDE-FIELD INFRARED SURVEY EXPLORER International Nuclear Information System (INIS) Mainzer, A.; Cushing, Michael C.; Eisenhardt, P.; Skrutskie, M.; Beaton, R.; Gelino, C. R.; Kirkpatrick, J. Davy; Jarrett, T.; Masci, F.; Marsh, K.; Padgett, D.; Marley, Mark S.; Saumon, D.; Wright, E.; McLean, I.; Dietrich, M.; Garnavich, P.; Rueff, K.; Kuhn, O.; Leisawitz, D. 2011-01-01 We report the discovery of the first new ultra-cool brown dwarf (BD) found with the Wide-field Infrared Survey Explorer (WISE). The object's preliminary designation is WISEPC J045853.90+643451.9. Follow-up spectroscopy with the LUCIFER instrument on the Large Binocular Telescope indicates that it is a very late-type T dwarf with a spectral type approximately equal to T9. Fits of an IRTF/SpeX 0.8-2.5 μm spectrum to the model atmospheres of Marley and Saumon indicate an effective temperature of approximately 600 K as well as the presence of vertical mixing in its atmosphere. The new BD is easily detected by WISE, with a signal-to-noise ratio of ∼36 at 4.6 μm. Current estimates place it at a distance of 6-10 pc. This object represents the first in what will likely be hundreds of nearby BDs found by WISE that will be suitable for follow-up observations, including those with the James Webb Space Telescope. One of the two primary scientific goals of the WISE mission is to find the coolest, closest stars to our Sun; the discovery of this new BD proves that WISE is capable of fulfilling this objective. 13. Wide-field ultraviolet imager for astronomical transient studies Science.gov (United States) Mathew, Joice; Ambily, S.; Prakash, Ajin; Sarpotdar, Mayuresh; Nirmal, K.; Sreejith, A. G.; Safonova, Margarita; Murthy, Jayant; Brosch, Noah 2018-03-01 Though the ultraviolet (UV) domain plays a vital role in the studies of astronomical transient events, the UV time-domain sky remains largely unexplored. We have designed a wide-field UV imager that can be flown on a range of available platforms, such as high-altitude balloons, CubeSats, and larger space missions. The major scientific goals are the variability of astronomical sources, detection of transients such as supernovae, novae, tidal disruption events, and characterizing active galactic nuclei variability. The instrument has an 80 mm aperture with a circular field of view of 10.8 degrees, an angular resolution of ˜22 arcsec, and a 240-390 nm spectral observation window. The detector for the instrument is a Microchannel Plate (MCP)-based image intensifier with both photon counting and integration capabilities. An FPGA-based detector readout mechanism and real time data processing have been implemented. The imager is designed in such a way that its lightweight and compact nature are well suited to CubeSat dimensions. Here we present various design and developmental aspects of this UV wide-field transient explorer. 14. All sky coordination initiative, simple service for wide-field monitoring systems to cooperate in searching for fast optical transients Science.gov (United States) Karpov, S.; Sokołowski, M.; Gorbovskoy, E.
Here we stress the necessity of cooperation between different wide-field monitoring projects (FAVOR/TORTORA, Pi of the Sky, MASTER, etc.), aimed at independent detection of fast optical transients, in order to maximize the area of the sky covered at any moment and to coordinate the monitoring of gamma-ray telescopes' field of view. We review the current solutions available for it and propose a simple protocol with a dedicated service (ASCI) for such systems to share their current status and pointing schedules. 15. DMD-based programmable wide field spectrograph for Earth observation Science.gov (United States) Zamkotsian, Frédéric; Lanzoni, Patrick; Liotard, Arnaud; Viard, Thierry; Costes, Vincent; Hébert, Philippe-Jean 2015-03-01 In Earth Observation, Universe Observation and Planet Exploration, scientific return could be optimized in future missions using MOEMS devices. In Earth Observation, we propose an innovative reconfigurable instrument, a programmable wide-field spectrograph where both the FOV and the spectrum could be tailored thanks to a 2D micromirror array (MMA). For a linear 1D field of view (FOV), the principle is to use an MMA to select the wavelengths by acting on intensity. This component is placed in the focal plane of a first grating. On the MMA surface, the spatial dimension is along one side of the device and for each spatial point, its spectrum is displayed along the perpendicular direction: each spatial and spectral feature of the 1D FOV is then fully adjustable dynamically and/or programmable. A second stage with an identical grating recomposes the beam after wavelength selection, leading to an output tailored 1D image. A mock-up has been designed, fabricated and tested. The micromirror array is the largest DMD in 2048 x 1080 mirrors format, with a pitch of 13.68 μm. A synthetic linear FOV is generated and typical images have been recorded at the output focal plane of the instrument. By tailoring the DMD, we could successfully modify each pixel of the input image: for example, it is possible to remove bright objects or, for each spatial pixel, modify the spectral signature. The very promising results obtained on the mock-up of the programmable wide-field spectrograph reveal the efficiency of this new instrument concept for Earth Observation. 16. Affordable Wide-field Optical Space Surveillance using sCMOS and GPUs Science.gov (United States) Zimmer, P.; McGraw, J.; Ackermann, M. 2016-09-01 Recent improvements in sCMOS technology allow for affordable, wide-field, and rapid cadence surveillance from LEO to out past GEO using largely off-the-shelf hardware. sCMOS sensors, until very recently, suffered from several shortcomings when compared to CCD sensors - lower sensitivity, smaller physical size and less predictable noise characteristics. Sensors that overcome the first two of these are now available commercially, and the principals at J.T. McGraw and Associates (JTMA) have developed observing strategies that minimize the impact of the third, while leveraging the key features of sCMOS: fast readout and low average readout noise. JTMA has integrated a new generation sCMOS sensor into an existing COTS telescope system in order to develop and test new detection techniques designed for uncued optical surveillance across a wide range of apparent object angular rates - from the degree-per-second scale of LEO objects to a few arcseconds per second for objects out past GEO. One further complication arises from this: increased useful frame rate means increased data volume.
Fortunately, GPU technology continues to advance at a breakneck pace, and we report on the results and performance of our new detection techniques implemented on new generation GPUs. Early results show significance within 20% of the expected theoretical limiting signal-to-noise using commodity GPUs in near real time across a wide range of object parameters, closing the gap in detectivity between moving objects and tracked objects.
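The abstract above does not spell out the detection algorithm, but a standard approach to uncued moving-object detection at an assumed angular rate is shift-and-stack: co-add frames along a trial motion vector so that a faint mover gains signal-to-noise while staying fixed in the stacked frame. The following is my illustration of that general technique, not JTMA's pipeline.

```python
# Minimal shift-and-stack sketch for uncued moving-object detection:
# co-add a stack of frames along a trial angular rate so a faint mover
# builds up signal while the background noise averages down.
import numpy as np

def shift_and_stack(frames, rate_xy):
    """Co-add frames shifted along a trial motion vector (pix/frame)."""
    stack = np.zeros_like(frames[0], dtype=float)
    for i, frame in enumerate(frames):
        dx, dy = (np.asarray(rate_xy) * i).round().astype(int)
        stack += np.roll(frame, (-dy, -dx), axis=(0, 1))
    return stack / len(frames)

rng = np.random.default_rng(2)
frames = rng.normal(0.0, 1.0, (16, 64, 64))   # noise-only frames
for i in range(16):                           # inject a faint mover
    frames[i, 32, 5 + 2 * i] += 0.8           # moving 2 pix/frame in x

result = shift_and_stack(frames, rate_xy=(2.0, 0.0))
print("peak-to-noise after stacking:", result.max() / result.std())
```

In practice many trial rates are searched in parallel, which is exactly the kind of embarrassingly parallel workload that maps well onto GPUs.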
One aim of turbine-mounted wind lidars is to use them for preview measurements in connection with advanced feed-forward control systems … for load reduction and power optimization. To date, the main attention has been on control schemes where measurements of wind speed and direction upwind are used for yaw and speed corrections. In this study we investigate experimentally the feasibility of using lidars integrated in the turbine blades … in the top and bottom of the rotor plane. Conclusion We present here what we believe are the first successful wind speed measurements from a dual-telescope lidar installed on the blade of an operating wind turbine. The full-scale field test performed in the summer of 2012 has clearly demonstrated… 19. Wide-field subdiffraction RESOLFT microscopy using fluorescent protein photoswitching. Science.gov (United States) Schwentker, Miriam A; Bock, Hannes; Hofmann, Michael; Jakobs, Stefan; Bewersdorf, Jörg; Eggeling, Christian; Hell, Stefan W 2007-03-01 Subdiffraction fluorescence imaging is presented in a parallelized wide-field arrangement exploiting the principle of reversible saturable/switchable optical transitions (RESOLFT). The diffraction barrier is overcome by photoswitching ensembles of the label protein asFP595 between a nonfluorescent off-state and a fluorescent on-state. Relying on ultralow continuous-wave intensities, reversible protein switching facilitates parallelized fast image acquisition. The RESOLFT principle is implemented by illuminating with intensity distributions featuring zero-intensity lines that are further apart than the conventional Abbe resolution limit. The subdiffraction resolution is verified by recording live Escherichia coli bacteria labeled with asFP595. The obtained resolution of 50 nm (≈λ/12) is limited only by the spectroscopic properties of the proteins and the imperfections of the optical implementation, not on principle grounds. (c) 2007 Wiley-Liss, Inc. 20. Portable wide-field hand-held NIR scanner Science.gov (United States) Jung, Young-Jin; Roman, Manuela; Carrasquilla, Jennifer; Erickson, Sarah J.; Godavarty, Anuradha 2013-03-01 Near-infrared (NIR) optical imaging is one of the most widely used medical imaging techniques for breast cancer imaging, functional brain mapping, and many other applications. However, conventional NIR imaging systems are bulky and expensive, thereby limiting their accelerated clinical translation. Herein a new compact (6 × 7 × 12 cm³), cost-effective, wide-field NIR scanner has been developed for both contact and non-contact real-time imaging in reflectance and transmission modes. The scanner mainly consists of an NIR light source (700–900 nm), an NIR-sensitive CCD camera, and custom-developed image acquisition and processing software to image an area of 12 cm². Phantom experiments have been conducted to assess the feasibility of diffuse optical imaging, using India ink as an absorption-based contrast agent. The developed NIR system measured the light-intensity change in an absorption-contrasted target down to 4 cm depth in transillumination mode. Preliminary in-vivo studies demonstrated the feasibility of real-time monitoring of blood-flow changes. Currently, extensive in-vivo studies are being carried out with the ultra-portable NIR scanner in order to assess its potential for breast imaging. 1. Restriction of motions in wide pairs in the Galactic field Science.gov (United States) Matvienko, A. S.; Orlov, V. V.
2015-06-01 The motions of the components of wide binary stars in the solar neighborhood in the regular Galactic gravitational field on time scales of ∼10^10 yr have been studied numerically. The regions of restricted motions of the components in wide pairs have been found as a function of the initial conditions: the magnitude of the relative velocity of the components, their mutual distance, and the inclination of the relative velocity vector to the Galactic plane. The size of the main part of the region of restricted motions is approximately equal to the tidal radius. Profound changes in the eccentricity of the binary orbit occur at inclinations close to 90°, which can lead to close approaches of the stars with a pericenter distance less than 1 AU. In the case of retrograde motions (the binary rotates in a direction opposite to the Galactic rotation), there is a region of restricted motions extending at least to 10 pc. Examples of the trajectories of relative motion of the stars and the change in osculating orbital elements are given for systems with restricted motions. 2. Development of stable monolithic wide-field Michelson interferometers. Science.gov (United States) Wan, Xiaoke; Ge, Jian; Chen, Zhiping 2011-07-20 Bulk wide-field Michelson interferometers are very useful for high-precision applications in remote sensing and astronomy. A stable monolithic Michelson interferometer is a key element in high-precision radial velocity (RV) measurements for extrasolar planet searches and studies. Thermal stress analysis shows that matching coefficients of thermal expansion (CTEs) is a critical requirement for ensuring interferometer stability. This requirement leads to a novel design using BK7 and LAK7 materials, such that the monolithic interferometer is free from thermal distortion. The processes of design, fabrication, and testing of the interferometers are described in detail. In performance evaluations, the field angle is typically 23.8° and the thermal sensitivity is typically −2.6×10^-6/°C near 550 nm, which corresponds to ∼800 m/s/°C on the RV scale. Low-cost interferometer products have been commissioned in multiple RV instruments, and they are delivering high-stability performance over long-term operations. © 2011 Optical Society of America 3. WIDE FIELD CO MAPPING IN THE REGION OF IRAS 19312+1950 Energy Technology Data Exchange (ETDEWEB) Nakashima, Jun-ichi [Department of Astronomy and Geodesy, Ural Federal University, Lenin Avenue 51, 620000, Ekaterinburg (Russian Federation); Ladeyschikov, Dmitry A.; Sobolev, Andrej M. [Astronomical Observatory, Ural Federal University, Lenin Avenue 51, 620000, Ekaterinburg (Russian Federation); Zhang, Yong; Hsia, Chih-Hao [Department of Physics, University of Hong Kong, Pokfulam Road, Hong Kong (China); Yung, Bosco H. K., E-mail: [email protected] [N. Copernicus Astronomical Center, Rabiańska 8, 87-100 Toruń (Poland) 2016-07-01 We report the results of wide-field CO mapping in the region of IRAS 19312+1950. This Infrared Astronomical Satellite (IRAS) object exhibits SiO/H₂O/OH maser emission, and is embedded in a chemically rich molecular component, the origin of which is still unknown. In order to reveal the entire structure and gas mass of the surrounding molecular component for the first time, we have mapped a wide region around IRAS 19312+1950 in the ¹²CO J = 1–0, ¹³CO J = 1–0 and C¹⁸O J = 1–0 lines using the Nobeyama 45 m telescope.
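A quick arithmetic check on entry 2's numbers (our own, not from the paper): a fractional optical-path sensitivity of 2.6×10^-6 per °C corresponds to a Doppler-equivalent drift of c × 2.6×10^-6 ≈ 780 m/s per °C, consistent with the quoted ∼800 m/s/°C.

# Worked check: fractional optical-path drift -> RV-equivalent drift
c = 2.998e8               # speed of light, m/s
frac_per_degC = 2.6e-6    # thermal sensitivity quoted in the abstract, 1/degC
print(c * frac_per_degC)  # ~780 m/s per degC, matching the quoted ~800 m/s/degC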
In conjunction with archival CO maps, we investigated a region up to 20′ × 20′ in size around this IRAS object. We calculated the CO gas mass assuming local thermodynamic equilibrium, the stellar velocity through the interstellar medium assuming an analytic bow-shock model, and the absolute luminosity, using the latest archival data and the trigonometric parallax distance. The derived gas mass (225–478 M⊙) of the molecular component and the relatively large luminosity (2.63 × 10^4 L⊙) suggest that the central SiO/H₂O/OH maser source is a red supergiant rather than an asymptotic giant branch (AGB) star or post-AGB star. 4. Demonstration of the Wide-Field Imaging Interferometer Testbed Using a Calibrated Hyperspectral Image Projector Science.gov (United States) Bolcar, Matthew R.; Leisawitz, David; Maher, Steve; Rinehart, Stephen 2012-01-01 The Wide-field Imaging Interferometer testbed (WIIT) at NASA's Goddard Space Flight Center uses a dual-Michelson interferometric technique. The WIIT combines stellar interferometry with Fourier-transform interferometry to produce high-resolution spatial-spectral data over a large field of view. This combined technique could be employed on future NASA missions such as the Space Infrared Interferometric Telescope (SPIRIT) and the Sub-millimeter Probe of the Evolution of Cosmic Structure (SPECS). While both SPIRIT and SPECS would operate at far-infrared wavelengths, the WIIT demonstrates the dual-interferometry technique at visible wavelengths. The WIIT will produce hyperspectral image data, so a true hyperspectral object is necessary. A calibrated hyperspectral image projector (CHIP) has been constructed to provide such an object. The CHIP uses Digital Light Processing (DLP) technology to produce customized, spectrally diverse scenes. CHIP scenes will have approximately 1.6-micron spatial resolution and the capability of producing arbitrary spectra in the band between 380 nm and 1.6 microns, with approximately 5-nm spectral resolution. Each pixel in the scene can take on a unique spectrum. Spectral calibration is achieved with an onboard fiber-coupled spectrometer. In this paper we describe the operation of the CHIP. Results from WIIT observations of CHIP scenes will also be presented. 5. Static and predictive tomographic reconstruction for wide-field multi-object adaptive optics systems. Science.gov (United States) Correia, C; Jackson, K; Véran, J-P; Andersen, D; Lardière, O; Bradley, C 2014-01-01 Multi-object adaptive optics (MOAO) systems are still in their infancy: their complex optical designs for tomographic, wide-field wavefront sensing, coupled with open-loop (OL) correction, make their calibration a challenge. The correction of a discrete number of specific directions in the field allows for streamlined application of a general class of spatio-angular algorithms, initially proposed in Whiteley et al. [J. Opt. Soc. Am. A 15, 2097 (1998)], which is compatible with partial on-line calibration. The recent Learn & Apply algorithm from Vidal et al. [J. Opt. Soc. Am. A 27, A253 (2010)] can then be reinterpreted in a broader framework of tomographic algorithms and is shown to be a special case that exploits the particulars of OL and aperture-plane phase conjugation. An extension embedding a temporal prediction step to tackle sky-coverage limitations is discussed.
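Entry 5's temporal prediction step can be illustrated with a toy autoregressive predictor. The sketch below, a per-mode AR(1) fit with invented shapes and names, is a deliberately simplified stand-in for the optimal and suboptimal predictors the paper compares.

import numpy as np

def ar1_predict(slopes):
    # slopes: (T, N) time series of N wavefront-sensor slope measurements
    x = slopes - slopes.mean(axis=0)
    # per-mode AR(1) coefficient: <x_t * x_(t-1)> / <x_(t-1)^2>
    a = (x[1:] * x[:-1]).sum(axis=0) / (x[:-1] ** 2).sum(axis=0)
    # extrapolate one frame ahead to offset the added integration lag
    return slopes.mean(axis=0) + a * x[-1]

Longer integrations improve wavefront-sensor SNR but add lag; a predictor of this kind trades some of that lag back, which is how prediction buys roughly one magnitude of guide-star faintness in the Raven simulations described next.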
The trade-off between lengthening the camera integration period (thereby increasing the system lag error) and the resulting improvement in SNR can be shifted to higher guide-star magnitudes by introducing temporal prediction. The derivation of the optimal predictor and a comparison to suboptimal autoregressive models are provided using temporal structure functions. It is shown, using end-to-end simulations of Raven, the MOAO science and technology demonstrator for the 8 m Subaru telescope, that prediction by itself allows the use of 1-magnitude-fainter guide stars. 6. Wide-Field Astronomical Surveys in the Next Decade Energy Technology Data Exchange (ETDEWEB) Strauss, Michael A.; /Princeton U.; Tyson, J.Anthony; /UC, Davis; Anderson, Scott F.; /Washington U., Seattle, Astron. Dept.; Axelrod, T.S.; /LSST Corp.; Becker, Andrew C.; /Washington U., Seattle, Astron. Dept.; Bickerton, Steven J.; /Princeton U.; Blanton, Michael R.; /New York U.; Burke, David L.; /SLAC; Condon, J.J.; /NRAO, Socorro; Connolly, A.J. 2009-03-01 Wide-angle surveys have been an engine for new discoveries throughout the modern history of astronomy, and have been among the most highly cited and scientifically productive observing facilities in recent years. This trend is likely to continue over the next decade, as many of the most important questions in astrophysics are best tackled with massive surveys, often in synergy with each other and in tandem with the more traditional observatories. We argue that these surveys are most productive and have the greatest impact when the data from the surveys are made public in a timely manner. The rise of the 'survey astronomer' is a substantial change in the demographics of our field; one of the most important challenges of the next decade is to find ways to recognize the intellectual contributions of those who work on the infrastructure of surveys (hardware, software, survey planning and operations, and databases/data distribution), and to make career paths to allow them to thrive. 7. ROTATION PERIODS OF WIDE BINARIES IN THE KEPLER FIELD International Nuclear Information System (INIS) Janes, K. A. 2017-01-01 In a search of proper motion catalogs for common proper motion stars in the field of the Kepler spacecraft I identified 93 likely binary systems. A comparison of their rotation periods is a test of the gyrochronology concept. To find their periods I calculated the autocorrelation function (ACF) of the Kepler mission photometry for each star. In most systems for which good periods can be found, the cooler star has a longer period than the hotter component, in general agreement with models. However, there is a wide range in the gradients of lines connecting binary pairs in a period–color diagram. Furthermore, near the solar color, only a few stars have longer periods than the Sun, suggesting that they, and their cooler companions, are not much older than the Sun. In addition, there is an apparent gap at intermediate periods in the period distribution of the late K and early M stars. Either star formation in this direction has been variable, or stars evolve in period at a non-uniform rate, or some stars evolve more rapidly than others at the same mass. Finally, using the ACF as a measure of the activity level, I found that while the F, G, and early K stars become less active as their periods increase, there is no correlation between period and activity for the mid K to early M stars. 8.
Wide-Field Astronomical Surveys in the Next Decade Energy Technology Data Exchange (ETDEWEB) Strauss, Michael A.; /Princeton U.; Tyson, J.Anthony; /UC, Davis; Anderson, Scott F.; /Washington U., Seattle, Astron. Dept.; Axelrod, T.S.; /LSST Corp.; Becker, Andrew C.; /Washington U., Seattle, Astron. Dept.; Bickerton, Steven J.; /Princeton U.; Blanton, Michael R.; /New York U.; Burke, David L.; /SLAC; Condon, J.J.; /NRAO, Socorro; Connolly, A.J.; /Washington U., Seattle, Astron. Dept.; Cooray, Asantha R.; /UC, Irvine; Covey, Kevin R.; /Harvard U.; Csabai, Istvan; /Eotvos U.; Ferguson, Henry C.; /Baltimore, Space Telescope Sci.; Ivezic, Zeljko; /Washington U., Seattle, Astron. Dept.; Kantor, Jeffrey; /LSST Corp.; Kent, Stephen M.; /Fermilab; Knapp, G.R.; /Princeton U.; Myers, Steven T.; /NRAO, Socorro; Neilsen, Eric H., Jr.; /Fermilab; Nichol, Robert C.; /Portsmouth U., ICG /Harish-Chandra Res. Inst. /Caltech, IPAC /Potsdam, Max Planck Inst. /Harvard U. /Hawaii U. /UC, Berkeley, Astron. Dept. /Baltimore, Space Telescope Sci. /NOAO, Tucson /Carnegie Mellon U. /Chicago U., Astron. Astrophys. Ctr. 2011-11-14 [Abstract identical to entry 6 above.] 9. ROTATION PERIODS OF WIDE BINARIES IN THE KEPLER FIELD Energy Technology Data Exchange (ETDEWEB) Janes, K. A. [Astronomy Department, Boston University, Boston, MA 02215 (United States)] 2017-01-20 [Abstract identical to entry 7 above.]
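The ACF period search used in entries 7 and 9 is straightforward to sketch. The following is a generic illustration of the technique, not the author's code; the cadence value and the 0.1 peak threshold are our own choices.

import numpy as np

def acf_rotation_period(flux, cadence_days=0.0204):
    # flux: evenly sampled, detrended light curve (Kepler long cadence, ~29.4 min)
    x = flux - np.mean(flux)
    acf = np.correlate(x, x, mode="full")[len(x) - 1:]
    acf /= acf[0]
    # the lag of the first prominent local maximum estimates the rotation period
    for k in range(1, len(acf) - 1):
        if acf[k - 1] < acf[k] > acf[k + 1] and acf[k] > 0.1:
            return k * cadence_days
    return None  # no clear periodicity found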
10. Refined adaptive optics simulation with wide field of view for the E-ELT International Nuclear Information System (INIS) Chebbo, Manal 2012-01-01 Refined simulation tools for wide-field AO systems (such as MOAO, MCAO or LTAO) on ELTs present new challenges. The increasing number of degrees of freedom (which scales as the square of the telescope diameter) makes standard simulation codes unusable, owing to the huge number of operations to be performed at each step of the adaptive optics (AO) loop process. This computational burden requires new approaches to the computation of the DM voltages from WFS data. The classical matrix inversion and matrix-vector multiplication have to be replaced by a smarter iterative solution of the least-squares or minimum-mean-square-error criterion (based on sparse-matrix approaches). Moreover, for this new generation of AO systems, the concepts themselves will become more complex: data fusion from multiple laser and natural guide stars (LGS/NGS) will have to be optimized, mirrors covering the whole field of view will have to be coupled with dedicated mirrors inside the scientific instrument itself using split or integrated tomography schemes, differential pupil and/or field rotations will have to be considered, etc. All these new features should be carefully simulated, analysed and quantified in terms of performance before any implementation in AO systems. For these reasons I developed, in collaboration with ONERA, a full simulation code based on the iterative solution of linear systems with many parameters (using sparse matrices). On this basis, I introduced new concepts of filtering and data fusion (LGS/NGS) to effectively manage modes such as tip, tilt and defocus in the entire process of tomographic reconstruction. The code will also eventually help to develop and test complex control laws (multi-DM and multi-field) which have to manage a combination of an adaptive telescope and a post-focal instrument including dedicated deformable mirrors. The first application of this simulation tool has been studied in the framework of the EAGLE multi-object spectrograph. 11. The First Wide-field X-ray Imaging Telescope for Observations of Charge Exchange Data.gov (United States) National Aeronautics and Space Administration — Soft X-ray emission from the interaction of the solar wind with the Earth's exosphere provides a very significant foreground to all soft X-ray observations. It is... 12. Science with a Wide-field 6.5-m Spectroscopic Telescope OpenAIRE R. J. Terlevich; J. J. González; M. Chávez; E. Bertone; A. Carramiñana; S. Vázquez; J. Franco; M. Peimbert; F. Cobos; J. Bohigas; A. López; S. Cuevas; J. A. de Diego; E. Ruiz; C. Tejada 2007-01-01 We describe the science case for a wide-field 6.5-m spectroscopic telescope. The main science goals are tied to two different modes of operation: (1) the study of the detailed astrophysics of extended systems by means of high-quality, medium-resolution spectroscopy using multiple integral field units (IFUs), and (2) the study of the structure of the universe through multi-object spectroscopy of distant and/or compact systems. ... 13.
Comparison of wavefront control algorithms and first results on the high-contrast imager for complex aperture telescopes (HiCAT) testbed Science.gov (United States) Leboulleux, L.; N'Diaye, M.; Mazoyer, J.; Pueyo, L.; Perrin, M.; Egron, S.; Choquet, E.; Sauvage, J.-F.; Fusco, T.; Soummer, R. 2017-09-01 The next generation of space telescopes for direct imaging and spectroscopy of exoplanets includes telescopes with a monolithic mirror, such as the Wide Field Infrared Survey Telescope (WFIRST) [1], and Large Ultra-Violet Optical Infrared (LUVOIR) telescopes with segmented primary mirrors, like ATLAST [2, 3] or HDST [4]. 14. 3D defect detection using optical wide-field microscopy Science.gov (United States) Tympel, Volker; Schaaf, Marko; Srocka, Bernd 2007-06-01 We report a method to detect signed differences between two similar data sets representing 3-dimensional intensity profiles recorded by optical wide-field microscopes. The signed differences describe missing or unexpected intensity values, defined as defects. In technical applications like wafer and mask inspection, data sets often represent surfaces. The reported method is able to describe the size and position of a defect, especially in relation to the neighboring surface, and is called Three-Dimension-Aberration (TDA) technology. To increase performance and to handle defects of different sizes, a scaled bottom-up method is implemented that starts with strongly reduced data sets in the search for large defects. Each analysis contains three steps. The first step is a correlation to calculate the displacement vector between the similar data sets. In the second step a new data set consisting of intensity differences is created; extreme values in this data set mark the positions of defects. The stability of detection can be improved by the use of linear and non-linear filters. If all differences are below a threshold, the bottom-up method continues with the next larger-scale data set. Otherwise the defect is assumed to be detected, and the third step finds the convex hull of the defect and searches for the neighboring surface. As a result, the defect is described by a parameter set including its relative position. Because of the layered structure of the data set and the bottom-up technique, the method is suitable for multi-core processor architectures. 15. Wide-Field Imaging of Omega Centauri with the Advanced Camera for Surveys Science.gov (United States) Haggard, D.; Dorfman, J. L.; Cool, A. M.; Anderson, J.; Bailyn, C. D.; Edmonds, P. D.; Grindlay, J. E. 2003-12-01 We present initial results of a wide-field imaging study of the globular cluster Omega Cen (NGC 5139) using the Advanced Camera for Surveys (ACS). We have obtained a mosaic of 3×3 pointings of the cluster using the HST/ACS Wide Field Camera covering approximately 10′ × 10′, roughly out to the cluster's half-mass radius. Using F435W (B435), F625W (R625) and F658N (H-alpha) filters, we are searching for optical counterparts of Chandra X-ray sources and studying the cluster's stellar populations. Here we report the discovery of an optical counterpart to the X-ray source identified by Rutledge et al. (2002) as a possible quiescent neutron star on the basis of its X-ray spectrum. The star's magnitude and color (R625 = 24.4, B435−R625 = 1.5) place it more than 1.5 magnitudes to the blue side of the main sequence. Through the H-alpha filter it is about 1.3 magnitudes brighter than cluster stars of comparable R625 magnitude.
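A toy version of entry 14's align-subtract-threshold idea, without the multiscale bottom-up loop or the convex-hull step; the correlation here is integer-precision only, and every name is ours.

import numpy as np
from scipy.ndimage import shift as nd_shift
from scipy.signal import fftconvolve

def detect_defects(ref, test, nsigma=5.0):
    # step 1: displacement vector via FFT cross-correlation
    corr = fftconvolve(test, ref[::-1, ::-1, ::-1], mode="same")
    disp = (np.array(np.unravel_index(np.argmax(corr), corr.shape))
            - np.array(corr.shape) // 2)
    # step 2: signed-difference volume after alignment
    diff = nd_shift(test, -disp, order=1) - ref
    # step 3: extreme values mark candidate defect positions
    z = (diff - diff.mean()) / diff.std()
    return np.argwhere(np.abs(z) > nsigma), diff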
The blue color and H-alpha excess suggest the presence of an accretion disk, implying that the neutron star is a member of a quiescent low-mass X-ray binary. The object's faint absolute magnitude (M625 ∼ 10.6, M435 ∼ 11.8) implies that the system contains an unusually weak disk and that the companion, if it is a main-sequence star, is of very low mass. This work is supported by NASA grant GO-9442 from the Space Telescope Science Institute. 16. Non-uniform Solar Temperature Field on Large Aperture, Fully-Steerable Telescope Structure Science.gov (United States) Liu, Yan 2016-09-01 In this study, a 110-m fully steerable radio telescope was used as an analysis platform and an integral parametric finite element model of the antenna structure was built in the ANSYS thermal analysis module. The boundary conditions of periodic air temperature, solar radiation, long-wave radiation, shadows of the surrounding environment, etc. were computed at 30-min intervals under a cloudless sky on a summer day, i.e., worst-case climate conditions. The transient structural temperatures were then analyzed over a period of several days of sunshine, starting from a rational initial structural temperature distribution, until the whole set of structural temperatures converged to the results obtained the day before. The non-uniform temperature field distribution of the entire structure and the main reflector surface RMS were obtained as functions of the changes in pitch and azimuth angle over the observation period. Variations of the solar cooker effect over time and its spatial distribution on the secondary reflector were examined to elucidate the mechanism of the effect. The results presented here not only provide valuable real-time data for the design, construction, sensor arrangement and thermal deformation control of actuators but also provide a troubleshooting reference for existing actuators. 17. Wide Field-of-View (FOV) Soft X-Ray Imager Data.gov (United States) National Aeronautics and Space Administration — The Wide Field-of-View (FOV) Soft X-Ray Imager will flight-prove and optimize a prototype imager for advanced Heliophysics science analysis. The Wide Field-of-View... 18. The effects of Earth's magnetic field on 3-inch diameter photomultipliers used in KM3NeT neutrino telescope Science.gov (United States) Giordano, V.; Aiello, S.; Leonora, E.; Randazzo, N. 2016-04-01 The KM3NeT neutrino telescope will be the largest underwater neutrino telescope, located in the abyss of the Mediterranean Sea. In neutrino telescopes the key element of the detector is the optical module; for KM3NeT it consists of 31 PMTs housed inside a transparent pressure-resistant 17-inch glass sphere that serves as mechanical protection while ensuring good light transmission. Since the PMTs installed in an underwater neutrino telescope can change their orientation because of movements of the detector structure due to sea currents, the influence of Earth's magnetic field has been investigated. Magnetic shielding by means of a mu-metal cage is used to reduce magnetic effects and to make the response of the PMT sufficiently orientation-independent. In order to quantify the effect of the magnetic field, we compared measurements of the variation in gain, transit-time spread and detection efficiency for a 3-inch PMT in shielded and unshielded conditions at three PMT inclinations. The data show that the variations are sufficiently low, especially for the timing properties. 19.
Little Blue Dots in the Hubble Space Telescope Frontier Fields: Precursors to Globular Clusters? Science.gov (United States) Elmegreen, Debra Meloy; Elmegreen, Bruce G. 2017-12-01 Galaxies with low stellar masses and specific star formation rates (sSFRs) above ∼10^-7.4 yr^-1 were examined on images of the Hubble Space Telescope Frontier Field Parallels for Abell 2744 and MACS J0416.1-2403. They appear as unresolved “Little Blue Dots” (LBDs). They are less massive and have higher sSFRs than “blueberries” studied by Yang et al. and higher sSFRs than “Blue Nuggets” studied by Tacchella et al. We divided the LBDs into three redshift bins and, for each, stacked the B435, V606, and I814 images convolved to the same stellar point-spread function (PSF). Their radii were determined from PSF deconvolution to be ∼80 to ∼180 pc. The high sSFRs suggest that their entire stellar mass has formed in only 1% of the local age of the universe. The sSFRs at similar epochs in local dwarf galaxies are lower by a factor of ∼100. Assuming that the star formation rate is ε_ff M_gas/t_ff for efficiency ε_ff, gas mass M_gas, and free-fall time t_ff, the gas mass and gas-to-star mass ratio are determined. This ratio exceeds 1 for reasonable efficiencies, and is likely to be ∼5 even with a high ε_ff of 0.1. We consider whether these regions are forming today’s globular clusters. With their observed stellar masses, the maximum likely cluster mass is ∼10^5 M⊙, but if star formation continues at the current rate for ∼10 t_ff ∼ 50 Myr before feedback and gas exhaustion stop it, then the maximum cluster mass could become ∼10^6 M⊙. 20. Simulated predictions for H I at z = 3.35 with the Ooty Wide Field Array - I. Instrument and the foregrounds Science.gov (United States) Marthi, Visweshwar Ram; Chatterjee, Suman; Chengalur, Jayaram N.; Bharadwaj, Somnath 2017-11-01 Foreground removal is the most important step in detecting the large-scale redshifted H I 21-cm signal. Modelling foreground spectra is challenging and is further complicated by the chromatic response of the telescope. We present a multifrequency angular power spectrum (MAPS) estimator for use in a survey for redshifted H I 21-cm emission from z ∼ 3.35 and demonstrate its ability to accurately characterize the foregrounds. This survey will be carried out with the two wide-field interferometer modes of the upgraded Ooty Radio Telescope, called the Ooty Wide Field Array (OWFA), at 326.5 MHz. We have tailored the two-visibility correlation for OWFA to estimate the MAPS and test it with simulated foregrounds. In the process, we describe a software model that encodes the geometry and the details of the telescope and simulates a realistic model of the bright radio sky. This article presents simulations that include the full chromatic response of the telescope in addition to the frequency dependence intrinsic to the foregrounds. We find that the visibility-correlation MAPS estimator recovers the input angular power spectrum accurately and that the instrument response to the foregrounds dominates the systematic errors in the recovered foreground power spectra. 1. Wide Field Imaging of the Hubble Deep Field-South Region III: Catalog Science.gov (United States) Palunas, Povilas; Collins, Nicholas R.; Gardner, Jonathan P.; Hill, Robert S.; Malumuth, Eliot M.; Rhodes, Jason; Teplitz, Harry I.; Woodgate, Bruce E. 2002-01-01 We present 1/2 square degree uBVRI imaging around the Hubble Deep Field-South.
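To unpack the relation quoted in entry 19: inverting SFR = ε_ff M_gas/t_ff gives M_gas = SFR·t_ff/ε_ff. The numbers below are our own illustrative inputs, not values from the paper.

# Illustrative inversion of SFR = eps_ff * M_gas / t_ff
sfr = 0.2     # star formation rate, Msun/yr (representative guess)
t_ff = 5e6    # free-fall time, yr (entry 19 implies ~5 Myr, since 10 t_ff ~ 50 Myr)
eps_ff = 0.1  # the "high" efficiency per free-fall time quoted in the abstract
print(sfr * t_ff / eps_ff)  # ~1e7 Msun of gas for these inputs

Even at this deliberately high ε_ff, the implied gas mass exceeds a typical LBD stellar mass, which echoes the abstract's gas-to-star mass ratios above unity.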
These data have been used in earlier papers to examine the QSO population and the evolution of the correlation function in the region around the HDF-S. The images were obtained with the Big Throughput Camera at CTIO in September 1998. The images reach 5σ limits of u ≈ 24.4, B ≈ 25.6, V ≈ 25.3, R ≈ 24.9 and I ≈ 23.9. We present a catalog of ≈22,000 galaxies. We also present number-magnitude counts and a comparison with other observations of the same field. The data presented here are available over the World Wide Web. 2. TAURUS - a wide field imaging Fabry-Perot spectrometer International Nuclear Information System (INIS) Atherton, P.D.; Taylor, K. 1983-01-01 TAURUS, an imaging Fabry-Perot system developed by the Royal Greenwich Observatory and Imperial College London, is described. The imaging process is explained and the technique is compared with grating spectrographs. It is argued that TAURUS is superior for obtaining field information from extended emission-line sources. (Auth.) 3. MeerLICHT and BlackGEM: custom-built telescopes to detect faint optical transients NARCIS (Netherlands) Bloemen, S. (Steven); Groot, P.J. (Paul J.); Woudt, P. (Patrick); Wolt, M.K. (Marc Klein); Mcbride, V. (Vanessa); Nelemans, G. (Gijs); Körding, E. (Elmar); Pretorius, M.L. (Magaretha L.); Roelfsema, R. (Ronald); Bettonvil, F. (Felix); Balster, H. (Harry); Bakker, R. (Roy); Dolron, P. (Peter); Van Elteren, A. (Arjen); Elswijk, E. (Eddy); Engels, A. (Arno); R.P. Fender; Fokker, M. (Marc); Haan, M. (Menno De); Hagoort, K. (Klaas); De Hoog, J. (Jasper); Horst, R.T. (Rik Ter); Van Der Kevie, G. (Giel); Kozłowski, S. (Stanisław); Kragt, J. (Jan); Lech, G. (Grzegorz); Le Poole, R. (Rudolf); Lesman, D. (Dirk); J. Morren (Johan); Navarro, R. (Ramon); Paalberends, W.-J. (Willem-Jelle); K.G. Paterson (Kerry); Pawłaszek, R. (Rafał); Pessemier, W. (Wim); Raskin, G. (Gert); Rutten, H. (Harrie); L.H.A. Scheers (Bart); Schuil, M. (Menno); Sybilski, P.W. (Piotr W.) 2016-01-01 We present the MeerLICHT and BlackGEM telescopes, which are wide-field optical telescopes that are currently being built to study transient phenomena, gravitational-wave counterparts and variable stars. The telescopes have 65 cm primary mirrors and a 2.7 square degree field of view. 4. Wide Band Low Noise Love Wave Magnetic Field Sensor System. Science.gov (United States) Kittmann, Anne; Durdaut, Phillip; Zabel, Sebastian; Reermann, Jens; Schmalz, Julius; Spetzler, Benjamin; Meyners, Dirk; Sun, Nian X; McCord, Jeffrey; Gerken, Martina; Schmidt, Gerhard; Höft, Michael; Knöchel, Reinhard; Faupel, Franz; Quandt, Eckhard 2018-01-10 We present a comprehensive study of a magnetic sensor system that benefits from a new technique to substantially increase the magnetoelastic coupling of surface acoustic waves (SAW). The device uses shear-horizontal surface acoustic waves that are guided by a fused silica layer with an amorphous magnetostrictive FeCoSiB thin film on top. The velocity of these so-called Love waves follows the magnetoelastically induced changes of the shear modulus according to the magnetic field present. The SAW sensor is operated in a delay-line configuration at approximately 150 MHz and translates the magnetic field into a time delay and a related phase shift. The fundamentals of this sensor concept are motivated by magnetic and mechanical simulations. They are experimentally verified using customized low-noise readout electronics.
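As a back-of-the-envelope model of entry 4's delay-line readout (every number below is our assumption except the ∼150 MHz operating frequency): the phase through a line with delay τ is φ = 2πfτ, and a field-induced fractional velocity change δv/v shifts it by δφ ≈ −φ·δv/v.

import numpy as np

f = 150e6   # operating frequency, Hz (from the abstract)
v = 4000.0  # assumed Love-wave velocity, m/s
L = 8e-3    # assumed acoustic path length, m
tau = L / v                # group delay through the line (~2 us)
phi = 2 * np.pi * f * tau  # nominal phase (~1900 rad)
dv_over_v = 1e-5           # assumed magnetoelastic velocity change
print(np.degrees(-phi * dv_over_v))  # about -1 degree of phase shift to resolve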
With an extremely low magnetic noise level of ≈100 pT/√Hz, a bandwidth of 50 kHz and a dynamic range of 120 dB, this magnetic field sensor system shows outstanding characteristics. A range of additional measures to further increase the sensitivity is investigated with simulations. 5. The FALCON concept: multi-object adaptive optics and atmospheric tomography for integral field spectroscopy - principles and performance on an 8-m telescope Science.gov (United States) Assémat, F.; Gendron, E.; Hammer, F. 2007-03-01 Integral field spectrographs are major instruments with which to study the mechanisms involved in the formation and evolution of early galaxies. When combined with multi-object spectroscopy, such spectrographs can behave as machines for deriving the physical parameters of galaxies during their formation process. Up to now, there has been only one available spectrograph with multiple integral field units, i.e. FLAMES/GIRAFFE on the European Southern Observatory (ESO) Very Large Telescope (VLT). However, current ground-based instruments suffer from a degradation of their spatial resolution due to atmospheric turbulence. In this article we describe the performance of FALCON, an original concept for a new-generation multi-object integral field spectrograph with adaptive optics for the ESO VLT. The goal of FALCON is to combine high angular resolution (0.25 arcsec) and high spectral resolution (R > 5000) in the J and H bands over a wide field of view (10 × 10 arcmin²) in the VLT Nasmyth focal plane. However, instead of correcting the whole field, FALCON will use multi-object adaptive optics (MOAO) to perform the adaptive optics correction locally on each scientific target. This then requires the use of atmospheric tomography in order to use suitable natural guide stars for wavefront sensing. We will show that merging MOAO and atmospheric tomography allows us to determine the internal kinematics of distant galaxies up to z ~ 2 with a sky coverage of 50 per cent, even for objects observed near the Galactic pole. The application of such a concept to extremely large telescopes therefore seems to be a very promising way to study galaxy evolution from z = 1 to redshifts as high as z = 7. 6. Chip-based wide field-of-view nanoscopy Science.gov (United States) Diekmann, Robin; Helle, Øystein I.; Øie, Cristina I.; McCourt, Peter; Huser, Thomas R.; Schüttpelz, Mark; Ahluwalia, Balpreet S. 2017-04-01 Present optical nanoscopy techniques use a complex microscope for imaging and a simple glass slide to hold the sample. Here, we demonstrate the inverse: the use of a complex, but mass-producible optical chip, which hosts the sample and provides a waveguide for the illumination source, and a standard low-cost microscope to acquire super-resolved images via two different approaches. Waveguides composed of a material with high refractive-index contrast provide a strong evanescent field that is used for single-molecule switching and fluorescence excitation, thus enabling chip-based single-molecule localization microscopy. Additionally, multimode interference patterns induce spatial fluorescence intensity variations that enable fluctuation-based super-resolution imaging. As chip-based nanoscopy separates the illumination and detection light paths, total-internal-reflection fluorescence excitation is possible over a large field of view, with up to 0.5 mm × 0.5 mm being demonstrated. Using multicolour chip-based nanoscopy, we visualize fenestrations in liver sinusoidal endothelial cells. 7.
Towards an automatic wind speed and direction profiler for Wide Field adaptive optics systems Science.gov (United States) Sivo, G.; Turchi, A.; Masciadri, E.; Guesalaga, A.; Neichel, B. 2018-05-01 Wide Field Adaptive Optics (WFAO) systems are among the most sophisticated adaptive optics (AO) systems available today on large telescopes. Knowledge of the vertical spatio-temporal distribution of wind speed (WS) and direction (WD) is fundamental to optimizing the performance of such systems. Previous studies have already proved that the Gemini Multi-Conjugate AO system (GeMS) is able to retrieve measurements of the WS and WD stratification using the SLOpe Detection And Ranging (SLODAR) technique and to store those measurements in the telemetry data. In order to assess the reliability of these estimates and of the SLODAR technique applied to such complex AO systems, in this study we compared WS and WD values retrieved from GeMS with those obtained with the atmospheric model Meso-NH on a rich statistical sample of nights. It has previously been proved that the latter technique provides excellent agreement with a large sample of radiosoundings, both in statistical terms and on individual flights. It can therefore be considered an independent reference. The excellent agreement between GeMS measurements and the model that we find in this study proves the robustness of the SLODAR approach. To bypass the complex procedures necessary to achieve automatic measurements of the wind with GeMS, we propose a simple automatic method to monitor nightly WS and WD using Meso-NH model estimates. Such a method can be applied to any present or next-generation facility supported by WFAO systems. The interest of this study is, therefore, well beyond the optimization of GeMS performance. 8. High-Speed and Wide-Field Photometry with TORTORA Directory of Open Access Journals (Sweden) G. Greco 2010-01-01 We present the photometric analysis of the extended sky fields observed by the TORTORA optical monitoring system. The technology involved in the TORTORA camera is based on the use of a fast TV-CCD matrix with an image intensifier. This approach can both significantly reduce the readout noise and shorten the focal length, allowing it to monitor relatively large sky regions with high temporal resolution and an adequate detection limit. The performance of the system has been tested using the relative magnitudes of standard stars by means of long image sequences collected at different airmasses and at various levels of lunar illumination. As expected from previous laboratory measurements, artifact sources are negligible and do not affect the photometric results. The following analysis is based on a large sample of images acquired by the TORTORA instrument since July 2006. 9. Wide field of view spectroscopy using Fabry-Perot Interferometers Science.gov (United States) Nikoleyczik, Jonathan We present a high-resolution spectrometer consisting of dual solid Fabry-Perot interferometers (FPIs). This work is intended to be an all-inclusive documentation of the instrument, including discussion of its design, the methods used in data reduction, and the analysis of these data. Each FPI is made of a single piece of L-BBH2 glass, which has a high index of refraction n ≈ 2.07, with a thickness of the order of 100 μm. Each is then coated with partially reflective mirrors to create a resonant cavity and thus achieve a spectral resolution of R ≈ 30,000.
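Entry 9's quoted parameters can be sanity-checked with standard etalon relations (our arithmetic, with an assumed mid-visible wavelength): the free spectral range is FSR = λ²/(2nd), and the finesse needed for R ≈ 30,000 follows from δλ = λ/R.

lam = 650e-9  # assumed observing wavelength, m
n = 2.07      # refractive index of L-BBH2 (from the abstract)
d = 100e-6    # etalon thickness, m (order of magnitude from the abstract)
R = 30000     # target spectral resolution (from the abstract)
fsr = lam**2 / (2 * n * d)  # free spectral range, ~1.0e-9 m (about 1 nm)
dlam = lam / R              # resolution element, ~2.2e-11 m
print(fsr / dlam)           # required effective finesse, ~47

The ∼1 nm free spectral range is exactly why the instrument pairs two etalons in tandem: the second FPI suppresses the overlapping orders of the first, widening the usable spectral range.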
Running the FPIs in tandem reduces the overlapping orders and allows for a much wider free spectral range and higher contrast. We also discuss the properties of the FPIs which we have measured, including their tuning, which is achieved by adjusting the temperature and thus changing the FPI gap and the refractive index of the material. The spectrometer then scans spatially in order to obtain spectral information at every point in the field of view. We select spectral lines for further analysis and create maps of the line depths across the field. Using this technique we are able to measure the fluorescence of chlorophyll in plants and attempt to observe the zodiacal light. In the chlorophyll analysis we are able to detect chlorophyll fluorescence using the line depth in a plant, using the sky as a reference solar spectrum. This instrument has possible applications on either a CubeSat or in aerial observations to measure bulk plant activity over large areas. 10. Calibrating the Athena telescope Science.gov (United States) de Bruijne, J.; Guainazzi, M.; den Herder, J.; Bavdaz, M.; Burwitz, V.; Ferrando, P.; Lumb, D.; Natalucci, L.; Pajot, F.; Pareschi, G. 2017-10-01 Athena is ESA's upcoming X-ray mission, currently set for launch in 2028. With two nationally funded, state-of-the-art instruments (a high-resolution spectrograph named X-IFU and a wide-field imager named WFI), and a telescope collecting area of 1.4–2 m² at 1 keV, the calibration of the spacecraft is a challenge in itself. This poster presents the current (spring 2017) plan for how to calibrate the Athena telescope. It is based on a hybrid approach, using bulk manufacturing and integration data as well as dedicated calibration measurements, combined with a refined software model to simulate the full response of the optics. 11. Study on the temperature field effect analysis and test of the five-hundred-meter aperture spherical radio telescope Science.gov (United States) Song, Li-qiang; Wang, Qi-ming 2016-10-01 The thermal problem is an important aspect of the design and operation of giant radio antennas, and its influence has long been a concern in astronomy. Because the temperature load is instantaneous and uncertain, its effects are difficult to analyze accurately and control effectively, so analyzing the thermal behaviour of a giant radio antenna is of real significance for its design and operation. The solar cooker effect and the temperature field of the Five-hundred-meter Aperture Spherical radio Telescope (FAST) were studied in detail, and tests of the temperature distribution on a 30-m antenna at the Miyun observatory station were performed. The work covers the relevant solar parameters, the flow algorithm for the telescope site, a mathematical model of the solar cooker effect, analysis results for the temperature field with a corresponding control strategy, and the temperature-distribution test on the 30-m model. The results showed that the solar cooker effect on FAST can be effectively weakened and controlled. This work provides a reference for the design, operation and troubleshooting of FAST and similar large antennas, and it has theoretical, engineering and practical value. 12. Wide-Field Gamma-Spectrometer BDRG: GRB Monitor On-Board the Lomonosov Mission Science.gov (United States) Svertilov, S. I.; Panasyuk, M. I.; Bogomolov, V. V.; Amelushkin, A. M.; Barinova, V. O.; Galkin, V. I.; Iyudin, A. F.; Kuznetsova, E. A.; Prokhorov, A. V.; Petrov, V. L.; Rozhkov, G. V.; Yashin, I.
V.; Gorbovskoy, E. S.; Lipunov, V. M.; Park, I. H.; Jeong, S.; Kim, M. B. 2018-02-01 The study of GRB prompt emissions (PE) is one of the main goals of the Lomonosov space mission. The payloads of the GRB monitor (BDRG) with the wide-field optical cameras (SHOK) and the ultra-fast flash observatory (UFFO) onboard the Lomonosov satellite are intended for the observation of GRBs and, in particular, their prompt emissions. The BDRG gamma-ray spectrometer is designed to obtain the temporal and spectral information of GRBs in the energy range of 10-3000 keV as well as to provide GRB triggers on several time scales (10 ms, 1 s and 20 s) for ground and space telescopes, including the UFFO and SHOK. The BDRG instrument consists of three identical detector boxes with axes shifted by 90° from each other. This configuration allows us to localize a GRB source in the sky with an accuracy of ∼2°. Each BDRG box contains a phoswich NaI(Tl)/CsI(Tl) scintillator detector. A thick CsI(Tl) crystal, ⌀130 × 17 mm in size, is placed underneath the NaI(Tl) as an active shield in the soft energy range and as the main detector in the hard energy range. The ratio of the CsI(Tl) to NaI(Tl) event rates at varying energies can be employed as an independent metric to distinguish legitimate GRB signals from false positives originating from electrons in near-Earth vicinities. The data from the three detectors are collected in a BA BDRG information unit, which generates a GRB trigger and a set of data frames in the output format. The scientific data output is ∼500 Mb per day, including ∼180 Mb of continuous data for events with durations in excess of 100 ms for 16 channels in each detector, detailed energy spectra, and sets of frames with ∼5 Mb of detailed information for each burst-like event. A number of pre-flight tests, including those of the trigger algorithm and calibration, were carried out to confirm the reliability of the BDRG for operation in space. 13. Gravimetric monitoring of the first field-wide steam injection in a fractured carbonate field in Oman - A feasibility study NARCIS (Netherlands) Glegola, M.; Ditmar, P.; Vossepoel, F.; Arts, R.; Al-Kindy, F.; Klees, R. 2015-01-01 Gas-Oil Gravity Drainage is to be enhanced by steam injection in a highly fractured, low-permeability carbonate field in Oman. Following a successful pilot, field-wide steam injection is being implemented, the first of its type in the world. A dedicated monitoring program has been designed to track 14. Wide-field monitoring strategy for the study of fast optical transients Science.gov (United States) 2010-10-01 We discuss the strategy of searching for fast optical transients accompanying gamma-ray bursts by means of continuous monitoring of wide sky fields with high temporal resolution. We describe the design, performance and results of our cameras, FAVOR and TORTORA. We also discuss the prospects of this strategy and the possible design of next-generation equipment for wide-field monitoring, which will be able to detect optical transients and study their color and polarization properties with high time resolution. 15. Reconsidering the advantages of the three-dimensional representation of the interferometric transform for imaging with non-coplanar baselines and wide fields of view Science.gov (United States) Smith, D. M. P.; Young, A.; Davidson, D. B.
2017-07-01 Radio telescopes with baselines that span thousands of kilometres and with fields of view that span tens of degrees have recently been deployed, such as the Low Frequency Array, and are currently being developed, such as the Square Kilometre Array. Additionally, there are proposals for space-based instruments with all-sky imaging capabilities, such as the Orbiting Low Frequency Array. Such telescopes produce observations with three-dimensional visibility distributions and curved image domains. In most work to date, the visibility distribution has been converted to a planar form to compute the brightness map using a two-dimensional Fourier transform. The celestial sphere is faceted in order to counter pixel distortion at wide angles, with each such facet requiring a unique planar form of the visibility distribution. Under the above conditions, the computational and storage complexities of this approach can become excessive. On the other hand, when using the direct Fourier transform approach, which maintains the three-dimensional shapes of the visibility distribution and celestial sphere, the non-coplanar visibility component requires no special attention. Furthermore, as the celestial samples are placed directly on the curved surface of the celestial sphere, pixel distortion at wide angles is avoided. In this paper, a number of examples illustrate that under these conditions (very long baselines and very wide fields of view) the costs of the direct Fourier transform may be comparable to (or even lower than) methods that utilise the two-dimensional fast Fourier transform. 16. Neutrino Telescope International Nuclear Information System (INIS) Baldo Ceolin, Milla 2009-01-01 The present volume contains the proceedings of the 13th International Workshop on 'Neutrino Telescopes', the 17th of the series 'Un altro modo di guardare il cielo', held in Venice at the Istituto Veneto di Scienze, Lettere ed Arti from March 10 to March 13, 2009. This series started in Venice 21 years ago, in 1988, motivated by the growing interest in the exciting field of neutrino physics and astrophysics, with the aim of bringing together experimentalists and theorists, encouraging discussion of the most recent results, and charting the direction of future research. 17. The NIKA2 Large Field-of-View Millimeter Continuum Camera for the 30-M IRAM Telescope Science.gov (United States) Monfardini, Alessandro 2018-01-01 We have constructed and deployed a multi-thousand-pixel dual-band (150 and 260 GHz, respectively 2 mm and 1.15 mm wavelengths) camera to image an instantaneous field of view of 6.5 arcmin, configurable to map the linear polarization at 260 GHz. We provide a detailed description of this instrument, named NIKA2 (New IRAM KID Arrays 2), focusing in particular on the cryogenics, the optics, the focal-plane arrays based on Kinetic Inductance Detectors (KIDs) and the readout electronics. We present the performance measured on the sky during the commissioning runs that took place between October 2015 and April 2017 at the 30-meter IRAM (Institut de Radioastronomie Millimétrique) telescope at Pico Veleta, and preliminary science-grade results. 18. Thermostructural Analysis of the SOFIA Fine Field and Wide Field Imagers Subjected to Convective Thermal Shock Science.gov (United States) Kostyk, Christopher B. 2012-01-01 The Stratospheric Observatory For Infrared Astronomy (SOFIA) is a highly modified Boeing 747-SP with a 17-ton infrared telescope installed in the aft portion of the aircraft.
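Entry 15's direct-transform approach reduces to evaluating B = Σ_k V_k exp(+2πi[u_k l + v_k m + w_k(n−1)]) on samples of the celestial sphere. A minimal, unweighted sketch (array shapes and the normalization are our simplifications):

import numpy as np

def direct_image(vis, u, v, w, l, m):
    # vis: complex visibilities (K,); u, v, w: baselines in wavelengths (K,)
    # l, m: direction cosines of the sky samples (P,), with l^2 + m^2 <= 1
    n = np.sqrt(1.0 - l**2 - m**2)  # third direction cosine on the sphere
    phase = 2j * np.pi * (np.outer(l, u) + np.outer(m, v) + np.outer(n - 1.0, w))
    # keeping the w-term explicit means non-coplanar baselines need no faceting
    return (np.exp(phase) @ vis).real / len(vis)

The cost scales as O(KP) rather than FFT-like, which is the trade-off the paper quantifies for very long baselines and very wide fields.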
Unlike ground- and space-based platforms, SOFIA can deploy to make observations anytime, anywhere in the world. The originally designed aircraft configuration included a ground pre-cool system; however, due to various factors in the history of the project, that system was not installed. This lack of ground pre-cooling was the source of the concern about whether or not the imagers would be exposed to a potentially unsafe thermostructural environment. This concern was in addition to the already-existing concern of some project members that the rate of change of air temperature during flight (both at constant altitude and during ascent or descent) could expose the imagers to an unsafe thermostructural environment. Four optical components were identified as the components of concern: two of higher concern (one in each imager) and two of lower concern (one in each imager). The analysis effort began with one component, after which the analyses for the other components were deemed unnecessary. The purpose of this report is to document these findings as well as lessons learned from the effort. 19. Mapping the Tidal Destruction of the Hercules Dwarf: A Wide-field DECam Imaging Search for RR Lyrae Stars Science.gov (United States) Garling, Christopher; Willman, Beth; Sand, David J.; Hargis, Jonathan; Crnojević, Denija; Bechtol, Keith; Carlin, Jeffrey L.; Strader, Jay; Zou, Hu; Zhou, Xu; Nie, Jundan; Zhang, Tianmeng; Zhou, Zhimin; Peng, Xiyan 2018-01-01 We investigate the hypothesized tidal disruption of the Hercules ultra-faint dwarf galaxy (UFD). Previous tidal disruption studies of the Hercules UFD have been hindered by the high degree of foreground contamination in the direction of the dwarf. We bypass this issue by using RR Lyrae stars, which are standard candles with a very low field-volume density at the distance of Hercules. We use wide-field imaging from the Dark Energy Camera on CTIO to identify candidate RR Lyrae stars, supplemented with observations taken in coordination with the Beijing–Arizona Sky Survey on the Bok Telescope. Combining color, magnitude, and light-curve information, we identify three new RR Lyrae stars associated with Hercules. All three of these new RR Lyrae stars lie outside its published tidal radius. When considered with the nine RR Lyrae stars already known within the tidal radius, these results suggest that a substantial fraction of Hercules' stellar content has been stripped. With this degree of tidal disruption, Hercules is an interesting case between a visibly disrupted dwarf (such as the Sagittarius dwarf spheroidal galaxy) and one in dynamic equilibrium. The degree of disruption also shows that we must be more careful with the ways we determine object membership when estimating dwarf masses in the future. One of the three discovered RR Lyrae stars sits along the minor axis of Hercules, but over two tidal radii away. This type of debris is consistent with recent models that suggest Hercules' orbit is aligned with its minor axis. 20. Searching for z~=6 Objects with the Hubble Space Telescope Advanced Camera for Surveys: Preliminary Analysis of a Deep Parallel Field Science.gov (United States) Yan, Haojing; Windhorst, Rogier A.; Cohen, Seth H. 2003-03-01 Recent results suggest that z~=6 marks the end of the reionization era. A large sample of objects at z~=6, therefore, will be of enormous importance, as it will enable us to observationally determine the exact epoch of reionization and the sources responsible for it.
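Why RR Lyrae make entry 19's search tractable: as standard candles they sit at a predictable apparent magnitude for a given distance, m = M + 5·log10(d/10 pc). With assumed round numbers (M_V ≈ +0.6 for an RR Lyrae and d ≈ 130 kpc for Hercules; our inputs, not the paper's):

import math

M_V = 0.6      # assumed RR Lyrae absolute magnitude
d_pc = 130000  # assumed distance to Hercules, pc
print(round(M_V + 5 * math.log10(d_pc / 10), 1))  # ~21.2 mag: faint but easy for DECam

A narrow magnitude window at a known color leaves very few Galactic foreground interlopers, which is the abstract's point about low field-volume density.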
With the Hubble Space Telescope Advanced Camera for Surveys (ACS) coming on-line, we now have a unique opportunity to discover a significant number of objects at z~=6. The pure parallel mode implemented for the Wide-Field Camera (WFC) has greatly enhanced this ability. We present our preliminary analysis of a deep ACS/WFC parallel field at |b| = 74.4°. We find 30 plausible z~=6 candidates, all of which have signal-to-noise ratios greater than 7 in the F850LP band. The major source of contamination could be faint cool Galactic dwarfs, and we estimated that they would contribute at most four objects to our candidate list. We derived the cumulative number density of galaxies at 6.0 < z < 6.5; given the contamination rate, it could possibly imply that the faint-end slope of the z~=6 luminosity function is steeper than α = -1.6. At the very least, our result suggests that galaxies with L 1. A Wide-field Camera and Fully Remote Operations at the Wyoming Infrared Observatory Science.gov (United States) Findlay, Joseph R.; Kobulnicky, Henry A.; Weger, James S.; Bucher, Gerald A.; Perry, Marvin C.; Myers, Adam D.; Pierce, Michael J.; Vogel, Conrad 2016-11-01 Upgrades at the 2.3-m Wyoming Infrared Observatory telescope have provided the capability for fully remote operations by a single operator from the University of Wyoming campus. A line-of-sight 300 Mbit/s 11 GHz radio link provides high-speed internet for data transfer and remote operations that include several real-time video feeds. Uninterruptible power is ensured by a 10 kVA battery supply for critical systems and a 55 kW autostart diesel generator capable of running the entire observatory for up to a week. The construction of a new four-element prime-focus corrector with fused-silica elements allows imaging over a 40′ field of view with a new 4096² UV-sensitive prime-focus camera and filter wheel. A new telescope control system facilitates the remote-operations model and provides 20″ rms pointing over the usable sky. Taken together, these improvements pave the way for a new generation of sky surveys supporting space-based missions and flexible-cadence observations advancing emerging astrophysical priorities such as planet detection, quasar variability, and long-term time-domain campaigns. 2. Development of MIMIZUKU: a mid-infrared multi-field imager for the 6.5-m TAO telescope Science.gov (United States) Kamizuka, Takafumi; Miyata, Takashi; Sako, Shigeyuki; Nakamura, Tomohiko; Asano, Kentaro; Uchiyama, Mizuho; Okada, Kazushi; Onaka, Takashi; Sakon, Itsuki; Kataza, Hirokazu; Sarugaku, Yuki; Yoshii, Yuzuru; Doi, Mamoru; Kohno, Kotaro; Kawara, Kimiaki; Tanaka, Masuo; Motohara, Kentaro; Tanabe, Toshihiko; Minezaki, Takeo; Morokuma, Tomoki; Tamura, Yoichi; Aoki, Tsutomu; Soyano, Takao; Tarusawa, Ken'ichi; Kato, Natsuko; Konishi, Masahiro; Takahashi, Hidenori; Koshida, Shintaro; Tateuchi, Ken; Handa, Toshihiro 2012-09-01 TAO (The University of Tokyo Atacama Observatory) is planned to be constructed at the summit of Cerro Chajnantor (5640 m altitude) in Chile. MIMIZUKU (Mid-Infrared Multi-field Imager for gaZing at the UnKnown Universe) is a mid-infrared imager (field of view: 1′ × 1′ to 2′ × 2′) and spectrometer (Δλ/λ: 60-230) for the 6.5-m TAO telescope, covering the wavelength range of 2-38 μm. MIMIZUKU has a unique piece of equipment called the Field Stacker (FS), which enables the simultaneous observation of a target and a reference object. This simultaneity is expected to improve photometric accuracy and to enable long-term monitoring observations.
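A quick feasibility check on entry 1's remote-operations numbers (our arithmetic; 16-bit pixels and zero protocol overhead assumed): a 4096² frame is about 33.6 MB, and a 300 Mbit/s link carries about 37.5 MB/s, so each full frame reaches campus in under a second.

pixels = 4096 * 4096
bytes_per_frame = pixels * 2               # assumed 16-bit pixels: ~33.6 MB per frame
link_bytes_per_s = 300e6 / 8               # 300 Mbit/s is 37.5 MB/s
print(bytes_per_frame / link_bytes_per_s)  # ~0.9 s to transfer one frame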
The development status of MIMIZUKU is reported in this paper. The FS and the cryostat of MIMIZUKU have been fabricated and are under testing. The cold optics (550 mm x 750 mm x 2 floors) with 28 mirrors have been constructed. The mirrors were aligned with a positional precision of 0.1 mm and an angular precision of 0.1 deg. The evaluated optical performance shows a diffraction-limited image at λ …. … gears are employed and work well even in a cryogenic environment. The grisms made with silicon and germanium have been fabricated by ultraprecision cutting. It was found that their surface roughness, grating constant, and blaze angle closely match the designed values. 3. Wide-field schematic eye models with gradient-index lens. Science.gov (United States) Goncharov, Alexander V; Dainty, Chris 2007-08-01 We propose a wide-field schematic eye model, which provides a more realistic description of the optical system of the eye in relation to its anatomical structure. The wide-field model incorporates a gradient-index (GRIN) lens, which enables it to fulfill properties of two well-known schematic eye models, namely, Navarro's model for off-axis aberrations and Thibos's chromatic on-axis model (the Indiana eye). These two models are based on extensive experimental data, which makes the derived wide-field eye model also consistent with that data. A mathematical method to construct a GRIN lens with its iso-indicial contours following the optical surfaces of given asphericity is presented. The efficiency of the method is demonstrated with three variants related to different age groups. The role of the GRIN structure in relation to the lens paradox is analyzed. The wide-field model with a GRIN lens can be used as a starting design for the eye inverse problem, i.e., reconstructing the optical structure of the eye from off-axis wavefront measurements. Anatomically more accurate age-dependent optical models of the eye could ultimately help an optical designer to improve wide-field retinal imaging. 4. The FLARE mission: deep and wide-field 1-5um imaging and spectroscopy for the early universe: a proposal for M5 cosmic vision call Science.gov (United States) Burgarella, D.; Levacher, P.; Vives, S.; Dohlen, K.; Pascal, S. 2016-07-01 FLARE (First Light And Reionization Explorer) is a space mission that will be submitted to ESA (M5 call). Its primary goal (~80% of lifetime) is to identify and study the universe before the end of the reionization at z > 6. A secondary objective (~20% of lifetime) is to survey star formation in the Milky Way. FLARE's strategy optimizes the science return: imaging and spectroscopic integral-field observations will be carried out simultaneously on two parallel focal planes and over very wide instantaneous fields of view. FLARE will help address two of ESA's Cosmic Vision themes: a) « How did the universe originate and what is it made of? » and b) « What are the conditions for planet formation and the emergence of life? », and more specifically « … ». FLARE will provide the ESA community with a leading position to statistically study the early universe after JWST's deep but pin-hole surveys. Moreover, the instrumental development of wide-field imaging and wide-field integral-field spectroscopy in space will be a major breakthrough after making them available on ground-based telescopes. 5.
Wide-field single photon counting imaging with an ultrafast camera and an image intensifier Science.gov (United States) Zanda, Gianmarco; Sergent, Nicolas; Green, Mark; Levitt, James A.; Petrášek, Zdeněk; Suhling, Klaus 2012-12-01 We report a method for wide-field photon counting imaging using a CMOS camera with a 40 kHz frame rate coupled with a three-stage image intensifier mounted on a standard fluorescence microscope. This system combines high frame rates with single-photon sensitivity. The output of the phosphor screen, consisting of single-photon events, is collected by the CMOS camera, allowing a wide-field image to be created with parallel positional and timing information for each photon. Using a pulsed excitation source and a luminescent sample, the arrival time of hundreds of photons can be determined simultaneously in many pixels with microsecond resolution. 6. NEW M, L, AND T DWARF COMPANIONS TO NEARBY STARS FROM THE WIDE-FIELD INFRARED SURVEY EXPLORER International Nuclear Information System (INIS) Luhman, Kevin L.; Loutrel, Nicholas P.; McCurdy, Nicholas S.; Melso, Nicole D.; Star, Kimberly M.; Terrien, Ryan C.; Mace, Gregory N.; McLean, Ian S.; Young, Michael D.; Rhode, Katherine L.; Davy Kirkpatrick, J. 2012-01-01 We present 11 candidate late-type companions to nearby stars identified with data from the Wide-field Infrared Survey Explorer (WISE) and the Two Micron All Sky Survey (2MASS). Eight of the candidates are likely to be companions based on their common proper motions with the primaries. The remaining three objects are rejected as companions, one of which is a free-floating T7 dwarf. Spectral types are available for five of the companions, which consist of M2V, M8.5V, L5, T8, and T8. Based on their photometry, the unclassified companions are probably two mid-M dwarfs and one late-M/early-L dwarf. One of the T8 companions, WISE J142320.84+011638.0, has already been reported by Pinfield and coworkers. The other T8 companion, ULAS J095047.28+011734.3, was discovered by Burningham and coworkers through the United Kingdom Infrared Telescope Infrared Deep Sky Survey, but its companionship has not been previously recognized in the literature. The L5 companion, 2MASS J17430860+8526594, is a new member of a class of L dwarfs that exhibit unusually blue near-IR colors. Among the possible mechanisms that have been previously proposed for the peculiar colors of these L dwarfs, low metallicity does not appear to be a viable explanation for 2MASS J17430860+8526594, since our spectrum of the primary suggests that its metallicity is not significantly subsolar. 7. The NIKA2 large-field-of-view millimetre continuum camera for the 30 m IRAM telescope Science.gov (United States) Adam, R.; Adane, A.; Ade, P. A. R.; André, P.; Andrianasolo, A.; Aussel, H.; Beelen, A.; Benoît, A.; Bideaud, A.; Billot, N.; Bourrion, O.; Bracco, A.; Calvo, M.; Catalano, A.; Coiffard, G.; Comis, B.; De Petris, M.; Désert, F.-X.; Doyle, S.; Driessen, E. F. C.; Evans, R.; Goupy, J.; Kramer, C.; Lagache, G.; Leclercq, S.; Leggeri, J.-P.; Lestrade, J.-F.; Macías-Pérez, J. F.; Mauskopf, P.; Mayet, F.; Maury, A.; Monfardini, A.; Navarro, S.; Pascale, E.; Perotto, L.; Pisano, G.; Ponthieu, N.; Revéret, V.; Rigby, A.; Ritacco, A.; Romero, C.; Roussel, H.; Ruppin, F.; Schuster, K.; Sievers, A.; Triqueneaux, S.; Tucker, C.; Zylka, R. 2018-01-01 Context. Millimetre-wave continuum astronomy is today an indispensable tool for both general astrophysics studies (e.g. star formation, nearby galaxies) and cosmology (e.g.
cosmic microwave background and high-redshift galaxies). General-purpose, large-field-of-view instruments are needed to map the sky at intermediate angular scales not accessible by the high-resolution interferometers (e.g. ALMA in Chile, NOEMA in the French Alps) or by the coarse-angular-resolution space-borne or ground-based surveys (e.g. Planck, ACT, SPT). These instruments have to be installed at the focal plane of the largest single-dish telescopes, which are placed at high altitude on selected dry observing sites. In this context, we have constructed and deployed a three-thousand-pixel dual-band (150 GHz and 260 GHz, respectively 2 mm and 1.15 mm wavelengths) camera to image an instantaneous circular field of view of 6.5 arcmin in diameter, configurable to map the linear polarisation at 260 GHz. Aims: First, we provide a detailed description of this instrument, named NIKA2 (New IRAM KID Arrays 2), focussing in particular on the cryogenics, optics, focal plane arrays based on Kinetic Inductance Detectors, and the readout electronics. The focal planes and part of the optics are cooled down to the nominal 150 mK operating temperature by means of an ad hoc dilution refrigerator. Second, we present the performance measured on the sky during the commissioning runs that took place between October 2015 and April 2017 at the 30-m IRAM telescope at Pico Veleta, near Granada (Spain). Methods: We have targeted a number of astronomical sources. Starting from beam maps on primary and secondary calibrators, we then moved to extended sources and faint objects. Both internal (electronic) and on-the-sky calibrations are applied. The general methods are described in the present paper. Results: NIKA2 has been successfully deployed and commissioned, performing in line with expectations. … 8. Neutrino telescopes CERN Document Server Carr, J 2002-01-01 This review presents the scientific objectives and status of neutrino telescope projects. The science program of these projects covers neutrino astronomy, dark matter searches and measurements of neutrino oscillations. The two neutrino telescopes in operation, AMANDA and BAIKAL, will be described together with the ANTARES neutrino telescope being built in the Mediterranean. (18 refs). 9. An EUV Wide-Field Imager and Spectrometer for the ISS Science.gov (United States) Golub, Leon; Savage, Sabrina 2016-01-01 The Coronal Spectrographic Imager in the EUV, COSIE, combines a wide-field solar coronal EUV imager (EUVC) and an on-disk EUV imaging spectrometer (EUVS). Located on the International Space Station (ISS), the goal of the mission is to enhance our understanding of the dynamics of the Transition Corona (the region in which the coronal magnetic field transitions from closed to open), and to provide improved detection and tracking of solar eruptive events for space weather research. 10. Hybrid Lyot coronagraph for wide-field infrared survey telescope-astrophysics focused telescope assets: occulter fabrication and high contrast narrowband testbed demonstration Science.gov (United States) Seo, Byoung-Joon; Gordon, Brian; Kern, Brian; Kuhnert, Andy; Moody, Dwight; Muller, Richard; Poberezhskiy, Ilya; Trauger, John; Wilson, Daniel 2016-01-01 The Hybrid Lyot coronagraph (HLC) is one of the two operating modes of the WFIRST-AFTA coronagraph instrument. It produces starlight suppression over the full 360-deg annular region and is thus particularly suitable for improving the discovery space around WFIRST-AFTA targets.
Since being selected by the National Aeronautics and Space Administration in December 2013, the coronagraph technology has been matured toward technology readiness level 5 by September 2016. We present the progress of HLC key component fabrication and testbed demonstrations with the WFIRST-AFTA pupil. For the first time, a circular HLC occulter mask consisting of metal and dielectric layers is fabricated and characterized. Wavefront control using two deformable mirrors is successfully demonstrated in a vacuum testbed with narrowband light (…), together with the associated analysis. 11. Operating performance of the gamma-ray Cherenkov telescope: An end-to-end Schwarzschild–Couder telescope prototype for the Cherenkov Telescope Array Energy Technology Data Exchange (ETDEWEB) Dournaux, J.L., E-mail: [email protected] [GEPI, Observatoire de Paris, PSL Research University, CNRS, Sorbonne Paris Cité, Université Paris Diderot, Place J. Janssen, 92190 Meudon (France); De Franco, A. [Department of Physics, University of Oxford, Keble Road, Oxford OX1 3RH (United Kingdom); Laporte, P. [GEPI, Observatoire de Paris, PSL Research University, CNRS, Sorbonne Paris Cité, Université Paris Diderot, Place J. Janssen, 92190 Meudon (France); White, R. [Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg (Germany); Greenshaw, T. [University of Liverpool, Oliver Lodge Laboratory, P.O. Box 147, Oxford Street, Liverpool L69 3BX (United Kingdom); Sol, H. [LUTH, Observatoire de Paris, PSL Research University, CNRS, Université Paris Diderot, Place J. Janssen, 92190 Meudon (France); Abchiche, A. [CNRS, Division technique DT-INSU, 1 Place Aristide Briand, 92190 Meudon (France); Allan, D. [Department of Physics and Centre for Advanced Instrumentation, Durham University, South Road, Durham DH1 3LE (United Kingdom); Amans, J.P. [GEPI, Observatoire de Paris, PSL Research University, CNRS, Sorbonne Paris Cité, Université Paris Diderot, Place J. Janssen, 92190 Meudon (France); Armstrong, T.P. [Department of Physics and Centre for Advanced Instrumentation, Durham University, South Road, Durham DH1 3LE (United Kingdom); Balzer, A.; Berge, D. [GRAPPA, University of Amsterdam, Science Park 904, 1098 XH Amsterdam (Netherlands); Boisson, C. [LUTH, Observatoire de Paris, PSL Research University, CNRS, Université Paris Diderot, Place J. Janssen, 92190 Meudon (France); and others 2017-02-11 The Cherenkov Telescope Array (CTA) consortium aims to build the next-generation ground-based very-high-energy gamma-ray observatory. The array will feature different sizes of telescopes, allowing it to cover a wide gamma-ray energy band from about 20 GeV to above 100 TeV. The highest energies, above 5 TeV, will be covered by a large number of Small-Sized Telescopes (SSTs) with a field of view of around 9°. The Gamma-ray Cherenkov Telescope (GCT), based on Schwarzschild–Couder dual-mirror optics, is one of the three proposed SST designs. The GCT is described in this contribution and the first images of Cherenkov showers obtained using the telescope and its camera are presented. These were obtained in November 2015 in Meudon, France. 12. Synergy of CETUS with Survey Telescopes of the 2020's Science.gov (United States) Heap, Sara; and the CETUS Science Team 2018-01-01 There has been an explosion in wide-field telescopes conducting astrophysical surveys that will come to fruition in the 2020s. These wide and deep telescopes will survey the sky at wavelengths ranging from gamma rays to radio waves.
E-ROSITA will perform an all-sky X-ray survey with unprecedented sensitivity and resolution. Numerous telescopes on the ground and in space will observe electromagnetic counterparts to gravitational-wave sources. The Large Synoptic Survey Telescope, LSST, will map the southern sky, discovering billions of new galaxies and stars and detecting transient objects. Subaru's Hyper Suprime-Cam and Prime Focus Spectrograph will work to understand dark energy and galaxy evolution at redshifts z ~ 1-2 using optical-IR spectra, and to carry out studies of stellar archeology. The Wide-Field Infrared Survey Telescope, WFIRST, will conduct imaging and slitless spectroscopic surveys of the sky at near-IR wavelengths, including nebular emission of H-alpha at redshifts up to z = 2. The Square Kilometer Array (SKA) and other radio telescopes will map a billion galaxies using the 21-cm line of neutral hydrogen. We will show how CETUS's near-UV and far-UV cameras and its near-UV multi-object spectrograph will work in synergy with these other survey telescopes. 13. Goddard Robotic Telescope International Nuclear Information System (INIS) Sakamoto, Takanori; Donato, Davide; Gehrels, Neil; Okajima, Takashi; Ukwatta, Tilan N. 2009-01-01 We are constructing the 14'' fully automated optical robotic telescope, the Goddard Robotic Telescope (GRT), at the Goddard Geophysical and Astronomical Observatory. The aims of our robotic telescope are 1) to follow up Swift/Fermi Gamma-Ray Bursts (GRBs) and 2) to perform coordinated optical observations of Fermi/Large Area Telescope (LAT) Active Galactic Nuclei (AGN). Our telescope system consists of the 14'' Celestron Optical Telescope Assembly (OTA), the Astro-Physics 1200GTO mount, the Apogee U47 CCD camera, the JMI electronic focuser, and the Finger Lakes Instrumentation color filter wheel with U, B, V, R and I filters. With the focal reducer, a 20'x20' field of view has been achieved. The observatory dome is Astro Haven's 7 ft clam-shell dome. We started scientific observations in mid-November 2008. While not observing our primary targets (GRBs and AGNs), we plan to open our telescope time to the public to broaden the telescope's use, both in other research fields and for education. 14. Optimization of Grazing Incidence Optics for Wide-Field X-Ray Survey Imaging Science.gov (United States) Roming, P. W. A.; Burrows, D. N.; Garmire, G. P.; Roush, W. B. 1999-12-01 Optimization of wide-field X-ray optics could greatly enhance X-ray surveys. Optimization of wide-field X-ray optics with fields of view smaller than 1.1 square degrees has been discussed previously in the literature. However, very little has been published about optimizing wide-field X-ray optics with larger fields of view. We have been working on the design of a wide-field (3.1-square-degree field of view), short-focal-length (190.5 cm), grazing-incidence mirror shell set, with a desired rms image spot size of 15 arcsec. The baseline design incorporates Wolter I-type mirror shells with polynomial perturbations applied to the grazing-incidence surface. By optimizing the polynomial, the rms image spot size can be minimized for a large range of grazing angles. The overall minimization technique is to efficiently optimize the polynomial coefficients that directly influence the angular resolution, without stepping through the entire multidimensional coefficient space.
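As a concrete illustration of that kind of coefficient search, the sketch below minimizes a stand-in spot-size objective over four polynomial coefficients (the quadratic objective is invented for illustration and replaces the authors' actual ray-trace evaluation; SciPy's Nelder-Mead option is one common implementation of the downhill simplex method named next):

    import numpy as np
    from scipy.optimize import minimize

    def rms_spot_size(coeffs):
        """Stand-in objective: pretend each polynomial perturbation
        coefficient contributes quadratically to the rms spot size (arcsec).
        A real design study would evaluate this by ray tracing a shell set."""
        optimum = np.array([0.3, -0.1, 0.05, 0.0])  # fictitious best coefficients
        return 15.0 + 400.0 * np.sum((np.asarray(coeffs) - optimum) ** 2)

    # Downhill simplex search from a zero starting vector.
    result = minimize(rms_spot_size, x0=np.zeros(4), method="Nelder-Mead",
                      options={"xatol": 1e-8, "fatol": 1e-8})
    print(result.x, result.fun)  # coefficients near the optimum; spot size -> 15 arcsec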
The multidimensional minimization techniques that have been investigated include the downhill simplex method; the coupling of genetic algorithms with full and fractional (including Plackett-Burman) factorial designs; and the coupling of genetic algorithms with Box-Behnken and central composite response surface designs. We have also examined the use of neural networks, coupled with genetic algorithms, as a method of multidimensional minimization. We have investigated backpropagation, probabilistic (PNN), general regression (GRNN), and group method of data handling (GMDH) neural networks. We report our findings to date. This research is funded by NASA grant #NAG5-5093. 15. Surface impedance of superconductors in wide frequency ranges for wake field calculations International Nuclear Information System (INIS) Davidovskii, V.G. 2006-01-01 The problem of the surface impedance of superconductors in wide frequency ranges for calculations of wake fields, generated by bunches of charged particles moving axially inside a metallic vacuum chamber, is solved. The case of specular electron reflection at the superconductor surface is considered. The expression for the surface impedance of superconductors suitable for numerical computation is derived. 16. Integral-field spectroscopy at the resolution limit of large telescopes: the science program of OSIRIS at Keck Science.gov (United States) Quirrenbach, Andreas; Larkin, James E.; Krabbe, Alfred; Barczys, Matthew; LaFreniere, David 2003-03-01 OSIRIS (OH-Suppressing InfraRed Integral-field Spectrograph) is a new facility instrument for the Keck Observatory. Starting in 2004, it will provide the capability of performing three-dimensional spectroscopy in the near-infrared z, J, H, and K bands at the resolution limit of the Keck II telescope, which is equipped with adaptive optics and a laser guide star. The innovative capabilities of OSIRIS will enable many new observing projects. Galaxies in the early Universe will be among the most interesting targets for OSIRIS, which will perform detailed studies of their stellar content and dynamical properties. In more exotic objects, such as quasars, radio galaxies, and more nearby active galactic nuclei, OSIRIS can elucidate the relation of the central black hole to the properties of the host galaxy, and the mechanism by which gas is fed into the central engine. In the center of our own Galaxy, it will be possible to search for signatures of interaction between the massive black hole and stars in its immediate vicinity. Closer to home, OSIRIS will perform spectroscopic observations of young stars and their environment, and of brown dwarfs. Imaging spectroscopy of the giant planets, their moons, and asteroids will shed new light on meteorology, mineralogy, and volcanism in the Solar System. OSIRIS observations of Kuiper Belt objects will provide sufficient sensitivity to establish their surface composition, which will contribute substantially to our understanding of the history of the Solar System. 17. Modeling The Atmosphere In The Era Of Big Data From Extremely Wide Field-Of-View Telescopes Science.gov (United States) Gonzalez Quiles, Junellie; Nordin, Jakob 2018-01-01 Surveys like the Sloan Digital Sky Survey (SDSS), Pan-STARRS and the Palomar Transient Factory (PTF) receive large amounts of data, which need to be processed and calibrated in order to correct for various factors.
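At its core, that calibration step has the classic zero-point-plus-extinction form, m_cat = m_inst + ZP - k·X, with airmass X; least-squares fitting of both terms is equivalent to the chi-square minimization described below. The following sketch uses synthetic catalogue matches and textbook notation, not the actual PTF/ZTF pipeline:

    import numpy as np

    # Standard photometric calibration: m_cat = m_inst + ZP - k * X, where X is
    # airmass and k the extinction coefficient in mag/airmass. Fit ZP and k by
    # linear least squares against catalogue stars.
    def fit_extinction(m_inst, m_cat, airmass):
        airmass = np.asarray(airmass, float)
        A = np.column_stack([np.ones_like(airmass), -airmass])
        zp, k = np.linalg.lstsq(A, np.asarray(m_cat) - np.asarray(m_inst), rcond=None)[0]
        return zp, k

    # Synthetic catalogue matches with true ZP = 25.1 and k = 0.18 mag/airmass.
    rng = np.random.default_rng(1)
    X = rng.uniform(1.0, 2.2, 300)
    m_inst = rng.uniform(14.0, 18.0, 300)
    m_cat = m_inst + 25.1 - 0.18 * X + 0.02 * rng.standard_normal(300)
    print(fit_extinction(m_inst, m_cat, X))  # recovers approximately (25.1, 0.18)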
One of the limiting factors in obtaining high-quality data is the atmosphere, and it is therefore essential to find an appropriate calibration for the atmospheric extinction. A physical atmospheric model is expected to calibrate atmospheric extinction more effectively than the photometric calibration currently used by PTF, because it can account for rapid atmospheric fluctuations and for objects of different colors. We focused on creating tools to model the atmospheric extinction for the upcoming Zwicky Transient Facility (ZTF) survey. To model the atmosphere, we created a program that combines input data with catalogue values and handles them efficiently. Then, using PTF data and the SDSS catalogue, we created several models to fit the data, and tested the quality of the fits by chi-square minimization. This will allow us to optimize the atmospheric-extinction calibration for ZTF in the near future. 18. 4MOST: 4-metre Multi-Object Spectroscopic Telescope NARCIS (Netherlands) de Jong, Roelof S.; Barden, Sam; Bellido-Tirado, Olga; Brynnel, Joar; Chiappini, Cristina; Depagne, Éric; Haynes, Roger; Johl, Diana; Phillips, Daniel P.; Schnurr, Olivier; Schwope, Axel D.; Walcher, Jakob; Bauer, Svend M.; Cescutti, Gabriele; Cioni, Maria-Rosa L.; Dionies, Frank; Enke, Harry; Haynes, Dionne M.; Kelz, Andreas; Kitaura, Francisco S.; Lamer, Georg; Minchev, Ivan; Müller, Volker; Nuza, Sebastián E.; Olaya, Jean-Christophe; Piffl, Tilmann; Popow, Emil; Saviauk, Allar; Steinmetz, Matthias; Ural, Uǧur; Valentini, Monica; Winkler, Roland; Wisotzki, Lutz; Ansorge, Wolfgang R.; Banerji, Manda; Gonzalez Solares, Eduardo; Irwin, Mike; Kennicutt, Robert C.; King, David M. P.; McMahon, Richard; Koposov, Sergey; Parry, Ian R.; Sun, Xiaowei; Walton, Nicholas A.; Finger, Gert; Iwert, Olaf; Krumpe, Mirko; Lizon, Jean-Louis; Mainieri, Vincenzo; Amans, Jean-Philippe; Bonifacio, Piercarlo; Cohen, Matthieu; François, Patrick; Jagourel, Pascal; Mignot, Shan B.; Royer, Frédéric; Sartoretti, Paola; Bender, Ralf; Hess, Hans-Joachim; Lang-Bardl, Florian; Muschielok, Bernard; Schlichter, Jörg; Böhringer, Hans; Boller, Thomas; Bongiorno, Angela; Brusa, Marcella; Dwelly, Tom; Merloni, Andrea; Nandra, Kirpal; Salvato, Mara; Pragt, Johannes H.; Navarro, Ramón; Gerlofsma, Gerrit; Roelfsema, Ronald; Dalton, Gavin B.; Middleton, Kevin F.; Tosh, Ian A.; Boeche, Corrado; Caffau, Elisabetta; Christlieb, Norbert; Grebel, Eva K.; Hansen, Camilla J.; Koch, Andreas; Ludwig, Hans-G.; Mandel, Holger; Quirrenbach, Andreas; Sbordone, Luca; Seifert, Walter; Thimm, Guido; Helmi, Amina; Trager, Scott C.; Bensby, Thomas; Feltzing, Sofia; Ruchti, Gregory; Edvardsson, Bengt; Korn, Andreas; Lind, Karin; Boland, Wilfried; Colless, Matthew; Frost, Gabriella; Gilbert, James; Gillingham, Peter; Lawrence, Jon; Legg, Neville; Saunders, Will; Sheinis, Andrew; Driver, Simon; Robotham, Aaron; Bacon, Roland; Caillier, Patrick; Kosmalski, Johan; Laurent, Florence; Richard, Johan 4MOST is a wide-field, high-multiplex spectroscopic survey facility under development for the VISTA telescope of the European Southern Observatory (ESO). Its main science drivers are in the fields of galactic archeology, high-energy physics, galaxy evolution and cosmology. 4MOST will in particular … 19.
Wide-field optical mapping of neural activity and brain haemodynamics: considerations and novel approaches Science.gov (United States) Ma, Ying; Shaik, Mohammed A.; Kozberg, Mariel G.; Thibodeaux, David N.; Zhao, Hanzhi T.; Yu, Hang 2016-01-01 Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, together with modern technologies such as light-emitting diodes and sensitive, high-speed digital cameras, has driven renewed interest in WFOM. To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue 'Interpreting BOLD: a dialogue between cognitive and cellular neuroscience'. PMID:27574312 20. Wide-field optical mapping of neural activity and brain haemodynamics: considerations and novel approaches. Science.gov (United States) Ma, Ying; Shaik, Mohammed A; Kim, Sharon H; Kozberg, Mariel G; Thibodeaux, David N; Zhao, Hanzhi T; Yu, Hang; Hillman, Elizabeth M C 2016-10-05 Although modern techniques such as two-photon microscopy can now provide cellular-level three-dimensional imaging of the intact living brain, the speed and fields of view of these techniques remain limited. Conversely, two-dimensional wide-field optical mapping (WFOM), a simpler technique that uses a camera to observe large areas of the exposed cortex under visible light, can detect changes in both neural activity and haemodynamics at very high speeds. Although WFOM may not provide single-neuron or capillary-level resolution, it is an attractive and accessible approach to imaging large areas of the brain in awake, behaving mammals at speeds fast enough to observe widespread neural firing events, as well as their dynamic coupling to haemodynamics. Although such wide-field optical imaging techniques have a long history, the advent of genetically encoded fluorophores that can report neural activity with high sensitivity, together with modern technologies such as light-emitting diodes and sensitive, high-speed digital cameras, has driven renewed interest in WFOM.
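The basic quantity extracted from such fluorescence recordings is the fractional change ΔF/F; a minimal sketch of that computation on a wide-field image stack follows (the array shapes and baseline window are illustrative assumptions, not a prescription from the paper):

    import numpy as np

    def delta_f_over_f(stack, baseline_frames=100):
        """Fractional fluorescence change per pixel.

        stack: (time, y, x) wide-field image sequence. The baseline F0 is
        the per-pixel mean of the first frames, assumed to contain no
        stimulus-evoked activity."""
        stack = np.asarray(stack, dtype=float)
        f0 = stack[:baseline_frames].mean(axis=0)
        return (stack - f0) / f0

    # Example: 500 frames of 128x128 pixels with a step change in signal.
    stack = np.ones((500, 128, 128)) * 100.0
    stack[250:] *= 1.05  # 5% fluorescence increase after frame 250
    dff = delta_f_over_f(stack)
    print(dff[300].mean())  # ~0.05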
To facilitate the wider adoption and standardization of WFOM approaches for neuroscience and neurovascular coupling research, we provide here an overview of the basic principles of WFOM, considerations for implementation of wide-field fluorescence imaging of neural activity, spectroscopic analysis and interpretation of results. This article is part of the themed issue 'Interpreting BOLD: a dialogue between cognitive and cellular neuroscience'. © 2016 The Authors. 1. Synthetic White-light Imagery for the Wide-field Imager for Solar Probe Plus (WISPR) Science.gov (United States) Liewer, P. C.; Thernisien, A. F.; Vourlidas, A.; Howard, R.; DeForest, C. E.; DeJong, E.; Desai, A. 2015-12-01 The Solar Probe Plus trajectory, approaching within 10 solar radii, will enable the white-light imager, WISPR, to fly through coronal features now only imaged remotely. The dependency of the Thomson scattering on the imaging geometry (distance and angle from the Sun) dictates that the outer WISPR telescope will be sensitive to the emission from plasma close to the spacecraft, in contrast to the situation for imaging from Earth orbit. Thus WISPR will be the first 'local' imager, providing a crucial link between the large-scale corona and SPP's in-situ measurements. The high speed at perihelion will provide tomographic-like views of coronal structures at ≤1° resolution. As SPP approaches perihelion, WISPR, with a 95° radial by 58° transverse field of view, will resolve the fine-scale structure with high spatial resolution. To prepare for this unprecedented viewing of the structure of the inner corona, we are creating synthetic white-light images and animations from the WISPR viewpoint using the white-light ray-tracing package developed at NRL (available through SolarSoft). We will present simulated observations of multi-strand models of coronal streamers and flux ropes of various sizes and make comparisons with views from Earth, Solar Orbiter and SPP. Analysis techniques for WISPR images will also be discussed. 2. Electrowetting liquid lens array on curved substrates for wide field of view image sensor Science.gov (United States) Bang, Yousung; Lee, Muyoung; Won, Yong Hyub 2016-03-01 In this research, an electrowetting liquid lens array on curved substrates is developed for a wide-field-of-view image sensor. In conventional image-sensing systems, this kind of lens array is usually solid-state. In that state, the lens array, which resembles insect compound eyes in nature, has several limitations, such as degraded image quality and a narrow field of view, because the focal length of the lenses cannot be adjusted. To implement a more capable system, this paper develops a curved array of lenses based on the electrowetting effect, which can adjust the focal length of each lens. Fabrication of the curved lens array proceeds in several steps, including chamber fabrication, electrode and dielectric layer deposition, liquid injection, and encapsulation. IZO-coated convex glass, UV epoxy (NOA 68), DI water, and dodecane are used as constituent materials. The fabricated panel holds 23 by 23 lenses, and each lens has a 1 mm aperture with a 1.6 mm pitch between adjacent lenses. When voltage is applied to the device, each lens is observed to change from a concave to a convex state.
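That concave-to-convex switching is governed by the Young-Lippmann relation, cos θ(V) = cos θ₀ + ε₀ε_r V² / (2dγ). The small calculation below illustrates the effect; all material parameters are illustrative assumptions, not this device's measured values:

    import numpy as np

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def contact_angle(v, theta0_deg=160.0, eps_r=3.0, d=1e-6, gamma=0.04):
        """Young-Lippmann: cos(theta_V) = cos(theta_0) + eps0*eps_r*V^2/(2*d*gamma).
        d is the dielectric thickness (m), gamma the liquid-liquid interfacial
        tension (N/m); all parameter values here are illustrative guesses."""
        cos_t = np.cos(np.radians(theta0_deg)) + EPS0 * eps_r * v**2 / (2 * d * gamma)
        return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

    for v in (0, 20, 40, 60):
        print(v, "V ->", round(float(contact_angle(v)), 1), "deg")

As the applied voltage grows, the contact angle of the conducting liquid drops, the liquid-liquid interface bows outward, and the lens passes from a concave to a convex state, which is the switching behaviour reported above.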
Given the unique optical characteristics of a curved array of liquid lenses, such as controllable focal length and a wide field of view, we expect potential applications in various fields such as medical diagnostics, surveillance systems, and light-field photography. 3. Wide-Field Vibrational Phase Contrast Imaging Based on Coherent Anti-Stokes Raman Scattering Holography International Nuclear Information System (INIS) Lv Yong-Gang; Ji Zi-Heng; Dong Da-Shan; Gong Qi-Huang; Shi Ke-Bin 2015-01-01 We propose and implement a wide-field vibrational phase-contrast detection scheme to image the imaginary components of the third-order nonlinear susceptibility in a coherent anti-Stokes Raman scattering (CARS) microscope with full suppression of the non-resonant background. This technique is based on the unique ability to recover the phase of the generated CARS signal through holographic recording. By capturing the phase distributions of the generated CARS field from the sample and from the environment under resonant illumination, we demonstrate the retrieval of imaginary components in the CARS microscope and achieve background-free coherent Raman imaging. (paper) 4. Wide Field-of-View Soft X-Ray Imaging for Solar Wind-Magnetosphere Interactions Science.gov (United States) Walsh, B. M.; Collier, M. R.; Kuntz, K. D.; Porter, F. S.; Sibeck, D. G.; Snowden, S. L.; Carter, J. A.; Collado-Vega, Y.; Connor, H. K.; Cravens, T. E.; 2016-01-01 Soft X-ray imagers can be used to study the mesoscale and macroscale density structures that occur whenever and wherever the solar wind encounters neutral atoms at comets, the Moon, and both magnetized and unmagnetized planets. Charge exchange between high-charge-state solar wind ions and exospheric neutrals results in the isotropic emission of soft X-ray photons with energies from 0.1 to 2.0 keV. At Earth, this process occurs primarily within the magnetosheath and cusps. By providing a global view, wide field-of-view imaging can determine the significance of the various proposed solar wind-magnetosphere interaction mechanisms by evaluating their global extent and occurrence patterns. A summary of wide field-of-view (several to tens of degrees) soft X-ray imaging is provided, including slumped micropore microchannel reflectors, simulated images, and recent flight results. 5. Science.gov (United States) Yudin, Alexey N. 2011-11-01 This work studies the capability of a solid Schmidt camera to serve as a wide-field infrared lens for an aircraft system with whole-sphere coverage, working in the 8-14 um spectral range and coupled with a spherical focal array of megapixel class. Designs of a 16 mm f/0.2 lens with 60- and 90-degree sensor diagonals are presented, and their image quality is compared with a conventional solid design. An achromatic design with significantly improved performance, containing an enclosed soft correcting lens behind the protective front lens, is proposed. One of the main goals of the work is to estimate the benefits of curved detector arrays in 8-14 um wide-field systems. Coupling of the photodetector to the solid Schmidt camera by means of frustrated total internal reflection is considered, with a corresponding tolerance analysis. The whole lens, except the front element, is considered to be cryogenic, with the solid Schmidt unit to be flown by hydrogen for improvement of bulk transmission. 6.
Performance of the second MEMS space telescope for observation of extreme lightning from space Science.gov (United States) Jeon, Jin-A.; Lee, Hye Young; Kim, Ji Eun; Lee, Jik; Park, Il H. 2016-03-01 A small space telescope equipped with a micro-electro-mechanical system (MEMS) micro-mirror is applied to space missions for observing random, rare and temporal events like transient luminous events (TLEs). The measurement of TLEs with fine time resolution will show the different temporal profiles predicted by the various models for sprites, blue jets, elves and halos. The proposed space telescope consists of three components: two sub-telescopes with different focal lengths and a spectrometer. The trigger telescope, with a short focal length, surveys a wide field of view. The zoom-in telescope, with a long focal length, looks into a small field-of-view area that is part of the trigger telescope's wide field of view. Upon identifying a candidate TLE, the trigger telescope determines the location of the event and provides the location to the MEMS micro-mirror. Then the micro-mirror, which is placed as a pinhole in front of the zoom-in telescope, rotates its mirror plane by such an angle that the zoom-in telescope will watch the small field of view around the center of the event. In this manner, the zoom-in telescope achieves the zoom-in determined by its long focal length. The first such small space telescope, the MEMS Telescope for Extreme Lightning (MTEL), was launched into space in 2009 and identified a few candidate sprites. However, a power failure (overcharging of the solar battery) of the main satellite occurred, and MTEL was not able to continue space operation to acquire sizable statistics for TLE events. We developed and constructed the second small space telescope, called MTEL-II, to continue to observe TLE events in space. In this paper, we present the performance of MTEL-II based on ground tests. 7. Thirty Meter Telescope (TMT) Narrow Field Infrared Adaptive Optics System (NFIRAOS) real-time controller preliminary architecture Science.gov (United States) Kerley, Dan; Smith, Malcolm; Dunn, Jennifer; Herriot, Glen; Véran, Jean-Pierre; Boyer, Corinne; Ellerbroek, Brent; Gilles, Luc; Wang, Lianqi 2016-08-01 The Narrow Field Infrared Adaptive Optics System (NFIRAOS) is the first-light Adaptive Optics (AO) system for the Thirty Meter Telescope (TMT). A critical component of NFIRAOS is the Real-Time Controller (RTC) subsystem, which provides real-time wavefront correction by processing wavefront information to compute Deformable Mirror (DM) and Tip/Tilt Stage (TTS) commands. The National Research Council of Canada - Herzberg (NRC-H), in conjunction with TMT, has developed a preliminary design for the NFIRAOS RTC. The preliminary architecture for the RTC comprises several Linux-based servers. These servers are assigned various roles, including the High-Order Processing (HOP) servers, the Wavefront Corrector Controller (WCC) server, the Telemetry Engineering Display (TED) server, the Persistent Telemetry Storage (PTS) server, and additional testing and spare servers. There are up to six HOP servers that accept high-order wavefront pixels and perform parallelized pixel processing and wavefront reconstruction to produce wavefront corrector error vectors. The WCC server performs low-order mode processing, and synchronizes and aggregates the high-order wavefront corrector error vectors from the HOP servers to generate wavefront corrector commands.
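Functionally, this pipeline reduces to a matrix-vector wavefront reconstruction followed by a temporal integrator. The single-process sketch below captures that control step; the dimensions, gain, and random reconstruction matrix are illustrative placeholders, and the real RTC distributes this work across servers:

    import numpy as np

    rng = np.random.default_rng(2)
    N_SLOPES, N_ACTS = 2048, 1024  # illustrative dimensions
    # Placeholder reconstruction matrix; in a real AO system this comes from
    # an interaction-matrix calibration, not from random numbers.
    R = 1e-3 * rng.standard_normal((N_ACTS, N_SLOPES))
    GAIN = 0.4                     # integrator gain

    def rtc_step(slopes, dm_command):
        """One real-time frame: wavefront slopes -> actuator-space error ->
        integrator update of the deformable-mirror command vector."""
        error = R @ slopes                # matrix-vector wavefront reconstruction
        return dm_command - GAIN * error  # temporal integrator

    dm_command = np.zeros(N_ACTS)
    slopes = rng.standard_normal(N_SLOPES)  # one frame of measured slopes
    dm_command = rtc_step(slopes, dm_command)
    print(dm_command.shape)                 # (1024,)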
The Telemetry Engineering Display (TED) server is the RTC interface to TMT and other subsystems. The TED server receives all external commands and dispatches them to the rest of the RTC servers, and is responsible for aggregating several offloading and telemetry values that are reported to other subsystems within NFIRAOS and TMT. The TED server also provides the engineering GUIs and real-time displays. The Persistent Telemetry Storage (PTS) server contains fault-tolerant data storage that receives and stores telemetry data, including data for Point-Spread Function Reconstruction (PSFR). 8. Wide-field Infrared Survey Explorer Observations of the Evolution of Massive Star-forming Regions OpenAIRE Koenig, X. P.; Leisawitz, D. T.; Benford, D. J.; Rebull, L. M.; Padgett, D. L.; Assef, R. J. 2012-01-01 We present the results of a mid-infrared survey of 11 outer Galaxy massive star-forming regions and 3 open clusters with data from the Wide-field Infrared Survey Explorer (WISE). Using a newly developed photometric scheme to identify young stellar objects and exclude extragalactic contamination, we have studied the distribution of young stars within each region. These data tend to support the hypothesis that later generations may be triggered by the interaction of winds and radiation from th... 9. Hubble Space Telescope Deep Field Lesson Package. Teacher's Guide, Grades 6-8. Amazing Space: Education On-Line from the Hubble Space Telescope. Science.gov (United States) National Aeronautics and Space Administration, Washington, DC. This lesson guide accompanies the Hubble Deep Field set of 10 lithographs and introduces 4 astronomy lesson plans for middle school students. Lessons include: (1) "How Many Objects Are There?"; (2) "Classifying and Identifying"; (3) "Estimating Distances in Space"; and (4) "Review and Assessment." Appendices… 10. Wide field-of-view dual-band multispectral muzzle flash detection Science.gov (United States) Montoya, J.; Melchor, J.; Spiliotis, P.; Taplin, L. 2013-06-01 Sensor technologies are undergoing revolutionary advances, as seen in the rapid growth of multispectral methodologies. Increases in spatial, spectral, and temporal resolution, and in breadth of spectral coverage, render feasible sensors that function with unprecedented performance. A system was developed that addresses many of the key hardware requirements for a practical dual-band multispectral acquisition system, including a wide field of view and the spectral/temporal shift between the dual bands. The system was designed using a novel dichroic beam-splitter and dual band-pass filter configuration that creates two side-by-side images of a scene on a single sensor. A high-speed CMOS sensor was used to simultaneously capture data from the entire scene in both spectral bands using a short-focal-length lens that provided a wide field of view. The beam-splitter components were arranged such that the two images were maintained in optical alignment and real-time intra-band processing could be carried out using only simple arithmetic on the image halves. An experiment was performed on the system's limitations in addressing multispectral detection requirements; it characterized the system's low spectral variation across its wide field of view. This paper provides lessons learned on the general limitations of key hardware components required for multispectral muzzle flash detection, using the system as a hardware example combined with simulated multispectral muzzle flash and background signatures. 11.
Wide field fluorescence imaging in narrow passageways using scanning fiber endoscope technology Science.gov (United States) Lee, Cameron M.; Chandler, John E.; Seibel, Eric J. 2010-02-01 An ultrathin scanning fiber endoscope (SFE) has been developed for high-resolution imaging of regions in the body that are commonly inaccessible. The SFE produces 500-line color images at a 30 Hz frame rate while maintaining a 1.2-1.7 mm outer diameter. The distal tip of the SFE houses a 9 mm rigid scan engine attached to a highly flexible tether (minimum bend radius …). Compared with existing technologies, the unique characteristics of this system have allowed the SFE to navigate narrow passages without sacrificing image quality. To date, the SFE has been used for in vivo imaging of the bile duct, esophagus and peripheral airways. In this study, the standard SFE operation was tailored to capture wide-field fluorescence images and spectra. Green (523 nm) and blue (440 nm) lasers were used as illumination sources, while the white-balance gain values were adjusted to accentuate the red fluorescence signal. To demonstrate wide-field fluorescence imaging of small lumens, the SFE was inserted into a phantom model of a human pancreatobiliary tract and navigated to a custom fluorescent target. Both wide-field fluorescence and standard color images of the target were captured to demonstrate multimodal imaging. 12. Wide-field four-channel fluorescence imager for biological applications Science.gov (United States) Thakur, Madhuri; Melnik, Dmitry; Barnett, Heather; Daly, Kevin; Moran, Christine H.; Chang, Wei-Shun; Link, Stephan; Bucher, Christopher Theodore; Kittrell, Carter; Curl, Robert 2010-03-01 A wide-field four-channel fluorescence imager has been developed. The instrument uses four expanded laser beams to image a large section (6 mm×9 mm). An object can be sequentially illuminated with any combination of 408-, 532-, 658-, and 784-nm lasers for arbitrary (down to 1 ms) exposure times for each laser. Just two notch filters block scattered light from all four lasers. The design approach described here offers great flexibility in treatment of objects, very good sensitivity, and a wide field of view at low cost. There appears to be no commercial instrument capable of simultaneous fluorescence imaging of a wide field of view with four-laser excitation. Some possible applications are following events such as flow and mixing in microchannel systems, the transmission of biological signals across a culture, and following simulations of biological membrane diffusion. It can also be used in DNA sequencing by synthesis to follow the progress of the photolytic removal of dye and terminator. 13. … International Nuclear Information System (INIS) Ki, Yongkan; Kim, Wontaek; Nam, Jiho; Kim, Donghyun; Jeon, Hosang; Park, Dahl; Kim, Dongwon 2011-01-01 Purpose: Wide-field radiation therapy (WFRT) is an effective treatment for widespread bone metastasis. We evaluated local-field irradiation (LFI) after fractionated WFRT (f-WFRT) for treating patients with multiple painful bone lesions. Methods and Materials: From 1998 to 2007, 32 patients with multiple bone metastases were treated with fractionated LFI (f-LFI) after f-WFRT. All patients initially received 15 Gy in 5 fractions to a wide field, followed by LFI (9-15 Gy in 3 Gy fractions).
Response was assessed by evaluating the degree of pain relief using a visual analog scale before radiotherapy, after f-WFRT, and after f-LFI. Results: Fractionated LFI following f-WFRT yielded an overall relief rate of 93.8% and a complete relief rate of 43.8%. The rate of appearance of new disease was 6.3% for patients with complete relief, 20.5% for patients with partial relief, and 50% for patients with no relief. Conclusion: Fractionated LFI after f-WFRT is a well-tolerated and effective treatment for multiple metastatic bone disease. 14. A Wide Field Search for Extreme Trans-Neptunian Objects and a Super Earth in the Solar System Science.gov (United States) Trujillo, Chadwick A.; Sheppard, Scott S.; Tholen, David J. 2017-10-01 We are currently conducting the deepest and widest-field survey to date sensitive to Extreme Trans-Neptunian Objects (ETNOs), bodies that have semimajor axes greater than 150 au and perihelia greater than 35 au. Our survey is also sensitive to distant super-Earth-mass planets such as the one recently hypothesized to explain the orbital characteristics of ETNOs. Our survey instruments are the Subaru Telescope Hyper Suprime-Cam (HSC) and the Cerro Tololo Interamerican Observatory Dark Energy Camera (DECam). HSC has a field of view of 1.75 square degrees on an 8-meter-diameter telescope, and DECam has a field of view of about 3 square degrees on a 4-meter-diameter telescope. HSC and DECam are two of the largest light-grasp survey tools in the world capable of detecting the hypothesized planet. We have surveyed a few thousand square degrees with DECam (to magnitude 24) and HSC (to magnitude 25). We probe both specific locations in the sky that are likely to contain the hypothesized planet and a nearly uniform longitude range in both hemispheres of the sky, to minimize the impact of observational bias. We will discuss current survey progress, which to date has found several distant objects beyond 50 au with interesting orbital properties. 15. THE SIZE EVOLUTION OF PASSIVE GALAXIES: OBSERVATIONS FROM THE WIDE-FIELD CAMERA 3 EARLY RELEASE SCIENCE PROGRAM Energy Technology Data Exchange (ETDEWEB) Ryan, R. E. Jr. [Physics Department, University of California, Davis, CA 95616 (United States); McCarthy, P. J. [Observatories of the Carnegie Institute of Washington, Pasadena, CA 91101 (United States); Cohen, S. H.; Rutkowski, M. J.; Mechtley, M. R.; Windhorst, R. A. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287 (United States); Yan, H. [Center for Cosmology and Astroparticle Physics, Ohio State University, Columbus, OH 43210 (United States); Hathi, N. P. [Department of Physics and Astronomy, University of California, Riverside, CA 92521 (United States); Koekemoer, A. M.; Bond, H. E.; Bushouse, H. [Space Telescope Science Institute, Baltimore, MD 21218 (United States); O'Connell, R. W. [Department of Astronomy, University of Virginia, Charlottesville, VA 22904 (United States); Balick, B. [Department of Astronomy, University of Washington, Seattle, WA 98195 (United States); Calzetti, D. [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States); Crockett, R. M. [Department of Physics, University of Oxford, Oxford OX1 3PU (United Kingdom); Disney, M. [School of Physics and Astronomy, Cardiff University, Cardiff CF24 3AA (United Kingdom); Dopita, M. A. [Research School of Astronomy and Astrophysics, The Australian National University, Weston Creek, ACT 2611 (Australia); Frogel, J. A.
[Galaxies Unlimited, Lutherville, MD 21093 (United States); Hall, D. N. B. [Institute for Astronomy, University of Hawaii, Honolulu, HI 96822 (United States); Holtzman, J. A., E-mail: [email protected] [Department of Astronomy, New Mexico State University, Las Cruces, NM 88003 (United States); and others 2012-04-10 We present the size evolution of passively evolving galaxies at z ≈ 2 identified in Wide-Field Camera 3 imaging from the Early Release Science program. Our sample was constructed using an analog to the passive BzK galaxy selection criterion, which isolates galaxies with little or no ongoing star formation at z ≳ 1.5. We identify 30 galaxies in ≈40 arcmin² to H < 25 mag. By fitting the 10-band Hubble Space Telescope photometry from 0.22 μm ≲ λ_obs ≲ 1.6 μm with stellar population synthesis models, we simultaneously determine photometric redshift, stellar mass, and a bevy of other population parameters. Based on the six galaxies with published spectroscopic redshifts, we estimate a typical redshift uncertainty of ≈0.033(1 + z). We determine effective radii from Sersic profile fits to the H-band image using an empirical point-spread function. By supplementing our data with published samples, we propose a mass-dependent size-evolution model for passively evolving galaxies, where the most massive galaxies (M* ≈ 10¹¹ M☉) undergo the strongest evolution from z ≈ 2 to the present. Parameterizing the size evolution as (1 + z)^(-α), we find a tentative scaling of α ≈ (-0.6 ± 0.7) + (0.9 ± 0.4) log(M*/10⁹ M☉), where the relatively large uncertainties reflect the poor sampling in stellar mass due to the low numbers of high-redshift systems. We discuss the implications of this result for the redshift evolution of the M*-R_e relation for red galaxies. 16. The MaNGA Integral Field Unit Fiber Feed System for the Sloan 2.5 m Telescope Science.gov (United States) Drory, N.; MacDonald, N.; Bershady, M. A.; Bundy, K.; Gunn, J.; Law, D. R.; Smith, M.; Stoll, R.; Tremonti, C. A.; Wake, D. A.; Yan, R.; Weijmans, A. M.; Byler, N.; Cherinka, B.; Cope, F.; Eigenbrot, A.; Harding, P.; Holder, D.; Huehnerhoff, J.; Jaehnig, K.; Jansen, T. C.; Klaene, M.; Paat, A. M.; Percival, J.; Sayres, C. 2015-02-01 We describe the design, manufacture, and performance of bare-fiber integral field units (IFUs) for the SDSS-IV survey Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) on the Sloan 2.5 m telescope at Apache Point Observatory. MaNGA is a luminosity-selected integral-field spectroscopic survey of 10⁴ local galaxies covering 360-1030 nm at R ∼ 2200. The IFUs have hexagonal dense packing of fibers with a packing regularity of 3 μm (rms) and a throughput of 96 ± 0.5% from 350 nm to 1 μm in the lab. Their sizes range from 19 to 127 fibers (3-7 hexagonal layers) using Polymicro FBP 120:132:150 μm core:clad:buffer fibers to reach a fill fraction of 56%. High throughput (and low focal-ratio degradation (FRD)) is achieved by maintaining the fiber cladding and buffer intact, ensuring excellent surface polish, and applying a multi-layer anti-reflection (AR) coating to the input and output surfaces. In operations on-sky, the IFUs show only an additional 2.3% FRD-related variability in throughput despite repeated mechanical stressing during plate plugging (however, other losses are present).
The IFUs achieve on-sky throughput 5% above the single-fiber feeds used in SDSS-III/BOSS, attributable to equivalent performance compared to single fibers and additional gains from the AR coating. The manufacturing process is geared toward mass production of high-multiplex systems. The low-stress process involves a precision ferrule with a hexagonal inner shape designed to lead inserted fibers to settle in a dense hexagonal pattern. The ferrule ID is tapered at progressively shallower angles toward its tip, and the final 2 mm are straight and only a few microns larger than necessary to hold the desired number of fibers. Our IFU manufacturing process scales easily to accommodate other fiber sizes and can produce IFUs with substantially larger fiber counts. To assure quality, automated testing in a simple and inexpensive system enables complete characterization of throughput and … 17. The MaNGA integral field unit fiber feed system for the Sloan 2.5 m telescope International Nuclear Information System (INIS) Drory, N.; MacDonald, N.; Byler, N.; Bershady, M. A.; Smith, M.; Tremonti, C. A.; Wake, D. A.; Eigenbrot, A.; Jaehnig, K.; Bundy, K.; Gunn, J.; Law, D. R.; Cherinka, B.; Stoll, R.; Yan, R.; Weijmans, A. M.; Cope, F.; Holder, D.; Huehnerhoff, J.; Harding, P. 2015-01-01 We describe the design, manufacture, and performance of bare-fiber integral field units (IFUs) for the SDSS-IV survey Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) on the Sloan 2.5 m telescope at Apache Point Observatory. MaNGA is a luminosity-selected integral-field spectroscopic survey of 10⁴ local galaxies covering 360–1030 nm at R∼2200. The IFUs have hexagonal dense packing of fibers with a packing regularity of 3 μm (rms) and a throughput of 96 ± 0.5% from 350 nm to 1 μm in the lab. Their sizes range from 19 to 127 fibers (3–7 hexagonal layers) using Polymicro FBP 120:132:150 μm core:clad:buffer fibers to reach a fill fraction of 56%. High throughput (and low focal-ratio degradation (FRD)) is achieved by maintaining the fiber cladding and buffer intact, ensuring excellent surface polish, and applying a multi-layer anti-reflection (AR) coating to the input and output surfaces. In operations on-sky, the IFUs show only an additional 2.3% FRD-related variability in throughput despite repeated mechanical stressing during plate plugging (however, other losses are present). The IFUs achieve on-sky throughput 5% above the single-fiber feeds used in SDSS-III/BOSS, attributable to equivalent performance compared to single fibers and additional gains from the AR coating. The manufacturing process is geared toward mass production of high-multiplex systems. The low-stress process involves a precision ferrule with a hexagonal inner shape designed to lead inserted fibers to settle in a dense hexagonal pattern. The ferrule ID is tapered at progressively shallower angles toward its tip, and the final 2 mm are straight and only a few microns larger than necessary to hold the desired number of fibers. Our IFU manufacturing process scales easily to accommodate other fiber sizes and can produce IFUs with substantially larger fiber counts. To assure quality, automated testing in a simple and inexpensive system enables complete characterization of throughput … 18. The MaNGA integral field unit fiber feed system for the Sloan 2.5 m telescope Energy Technology Data Exchange (ETDEWEB) Drory, N. [McDonald Observatory, The University of Texas at Austin, 1 University Station, Austin, TX 78712 (United States); MacDonald, N.; Byler, N.
[Department of Astronomy, University of Washington, Box 351580 Seattle, WA 98195 (United States); Bershady, M. A.; Smith, M.; Tremonti, C. A.; Wake, D. A.; Eigenbrot, A.; Jaehnig, K. [Department of Astronomy, University of Wisconsin, 475 N. Charter St., Madison, WI 53706 (United States); Bundy, K. [Kavli Institute for the Physics and Mathematics of The Universe (Kavli IPMU, WPI), Todai Institutes for Advanced Study, The University of Tokyo, Kashiwa, Japan 277-8583 (Japan); Gunn, J. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); Law, D. R.; Cherinka, B. [Dunlap Institute for Astronomy and Astrophysics, University of Toronto, 50 St. George St, Toronto, ON M5S 3H4 (Canada); Stoll, R. [C Technologies, Inc., 757 Route 202/206, Bridgewater, NJ 08807 (United States); Yan, R. [Department of Physics and Astronomy, University of Kentucky, Lexington, Kentucky, 40506-0055 (United States); Weijmans, A. M. [School of Physics and Astronomy, University of St Andrews, North Haugh, St Andrews, Fife KY16 9SS (United Kingdom); Cope, F.; Holder, D.; Huehnerhoff, J. [Apache Point Observatory, P.O. Box 59, Sunspot, NM 88349 (United States); Harding, P., E-mail: [email protected] [Department of Astronomy, Case Western Reserve University, Cleveland, OH 44106 (United States); and others 2015-02-01 We describe the design, manufacture, and performance of bare-fiber integral field units (IFUs) for the SDSS-IV survey Mapping Nearby Galaxies at Apache Point Observatory (MaNGA) on the Sloan 2.5 m telescope at Apache Point Observatory. MaNGA is a luminosity-selected integral-field spectroscopic survey of 10⁴ local galaxies covering 360–1030 nm at R∼2200. The IFUs have hexagonal dense packing of fibers with a packing regularity of 3 μm (rms) and a throughput of 96 ± 0.5% from 350 nm to 1 μm in the lab. Their sizes range from 19 to 127 fibers (3–7 hexagonal layers) using Polymicro FBP 120:132:150 μm core:clad:buffer fibers to reach a fill fraction of 56%. High throughput (and low focal-ratio degradation (FRD)) is achieved by maintaining the fiber cladding and buffer intact, ensuring excellent surface polish, and applying a multi-layer anti-reflection (AR) coating to the input and output surfaces. In operations on-sky, the IFUs show only an additional 2.3% FRD-related variability in throughput despite repeated mechanical stressing during plate plugging (however, other losses are present). The IFUs achieve on-sky throughput 5% above the single-fiber feeds used in SDSS-III/BOSS, attributable to equivalent performance compared to single fibers and additional gains from the AR coating. The manufacturing process is geared toward mass production of high-multiplex systems. The low-stress process involves a precision ferrule with a hexagonal inner shape designed to lead inserted fibers to settle in a dense hexagonal pattern. The ferrule ID is tapered at progressively shallower angles toward its tip, and the final 2 mm are straight and only a few microns larger than necessary to hold the desired number of fibers. Our IFU manufacturing process scales easily to accommodate other fiber sizes and can produce IFUs with substantially larger fiber counts. To assure quality, automated testing in a simple and inexpensive system enables complete characterization of … 19.
The Wide Field Imager of the International X-ray Observatory Energy Technology Data Exchange (ETDEWEB) Stefanescu, A., E-mail: [email protected] [Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Johannes Gutenberg-Universitaet, Inst. f. anorganische und analytische Chemie, 55099 Mainz (Germany); Bautz, M.W. [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139-4307 (United States); Burrows, D.N. [Department of Astronomy and Astrophysics, Pennsylvania State University, University Park, PA 16802 (United States); Bombelli, L.; Fiorini, C. [Politecnico di Milano, Dipartimento di Elettronica e Informazione, Milano (Italy); INFN Sezione di Milano, Milano (Italy); Fraser, G. [Space Research Centre, Department of Physics and Astronomy, University of Leicester, University Road, Leicester LE1 7RH (United Kingdom); Heinzinger, K. [PNSensor GmbH, Roemerstr. 28, 80803 Muenchen (Germany); Herrmann, S. [Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Max-Planck-Institut fuer extraterrestrische Physik, Giessenbachstr., 85748 Garching (Germany); Kuster, M. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Schlossgartenstr. 9, 64289 Darmstadt (Germany); Lauf, T. [Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Max-Planck-Institut fuer extraterrestrische Physik, Giessenbachstr., 85748 Garching (Germany); Lechner, P. [PNSensor GmbH, Roemerstr. 28, 80803 Muenchen (Germany); Lutz, G. [Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Max-Planck-Institut fuer Physik, Foehringer Ring 6, 80805 Muenchen (Germany); Majewski, P. [PNSensor GmbH, Roemerstr. 28, 80803 Muenchen (Germany); Meuris, A. [Max-Planck-Institut Halbleiterlabor, Otto-Hahn-Ring 6, 81739 Muenchen (Germany); Max-Planck-Institut fuer extraterrestrische Physik, Giessenbachstr., 85748 Garching (Germany); Murray, S.S. [Harvard/Smithsonian Center for Astrophysics, 60 Garden St., Cambridge, MA 02138 (United States) 2010-12-11 The International X-ray Observatory (IXO) will be a joint X-ray observatory mission by ESA, NASA and JAXA. It will have a large effective-area (3 m² at 1.25 keV) grazing-incidence mirror system with good angular resolution (5 arcsec at 0.1-10 keV) and will feature a comprehensive suite of scientific instruments: an X-ray Microcalorimeter Spectrometer, a High Time Resolution Spectrometer, an X-ray Polarimeter, an X-ray Grating Spectrometer, a Hard X-ray Imager and a Wide-Field Imager. The Wide Field Imager (WFI) has a field of view of 18 arcmin × 18 arcmin. It will be sensitive between 0.1 and 15 keV, offer the full angular resolution of the mirrors, and provide good energy resolution. The WFI will be implemented as a 6 in. wafer-scale monolithic array of 1024 × 1024 pixels of 100 × 100 μm² size. The DEpleted P-channel Field-Effect Transistors (DEPFETs) forming the individual pixels are devices combining the functionalities of both detector and amplifier. Signal electrons are collected in a potential well below the transistor's gate, modulating the transistor current. Even when the device is powered off, the signal charge is collected and kept in the potential well below the gate until it is explicitly cleared. This makes flexible and fast readout modes possible. 20. WISPIR: A Wide-Field Imaging SPectrograph for the InfraRed for the SPICA Observatory Science.gov (United States) Benford, Dominic J.; Mundy, Lee G.
2010-01-01 We have undertaken a study of a far infrared imaging spectrometer based on a Fourier transform spectrometer that uses well-understood, high-maturity optics, cryogenics, and detectors to further our knowledge of the chemical and astrophysical evolution of the Universe as it formed planets, stars, and the variety of galaxy morphologies that we observe today. The instrument, Wide-field Imaging Spectrometer for the InfraRed (WISPIR), would operate on the SPICA observatory and would feature a spectral range from 35–210 microns and a spectral resolving power of R=1,000 to 6,000, depending on wavelength. WISPIR provides a choice of full-field spectral imaging over a 2'x2' field or long-slit spectral imaging along a 2' slit for studies of astrophysical structures in the local and high-redshift Universe. WISPIR in long-slit mode will attain a sensitivity two orders of magnitude better than what is currently available. 1. A Wide Field Auroral Imager (WFAI) for low Earth orbit missions Directory of Open Access Journals (Sweden) N. P. Bannister 2007-03-01 Full Text Available A comprehensive understanding of the solar wind interaction with Earth's coupled magnetosphere-ionosphere system requires an ability to observe the charged particle environment and auroral activity from the same platform, generating particle and photon image data which are matched in time and location. While unambiguous identification of the particles giving rise to the aurora requires a Low Earth Orbit satellite, obtaining adequate spatial coverage of aurorae with the relatively limited field of view of current spaceborne auroral imaging systems requires much higher orbits. A goal for future satellite missions, therefore, is the development of compact, wide field-of-view optics permitting high spatial and temporal resolution ultraviolet imaging of the aurora from small spacecraft in low polar orbit. Microchannel plate optics offer a method of achieving the required performance. We describe a new, compact instrument design which can observe a wide field-of-view with the required spatial resolution. We report the focusing of 121.6 nm radiation using a spherically-slumped, square-pore microchannel plate with a focal length of 32 mm and an F number of 0.7. Measurements are compared with detailed ray-trace simulations of imaging performance. The angular resolution is 2.7±0.2° for the prototype, corresponding to a footprint ~33 km in diameter for an auroral altitude of 110 km and a spacecraft altitude of 800 km. In preliminary analysis, a more recent optic has demonstrated a full width at half maximum of 5.0±0.3 arcminutes, corresponding to a footprint of ~1 km from the same spacecraft altitude. We further report the imaging properties of a convex microchannel plate detector with planar resistive anode readout; this detector, whose active surface has a radius of curvature of only 100 mm, is shown to meet the spatial resolution and sensitivity requirements of the new wide field auroral imager (WFAI). 2. Computer-aided discovery of debris disk candidates: A case study using the Wide-Field Infrared Survey Explorer (WISE) catalog Science.gov (United States) Nguyen, T.; Pankratius, V.; Eckman, L.; Seager, S. 2018-04-01 Debris disks around stars other than the Sun have received significant attention in studies of exoplanets, specifically exoplanetary system formation.
Since debris disks are major sources of infrared emissions, infrared survey data such as the Wide-Field Infrared Survey (WISE) catalog potentially harbor numerous debris disk candidates. However, it is currently challenging to perform disk candidate searches for over 747 million sources in the WISE catalog due to the high probability of false positives caused by interstellar matter, galaxies, and other background artifacts. Crowdsourcing techniques have thus started to harness citizen scientists for debris disk identification, since humans can be easily trained to distinguish between desired features and irrelevant noise. With a limited number of citizen scientists, however, increasing data volumes from large surveys will inevitably lead to analysis bottlenecks. To overcome this scalability problem and push the current limits of automated debris disk candidate identification, we present a novel approach that uses citizen science results as a seed to train machine-learning-based classification. In this paper, we detail a case study with a computer-aided discovery pipeline demonstrating the feasibility of this approach based on WISE catalog data and NASA's Disk Detective project. Our approach to debris disk candidate classification proved robust across a wide range of image qualities and features. Our hybrid approach, combining citizen science with algorithmic scalability, can facilitate the big-data processing needed for future detections, as envisioned for missions such as the Transiting Exoplanet Survey Satellite (TESS) and the Wide-Field Infrared Survey Telescope (WFIRST). 3. A wide-field suprachoroidal retinal prosthesis is stable and well tolerated following chronic implantation. Science.gov (United States) Villalobos, Joel; Nayagam, David A X; Allen, Penelope J; McKelvie, Penelope; Luu, Chi D; Ayton, Lauren N; Freemantle, Alexia L; McPhedran, Michelle; Basa, Meri; McGowan, Ceara C; Shepherd, Robert K; Williams, Chris E 2013-05-01 The safety of chronic implantation of a retinal prosthesis in the suprachoroidal space has not been established. This study aimed to determine the safety of a wide-field suprachoroidal electrode array following chronic implantation using histopathologic techniques and electroretinography. A platinum electrode array in a wide silicone substrate was implanted unilaterally in the suprachoroidal space in adult cats (n = 7). The lead and connector were tunneled out of the orbit and positioned subcutaneously. Postsurgical recovery was assessed using fundus photography and electroretinography (ERG). Following 3 months of passive implantation, the animals were terminated and the eyes assessed for the pathologic response to implantation. The implant was mechanically stable in the suprachoroidal space during the course of the study. The implanted eye showed a transient increase in ERG response amplitude at 2 weeks, which returned to normal by 3 months. Pigmentary changes were observed at the distal end of the implant, near the optic disc. Histopathologic assessment revealed a largely intact retina and a thin fibrous capsule around the suprachoroidal implant cavity. The foreign body response was minimal, with sporadic presence of macrophages and no active inflammation. All implanted eyes were negative for bacterial or fungal infections. A midgrade granuloma and thick fibrous buildup surrounded the extraocular cable. Scleral closure was maintained in six of seven eyes. There were no staphylomas or choroidal incarceration.
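The Disk Detective entry above pairs citizen-science labels with a machine-learning classifier. The following is a minimal sketch of that general idea, not the published pipeline: the choice of WISE colors as features, the toy labels, and all numbers are illustrative assumptions.

```python
# Sketch: training a classifier on citizen-science-vetted WISE sources, in the
# spirit of the Disk Detective pipeline described above. Synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Stand-in for a table of WISE photometry: columns W1, W2, W3, W4 (magnitudes).
mags = rng.normal(loc=[10.0, 9.9, 9.5, 8.8], scale=0.8, size=(2000, 4))
# Stand-in for citizen-science labels: 1 = disk candidate, 0 = contaminant.
labels = (mags[:, 2] - mags[:, 3] > 0.8).astype(int)   # toy infrared-excess rule

# Infrared colors are a natural feature set for excess detection.
colors = np.column_stack([mags[:, 0] - mags[:, 1],
                          mags[:, 1] - mags[:, 2],
                          mags[:, 2] - mags[:, 3]])

X_train, X_test, y_train, y_test = train_test_split(
    colors, labels, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```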
A wide-field retinal prosthesis was stable and well tolerated during long-term suprachoroidal implantation in a cat model. The surgical approach was reproducible and overall safe. 4. Simple concept for a wide-field lensless digital holographic microscope using a laser diode Directory of Open Access Journals (Sweden) 2015-09-01 Full Text Available Wide-field, lensless digital holographic microscopy is a new microscopic imaging technique for telemedicine and for resource-limited settings [1]. In this contribution we propose a very simple wide-field lensless digital holographic microscope using a laser diode. It is based on in-line digital holography, which is capable of providing amplitude and phase images of a sample through numerical reconstruction. The numerical reconstruction consists of the angular spectrum propagation method together with a phase retrieval algorithm. Amplitude and phase images of the sample with a resolution of ∼2 µm and a ∼24 mm² field of view are obtained. We evaluate our setup by first imaging the 1951 USAF resolution test chart to verify the resolution. Second, we record holograms of blood smears and diatoms. The individual specimens can be easily identified after the numerical reconstruction. Our system is a very simple, compact, and low-cost realization of a microscope capable of imaging biological samples. The availability of the phase provides topographic information about the sample, extending the application of this system beyond biological samples to transparent microstructures. It is suitable for fault detection and for shape and roughness measurements of these structures. 5. SHOK—The First Russian Wide-Field Optical Camera in Space Science.gov (United States) Lipunov, V. M.; Gorbovskoy, E. S.; Kornilov, V. G.; Panasyuk, M. I.; Amelushkin, A. M.; Petrov, V. L.; Yashin, I. V.; Svertilov, S. I.; Vedenkin, N. N. 2018-02-01 Two fast, fixed, very wide-field SHOK cameras are installed onboard the Lomonosov spacecraft. The main goal of this experiment is the observation of GRB optical emission before, synchronously with, and after the gamma-ray emission. The field of view of each of the cameras is placed in the gamma-ray burst detection area of other devices located onboard the "Lomonosov" spacecraft. SHOK provides measurements of optical emissions with a magnitude limit of ∼9–10 mag on a single frame with an exposure of 0.2 seconds. The device is designed for continuous sky monitoring at optical wavelengths in a very wide field of view (1000 square degrees per camera), and for detection and localization of fast time-varying (transient) optical sources on the celestial sphere, including provisional and synchronous time recording of optical emissions from the gamma-ray burst error boxes detected by the BDRG device, initiated by a control signal (alert trigger) from the BDRG. The Lomonosov spacecraft has two identical devices, SHOK1 and SHOK2. The core of each SHOK device is a fast 11-Megapixel CCD. Each SHOK device is a monoblock consisting of an optical-emission observation node, an electronics node, elements of the mechanical construction, and the body. 6. Wide-field Spatio-Spectral Interferometry: Bringing High Resolution to the Far-Infrared Science.gov (United States) Leisawitz, David Wide-field spatio-spectral interferometry combines spatial and spectral interferometric data to provide integral field spectroscopic information over a wide field of view.
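The numerical reconstruction named in the lensless-microscope entry above rests on angular spectrum propagation. Below is a minimal numpy sketch of that propagation step; the wavelength, pixel pitch, and propagation distance are illustrative assumptions, and the phase-retrieval iteration that would normally wrap this step is omitted.

```python
# Sketch: angular spectrum propagation of a complex field, the core of the
# numerical reconstruction used in in-line digital holography.
import numpy as np

def angular_spectrum_propagate(field, wavelength, pixel_pitch, distance):
    """Propagate a sampled complex field by `distance` (all units in meters)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)   # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # kz of the transfer function; evanescent components are suppressed.
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * distance) * (arg > 0)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Example: back-propagate a recorded hologram amplitude to the sample plane.
hologram = np.ones((512, 512))                      # placeholder intensity
field = angular_spectrum_propagate(np.sqrt(hologram), 650e-9, 2.2e-6, -1.5e-3)
amplitude, phase = np.abs(field), np.angle(field)
```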
This technology breaks through a mission cost barrier that stands in the way of spatially resolving and spectroscopically measuring, at far-infrared wavelengths, the objects that will lead to a deep understanding of planetary system and galaxy formation processes. A space-based far-IR interferometer will combine Spitzer's superb sensitivity with a two-order-of-magnitude gain in angular resolution, and with spectral resolution in the thousands. With the possible exception of detector technology, which is advancing with support from other research programs, the greatest challenge for far-IR interferometry is to demonstrate that the interferometer will actually produce the images and spectra needed to satisfy mission science requirements. With past APRA support, our team has already developed the highly specialized hardware testbed, image projector, computational model, and image construction software required for the proposed effort, and we have access to an ideal test facility. 7. Radiometric calibration of wide-field camera system with an application in astronomy Science.gov (United States) Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika 2017-09-01 The camera response function (CRF) is widely used for the description of the relationship between scene radiance and image brightness. The most common application of CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately of little use with astronomical image data, mostly due to its nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera. 8. Miniaturized high-resolution wide-field contact lens for panretinal photocoagulation Directory of Open Access Journals (Sweden) Koushan K 2014-04-01 Full Text Available Keyvan Koushan, KV Chalam Department of Ophthalmology, University of Florida College of Medicine, Jacksonville, FL, USA Background and objective: We describe a miniaturized lightweight high-refractive-index panretinal contact lens for diagnostic and therapeutic visualization of the peripheral retina. Instrument design: The miniaturized high-resolution wide-field contact lens includes three optical elements in a light (15 g) and miniaturized (16 mm footplate, 24 mm external aperture, and 21 mm vertical height) casing, contributing to a total dioptric power of +171 diopters. This lens provides up to 165° visualization of the retina for diagnostic and therapeutic applications while allowing easier placement due to its miniaturization. Conclusion: This new lens (50% lighter and 89% smaller) improves upon earlier contact lenses for visualization of the peripheral retina. Keywords: contact lens, panretinal photocoagulation, retinal examination, peripheral retina, high resolution view, wide-angle lens, lens 9. Results of the magnetic field measurements of CP stars carried out with the Russian 6-m telescope. II. Observations in 2008 Science.gov (United States) Romanyuk, I. I.; Semenko, E. A.; Kudryavtsev, D. O.
2015-10-01 We present the results of the magnetic field measurements of 37 chemically peculiar and 4 normal main sequence stars using circularly polarized spectra obtained in 2008 with a Zeeman analyzer on the Main Stellar Spectrograph (MSS) of the Russian 6-m telescope (BTA). Four new magnetic stars have been discovered (HD25999, HD35100, HD96237, and HD279021), the presence of a field was suspected in two stars (HD2887 and BD-12°2366), and 16 previously known CP stars continued to be monitored to study their fields. The results of the longitudinal magnetic field B_e measurements show that in stars with narrow spectral lines, systematic errors in B_e determination do not exceed 10-20 G, which is within the statistical error. Our study of stars with reliable phase curves of the longitudinal field B_e shows that there are no instrumental effects which can distort the observations. 10. Wide-range Vacuum Measurements from MWNT Field Emitters Grown Directly on Stainless Steel Substrates Science.gov (United States) Zhang, Jian; Li, Detian; Zhao, Yangyang; Cheng, Yongjun; Dong, Changkun 2016-01-01 The field emission properties and the vacuum measurement application are investigated for multi-walled carbon nanotubes (MWNTs) grown directly on catalytic stainless steel substrates. The MWNT emitters present excellent emission properties after the acid treatment of the substrate. The MWNT gauge is able to work down to the extreme-high vacuum (XHV) range with linear measurement performance over a wide range from 10⁻¹¹ to 10⁻⁶ Torr. A modulating grid is introduced, improving the gauge sensitivity. The extension of the lower pressure limit is attributed largely to the low outgassing resulting from the direct growth of the MWNTs and to the design of the electron source. 11. A new system for quantitative evaluation of infant gaze capabilities in a wide visual field. Science.gov (United States) Pratesi, Andrea; Cecchi, Francesca; Beani, Elena; Sgandurra, Giuseppina; Cioni, Giovanni; Laschi, Cecilia; Dario, Paolo 2015-09-07 The visual assessment of infants poses specific challenges: many techniques that are used on adults are based on the patient's response, and are not suitable for infants. Significant advances in eye-tracking have made the assessment of infant visual capabilities easier; however, eye-tracking still requires the subject's collaboration in most cases, limiting its application in infant research. Moreover, there is a lack of transferability to clinical practice, and thus a need emerges for a new tool to measure the paradigms and explore the most common visual competences in a wide visual field. This work presents the design, development and preliminary testing of a new system for measuring an infant's gaze in a wide visual field, called CareToy C: CareToy for Clinics. The system is based on a commercial eye tracker (SmartEye) with six cameras running at 60 Hz, suitable for measuring an infant's gaze. In order to stimulate the infant visually and audibly, a mechanical structure has been designed to support five speakers and five screens at a specific distance (60 cm) and angle: one in the centre, two on the right-hand side and two on the left (at 30° and 60° respectively). Different tasks have been designed in order to evaluate the system capability to assess the infant's gaze movements during different conditions (such as gap, overlap or audio-visual paradigms). Nine healthy infants aged 4-10 months were assessed as they performed the visual tasks at random.
We developed a system able to measure an infant's gaze in a wide visual field covering a total visual range of ±60° from the centre with an intermediate evaluation at ±30°. Moreover, the same system, thanks to different integrated software, was able to provide different visual paradigms (such as gap, overlap and audio-visual), assessing and comparing different visual and multisensory sub-competencies. The proposed system enabled the integration of a commercial eye-tracker into a purpose-built setup 12. Adaptive optics for fluorescence wide-field microscopy using spectrally independent guide star and markers. Science.gov (United States) Vermeulen, Pierre; Muro, Eleonora; Pons, Thomas; Loriette, Vincent; Fragola, Alexandra 2011-07-01 We describe the implementation and use of an adaptive optics loop in the imaging path of a commercial wide field microscope. We show that it is possible to maintain the optical performance of the original microscope when imaging through aberrant biological samples. The sources used for illuminating the adaptive optics loop are spectrally independent, in excitation and emission, from the sample, so they do not appear in the final image, and their use does not contribute to sample bleaching. Results are compared with equivalent images obtained with an identical microscope devoid of an adaptive optics system. 13. Electronic sand table three-dimensional display with the wide field of view Science.gov (United States) Wu, Lei; Sang, Xinzhu; Chen, Duo 2017-10-01 Sand table technology has great prospects in many fields. However, sand table technology is mainly based on two-dimensional (2D) display. Based on a 3D display and a head-tracking technique, a novel 3D sand table technology is proposed. The technology consists of a 3D display module, a head-tracking module and an image processing module. The head-tracking module tracks the position of the observer's head, taken as the midpoint between the eyes. According to this position, the image processing module modifies the projection matrix of the virtual cameras. The 3D virtual scene rendered by the image processing module is displayed as if floating above the projection screen in the display module. An experimental system for the 3D sand table was demonstrated, which offers the observer a wide field of view (FOV) for watching an immersive 3D virtual scene. 14. OP09O-OP404-9 Wide Field Camera 3 CCD Quantum Efficiency Hysteresis Science.gov (United States) Collins, Nick 2009-01-01 The HST/Wide Field Camera (WFC) 3 UV/visible channel CCD detectors have exhibited an unanticipated quantum efficiency hysteresis (QEH) behavior. At the nominal operating temperature of -83C, the QEH feature contrast was typically 0.1-0.2% or less. The behavior was replicated using flight spare detectors. A visible light flat-field (540nm) with a several times full-well signal level can pin the detectors at both optical (600nm) and near-UV (230nm) wavelengths, suppressing the QEH behavior. We are characterizing the timescale for the detectors to become unpinned and developing a protocol for flashing the WFC3 CCDs with the instrument's internal calibration system in flight. The HST/Wide Field Camera 3 UV/visible channel CCD detectors have exhibited an unanticipated quantum efficiency hysteresis (QEH) behavior. The first observed manifestation of QEH was the presence in a small percentage of flat-field images of a bowtie-shaped contrast that spanned the width of each chip.
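The head-coupled rendering described in the electronic sand table entry above amounts to recomputing an off-axis (asymmetric-frustum) projection from the tracked head position. A minimal sketch follows, assuming a flat screen centered at the origin; the screen dimensions and eye position are illustrative.

```python
# Sketch: off-axis (asymmetric-frustum) projection for head-coupled rendering,
# the step in which the sand-table system updates the virtual camera from the
# tracked head position.
import numpy as np

def off_axis_projection(eye, screen_w, screen_h, near, far):
    """OpenGL-style projection matrix for a screen centered at the origin in
    the z=0 plane, viewed from eye = (ex, ey, ez) with ez > 0 (meters)."""
    ex, ey, ez = eye
    left   = (-screen_w / 2 - ex) * near / ez
    right  = ( screen_w / 2 - ex) * near / ez
    bottom = (-screen_h / 2 - ey) * near / ez
    top    = ( screen_h / 2 - ey) * near / ez
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Each tracker update yields a new eye position and hence a new matrix.
# In a full pipeline this is combined with a view translation by -eye.
P = off_axis_projection(eye=(0.1, 0.3, 0.6), screen_w=1.2, screen_h=0.9,
                        near=0.05, far=10.0)
```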
At the nominal operating temperature of -83C, the contrast observed for this feature was typically 0.1-0.2% or less, though at warmer temperatures contrasts up to 5% (at -50C) have been observed. The bowtie morphology was replicated using flight spare detectors in tests at the GSFC Detector Characterization Laboratory by power cycling the detector while cold. Continued investigation revealed that a clearly related global QE suppression at the approximately 5% level can be produced by cooling the detector in the dark; subsequent flat-field exposures at a constant illumination show asymptotically increasing response. This QE "pinning" can be achieved with a single high signal flat-field or a series of lower signal flats; a visible light (500-580nm) flat-field with a signal level of several hundred thousand electrons per pixel is sufficient for QE pinning at both optical (600nm) and near-UV (230nm) wavelengths. We are characterizing the timescale for the detectors to become unpinned and developing a 15. Telescopes and Techniques CERN Document Server Kitchin, C. R. 2013-01-01 Telescopes and Techniques has proved itself in its first two editions, having become probably one of the most widely used astronomy texts, both for amateur astronomers and astronomy and astrophysics undergraduates. Both earlier editions of the book were widely used for introductory practical astronomy courses in many universities. In this Third Edition the author guides the reader through the mathematics, physics and practical techniques needed to use today's telescopes (from the smaller models to the larger instruments installed in many colleges) and how to find objects in the sky. Most of the physics and engineering involved is described fully and requires little prior knowledge or experience. Both visual and electronic imaging techniques are covered, together with an introduction to how data (measurements) should be processed and analyzed. A simple introduction to radio telescopes is also included. Brief coverage of the more advanced topics of photometry and spectroscopy is included, but mainly to enable ... 16. Wide-field spectrally resolved quantitative fluorescence imaging system: toward neurosurgical guidance in glioma resection Science.gov (United States) Xie, Yijing; Thom, Maria; Ebner, Michael; Wykes, Victoria; Desjardins, Adrien; Miserocchi, Anna; Ourselin, Sebastien; McEvoy, Andrew W.; Vercauteren, Tom 2017-11-01 In high-grade glioma surgery, tumor resection is often guided by intraoperative fluorescence imaging. 5-aminolevulinic acid-induced protoporphyrin IX (PpIX) provides fluorescent contrast between normal brain tissue and glioma tissue, thus achieving improved tumor delineation and prolonged patient survival compared with conventional white-light-guided resection. However, commercially available fluorescence imaging systems rely solely on visual assessment of fluorescence patterns by the surgeon, which makes the resection more subjective than necessary. We developed a wide-field spectrally resolved fluorescence imaging system utilizing a Generation II scientific CMOS camera and an improved computational model for the precise reconstruction of the PpIX concentration map. In our model, the tissue's optical properties and illumination geometry, which distort the fluorescent emission spectra, are considered. We demonstrate that the CMOS-based system can detect low PpIX concentrations at short camera exposure times, while providing high pixel resolution wide-field images.
We show that total variation regularization improves the contrast-to-noise ratio of the reconstructed quantitative concentration map by approximately twofold. A quantitative comparison between the estimated PpIX concentration and tumor histopathology was also carried out to further evaluate the system. 17. Using Wide-Field Meteor Cameras to Actively Engage Students in Science Science.gov (United States) Kuehn, D. M.; Scales, J. N. 2012-08-01 Astronomy has always afforded teachers an excellent topic to develop students' interest in science. New technology makes it possible to inexpensively outfit local school districts with sensitive, wide-field video cameras that can detect and track brighter meteors and other objects. While the data-collection and analysis process can be mostly automated by software, substantial human involvement remains necessary in rejecting spurious detections, in performing dynamics and orbital calculations, and in the rare recovery and analysis of fallen meteorites. The continuous monitoring allowed by dedicated wide-field surveillance cameras can provide students with a better understanding of the behavior of the night sky, including meteors and meteor showers, stellar motion, the motion of the Sun, Moon, and planets, phases of the Moon, meteorological phenomena, etc. Additionally, some students intrigued by the possibility of UFOs and "alien visitors" may find that actual monitoring data can help them develop methods for identifying "unknown" objects. We currently have two ultra-low light-level surveillance cameras coupled to fish-eye lenses that are actively obtaining data. We have developed curricula suitable for middle or high school students in astronomy and earth science courses and are in the process of testing and revising our materials. 18. FNTD radiation dosimetry system enhanced with dual-color wide-field imaging International Nuclear Information System (INIS) Akselrod, M.S.; Fomenko, V.V.; Bartz, J.A.; Ding, F. 2014-01-01 At high neutron and photon doses Fluorescent Nuclear Track Detectors (FNTDs) require operation in analog mode and the measurement results depend on individual crystal color center concentration (coloration). We describe a new method for radiation dosimetry using FNTDs, which includes non-destructive, automatic sensitivity calibration for each individual FNTD. In the method presented, confocal laser scanning fluorescent imaging of FNTDs is combined with dual-color wide field imaging of the FNTD. The calibration is achieved by measuring the color center concentration in the detector through fluorescence imaging and reducing the effect of diffuse reflection on the lapped surface of the FNTD by imaging with infra-red (IR) light. The dual-color imaging of FNTDs is shown to provide a good estimation of the detector sensitivity at high doses of photons and neutrons, where conventional track counting is impeded by track overlap. - Highlights: • New method and optical imaging head were developed for FNTDs used at high doses. • Dual-color wide-field imaging used for color center concentration measurement. • Green fluorescence corrected by diffuse reflection used for sensitivity correction. • FNTD dose measurements performed in analog processing mode 19. Radial Peripapillary Capillary Network Visualized Using Wide-Field Montage Optical Coherence Tomography Angiography.
Science.gov (United States) Mase, Tomoko; Ishibazawa, Akihiro; Nagaoka, Taiji; Yokota, Harumasa; Yoshida, Akitoshi 2016-07-01 We quantitatively analyzed the features of a radial peripapillary capillary (RPC) network visualized using wide-field montage optical coherence tomography (OCT) angiography in healthy human eyes. Twenty eyes of 20 healthy subjects were recruited. En face 3 × 3-mm OCT angiograms of multiple locations in the posterior pole were acquired using the RTVue XR Avanti, and wide-field montage images of the RPC were created. To evaluate the RPC density, the montage images were binarized and skeletonized. The correlation between the RPC density and the retinal nerve fiber layer (RNFL) thickness measured by an OCT circle scan was investigated. The RPC at the temporal retina was detected as far as 7.6 ± 0.7 mm from the edge of the optic disc but not around the perifoveal area within 0.9 ± 0.1 mm of the fovea. Capillary-free zones were observed beside the first branches of the arterioles. The RPC densities measured at increasing distances from the optic disc edge were 13.6 ± 0.8, 11.9 ± 0.9, and 10.4 ± 0.9 mm⁻¹, and the RPC density was correlated significantly (r = 0.64) with the RNFL thickness. The RPC is present in the superficial peripapillary retina in proportion to the RNFL thickness, supporting the idea that the RPC may be the vascular network primarily responsible for RNFL nourishment. 20. Wide-field computational imaging of pathology slides using lens-free on-chip microscopy. Science.gov (United States) Greenbaum, Alon; Zhang, Yibo; Feizi, Alborz; Chung, Ping-Luen; Luo, Wei; Kandukuri, Shivani R; Ozcan, Aydogan 2014-12-17 Optical examination of microscale features in pathology slides is one of the gold standards to diagnose disease. However, the use of conventional light microscopes is partially limited owing to their relatively high cost, bulkiness of lens-based optics, small field of view (FOV), and requirements for lateral scanning and three-dimensional (3D) focus adjustment. We illustrate the performance of a computational lens-free, holographic on-chip microscope that uses the transport-of-intensity equation, multi-height iterative phase retrieval, and rotational field transformations to perform wide-FOV imaging of pathology samples with comparable image quality to a traditional transmission lens-based microscope. The holographically reconstructed image can be digitally focused at any depth within the object FOV (after image capture) without the need for mechanical focus adjustment and is also digitally corrected for artifacts arising from uncontrolled tilting and height variations between the sample and sensor planes. Using this lens-free on-chip microscope, we successfully imaged invasive carcinoma cells within human breast sections, Papanicolaou smears revealing a high-grade squamous intraepithelial lesion, and sickle cell anemia blood smears over a FOV of 20.5 mm². The resulting wide-field lens-free images had sufficient image resolution and contrast for clinical evaluation, as demonstrated by a pathologist's blinded diagnosis of breast cancer tissue samples, achieving an overall accuracy of ~99%. By providing high-resolution images of large-area pathology samples with 3D digital focus adjustment, lens-free on-chip microscopy can be useful in resource-limited and point-of-care settings. Copyright © 2014, American Association for the Advancement of Science.
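The binarize-and-skeletonize density measurement described in the montage OCT angiography entry above can be sketched as follows; the Otsu threshold and the pixel scale are illustrative assumptions, not the published processing chain.

```python
# Sketch: capillary density as skeleton length per unit area (mm^-1) from a
# grayscale en-face angiogram, echoing the binarize-and-skeletonize step above.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import skeletonize

def capillary_density(angiogram, mm_per_pixel):
    binary = angiogram > threshold_otsu(angiogram)   # vessel mask
    skeleton = skeletonize(binary)                   # 1-pixel-wide centerlines
    skeleton_length_mm = skeleton.sum() * mm_per_pixel   # crude length estimate
    area_mm2 = angiogram.size * mm_per_pixel ** 2
    return skeleton_length_mm / area_mm2

# Toy usage: a 3 x 3 mm scan sampled on a 304 x 304 grid.
rng = np.random.default_rng(1)
density = capillary_density(rng.random((304, 304)), mm_per_pixel=3.0 / 304)
```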
1. Momentum and angular correlations study in π⁻ nuclei jets at high energies using emulsion telescopes technique with magnetic field CERN Multimedia 2002-01-01 This experiment aims at studying angular and momentum correlations between particles in high energy hadron jets, using the emulsion telescopes technique. The aim of the experimental arrangement is to obtain the highest possible accuracy in angular data. The ordinary emulsion technique is known to be limited in precision by distortion phenomena. We have developed a technique which is able to flow emulsion on both sides of glass sheets. We measure the co-ordinates of the tracks at the glass surfaces. All possible shrinkage and distortions are eliminated. We use telescope units made of glass sheets, 60 μm thick with 30 μm emulsion on both sides; the telescopes we use contain 10 units whose position is measured before the experiment with an accuracy of about 5 μm in the transverse direction, using an optical rule. It is of about 1 μm after geometrical fit on the beam tracks. In the longitudinal direction the accuracies are, respectively, 100 μm and 10 μm. If the target position is ... 2. Panoramic Radio Astronomy : Wide-field 1-2 GHz research on galaxy evolution NARCIS (Netherlands) Heald, G.; Serra, P. 2009-01-01 In this contribution we give a brief overview of the Panoramic Radio Astronomy (PRA) conference held on 2-5 June 2009 in Groningen, the Netherlands. The conference was motivated by the on-going development of a large number of new radio telescopes and instruments which, within a few years, will 3. Optical aperture synthesis : A comparison of techniques for wide-field interferometric imaging NARCIS (Netherlands) Van der Avoort, C. 2006-01-01 The research described in this thesis provides starting points for research in several areas of interest related to aperture synthesis and guidelines concerning the design of synthetic telescopes for imaging. As such, this research contributes to the improvement of instrumentation for observational 4. Wide-field two-dimensional multifocal optical-resolution photoacoustic computed microscopy Science.gov (United States) Xia, Jun; Li, Guo; Wang, Lidai; Nasiriavanaki, Mohammadreza; Maslov, Konstantin; Engelbach, John A.; Garbow, Joel R.; Wang, Lihong V. 2014-01-01 Optical-resolution photoacoustic microscopy (OR-PAM) is an emerging technique that directly images optical absorption in tissue at high spatial resolution. To date, the majority of OR-PAM systems are based on single focused optical excitation and ultrasonic detection, limiting the wide-field imaging speed. While one-dimensional multifocal OR-PAM (1D-MFOR-PAM) has been developed, the potential of microlens and transducer arrays has not been fully realized. Here, we present the development of two-dimensional multifocal optical-resolution photoacoustic computed microscopy (2D-MFOR-PACM), using a 2D microlens array and a full-ring ultrasonic transducer array. The 10 × 10 mm² microlens array generates 1800 optical foci within the focal plane of the 512-element transducer array, and raster scanning the microlens array yields optical-resolution photoacoustic images. The system has improved the in-plane resolution of a full-ring transducer array from ≥100 µm to 29 µm and achieved an imaging time of 36 seconds over a 10 × 10 mm² field of view. In comparison, the 1D-MFOR-PAM would take more than 4 minutes to image over the same field of view.
The imaging capability of the system was demonstrated on phantoms and animals both ex vivo and in vivo. PMID:24322226 5. Wide field imaging spectrometer for ESA's future X-ray mission: XEUS CERN Document Server Strüder, L 1999-01-01 An active pixel sensor (APS) based on the DEpleted P-channel junction Field Effect Transistor (DEPFET) concept will be described as a potential wide field imager for ESA's high resolution, high throughput mission: 'X-ray Evolving Universe Spectroscopy' (XEUS). It comprises parallel multichannel readout, low-noise high-speed readout, backside illumination, and a fill factor of 100% over the whole field of view. The depleted thickness will be 500 microns. These design parameters match the scientific requirements of the mission. The fabrication techniques of the DEPFET arrays are related to the high resistivity process of the X-ray pn-CCDs. Potential extensions of the already realized DEPFET structures include non-destructive repetitive readout of the signal charges. This concept will be presented. As an alternative solution, frame store pn-CCDs are considered, having the same format and pixel sizes as the proposed DEPFET arrays. Their development is a low-risk, straightforward continuation of the XMM devices. ... 6. Eclipse telescope design factors Science.gov (United States) Hull, Tony; Trauger, John T.; Macenka, Steven A.; Moody, Dwight; Olarte, Guillermo; Sepulveda, Cesar; Tsuha, Walter; Cohen, David 2003-02-01 Very high contrast imagery, required for exoplanet image acquisition, imposes significantly different criteria upon telescope architecture than do the requirements imposed upon most spaceborne telescopes. For the Eclipse Mission, the fundamental figure-of-merit is a stellar contrast, or brightness reduction ratio, reaching a factor of 10⁻⁹ or better at star-planet distances as close as the 4th Airy ring. Factors necessary to achieve such contrast ratios are both irrelevant and largely ignored in contemporary telescope design. Although contemporary telescopes now meet Hubble Space Telescope performance at substantially lower mass and cost than HST, control of mid-spatial-frequency (MSF) errors, crucial to coronagraphy, has not been emphasized. Accordingly, roughness at MSF has advanced little since HST. Fortunately, HST primary mirror smoothness would nearly satisfy Eclipse requirements, although other aspects of HST are undesirable for stellar coronagraphy. Conversely, the narrow field required for Eclipse eases other drivers of traditional telescope design. A systematic approach to telescope definition, with primary and sub-tier figures-of-merit, will be discussed in the context of the Eclipse Mission. 7. A comparison of image restoration approaches applied to three-dimensional confocal and wide-field fluorescence microscopy. Science.gov (United States) Verveer, P. J.; Gemkow, M. J.; Jovin, T. M. 1999-01-01 We have compared different image restoration approaches for fluorescence microscopy. The most widely used algorithms were classified within a Bayesian framework according to the assumed noise model and the type of regularization imposed. We considered both Gaussian and Poisson models for the noise in combination with Tikhonov regularization, entropy regularization, Good's roughness, and no regularization (maximum likelihood estimation). Simulations of fluorescence confocal imaging were used to examine the different noise models and regularization approaches using the mean squared error criterion.
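The maximum-likelihood option under the Poisson noise model in the comparison above corresponds to the Richardson-Lucy iteration. A minimal self-contained sketch follows, with an illustrative Gaussian PSF and synthetic input in place of real microscopy data.

```python
# Sketch: Richardson-Lucy deconvolution, i.e. maximum-likelihood restoration
# under a Poisson noise model with no regularization.
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=30):
    psf_mirror = psf[::-1, ::-1]
    estimate = np.full_like(image, image.mean())     # flat starting estimate
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)   # avoid division by zero
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy usage with a Gaussian PSF and Poisson-noise placeholder data.
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
noisy = np.random.default_rng(2).poisson(50, (128, 128)).astype(float)
restored = richardson_lucy(noisy, psf)
```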
The assumption of a Gaussian noise model yielded only slightly higher errors than the Poisson model. Good's roughness was the best choice for the regularization. Furthermore, we compared simulated confocal and wide-field data. In general, restored confocal data are superior to restored wide-field data, but given a sufficiently higher signal level for the wide-field data, the restoration result may rival confocal data in quality. Finally, a visual comparison of experimental confocal and wide-field data is presented. 8. Wide swath imaging spectrometer utilizing a multi-modular design Science.gov (United States) Chrisp, Michael P. 2010-10-05 A wide swath imaging spectrometer utilizes an array of individual spectrometer modules in the telescope focal plane to provide an extended field of view. The spectrometer modules with their individual detectors are arranged so that their slits overlap, with motion on the scene providing contiguous spatial coverage. The number of modules can be varied to take full advantage of the field of view available from the telescope. 9. Wide-field kinematic structure of early-type galaxy halos Science.gov (United States) Arnold, Jacob Antony 2013-12-01 The stellar halos of nearby galaxies bear the signatures of the mass-assembly processes that have driven galaxy evolution over the last ˜10 Gyr. Finding and interpreting these relict clues in galaxies within and beyond the Local Group offers one of the most promising avenues for understanding how galaxies accumulate their stars over time. To tackle this problem we have performed a systematic study of the wide-field kinematic structure of nearby early-type galaxies, with spectroscopy out to several effective radii (∼3 Re). The 22 galaxies presented here span a range of environments (field, group, and cluster) and intrinsic luminosities; the spectra target the near-infrared Calcium II triplet. For each spectrum, we parameterize the line-of-sight velocity distribution (LOSVD) as a truncated Gauss-Hermite series convolved with an optimally weighted combination of stellar templates. These kinematic measurements (V, sigma, h3, and h4) are combined with literature values to construct spatially resolved maps of large-scale kinematic structure. A variety of kinematic behaviors are observed beyond ~1 Re, potentially reflecting the stochastic and chaotic assembly of stellar bulges and halos in early-type galaxies. Next, we describe a global analysis (out to 5 Re) of kinematics and metallicity in the nearest S0 galaxy, NGC 3115, along with implications for its assembly history. The data include high-quality wide-field imaging and multi-slit spectra of the field stars and globular clusters (GCs). Within two effective radii, the bulge (as traced by the stars and metal-rich GCs) is flattened and rotates rapidly. At larger radii, the rotation declines dramatically, while the characteristic GC metallicities also decrease with radius. We argue that this pattern is not naturally explained by a binary major merger, but instead by a two-phase assembly process where the inner regions have formed in an early violent, dissipative phase, followed by the protracted growth of the outer parts via minor mergers. To test this hypothesis 10. Wide-Field Imaging of Single-Nanoparticle Extinction with Sub-nm² Sensitivity Science.gov (United States) Payne, Lukas M.; Langbein, Wolfgang; Borri, Paola 2018-03-01 We report on a highly sensitive wide-field imaging technique for quantitative measurement of the optical extinction cross section σext of single nanoparticles.
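The truncated Gauss-Hermite parameterization of the LOSVD used in the halo-kinematics entry above (in the convention of van der Marel & Franx 1993) can be written down directly; the parameter values below are illustrative.

```python
# Sketch: the truncated Gauss-Hermite line-of-sight velocity distribution,
# with h3 capturing asymmetry and h4 capturing peakiness.
import numpy as np

def losvd_gauss_hermite(v, V, sigma, h3, h4):
    """Unnormalized LOSVD L(v) with mean velocity V and dispersion sigma."""
    y = (v - V) / sigma
    H3 = (2 * np.sqrt(2) * y**3 - 3 * np.sqrt(2) * y) / np.sqrt(6)
    H4 = (4 * y**4 - 12 * y**2 + 3) / np.sqrt(24)
    return np.exp(-0.5 * y**2) * (1 + h3 * H3 + h4 * H4)

v = np.linspace(-800, 800, 400)   # velocity grid in km/s
profile = losvd_gauss_hermite(v, V=120.0, sigma=180.0, h3=-0.05, h4=0.03)
```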
The technique is simple and high speed, and it enables the simultaneous acquisition of hundreds of nanoparticles for statistical analysis. Using rapid referencing, fast acquisition, and a deconvolution analysis, a shot-noise-limited sensitivity down to 0.4 nm² is achieved. Measurements on a set of individual gold nanoparticles of 5 nm diameter using this method yield σext = (10.0 ± 3.1) nm², which is consistent with theoretical expectations and well above the background fluctuations of 0.9 nm². 11. Wide-field microscopic FRET imaging using simultaneous spectral unmixing of excitation and emission spectra. Science.gov (United States) Du, Mengyan; Zhang, Lili; Xie, Shusen; Chen, Tongsheng 2016-07-11 Simultaneous spectral unmixing of excitation and emission spectra (ExEm unmixing) has the inherent ability to resolve donor emission, fluorescence resonance energy transfer (FRET)-sensitized acceptor emission, and directly excited acceptor emission. Here we develop an ExEm unmixing-based quantitative FRET measurement method (EES-FRET) independent of excitation intensity and detector parameter settings. The ratio factor (rK) of total acceptor absorption to total donor absorption at the excitation wavelengths used, predetermined using a donor-acceptor tandem construct, is introduced to determine the concentration ratio of acceptor to donor. We implemented the EES-FRET method on a wide-field microscope to image living cells expressing tandem FRET constructs with different donor-acceptor stoichiometries. 12. Meteor observations with Mini-Mega-TORTORA wide-field monitoring system Science.gov (United States) Karpov, S.; Orekhova, N.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Perkov, A.; Sasyuk, V. 2016-12-01 Here we report on the results of meteor observations with the 9-channel Mini-Mega-TORTORA (MMT-9) optical monitoring system, which has a wide field and high temporal resolution. During the first 1.5 years of operation more than 90 thousand meteors have been detected, at a rate of 300-350 per night, with durations from 0.1 to 2.5 seconds and angular velocities up to 38 degrees per second. The faintest detected meteors have peak brightnesses of about 10 mag, while the majority range from 4 to 8 mag. Some of the meteors have been observed in BVR filters simultaneously, and color variations along their trails have been determined. The parameters of the detected meteors have been published online. The database also includes data from 10 thousand meteors detected by our previous FAVOR camera during 2006-2009. 13. A DEEP, WIDE-FIELD Hα SURVEY OF NEARBY CLUSTERS OF GALAXIES: DATA International Nuclear Information System (INIS) Sakai, Shoko; Kennicutt, Robert C. Jr.; Moss, Chris 2012-01-01 We present the results of a wide-field Hα imaging survey of eight nearby (z = 0.02-0.03) Abell clusters. We have measured Hα fluxes and equivalent widths for 465 galaxies, of which 360 are new detections. The survey was designed to obtain complete emission-line-selected inventories of star-forming galaxies in the inner regions of these clusters, extending to star formation rates below 0.1 M⊙ yr⁻¹. This paper describes the observations, data processing, and source identification procedures, and presents an Hα and R-band catalog of detected cluster members and other candidates.
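Once reference excitation-emission signatures are known, spectral unmixing of the kind used in the EES-FRET entry above reduces, pixel by pixel, to a linear least-squares problem. The sketch below uses synthetic placeholder spectra, not measured donor/acceptor signatures.

```python
# Sketch: per-pixel linear unmixing of a measured spectrum into donor emission,
# FRET-sensitized acceptor emission, and directly excited acceptor emission.
import numpy as np

rng = np.random.default_rng(3)
n_channels = 24                      # flattened excitation x emission samples
# Columns: donor, FRET-sensitized acceptor, directly excited acceptor.
references = np.abs(rng.normal(size=(n_channels, 3)))

true_weights = np.array([0.6, 0.3, 0.1])
measured = references @ true_weights + rng.normal(scale=0.01, size=n_channels)

# Non-negative weights would call for scipy.optimize.nnls; plain lstsq shown.
weights, *_ = np.linalg.lstsq(references, measured, rcond=None)
print(weights)   # approximately [0.6, 0.3, 0.1]
```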
Future papers in the series will use these data to study the completeness of spectroscopically based star formation surveys, and to quantify the effects of cluster environment on the present-day populations of star-forming galaxies. The data will also provide a valuable foundation for imaging surveys of redshifted Hα emission in more distant clusters. 14. Commissioning of a medical accelerator photon beam Monte Carlo simulation using wide-field profiles Energy Technology Data Exchange (ETDEWEB) Pena, J [Departamento de Física de Partículas, Facultade de Física, 15782 Santiago de Compostela (Spain); Franco, L [Departamento de Física de Partículas, Facultade de Física, 15782 Santiago de Compostela (Spain); Gomez, F [Departamento de Física de Partículas, Facultade de Física, 15782 Santiago de Compostela (Spain); Iglesias, A [Departamento de Física de Partículas, Facultade de Física, 15782 Santiago de Compostela (Spain); Lobato, R [Hospital Clínico Universitario de Santiago, Santiago de Compostela (Spain); Mosquera, J [Hospital Clínico Universitario de Santiago, Santiago de Compostela (Spain); Pazos, A [Departamento de Física de Partículas, Facultade de Física, 15782 Santiago de Compostela (Spain); Pardo, J [Departamento de Física de Partículas, Facultade de Física, 15782 Santiago de Compostela (Spain); Pombar, M [Hospital Clínico Universitario de Santiago, Santiago de Compostela (Spain); Rodríguez, A [Departamento de Física de Partículas, Facultade de Física, 15782 Santiago de Compostela (Spain); Sendon, J [Hospital Clínico Universitario de Santiago, Santiago de Compostela (Spain)] 2004-11-07 A method for commissioning an EGSnrc Monte Carlo simulation of medical linac photon beams through wide-field lateral profiles at moderate depth in a water phantom is presented. Although depth-dose profiles are commonly used for nominal energy determination, our study shows that they are quite insensitive to energy changes below 0.3 MeV (0.6 MeV) for a 6 MV (15 MV) photon beam. Also, the depth-dose profile dependence on beam radius adds an additional uncertainty in their use for tuning nominal energy. Simulated 40 cm × 40 cm lateral profiles at 5 cm depth in a water phantom show greater sensitivity to both nominal energy and radius. Beam parameters could be determined by comparing only these curves with measured data. 15. Commissioning of a medical accelerator photon beam Monte Carlo simulation using wide-field profiles International Nuclear Information System (INIS) Pena, J; Franco, L; Gomez, F; Iglesias, A; Lobato, R; Mosquera, J; Pazos, A; Pardo, J; Pombar, M; Rodríguez, A; Sendon, J 2004-01-01 16. Improving vessel segmentation in ultra-wide field-of-view retinal fluorescein angiograms.
Science.gov (United States) Perez-Rovira, A; Zutis, K; Hubschman, J P; Trucco, E 2011-01-01 Vessel segmentation on ultra-wide field-of-view fluorescein angiogram sequences of the retina is a challenging problem. Vessel appearance undergoes severe changes, as different portions of the vascular structure become perfused in different frames. This paper presents a method for segmenting vessels in such sequences using steerable filters and automatic thresholding. We introduce a penalization stage on regions with high vessel response in the filtered image, improving the detection of peripheral vessels and reducing false positives around the optic disc and in regions of choroidal vessels and lesions. Quantitative results are provided, in which the penalization stage improves the segmentation precision by 11.84%, the recall by 12.98%, and the accuracy by 0.40%. To facilitate further evaluation, usage, and algorithm comparison, the algorithm, the data set used, the ground truth, and the results are made available on the internet. 17. Galaxy Populations in Clusters and the Estimation of Cluster Optical Richness in Wide-Field Surveys Science.gov (United States) Koester, Ben 2006-12-01 Using the recently published maxBCG cluster catalog and the imaging survey of the SDSS, we make background-subtracted measurements of the color, spatial, and luminosity distributions of cluster galaxies at low redshift, and from these we construct filters encoding cluster properties. These filters are then used to remeasure the richnesses of maxBCG clusters. Because the filters contain statistical information about observed cluster galaxy populations, they should generate a more informative mass proxy. Dynamical mass estimates confirm that this new richness measurement indeed contains more mass information than the basic N_gals(r200) richness provided with the maxBCG catalog. Techniques such as this will likely be a key component of future wide-field optical cluster surveys. 18. Wide-field Imaging of the Environments of LITTLE THINGS Dwarf Irregular Galaxies Science.gov (United States) Hunter, Deidre A.; Melton, Casey; Leshin, Stephen; Wong, Alson; Clark, Maurice; Kamienski, Jerald; Moriya, Netzer; Packwood, Burley; Birket, Bob; Edwards, William; Millward, Mervyn; Wheelband, Ian 2018-01-01 We have obtained wide-field images of 36 of the 41 LITTLE THINGS (Local Irregulars That Trace Luminosity Extremes, The H I Nearby Galaxy Survey) nearby dwarf irregular galaxies, searching for faint companion galaxies that could be an external factor in their ongoing star formation. The limiting magnitudes of the images range from 19.7 to 28.3 mag arcsec⁻², with a median value of 25.9 mag arcsec⁻². We did not find any unknown companions. Two of the LITTLE THINGS galaxies, NGC 4163 and NGC 4214, and the fainter dwarf, UGCA 276, lie potentially within 100 kpc of each other, but our imaging does not reveal any stellar bridge between the galaxies. This project was part of the Lowell Amateur Research Initiative. 19. Improved iris localization by using wide and narrow field of view cameras for iris recognition Science.gov (United States) Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung 2013-10-01 Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for various applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between a user and the camera.
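The steerable-filter-plus-thresholding pipeline of the angiogram segmentation entry above can be approximated with off-the-shelf components; here a Frangi vesselness filter stands in for the paper's steerable filters, and the penalization stage is omitted.

```python
# Sketch: vessel enhancement followed by automatic thresholding, a stand-in
# for the published steerable-filter pipeline (Frangi filter substituted).
import numpy as np
from skimage.filters import frangi, threshold_otsu

def segment_vessels(angiogram):
    """Binary vessel map from a grayscale fluorescein angiogram frame."""
    # Vessels appear bright in fluorescein angiograms, hence black_ridges=False.
    vesselness = frangi(angiogram, black_ridges=False)
    return vesselness > threshold_otsu(vesselness)

mask = segment_vessels(np.random.default_rng(4).random((256, 256)))
```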
Therefore, the searching area of the iris detection algorithm is increased, which inevitably decreases both detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is novel compared to previous studies in the following four ways. First, the device used in our research acquires three images, one each of the face and both irises, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data on the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple matrices of the transformation according to the Z distance. Fourth, the searching region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the matrix of geometric transformation corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time. 20. Leaf Area Index Estimation Using Chinese GF-1 Wide Field View Data in an Agriculture Region. Science.gov (United States) Wei, Xiangqin; Gu, Xingfa; Meng, Qingyan; Yu, Tao; Zhou, Xiang; Wei, Zheng; Jia, Kun; Wang, Chunmei 2017-07-08 Leaf area index (LAI) is an important vegetation parameter that characterizes leaf density and canopy structure, and plays an important role in global change studies, land surface process simulation and agriculture monitoring. The wide field view (WFV) sensor on board the Chinese GF-1 satellite can acquire multi-spectral data with decametric spatial resolution, high temporal resolution and wide coverage, which are valuable data sources for dynamic monitoring of LAI. Therefore, an automatic LAI estimation algorithm for GF-1 WFV data was developed based on the radiative transfer model, and the LAI estimation accuracy of the developed algorithm was assessed in an agriculture region with maize as the dominant crop type. The radiative transfer model was first used to simulate the physical relationship between canopy reflectance and LAI under different soil and vegetation conditions, and then the training sample dataset was formed. Then, neural networks (NNs) were used to develop the LAI estimation algorithm using the training sample dataset. Green, red and near-infrared band reflectances of GF-1 WFV data were used as the input variables of the NNs, and the corresponding LAI was the output variable. The validation results using field LAI measurements in the agriculture region indicated that the LAI estimation algorithm could achieve satisfactory results (such as R² = 0.818, RMSE = 0.50). In addition, the developed LAI estimation algorithm has the potential to operationally generate LAI datasets using GF-1 WFV land surface reflectance data, which could provide high spatial and temporal resolution LAI data for agriculture, ecosystem and environmental management research.
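The reflectance-to-LAI neural network described in the GF-1 WFV entry above can be sketched with a small regressor trained on simulated pairs; the toy forward model below is a stand-in for the radiative-transfer simulations, and all coefficients are illustrative.

```python
# Sketch: band reflectances in, LAI out, trained on simulated pairs as in the
# entry above. The "forward model" here is a synthetic placeholder, not a
# radiative transfer code.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
lai = rng.uniform(0.0, 6.0, 5000)
# Toy behavior: NIR reflectance rises and red falls with LAI, plus noise.
green = 0.08 + 0.01 * lai + rng.normal(0, 0.005, lai.size)
red   = 0.10 * np.exp(-0.4 * lai) + rng.normal(0, 0.005, lai.size)
nir   = 0.15 + 0.06 * lai + rng.normal(0, 0.01, lai.size)
X = np.column_stack([green, red, nir])

nn = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
nn.fit(X, lai)
print(nn.predict([[0.10, 0.03, 0.35]]))   # LAI estimate for one WFV pixel
```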
Science.gov (United States) Quinn, Nicola B; Azuara-Blanco, Augusto; Graham, Katie; Hogg, Ruth E; Young, Ian S; Kee, Frank 2018-02-01 Ultra-wide field (UWF) retinal imaging (Optomap, Optos plc, Dunfermline, UK) is a novel technique to image the peripheral fundus. The goal of this study was to explore the potential use of UWF imaging to detect glaucoma, and specifically to evaluate the reproducibility of measures of vertical cup-to-disc ratio (VCDR) using UWF imaging, and the agreement between UWF imaging and standard colour digital stereoscopy (CDS). An observational study. From a population-based epidemiological study we selected 100 eyes from 100 consecutive participants who were imaged using both standard CDS and UWF retinal imaging. Estimation of the VCDR using both modalities was made by a masked glaucoma specialist and two masked independent observers. Reliability and agreement between CDS and UWF imaging were assessed by Bland-Altman scatterplots. Intra-observer reproducibility of UWF imaging in estimating VCDRs produced limits of agreement (LOA) ranging from -0.13 to 0.1 (mean 0.02) and -0.14 to 0.14 (mean 0.0004) for observers 1 and 2, respectively. Inter-observer reliability between observer 1 and the glaucoma specialist for VCDR measurements using CDS and UWF imaging produced LOA ranging from -0.37 to 0.15 (mean -0.11) and -0.24 to 0.26 (mean 0.0005), respectively. Bland-Altman plots produced LOA of -0.16 to 0.20 (mean 0.02) between the two imaging methods for assessing VCDR when carried out by a glaucoma specialist. Grading of UWF imaging has high reproducibility in evaluating VCDR and agreement with stereoscopic optic disc imaging, and may be suitable for glaucoma diagnosis in situations where CDS is not available.

2. A wide field of view force protection system for ground vehicles
Science.gov (United States) Way, Scott; Archer, Cynthia; Jolivet, Noel; Cannon, Bruce; Hansen, Joel; Holt, Jordon; Olsen, Steven; Sarao, Jeremy 2009-05-01 The latest generation of heavily armored vehicles and the proliferation of IEDs in urban combat environments dictate that electro-optical systems play a greater role in situational awareness for ground vehicles. FLIR Systems has been addressing the needs of the ground vehicle community by developing unique sensor systems combining thermal imaging and electro-optical sensors, advanced image processing, and networking capabilities into compact, cost-effective packages. This paper will discuss one of those new products, the WideEye II. The WideEye II combines long-wave infrared and electro-optical sensors in a single integrated package with a 180-degree field of view to meet the critical needs of the warfighter. It includes seamless electronic stitching of the 180-degree image, and state-of-the-art networking capability to allow it to be operated standalone or fully integrated with modern combat vehicle systems. The paper will discuss system tradeoffs and capabilities of this new product and show potential applications for its use.

3. New telescope designs suitable for massively multiplexed spectroscopy
Science.gov (United States) Pasquini, Luca; Delabre, B.; Ellis, R.; de Zeeuw, Tim 2016-07-01 We present two novel designs for a telescope suitable for massively multiplexed spectroscopy. The first is a very wide field Cassegrain telescope optimised for fibre feeding. It provides a field of view (FOV) of 2.5 degrees diameter with a 10 m primary mirror. It is telecentric and works at F/3, optimal for fibre injection.
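Those few numbers already pin down the gross geometry of the focal plane. A quick sanity check (a minimal sketch; only the aperture, f-ratio and field of view are taken from the entry, the rest is standard small-angle arithmetic):

```python
import math

# Quoted design values from the entry above
aperture_m = 10.0   # primary mirror diameter
f_ratio = 3.0       # telescope works at F/3
fov_deg = 2.5       # field-of-view diameter

focal_length_m = f_ratio * aperture_m                    # 30 m
plate_scale_arcsec_mm = 206265 / (focal_length_m * 1e3)  # arcsec per mm
focal_plane_diam_m = math.radians(fov_deg) * focal_length_m

print(f"plate scale: {plate_scale_arcsec_mm:.2f} arcsec/mm")  # ~6.88
print(f"focal-plane diameter: {focal_plane_diam_m:.2f} m")    # ~1.31
```

A roughly 1.3 m focal plane at about 7 arcsec/mm is what leaves room for thousands of fibres, which is the point of the design.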
As an option, a gravity invariant focus for the central 10 arc-minutes can be added, to host, for instance, a giant integral field unit (IFU). It has acceptable performance in the 360-1300 nm wavelength range. The second concept is an innovative five mirror telescope design based on a Three Mirror Anastigmatic (TMA) concept. The design provides a large FOV in a convenient, gravityinvariant focal plane, and is scalable to a range of telescope diameters. As specific example, we present a 10m telescope with a 1.5 degree diameter FOV and a relay system that allows simultaneous spectroscopy with 10,000 mini-IFUs over a square degree, or, alternatively a 17.5 square arcminutes giant IFU, by using 240 MUSE-type spectrographs. We stress the importance of developing the telescope and instrument designs for both cases. 4. Wide-field imaging through scattering media by scattered light fluorescence microscopy Science.gov (United States) Zhou, Yulan; Li, Xun 2017-08-01 To obtain images through scattering media, scattered light fluorescence (SLF) microscopy that utilizes the optical memory effect has been developed. However, the small field of view (FOV) of SLF microscopy limits its application. In this paper, we have introduced a re-modulation method to achieve wide-field imaging through scattering media by SLF microscopy. In the re-modulation method, to raster scan the focus across the object plane, the incident wavefront is re-modulated via a spatial light modulator (SLM) in the updated phase compensation calculated using the optimized iterative algorithm. Compared with the conventional optical memory effect method, the re-modulation method can greatly increase the FOV of a SLF microscope. With the phase compensation theoretically calculated, the process of updating the phase compensation of a high speed SLM is fast. The re-modulation method does not increase the imaging time. The re-modulation method is, therefore, expected to make SLF microscopy have much wider applications in biology, medicine and physiology. 5. Wide field of view common-path lateral-shearing digital holographic interference microscope Science.gov (United States) Vora, Priyanka; Trivedi, Vismay; Mahajan, Swapnil; Patel, Nimit; Joglekar, Mugdha; Chhaniwal, Vani; Moradi, Ali-Reza; Javidi, Bahram; Anand, Arun 2017-12-01 Quantitative three-dimensional (3-D) imaging of living cells provides important information about the cell morphology and its time variation. Off-axis, digital holographic interference microscopy is an ideal tool for 3-D imaging, parameter extraction, and classification of living cells. Two-beam digital holographic microscopes, which are usually employed, provide high-quality 3-D images of micro-objects, albeit with lower temporal stability. Common-path digital holographic geometries, in which the reference beam is derived from the object beam, provide higher temporal stability along with high-quality 3-D images. Self-referencing geometry is the simplest of the common-path techniques, in which a portion of the object beam itself acts as the reference, leading to compact setups using fewer optical elements. However, it has reduced field of view, and the reference may contain object information. Here, we describe the development of a common-path digital holographic microscope, employing a shearing plate and converting one of the beams into a separate reference by employing a pin-hole. The setup is as compact as self-referencing geometry, while providing field of view as wide as that of a two-beam microscope. 
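The entry does not describe the numerical side, but off-axis holograms of this kind are commonly reconstructed by isolating the +1 diffraction order in the Fourier domain. A minimal sketch of that standard step (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def reconstruct_off_axis(hologram, carrier_shift, radius):
    """Recover a quantitative phase map from an off-axis hologram by
    selecting the +1 diffraction order in the Fourier domain."""
    H = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = H.shape
    Y, X = np.ogrid[:ny, :nx]
    cy = ny // 2 + carrier_shift[0]          # sideband centre (rows)
    cx = nx // 2 + carrier_shift[1]          # sideband centre (cols)
    mask = (Y - cy) ** 2 + (X - cx) ** 2 <= radius ** 2
    # Shift the selected sideband back to the spectrum centre (demodulation)
    H_sel = np.roll(H * mask, (-carrier_shift[0], -carrier_shift[1]), axis=(0, 1))
    field = np.fft.ifft2(np.fft.ifftshift(H_sel))
    return np.angle(field)                   # wrapped object phase
```

The carrier shift is set by the shear angle between object and reference beams; in practice it is located as the brightest off-centre peak of the hologram spectrum.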
The microscope is tested by imaging and quantifying the morphology and dynamics of human erythrocytes.

6. PSF Estimation of Space-Variant Ultra-Wide Field of View Imaging Systems
Directory of Open Access Journals (Sweden) Petr Janout 2017-02-01 Ultra-wide field of view (UWFOV) imaging systems are affected by various aberrations, most of which are highly angle-dependent. Describing UWFOV imaging systems, such as microscopy optics, security camera systems and other special space-variant imaging systems, is a difficult task that can be achieved by estimating the Point Spread Function (PSF) of the system. This paper proposes a novel method for modeling the space-variant PSF of an imaging system using a Zernike-polynomial wavefront description. The PSF estimation algorithm is based on obtaining field-dependent expansion coefficients of the Zernike polynomials by fitting real image data of the analyzed imaging system, using an iterative approach with an initial estimate of the fitting parameters to ensure robust convergence. The method is promising as an alternative to the standard approach based on Shack-Hartmann interferometry, since the estimate of the aberration coefficients is obtained directly in the image plane. The approach is tested on simulated and laboratory-acquired image data that generally show good agreement. The resulting data are compared with the results of other modeling methods. The proposed PSF estimation method models the optical system to around 5% accuracy.

7. New 50-M-Class Single Dish Telescope: Large Submillimeter Telescope (LST)
Science.gov (United States) Kawabe, Ryohei 2018-01-01 We report on the plan to construct a 50 m class millimeter (mm) and sub-mm single dish telescope, the Large Submillimeter Telescope (LST). The telescope is optimized for wide-area imaging and spectroscopic surveys in the 70 to 420 GHz main frequency range, which covers the main atmospheric windows at millimeter and submillimeter wavelengths for good observing sites such as the ALMA site in Chile. We also target observations at higher frequencies of up to 1 THz, using a high-precision inner part of the surface. Active surface control is required in order to correct gravitational and thermal deformations of the surface. The LST will open new discovery space, such as wide-field imaging with both continuum and spectral lines, along with new developments for time domain science. Exploiting synergy with ALMA and other telescopes, the LST can contribute to a wide range of topics in astronomy and astrophysics, e.g., astrochemistry, star formation in the Galaxy and galaxies, and the evolution of galaxy clusters via the SZ effect. We also report recent progress on the technical study, e.g., a tentative surface error budget and the challenges of correcting for wind-load effects.

8. Dual-conjugate adaptive optics for wide-field high-resolution retinal imaging.
Science.gov (United States) Thaung, Jörgen; Knutsson, Per; Popovic, Zoran; Owner-Petersen, Mette 2009-03-16 We present analysis and preliminary laboratory testing of a real-time dual-conjugate adaptive optics (DCAO) instrument for ophthalmology that will enable wide-field high-resolution imaging of the retina in vivo. The setup comprises five retinal guide stars (GS) and two deformable mirrors (DM), one conjugate to the pupil and one conjugate to a plane close to the retina. The DCAO instrument has a closed-loop wavefront sensing wavelength of 834 nm and an imaging wavelength of 575 nm.
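The Zernike-polynomial wavefront description used in the PSF-estimation entry above (and implicitly in adaptive-optics work of this kind) can be turned into a PSF and a Strehl ratio in a few lines of numpy. A minimal sketch with made-up aberration coefficients; grid size and coefficient values are illustrative only:

```python
import numpy as np

# Unit-circle pupil grid (size illustrative)
N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
R, T = np.hypot(X, Y), np.arctan2(Y, X)
pupil = (R <= 1.0).astype(float)

# A few low-order Zernike terms (phase, in radians)
zernike = {
    "defocus":   np.sqrt(3) * (2 * R**2 - 1),
    "astig_0":   np.sqrt(6) * R**2 * np.cos(2 * T),
    "coma_x":    np.sqrt(8) * (3 * R**3 - 2 * R) * np.cos(T),
}
coeffs = {"defocus": 0.3, "astig_0": 0.1, "coma_x": 0.05}  # assumed values

phase = sum(c * zernike[k] for k, c in coeffs.items())
field = pupil * np.exp(1j * phase)

# Far-field PSF via zero-padded FFT; Strehl from energy-normalised peaks
psf = np.abs(np.fft.fftshift(np.fft.fft2(field, s=(4 * N, 4 * N))))**2
psf /= psf.sum()
ideal = np.abs(np.fft.fftshift(np.fft.fft2(pupil, s=(4 * N, 4 * N))))**2
ideal /= ideal.sum()
print(f"approximate Strehl ratio: {psf.max() / ideal.max():.2f}")
```

The Strehl ratio computed this way is the same figure of merit quoted in the adaptive-optics results that follow.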
The instrument incorporates an array of collimator lenses to spatially filter the light from all guide stars using one adjustable iris, and images the Hartmann patterns of multiple reference sources on a single detector. Zemax simulations were performed at 834 nm and 575 nm with the Navarro 99 and the Liou-Brennan eye models. Two correction alternatives were evaluated: conventional single-conjugate AO (SCAO, using one GS and a pupil DM) and DCAO (using multiple GS and two DM). Zemax simulations at 575 nm based on the Navarro 99 eye model show that the diameter of the corrected field of view for diffraction-limited imaging (Strehl ≥ 0.8) increases from 1.5 deg with SCAO to 6.5 deg using DCAO. The increase for the less stringent condition of a wavefront error of 1 rad or less (Strehl ≥ 0.37) is from 3 deg with SCAO to approximately 7.4 deg using DCAO. Corresponding results for the Liou-Brennan eye model are 3.1 deg (SCAO) and 8.2 deg (DCAO) for Strehl ≥ 0.8, and 4.8 deg (SCAO) and 9.6 deg (DCAO) for Strehl ≥ 0.37. The potential gain in corrected field of view with DCAO is confirmed both by laboratory experiments on a model eye and by preliminary in vivo imaging of a human eye. (c) 2009 Optical Society of America

9. Automatic detection of diabetic retinopathy features in ultra-wide field retinal images
Science.gov (United States) Levenkova, Anastasia; Sowmya, Arcot; Kalloniatis, Michael; Ly, Angelica; Ho, Arthur 2017-03-01 Diabetic retinopathy (DR) is a major cause of irreversible vision loss. DR screening relies on retinal clinical signs (features). Opportunities for computer-aided DR feature detection have emerged with the development of ultra-wide field (UWF) digital scanning laser technology. UWF imaging covers 82% greater retinal area (200°), against 45° in conventional cameras, allowing more clinically relevant retinopathy to be detected. UWF images also provide a high resolution of 3078 x 2702 pixels. Currently DR screening uses 7 overlapping conventional fundus images, and the UWF images provide similar results. However, in 40% of cases, more retinopathy was found outside the 7 standard ETDRS fields by UWF imaging, and in 10% of cases retinopathy was reclassified as more severe. This is because UWF imaging allows examination of both the central retina and more peripheral regions, with the latter implicated in DR. We have developed an algorithm for automatic recognition of DR features, including bright lesions (cotton wool spots and exudates) and dark lesions (microaneurysms and blot, dot and flame haemorrhages), in UWF images. The algorithm extracts features from grayscale (green "red-free" laser light) and colour-composite UWF images, including intensity, Histogram-of-Gradient and Local Binary Pattern features. Pixel-based classification is performed with three different classifiers. The main contribution is the automatic detection of DR features in the peripheral retina. The method is evaluated by leave-one-out cross-validation on 25 UWF retinal images with 167 bright lesions, and 61 other images with 1089 dark lesions. The SVM classifier performs best with AUC of 94.4% / 95.31% for bright / dark lesions.

10. mxCSM: A 100-slit, 6-wavelength wide-field coronal spectropolarimeter for the study of the dynamics and the magnetic fields of the solar corona
Directory of Open Access Journals (Sweden) Haosheng Lin 2016-03-01 Tremendous progress has been made in the field of observational coronal magnetometry in the first decade of the 21st century.
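Stepping back to the diabetic-retinopathy entry above, its feature-plus-classifier pipeline (intensity, HOG and LBP features feeding an SVM) can be sketched compactly; everything below, data, patch size and parameters, is a placeholder, not the paper's setup:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC

def patch_features(patch):
    """HOG + uniform-LBP histogram + mean intensity for one grayscale patch."""
    h = hog(patch, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    lbp = local_binary_pattern(patch, P=8, R=1.0, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist, [patch.mean()]])

rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(40, 32, 32)).astype(np.uint8)  # stand-ins
labels = rng.integers(0, 2, size=40)                                # lesion / not

X = np.array([patch_features(p) for p in patches])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

In a real evaluation the patches would come from annotated UWF images and the score from leave-one-out cross-validation, as the entry describes.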
With the successful construction of the Coronal Multichannel Magnetometer (CoMP) instrument, observations of the linear polarization of the coronal emission lines (CELs), which carry information about the azimuthal direction of the coronal magnetic fields, are now routinely available. However, reliable and regular measurements of the circular polarization signals of the CELs remain elusive. The CEL circular polarization signals allow us to infer the magnetic field strength in the corona, and are critically important to our understanding of the solar corona. Current telescopes and instruments can only measure the coronal magnetic field strength over a small field of view. Furthermore, the observations require very long integration times that preclude the study of dynamic events even when only a small field of view is required. This paper describes a new instrument concept that employs large-scale multiplexing technology to enhance the efficiency of current coronal spectropolarimeters by more than two orders of magnitude. This will allow the instrument to increase the integration time at each spatial location by the same factor, while also achieving large field-of-view coverage. We present the conceptual design of a 100-slit coronal spectropolarimeter that can observe six coronal emission lines simultaneously. Instruments based on this concept will allow us to study the evolution of the coronal magnetic field even with coronagraphs of modest aperture.

11. Searching for fast optical transients by means of wide-field monitoring observations with high temporal resolution
Science.gov (United States) Beskin, G.; Karpov, S.; Plokhotnichenko, V.; Bondar, S.; Ivanov, E.; Perkov, A.; Greco, G.; Guarnieri, A.; Bartolini, C. We discuss a strategy for searching for fast optical transients accompanying gamma-ray bursts by means of continuous monitoring of wide sky fields with high temporal resolution. We describe the design, performance and results of our cameras, FAVOR and TORTORA. We also discuss the prospects of this strategy and a possible design of next-generation equipment for wide-field monitoring, which will be able to detect optical transients and to study their color and polarization properties with high time resolution.

12. A Comprehensive Study of ULIRGs in the Herschel Very Wide Field Surveys
Science.gov (United States) Yan, Haojing Extreme starbursting galaxies exist at all redshifts, and most of them are so heavily obscured by dust that they are Ultra-Luminous InfraRed Galaxies (ULIRGs) while being faint in the optical to near-IR. The latest example is at record-high z=6.337, approaching the end of reionization. There have been numerous suggestions that understanding ULIRGs is critical to constructing a comprehensive picture of galaxy formation history. These range from the hypothesis three decades ago that the ULIRG phase is the prelude to QSOs and large ellipticals, to the recent tentative evidence that ULIRGs could make a large (if not dominant) contribution to the global star formation rate density (GSFRD) at z>1. However, the exact nature of ULIRGs and their role in galaxy assembly still remain elusive, largely due to the limited sample size and the severe source confusion problem in the far-IR (FIR).
The very wide field surveys by Herschel have provided the best opportunity to date to systematically study ULIRGs beyond the local universe, most importantly because of their wide coverage and high sensitivity to probe large volumes to high redshifts, and the multiple FIR bands that allow for direct measurement of the IR luminosities. We propose to construct the largest possible ULIRG sample in these fields at all redshifts, and to study the evolution of ULIRGs. We will concentrate on the HerMES, the H-ATLAS and the HerS programs, whose data are already public. While the confusion problem still persists in these Herschel data, we have demonstrated that it is possible to directly use the position priors from optical images to decompose the candidate contributors to a given Herschel source if its S/N suffices (Yan et al. 2014). This is a significant improvement over previous studies, where higher-resolution mid-IR (mostly Spitzer MIPS 24-micron) data had to be used as proxies for the FIR source locations, because (1) such proxy images also suffer from the blending problem in the first place and [...]

13. Magnetic stars with wide depressions in the continuum. 2. The silicon star with a complex field structure HD 27404
Science.gov (United States) Semenko, E. A.; Romanyuk, I. I.; Semenova, E. S.; Moiseeva, A. V.; Kudryavtsev, D. O.; Yakunin, I. A. 2017-10-01 Observations of the chemically peculiar star HD 27404 with the 6-m SAO RAS telescope showed a strong magnetic field, with the longitudinal field component varying in a complicated way in the range of -2.5 to 1 kG. Fundamental parameters of the star (T_eff = 11,300 K, log g = 3.9) were estimated by analyzing photometric indices in the Geneva and Strömgren-Crawford photometric systems. We detected weak radial velocity variations, which can be due to the presence of a close companion star or to chemical spots in the photosphere. Rapid estimation of the key chemical element abundances allows us to classify HD 27404 as a SiCr or Si+ chemically peculiar A0-B9 star.

14. Afar-wide Crustal Strain Field from Multiple InSAR Tracks
Science.gov (United States) Pagli, C.; Wright, T. J.; Wang, H.; Calais, E.; Bennati Rassion, L. S.; Ebinger, C. J.; Lewi, E. 2010-12-01 Onset of a rifting episode in the Dabbahu volcanic segment, Afar (Ethiopia), in 2005 renewed interest in crustal deformation studies in the area. As a consequence, an extensive geodetic data set, including InSAR and GPS measurements, has been acquired over Afar and holds great potential for improving our understanding of the extensional processes that operate during the final stages of continental rupture. The current geodetic observational and modelling strategy has focused on detailed, localised studies of dyke intrusions and eruptions, mainly in the Dabbahu segment. However, an eruption in the Erta 'Ale volcanic segment in 2008, and a cluster of earthquakes observed in the Tat 'Ale segment, are testament to activity elsewhere in Afar. Here we make use of the vast geodetic dataset available to obtain strain information over the whole Afar depression. A systematic analysis of all the volcanic segments, including Dabbahu, Manda-Hararo, Alayta, Tat 'Ale, Erta 'Ale and the Djibouti deformation zone, is undertaken. We use InSAR data from multiple tracks together with available GPS measurements to obtain a velocity field model for Afar.
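At a single mesh node, combining line-of-sight (LOS) rates from several viewing geometries with GPS reduces to a small weighted least-squares problem. A toy single-node sketch with invented numbers (the study itself solves all nodes jointly on a mesh):

```python
import numpy as np

# Each InSAR track measures a LOS rate d = u . v, where u is the LOS unit
# vector and v = (east, north, up) is the node velocity. GPS constrains
# the horizontal components. All values below are made up.
U = np.array([
    [ 0.38, -0.09, 0.92],   # ascending-track LOS unit vector (assumed)
    [-0.38, -0.09, 0.92],   # descending-track LOS unit vector (assumed)
])
d_los = np.array([2.1, -1.4])      # LOS rates, mm/yr
gps_en = np.array([4.0, 1.0])      # GPS east and north rates, mm/yr

G = np.vstack([U, [[1, 0, 0], [0, 1, 0]]])   # stack InSAR + GPS rows
obs = np.concatenate([d_los, gps_en])
w = np.array([1.0, 1.0, 2.0, 2.0])           # GPS weighted higher (assumed)

v, *_ = np.linalg.lstsq(w[:, None] * G, w * obs, rcond=None)
print("east, north, up velocity (mm/yr):", v.round(2))
```

With only ascending and descending geometries the north component is poorly constrained by InSAR alone, which is exactly why the GPS rows matter.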
We use over 300 radar images acquired by the Envisat satellite in both descending and ascending orbits, from 12 distinct tracks in image and wide-swath modes, spanning the period from October 2005 to the present. We obtain the line-of-sight deformation rates from each InSAR track using a network approach and then combine the InSAR velocities with the GPS observations, as suggested by Wright and Wang (2010), following the method of England and Molnar (1997). A mesh is constructed over the Afar area and we solve for the horizontal and vertical velocities at each node. The resultant full 3D Afar-wide velocity field shows where current strains are being accumulated within the various volcanic segments of Afar, the width of the plate boundary deformation zone, and possible connections between distinct volcanic segments [...]

15. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry
Science.gov (United States) Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor) 2012-01-01 Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide-field imaging interferometry. The method includes, for each point in a two-dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulation at every point in a two-dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first and second data for each point. The method can generate an image from the wide-field data cube.

16. Corot telescope (COROTEL)
Science.gov (United States) Viard, Thierry; Mathieu, Jean-Claude; Fer, Yann; Bouzou, Nathalie; Spalinger, Etienne; Chataigner, Bruno; Bodin, Pierre; Magnan, Alain; Baglin, Annie 2017-11-01 COROTEL is the telescope of the COROT satellite, which aims at measuring stellar flux variations very accurately. To perform this mission, COROTEL has to be very well protected against straylight (from Sun and Earth) and must be very stable with time. Thanks to its extensive experience in this field, Alcatel Alenia Space has proposed, manufactured and tested an original telescope concept associated with high baffling performance. Since its delivery to LAM (Laboratoire d'Astrophysique de Marseille, CNRS), the telescope has successfully passed the instrument-level qualification tests performed by CNES. Now the instrument is mounted on a Proteus platform and should be launched at the end of 2006. The satellite should, for the first time, bring the scientific community valuable data from stars and their possible companions.

17. Nerve Fiber Flux Analysis Using Wide-Field Swept-Source Optical Coherence Tomography.
Science.gov (United States) Tan, Ou; Liu, Liang; Liu, Li; Huang, David 2018-02-01 To devise a method to quantify nerve fibers along their arcuate courses over an extended peripapillary area using optical coherence tomography (OCT). Participants were imaged with 8 × 8-mm volumetric OCT scans centered at the optic disc.
A new quantity, nerve fiber flux (NFF), represents the cross-sectional area transected perpendicular to the nerve fibers. The peripapillary area was divided into 64 tracks with equal flux. An iterative algorithm traced the trajectory of the tracks assuming that the relative distribution of the NFF was conserved, with compensation for fiber connections to ganglion cells on the macular side. The average trajectory was computed from the normal eyes and used to calculate the NFF maps for the glaucomatous eyes. The NFF maps were divided into eight sectors that correspond to visual field regions. Twenty-four healthy and 10 glaucomatous eyes were enrolled. The algorithm converged on similar patterns of nerve fiber tracks for all healthy eyes. In glaucomatous eyes, NFF correlated with visual field sensitivity in the arcuate sectors (Spearman ρ = 0.53-0.62). Focal nerve fiber loss in glaucomatous eyes appeared as uniform tracks of NFF defects that followed the expected arcuate fiber trajectory. Using an algorithm based on the conservation of flux, we derived nerve fiber trajectories in the peripapillary area. The NFF map is useful for the visualization of focal defects and quantification of sector nerve fiber loss from wide-area volumetric OCT scans. NFF provides a cumulative measure of volumetric loss along nerve fiber tracks and could improve the detection of focal glaucoma damage.

18. 1-Million droplet array with wide-field fluorescence imaging for digital PCR.
Science.gov (United States) Hatch, Andrew C; Fisher, Jeffrey S; Tovar, Armando R; Hsieh, Albert T; Lin, Robert; Pentoney, Stephen L; Yang, David L; Lee, Abraham P 2011-11-21 Digital droplet reactors are useful as chemical and biological containers to discretize reagents into picolitre or nanolitre volumes for analysis of single cells, organisms, or molecules. However, most DNA-based assays require processing of samples on the order of tens of microlitres that contain as few as one to as many as millions of fragments to be detected. Presented in this work is a droplet microfluidic platform and fluorescence imaging setup designed to better meet these high-throughput and high-dynamic-range needs by integrating multiple high-throughput droplet processing schemes on the chip. The design is capable of generating over 1 million monodisperse 50-picolitre droplets in 2-7 minutes, which then self-assemble into high-density 3-dimensional sphere-packing configurations in a large viewing chamber for visualization and analysis. The device then undergoes on-chip polymerase chain reaction (PCR) amplification and fluorescence detection to digitally quantify the sample's nucleic acid contents. Wide-field fluorescence images are captured using a low-cost 21-megapixel digital camera and macro lens with an 8-12 cm² field of view at 1× to 0.85× magnification, respectively. We demonstrate both end-point and real-time imaging ability to perform on-chip quantitative digital PCR analysis of the entire droplet array. Compared to previous work, this highly integrated design yields a 100-fold increase in the number of on-chip digitized reactors with simultaneous fluorescence imaging for digital PCR-based assays.

19. Evaluation of illumination system uniformity for wide-field biomedical hyperspectral imaging
Science.gov (United States) Luthman, A. Siri; Bohndiek, Sarah E. 2017-04-01 Hyperspectral imaging (HSI) systems collect both spatial (morphological) and spectral (chemical) information from a sample.
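The quantification step in the droplet digital-PCR entry above rests on Poisson statistics: with random loading, the fraction of positive droplets p gives a mean occupancy of -ln(1 - p) copies per droplet. A minimal sketch with invented counts (only the 50-picolitre droplet volume comes from the entry):

```python
import numpy as np

n_droplets = 1_000_000          # array size, as in the entry
n_positive = 150_000            # made-up readout
droplet_volume_l = 50e-12       # 50 picolitres per droplet

p = n_positive / n_droplets
lam = -np.log(1 - p)            # mean target copies per droplet (Poisson)
conc = lam / droplet_volume_l   # copies per litre of sample

print(f"mean copies per droplet: {lam:.4f}")
print(f"concentration: {conc:.3e} copies/L")
```

This correction is what lets a digital array quantify samples well past one copy per droplet, which is where the high dynamic range claimed above comes from.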
HSI can provide sensitive analysis for biological and medical applications, for example, simultaneously measuring reflectance and fluorescence properties of a tissue, which together with structural information could improve early cancer detection and tumour characterisation. Illumination uniformity is a critical pre-condition for quantitative data extraction from an HSI system. Non-uniformity can cause glare, specular reflection and unwanted shading, which negatively impact the statistical analysis procedures used to extract the abundances of different chemical species. Here, we model and evaluate several illumination systems frequently used in wide-field biomedical imaging to test their potential for HSI. We use the software packages LightTools and FRED. The analysed systems include: a fibre ring light; a light emitting diode (LED) ring; and a diffuse scattering dome. Each system is characterised for spectral, spatial, and angular uniformity, as well as transfer efficiency. Furthermore, an approach to measuring uniformity using the Kullback-Leibler divergence (KLD) is introduced. The KLD is generalisable to arbitrary illumination shapes, making it an attractive approach for characterising illumination distributions. Although the systems are quite comparable in their spatial and spectral uniformity, the most uniform angular distribution is achieved using a diffuse scattering dome, yielding a contrast of 0.503 and average deviation of 0.303 over a ±60° field of view with a 3.9% model error in the angular domain. Our results suggest that conventional illumination sources can be applied in HSI, but in the case of low light levels, bespoke illumination sources may offer improved performance.

20. The South Pole Telescope
Energy Technology Data Exchange (ETDEWEB) Ruhl, J.E.; Ade, P.A.R.; Carlstrom, J.E.; Cho, H.M.; Crawford, T.; Dobbs, M.; Greer, C.H.; Halverson, N.W.; Holzapfel, W.L.; Lanting, T.M.; Lee, A.T.; Leitch, E.M.; Leong, J.; Lu, W.; Lueker, M.; Mehl, J.; Meyer, S.S.; Mohr, J.J.; Padin, S.; Plagge, T.; Pryke, C.; Runyan, M.C.; Schwan, D.; Sharp, M.K.; Spieler, H.; Staniszewski, Z.; Stark, A.A. 2004-11-04 A new 10 meter diameter telescope is being constructed for deployment at the NSF South Pole research station. The telescope is designed for conducting large-area millimeter and sub-millimeter wave surveys of faint, low-contrast emission, as required to map primary and secondary anisotropies in the cosmic microwave background. To achieve the required sensitivity and resolution, the telescope design employs an off-axis primary with a 10 m diameter clear aperture. The full aperture and the associated optics will have a combined surface accuracy of better than 20 microns rms to allow precision operation in the submillimeter atmospheric windows. The telescope will be surrounded by a large reflecting ground screen to reduce sensitivity to thermal emission from the ground and local interference. The optics of the telescope will support a square degree field of view at 2 mm wavelength and will feed a new 1000-element micro-lithographed planar bolometric array with superconducting transition-edge sensors and frequency-multiplexed readouts. The first key project will be to conduct a survey over 4000 square degrees for galaxy clusters using the Sunyaev-Zeldovich Effect. This survey should find many thousands of clusters with a mass selection criterion that is remarkably uniform with redshift.
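The KLD uniformity score introduced in the illumination entry above reduces, for a discretised irradiance map, to a few lines of code. A minimal sketch on a synthetic profile (nothing here is from the paper beyond the idea of comparing against a flat distribution):

```python
import numpy as np

def kld_from_uniform(irradiance):
    """KL divergence of a measured irradiance distribution from a flat one.
    0 means perfectly uniform; larger values mean less uniform."""
    p = np.asarray(irradiance, dtype=float).ravel()
    p = p / p.sum()                           # normalise to a distribution
    q = np.full_like(p, 1.0 / p.size)         # ideal uniform distribution
    nz = p > 0                                # avoid log(0) terms
    return float(np.sum(p[nz] * np.log(p[nz] / q[nz])))

x = np.linspace(-1, 1, 201)
profile = 1.0 - 0.3 * x**2                    # mild falloff toward the edges
print(f"KLD from uniform: {kld_from_uniform(profile):.5f}")
```

Because the score only needs a normalised distribution, it applies equally to spatial, spectral, or angular profiles, which is the generality the entry highlights.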
Armed with redshifts obtained from optical and infrared follow-up observations, it is expected that the survey will enable significant constraints to be placed on the equation of state of the dark energy.

1. Far Sidelobes Measurement of the Atacama Cosmology Telescope
Science.gov (United States) Duenner, Rolando; Gallardo, Patricio; Wollack, Ed; Henriquez, Fernando; Jerez-Hanckes, Carlos 2012-01-01 The Atacama Cosmology Telescope (ACT) is a 6 m telescope designed to map the Cosmic Microwave Background (CMB) simultaneously at 145 GHz, 220 GHz and 280 GHz. Its off-axis Gregorian design is intended to minimize and control the off-axis sidelobe response, which is critical for scientific purposes. The expected sidelobe level for this kind of design is less than -50 dB and can be challenging to measure. Here we present a measurement of the 145 GHz far sidelobes of ACT performed in the near field of the telescope. We used a 1 mW microwave source placed 13 meters away from the telescope and a chopper wheel to produce a varying signal that could be detected by the camera for different orientations of the telescope. The source feed was designed to produce a wide beam profile. Given that the coupling is expected to be dominated by diffraction over the telescope shielding structure, these measurements, when combined with measurements of the main-beam far-field response, can be used to validate elements of the optical design and constrain the level of spurious coupling at large angles. Our results show that the diffractive coupling beyond the ground screen is consistently below -75 dB, satisfying the design expectations.

2. Wide-Field Infrared Survey Explorer Observations of the Evolution of Massive Star-Forming Regions
Science.gov (United States) Koenig, X. P.; Leisawitz, D. T.; Benford, D. J.; Rebull, L. M.; Padgett, D. L.; Assef, R. J. 2012-01-01 We present the results of a mid-infrared survey of 11 outer Galaxy massive star-forming regions and 3 open clusters with data from the Wide-field Infrared Survey Explorer (WISE). Using a newly developed photometric scheme to identify young stellar objects and exclude extragalactic contamination, we have studied the distribution of young stars within each region. These data tend to support the hypothesis that latter generations may be triggered by the interaction of winds and radiation from the first burst of massive star formation with the molecular cloud material leftover from that earlier generation of stars. We dub this process the "fireworks hypothesis" since star formation by this mechanism would proceed rapidly and resemble a burst of fireworks. We have also analyzed small cutout WISE images of the structures around the edges of these massive star-forming regions. We observe large (1-3 pc size) pillar and trunk-like structures of diffuse emission nebulosity tracing excited polycyclic aromatic hydrocarbon molecules and small dust grains at the perimeter of the massive star-forming regions. These structures contain small clusters of emerging Class I and Class II sources, but some are forming only a single to a few new stars.

4. Development of a wide-field fluorescence imaging system for evaluation of wound re-epithelialization
Science.gov (United States) Franco, Walfre; Gutierrez-Herrera, Enoch; Purschke, Martin; Wang, Ying; Tam, Josh; Anderson, R. Rox; Doukas, Apostolos 2013-03-01 Normal skin barrier function depends on having a viable epidermis, an epithelial layer formed by keratinocytes. The transparent epidermis, which is less than 100 μm thick, is nearly impossible to see. Thus, the clinical evaluation of re-epithelialization is difficult, which hinders selecting appropriate therapy for promoting wound healing. An imaging system was developed to evaluate epithelialization by detecting endogenous fluorescence emissions of cellular proliferation over a wide field of view. A custom-made 295 nm ultraviolet (UV) light source was used for excitation. Detection was done by integrating a near-UV camera with sensitivity down to 300 nm, a 12 mm quartz lens with iris and focus lock for the UV regime, and a fluorescence bandpass filter with 340 nm center wavelength. To demonstrate that changes in fluorescence are related to cellular processes, the epithelialization of a skin substitute was monitored in vitro. The skin substitute, or construct, was made by embedding microscopic live human skin tissue columns, 1 mm in diameter and spaced 1 mm apart, in acellular porcine dermis. Fluorescence emissions clearly delineate the extent of lateral surface migration of keratinocytes and the total surface covered by the new epithelium. The fluorescence image of new epidermis spatially correlates with the corresponding color image. A simple, user-friendly way of imaging the presence of skin epithelium would improve wound care in civilian burns, ulcers and surgeries.

5. Development of a Data Reduction algorithm for Optical Wide Field Patrol
Directory of Open Access Journals (Sweden) Sun-youp Park 2013-09-01 The detector subsystem of the Optical Wide-field Patrol (OWL) network efficiently acquires the position and time information of moving objects such as artificial satellites through its chopper system, which consists of 4 blades in front of the CCD camera. Using this system, it is possible to get more position data within the same exposure time by breaking the streaks of moving objects into many pieces with the fast rotating blades during sidereal tracking.
At the same time, the time data from the rotating chopper can be acquired by the time tagger connected to the photo diode. To analyze the orbits of the targets detected in the image data of such a system, a sequential procedure was developed: determining the positions of the separated streak lines, calculating the World Coordinate System (WCS) solution to transform those positions into equatorial coordinates, and finally combining the time log records from the time tagger with the transformed position data. We introduce this procedure and the preliminary results of applying it to test observation images.

6. High performance ring oscillators from 10-nm wide silicon nanowire field-effect transistors
KAUST Repository Huang, Ruo-Gu 2011-06-24 We explore 10-nm wide Si nanowire (SiNW) field-effect transistors (FETs) for logic applications, via the fabrication and testing of SiNW-based ring oscillators. We report on SiNW surface treatments and dielectric annealing for producing SiNW FETs that exhibit high performance in terms of a large on/off-state current ratio (~10^8), low drain-induced barrier lowering (~30 mV) and low subthreshold swing (~80 mV/decade). The performance of inverter and ring-oscillator circuits fabricated from these nanowire FETs is also explored. The inverter demonstrates the highest voltage gain (~148) reported for a SiNW-based NOT gate, and the ring oscillator exhibits near rail-to-rail oscillation centered at 13.4 MHz. The static and dynamic characteristics of these NW devices indicate that these SiNW-based FET circuits are excellent candidates for various high-performance nanoelectronic applications. © 2011 Tsinghua University Press and Springer-Verlag Berlin Heidelberg.
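The oscillation frequency quoted in the silicon-nanowire entry above implies a per-stage delay once a stage count is assumed; the entry does not give one, so the value below is a guess for illustration:

```python
# Per-stage delay of an N-stage inverter ring: f_osc = 1 / (2 * N * t_stage)
f_osc = 13.4e6                      # Hz, quoted oscillation frequency
n_stages = 9                        # assumed (not stated in the entry)
stage_delay_s = 1.0 / (2 * n_stages * f_osc)
print(f"per-stage delay: {stage_delay_s * 1e9:.1f} ns")   # ~4.1 ns for N = 9
```

The factor of 2 comes from a full oscillation period requiring the signal to propagate around the ring twice (once per logic transition).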
8. Characterization of high proper motion objects from the wide-field infrared survey explorer
Energy Technology Data Exchange (ETDEWEB) Luhman, K. L. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Sheppard, Scott S., E-mail: [email protected] [Department of Terrestrial Magnetism, Carnegie Institution of Washington, 5241 Broad Branch Road NW, Washington, DC 20015 (United States) 2014-06-01 We present an analysis of high proper motion objects that we have found in a recent study and in this work with multi-epoch astrometry from the Wide-field Infrared Survey Explorer (WISE). Using photometry and proper motions from the Two Micron All-Sky Survey and WISE, we have identified the members of this sample that are likely to be late-type, nearby, or metal-poor. We have performed optical and near-infrared spectroscopy on 41 objects, from which we measure spectral types that range from M4 to T2.5. This sample includes 11 blue L dwarfs and 5 subdwarfs; the latter were also classified as such in the recent study by Kirkpatrick and coworkers. Based on their spectral types and photometry, several of our spectroscopic targets may have distances of <20 pc, with the closest at ∼12 pc. The tangential velocities implied by the spectrophotometric distances and proper motions indicate that four of the five subdwarfs are probably members of the Galactic halo, while several other objects, including the early-T dwarf WISE J210529.08–623558.7, may belong to the thick disk.

9. Mapping absolute tissue endogenous fluorophore concentrations with chemometric wide-field fluorescence microscopy
Science.gov (United States) Xu, Zhang; Reilley, Michael; Li, Run; Xu, Min 2017-06-01 We report chemometric wide-field fluorescence microscopy for imaging the spatial distribution and concentration of endogenous fluorophores in thin tissue sections. Nonnegative factorization aided by spatial diversity is used to learn both the spectral signature and the spatial distribution of endogenous fluorophores from microscopic fluorescence color images obtained under broadband excitation and detection. The absolute concentration map of individual fluorophores is derived by comparing the fluorescence from "pure" fluorophores under identical imaging conditions, following the identification of the fluorescent species by their spectral signatures. This method is then demonstrated by characterizing the concentration maps of endogenous fluorophores (including tryptophan, elastin, nicotinamide adenine dinucleotide, and flavin adenine dinucleotide) for lung tissue specimens. The absolute concentrations of these fluorophores are all found to decrease significantly from normal, to perilesional, to cancerous (squamous cell carcinoma) tissue. Discriminating tissue types using the absolute fluorophore concentration is found to be significantly more accurate than what is achievable with the relative fluorescence strength. Quantification of fluorophores in terms of the absolute concentration map is also advantageous in eliminating the uncertainties due to system responses or measurement details, yielding more biologically relevant data, and simplifying the assessment of competing imaging approaches.

10. Neutrino Telescope
International Nuclear Information System (INIS) Mezzetto, M.
2011-01-01 The conference series 'Un Altro Modo di guardare il Cielo', held in Venice, started in 1988. It included 13 editions of 'Neutrino Telescopes' and four editions of 'Neutrino Oscillations in Venice'. The conference series was conceived, created and conducted by Prof. Milla Baldo Ceolin; under her guidance 'Un Altro Modo di guardare il Cielo' became one of the most important fixed appointments of the neutrino physics and astrophysics community.

11. Schmidt Telescope
Science.gov (United States) Murdin, P. 2000-11-01 A type of telescope, invented by the Estonian optician Bernhard Schmidt (1879-1935), that is used to photograph large areas of the sky. Because, in its original design, it was useable only for photography, the instrument is also known as the Schmidt camera. The Schmidt uses a concave spherical mirror as its light collector and corrects for the optical defect, known as spherical aberration, that i...

12. Combined high contrast and wide field of view in the scanning laser ophthalmoscope through dual detection of light paths
Science.gov (United States) Carles, Guillem; Muyo, Gonzalo; van Hemert, Jano; Harvey, Andrew R. 2017-11-01 We demonstrate a multimode detection system in a scanning laser ophthalmoscope (SLO) that enables simultaneous operation in confocal, indirect, and direct modes to permit an agile trade between image contrast and optical sensitivity across the retinal field of view, optimizing overall imaging performance and enabling increased contrast in very wide-field operation. We demonstrate the method on a wide-field SLO employing a hybrid pinhole at its image plane, yielding a twofold increase in vasculature contrast in the central retina compared to its conventional direct mode, while retaining high-quality imaging across a wide field of the retina of up to 200 deg with 20 μm on-axis resolution.

13. Wide-field laser ophthalmoscopy for imaging of gas-filled eyes after macular hole surgery
Directory of Open Access Journals (Sweden) Nakao S 2016-08-01 Background and objective: Existing ophthalmoscopy methods are unable to obtain clear fundus autofluorescence (FAF) images in gas-filled eyes. The purpose of this study was to evaluate the capability of wide-field laser ophthalmoscopy (Optos) in obtaining FAF images in gas-filled eyes for the assessment of macular hole (MH) closure after surgery. Methods: This was an interventional case series. Eighteen consecutive patients with unilateral MH underwent vitrectomy with internal limiting membrane peeling and 20% sulfur hexafluoride gas tamponade. FAF images using Optos were recorded preoperatively and postoperatively (days 1, 2, and 7). Results: On postoperative days 1, 2, and 7, FAF images were obtained from 11/18 (61.1%), 9/18 (50.0%), and 17/18 eyes (94.4%), respectively, using Optos. The quality of FAF images using Optos was sufficient to determine MH closure in 9/18 (50.0%) of gas-filled eyes postoperatively. Quantitative analysis of FAF images was helpful in determining complete or partial closure of the MH.
Conclusion: FAF imaging using Optos might be a useful adjunct to optical coherence tomography as a supportive method to guide the release from facedown posturing in some cases of MH. Keywords: Optos, fundus autofluorescence, facedown, gas, vitrectomy

14. The wide field imager for the International X-ray Observatory
Science.gov (United States) Treis, J.; Bombelli, L.; Fiorini, C.; Herrmann, S.; Lauf, T.; Lechner, P.; Lutz, G.; Majewski, P.; Porro, M.; Richter, R. H.; Stefanescu, A.; Strüder, L.; de Vita, G. 2009-08-01 The large collecting area of the X-ray optics on the International X-ray Observatory (IXO), their good angular resolution, the wide bandwidth of X-ray energies and the high radiation tolerance required for the X-ray detectors in the focal plane have stimulated a new development of devices which unify all those science-driven specifications in one single detector. The concept of a monolithic, back-illuminated silicon active pixel sensor (APS) based on the DEPFET structure is proposed for the IXO mission, being a fully depleted, back-illuminated 450 μm thick detector with a physical size of about 10 × 10 cm², corresponding to the 18 arcmin field of view. The backside will be covered with an integrated optical light and UV filter. Corresponding to the 5 arcsec angular resolution of the X-ray optics, large pixels of 100 × 100 μm² in a format of approximately 1024 × 1024 are envisaged, matching the point spread function of approximately 500 μm HEW of the optics. The energy range from 100 eV to 15 keV is achieved by an ultra-thin radiation entrance window for the low energies and 450 μm of depleted silicon thickness for the higher energies. The fast readout of 1,000 full frames per second is realized by a dedicated analog CMOS front-end amplifier IC. The detector device is intrinsically radiation hard. The leakage current from bulk damage is controlled through the operating temperature of around -60 °C and by the high readout speed. Results of various prototype measurements will be shown.

15. THE DISCOVERY OF Y DWARFS USING DATA FROM THE WIDE-FIELD INFRARED SURVEY EXPLORER (WISE)
International Nuclear Information System (INIS) Cushing, Michael C.; Mainzer, A.; Eisenhardt, Peter R.; Kirkpatrick, J. Davy; Gelino, Christopher R.; Griffith, Roger L.; Marsh, Kenneth A.; Beichman, Charles A.; Skrutskie, Michael F.; Burgasser, Adam J.; Prato, Lisa A.; Simcoe, Robert A.; Marley, Mark S.; Freedman, Richard S.; Saumon, D.; Wright, Edward L. 2011-01-01 We present the discovery of seven ultracool brown dwarfs identified with the Wide-field Infrared Survey Explorer (WISE). Near-infrared spectroscopy reveals deep absorption bands of H2O and CH4 that indicate all seven of the brown dwarfs have spectral types later than UGPS J072227.51–054031.2, the latest-type T dwarf currently known. The spectrum of WISEP J182831.08+265037.8 is distinct in that the heights of the J- and H-band peaks are approximately equal in units of f_lambda, so we identify it as the archetypal member of the Y spectral class. The spectra of at least two of the other brown dwarfs exhibit absorption on the blue wing of the H-band peak that we tentatively ascribe to NH3. These spectral morphological changes provide a clear transition between the T dwarfs and the Y dwarfs. In order to produce a smooth near-infrared spectral sequence across the T/Y dwarf transition, we have reclassified UGPS 0722–05 as the T9 spectral standard and tentatively assign WISEP J173835.52+273258.9 as the Y0 spectral standard.
In total, six of the seven new brown dwarfs are classified as Y dwarfs: four are classified as Y0, one is classified as Y0 (pec?), and WISEP J1828+2650 is classified as >Y0. We have also compared the spectra to the model atmospheres of Marley and Saumon and infer that the brown dwarfs have effective temperatures ranging from 300 K to 500 K, making them the coldest spectroscopically confirmed brown dwarfs known to date.

16. Space Variant PSF – Deconvolution of Wide-Field Astronomical Images
Directory of Open Access Journals (Sweden) M. Řeřábek 2008-01-01 The properties of UWFC (Ultra Wide-Field Camera) astronomical systems, along with specific visual data in astronomical images, contribute to a comprehensive evaluation of the acquired image data. These systems contain many different kinds of optical aberrations, which negatively affect image quality and the imaging system's transfer characteristics, and reduce the precision of astronomical measurement. It is very important to answer two main questions: first, how do astrometric measurements depend on optical aberrations; and second, how do optical aberrations affect the transfer characteristics of the whole optical system. If we define the PSF (Point Spread Function) [2] of an optical system, we can use suitable methods for restoring the original image. Optical aberration models for LSI/LSV (Linear Space Invariant/Variant) [2] systems are presented in this paper. These models are based on Seidel and Zernike approximating polynomials [1]. Optical aberration models serve as a suitable tool for estimating and fitting the wavefront aberration of a real optical system. Real data from the BOOTES (Burst Observer and Optical Transient Exploring System) experiment is used for our simulations. Problems related to UWFC imaging systems, especially a restoration method in the presence of a space-variant PSF, are described in this paper. A model of the space-variant imaging system, and partially of the space-variant optical system, has been implemented in MATLAB. The "brute force" method has been used for restoration of the test images. The results of different deconvolution algorithms are demonstrated in this paper. This approach could help to improve the precision of astronomical measurements.

17. First optical validation of a Schwarzschild Couder telescope: the ASTRI SST-2M Cherenkov telescope
Science.gov (United States) Giro, E.; Canestrari, R.; Sironi, G.; Antolini, E.; Conconi, P.; Fermino, C. E.; Gargano, C.; Rodeghiero, G.; Russo, F.; Scuderi, S.; Tosti, G.; Vassiliev, V.; Pareschi, G. 2017-12-01 Context. The Cherenkov Telescope Array (CTA) represents the most advanced facility designed for Cherenkov astronomy. ASTRI SST-2M has been developed as a demonstrator for the Small Size Telescope in the context of the upcoming CTA. Its main innovation consists in the optical layout, which implements the Schwarzschild-Couder configuration and is fully validated for the first time. The ASTRI SST-2M optical system represents the first qualified example of a two-mirror telescope for Cherenkov astronomy. This configuration permits us to (i) maintain high optical quality across a large field of view; (ii) demagnify the plate scale; and (iii) exploit new technological solutions for focal plane sensors. Aims: The goal of the paper is to present the optical qualification of the ASTRI SST-2M telescope.
The qualification has been obtained by measuring the point spread function (PSF) sizes generated in the focal plane at various distances from the optical axis. These values have been compared with the performance expected by design. Methods: After an introduction on gamma-ray astronomy from the ground, the optical design of ASTRI SST-2M and how it has been implemented is discussed. Moreover, the set-up used to qualify the telescope over the full field of view is described. Results: We report the results of the first-light optical qualification. The required specification of a flat PSF of ~10 arcmin across a large field of view (~10°) has been demonstrated. These results validate the design specifications, opening a new scenario for Cherenkov gamma-ray astronomy and, in particular, for the detection of high-energy (5-300 TeV) gamma rays and wide-field observations with CTA.

18. WIDE-FIELD INFRARED IMAGING: A Descriptive Review of Characteristics of Retinoschisis, Retinal Detachment, and Schisis Detachments.
Science.gov (United States) Ho, Vincent Y; Wehmeier, Jarrod M; Shah, Gaurav K 2016-08-01 Retinoschisis and retinal detachments are primarily differentiated based on characteristic examination findings. In diagnostically challenging cases, noncontact wide-field infrared imaging can help diagnose and visualize the extent/margins of retinoschisis, retinal detachment, or combined schisis detachments by comparing reflectivity patterns. This is a retrospective, observational, descriptive case series of 14 eyes of 14 nonconsecutive patients, ranging from 28 to 89 years old (mean 61), diagnosed with retinoschisis, retinal detachment, or schisis detachment from May 5, 2014 to March 4, 2015. Patients with secondary retinoschisis and/or retinal detachment from other causes were not included in the study. A Heidelberg Wide-Field Module lens and Heidelberg Spectralis HRA+OCT machine (Heidelberg Engineering, Heidelberg, Germany) were used to obtain noncontact wide-field infrared images of each study eye. Seven eyes with retinal detachments, four with retinoschises, and three with schisis detachments were imaged using this novel wide-field infrared technique. Retinoschisis appears light and translucent with prominent vasculature, retinal detachments appear dark and opaque, and combined retinoschisis/retinal detachment exhibits mixed reflectivity patterns. Wide-field infrared imaging provides a quick, noncontact, noninvasive method to accurately diagnose and monitor for progression of retinoschisis, retinal detachment, or combined schisis detachments.

19. Far Ultraviolet Space Telescope (FAUST)
Science.gov (United States) Bowyer, S. 1988-01-01 The Far Ultraviolet Space Telescope is a compact, wide field-of-view, far-ultraviolet instrument designed for observations of extended and point sources of astronomical interest. It was originally used in sounding rocket work by both French and American investigators. The instrument was modified for flight on the space shuttle and flew on the Spacelab 1 mission as a joint effort between the Laboratoire d'Astronomie Spatiale and the University of California, Berkeley. The prime objective of this telescope on the Atmospheric Laboratory for Applications and Science (ATLAS 1) NASA mission is to observe faint astronomical sources in the far ultraviolet with sensitivities far higher than previously available. The experiment will cover the 1300 to 1800 Å band, which is inaccessible to observers on Earth.
The observing program during the mission consists of obtaining deep sky images during spacecraft nighttime. The targets will include hot stars and nebulae in our own galaxy, faint diffuse galactic features similar to the cirrus clouds seen by the Infrared Astronomical Satellite (IRAS), large nearby galaxies, nearby clusters of galaxies, and objects of cosmological interest such as quasars and the diffuse far ultraviolet background. 20. The Falcon Telescope Network Science.gov (United States) Chun, F.; Tippets, R.; Dearborn, M.; Gresham, K.; Freckleton, R.; Douglas, M. 2014-09-01 The Falcon Telescope Network (FTN) is a global network of small-aperture telescopes developed by the Center for Space Situational Awareness Research in the Department of Physics at the United States Air Force Academy (USAFA). Consisting of commercially available equipment, the FTN is a collaborative effort between USAFA and other educational institutions ranging from two- and four-year colleges to major research universities. USAFA provides the equipment (e.g., telescope, mount, camera, filter wheel, dome, weather station, computers, and storage devices) while the educational partners provide the building and infrastructure to support an observatory. The user base includes USAFA along with K-12 and higher-education faculty and students. Since the FTN serves a general-use purpose, objects of interest include satellites, astronomical research targets, and STEM-support images. The raw imagery, all in the public domain, will be accessible to FTN partners and will be archived at USAFA in the Cadet Space Operations Center. FTN users will be able to submit observational requests via a web interface. The requests will then be prioritized based on the type of user, the object of interest, and a user-defined priority (a toy prioritization sketch appears below). A network-wide schedule will be developed every 24 hours, and each FTN site will autonomously execute its portion of the schedule. After an observational request is completed, the FTN user will receive notification of collection and a link to the data. The Falcon Telescope Network is an ambitious endeavor, but demonstrates the cooperation that can be achieved by multiple educational institutions. 1. Wide field CO J = 3 → 2 mapping of the Serpens cloud core DEFF Research Database (Denmark) Dionatos, Odyssefs; Nisini, Brunella; Codella, Claudio 2010-01-01 Aims. The main objective of the paper is to study the overall outflow distribution and its association with the young population of the Serpens Core cluster. In addition, the paper addresses the correlation of the outflow momentum flux with the bolometric luminosity of their driving sources using... this homogeneous dataset for a single star-forming site. Methods: An area comprising 460″ × 230″ of the Serpens cloud core was mapped in ^12CO J = 3 → 2 with the HARP-B heterodyne array at the James Clerk Maxwell Telescope; J = 3 → 2 observations are more sensitive tracers of hot outflow gas than lower... but two outflow/core pairs in our sample tend to have a projected orientation spanning roughly NW-SE. The overall momentum driven by outflows in Serpens lies between 3.2 and 5.1 × 10^-1 M⊙ km s^-1, the kinetic energy from 4.3 to 6.7 × 10^43 erg, and the momentum flux between 2.8 and 4.4 × 10^-4 M⊙ km s^-1 yr^-1... 2. Rock glaciers Gruben, Muragl and Murtel, Switzerland: Area-wide flow fields, Version 1 Data.gov (United States) National Aeronautics and Space Administration — Besides their thermal and mechanical properties, rock glaciers are essentially defined by their kinematics.
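To make the FTN request prioritization described in entry 20 above concrete, here is a minimal sketch of one plausible scheme in Python. The user and object rankings and the field names are hypothetical illustrations, not the actual USAFA scheduler.

    from dataclasses import dataclass

    # Hypothetical rankings: lower values sort first.
    USER_RANK = {"usafa": 0, "university": 1, "k12": 2}
    OBJECT_RANK = {"satellite": 0, "transient": 1, "survey": 2, "stem": 3}

    @dataclass
    class Request:
        user_type: str
        object_type: str
        user_priority: int  # 1 = most urgent, set by the requester
        target: str

    def build_schedule(requests):
        """Order the next 24 h of observations by user type, object class, then user priority."""
        return sorted(requests, key=lambda r: (USER_RANK[r.user_type],
                                               OBJECT_RANK[r.object_type],
                                               r.user_priority))

    queue = build_schedule([Request("k12", "stem", 2, "M42"),
                            Request("usafa", "satellite", 1, "ISS")])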
Knowledge of the permafrost flow field provides important... 3. Completion of the Southern African Large Telescope Science.gov (United States) Buckley, D. A. H.; Charles, P. A.; O'Donoghue, D.; Nordsieck, K. H. 2006-08-01 The Southern African Large Telescope (SALT) is a low-cost (19.7M), innovative, 10-m class optical telescope, which was inaugurated on 10 November 2005, just 5 years after ground-breaking. SALT and its first-light instruments are currently being commissioned, and full science operations are expected to begin later this year. This paper describes the design and construction of SALT, including the first-light instruments, SALTICAM and the Robert Stobie Spectrograph (RSS). A rigorous systems engineering approach was adopted to ensure that SALT was built to specification, on budget, close to the original schedule, and using a relatively small project team. The design trade-offs, which include an active spherical primary mirror array in a fixed-altitude telescope with a prime-focus tracker, although restrictive in comparison to conventional telescopes, have resulted in an affordable and capable 10-m class telescope for South Africa and its ten partners. Coupled with an initial set of two seeing-limited instruments that concentrate on the UV-visible region (320-900 nm) and feature some unique observational capabilities, SALT will be able to conduct a wide range of science programs. These will include high time resolution studies, for which some initial results have already been obtained and are presented here. Many of the versatile modes available with the RSS will provide unparalleled opportunities for imaging polarimetry and spectropolarimetry. Likewise, Multi-Object Spectroscopy (using laser-cut graphite slit masks) and imaging spectroscopy with the RSS, the latter using Fabry-Perot etalons and interference filters, will extend the multiplex advantage at resolutions from R = 300 to 9000 over fields of view of 2 to 8 arcminutes. Future instrumentation plans include an extremely stable, fibre-fed, high-resolution échelle spectrograph and a near-IR (possibly to 1.7 μm) extension to the RSS. Future development possibilities include phasing the primary mirror 4. SparsePak: A Formatted Fiber Field Unit for the WIYN Telescope Bench Spectrograph. I. Design, Construction, and Calibration NARCIS (Netherlands) Bershady, Matthew A.; Andersen, David R.; Harker, Justin; Ramsey, Larry W.; Verheijen, Marc A. W. 2004-01-01 We describe the design and construction of a formatted fiber field unit, SparsePak, and characterize its optical and astrometric performance. This array is optimized for spectroscopy of low surface brightness extended sources in the visible and near-infrared. SparsePak contains 82, 4.7" fibers 5. PMAS: The Potsdam Multi-Aperture Spectrophotometer. II. The Wide Integral Field Unit PPak NARCIS (Netherlands) Kelz, Andreas; Verheijen, Marc A. W.; Roth, Martin M.; Bauer, Svend M.; Becker, Thomas; Paschke, Jens; Popow, Emil; Sánchez, Sebastian F.; Laux, Uwe 2006-01-01 PPak is a new fiber-based integral field unit (IFU) developed at the Astrophysical Institute of Potsdam and implemented as a module into the existing Potsdam Multi-Aperture Spectrophotometer (PMAS) spectrograph. The purpose of PPak is to provide an extended field of view with a large 6.
Magnetic Fields In NGC 6946 Using Wide-Band Radio Polarimetry NARCIS (Netherlands) Williams, Anna; Heald, George; Wilcots, Eric M.; Gould Zweibel, Ellen Magnetic fields are important ingredients in the interstellar medium of galaxies. They accelerate cosmic rays, affect star formation, and regulate the redistribution of matter and energy. Despite their ubiquitous presence, the growth and coevolution of magnetic fields with galactic processes are not 7. Clinical assessment of human breast cancer margins with wide-field optical coherence micro-elastography (Conference Presentation) Science.gov (United States) Allen, Wes M.; Chin, Lixin; Wijesinghe, Philip; Kirk, Rodney W.; Latham, Bruce; Sampson, David D.; Saunders, Christobel M.; Kennedy, Brendan F. 2017-02-01 Breast cancer has the second highest mortality rate of all cancers in females. Surgical excision of malignant tissue forms a central component of breast-conserving surgery (BCS) procedures. Incomplete excision of malignant tissue is a major issue in BCS, with typically 20-30% of cases requiring a second surgical procedure due to postoperative detection of tumor in the margin. A major challenge for surgeons during BCS is the lack of effective tools to assess the surgical margin intraoperatively. Such tools would enable the surgeon to more effectively remove all tumor during the initial surgery, hence reducing re-excision rates. We report advances in the development of a new tool, optical coherence micro-elastography, which forms images, known as elastograms, based on mechanical contrast within the tissue. We demonstrate the potential of this technique to increase contrast between malignant tumor and healthy stroma in elastograms relative to OCT images. We demonstrate a key advance toward clinical translation by conducting wide-field imaging in intraoperative time frames with a wide-field scanning system, acquiring mosaicked elastograms with overall dimensions of 50 × 50 mm, large enough to image an entire face of most lumpectomy specimens. We describe this wide-field imaging system and demonstrate its operation by presenting wide-field optical coherence tomography images and elastograms of a tissue-mimicking silicone phantom and a number of representative freshly excised human breast specimens. Our results demonstrate the feasibility of scanning large areas of lumpectomies, which is an important step towards practical intraoperative margin assessment. 8. Rapid wide-field Mueller matrix polarimetry imaging based on four photoelastic modulators with no moving parts. Science.gov (United States) Alali, Sanaz; Gribble, Adam; Vitkin, I. Alex 2016-03-01 A new polarimetry method is demonstrated that images the entire Mueller matrix of a turbid sample using four photoelastic modulators (PEMs) and a charge-coupled device (CCD) camera, with no moving parts. Accurate wide-field imaging is enabled with a field-programmable gate array (FPGA) optical gating technique and an evolutionary algorithm (EA) that optimizes imaging times. This technique accurately and rapidly measured the Mueller matrices of air, polarization elements, and turbid phantoms. The system should prove advantageous for Mueller matrix analysis of turbid samples (e.g., biological tissues) over large fields of view, in less than a second. 9.
Clustering of quasars in a wide luminosity range at redshift 4 with Subaru Hyper Suprime-Cam Wide-field imaging Science.gov (United States) He, Wanqiu; Akiyama, Masayuki; Bosch, James; Enoki, Motohiro; Harikane, Yuichi; Ikeda, Hiroyuki; Kashikawa, Nobunari; Kawaguchi, Toshihiro; Komiyama, Yutaka; Lee, Chien-Hsiu; Matsuoka, Yoshiki; Miyazaki, Satoshi; Nagao, Tohru; Nagashima, Masahiro; Niida, Mana; Nishizawa, Atsushi J.; Oguri, Masamune; Onoue, Masafusa; Oogi, Taira; Ouchi, Masami; Schulze, Andreas; Shirasaki, Yuji; Silverman, John D.; Tanaka, Manobu M.; Tanaka, Masayuki; Toba, Yoshiki; Uchiyama, Hisakazu; Yamashita, Takuji 2018-01-01 We examine the clustering of quasars over a wide luminosity range, by utilizing 901 quasars at z̄_phot ∼ 3.8 with -24.73 Strategic Program (HSC-SSP) S16A Wide2 data release and 342 more luminous quasars at 3.4 Digital Sky Survey that fall in the HSC survey fields. We measure the bias factors of the two quasar samples by evaluating the cross-correlation functions (CCFs) between the quasar samples and 25,790 bright z ∼ 4 Lyman break galaxies with M1450 < -21.25 photometrically selected from the HSC dataset. Over an angular scale of 10.0" to 1000.0", the bias factors are 5.93 (+1.34/-1.43) and 2.73 (+2.44/-2.55) for the low- and high-luminosity quasars, respectively, indicating no significant luminosity dependence of quasar clustering at z ∼ 4. It is noted that the bias factor of the luminous quasars estimated by the CCF is smaller than that estimated by the auto-correlation function over a similar redshift range, especially on scales below 40.0". Moreover, the bias factor of the less-luminous quasars implies that the minimal mass of their host dark matter halos is 0.3-2 × 10^12 h^-1 M⊙, corresponding to a quasar duty cycle of 0.001-0.06. 10. Hubble Space Telescope: The Telescope, the Observations & the Servicing Mission Science.gov (United States) 1999-11-01 Hubble's success owes much to the advantage of being in orbit, beyond the Earth's atmosphere. From there it enjoys a crystal-clear view of the universe - without clouds and atmospheric disturbances to blur its vision. European astronomer Guido De Marchi from ESO in Munich has been using Hubble since the early days of the project. He explains: "HST can see the faintest and smallest details and lets us study the stars with great accuracy, even where they are packed together - just as with those in the centre of our Galaxy". Dieter Reimers from Hamburg Observatory adds: "HST has capabilities to see ultraviolet light, which is not possible from the ground due to the blocking effect of the atmosphere. And this is really vital to our work, the main aim of which is to discover the chemical composition of the Universe." The Servicing Missions In the early plans for telescope operations, maintenance visits were to have been made every 2.5 years, and every five years HST was to have been transported back to the ground for a thorough overhaul. This plan changed somewhat over time, and a servicing scheme with Space Shuttle Servicing Missions every three years was decided upon. The first two Servicing Missions, in December 1993 (STS-61) and February 1997 (STS-82) respectively, were very successful. In the first three years of operations HST did not meet expectations because its primary mirror was 2 microns too flat at the edge.
The first Servicing Mission in 1993 (on which the European astronaut Claude Nicollier flew) dealt with this problem by installing a new instrument with corrective optics (COSTAR - Corrective Optics Space Telescope Axial Replacement). With this pair of "glasses" HST's golden age began. The images were as sharp as originally hoped, and astonishing new results started to emerge on a regular basis. The first Servicing Mission also replaced the solar panels and installed a new camera (Wide Field and Planetary Camera 2 - WFPC2). The High-Speed Photometer (HSP) was 11. Parfocal wide field near infrared grism design and fabrication for WFIRST Data.gov (United States) National Aeronautics and Space Administration — WFIRST will have Hubble image quality with 100x the field area of HST/WFC3. It requires both imaging and, working in the same optical train, a grism allowing... 12. [A Concept Design of Flat-Field Spectrograph for Wide Wavelength Range]. Science.gov (United States) Li, Shi-yuan; Zhang, Guang-cai; Teng, Ai-ping 2015-05-01 The radiation spectrum from a plasma contains a large amount of information about the plasma. Thus, one of the most effective methods of determining plasma parameters is to measure the plasma radiation spectrum. Until now, owing to the restrictions of the Toshiba mechanically ruled aberration-corrected concave gratings, the measurable wavelength range of grazing-incidence flat-field spectrometers in the soft X-ray range has been only 5 to 40 nm. In order to extend the wavelength range of the grazing-incidence flat-field spectrometer, a ray-trace code for grazing-incidence concave reflection gratings was first written using the optical path equation. Second, under the same conditions as reference 6, we compared our numerical results with Harada's; the two agree very well, showing that our ray-trace code is reliable. Finally, the flat-field curves were investigated in detail using the ray-trace code under different grazing-incidence conditions. The results show that the measurable wavelength range of the grazing-incidence flat-field spectrometer can be extended from the soft X-ray range of 5-40 nm to 5-80 nm. This result theoretically demonstrates the possibility of extending the traditional flat-field grazing-incidence spectrometer from the soft X-ray band to the extreme ultraviolet (XUV), and also brings new design ideas for improving the use of grazing-incidence flat-field concave gratings (the grating equation at the heart of such codes is sketched below). 13. Galaxies in the Diffuse Baryon Field Approaching Reionization: A Joint Study with JWST, HST, and Large Telescopes Science.gov (United States) Simcoe, Robert 2017-08-01 Our team is conducting a dedicated survey for emission-line galaxies at 5 6 quasars, using JWST/NIRCAM's slitless grism in a 110 hour GTO allocation. We have acquired deep near-IR spectra of the QSOs, revealing multiple heavy-element absorption systems and probing the HI optical depth within each object's survey volume. These data will provide the first systematic view of the circumgalactic medium at z > 4, allowing us to study early metal enrichment, correlations of the intergalactic HI optical depth with galaxy density, and the environment of the quasar hosts. These fields generally do not have deep multicolor photometry that would facilitate selection of broadband dropout galaxies for future observation with JWST/NIRSPEC.
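As a small illustration of the ray-trace approach in entry 12 above: the core relation is the grating equation, sin α + sin β = mλ/d. The groove density and incidence angle below are typical illustrative values for such flat-field gratings, not the paper's exact parameters.

    import numpy as np

    def diffraction_angle_deg(wavelength_nm, alpha_deg, grooves_per_mm=1200, order=1):
        """Solve sin(alpha) + sin(beta) = m*lambda/d for beta.
        Angles are measured from the grating normal; sign conventions vary
        between texts, so this form is purely illustrative."""
        d_nm = 1.0e6 / grooves_per_mm  # groove spacing in nm
        s = order * wavelength_nm / d_nm - np.sin(np.radians(alpha_deg))
        return np.degrees(np.arcsin(s))

    # Diffracted angles across the extended 5-80 nm range at an incidence angle
    # of 87 deg from the normal (about 3 deg grazing).
    for lam in (5.0, 20.0, 40.0, 80.0):
        print(lam, round(diffraction_angle_deg(lam, alpha_deg=87.0), 2))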
However, during long spectroscopic integrations with NIRCAM's long channel we will obtain deep JWST photometry in F115W and F200W, together with F356W for wavelength calibration. Here we request 30 orbits with HST/ACS to acquire deep optical photometry that (together with the JWST IR bands) will constrain SED models and enable dropout selection of fainter objects. For lower-redshift objects the rest-UV ACS data will improve estimates of star formation rate and stellar mass. Within a Small-GO program scope we will obtain sensitivity similar to CANDELS-Deep in all six fields, and approximately double the size of our galaxy sample appropriate for JWST/NIRSPEC follow-up at redshifts approaching the reionization epoch. 14. Aplanatic telescopes based on Schwarzschild optical configuration: from grazing incidence Wolter-like x-ray optics to Cherenkov two-mirror normal incidence telescopes Science.gov (United States) Sironi, Giorgia 2017-09-01 At the beginning of the XX century Karl Schwarzschild defined a method to design large-field aplanatic telescopes based on the use of two aspheric mirrors. The approach was then refined by Couder (1926) who, in order to correct for the astigmatic aberration, introduced a curvature of the focal plane. However, the realization of normal-incidence telescopes implementing the Schwarzschild aplanatic configuration has historically been limited by the lack of technological solutions to manufacture and test aspheric mirrors. On the other hand, the Schwarzschild solution was recovered for the realization of coma-free X-ray grazing-incidence optics. Wolter-like grazing-incidence systems are indeed free of spherical aberration, but still suffer from coma and higher-order aberrations degrading the imaging capability for off-axis sources. The application of Schwarzschild's solution to X-ray optics allowed Wolter to define an optical system that exactly obeys the Abbe sine condition, eliminating coma completely. These systems are therefore named Wolter-Schwarzschild telescopes and have been used to implement wide-field X-ray telescopes like the ROSAT WFC and the SOHO X-ray telescope. Starting from this approach, a new class of X-ray optical systems was proposed by Burrows, Burg and Giacconi, assuming polynomials numerically optimized to obtain a flat field-of-view response, and applied by Conconi to the wide-field x-ray telescope (WFXT) design. The Schwarzschild-Couder solution has recently been re-discovered for application to normal-incidence Cherenkov telescopes, thanks to the suggestion by Vassiliev and collaborators. The Italian Institute for Astrophysics (INAF) realized the first Cherenkov telescope based on the polynomial variation of the Schwarzschild configuration (the so-called ASTRI telescope). Its optical qualification was successfully completed in 2016, demonstrating the suitability of the Schwarzschild-like configuration for the Cherenkov astronomy requirements 15. Summary of field operations Technical Area I well PGS-1. Site-Wide Hydrogeologic Characterization Project International Nuclear Information System (INIS) Fritts, J.E.; McCord, J.P. 1995-02-01 The Environmental Restoration (ER) Project at Sandia National Laboratories, New Mexico is managing the project to assess and, when necessary, to remediate sites contaminated by laboratory operations. Within the ER project, the site-wide hydrogeologic characterization task is responsible for the area-wide hydrogeologic investigation.
The purpose of this task is to reduce the uncertainty about the rate and direction of groundwater flow beneath the area and across its boundaries. This specific report deals with the installation of the PGS-1 monitoring well, which provides information on the lithology and hydrology of the aquifer in the northern area of Kirtland Air Force Base. The report provides information on the well design; surface geology; stratigraphy; structure; drilling, completion, and development techniques; and borehole geophysics. 16. Operating a heterogeneous telescope network Science.gov (United States) Allan, Alasdair; Bischoff, Karsten; Burgdorf, Martin; Cavanagh, Brad; Christian, Damien; Clay, Neil; Dickens, Rob; Economou, Frossie; Fadavi, Mehri; Frazer, Stephen; Granzer, Thomas; Grosvenor, Sandy; Hessman, Frederic V.; Jenness, Tim; Koratkar, Anuradha; Lehner, Matthew; Mottram, Chris; Naylor, Tim; Saunders, Eric S.; Solomos, Nikolaos; Steele, Iain A.; Tuparev, Georg; Vestrand, W. Thomas; White, Robert R.; Yost, Sarah 2006-06-01 In the last few years the ubiquitous availability of high-bandwidth networks has changed the way both robotic and non-robotic telescopes operate, with single isolated telescopes being integrated into expanding "smart" telescope networks that can span continents and respond to transient events in seconds. The Heterogeneous Telescope Networks (HTN) Consortium represents a number of major research groups in the field of robotic telescopes, and together we are proposing a standards-based approach to providing interoperability between the existing proprietary telescope networks. We further propose standards for interoperability with, and integration into, the emerging Virtual Observatory. We present the results of the first interoperability meeting held last year and discuss the protocol and transport standards agreed at the meeting, which deal with the complex issue of how to optimally schedule observations on geographically distributed resources. We discuss a free-market approach to this scheduling problem, which must initially be based on ad hoc agreements between the participants in the network, but which may eventually expand into an electronic market for the exchange of telescope time. 17. Electric field and temperature measurement using ultra wide bandwidth pigtailed electro-optic probes. Science.gov (United States) Bernier, Maxime; Gaborit, Gwenaël; Duvillaret, Lionel; Paupert, Alain; Lasserre, Jean-Louis 2008-05-01 We present pigtailed electro-optic probes that allow simultaneous measurement of high-frequency electric fields and temperature using a single laser probe beam. This has been achieved by the development of a novel probe design associated with a fully automated servo-controlled optical bench, initially developed to stabilize the electric field sensor response. The developed electro-optic probes present a stable response in outdoor conditions over a duration exceeding 1 h, a frequency bandwidth from kHz to tens of GHz with a sensitivity of 0.7 V m^-1 Hz^-1/2 (a worked example of the resulting noise floor appears below), and a temperature accuracy of 40 mK. 18. Wide-bandwidth charge sensitivity with a radio-frequency field-effect transistor NARCIS (Netherlands) Nishiguchi, K.; Yamaguchi, H.; Fujiwara, A.; Van der Zant, H.S.J.; Steele, G.A. 2013-01-01 We demonstrate high-speed charge detection at room temperature with single-electron resolution by using a radio-frequency field-effect transistor (RF-FET).
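A quick worked number tied to the probe sensitivity quoted in entry 17 above (0.7 V m^-1 Hz^-1/2): the noise-equivalent field scales as the square root of the measurement bandwidth. The 1 MHz bandwidth below is an illustrative assumption.

    import math

    sensitivity = 0.7      # V m^-1 Hz^-1/2, as quoted in the abstract
    bandwidth_hz = 1.0e6   # illustrative measurement bandwidth

    e_min = sensitivity * math.sqrt(bandwidth_hz)  # noise-equivalent field
    print(f"~{e_min:.0f} V/m over 1 MHz")          # ~700 V/m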
The RF-FET combines a nanometer-scale silicon FET with an impedance-matching circuit composed of an inductor and a capacitor. Driving the RF-FET 19. Optical design of the Discovery Channel Telescope Science.gov (United States) MacFarlane, Malcolm J.; Dunham, Edward W. 2004-10-01 The Discovery Channel Telescope (DCT) is a joint venture between Discovery Communications and Lowell Observatory. The telescope will have a 4.2-meter clear-aperture, active primary mirror working at F/1.9. Two observing stations are presently planned: a Ritchey-Chretien focus some two meters behind the vertex of the primary mirror, and a prime focus featuring a wide-field optical corrector (WFOC) with a two-degree field of view. The Ritchey-Chretien focus will be used for a variety of optical and near-infrared imaging and spectroscopic instrumentation, while the prime focus will largely be used as a survey tool to search for near-earth and Kuiper belt objects, for example. In order to take advantage of sub-arcsecond seeing at the DCT site, a stringent set of requirements has been placed on the two foci. The requirements are for the full-width at half-maximum (FWHM) image of a point source to be less than 0.20 arcsecond at the Ritchey-Chretien focus over a 21 arcminute field and less than 0.27 arcsecond at prime focus in each of six filter bands, including a very broad band for survey purposes. This paper describes the optical design of the field correctors at the two foci. Particular attention is paid to the WFOC. This state-of-the-art device poses a number of optical challenges, which are discussed here, as well as mechanical challenges, which are discussed elsewhere. 20. Crystal-field investigations of rare-earth-doped wide band gap semiconductors CERN Multimedia Muller, S; Wahl, U Crystal field investigations play a central role in the study of rare-earth-doped semiconductors. Optical Stark-level spectroscopy and lattice location studies of radioactive rare earth isotopes implanted at ISOLDE have provided important insight into these systems during the last years. It has been shown that despite a major site preference of the probe atoms in the lattice, several defect configurations do exist. These sites are visible in the optical spectra, but their origin and nature are not deducible from these spectra alone. Hyperfine measurements, on the other hand, should reveal these defect configurations and yield the parameters necessary for a description of the optical properties at the atomic scale. In order to study the crystal field with this alternative approach, we propose a new concept for perturbed γγ-angular correlation (PAC) experiments at ISOLDE based on digital signal processing, in contrast to earlier analog setups. The general functionality of the spectrometer is explained ... 1. Wide-Field OCT Angiography at 400 kHz Utilizing Spectral Splitting Directory of Open Access Journals (Sweden) Laurin Ginner 2014-10-01 Full Text Available Optical angiography systems based on optical coherence tomography (OCT) require dense sampling in order to maintain good vascular contrast. We demonstrate a way to gain acquisition speed and spatial sampling by using spectral splitting with a swept-source OCT system. This method splits the recorded spectra into two or more subspectra (a toy sketch follows below). Using continuous lateral scanning, the lateral sampling is then increased by the same factor. This allows increasing the field of view of OCT angiography while keeping the same transverse resolution and measurement time.
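A toy numpy sketch of the spectral-splitting step just described in entry 1: each recorded interferogram is divided into subspectra, each subspectrum is Fourier-transformed into its own depth profile, and because the beam scans continuously the subspectra sample distinct lateral positions, multiplying the lateral sampling. Array shapes and the Hann window are illustrative assumptions.

    import numpy as np

    def split_spectra(fringes, n_split=2):
        """fringes: (n_ascans, n_samples) swept-source interferograms.
        Returns n_ascans * n_split depth profiles: axial resolution is traded
        for an n_split-fold denser lateral sampling during continuous scanning."""
        n_ascans, n_samples = fringes.shape
        sub_len = n_samples // n_split
        window = np.hanning(sub_len)        # suppress spectral leakage
        profiles = []
        for i in range(n_ascans):           # subspectra kept in acquisition order,
            for j in range(n_split):        # i.e. already interleaved along the scan
                sub = fringes[i, j * sub_len:(j + 1) * sub_len] * window
                profiles.append(np.abs(np.fft.rfft(sub)))
        return np.asarray(profiles)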
The performance of our method is demonstrated in vivo at different locations of the human retina and verified quantitatively. Spectral splitting can be applied without any changes to the optical setup, thus offering an easy way to increase the field of view of OCT in general, and of OCT angiography in particular. 2. Wide field-of-view and high-efficiency light concentrator Science.gov (United States) Zhi, Yu; Liang, Ye; Wang, Zhe; Chen, Shaomin 2018-03-01 To improve light yield and energy resolution in large-volume neutrino detectors, light concentrators are often mounted on photomultiplier tubes to increase the detection efficiency for optical photons from scintillation or Cherenkov light induced by charged particles. We propose a method to optimize previous light concentrator designs in order to attain a field of view of 90° and a geometrical collection efficiency above 98%. This improvement could be crucial to Jinping and other future neutrino experiments wherever it is applicable. 3. A Cosmic Ray Telescope For Educational Purposes International Nuclear Information System (INIS) Voulgaris, G.; Kazanas, S.; Chamilothoris, I. 2010-01-01 Cosmic ray detectors are widely used for educational purposes, in order to introduce students to the physics of elementary particles and astrophysics. Using a 'telescope' of scintillation counters, the directional characteristics, diurnal variation, and correlation with solar activity can be determined, and conclusions about the composition, origin and interaction of elementary particles with the magnetic field of the Earth can be inferred. A telescope was built from two rectangular scintillator panels with dimensions 91.6 × 1.9 × 3.7 cm^3. The scintillators are placed on top of each other, separated by a fixed distance of 34.6 cm. They are supported by a wooden frame which can be rotated around a horizontal axis. Direction is determined by the coincidence of the signals of the two PMTs. Standard NIM modules are used for readout. This device is to be used in the undergraduate nuclear and particle physics laboratory. The design and construction of the telescope, as well as some preliminary results, are presented. 4. Wide-field time-resolved luminescence imaging and spectroscopy to decipher obliterated documents in forensic science Science.gov (United States) Suzuki, Mototsugu; Akiba, Norimitsu; Kurosawa, Kenji; Kuroki, Kenro; Akao, Yoshinori; Higashikawa, Yoshiyasu 2016-01-01 We applied a wide-field time-resolved luminescence (TRL) method with a pulsed laser and a gated intensified charge-coupled device (ICCD) to the deciphering of obliterated documents in forensic science. The TRL method can nondestructively measure the dynamics of luminescence, including fluorescence and phosphorescence lifetimes, which prove to be useful parameters for image detection. First, we measured the TRL spectra of four brands of black porous-tip pen inks on paper to estimate their luminescence lifetimes. Next, we acquired TRL images of 12 obliterated documents at various delay times and gate times of the ICCD. The obliterated contents were revealed in the TRL images because of the difference in the luminescence lifetimes of the inks. This method requires no pretreatment, is nondestructive, and has the advantage of wide-field imaging, which makes it easy to control the gate timing. This demonstration proves that TRL imaging and spectroscopy are powerful tools for forensic document examination. 5.
Mini-Mega-TORTORA wide-field monitoring system with sub-second temporal resolution: first year of operation Science.gov (United States) Karpov, S.; Beskin, G.; Biryukov, A.; Bondar, S.; Ivanov, E.; Katkova, E.; Perkov, A.; Sasyuk, V. 2016-12-01 Here we present a summary of the first years of operation and the first results of a novel 9-channel wide-field optical monitoring system with sub-second temporal resolution, Mini-Mega-TORTORA (MMT-9), which is now in operation at the Special Astrophysical Observatory in the Russian Caucasus. The system is able to observe the sky simultaneously in either wide (˜900 square degrees) or narrow (˜100 square degrees) fields of view, either in clear light or with any combination of color (Johnson-Cousins B, V or R) and polarimetric filters installed, with exposure times ranging from 0.1 s to hundreds of seconds. The real-time data analysis pipeline performs automatic detection of rapid transient events, both near-Earth and extragalactic. The objects routinely detected by MMT include faint meteors and artificial satellites. The pipeline for longer-timescale variability analysis is still in development. 6. Picosecond wide-field time-correlated single photon counting fluorescence microscopy with a delay line anode detector Energy Technology Data Exchange (ETDEWEB) Hirvonen, Liisa M.; Le Marois, Alix; Suhling, Klaus, E-mail: [email protected] [Department of Physics, King's College London, Strand, London WC2R 2LS (United Kingdom); Becker, Wolfgang; Smietana, Stefan [Becker & Hickl GmbH, Nahmitzer Damm 30, 12277 Berlin (Germany); Milnes, James; Conneely, Thomas [Photek Ltd., 26 Castleham Rd, Saint Leonards-on-Sea TN38 9NS (United Kingdom); Jagutzki, Ottmar [Institut für Kernphysik, Max-von-Laue-Str. 1, 60438 Frankfurt (Germany) 2016-08-15 We perform wide-field time-correlated single-photon-counting-based fluorescence lifetime imaging (FLIM) with a crossed delay line anode image intensifier, where the pulse propagation time yields the photon position (see the sketch below). This microchannel-plate-based detector was read out with conventional fast timing electronics and mounted on a fluorescence microscope with total internal reflection (TIR) illumination. The picosecond time resolution of this detection system is combined with a low, microwatt-level illumination intensity and wide-field data collection. This is ideal for fluorescence lifetime imaging of cell membranes using TIR. We show that fluorescence lifetime images of living HeLa cells stained with the membrane dye di-4-ANEPPDHQ exhibit a reduced lifetime near the coverslip in TIR compared to epifluorescence FLIM. 7. Status and Perspectives of the Mini-MegaTORTORA Wide-field Monitoring System with High Temporal Resolution Directory of Open Access Journals (Sweden) Sergey Karpov 2013-01-01 Full Text Available Here we briefly summarize our long-term experience of constructing and operating wide-field monitoring cameras with sub-second temporal resolution, used to look for optical components of GRBs, fast-moving satellites and meteors. The general hardware requirements for these systems are discussed, along with algorithms for real-time detection and classification of various kinds of short optical transients. We also give a status report on the next generation, the MegaTORTORA multi-objective and transforming monitoring system, whose 6-channel (Mini-MegaTORTORA-Spain) and 9-channel (Mini-MegaTORTORA-Kazan) prototypes we have been building at SAO RAS.
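The delay-line readout in entry 6 above (pulse propagation time yields the photon position) reduces to a one-line computation, sketched below; the propagation speed and timestamps are illustrative numbers, not the detector's specifications.

    def delay_line_position_mm(t_left_ns, t_right_ns, v_prop_mm_per_ns=0.5):
        """Photon position along the delay line, x = 0 at the centre.
        The pulse reaches the nearer end first, so the arrival-time
        difference encodes the position: x = v * (t_left - t_right) / 2."""
        return 0.5 * v_prop_mm_per_ns * (t_left_ns - t_right_ns)

    x = delay_line_position_mm(12.4, 15.0)  # -0.65 mm: photon left of centre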
This system combines a wide field of view with subsecond temporal resolution in monitoring regime, and is able, within fractions of a second, to reconfigure itself into a follow-up mode, which has better sensitivity and simultaneously provides multi-color and polarimetric information on detected transients. 8. Optical design for CETUS: a wide-field 1.5 m aperture UV payload being studied for a NASA probe class mission study Science.gov (United States) Woodruff, Robert; Robert Woodruff, Goddard Space Flight Center, Kendrick Optical Consulting 2018-01-01 We are developing a NASA Headquarters-selected Probe-class mission concept called the Cosmic Evolution Through UV Spectroscopy (CETUS) mission, which includes a 1.5-m aperture diameter, large field-of-view (FOV) telescope optimized for UV imaging, multi-object spectroscopy, and point-source spectroscopy. The optical system includes a Three-Mirror Anastigmat (TMA) telescope that simultaneously feeds three separate scientific instruments: the near-UV (NUV) Multi-Object Spectrograph (MOS) with a next-generation Micro-Shutter Array (MSA); the two-channel camera covering the far-UV (FUV) and NUV spectrum; and the point-source spectrograph covering the FUV and NUV region with selectable R ~ 40,000 echelle modes and R ~ 2,000 first-order modes. The optical system includes fine guidance sensors, wavefront sensing, and spectral and flat-field in-flight calibration sources. This paper will describe the current optical design of CETUS. 9. A Search for Technosignatures from 14 Planetary Systems in the Kepler Field with the Green Bank Telescope at 1.15-1.73 GHz Science.gov (United States) Margot, Jean-Luc; Greenberg, Adam H.; Pinchuk, Pavlo; Shinde, Akshay; Alladi, Yashaswi; Prasad MN, Srinivas; Bowman, M. Oliver; Fisher, Callum; Gyalay, Szilard; McKibbin, Willow; Miles, Brittany; Nguyen, Donald; Power, Conor; Ramani, Namrata; Raviprasad, Rashmi; Santana, Jesse; Lynch, Ryan S. 2018-05-01 Analysis of Kepler mission data suggests that the Milky Way includes billions of Earth-sized planets in the habitable zones of their host stars. Current technology enables the detection of technosignatures emitted from a large fraction of the Galaxy. We describe a search for technosignatures that is sensitive to Arecibo-class transmitters located within ~420 ly of Earth and transmitters that are 1000 times more effective than Arecibo within ~13,000 ly of Earth. Our observations focused on 14 planetary systems in the Kepler field and used the L-band receiver (1.15-1.73 GHz) of the 100 m diameter Green Bank Telescope. Each source was observed for a total integration time of 5 minutes. We obtained power spectra at a frequency resolution of 3 Hz and examined narrowband signals with Doppler drift rates between ±9 Hz s^-1. We flagged any detection with a signal-to-noise ratio in excess of 10 as a candidate signal and identified approximately 850,000 candidates. Most (99%) of these candidate signals were automatically classified as human-generated radio-frequency interference (RFI). A large fraction (>99%) of the remaining candidate signals were also flagged as anthropogenic RFI because they have frequencies that overlap those used by global navigation satellite systems, satellite downlinks, or other interferers detected in heavily polluted regions of the spectrum. All 19 remaining candidate signals were scrutinized and none were attributable to an extraterrestrial source. 10.
Development of highly efficient proton recoil counter telescope for absolute measurement of neutron fluences in quasi-monoenergetic neutron calibration fields of high energy International Nuclear Information System (INIS) Shikaze, Yoshiaki; Tanimura, Yoshihiko; Saegusa, Jun; Tsutsumi, Masahiro 2010-01-01 Precise calibration of monitors and dosimeters for use with high-energy neutrons requires that neutron fluences be evaluated reliably and accurately at a reference point. A highly efficient Proton Recoil counter Telescope (PRT) was developed to make absolute measurements at a reference point and thereby evaluate neutron fluences in quasi-monoenergetic neutron fields. The relatively large design of the PRT componentry and the relatively thick (approximately 2 mm) polyethylene converter contributed to a high detection efficiency at the reference point over a large irradiation area at a long distance from the target. The polyethylene converter thickness was adjusted to maintain the same carbon density per unit area as the graphite converter, for easy background subtraction. The high detection efficiency and thickness adjustment allowed efficient absolute measurements of the neutron fluences, with sufficient statistical precision, over a short period of time. The neutron detection efficiencies of the PRT were evaluated using the MCNPX code as 2.61 × 10^-6, 2.16 × 10^-6 and 1.14 × 10^-6 for the respective neutron peak energies of 45, 60 and 75 MeV. The neutron fluences were evaluated with an uncertainty within 6.5% using analysis of the measured data and the detection efficiencies. The PRT was also designed to be capable of simultaneously acquiring TOF data. The TOF data also increase the reliability of the neutron fluence measurements and provide useful information for interpreting the source of proton events. 11. The Power of Wide Field HI Surveys: ALFALFA Imaging of Massive Tidal Features in the Leo Cloud of Galaxies Science.gov (United States) Leisman, Luke; Haynes, Martha P.; Giovanelli, Riccardo; ALFALFA Almost Darks Team 2016-01-01 Tidal interactions are well known to play an important role in galactic evolution in group environments, but the extent of these interactions, and their relative impact on the morphology-density relation, is still unclear. Neutral hydrogen (HI) mapping can reveal the recent interaction history of group galaxies, but is difficult to execute due to the need for high sensitivity over wide fields. The Arecibo Legacy Fast ALFA survey (ALFALFA; Giovanelli et al. 2005; Haynes et al. 2011) provides high-sensitivity, unbiased, wide-field maps of HI in the local volume; here we will present a 50 deg^2 ALFALFA map of a well-studied region of the Leo Cloud of galaxies, which includes the NGC3226/7 group and HCG44. These observations reveal HI tails and plumes with extents exceeding 1.4 deg (~600 kpc), well beyond the primary beams of previous observations. These tails constitute a significant fraction of the total HI mass in NGC3226/7 (Arp 94) and HCG44. We will also present WSRT maps of the extended emission near Arp 94, which show tail morphologies inconsistent with two-body interactions. These observations demonstrate that large-scale group interactions will be an important science outcome for future sensitive, wide-field HI surveys. This work is supported by NSF grants AST-0607007 and AST-1107390 and by grants from the Brinson Foundation. 12.
Development, Demonstration, and Field Testing of Enterprise-Wide Distributed Generation Energy Management System: Final Report Energy Technology Data Exchange (ETDEWEB) Greenberg, S.; Cooley, C. 2005-01-01 This report details progress on subcontract NAD-1-30605-1 between the National Renewable Energy Laboratory and RealEnergy (RE), the purpose of which is to describe RE's approach to the challenges it faces in the implementation of a nationwide fleet of clean cogeneration systems to serve contemporary energy markets. The Phase 2 report covers: utility tariff risk and its impact on market development; the effect of incentives on distributed energy markets; the regulatory effectiveness of interconnection in California; a survey of practical field interconnection issues; trend analysis for on-site generation; performance of dispatch systems; and information design hierarchy for combined heat and power. 13. From Widely Accepted Concepts in Coordination Chemistry to Inverted Ligand Fields. Science.gov (United States) Hoffmann, Roald; Alvarez, Santiago; Mealli, Carlo; Falceto, Andrés; Cahill, Thomas J; Zeng, Tao; Manca, Gabriele 2016-07-27 We begin with a brief historical review of the development of our understanding of the normal ordering of nd orbitals of a transition metal interacting with ligands, the most common cases being three below two in an octahedral environment, two below three in tetrahedral coordination, and four below one in a square-planar environment. From the molecular orbital construction of these ligand field splittings evolves a strategy for inverting the normal order: the obvious way to achieve this is to raise the ligand levels above the metal d's, that is, to make the ligands better Lewis bases. However, things are not so simple, for such metal/ligand level placement may lead to redox processes. For 18-electron octahedral complexes one can create the inverted situation, but it manifests itself in the makeup of the valence orbitals (are they mainly on the metal or the ligands?) rather than in their energy. One can also see the effect, in small ways, in tetrahedral Zn(II) complexes. We construct several examples of inverted ligand field systems with a hypothetical but not unrealistic AlCH3 ligand and sketch the consequences of inversion on reactivity. Special attention is paid to the square-planar case, exemplified by [Cu(CF3)4](-), in which Snyder had the foresight to see a case of an inverted field, with the empty valence orbital being primarily ligand-centered and the dx2-y2 orbital heavily occupied, in what would normally be called a Cu(III) complex. For [Cu(CF3)4](-) we provide theoretical evidence from electron distributions, geometry of the ligands, thermochemistry of molecule formation, and the energetics of abstraction of a CF3 ligand by a base, all consistent with oxidation of the ligands in this molecule. In [Cu(CF3)4](-), and perhaps in more complexes on the right side of the transition series than one has imagined, some ligands are σ-noninnocent. Exploration of inverted ligand fields helps us see the continuous, borderless transition from transition metal to main group bonding. We also give 14. Wide field of view spectroscopy using solid Fabry-Perot interferometers Science.gov (United States) Nikoleyczik, Jonathan; Kutyrev, Alexander; Moseley, Harvey; Veilleux, Sylvain 2016-08-01 We present a high-resolution spectrometer consisting of dual solid Fabry-Perot interferometers (FPIs). Each FPI is made of a single piece of L-BBH2 glass, which has a high index of refraction n ≈ 2.07.
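Anticipating the tandem-etalon details given in the next lines of entry 14: the transmission of an ideal etalon is the Airy function, and running two solid etalons in series multiplies their transmissions, so only coincident peaks survive and the effective free spectral range widens. The thicknesses and finesse below are illustrative, not the instrument's parameters; only n ≈ 2.07 is taken from the abstract.

    import numpy as np

    def airy_transmission(wl_nm, n, thickness_um, finesse):
        """Ideal Fabry-Perot transmission at normal incidence."""
        delta = 4.0 * np.pi * n * thickness_um * 1.0e3 / wl_nm  # round-trip phase
        coef = (2.0 * finesse / np.pi) ** 2
        return 1.0 / (1.0 + coef * np.sin(delta / 2.0) ** 2)

    wl = np.linspace(650.0, 660.0, 20001)                       # nm
    t1 = airy_transmission(wl, 2.07, thickness_um=500.0, finesse=30)
    t2 = airy_transmission(wl, 2.07, thickness_um=430.0, finesse=30)
    tandem = t1 * t2  # overlapping orders of one etalon are suppressed by the other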
Each is then coated with partially reflective mirrors to achieve a spectral resolution of R ≈ 30,000. Running the FPIs in tandem suppresses the overlapping orders and allows for a much wider free spectral range and higher contrast. Tuning of the FPIs is achieved by adjusting the temperature, which changes both the FPI gap and the refractive index of the material. The spectrometer then moves spatially in order to obtain spectral information at every point in the field of view. We select spectral lines for further analysis and create maps of the line depths across the field. Using this technique we are able to measure the fluorescence of chlorophyll in plants and to observe zodiacal light. In the chlorophyll analysis we are able to detect chlorophyll fluorescence from the line depth in a plant, using the sky as a reference solar spectrum. This instrument has possible applications in either cubesat or aerial observations to measure bulk plant activity over large areas. 15. Introduction to the Solar Space Telescope The design of the space solar telescope (SST) (phase B) has ... Key words. Space telescopes. 1. Introduction. The worldwide development of solar space-based observations went through two steps in spatial resolution: low resolution ... the National Science Foundation of China, the Ministry of Science and Technology. 16. Upconverting nanoparticles: a versatile platform for wide-field two-photon microscopy and multi-modal in vivo imaging. Science.gov (United States) Park, Yong Il; Lee, Kang Taek; Suh, Yung Doug; Hyeon, Taeghwan 2015-03-21 Lanthanide-doped upconverting nanoparticles (UCNPs) have recently attracted enormous attention in the field of biological imaging owing to their unique optical properties: (1) efficient upconversion photoluminescence, which is intense enough to be detected at the single-particle level with a (nonscanning) wide-field microscope setup equipped with a continuous-wave (CW) near-infrared (NIR) laser (980 nm), and (2) resistance to photoblinking and photobleaching. Moreover, the use of NIR excitation minimizes adverse photoinduced effects such as cellular photodamage and the autofluorescence background. Finally, the cytotoxicity of UCNPs is much lower than that of other nanoparticle systems. All these advantages can be exploited simultaneously without any conflicts, which enables the establishment of a novel UCNP-based platform for wide-field two-photon microscopy. UCNPs are also useful for multimodal in vivo imaging because simple variations in the composition of the lattice atoms and dopant ions integrated into the particles can be easily implemented, yielding various distinct biomedical activities relevant to magnetic resonance imaging (MRI), computed tomography (CT), and positron emission tomography (PET). These multiple functions embedded in a single type of UCNP play a crucial role in precise disease diagnosis. The application of UCNPs has been extended to therapeutic fields such as photodynamic and photothermal cancer therapies through advanced surface conjugation schemes. 17. Fiber-Coupled Wide Field of View Optical Receiver for High Speed Space Communication Science.gov (United States) Suddath, Shannon N. Research groups at NASA Glenn Research Center are interested in improving data rates on the International Space Station (ISS) using a free-space optical (FSO) link. However, known flexure of the ISS structure is expected to cause misalignment of the FSO link.
Passive-control designs for mitigating misalignment are under investigation, including the use of a fiber bundle for an improved field of view. The designs must overcome the obstacle of coupling directly to fiber, rather than to a photodetector, as NASA will maintain the use of small form-factor pluggable optical transceivers (SFPs) in the ISS network. In this thesis, a bundle-based receiver capable of coupling directly to fiber is designed, simulated, and tested in the lab. Two 3-lens systems were evaluated for power performance, one with a 20 mm focal length aspheric lens and the other with a 50 mm focal length aspheric lens. The maximum output power achieved was 8 μW. 18. Novel optical designs for consumer astronomical telescopes and their application to professional imaging Science.gov (United States) Wise, Peter; Hodgson, Alan 2006-06-01 Since the launch of the Hubble Space Telescope there has been widespread popular interest in astronomy. A further series of events, most notably the recent Deep Impact mission and Mars oppositions, has served to fuel further interest. As a result more and more amateurs are coming into astronomy as a practical hobby. At the same time more sophisticated optical equipment is becoming available as the price-to-performance ratio becomes more favourable. As a result larger and better optical telescopes are now in use by amateurs. We also have the explosive growth in digital imaging technologies. In addition to displacing photographic film as the preferred image capture modality, this has made the capture of high-quality astronomical imagery accessible to a wider segment of the astronomy community. However, this customer requirement has also had an impact on telescope design. There is now a greater imperative for wide, flat image fields in these telescopes to take advantage of the ongoing advances in CCD imaging technology. As a result of these market drivers, designers of consumer astronomical telescopes are now producing state-of-the-art designs with wide, flat fields and well-controlled spatial and chromatic aberrations. Whilst some of these designs are not scalable to the larger apertures required for professional ground and airborne telescope use, there are some that are eminently suited to make this transition. 19. Contributed Review: Camera-limits for wide-field magnetic resonance imaging with a nitrogen-vacancy spin sensor Science.gov (United States) Wojciechowski, Adam M.; Karadas, Mürsel; Huck, Alexander; Osterkamp, Christian; Jankuhn, Steffen; Meijer, Jan; Jelezko, Fedor; Andersen, Ulrik L. 2018-03-01 Sensitive, real-time optical magnetometry with nitrogen-vacancy centers in diamond relies on accurate imaging of small (≪10^-2) fractional fluorescence changes across the diamond sample. We discuss the limitations on magnetic field sensitivity resulting from the limited number of photoelectrons that a camera can record in a given time. Several types of camera sensors are analyzed, and the smallest measurable magnetic field change is estimated for each type. We show that most common sensors are of limited use in such applications, while certain highly specific cameras allow nanotesla-level sensitivity to be achieved in 1 s of combined exposure. Finally, we demonstrate results obtained with a lock-in camera that pave the way for real-time, wide-field magnetometry at the nanotesla level and with micrometer resolution. 20.
OmegaCAM: the 16k × 16k Survey Camera for the VLT Survey Telescope NARCIS (Netherlands) Deul, Erik; Kuijken, Konrad; Valentijn, Edwin A.; Tyson, J. Anthony; Wolff, Sidney 2002-01-01 OmegaCAM, a 16k × 16k-pixel wide-field optical camera, and the VLT Survey Telescope (VST) that is to host it, will constitute a major sky-surveying machine that becomes operational in 2004 at ESO's Paranal Observatory. It maps one square degree of sky with 0.21 arcsec sized pixels. Both individual 1. Developments in fiber-positioning technology for the WEAVE instrument at the William Herschel Telescope NARCIS (Netherlands) Schallig, Ellen; Lewis, Ian J.; Gilbert, James; Dalton, Gavin; Brock, Matthew; Abrams, Don Carlos; Middleton, Kevin; Aguerri, J. Alfonso L.; Bonifacio, Piercarlo; Carrasco, Esperanza; Trager, Scott C.; Vallenari, Antonella 2016-01-01 WEAVE is the next-generation wide-field optical spectroscopy facility for the William Herschel Telescope (WHT) on La Palma in the Canary Islands, Spain. It is a multi-object "pick-and-place" fibre-fed spectrograph with a 1000-fibre multiplex behind a new dedicated 2° prime focus corrector. The WEAVE 2. 4MOST: the 4-metre Multi-Object Spectroscopic Telescope project at preliminary design review NARCIS (Netherlands) de Jong, Roelof S.; Barden, Samuel C.; Bellido-Tirado, Olga; Brynnel, Joar G.; Frey, Steffen; Giannone, Domenico; Haynes, Roger; Johl, Diana; Phillips, Daniel; Schnurr, Olivier; Walcher, Jakob C.; Winkler, Roland; Ansorge, Wolfgang R.; Feltzing, Sofia; McMahon, Richard G.; Baker, Gabriella; Caillier, Patrick; Dwelly, Tom; Gaessler, Wolfgang; Iwert, Olaf; Mandel, Holger G.; Piskunov, Nikolai A.; Pragt, Johan H.; Walton, Nicholas A.; Bensby, Thomas; Bergemann, Maria; Chiappini, Cristina; Christlieb, Norbert; Cioni, Maria-Rosa L.; Driver, Simon; Finoguenov, Alexis; Helmi, Amina; Irwin, Michael J.; Kitaura, Francisco-Shu; Kneib, Jean-Paul; Liske, Jochen; Merloni, Andrea; Minchev, Ivan; Richard, Johan; Starkenburg, Else 2016-01-01 We present an overview of the 4MOST project at the Preliminary Design Review. 4MOST is a major new wide-field, high-multiplex spectroscopic survey facility under development for the VISTA telescope of ESO. 4MOST has a broad range of science goals, ranging from Galactic Archaeology and stellar physics 3. The Advanced Telescope for High Energy Astrophysics Science.gov (United States) Guainazzi, Matteo 2017-08-01 Athena (the Advanced Telescope for High Energy Astrophysics) is a next-generation X-ray observatory currently under study by ESA for launch in 2028. Athena is designed to address the Hot and Energetic Universe science theme, which poses two key questions: 1) How did ordinary matter evolve into the large-scale structures we see today? 2) How do black holes grow and shape the Universe? To address these topics Athena employs an innovative X-ray telescope based on Silicon Pore Optics technology to deliver extremely light weight and high throughput, while retaining excellent angular resolution. The mirror can be adjusted to focus onto one of two focal plane instruments: the X-ray Integral Field Unit (X-IFU), which provides spatially resolved, high-resolution spectroscopy, and the Wide Field Imager (WFI), which provides spectral imaging over a large field of view, as well as high time resolution and count-rate tolerance. Athena is currently in Phase A, and the study status will be reviewed, along with the scientific motivations behind the mission. 4.
Compressive hyperspectral time-resolved wide-field fluorescence lifetime imaging Science.gov (United States) Pian, Qi; Yao, Ruoyang; Sinsuebphon, Nattawut; Intes, Xavier 2017-07-01 Spectrally resolved fluorescence lifetime imaging and spatial multiplexing have offered information-content and collection-efficiency boosts in microscopy, but efficient implementations for macroscopic applications are still lacking. An imaging platform based on time-resolved structured light and hyperspectral single-pixel detection has been developed to perform quantitative macroscopic fluorescence lifetime imaging (MFLI) over a large field of view (FOV) and multiple spectral bands simultaneously. The system makes use of three digital micromirror device (DMD)-based spatial light modulators (SLMs) to generate spatial optical bases and reconstruct N by N images over 16 spectral channels, with a time-resolved capability (∼40 ps temporal resolution), using fewer than N^2 optical measurements. We demonstrate the potential of this new imaging platform by quantitatively imaging near-infrared (NIR) Förster resonance energy transfer (FRET) both in vitro and in vivo. The technique is well suited for quantitative hyperspectral lifetime imaging with high sensitivity and paves the way for many important biomedical applications. 5. Wide-field surface plasmon microscopy of nano- and microparticles: features, benchmarking, limitations, and bioanalytical applications Science.gov (United States) Nizamov, Shavkat; Scherbahn, Vitali; Mirsky, Vladimir M. 2017-05-01 Detection of nano- and micro-particles is an important task for chemical analytics, the food industry, biotechnology, environmental monitoring and many other fields of science and industry. For this purpose, a method based on the detection and analysis of minute signals in surface plasmon resonance images due to the adsorption of single nanoparticles was developed. This new technology allows real-time detection of the interaction of single nano- and micro-particles with the sensor surface. Adsorption of each nanoparticle leads to a characteristic diffraction image whose intensity depends on the size and chemical composition of the particle. The adsorption rate characterizes the volume concentration of nano- and micro-particles. The large monitored sensor surface area enables a high dynamic range in counting and a correspondingly high dynamic range on the concentration scale. Depending on the type of particles and the experimental conditions, the detection limit for aqueous samples can be below 1000 particles per microliter. For application of the method in complex media, nanoparticle images are discriminated from image perturbations due to matrix components. First, characteristic SPRM images of nanoparticles (templates) are collected in aqueous suspensions or spiked real samples. Then, nanoparticles in complex media are detected using template matching. The detection of various NPs in consumer products like cosmetics, mineral water, juices, and wines was shown at the sub-ppb level. The method can be applied for ultrasensitive detection and analysis of nano- and micro-particles of biological (bacteria, viruses, endosomes), biotechnological (liposomes, protein nanoparticles for drug delivery) or technical origin. 6. Simultaneous wall-shear-stress and wide-field PIV measurements in a turbulent boundary layer Science.gov (United States) Gomit, Guillaume; Fourrie, Gregoire; de Kat, Roeland; Ganapathisubramani, Bharathram 2015-11-01 Simultaneous particle image velocimetry (PIV) and hot-film shear-stress sensor measurements were performed to study the large-scale structures associated with shear-stress events in a flat-plate turbulent boundary layer at a high Reynolds number (Reτ ~ 4000). The PIV measurement was performed in a streamwise-wall-normal plane using an array of six high-resolution cameras (4 × 16 MP and 2 × 29 MP). The resulting field of view covers 8δ (where δ is the boundary layer thickness) in the streamwise direction and captures the entire boundary layer in the wall-normal direction. The spatial resolution of the measurement is approximately 70 wall units (1.8 mm), sampled every 35 wall units (0.9 mm). In association with the PIV setup, a spanwise array of 10 skin-friction sensors (spanning one δ) was used to capture the footprint of the large-scale structures. This combination of measurements allowed the analysis of the three-dimensional conditional structures in the boundary layer. In particular, from conditional averages, the 3D organisation of the wall-normal and streamwise velocity components (u and v) and the Reynolds shear stress (-u'v') related to low and high shear-stress events can be extracted. European Research Council Grant No-277472-WBT. 7. Curved sensors for compact high-resolution wide-field designs: prototype demonstration and optical characterization Science.gov (United States) Chambion, Bertrand; Gaschet, Christophe; Behaghel, Thibault; Vandeneynde, Aurélie; Caplet, Stéphane; Gétin, Stéphane; Henry, David; Hugot, Emmanuel; Jahn, Wilfried; Lombardo, Simona; Ferrari, Marc 2018-02-01 In recent years, a huge interest has grown in curved electronics, particularly for opto-electronic systems. Curved sensors help correct off-axis aberrations, such as Petzval field curvature and astigmatism, and bring significant optical and size benefits for imaging systems. In this paper, we first describe the advantages of curved sensors and the associated packaging process, applied to a 1/1.8'' format 1.3 Mpx global shutter CMOS sensor (Teledyne EV76C560) in its standard ceramic package, with spherical radii of curvature Rc = 65 mm and 55 mm. The mechanical limits of the die are discussed (finite element modelling and experiment), and the electro-optical performance is investigated. Then, based on the monocentric optical architecture, we propose a new compact, high-resolution design developed specifically for a curved image sensor, including optical optimization, tolerancing, assembly and optical tests. Finally, a functional prototype is presented through a benchmark approach and compared to an existing standard optical system with the same performance, showing a 2.5× reduction in length. The culmination of this work was a functional prototype demonstration by CEA-LETI during the Photonics West 2018 conference. All these experiments and optical results demonstrate the feasibility and high performance of systems with curved sensors. 8. New Material Transistor with Record-High Field-Effect Mobility among Wide-Band-Gap Semiconductors.
6. Simultaneous wall-shear-stress and wide-field PIV measurements in a turbulent boundary layer Science.gov (United States) Gomit, Guillaume; Fourrie, Gregoire; de Kat, Roeland; Ganapathisubramani, Bharathram 2015-11-01 Simultaneous particle image velocimetry (PIV) and hot-film shear stress sensor measurements were performed to study the large-scale structures associated with shear stress events in a flat plate turbulent boundary layer at a high Reynolds number (Reτ ~ 4000). The PIV measurement was performed in a streamwise-wall normal plane using an array of six high resolution cameras (4 × 16 MP and 2 × 29 MP). The resulting field of view covers 8 δ (where δ is the boundary layer thickness) in the streamwise direction and captures the entire boundary layer in the wall-normal direction. The spatial resolution of the measurement is approximately 70 wall units (1.8 mm), sampled every 35 wall units (0.9 mm). In association with the PIV setup, a spanwise array of 10 skin-friction sensors (spanning one δ) was used to capture the footprint of the large-scale structures. This combination of measurements allowed the analysis of the three-dimensional conditional structures in the boundary layer. Particularly, from conditional averages, the 3D organisation of the wall normal and streamwise velocity components (u and v) and the Reynolds shear stress (-u'v') related to low and high shear stress events can be extracted. European Research Council Grant No-277472-WBT.
7. Curved sensors for compact high-resolution wide-field designs: prototype demonstration and optical characterization Science.gov (United States) Chambion, Bertrand; Gaschet, Christophe; Behaghel, Thibault; Vandeneynde, Aurélie; Caplet, Stéphane; Gétin, Stéphane; Henry, David; Hugot, Emmanuel; Jahn, Wilfried; Lombardo, Simona; Ferrari, Marc 2018-02-01 In recent years, interest in curved electronics has grown rapidly, particularly for opto-electronic systems. Curved sensors help correct off-axis aberrations, such as Petzval field curvature and astigmatism, and bring significant optical and size benefits for imaging systems. In this paper, we first describe the advantages of a curved sensor and the associated packaging process, applied to a 1/1.8'' format 1.3 Mpx global shutter CMOS sensor (Teledyne EV76C560) in its standard ceramic package with a spherical radius of curvature Rc = 65 mm and 55 mm. The mechanical limits of the die are discussed (Finite Element Modelling and experimental), and electro-optical performances are investigated. Then, based on the monocentric optical architecture, we propose a new design, compact and with a high resolution, developed specifically for a curved image sensor, including optical optimization, tolerances, assembly and optical tests. Finally, a functional prototype is presented through a benchmark approach and compared to an existing standard optical system with the same performance and a ×2.5 reduction in length. This work culminated in a functional prototype demonstration by CEA-LETI during the Photonics West 2018 conference. All these experiments and optical results demonstrate the feasibility and high performance of systems with curved sensors.
8. New Material Transistor with Record-High Field-Effect Mobility among Wide-Band-Gap Semiconductors. Science.gov (United States) Shih, Cheng Wei; Chin, Albert 2016-08-03 At an ultrathin 5 nm thickness, we report a new high-mobility tin oxide (SnO2) metal-oxide-semiconductor field-effect transistor (MOSFET) exhibiting extremely high field-effect mobility values of 279 and 255 cm²/V·s at 145 and 205 °C, respectively. These values are the highest reported mobility values among all wide-band-gap semiconductors of GaN, SiC, and metal-oxide MOSFETs, and they also exceed those of silicon devices at the aforementioned elevated temperatures. For the first time among existing semiconductor transistors, a higher mobility value was measured at 45-205 °C than at 25 °C, which is due to the lower optical phonon scattering by the large SnO2 phonon energy. Moreover, the high on-current/off-current ratio of 4 × 10⁶ and the positive threshold voltage of 0.14 V at 25 °C are significantly better than those of a graphene transistor. This wide-band-gap SnO2 MOSFET exhibits high mobility in a 25-205 °C temperature range, a wide operating voltage of 1.5-20 V, and the ability to form on an amorphous substrate, rendering it an ideal candidate for multifunctional low-power integrated circuit (IC), display, and brain-mimicking three-dimensional IC applications.
9. Quantifying fire-wide carbon emissions in interior Alaska using field measurements and Landsat imagery Science.gov (United States) Rogers, B. M.; Veraverbeke, S.; Azzari, G.; Czimczik, C. I.; Holden, S. R.; Mouteva, G. O.; Sedano, F.; Treseder, K. K.; Randerson, J. T. 2014-08-01 Carbon emissions from boreal forest fires are projected to increase with continued warming and constitute a potentially significant positive feedback to climate change. The highest consistent combustion levels are reported in interior Alaska and can be highly variable depending on the consumption of soil organic matter. Here we present an approach for quantifying emissions within a fire perimeter using remote sensing of fire severity. Combustion from belowground and aboveground pools was quantified at 22 sites (17 black spruce and five white spruce-aspen) within the 2010 Gilles Creek burn in interior Alaska, constrained by data from eight unburned sites. We applied allometric equations and estimates of consumption to calculate carbon losses from aboveground vegetation. The position of adventitious spruce roots within the soil column, together with estimated prefire bulk density and carbon concentrations, was used to quantify belowground combustion. The differenced Normalized Burn Ratio (dNBR) exhibited a clear but nonlinear relationship with combustion that differed by forest type. We used a multiple regression model based on transformed dNBR and deciduous fraction to scale carbon emissions to the fire perimeter, and a Monte Carlo framework to assess uncertainty. Because of low-severity and unburned patches, mean combustion across the fire perimeter (1.98 ± 0.34 kg C m⁻²) was considerably less than within a defined core burn area (2.67 ± 0.40 kg C m⁻²) and the mean at field sites (2.88 ± 0.23 kg C m⁻²). These areas constitute a significant fraction of burn perimeters in Alaska but are generally not accounted for in regional-scale estimates. Although total combustion in black spruce was slightly lower than in white spruce-aspen forests, black spruce covered most of the fire perimeter (62%) and contributed the majority (67 ± 16%) of total emissions. Increases in spring albedo were found to be a viable alternative to dNBR for modeling emissions.
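A schematic version of the scaling step described in the record above (regression of site-level combustion on transformed dNBR and deciduous fraction, then a Monte Carlo over the fire perimeter) might look as follows. The coefficients, pixel values, and error model are all invented for illustration and are not the study's.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical field data: combustion (kg C m^-2) at 22 sampled sites,
# with transformed dNBR and deciduous fraction as predictors.
dnbr_t = rng.uniform(0.2, 1.0, 22)      # transformed dNBR (illustrative)
decid = rng.uniform(0.0, 0.6, 22)       # deciduous fraction
combustion = 1.0 + 2.0 * dnbr_t - 1.5 * decid + rng.normal(0, 0.2, 22)

# Ordinary least squares fit of the two-predictor model.
X = np.column_stack([np.ones_like(dnbr_t), dnbr_t, decid])
beta, *_ = np.linalg.lstsq(X, combustion, rcond=None)
resid_sd = np.std(combustion - X @ beta, ddof=3)

# Scale to every pixel of a synthetic fire perimeter and propagate the
# residual scatter with a Monte Carlo (coefficient uncertainty is
# ignored here for brevity; the study's framework is more complete).
pix_dnbr = rng.uniform(0.0, 1.0, 10_000)
pix_decid = rng.uniform(0.0, 0.6, 10_000)
Xp = np.column_stack([np.ones_like(pix_dnbr), pix_dnbr, pix_decid])
draws = np.array([
    np.mean(Xp @ beta + rng.normal(0, resid_sd, len(pix_dnbr)))
    for _ in range(1000)
])
print(f"mean combustion: {draws.mean():.2f} +/- {draws.std():.2f} kg C m^-2")
```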
10. Resolving fringe ambiguities of a wide-field Michelson interferometer using visibility measurements of a noncollimated laser beam. Science.gov (United States) Wan, Xiaoke; Wang, Ji; Ge, Jian 2009-09-10 An actively stabilized interferometer with a constant optical path difference is a key element in long-term astronomical observation, and resolving interference fringe ambiguities is important to produce high-precision results for the long term. We report a simple and reliable method of resolving fringe ambiguities of a wide-field Michelson interferometer by measuring the interference visibility of a noncollimated single-frequency laser beam. Theoretical analysis shows that the interference visibility is sensitive to a subfringe phase shift, and a wide range of beam arrangements is suitable for real implementation. In an experimental demonstration, a Michelson interferometer has an optical path difference of 7 mm and a converging monitoring beam has a numerical aperture of 0.045 with an incidence angle of 17 degrees. The resolution of visibility measurements corresponds to approximately 1/16 fringe in the interferometer phase shift. The fringe ambiguity-free region is extended over a range of approximately 100 fringes.
11. Early Science with the Large Millimeter Telescope: Detection of Dust Emission in Multiple Images of a Normal Galaxy at z > 4 Lensed by a Frontier Fields Cluster Energy Technology Data Exchange (ETDEWEB) Pope, Alexandra; Battisti, Andrew; Wilson, Grant W.; Calzetti, Daniela; Cybulski, Ryan; Giavalisco, Mauro; Kirkpatrick, Allison [Department of Astronomy, University of Massachusetts, Amherst, MA 01003 (United States); Montaña, Alfredo; Aretxaga, Itziar; Hughes, David [Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE), Luis Enrique Erro 1, Sta. Ma. Tonantzintla, 72840 Puebla (Mexico); Limousin, Marceau [Aix Marseille Univ, CNRS, LAM, Laboratoire d'Astrophysique de Marseille, Marseille (France); Marchesini, Danilo; Kado-Fong, Erin [Department of Physics and Astronomy, Tufts University, Medford, MA 02155 (United States); Alberts, Stacey [Steward Observatory, University of Arizona, 933 North Cherry Avenue, Tucson, AZ 85721 (United States); Avila-Reese, Vladimir [Instituto de Astronomía, Universidad Nacional Autónoma de México, A.P. 70-264, 04510, CDMX (Mexico); Bermejo-Climent, José Ramón [Departamento de Astrofísica, Universidad de La Laguna. Vía Láctea s/n, La Laguna 38200, Tenerife (Spain); Brammer, Gabriel [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Bravo-Alfaro, Hector [Departamento de Astronomia, Universidad de Guanajuato, Apdo. Postal 144, Guanajuato 36000 (Mexico); Chary, Ranga-Ram [Infrared Processing and Analysis Center, MS314-6, California Institute of Technology, Pasadena, CA 91125 (United States); Keller, Erica, E-mail: [email protected] [National Radio Astronomy Observatory, 520 Edgemont Road, Charlottesville, VA 22903 (United States); and others 2017-04-01 We directly detect dust emission in an optically detected, multiply imaged galaxy lensed by the Frontier Fields cluster MACSJ0717.5+3745. We detect two images of the same galaxy at 1.1 mm with the AzTEC camera on the Large Millimeter Telescope, leaving no ambiguity in the counterpart identification. This galaxy, MACS0717-Az9, is at z > 4 and the strong lensing model (μ = 7.5) allows us to calculate an intrinsic IR luminosity of 9.7 × 10¹⁰ L_⊙ and an obscured star formation rate of 14.6 ± 4.5 M_⊙ yr⁻¹.
The unobscured star formation rate from the UV is only 4.1 ± 0.3 M_⊙ yr⁻¹, which means the total star formation rate (18.7 ± 4.5 M_⊙ yr⁻¹) is dominated (75%–80%) by the obscured component. With an intrinsic stellar mass of only 6.9 × 10⁹ M_⊙, MACS0717-Az9 is one of only a handful of z > 4 galaxies at these lower masses that is detected in dust emission. This galaxy lies close to the estimated star formation sequence at this epoch. However, it does not lie on the dust obscuration relation (IRX-β) for local starburst galaxies and is instead consistent with the Small Magellanic Cloud attenuation law. This remarkable lower mass galaxy, showing signs of both low metallicity and high dust content, may challenge our picture of dust production in the early universe.
12. Comparing NEO Search Telescopes Science.gov (United States) Myhrvold, Nathan 2016-04-01 Multiple terrestrial and space-based telescopes have been proposed for detecting and tracking near-Earth objects (NEOs). Detailed simulations of the search performance of these systems have used complex computer codes that are not widely available, which hinders accurate cross-comparison of the proposals and obscures whether they have consistent assumptions. Moreover, some proposed instruments would survey infrared (IR) bands, whereas others would operate in the visible band, and differences among asteroid thermal and visible-light models used in the simulations further complicate like-to-like comparisons. I use simple physical principles to estimate basic performance metrics for the ground-based Large Synoptic Survey Telescope and three space-based instruments: Sentinel, NEOCam, and a Cubesat constellation. The performance is measured against two different NEO distributions, the Bottke et al. distribution of general NEOs, and the Veres et al. distribution of Earth-impacting NEOs. The results of the comparison show simplified relative performance metrics, including the expected number of NEOs visible in the search volumes and the initial detection rates expected for each system. Although these simplified comparisons do not capture all of the details, they give considerable insight into the physical factors limiting performance. Multiple asteroid thermal models are considered, including FRM, NEATM, and a new generalized form of FRM. I describe issues with how IR albedo and emissivity have been estimated in previous studies, which may render them inaccurate. A thermal model for tumbling asteroids is also developed and suggests that tumbling asteroids may be surprisingly difficult for IR telescopes to observe.
13. Low field magnetoresistance in a 2D topological insulator based on wide HgTe quantum well. Science.gov (United States) Olshanetsky, E B; Kvon, Z D; Gusev, G M; Mikhailov, N N; Dvoretsky, S A 2016-09-01 Low field magnetoresistance is experimentally studied in a two-dimensional topological insulator (TI) in both diffusive and quasiballistic samples fabricated on top of a wide (14 nm) HgTe quantum well. In all cases a pronounced quasi-linear positive magnetoresistance is observed, similar to that found previously in diffusive samples based on a narrow (8 nm) HgTe well. The experimental results are compared with the main existing theoretical models based on different types of disorder: sample edge roughness, nonmagnetic disorder in an otherwise coherent TI, and metallic puddles due to locally trapped charges that act like a local gate on the sample.
The quasiballistic samples with resistance close to the expected quantized values also show a positive low-field magnetoresistance, but with a pronounced admixture of mesoscopic effects.
14. The LOFAR phased array telescope system NARCIS (Netherlands) Gunst, André W.; Bentum, Marinus Jan 2010-01-01 The Low Frequency Array (LOFAR) is the largest telescope in the world operating at a frequency range from 30 to 240 MHz. LOFAR is the first radio telescope of its size which uses phased array principles to detect radio signals. More than 10,000 antennas are installed in the field. The antennas are
15. Wide Field-of-View Fluorescence Imaging with Optical-Quality Curved Microfluidic Chamber for Absolute Cell Counting Directory of Open Access Journals (Sweden) Mohiuddin Khan Shourav 2016-07-01 Field curvature and other aberrations are encountered inevitably when designing a compact fluorescence imaging system with a simple lens. Although multiple lens elements can be used to correct most such aberrations, doing so increases system cost and complexity. Herein, we propose a wide field-of-view (FOV) fluorescence imaging method with an unconventional optical-quality curved sample chamber that corrects the field curvature caused by a simple lens. Our optics simulations and proof-of-concept experiments demonstrate that a curved substrate with lens-dependent curvature can greatly reduce the distortion in an image taken with a conventional planar detector. Following the validation study, we designed a curved sample chamber that can contain a known amount of sample volume and fabricated it at reasonable cost using plastic injection molding. At a magnification factor of approximately 0.6, the curved chamber provides a clear view of approximately 119 mm², which is approximately two times larger than the aberration-free area of a planar chamber. Remarkably, a fluorescence image of microbeads in the curved chamber exhibits almost uniform intensity over the entire field even with a simple lens imaging system, whereas the distorted boundary region has much lower brightness than the central area in the planar chamber. The absolute count of white blood cells stained with a fluorescence dye was in good agreement with that obtained by a commercially available conventional microscopy system. Hence, a wide FOV imaging system with the proposed curved sample chamber would enable us to acquire an undistorted image of a large sample volume without requiring a time-consuming scanning process in point-of-care diagnostic applications.
16. Origins Space Telescope Science.gov (United States) Cooray, Asantha; Origins Space Telescope Study Team 2018-01-01 The Origins Space Telescope (OST) is the mission concept for the Far-Infrared Surveyor, a study in development by NASA in preparation for the 2020 Astronomy and Astrophysics Decadal Survey. Origins is planned to be a large aperture, actively-cooled telescope covering a wide span of the mid- to far-infrared spectrum. Its spectrographs will enable 3D surveys of the sky that will discover and characterize the most distant galaxies, the Milky Way, exoplanets, and the outer reaches of our Solar System. Origins will enable flagship-quality general observing programs led by the astronomical community in the 2030s. The Science and Technology Definition Team (STDT) would like to hear your science needs and ideas for this mission. The team can be contacted at [email protected].
This presentation will provide a summary of the OST STDT, our completed first mission concept and an introduction to the second concept that will be studied at the study center in 2018. This presentation will also summarize key science drivers and the key study milestones between 2018 and 2020.
17. The Near-Earth Asteroid Tracking (NEAT) Program: A Completely Automated System for Telescope Control, Wide-Field Imaging, and Object Detection Science.gov (United States) Pravdo, S. H.; Rabinowitz, D. L.; Helin, E. F.; Lawrence, K. J.; Bambery, R. J.; Clark, C. C.; Groom, S. L.; Levin, S.; Lorre, J.; Shaklan, S. B.; 1998-01-01 The Near-Earth Asteroid Tracking (NEAT) system operates autonomously at the Maui Space Surveillance Site on the summit of the extinct Haleakala Volcano Crater, Hawaii. The program began in December 1995 and continues with an observing run every month.
18. Science.gov (United States) Boulon, Carine; Blaise, Sophie; Lazareth, Isabelle; Le Hello, Claire; Pistorius, Marc-Antoine; Imbert, Bernard; Mangin, Marion; Sintes, Pierre; Senet, Patricia; Decamps-Le Chevoir, Joëlle; Tribout, Laurent; Carpentier, Patrick; Constans, Joël 2017-10-01 The aim of this work was to study inter- and intra-observer agreement for the diagnosis of the scleroderma pattern by wide-field capillaroscopy. Images were taken from 50 patients known to have SSc and 50 controls consulting for RP who did not have SSc. These images were rated simultaneously by 11 experienced vascular medicine physicians as scleroderma pattern or not. Two weeks later, 7 of the 11 observers again rated the same images. Inter-observer agreement was almost perfect between the 11 observers (κ 0.86 ± 0.01), and the proportion of concordant observations was 79% (70-87). When each observer was compared with the reference, agreement was also almost perfect: κ coefficient 0.92 ± 0.03 and proportion of concordant observations 79% (70-87). Intra-observer agreement was also almost perfect: median κ coefficient 0.94 (0.78-0.96) and median proportion of concordant observations 97% (89-98). Excellent inter- and intra-observer agreement was obtained in experienced vascular physicians for the diagnosis of the capillaroscopic landscape by wide-field nailfold capillary microscopy.
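For context on the agreement statistics quoted in the capillaroscopy record above, the sketch below computes Cohen's κ for two raters on synthetic binary ratings. The study pooled 11 observers, so its κ presumably comes from a multi-rater variant; this two-rater version is only meant to show numerically what "almost perfect" agreement (κ near 1) means.

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' binary ratings (illustrative)."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    po = np.mean(r1 == r2)                       # observed agreement
    p_yes = r1.mean() * r2.mean()                # chance both rate "scleroderma"
    p_no = (1 - r1.mean()) * (1 - r2.mean())     # chance both rate "normal"
    pe = p_yes + p_no                            # expected chance agreement
    return (po - pe) / (1 - pe)

# Synthetic example: 100 images, two raters each 95% accurate.
rng = np.random.default_rng(3)
truth = rng.integers(0, 2, 100)
rater1 = np.where(rng.random(100) < 0.95, truth, 1 - truth)
rater2 = np.where(rng.random(100) < 0.95, truth, 1 - truth)
print(f"kappa = {cohens_kappa(rater1, rater2):.2f}")  # close to 1: almost perfect
```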
19. Effects of spatial and spectral frequencies on wide-field functional imaging (WiFI) characterization of preclinical breast cancer models Science.gov (United States) Moy, Austin; Kim, Jae G.; Lee, Eva Y. H. P.; Choi, Bernard 2010-02-01 A common strategy to study breast cancer is the use of the preclinical model. These models provide a physiologically relevant and controlled environment in which to study both response to novel treatments and the biology of the cancer. Preclinical models, including the spontaneous tumor model and mammary window chamber model, are very amenable to optical imaging and to this end, we have developed a wide-field functional imaging (WiFI) instrument that is perfectly suited to studying tumor metabolism in preclinical models. WiFI combines two optical imaging modalities, spatial frequency domain imaging (SFDI) and laser speckle imaging (LSI). Our current WiFI imaging protocol consists of multispectral imaging in the near infrared (650-980 nm) spectrum, over a wide (7 cm x 5 cm) field of view. Using SFDI, the spatially-resolved reflectance of sinusoidal patterns projected onto the tissue is assessed, and optical properties of the tissue are determined, which are then used to extract tissue chromophore concentrations in the form of oxy-, deoxy-, and total hemoglobin concentrations, and percentage of lipid and water. In the current study, we employ Monte Carlo simulations of SFDI light propagation in order to characterize the penetration depth of light in both the spontaneous tumor model and mammary window chamber model. Preliminary results indicate that different spatial frequency and wavelength combinations have different penetration depths, suggesting a potential depth-sectioning capability of the SFDI component of WiFI.
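The depth-sectioning effect described above has a convenient diffusion-approximation intuition from the SFDI literature: the effective attenuation coefficient picks up a (2πf)² term at projected spatial frequency f, so higher frequencies probe shallower tissue. A minimal sketch with illustrative optical properties (not the record's Monte Carlo model):

```python
import numpy as np

# Diffusion-approximation estimate of SFDI penetration depth:
#   mu_eff(f_x) = sqrt(3*mua*(mua + mus_p) + (2*pi*f_x)**2)
# with mua = absorption and mus_p = reduced scattering coefficient.
mua, mus_p = 0.01, 1.0      # mm^-1, illustrative tissue-like values
for fx in (0.0, 0.1, 0.2):  # projected sinusoid spatial frequency (mm^-1)
    mu_eff = np.sqrt(3 * mua * (mua + mus_p) + (2 * np.pi * fx) ** 2)
    print(f"f_x = {fx:.1f} /mm -> penetration depth ~ {1 / mu_eff:.2f} mm")
```

With these numbers the estimated penetration depth falls from several millimetres at DC to under a millimetre at 0.2 mm⁻¹, which is the behaviour the abstract's Monte Carlo study quantifies in detail.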
20. Retinal pigment epithelium findings in patients with albinism using wide-field polarization-sensitive optical coherence tomography. Science.gov (United States) Schütze, Christopher; Ritter, Markus; Blum, Robert; Zotter, Stefan; Baumann, Bernhard; Pircher, Michael; Hitzenberger, Christoph K; Schmidt-Erfurth, Ursula 2014-11-01 To investigate pigmentation characteristics of the retinal pigment epithelium (RPE) in patients with albinism using wide-field polarization-sensitive optical coherence tomography compared with intensity-based spectral domain optical coherence tomography and fundus autofluorescence imaging. Five patients (10 eyes) with previously genetically diagnosed albinism and 5 healthy control subjects (10 eyes) were imaged by a wide-field polarization-sensitive optical coherence tomography system (scan angle: 40 × 40° on the retina), sensitive to melanin contained in the RPE, based on the polarization state of backscattered light. Conventional intensity-based spectral domain optical coherence tomography and fundus autofluorescence examinations were performed. Retinal pigment epithelium pigmentation was analyzed qualitatively and quantitatively based on depolarization assessed by polarization-sensitive optical coherence tomography. This study provided strong evidence that polarization-sensitive optical coherence tomography specifically images melanin in the RPE. Depolarization of light backscattered by the RPE in patients with albinism was reduced compared with normal subjects. Heterogeneous RPE-specific depolarization characteristics were observed in patients with albinism. The reduction of depolarization observed in the light backscattered by the RPE in patients with albinism corresponds to the expected decrease of RPE pigmentation. The degree of depigmentation of the RPE is possibly associated with visual acuity. The findings suggest that different albinism genotypes result in heterogeneous levels of RPE pigmentation. Polarization-sensitive optical coherence tomography showed a heterogeneous appearance of RPE pigmentation in patients with albinism depending on different genotypes.
1. Wide-Field Landers Temporary Keratoprosthesis in Severe Ocular Trauma: Functional and Anatomical Results after One Year Directory of Open Access Journals (Sweden) Katarzyna Nowomiejska 2015-01-01 Purpose. To evaluate longitudinal functional and anatomical results after combined pars plana vitrectomy (PPV) and penetrating keratoplasty (PKP) using a wide-field Landers intraoperative temporary keratoprosthesis (TKP) in patients with vitreoretinal pathology and corneal opacity due to severe ocular trauma. Material and Methods. Medical records of 12 patients who had undergone PPV/PKP/TKP due to severe eye trauma were analyzed. Functional (best-corrected visual acuity) and anatomic outcomes (clarity of the corneal graft, retinal attachment, and intraocular pressure) were assessed during the follow-up (mean 16 months). Results. Final visual acuities varied from NLP to CF at 2 m. Visual acuity improved in 7 cases, was unchanged in 4 eyes, and worsened in 1 eye. The corneal graft was transparent during the follow-up in 3 cases and graft failure was observed in 9 eyes. Silicone oil was used as a tamponade in all cases and the retina was reattached in 92% of cases. Conclusions. Combined PPV and PKP with the use of the wide-field Landers TKP allowed for surgical intervention in patients with vitreoretinal pathology coexisting with a corneal wound. Although the retina was attached in most of the cases, the corneal graft survived only in one-fourth of patients and final visual acuities were poor.
2. AWARE Wide Field View Science.gov (United States) 2016-04-29 [Fragmentary record: only section headings and pipeline labels survive extraction. The data-processing pipeline comprises vignette correction, sRGB conversion, exposure and sensor sensitivity normalization, image resizing and warping, and tone-mapping (OpenGL). The text also notes that, in addition to minimizing system size, weight, and power (SWaP), two major areas were identified for improvement over the previous AWARE camera, the first being image processing improvements to the overall data processing pipeline of the new system.]
3. HUBBLE SPACE TELESCOPE RESOLVES VOLCANOES ON IO Science.gov (United States) 2002-01-01 This picture is a composite of a black and white near infrared image of Jupiter and its satellite Io and a color image of Io at shorter wavelengths taken at almost the same time on March 5, 1994. These are the first images of a giant planet or its satellites taken by NASA's Hubble Space Telescope (HST) since the repair mission in December 1993. Io is too small for ground-based telescopes to see the surface details. The moon's angular diameter of one arc second is at the resolution limit of ground based telescopes. Many of these markings correspond to volcanoes that were first revealed in 1979 during the Voyager spacecraft flyby of Jupiter. Several of the volcanoes periodically are active because Io is heated by tides raised by Jupiter's powerful gravity. The volcano Pele appears as a dark spot surrounded by an irregular orange oval in the lower part of the image. The orange material has been ejected from the volcano and spread over a huge area. Though the volcano was first discovered by Voyager, the distinctive orange color of the volcanic deposits is a new discovery in these HST images. (Voyager missed it because its cameras were not sensitive to the near-infrared wavelengths where the color is apparent). The sulfur and sulfur dioxide that probably dominate Io's surface composition cannot produce this orange color, so the Pele volcano must be generating material with a more unusual composition, possibly rich in sodium. The Jupiter image, taken in near-infrared light, was obtained with HST's Wide Field and Planetary Camera in wide field mode. High altitude ammonia crystal clouds are bright in this image because they reflect infrared light before it is absorbed by methane in Jupiter's atmosphere. The most prominent feature is the Great Red Spot, which is conspicuous because of its high clouds. A cap of high-altitude haze appears at Jupiter's south pole. The Wide Field/Planetary Camera 2 was developed by the Jet Propulsion Laboratory and managed by the Goddard Space Flight Center.
4. Optical Space Telescope Assembly Data.gov (United States) National Aeronautics and Space Administration — The Optical Space Telescope Assembly (OSTA) task is to demonstrate the technology readiness of assembling large space telescopes on orbit in 2015. This task is an...
5. Virtual Telescope Alignment System Data.gov (United States) National Aeronautics and Space Administration — Next-generation space telescopes require two spacecraft to fly in a coordinated fashion in space forming a virtual telescope. Achieving and maintaining this precise...
6. Wide tunability and electron transfer in GaAs/AlGaAs quantum well photodetector by magnetic field Science.gov (United States) Yu, C. H.; Zhang, Bo; Luo, X. D.; Lu, Wei; Shen, X. C. 2017-05-01 One strategy for terahertz (THz) detection in GaAs/AlGaAs quantum well photodetectors under a magnetic field is reported. The THz detection begins to operate after the normally empty hydrogenic donor ground states in the AlGaAs barriers become populated by electrons transferred from the GaAs wells. Through the Landau quantization arising from a perpendicular magnetic field, we achieved the electron transfer from subband Landau levels in the GaAs wells at liquid helium temperature when the magnetic field reaches a certain threshold. One detector based on this strategy exhibited a dramatic frequency tunability range of 3.20-6.13 THz. Our photothermal ionization spectroscopy measurements show quantitative agreement with the theoretical calculation of intradonor transition energies, verifying that the strongly enhanced frequency tunability originates from the Zeeman behavior of transferred electrons in the AlGaAs barriers. This finding is useful for exploring magneto-optical effects and the realization of wide tunability in THz photodetectors.
7. MeerLICHT and BlackGEM: custom-built telescopes to detect faint optical transients Science.gov (United States) Bloemen, Steven; Groot, Paul; Woudt, Patrick; Klein Wolt, Marc; McBride, Vanessa; Nelemans, Gijs; Körding, Elmar; Pretorius, Margaretha L.; Roelfsema, Ronald; Bettonvil, Felix; Balster, Harry; Bakker, Roy; Dolron, Peter; van Elteren, Arjen; Elswijk, Eddy; Engels, Arno; Fender, Rob; Fokker, Marc; de Haan, Menno; Hagoort, Klaas; de Hoog, Jasper; ter Horst, Rik; van der Kevie, Giel; Kozłowski, Stanisław; Kragt, Jan; Lech, Grzegorz; Le Poole, Rudolf; Lesman, Dirk; Morren, Johan; Navarro, Ramon; Paalberends, Willem-Jelle; Paterson, Kerry; Pawłaszek, Rafal; Pessemier, Wim; Raskin, Gert; Rutten, Harrie; Scheers, Bart; Schuil, Menno; Sybilski, Piotr W. 2016-07-01 We present the MeerLICHT and BlackGEM telescopes, which are wide-field optical telescopes that are currently being built to study transient phenomena, gravitational wave counterparts and variable stars. The telescopes have 65 cm primary mirrors and a 2.7 square degree field-of-view. The MeerLICHT and BlackGEM projects have different science goals, but will use identical telescopes. The first telescope, MeerLICHT, will be commissioned at Sutherland (South Africa) in the first quarter of 2017. It will co-point with MeerKAT to collect optical data commensurate with the radio observations. After careful analysis of MeerLICHT's performance, three telescopes of the same type will be commissioned in La Silla (Chile) in 2018 to form phase I of the BlackGEM array. BlackGEM aims at detecting and characterizing optical counterparts of gravitational wave events detected by Advanced LIGO and Virgo.
In this contribution we present an overview of the science goals, the design and the status of the two projects.
8. The Allen Telescope Array Pi GHz Sky Survey. I. Survey description and static catalog results for the Boötes field NARCIS (Netherlands) Bower, G.C.; Croft, S.; Keating, G.; Whysong, D.; Ackermann, R.; Atkinson, S.; Backer, D.; Backus, P.; Barott, B.; Bauermeister, A.; Blitz, L.; Bock, D.; Bradford, T.; Cheng, C.; Cork, C.; Davis, M.; DeBoer, D.; Dexter, M.; Dreher, J.; Engargiola, G.; Fields, E.; Fleming, M.; Forster, R.J.; Gutierrez-Kraybill, C.; Harp, G.R.; Heiles, C.; Helfer, T.; Hull, C.; Jordan, J.; Jorgensen, S.; Kilsdonk, T.; Law, C.; van Leeuwen, J.; Lugten, J.; MacMahon, D.; McMahon, P.; Milgrome, O.; Pierson, T.; Randall, K.; Ross, J.; Shostak, S.; Siemion, A.; Smolek, K.; Tarter, J.; Thornton, D.; Urry, L.; Vitouchkine, A.; Wadefalk, N.; Weinreb, S.; Welch, J.; Werthimer, D.; Williams, P.K.G.; Wright, M. 2010-01-01 The Pi GHz Sky Survey (PiGSS) is a key project of the Allen Telescope Array. PiGSS is a 3.1 GHz survey of radio continuum emission in the extragalactic sky with an emphasis on synoptic observations that measure the static and time-variable properties of the sky. During the 2.5 year campaign, PiGSS
9. Measurement of linear energy transfer distribution at the CERN-EU high-energy reference field facility with the real-time radiation monitoring device III and its comparison with a dosimetric telescope CERN Document Server Doke, T; Kasuya, K; Kikuchi, J; Suzuki, S; Terasawa, K 2004-01-01 The distributions of linear energy transfer in water (LET_water) in front of the 80-cm-thick concrete side shield at the CERN-EU high-energy reference field (CERF) facility were measured with a Si detector telescope named real-time radiation monitoring device-III (RRMD-III), covered with and without a 1-cm-thick acrylic plate. In these measurements, a difference of about 20% in the absorbed dose between the two LET_water distributions was observed as a result of protons, deuterons and tritons recoiled by neutrons. The LET_water distribution obtained using RRMD-III without the 1-cm-thick acrylic plate is compared with lineal energy distributions obtained using the dosimetric telescope (DOSTEL) detector under the same conditions. These dose equivalents are also compared with that obtained using the HANDI TEPC, which is used as the standard at the CERF facility.
10. The Alignment System for a Medium-Sized Schwarzschild-Couder Telescope Prototype for the Cherenkov Telescope Array Science.gov (United States) Ribeiro, Deivid; Humensky, Brian; Nieto, Daniel; V Vassiliev Group in UCLA division of Astronomy and Astrophysics, P Kaaret Group at Iowa University Department of Physics and Astronomy, CTA Consortium 2016-01-01 The Cherenkov Telescope Array (CTA) is an international project for a next-generation ground-based gamma-ray observatory. CTA, conceived as an array of tens of imaging atmospheric Cherenkov telescopes comprising small, medium and large-size telescopes, aims to improve on the sensitivity of current-generation experiments by an order of magnitude and provide energy coverage from 20 GeV to more than 300 TeV. The Schwarzschild-Couder design is a candidate 9-m diameter medium-sized telescope featuring a novel aplanatic two-mirror optical design capable of a wide field of view with significantly improved imaging resolution as compared to the traditional Davies-Cotton optical design.
Achieving this imaging resolution imposes strict mirror alignment requirements that necessitate a sophisticated alignment system. This system uses a collection of position sensors between panels to determine the relative position of adjacent panels; each panel is mounted on a Stewart platform to allow motion control with six degrees of freedom, facilitating the alignment of the optical surface for the segmented primary and secondary mirrors. Alignments of the primary and secondary mirrors and the camera focal plane with respect to each other are performed utilizing a set of CCD cameras which image LEDs placed on the mirror panels to measure relative translation, and custom-built auto-collimators to measure relative tilt between the primary and secondary mirrors along the optical axis of the telescope. In this contribution we present the status of the development of the SC optical alignment system, soon to be materialized in a full-scale prototype SC medium-size telescope (pSCT) at the Fred Lawrence Whipple Observatory in southern Arizona.
11. ATST telescope mount: telescope or machine tool Science.gov (United States) Jeffers, Paul; Stolz, Günter; Bonomi, Giovanni; Dreyer, Oliver; Kärcher, Hans 2012-09-01 The Advanced Technology Solar Telescope (ATST) will be the largest solar telescope in the world, and will be able to provide the sharpest views ever taken of the solar surface. The telescope has a 4m aperture primary mirror; however, due to the off-axis nature of the optical layout, the telescope mount has proportions similar to an 8 meter class telescope. The technology normally used in this class of telescope is well understood in the telescope community and has been successfully implemented in numerous projects. The world of large machine tools has developed in a separate realm with similar levels of performance requirement but different boundary conditions. In addition, the competitive nature of private industry has encouraged development and usage of more cost-effective solutions, both in initial capital cost and through-life operating cost. Telescope mounts move relatively slowly, with requirements for high stability under external environmental influences such as wind buffeting. Large machine tools operate under high speed requirements coupled with high application of force through the machine but with little or no external environmental influences. The benefits of these parallel development paths and the ATST system requirements are being combined in the ATST Telescope Mount Assembly (TMA). The process of balancing the system requirements with new technologies is based on the experience of the ATST project team, Ingersoll Machine Tools who are the main contractor for the TMA, and MT Mechatronics who are their design subcontractors. This paper highlights a number of these proven technologies from the commercially driven machine tool world that are being introduced to the TMA design. Also discussed are the challenges of integrating and ensuring that the differences in application requirements are accounted for in the design.
12. A search for a distant companion to the sun with the wide-field infrared survey explorer Energy Technology Data Exchange (ETDEWEB) Luhman, K. L., E-mail: [email protected] [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Center for Exoplanets and Habitable Worlds, The Pennsylvania State University, University Park, PA 16802 (United States) 2014-01-20 I have used multi-epoch astrometry from the Wide-field Infrared Survey Explorer to perform a search for a distant companion to the Sun via its parallactic motion. I have not found an object of this kind down to W2 = 14.5. This limit corresponds to analogs of Saturn and Jupiter at 28,000 and 82,000 AU, respectively, according to models of the Jovian planets by Fortney and coworkers. Models of brown dwarfs by Burrows and coworkers predict fainter fluxes at a given mass for the age of the solar system, producing a closer distance limit of 26,000 AU for a Jupiter-mass brown dwarf. These constraints exclude most combinations of mass and separation at which a solar companion has been suggested to exist by various studies over the years.
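For intuition about the parallactic-motion search above, a short calculation converts the quoted companion separations into the apparent parallax a survey like WISE would see; this is standard small-angle arithmetic, not taken from the paper.

```python
AU_PER_PC = 206265.0  # astronomical units per parsec; parallax(") = 206265 / d_AU

for d_au in (26000, 28000, 82000):
    p = AU_PER_PC / d_au  # apparent shift (arcsec) for a 1 AU observing baseline
    print(f"companion at {d_au:5d} AU -> parallax of about {p:.1f} arcsec")
```

Even at 82,000 AU the parallactic wobble is a few arcseconds, far above WISE's astrometric precision, which is why multi-epoch astrometry can rule such companions out.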
13. Wide-field infrared survey explorer observations of young stellar objects in the Lynds 1509 dark cloud in Auriga Energy Technology Data Exchange (ETDEWEB) Liu, Wilson M.; McCollum, Bruce; Fajardo-Acosta, Sergio [Infrared Processing and Analysis Center, California Institute of Technology, MC 100-22, Pasadena, CA 91125 (United States); Padgett, Deborah L. [National Aeronautics and Space Administration, Goddard Space Flight Center, Code 665, Greenbelt, MD 20771 (United States); Terebey, Susan; Angione, John [Department of Physics and Astronomy, California State University, Los Angeles, CA 90032 (United States); Rebull, Luisa M. [Spitzer Science Center, California Institute of Technology, MC 314-6, Pasadena, CA 91125 (United States); Leisawitz, David, E-mail: [email protected] [National Aeronautics and Space Administration, Goddard Space Flight Center, Code 605, Greenbelt, MD 20771 (United States) 2014-06-01 The Wide-Field Infrared Survey Explorer (WISE) has uncovered a striking cluster of young stellar object (YSO) candidates associated with the L1509 dark cloud in Auriga. The WISE observations, at 3.4 μm, 4.6 μm, 12 μm, and 22 μm, show a number of objects with colors consistent with YSOs, and their spectral energy distributions suggest the presence of circumstellar dust emission, including numerous Class I, flat spectrum, and Class II objects. In general, the YSOs in L1509 are much more tightly clustered than YSOs in other dark clouds in the Taurus-Auriga star forming region, with Class I and flat spectrum objects confined to the densest aggregates, and Class II objects more sparsely distributed. We estimate a most probable distance of 485-700 pc, and possibly as far as the previously estimated distance of 2 kpc.
14. Wide-field time-correlated single photon counting (TCSPC) microscopy with time resolution below the frame exposure time Energy Technology Data Exchange (ETDEWEB) Hirvonen, Liisa M. [Department of Physics, King's College London, Strand, London WC2R 2LS (United Kingdom); Petrášek, Zdeněk [Max Planck Institute of Biochemistry, Department of Cellular and Molecular Biophysics, Am Klopferspitz 18, D-82152 Martinsried (Germany); Suhling, Klaus, E-mail: [email protected] [Department of Physics, King's College London, Strand, London WC2R 2LS (United Kingdom) 2015-07-01 Fast frame rate CMOS cameras in combination with photon counting intensifiers can be used for fluorescence imaging with single photon sensitivity at kHz frame rates. We show here how the phosphor decay of the image intensifier can be exploited for accurate timing of photon arrival well below the camera exposure time. This is achieved by taking ratios of the intensity of the photon events in two subsequent frames, and effectively allows wide-field TCSPC. This technique was used for measuring decays of the ruthenium compound Ru(dpp) with lifetimes as low as 1 μs with 18.5 μs frame exposure time, including in living HeLa cells, using around 0.1 μW excitation power. We speculate that by using an image intensifier with a faster phosphor decay to match a higher camera frame rate, photon arrival time measurements on the nanosecond time scale could well be possible.
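The two-frame ratio idea in the TCSPC record above inverts cleanly if one assumes an idealized mono-exponential phosphor decay whose remaining light is all collected in the second frame. Under those simplifying assumptions (the real phosphor response is more complicated), the arrival time follows directly from the intensity split:

```python
import numpy as np

tau = 50.0  # phosphor decay constant (us), illustrative value
T = 18.5    # frame exposure time (us), as quoted in the record

def arrival_from_ratio(i1, i2):
    """Recover photon arrival time within frame 1 from how the phosphor
    pulse splits across two frames (mono-exponential model):
    i2/(i1+i2) = exp(-(T - t)/tau)  =>  t = T + tau*ln(i2/(i1+i2))."""
    r = i2 / (i1 + i2)
    return T + tau * np.log(r)

# Forward model: photon at t_true; integrate the decay over each frame.
t_true = 7.3
i1 = 1.0 - np.exp(-(T - t_true) / tau)  # fraction collected in frame 1
i2 = np.exp(-(T - t_true) / tau)        # remainder, assumed all in frame 2
print(arrival_from_ratio(i1, i2))       # recovers ~7.3
```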
To compensate the unwanted variation in the OCT Doppler frequency of the system, the illumination frequency is phase-locked with an auxiliary laser interferometer which shares the reference arm with the OCT interferometer. The two-dimensional (2D) interference signals projected on the 2D array pixels of a 200 Hz CCD are accumulated during one imaging frame of the CCD. Then, each pixel of the CCD demodulates the OCT signal automatically. Owing to the proposed active frequency-locked illumination scheme, the demodulation does not depend on the variation in the axial scanning speed. Volumetric topograms or/and tomograms of several samples were achieved and rendered with a sensitivity of 58 dB at an axial scan speed of 0.805 mm s −1 18. VISTA: Pioneering New Survey Telescope Starts Work Science.gov (United States) 2009-12-01 A new telescope - VISTA (the Visible and Infrared Survey Telescope for Astronomy) - has just started work at ESO's Paranal Observatory and has made its first release of pictures. VISTA is a survey telescope working at infrared wavelengths and is the world's largest telescope dedicated to mapping the sky. Its large mirror, wide field of view and very sensitive detectors will reveal a completely new view of the southern sky. Spectacular new images of the Flame Nebula, the centre of our Milky Way galaxy and the Fornax Galaxy Cluster show that it is working extremely well. VISTA is the latest telescope to be added to ESO's Paranal Observatory in the Atacama Desert of northern Chile. It is housed on the peak adjacent to the one hosting the ESO Very Large Telescope (VLT) and shares the same exceptional observing conditions. VISTA's main mirror is 4.1 metres across and is the most highly curved mirror of this size and quality ever made - its deviations from a perfect surface are less than a few thousandths of the thickness of a human hair - and its construction and polishing presented formidable challenges. VISTA was conceived and developed by a consortium of 18 universities in the United Kingdom [1] led by Queen Mary, University of London and became an in-kind contribution to ESO as part of the UK's accession agreement. The telescope design and construction were project-managed by the Science and Technology Facilities Council's UK Astronomy Technology Centre (STFC, UK ATC). Provisional acceptance of VISTA was formally granted by ESO at a ceremony at ESO's Headquarters in Garching, Germany, attended by representatives of Queen Mary, University of London and STFC, on 10 December 2009 and the telescope will now be operated by ESO. "VISTA is a unique addition to ESO's observatory on Cerro Paranal. It will play a pioneering role in surveying the southern sky at infrared wavelengths and will find many interesting targets for further study by the Very Large Telescope, ALMA and 19. Very large array and green bank telescope observations of Orion B (NGC 2024, W12): photodissociation region properties and magnetic field Energy Technology Data Exchange (ETDEWEB) Roshi, D. Anish [National Radio Astronomy Observatory, Charlottesville and Green Bank, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Goss, W. M. [National Radio Astronomy Observatory, P.O. 
19. Very large array and green bank telescope observations of Orion B (NGC 2024, W12): photodissociation region properties and magnetic field Energy Technology Data Exchange (ETDEWEB) Roshi, D. Anish [National Radio Astronomy Observatory, Charlottesville and Green Bank, 520 Edgemont Road, Charlottesville, VA 22903 (United States); Goss, W. M. [National Radio Astronomy Observatory, P.O. Box O, Socorro, NM 87801 (United States); Jeyakumar, S., E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Departamento de Astronomía, Universidad de Guanajuato, AP 144, Guanajuato CP 36000 (Mexico) 2014-10-01 We present images of C110α and H110α radio recombination line (RRL) emission at 4.8 GHz and images of H166α, C166α, and X166α RRL emission at 1.4 GHz, observed toward the star-forming region NGC 2024. The 1.4 GHz image with angular resolution ∼70'' is obtained using Very Large Array (VLA) data. The 4.8 GHz image with angular resolution ∼17'' is obtained by combining VLA and Green Bank Telescope data in order to add the short and zero spacing data in the uv plane. These images reveal that the spatial distribution of C110α line emission is confined to the southern rim of the H II region close to the ionization front, whereas the C166α line emission is extended in the north-south direction across the H II region. The LSR velocity of the C110α line is 10.3 km s⁻¹, similar to that of lines observed from molecular material located at the far side of the H II region. This similarity suggests that the photodissociation region (PDR) responsible for C110α line emission is at the far side of the H II region. The LSR velocity of C166α is 8.8 km s⁻¹. This velocity is comparable with the velocity of molecular absorption lines observed from the foreground gas, suggesting that the PDR is at the near side of the H II region. Non-LTE models for carbon line-forming regions are presented. Typical properties of the foreground PDR are T_PDR ∼ 100 K, n_e^PDR ∼ 5 cm⁻³, n_H ∼ 1.7 × 10⁴ cm⁻³, and path length l ∼ 0.06 pc, and those of the far side PDR are T_PDR ∼ 200 K, n_e^PDR ∼ 50 cm⁻³, n_H ∼ 1.7 × 10⁵ cm⁻³, and l ∼ 0.03 pc. Our modeling indicates that the far side PDR is located within the H II region. We estimate the magnetic field strength in the foreground PDR to be 60 μG and that in the far side PDR to be 220 μG. Our field estimates compare well with the values obtained from OH Zeeman observations toward NGC 2024. The H166α spectrum
20. Swift Burst Alert Telescope (BAT) Instrument Response International Nuclear Information System (INIS) Parsons, A.; Barthelmy, S.; Cummings, J.; Gehrels, N.; Hullinger, D.; Krimm, H.; Markwardt, C.; Tueller, J.; Fenimore, E.; Palmer, D.; Sato, G.; Takahashi, T.; Nakazawa, K.; Okada, Y.; Takahashi, H.; Suzuki, M.; Tashiro, M. 2004-01-01 The Burst Alert Telescope (BAT), a large coded aperture instrument with a wide field-of-view (FOV), provides the gamma-ray burst triggers and locations for the Swift Gamma-Ray Burst Explorer. In addition to providing this imaging information, BAT will perform a 15 keV - 150 keV all-sky hard x-ray survey based on the serendipitous pointings resulting from the study of gamma-ray bursts, and will also monitor the sky for transient hard x-ray sources. For BAT to provide spectral and photometric information for the gamma-ray bursts, the transient sources and the all-sky survey, the BAT instrument response must be determined to increasingly greater accuracy. This paper describes the spectral models and the ground calibration experiments used to determine the BAT response to an accuracy suitable for gamma-ray burst studies.
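The coded-aperture imaging that BAT relies on (and that the Kanazawa-SAT3 record below implements per axis in FPGA) reconstructs the sky by cross-correlating detector counts with the mask pattern. A minimal 1-D cyclic sketch, with the random mask and all numbers purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random 1-D coded mask (1 = open, 0 = opaque), illustrative only.
n = 64
mask = rng.integers(0, 2, n)

# Sky with two point sources; detector counts are the sky "shadowgram"
# through the shifted mask plus Poisson noise on a flat background.
sky = np.zeros(n)
sky[10], sky[40] = 100.0, 60.0
shadow = np.array([np.roll(mask, s) @ sky for s in range(n)])
counts = rng.poisson(shadow + 5.0)  # 5.0 = flat background level

# Balanced correlation: correlate counts with the mean-subtracted mask;
# the zero-mean pattern cancels the flat background, and peaks appear
# near the source positions.
recon = np.array([(counts * np.roll(mask - mask.mean(), s)).sum()
                  for s in range(n)])
print(np.argsort(recon)[-2:])  # indices near 10 and 40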
2. Development of digital system for the wide-field x-ray imaging detector aboard Kanazawa-SAT3 Science.gov (United States) Kagawa, Yasuaki; Yonetoku, Daisuke; Sawano, Tatsuya; Mihara, Tatehiro; Kyutoku, Koutarou; Ikeda, Hirokazu; Yoshida, Kazuki; Ina, Masao; Ota, Kaichi; Suzuki, Daichi; Miyao, Kouga; Watanabe, Syouta; Hatori, Satoshi; Kume, Kyo; Mizushima, Satoshi; Hasegawa, Takashi 2017-08-01 We are planning to launch a microsatellite, Kanazawa-SAT3, at the end of FY2018 to localize X-ray transients associated with gravitational wave sources. Now we are testing a prototype model of the wide-field X-ray imaging detector named T-LEX (Transient Localization EXperiment). T-LEX comprises two orthogonally oriented sets of one-dimensional silicon strip detectors with coded aperture masks, and covers a field of view of more than 1 steradian in the energy range of 1-20 keV. Each dimension has 512 readout electrodes (1,024 channels in total), and they are read out with application specific integrated circuits (ASICs) controlled by two onboard FPGAs. Moreover, each FPGA calculates the cross correlation between the X-ray intensity and the mask patterns every 64 ms, builds histograms of light curves and energy spectra, and also serves as the telemetry/command interface to the mission CPU. In this paper, we report an overview of the digital electronics system. In particular, we focus on the high-speed FPGA imaging processor and demonstrate its performance as an X-ray imaging system.
3. Ultra-wide bore 900 MHz high-resolution NMR at the National High Magnetic Field Laboratory Science.gov (United States) Fu, R.; Brey, W. W.; Shetty, K.; Gor'kov, P.; Saha, S.; Long, J. R.; Grant, S. C.; Chekmenev, E. Y.; Hu, J.; Gan, Z.; Sharma, M.; Zhang, F.; Logan, T. M.; Brüschweller, R.; Edison, A.; Blue, A.; Dixon, I. R.; Markiewicz, W. D.; Cross, T. A. 2005-11-01 Access to an ultra-wide bore (105 mm) 21.1 T magnet makes possible numerous advances in NMR spectroscopy and MR imaging, as well as novel applications. This magnet was developed, designed, manufactured and tested at the National High Magnetic Field Laboratory, and on July 21, 2004 it was energized to 21.1 T. Commercial and unique homebuilt probes, along with a standard commercial NMR console, have been installed and tested with many science applications to develop this spectrometer as a user facility.
Solution NMR of membrane proteins with enhanced resolution, new pulse sequences for solid state NMR taking advantage of narrowed proton linewidths, and enhanced spatial resolution and contrast leading to improved animal imaging have been documented. In addition, it is demonstrated that spectroscopy of single-site ¹⁷O-labeled macromolecules in a hydrated lipid bilayer environment can be recorded in a remarkably short period of time. ¹⁷O spectra of aligned samples show the potential for using these data for orientational restraints and for characterizing unique details of cation binding to ion channels. The success of this NHMFL magnet illustrates the potential for using a similar magnet design as an outsert for high temperature superconducting insert coils to achieve an NMR magnet with a field >25 T.
4. New Young Star Candidates in the Taurus-Auriga Region as Selected from the Wide-field Infrared Survey Explorer Science.gov (United States) Rebull, L. M.; Koenig, X. P.; Padgett, D. L.; Terebey, S.; McGehee, P. M.; Hillenbrand, L. A.; Knapp, G. R.; Leisawitz, D.; Liu, W.; Noriega-Crespo, A.; Ressler, M. E.; Stapelfeldt, K. R.; Fajardo-Acosta, S.; Mainzer, A. 2011-09-01 The Taurus Molecular Cloud subtends a large solid angle on the sky, in excess of 250 deg². The search for legitimate Taurus members to date has been limited by sky coverage as well as the challenge of distinguishing members from field interlopers. The Wide-field Infrared Survey Explorer has recently observed the entire sky, and we take advantage of the opportunity to search for young stellar object (YSO) candidate Taurus members from a ~260 deg² region designed to encompass previously identified Taurus members. We use near- and mid-infrared colors to select objects with apparent infrared excesses and incorporate other catalogs of ancillary data to present a list of rediscovered Taurus YSOs with infrared excesses (taken to be due to circumstellar disks), a list of rejected YSO candidates (largely galaxies), and a list of 94 surviving candidate new YSO-like Taurus members. There is likely to be contamination lingering in this candidate list, and follow-up spectra are warranted.
5. Multimode simulations of a wide field of view double-Fourier far-infrared spatio-spectral interferometer Science.gov (United States) Bracken, Colm P.; Lightfoot, John; O'Sullivan, Creidhe; Murphy, J. Anthony; Donohoe, Anthony; Savini, Giorgio; Juanola-Parramon, Roser; on behalf of the FISICA Consortium 2018-01-01 In the absence of 50-m class space-based observatories, subarcsecond astronomy spanning the full far-infrared wavelength range will require space-based long-baseline interferometry. Long baselines of up to tens of meters are necessary to achieve the subarcsecond resolution demanded by the science goals. Also, practical observing times command a field of view toward an arcminute (1′) or so, not achievable with a single on-axis coherent detector. This paper is concerned with an application of the end-to-end instrument simulator PyFIInS, developed as part of the FISICA project under funding from the European Commission's seventh Framework Programme for Research and Technological Development (FP7). Predicted results of wide field of view spatio-spectral interferometry through simulations of a long-baseline, double-Fourier, far-infrared interferometer concept are presented and analyzed.
It is shown how such an interferometer, illuminated by a multimode detector, can recover a large field of view at subarcsecond angular resolution, resulting in image quality similar to that achieved by illuminating the system with an array of coherent detectors. Through careful analysis, the importance of accounting for the correct number of higher-order optical modes is demonstrated, as well as accounting for both orthogonal polarizations. Given that it is very difficult to manufacture waveguide and feed structures at sub-mm wavelengths, the larger multimode design is recommended over the array of smaller single-mode detectors. A brief note is provided in the conclusion of this paper addressing a more elegant solution to modeling far-infrared interferometers, which holds promise for improving the computational efficiency of the simulations presented here. 6. LSST Telescope and Optics Status Science.gov (United States) Gressler, William; Krabbendam, V. L.; Andrew, J. R.; Barr, J. D.; DeVries, J.; Hileman, E.; Liang, M.; Neill, D. R.; Sebag, J.; Stubbs, C.; Wiecha, O.; LSST Collaboration 2010-01-01 Progress continues on the final design of key elements of the LSST Telescope system thanks to private support. Rear-surface polishing of the unique 8.4 m M1/M3 monolithic mirror has been completed, with the subsequent attachment of support loadspreaders and hardpoints. The mirror will now undergo the final two-year planned effort of front-surface grinding and polishing. The LSST telescope cell design has matured to accommodate on-telescope mirror support, pointing, and thermal conditioning requirements in addition to off-telescope optical coating requirements. Performance and environmental testing of hardware components has commenced to assist with prototyping and final design selection of the M1/M3 mirror support system. LSST plans to design, fabricate, assemble, and deliver qualified subassemblies for integration of the M1/M3 and telescope cell in early 2012. Corning has completed and delivered the M2 ULE™ substrate. This 3.5 m diameter, 100 mm thick meniscus substrate has been acid-etched to passivate any stress features, and the convex surface has been finished via precision contour grinding to near net final shape. The substrate awaits construction funding to enable final optical polishing. The LSST Calibration System design utilizes a fiber-fed reflective projector system. An array of these projectors provides uniform illumination across the telescope field of view in tunable wavelength bands to calibrate the LSST camera detector elements. Finally, advancement continues on LSST support facility development via the award of an A&E contract to provide specific site design and development activities. 7. Active Surface Compensation for Large Radio Telescope Antennas Science.gov (United States) Wang, Congsi; Li, Haihua; Ying, Kang; Xu, Qian; Wang, Na; Duan, Baoyan; Gao, Wei; Xiao, Lan; Duan, Yuhu 2018-03-01 With the development of radio telescope antennas with large apertures, high gain, and wide frequency bands, compensation methods, whether mechanical or electronic, are essential to ensure the electrical performance of antennas that work in complex environments. Since traditional compensation methods can only adjust antenna pointing and not surface accuracy, and are thus limited in obtaining high surface precision and aperture efficiency, active surface adjustment has become an indispensable tool in this field.
Therefore, the development process of electrical-performance compensation methods for radio telescope antennas is introduced. Further, a series of analyses of the five key technologies of active surface adjustment is presented. Then, four typical large antennas that have been designed with active main reflector technology are presented and compared. Finally, future research directions and suggestions for reflector antenna compensation methods based on active surface adjustment are presented. 8. A telescope for observation from space of extreme lightnings in the upper atmosphere International Nuclear Information System (INIS) Nam, S.; Artikova, S.; Chung, T.; Garipov, G.; Jeon, J.A.; Jeong, S.; Jin, J.Y.; Khrenov, B.A.; Kim, J.E.; Kim, M.; Kim, Y.K.; Klimov, P.; Lee, J.; Lee, H.Y.; Na, G.W.; Oh, S.J.; Panasyuk, M.; Park, I.H.; Park, J.H.; Park, Y.-S. 2008-01-01 A new type of telescope with a wide field of view and fast zoom-in functions is introduced. Two kinds of MEMS (Micro-Electro-Mechanical Systems) micromirrors, digital and analog, are used for the reflectors of the telescope, placed at different focal lengths. We apply this technology to the observation from space of TLEs (Transient Luminous Events), extremely large transient sparks occurring in the upper atmosphere. TLEs are one type of important background to be understood for future space observation of UHECRs (Ultra-High Energy Cosmic Rays). The launch of the payload carried by a Russian microsatellite is foreseen for the middle of 2008. 9. Hubble Space Telescope and VLA observations of two optical continuum knots in the jet of 3C 380 NARCIS (Netherlands) O'Dea, CP; De Vries, W; Biretta, JA; Baum, SA We present Hubble Space Telescope Wide Field Planetary Camera 2 broadband red and linear ramp filter (isolating redshifted [O II] λ3727) observations and subarcsecond-resolution 15, 22, and 43 GHz VLA observations of the radio-loud quasar 3C 380. We confirm the report of de Vries et al. that... 10. Quantitative analysis of wide field-of-view and broadband quarter-wave plate based on metasurface Science.gov (United States) Chen, Yanjun; Guo, Zhe; Liu, Ke; Liu, Lihui; Li, Yanqiu 2018-01-01 As the numerical aperture (NA) of the projection objective increases continually and the exposure pattern feature size decreases gradually, polarization illumination has been introduced into lithography systems. It is therefore necessary to design a wide field-of-view (FOV) wave plate to eliminate the effect of oblique incident light on the phase delay of a traditional zero-order wave plate. A quarter-wave plate with a 20° FOV based on birefringent optical crystals has been designed in our laboratory by Dong et al. In order to obtain a wider FOV, we explore a previously reported Ag-patch ultrathin quarter-wave plate whose performance had not been analyzed by the finite-difference time-domain (FDTD) method. In this paper, we mainly investigate three performance aspects of the Ag-patch quarter-wave plate: FOV, achromatic band, and achromatic-band transmission. The simulation results indicate that, when the phase-difference error is controlled to within ±2°, (1) the FOV of the quarter-wave plate is ±29° at 632 nm; (2) the achromatic band ranges from 618 nm to 658 nm at normal incidence; and (3) the achromatic-band transmission ranges from 11% to 30%. Compared with a traditional wave plate made of birefringent crystals, the achromatic band and transmission are slightly lower, but the FOV of this quarter-wave plate is much wider. Thus, this Ag-patch nanoscale wide-FOV quarter-wave plate can be effectively used in high-NA lithography projection exposure systems to reduce the polarization aberration caused by oblique incidence of light.
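The figure of merit quoted above, a phase-difference error held to within ±2°, can be made concrete with two lines of Jones calculus. The following check is purely illustrative and contains none of the metasurface physics:

    # Jones-calculus check: an ideal quarter-wave plate (90-deg retardance)
    # turns 45-deg linear polarization into circular polarization.
    import numpy as np

    def retarder(delta_rad):
        # Jones matrix of a wave plate with its fast axis along x.
        return np.array([[1.0, 0.0], [0.0, np.exp(1j * delta_rad)]])

    linear_45 = np.array([1.0, 1.0]) / np.sqrt(2.0)
    out = retarder(np.pi / 2.0) @ linear_45

    print(np.abs(out))                                         # [0.707, 0.707]
    print(np.degrees(np.angle(out[1]) - np.angle(out[0])))     # 90.0

    # A retardance error inside the quoted +/-2 deg tolerance, e.g. 92 deg,
    # shifts the relative phase to ~92 deg and barely perturbs circularity.
    print(np.degrees(np.angle(retarder(np.radians(92.0)) @ linear_45)[1]))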
11. MODIS/Aqua Geolocation Fields 1km 5-Min 1A Wide Swath Subset along CloudSat V002 (MAC03S1) at GES DISC Data.gov (United States) National Aeronautics and Space Administration — This is the wide-swath MODIS/Aqua subset along the CloudSat field-of-view track. The goal of the wide-swath subset is to select and return MODIS data that are within... 12. Standoff Laser-Induced Breakdown Spectroscopy (LIBS) Using a Miniature Wide Field of View Spatial Heterodyne Spectrometer with Sub-Microsteradian Collection Optics. Science.gov (United States) Barnett, Patrick D; Lamsal, Nirmal; Angel, S Michael 2017-04-01 A spatial heterodyne spectrometer (SHS) is described for standoff laser-induced breakdown spectroscopy (LIBS) measurements. The spatial heterodyne LIBS spectrometer (SHLS) is a diffraction-grating-based interferometer with no moving parts that offers a very large field of view, high light throughput, and high spectral resolution in a small package. The field of view of the SHLS spectrometer is shown to be ~1° in standoff LIBS measurements. In the SHLS system described here, the collection aperture was defined by the 10 mm diffraction gratings in the SHS, and standoff LIBS measurements were made up to 20 m with no additional collection optics, corresponding to a collection solid angle of 0.2 μsr, or f/2000, and also using a small telescope to increase the collection efficiency. The use of a microphone was demonstrated to rapidly optimize laser focus for 20 m standoff LIBS measurements. 13. NEAT: A Microarcsec Astrometric Telescope Science.gov (United States) Shao, M.; Nemati, B.; Zhai, C.; Goullioud, R. 2011-01-01 NEAT, the Nearby Exo-Earth Astrometric Telescope, is a medium-small telescope, ~1 m in diameter, designed to make ultra-precise (<1 μas, microarcsecond) astrometric measurements of nearby stars in a ~1 hr observation. Four major error sources prevent normal space telescopes from obtaining accuracies close to 1 μas. Even with a small 1 m telescope, photon noise is usually not a problem for the bright nearby target stars. But in general, the reference stars are much fainter. Typically a field of view of ~0.5° diameter is needed to obtain enough bright reference stars. The NEAT concept uses a very simple but unusual design to avoid optically induced astrometric errors. The third source of error is the accuracy and stability of the focal plane. A 1 μas error over a ~2000 arcsec field of view implies the focal plane is accurate, or at least stable, to 5 parts in 10¹⁰ over the lifetime of the mission (~5 yr). The fourth class of error has to do with our knowledge of the PSF and how that PSF is sampled by an imperfect detector. A Nyquist-sampled focal plane would have >2 pixels per λ/D, and centroiding to 1 μas means centroiding to 10⁻⁵ pixels. This paper describes the mission concept, an overview of the technology needed to perform 1 μas astrometry with a small telescope, and how we overcome problems 1 and 2. A companion paper will describe the technical progress we have made in solving problems 3 and 4. 14. Blood vessel segmentation in modern wide-field retinal images in the presence of additive Gaussian noise.
Science.gov (United States) Asem, Morteza Modarresi; Oveisi, Iman Sheikh; Janbozorgi, Mona 2018-07-01 Retinal blood vessels can indicate serious health conditions, such as cardiovascular disease and stroke. Thanks to modern imaging technology, high-resolution images provide detailed information that helps analyze retinal vascular features before the symptoms associated with such conditions fully develop. Additionally, these retinal images can be used by ophthalmologists to facilitate diagnosis and eye-surgery procedures. A fuzzy noise-reduction algorithm was employed to enhance color images corrupted by Gaussian noise. The present paper proposes employing contrast-limited adaptive histogram equalization to enhance illumination and increase the contrast of retinal images captured by state-of-the-art cameras. Possessing directional properties, the multistructure-elements method can lead to high-performance edge detection. Therefore, multistructure-elements-based morphology operators are used to detect high-quality image ridges. Following this detection, the irrelevant ridges, which are not part of the vessel tree, are removed by morphological reconstruction operators, while attempting to keep thin vessels preserved. A combination of connected-components analysis (CCA) and a thresholding approach is further used to identify the ridges that correspond to vessels. The application of CCA can yield higher efficiency when it is applied locally rather than to the whole image. The significance of our work lies in the way several methods are effectively combined and in the originality of the database employed, making this work unique in the literature. Computer simulation results on wide-field retinal images with up to a 200° field of view testify to the efficacy of the proposed approach, with an accuracy of 0.9524.
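The processing chain just described (contrast enhancement, ridge detection, then connected-components filtering) can be sketched with off-the-shelf tools. Note that the authors' multistructure morphological operators are replaced here by scikit-image's Frangi ridge filter, so this is a stand-in for the idea, not their exact method, and the thresholds are arbitrary:

    # Rough sketch of the pipeline's main stages with scikit-image:
    # CLAHE -> ridge detection -> connected-components cleanup.
    import numpy as np
    from skimage import exposure, filters, measure, morphology

    def segment_vessels(green_channel, ridge_thresh=0.05, min_size=200):
        # green_channel: 2-D float array in [0, 1] (an assumption here).
        # 1. Contrast-limited adaptive histogram equalization.
        enhanced = exposure.equalize_adapthist(green_channel)
        # 2. Ridge (vessel-like structure) detection; dark vessels on a
        #    bright fundus are the Frangi filter's default target.
        ridges = filters.frangi(enhanced)
        binary = ridges > ridge_thresh
        # 3. Connected-components analysis: drop small non-vessel ridges.
        labeled = measure.label(binary)
        return morphology.remove_small_objects(labeled, min_size=min_size) > 0

    # Accuracy against a manual label mask, as quoted above (0.9524), would
    # then be: accuracy = np.mean(segmentation == ground_truth)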
15. Wide-field human photoreceptor morphological analysis using phase-resolved sensorless adaptive optics swept-source OCT (Conference Presentation) Science.gov (United States) Ju, Myeong Jin; Heisler, Morgan; Zawadzki, Robert J.; Bonora, Stefano; Jian, Yifan; Sarunic, Marinko V. 2017-02-01 Adaptive optics optical coherence tomography (AO-OCT) systems capable of 3D high-resolution imaging have been applied to posterior eye imaging in order to resolve fine morphological features in the retina. Human cone photoreceptors have been extensively imaged and studied for the investigation of retinal degeneration resulting in photoreceptor cell death. However, there are still limitations to conventional approaches to AO in the clinic, such as a relatively small field of view (FOV) and complexity in system design and operation. In this research, a recently developed phase-resolved sensorless AO swept-source OCT (SAO-SS-OCT) system, which is compact in size and easy to operate, is presented. Owing to its lens-based design, wide-field imaging can be performed up to 6° on the retina. A phase-stabilization unit was integrated with the OCT system. With the phase-stabilized OCT signal, we constructed a retinal micro-vasculature image using a phase-variance technique. The retinal vasculature image was used to align and average multiple OCT volumes acquired sequentially. The contrast-enhanced photoreceptor projection image was then extracted from the averaged volume and analyzed based on its morphological features through a novel photoreceptor structure evaluation algorithm. The retinas of twelve human research subjects (10 normal and 2 pathological cases) were measured in vivo. Quantitative parameters used for evaluating the cone photoreceptor mosaic, such as cell density, cell area, and mosaic regularity, are presented and discussed. The SAO-SS-OCT system and the proposed photoreceptor evaluation method have significant potential to reveal early-stage retinal diseases associated with retinal degeneration. 16. Using ISS Telescopes for Electromagnetic Follow-up of Gravitational Wave Detections of NS-NS and NS-BH Mergers Science.gov (United States) Camp, J.; Barthelmy, S.; Blackburn, L.; Carpenter, K. G.; Gehrels, N.; Kanner, J.; Marshall, F. E.; Racusin, J. L.; Sakamoto, T. 2013-01-01 The International Space Station offers a unique platform for rapid and inexpensive deployment of space telescopes. A scientific opportunity of great potential later this decade is the use of telescopes for the electromagnetic follow-up of ground-based gravitational wave detections of neutron star and black hole mergers. We describe this possibility for OpTIIX, an ISS technology demonstration of a 1.5 m diffraction-limited optical telescope assembled in space, and ISS-Lobster, a wide-field imaging X-ray telescope now under study as a potential NASA mission. Both telescopes will be mounted on pointing platforms, allowing rapid positioning to the source of a gravitational wave event. Electromagnetic follow-up rates of several per year appear likely, offering a wealth of complementary science on the mergers of black holes and neutron stars. 17. A Physical Model-based Correction for Charge Traps in the Hubble Space Telescope’s Wide Field Camera 3 Near-IR Detector and Its Applications to Transiting Exoplanets and Brown Dwarfs Science.gov (United States) Zhou, Yifan; Apai, Dániel; Lew, Ben W. P.; Schneider, Glenn 2017-06-01 The Hubble Space Telescope Wide Field Camera 3 (WFC3) near-IR channel is extensively used in time-resolved observations, especially for transiting exoplanet spectroscopy as well as brown dwarf and directly imaged exoplanet rotational phase mapping. The ramp effect is the dominant source of systematics in the WFC3 for time-resolved observations, which limits its photometric precision. Current mitigation strategies are based on empirical fits and require additional orbits to help the telescope reach a thermal equilibrium. We show that the ramp-effect profiles can be explained and corrected with high fidelity using charge-trapping theory. We also present a model for this process that can be used to predict and to correct charge-trap systematics. Our model is based on a very small number of parameters that are intrinsic to the detector. We find that these parameters are very stable between the different data sets, and we provide best-fit values. Our model is tested with more than 120 orbits (~40 visits) of WFC3 observations and is shown to provide near-photon-noise-limited corrections for observations made with both staring and scanning modes of transiting exoplanets as well as for staring-mode observations of brown dwarfs. After our model correction, the light curve of the first orbit in each visit has the same photometric precision as subsequent orbits, so data from the first orbit no longer need to be discarded. Near-IR arrays with the same physical characteristics (e.g., JWST/NIRCam) may also benefit from the extension of this model if similar systematic profiles are observed.
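The charge-trapping picture behind this correction can be reduced to a toy model: traps capture part of the generated charge until they fill, so the measured rate ramps up toward the true flux with a characteristic timescale. The trap count and capture rate below are illustrative placeholders, not the paper's fitted detector parameters:

    # Toy trap-filling model of the WFC3 ramp effect described above.
    import numpy as np

    def ramp_profile(t, flux, n_traps=2000.0, capture_rate=2e-5):
        # Trap-filling timescale shortens as the incident flux rises.
        tau = 1.0 / (capture_rate * flux)
        losses = (n_traps / tau) * np.exp(-t / tau)   # charge lost to traps
        return flux - losses                          # measured count rate

    t = np.linspace(0.0, 2700.0, 100)         # roughly one HST orbit of data
    rate = ramp_profile(t, flux=100.0)
    print(rate[0] / 100.0, rate[-1] / 100.0)  # ~0.96 ramping up toward 1.00

    # A correction divides the measured light curve by rate/flux, which is
    # why the first orbit of a visit no longer has to be thrown away.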
18. The great Melbourne telescope CERN Document Server Gillespie, Richard 2011-01-01 Erected at Melbourne Observatory in 1869, the telescope was the second largest in the world, designed to explore the nature of the nebulae in the southern skies. Richard Gillespie, head of the History and Technology department at the Melbourne Museum, has written an entertaining account of the telescope's extraordinary history, telling the story through an amazing cast of characters whose lives intersected with the telescope. 19. Development of a Data Reduction Algorithm for Optical Wide Field Patrol (OWL) II: Improving Measurement of Lengths of Detected Streaks Science.gov (United States) Park, Sun-Youp; Choi, Jin; Roh, Dong-Goo; Park, Maru; Jo, Jung Hyun; Yim, Hong-Suh; Park, Young-Sik; Bae, Young-Ho; Park, Jang-Hyun; Moon, Hong-Kyu; Choi, Young-Jun; Cho, Sungki; Choi, Eun-Jung 2016-09-01 As described in the previous paper (Park et al. 2013), the detector subsystem of Optical Wide-field Patrol (OWL) provides many observational data points for a single artificial satellite or piece of space debris in the form of small streaks, using a chopper system and a time tagger. The position and the corresponding time data are matched by assuming that the length of a streak on the CCD frame is proportional to the duration of the exposure during which the chopper blades do not obscure the CCD window. In the previous study, however, the length was measured using the diagonal of the rectangle of the image area containing the streak; the results were quite ambiguous and inaccurate, allowing possible matching errors between position and time data. Furthermore, because only one (position, time) data point is created from one streak, the efficiency of the observation decreases. To define the length of a streak correctly, it is important to locate the endpoints of the streak. In this paper, a method using a differential convolution mask pattern is tested. This method can be used to obtain the positions where the pixel values change sharply. These endpoints can be regarded as directly detected positional data, and the number of data points is doubled by this result.
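The endpoint-location step is easiest to see in one dimension: correlating the intensity profile along a streak with a differential mask yields extrema at the two ends. The profile, noise level, and kernel below are invented for illustration:

    # Differential-convolution endpoint detection on a synthetic streak.
    import numpy as np

    profile = np.zeros(200)
    profile[60:140] = 1.0                    # streak spans pixels 60..139
    profile += np.random.default_rng(1).normal(0.0, 0.05, 200)

    kernel = np.array([-1.0, -1.0, 0.0, 1.0, 1.0])   # differential mask
    # Correlate (convolve with the reversed kernel): sharp rises give a
    # positive peak, sharp falls a negative one.
    response = np.convolve(profile, kernel[::-1], mode="same")

    start, end = response.argmax(), response.argmin()
    print(start, end)                        # close to 60 and 140

    # The streak length (end - start) is what gets matched against the
    # chopper's open-window duration to pair positions with times.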
20. In vivo wide-field multispectral dosimeter for use in ALA-PpIX based photodynamic therapy of skin Science.gov (United States) LaRochelle, Ethan P. M.; Davis, Scott C.; de Souza, Ana Luiza Ribeiro; Pogue, Brian W. 2017-02-01 Photodynamic therapy (PDT) for actinic keratoses (AK) using aminolevulinic acid (ALA) is an FDA-approved treatment that is generally effective, yet response rates vary. The origin of the variability is not well characterized, but may be related to inter-patient variability in the production of protoporphyrin IX (PpIX). While fiber-based point-probe systems provide a method for measuring PpIX production, these measurements have demonstrated large spatial and inter-operator variability. Thus, in an effort to improve patient-specific dosimetry and treatment, it is important to develop a robust system that accounts for spatial variability and reduces the chance of operator errors. To address this need, a wide-field multispectral imaging system was developed that is capable of quantifying maps of PpIX in both liquid phantoms and in vivo experiments, focusing on weak light signals with high sensitivity. The system uses both red and blue excitation to elicit a fluorescent response at varying skin depths. A ten-position filter wheel with bandpass filters ranging from 635 nm to 710 nm is used to capture images along the emission band. A linear least-squares spectral fitting algorithm provides the ability to decouple background autofluorescence from PpIX fluorescence, which has improved the system sensitivity by an order of magnitude, detecting nanomolar PpIX concentrations in liquid phantoms in the presence of 2% whole blood and 2% Intralipid. 1. Multichannel wide-field microscopic FRET imaging based on simultaneous spectral unmixing of excitation and emission spectra. Science.gov (United States) DU, M; Mai, Z; Yang, F; Lin, F; Wei, L; Chen, T 2018-01-01 Simultaneous spectral unmixing of excitation and emission spectra (ExEm unmixing) has an inherent ability to resolve the spectral crosstalk between donor and acceptor, in both excitation and emission, that poses the two key issues of quantitative fluorescence resonance energy transfer (FRET) measurement, without additional corrections. We here set up a filter-based multichannel wide-field microscope for ExEm-unmixing-based FRET imaging (m-ExEm-spFRET) with a constant system correction factor (f_sc) for a stable system. We performed m-ExEm-spFRET with four- and two-wavelength excitation, respectively, on our system to quantitatively image single living cells expressing FRET tandem constructs, and obtained accurate FRET efficiency (E) and acceptor-to-donor concentration ratio (R_C) values. We also performed m-ExEm-spFRET imaging of single living cells coexpressing CFP-Bax and YFP-Bax, and found that E was about 0 for control cells and about 28% for staurosporine-treated cells when R_C was larger than 1, indicating that staurosporine induced significant oligomerisation.
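Both this FRET unmixing and the PpIX fitting in the preceding entry reduce, at each pixel, to expressing a measured spectrum as a linear combination of known reference spectra and solving by ordinary least squares. A minimal sketch with synthetic Gaussian spectra, not real fluorophore references:

    # Linear least-squares spectral unmixing on synthetic data.
    import numpy as np

    wavelengths = np.linspace(600.0, 720.0, 10)   # e.g. ten filter bands

    def gaussian(center, width=15.0):
        return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

    ppix = gaussian(635.0) + 0.2 * gaussian(705.0)  # PpIX-like, with shoulder
    autofluor = gaussian(660.0, width=40.0)         # broad background shape

    A = np.column_stack([ppix, autofluor])          # basis matrix (bands x 2)
    measured = 0.8 * ppix + 1.5 * autofluor
    measured += np.random.default_rng(2).normal(0.0, 0.01, measured.size)

    coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
    print(coeffs)                                   # approximately [0.8, 1.5]

Fitting the full spectral shape, rather than reading a single band, is what lets the background autofluorescence be separated out instead of contaminating the PpIX estimate.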
2. THE LOW-FREQUENCY CHARACTERISTICS OF PSR J0437–4715 OBSERVED WITH THE MURCHISON WIDE-FIELD ARRAY Energy Technology Data Exchange (ETDEWEB) Bhat, N. D. R.; Ord, S. M.; Tremblay, S. E.; Tingay, S. J.; Oronsaye, S.; Emrich, D. [International Centre for Radio Astronomy Research, Curtin University, Bentley, WA 6102 (Australia); Deshpande, A. A. [Raman Research Institute, Bangalore 560080 (India); Van Straten, W.; Briggs, F. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO), Curtin University, Bentley, WA 6102 (Australia); Bernardi, G. [Square Kilometre Array South Africa, 3rd Floor, The Park, Park Road, Pinelands, 7405 (South Africa); Bowman, J. D. [School of Earth and Space Exploration, Arizona State University, Tempe, AZ 85287 (United States); Cappallo, R. J.; Corey, B. E. [MIT Haystack Observatory, Westford, MA 01886 (United States); Goeke, R.; Hewitt, J. N. [Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Greenhill, L. J.; Kasper, J. C. [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); Hazelton, B. J. [Department of Physics, University of Washington, Seattle, WA 98195 (United States); Johnston-Hollitt, M. [School of Chemical and Physical Sciences, Victoria University of Wellington, Wellington 6140 (New Zealand); Kaplan, D. L. [Department of Physics, University of Wisconsin-Milwaukee, Milwaukee, WI 53201 (United States); and others 2014-08-20 We report on the detection of the millisecond pulsar PSR J0437–4715 with the Murchison Wide-field Array (MWA) at a frequency of 192 MHz. Our observations show rapid modulations of pulse intensity in time and frequency that arise from diffractive scintillation effects in the interstellar medium (ISM), as well as prominent drifts of intensity maxima in the time-frequency plane that arise from refractive effects. Our analysis suggests that the scattering screen is located at a distance of ~80-120 pc from the Sun, in disagreement with a recent claim that the screen is closer (~10 pc). Comparisons with higher-frequency data from Parkes reveal a dramatic evolution of the pulse profile with frequency, with the outer conal emission becoming comparable in strength to that from the core and inner conal regions. As well as demonstrating the high-time-resolution science capabilities currently possible with the MWA, our observations underscore the potential to conduct low-frequency investigations of timing-array millisecond pulsars, which may lead to increased sensitivity in the detection of nanohertz gravitational waves via the accurate characterization of ISM effects. 3. THE SIZE EVOLUTION OF PASSIVE GALAXIES: OBSERVATIONS FROM THE WIDE-FIELD CAMERA 3 EARLY RELEASE SCIENCE PROGRAM International Nuclear Information System (INIS) Ryan, R. E. Jr.; McCarthy, P. J.; Cohen, S. H.; Rutkowski, M. J.; Mechtley, M. R.; Windhorst, R. A.; Yan, H.; Hathi, N. P.; Koekemoer, A. M.; Bond, H. E.; Bushouse, H.; O'Connell, R. W.; Balick, B.; Calzetti, D.; Crockett, R. M.; Disney, M.; Dopita, M. A.; Frogel, J. A.; Hall, D. N. B.; Holtzman, J. A. 2012-01-01 We present the size evolution of passively evolving galaxies at z ~ 2 identified in Wide-Field Camera 3 imaging from the Early Release Science program. Our sample was constructed using an analog to the passive BzK galaxy selection criterion, which isolates galaxies with little or no ongoing star formation at z ≳ 1.5. We identify 30 galaxies in ~40 arcmin² to H_obs ~ ... The most massive galaxies (M* ~ 10¹¹ M_⊙) undergo the strongest evolution from z ~ 2 to the present. Parameterizing the size evolution as (1 + z)^(-α), we find a tentative scaling of α ≈ (-0.6 ± 0.7) + (0.9 ± 0.4) log(M*/10⁹ M_⊙), where the relatively large uncertainties reflect the poor sampling in stellar mass due to the low number of high-redshift systems. We discuss the implications of this result for the redshift evolution of the M*-R_e relation for red galaxies. 4. Towards an ultra-thin medical endoscope: multimode fibre as a wide-field image transferring medium Science.gov (United States) 2018-03-01 Multimode optical fibres are attractive for biomedical and industrial applications such as endoscopy because of the small cross section and high imaging resolution they can provide in comparison to widely used fibre bundles. However, an image is randomly scrambled by propagation through a multimode fibre. Even though the scrambling is unpredictable, it is deterministic, and therefore it can be reversed. To unscramble the image, we treat the multimode fibre as a linear, disordered scattering medium. To calibrate, we scan a focused beam of coherent light over thousands of different beam positions at the distal end and record the complex fields at the proximal end of the fibre. In this way, the input-output response of the system is determined, which then allows computational reconstruction of reflection-mode images. However, there remains the problem of illuminating the tissue via the fibre while avoiding back reflections from the proximal face. To avoid this drawback, we provide here the first preliminary confirmation that an image can be transferred through a 2×2 fibre coupler, with the sample at its distal port interrogated in reflection. Light is injected into one port for illumination and then collected from a second port for imaging.
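The calibration described in this entry treats the fibre as a fixed linear map, measured one focused input spot at a time; once the transmission matrix is known, a pseudo-inverse undoes the scrambling. A toy version with a random unitary standing in for the real fibre:

    # Transmission-matrix unscrambling in miniature. A random unitary T
    # plays the role of the fibre's mode-mixing map; the real calibration
    # would measure T column by column, as described above.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 64                                    # number of spatial modes

    T, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))

    image = rng.random(n)                     # object at the distal end
    speckle = T @ image                       # scrambled field we detect

    T_inv = np.linalg.pinv(T)                 # invert the calibrated map
    recovered = T_inv @ speckle
    print(np.allclose(np.abs(recovered), image))   # True in this noise-free toy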
5. Seismic Imager Space Telescope Science.gov (United States) Sidick, Erkin; Coste, Keith; Cunningham, J.; Sievers, Michael W.; Agnes, Gregory S.; Polanco, Otto R.; Green, Joseph J.; Cameron, Bruce A.; Redding, David C.; Avouac, Jean Philippe; 2012-01-01 A concept has been developed for a geostationary seismic imager (GSI), a space telescope in geostationary orbit above the Pacific coast of the Americas that would provide movies of many large earthquakes occurring in the area from southern Chile to southern Alaska. The GSI movies would cover a field of view as long as 300 km, at a spatial resolution of 3 to 15 m and a temporal resolution of 1 to 2 Hz, which is sufficient for accurate measurement of surface displacements and photometric changes induced by seismic waves. Computer processing of the movie images would exploit these dynamic changes to accurately measure the rapidly evolving surface waves and surface ruptures as they happen. These measurements would provide key information to advance the understanding of the mechanisms governing earthquake ruptures, and the propagation and arrest of damaging seismic waves. The GSI operational strategy is to react to earthquakes detected by ground seismometers, slewing the satellite to point at the epicenters of earthquakes above a certain magnitude. Some of these earthquakes will be foreshocks of larger earthquakes; these will be observed, as the spacecraft would have been pointed in the right direction. This strategy was tested against the historical record for the Pacific coast of the Americas, from 1973 until the present. Based on the seismicity recorded during this time period, a GSI mission with a lifetime of 10 years could have been in position to observe at least 13 (22 on average) earthquakes of magnitude larger than 6, and at least one (2 on average) earthquake of magnitude larger than 7. A GSI would provide data unprecedented in its extent and temporal and spatial resolution. It would provide this data for some of the world's most seismically active regions, and do so better and at a lower cost than could be done with ground-based instrumentation. A GSI would revolutionize the understanding of earthquake dynamics, perhaps leading ultimately to effective warning. 6. Science.gov (United States) Bellini, A.; Bedin, L. R. 2010-07-01 High-precision astrometry requires an accurate geometric-distortion solution. In this work, we present an average correction for the blue camera of the Large Binocular Telescope which enables a relative astrometric precision of ~15 mas for the B_Bessel and V_Bessel broad-band filters. The result of this effort is used in two companion papers: the first to measure the absolute proper motion of the open cluster M 67 with respect to the background galaxies; the second to decontaminate the color-magnitude diagram of M 67 from field objects, enabling the study of the end of its white dwarf cooling sequence. Many other applications might find this distortion correction useful. Based on data acquired using the Large Binocular Telescope (LBT) at Mt. Graham, Arizona, under the Commissioning of the Large Binocular Blue Camera. The LBT is an international collaboration among institutions in the United States, Italy and Germany.
LBT Corporation partners are: The University of Arizona on behalf of the Arizona university system; Istituto Nazionale di Astrofisica, Italy; LBT Beteiligungsgesellschaft, Germany, representing the Max-Planck Society, the Astrophysical Institute Potsdam, and Heidelberg University; The Ohio State University; and The Research Corporation, on behalf of The University of Notre Dame, University of Minnesota and University of Virginia. Visiting Ph.D. Student at STScI under the “2008 graduate research assistantship” program. 7. Agreement in Cone Density Derived from Gaze-Directed Single Images Versus Wide-Field Montage Using Adaptive Optics Flood Illumination Ophthalmoscopy. Science.gov (United States) Chew, Avenell L; Sampson, Danuta M; Kashani, Irwin; Chen, Fred K 2017-12-01 We compared cone density measurements derived from the center of gaze-directed single images with reconstructed wide-field montages from the rtx1 adaptive optics (AO) retinal camera. A total of 29 eyes from 29 healthy subjects were imaged with the rtx1 camera. Of 20 overlapping AO images acquired, 12 (at 3.2°, 5°, and 7°) were used for calculating gaze-directed cone densities. Wide-field AO montages were reconstructed, and cone densities were measured at the corresponding 12 loci as determined by field projection relative to the foveal center aligned to the foveal dip on optical coherence tomography. Limits of agreement in cone density measurement between single AO images and wide-field AO montages were calculated. Cone density measurements failed in one or more gaze directions or retinal loci in up to 58% and 33% of the subjects using single AO images or wide-field AO montages, respectively. Although there were no significant overall differences between cone densities derived from single AO images and wide-field AO montages at any of the 12 gazes and locations (P = 0.01-0.65), the limits of agreement between the two methods ranged from as narrow as -2200 to +2600 to as wide as -4200 to +3800 cones/mm². Cone density measurement using the rtx1 AO camera is feasible with both methods. Local variation in image quality and altered visibility of cones after generating montages may contribute to the discrepancies. Cone densities from single AO images are not interchangeable with wide-field montage-derived measurements. 8. OAJ 2.6m survey telescope: optical alignment and on-sky evaluation of IQ performances Science.gov (United States) Lousberg, Gregory P.; Bastin, Christian; Moreau, Vincent; Pirnay, Olivier; Flebus, Carlo; Chueca, Sergio; Iñiguez, César; Ederoclite, Alessandro; Ramió, Héctor V.; Cenarro, A. Javier 2016-08-01 AMOS has recently completed the alignment campaign of the 2.6 m telescope for the Observatorio Astrofísico de Javalambre (OAJ). AMOS developed an innovative alignment technique for wide-field-of-view telescopes that has been successfully implemented on the OAJ 2.6 m telescope with the active support of the team at CEFCA (Centro de Estudios de Física del Cosmos de Aragón). The alignment relies on two fundamental techniques: (1) wavefront-curvature sensing (WCS) for the evaluation of telescope aberrations at arbitrary locations in the focal plane, and (2) the coma-free point method for the adjustment of the position of the secondary mirror (M2) and of the focal plane (FP).
The alignment campaign unfolds in three steps: (a) analysis of the repeatability of the WCS measurements, (b) assessment of the sensitivity of the telescope wavefront error to M2 and FP position adjustments, and (c) optical alignment of the telescope. At the end of the campaign, seeing-limited performance was demonstrated across the complete focal plane. With the help of the CEFCA team, the image quality of the telescope was investigated with a lucky-imaging method. Image sizes of less than 0.3 arcsec FWHM are obtained, and this excellent image quality is observed over the complete focal plane. 9. 3D galaxy clustering with future wide-field surveys: Advantages of a spherical Fourier-Bessel analysis Science.gov (United States) Lanusse, F.; Rassat, A.; Starck, J.-L. 2015-06-01 Context. Upcoming spectroscopic galaxy surveys are extremely promising for addressing the major challenges of cosmology, in particular understanding the nature of the dark universe. The strength of these surveys, naturally described in spherical geometry, comes from their unprecedented depth and width, but an optimal extraction of their three-dimensional information is of utmost importance to best constrain the properties of the dark universe. Aims: Although there is theoretical motivation, and there are novel tools, to explore these surveys using the 3D spherical Fourier-Bessel (SFB) power spectrum of galaxy number counts C_ℓ(k,k'), most survey optimisations and forecasts are based on the tomographic spherical harmonic power spectrum C_ℓ^(ij). The goal of this paper is to perform a new investigation of the information that can be extracted from these two analyses in the context of planned stage-IV wide-field galaxy surveys. Methods: We compared tomographic and 3D SFB techniques by comparing the forecast cosmological parameter constraints obtained from a Fisher analysis. The comparison was made possible by careful and coherent treatment of non-linear scales in the two analyses, which makes this study the first to compare 3D SFB and tomographic constraints on an equal footing. Nuisance parameters related to a scale- and redshift-dependent galaxy bias were also included in the computation of the 3D SFB and tomographic power spectra for the first time. Results: Tomographic and 3D SFB methods can recover similar constraints in the absence of systematics. This requires choosing an optimal number of redshift bins for the tomographic analysis, which we computed to be N = 26 for z_med ≃ 0.4, N = 30 for z_med ≃ 1.0, and N = 42 for z_med ≃ 1.7. When marginalising over nuisance parameters related to the galaxy bias, the forecast 3D SFB constraints are less affected by this source of systematics than the tomographic constraints. In addition, the rate of increase of the... 10. Dynamic registration of an optical see-through HMD into a wide field-of-view rotorcraft flight simulation environment Science.gov (United States) Viertler, Franz; Hajek, Manfred 2015-05-01 To overcome the challenge of helicopter flight in degraded visual environments, current research considers head-mounted displays with 3D-conformal (scene-linked) visual cues as the most promising display technology. For pilot-in-the-loop simulations with HMDs, a highly accurate registration of the augmented visual system is required. In rotorcraft flight simulators the outside visual cues are usually provided by a dome projection system, since a wide field of view (e.g. horizontally > 200° and vertically > 80°) is required, which can hardly be achieved with collimated viewing systems.
But optical see-through HMDs mostly do not have a focus equivalent to the distance from the pilot's eye-point position to the curved screen, which also depends on head motion. Hence, a dynamic vergence correction has been implemented to avoid binocular disparity. In addition, the parallax error induced by even small translational head motions is corrected with a head-tracking system, adjusted onto the projected screen. For this purpose, two options are presented. The correction can be achieved by rendering the view with yaw and pitch offset angles dependent on the deviation of the head position from the design eye-point of the spherical projection system. Alternatively, it can be solved by implementing a dynamic eye-point in the multi-channel projection system for the outside visual cues. Both options have been investigated for the integration of a binocular HMD into the Rotorcraft Simulation Environment (ROSIE) at the Technische Universitaet Muenchen. Pros and cons of both possibilities with regard to integration issues and usability in flight simulations will be discussed. 11. FIRE SPECTROSCOPY OF FIVE LATE-TYPE T DWARFS DISCOVERED WITH THE WIDE-FIELD INFRARED SURVEY EXPLORER International Nuclear Information System (INIS) Burgasser, Adam J.; Cushing, Michael C.; Mainzer, A.; Bauer, James M.; Kirkpatrick, J. Davy; Gelino, Christopher R.; Griffith, Roger L.; Marsh, Kenneth A.; Looper, Dagny L.; Tinney, Christopher; Simcoe, Robert A.; Bochanski, John J.; Skrutskie, Michael F.; Thompson, Maggie A.; Wright, Edward L. 2011-01-01 We present the discovery of five late-type T dwarfs identified with the Wide-field Infrared Survey Explorer (WISE). Low-resolution near-infrared spectroscopy obtained with the Magellan Folded-port InfraRed Echellette reveals strong H₂O and CH₄ absorption in all five sources, and spectral indices and comparison to spectral templates indicate classifications ranging from T5.5 to T8.5:. The spectrum of the latest-type source, WISE J1812+2721, is an excellent match to that of the T8.5 companion brown dwarf Wolf 940B. WISE-based spectrophotometric distance estimates place these T dwarfs at 12-13 pc from the Sun, assuming they are single. Preliminary fits of the spectral data to the atmosphere models of Saumon and Marley indicate effective temperatures ranging from 600 K to 930 K, both cloudy and cloud-free atmospheres, and a broad range of ages and masses. In particular, two sources show evidence of both low surface gravity and cloudy atmospheres, tentatively supporting a trend noted in other young brown dwarfs and exoplanets. In contrast, the high-proper-motion T dwarf WISE J2018-7423 exhibits a suppressed K-band peak and blue spectrophotometric J - K colors indicative of an old, massive brown dwarf; however, it lacks the broadened Y-band peak seen in metal-poor counterparts. These results illustrate the broad diversity of low-temperature brown dwarfs that will be uncovered with WISE. 12. THE HUBBLE WIDE FIELD CAMERA 3 TEST OF SURFACES IN THE OUTER SOLAR SYSTEM: SPECTRAL VARIATION ON KUIPER BELT OBJECTS International Nuclear Information System (INIS) Fraser, Wesley C.; Brown, Michael E.; Glass, Florian 2015-01-01 Here, we present additional photometry of targets observed as part of the Hubble Wide Field Camera 3 (WFC3) Test of Surfaces in the Outer Solar System. Twelve targets were re-observed with the WFC3 in optical and NIR wavebands designed to complement those used during the first visit.
Additionally, all of the observations originally presented by Fraser and Brown were reanalyzed through the same updated photometry pipeline. A re-analysis of the optical and NIR color distribution reveals a bifurcated optical color distribution and only two identifiable spectral classes, each of which occupies a broad range of colors and has correlated optical and NIR colors, in agreement with our previous findings. We report the detection of significant spectral variations on five targets which cannot be attributed to photometry errors, cosmic rays, point-spread function or sensitivity variations, or other image artifacts capable of explaining the magnitude of the variation. The spectrally variable objects are found to have a broad range of dynamical classes and absolute magnitudes, exhibit a broad range of apparent magnitude variations, and are found in both compositional classes. The spectrally variable objects with sufficiently accurate colors for spectral classification maintain their membership, belonging to the same class at both epochs. 2005 TV189 exhibits a sufficiently broad difference in color between the two epochs to span the full range of colors of the neutral class. This strongly argues that the neutral class is one single class with a broad range of colors, rather than the combination of multiple overlapping classes. 13. Measuring tubulin content in Toxoplasma gondii: A comparison of laser-scanning confocal and wide-field fluorescence microscopy Science.gov (United States) Swedlow, Jason R.; Hu, Ke; Andrews, Paul D.; Roos, David S.; Murray, John M. 2002-01-01 Toxoplasma gondii is an intracellular parasite that proliferates within most nucleated cells, an important human pathogen, and a model for the study of human and veterinary parasitic infections. We used a stable yellow fluorescent protein-α-tubulin transgenic line to determine the structure of the microtubule cytoskeleton in T. gondii. Imaging of living yellow fluorescent protein-α-tubulin parasites by laser-scanning confocal microscopy (LSCM) failed to resolve the 22 subpellicular microtubules characteristic of the parasite cytoskeleton. To understand this result, we analyzed sources of noise in the LSCM and identified illumination fluctuations on time scales from microseconds to hours that introduce significant amounts of noise. We confirmed that weakly fluorescent structures could not be imaged in LSCM by using fluorescent bead standards. By contrast, wide-field microscopy (WFM) did visualize weak fluorescent standards and the individual microtubules of the parasite cytoskeleton. We therefore measured the fluorescence per unit length of microtubule using WFM and used this information to estimate the tubulin content of the conoid (a structure important for T. gondii infection) and of the mitotic spindle pole. The conoid contains sufficient tubulin for ≈10 microtubule segments of 0.5 μm length, indicating that tubulin forms the structural core of the organelle. We also show that the T. gondii mitotic spindle contains ≈1 microtubule per chromosome. This analysis expands the understanding of structures used for invasion and intracellular proliferation by an important human pathogen and shows the advantage of WFM combined with image deconvolution over LSCM for quantitative studies of weakly fluorescent structures in moderately thin living cells. PMID:11830634 14. Athermal laser launch telescopes NARCIS (Netherlands) Kamphues, F.G.; Henselmans, R.; Rijnveld, N.; Lemmen, M.H.J.; Doelman, N.J.; Nijkerk, M.D.
2011-01-01 ESO has developed a concept for a compact laser guide star unit for use in future Adaptive Optics (AO) systems. A small powerful laser is combined with a telescope that launches the beam, creating a single modular unit that can be mounted directly on a large telescope. This approach solves several... 15. Observing the Sun with Coronado telescopes CERN Document Server Pugh, Philip 2007-01-01 The Sun provides amateur astronomers with one of the few opportunities for daytime astronomy. In order to see the major features of our nearest star, special telescopes that have a very narrow visible bandwidth are essential. The bandwidth has to be as narrow as 1 Å (10⁻¹⁰ m, 1 Angstrom) and centred on the absorption line of neutral hydrogen. This makes many major features of the Sun's chromosphere visible to the observer. Such narrow-band Fabry-Perot etalon filters are high technology, and until the introduction of the Coronado range of solar telescopes, were too expensive for amateur use. The entry-level Coronado telescope, the PST (Personal Solar Telescope), costs under 500. Solar prominences (vast columns of plasma, best seen at the edge of the solar disk), filaments, flares, sunspots, plage and active regions are all visible and can be imaged to produce spectacular solar photographs. Philip Pugh has assembled a team of contributors who show just how much solar work can be done with Coronado telesco... 16. Large Binocular Telescope Project Science.gov (United States) Hill, John M.; Salinari, Piero 1998-08-01 The Large Binocular Telescope (LBT) Project is a collaboration between institutions in Arizona, Germany, Italy, and Ohio. With the addition of the partners from Ohio State and Germany in February 1997, the Large Binocular Telescope Corporation has the funding required to build the full telescope populated with both 8.4 meter optical trains. The first of the two 8.4 meter borosilicate honeycomb primary mirrors for the LBT was cast at the Steward Observatory Mirror Lab in 1997. The baseline optical configuration of the LBT includes adaptive infrared secondaries of a Gregorian design. The F/15 secondaries are undersized to provide a low-thermal-background focal plane. The interferometric focus combining the light from the two 8.4 meter primaries will reimage the two folded Gregorian focal planes to three central locations. The telescope elevation structure accommodates swing arms which allow rapid interchange of the various secondary and tertiary mirrors. Maximum stiffness and minimal thermal disturbance were important drivers for the design of the telescope in order to provide the best possible images for interferometric observations. The telescope structure accommodates installation of a vacuum bell jar for aluminizing the primary mirrors in situ on the telescope. The detailed design of the telescope structure was completed in 1997 by ADS Italia (Lecco) and European Industrial Engineering (Mestre). A series of contracts for the fabrication and machining of the telescope structure was placed at the end of 1997. The final enclosure design was completed at M3 Engineering & Technology (Tucson), EIE and ADS Italia. During 1997, the telescope pier and the concrete ring wall for the rotating enclosure were completed, along with the steel structure of the fixed portion of the enclosure. The erection of the steel structure for the rotating portion of the enclosure will begin in the Spring of 1998.
17. DESTINY, The Dark Energy Space Telescope Science.gov (United States) Pasquale, Bert A.; Woodruff, Robert A.; Benford, Dominic J.; Lauer, Tod 2007-01-01 We have proposed the development of a low-cost space telescope, Destiny, as a concept for the NASA/DOE Joint Dark Energy Mission. Destiny is a 1.65 m space telescope, featuring a near-infrared (0.85-1.7 μm) survey camera/spectrometer with a moderate flat-field field of view (FOV). Destiny will probe the properties of dark energy by obtaining a Hubble diagram based on Type Ia supernovae and a large-scale mass power spectrum derived from weak lensing distortions of field galaxies as a function of redshift. 18. Preschool Teacher Support through Class-Wide Intervention: A Description of Field-Initiated Training and Evaluation Science.gov (United States) Barnett, David W.; Ihlo, Tanya; Nichols, Angela; Wolsing, Laurie 2007-01-01 Preparing professionals for class-wide consultation has a significant role in achieving goals associated with recent legislation and reform initiatives. Class-wide interventions are used to target achievement and social learning, are under a teacher's control and responsibility, and build on basic classroom interactions, routines, and resources... 19. Wide-field time-domain fluorescence lifetime imaging microscopy (FLIM): Molecular snapshots of metabolic function in biological systems Science.gov (United States) Sud, Dhruv 2008-12-01 Steady-state fluorescence imaging is routinely employed to obtain physiological information but is susceptible to artifacts such as absorption and photobleaching. FLIM provides an additional source of contrast that is oblivious to these, but is affected by factors such as pH, gases, and temperature. Here we focused on developing a resolution-enhanced FLIM system for quantitative oxygen sensing. Oxygen is one of the most critical components of the metabolic machinery and affects growth, differentiation, and death. FLIM-based oxygen sensing provides a valuable tool for biologists without the need for alternate technologies. We also developed novel computational approaches to improve the spatial resolution of FLIM images, extending its potential for thick-tissue studies. We designed a wide-field time-domain UV-vis-NIR FLIM system with high temporal resolution (50 ps), large temporal dynamic range (750 ps - 1 μs), short data acquisition/processing times (15 s) and noise-removal capability. Lifetime calibration of an oxygen-sensitive ruthenium dye (RTDP) enabled in vivo oxygen level measurements (resolution = 8 μM, range = 1-300 μM). Combining oxygen sensing with endogenous imaging allowed for the study of two key molecules (NADH and oxygen) consumed at the termini of the oxidative phosphorylation pathway in Barrett's adenocarcinoma columnar (SEG-1) cells and esophageal normal squamous (HET-1) cells. Starkly higher intracellular oxygen and NADH levels in living SEG-1 vs. HET-1 cells were detected by FLIM and attributed to altered metabolic pathways in malignant cells. We performed FLIM studies in microfluidic bioreactors seeded with mouse myoblasts. For these systems, oxygen concentrations play an important role in cell behavior and gene expression. Oxygen levels decreased with increasing cell densities and were consistent with simulated model outcomes. In single bioreactor loops, FLIM detected spatial heterogeneity in oxygen levels as high as 20%. We validated our calibration...
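Lifetime-based oxygen sensing with a ruthenium dye such as RTDP rests on the Stern-Volmer relation, tau0/tau = 1 + K_SV [O2], so a lifetime calibration inverts directly to concentration. The constants below are assumed round numbers for illustration, not the calibration values from this work:

    # Stern-Volmer conversion from measured lifetime to oxygen level.
    import numpy as np

    TAU0_NS = 600.0     # unquenched RTDP lifetime in ns (assumed)
    K_SV = 0.0025       # Stern-Volmer constant per uM of O2 (assumed)

    def oxygen_from_lifetime(tau_ns):
        # Invert tau0/tau = 1 + K_SV * [O2] to get [O2] in uM.
        return (TAU0_NS / np.asarray(tau_ns) - 1.0) / K_SV

    print(oxygen_from_lifetime([600.0, 450.0, 360.0]))
    # -> [0.0, ~133.3, ~266.7] uM, i.e. shorter lifetimes mean more oxygen

Because the readout is a ratio of lifetimes rather than an absolute intensity, the measurement is insensitive to the absorption and photobleaching artifacts mentioned above.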
20. The AMANDA neutrino telescope Energy Technology Data Exchange (ETDEWEB) Andres, E.C.; Askebjer, P.; Barwick, S.W.; Bay, R.C.; Bergstroem, L.; Biron, A.; Booth, J.; Botner, O.; Bouchta, A.; Carius, S.; Carlson, M.; Chinowsky, W.; Chirkin, D.; Conrad, J.; Costa, C.G.S.; Cowen, D.; Dalberg, E.; DeYoung, T.; Edsjoe, J.; Ekstroem, P.; Goobar, A.; Gray, L.; Hallgren, A.; Halzen, F.; Hardtke, R.; Hart, S.; He, Y.; Heros, C.P. de los; Hill, G.; Hulth, P.O.; Hundertmark, S.; Jacobsen, J.; Jones, A.; Kandhadai, V.; Karle, A.; Kim, J.; Leich, H.; Leuthold, M.; Lindahl, P.; Liubarsky, I.; Loaiza, P.; Lowder, D.; Marciniewski, P.; Miller, T.C.; Miocinovic, P.; Mock, P.C.; Morse, R.; Newcomer, M.; Niessen, P.; Nygren, D.; Porrata, R.; Potter, D.; Price, P.B.; Przybylski, G.; Rhode, W.; Richter, S.; Rodriquez, J.; Romenesko, P.; Ross, D.; Rubinstein, H.; Schmidt, T.; Schneider, E.; Schwartz, R.; Schwendicke, U.; Smoot, G.; Solarz, M.; Sorin, V.; Spiering, C.; Steffen, P.; Stokstad, R.; Streicher, O.; Taboada, I.; Thon, T.; Tilav, S.; Walck, C.; Wiebusch, C.H.; Wischnewski, R.; Woschnagg, K.; Wu, W.; Yodh, G.; Young, S 1999-05-01 With an effective telescope area of order 10⁴ m² for TeV neutrinos, a threshold near ~50 GeV and a pointing accuracy of 2.5 degrees per muon track, the AMANDA detector represents the first of a new generation of high-energy neutrino telescopes, reaching a scale envisaged over 25 years ago. We describe early results on the calibration of natural deep ice as a particle detector as well as on AMANDA's performance as a neutrino telescope. 2. James Clerk Maxwell Telescope Science.gov (United States) The James Clerk Maxwell Telescope (JCMT) is a 15 m diameter telescope of high surface accuracy, operating in the millimeter and submillimeter bands, and is situated on Mauna Kea in Hawaii.
The JCMT facility is described, and a scientific report is presented that includes a variety of scientific results over the years 1989 and 1990, showing the range of astronomical problems tackled with the telescope. Operations are described, noting the decrease in both the time lost to faults and the time required for engineering and commissioning work. The objectives and progress of the instrumentation program are described. A financial statement is presented. 3. Science Flight Program of the Nuclear Compton Telescope Science.gov (United States) Boggs, Steven This is the lead proposal for this program. We are proposing a 5-year program to perform the scientific flight program of the Nuclear Compton Telescope (NCT), consisting of a series of three (3) scientific balloon flights. NCT is a balloon-borne, wide-field telescope designed to survey the gamma-ray sky (0.2-5 MeV), performing high-resolution spectroscopy, wide-field imaging, and polarization measurements. NCT has been rebuilt as a ULDB payload under the current 2-year APRA grant. (In that proposal we stated our goal was to return at this point to propose the scientific flight program.) The NCT rebuild/upgrade is on budget and on schedule to achieve flight-ready status in Fall 2013. Science: NCT will map the Galactic positron annihilation emission, shedding more light on the mysterious concentration of this emission uncovered by INTEGRAL. NCT will survey Galactic nucleosynthesis and the role of supernovae and other stellar populations in the creation and evolution of the elements. NCT will map ²⁶Al and positron annihilation with unprecedented sensitivity and uniform exposure, perform the first mapping of ⁶⁰Fe, search for young, hidden supernova remnants through ⁴⁴Ti emission, and enable a host of other nuclear astrophysics studies. NCT will also study compact objects (in our Galaxy and AGNs) and GRBs, providing novel measurements of polarization as well as detailed spectra and light curves. Design: NCT is an array of germanium gamma-ray detectors configured in a compact, wide-field Compton telescope configuration. The array is shielded on the sides and bottom by an active anticoincidence shield but is open to the 25% of the sky above for imaging, spectroscopy, and polarization measurements. The instrument is mounted on a zenith-pointed gondola, sweeping out ~50% of the sky each day. This instrument builds on the Compton telescope technique pioneered by COMPTEL on the Compton Gamma Ray Observatory. However, by utilizing modern germanium semiconductor strip detectors... 4. A New Concept of Agile Telescope Directory of Open Access Journals (Sweden) Michael Valasek 2010-01-01 The paper describes a new concept for a spherical mechanism for agile telescopes. It is based on a redundantly actuated parallel kinematic structure. Due to the three-times overactuated structure and the application of several further innovative concepts, the Hexasphere achieves a movability of ±100 degrees. This enables the use of the Hexasphere as the basis for telescope mounts. Such telescopes can be optimized for minimum weight or for maximum dynamics. The proposed mechanism is expected to play a role in novel robotic telescopes nowadays used in many fields of astronomy and astrophysics, with emphasis on automated systems for alert observations of celestial gamma-ray bursts.
5. IMF dependence of the open-closed field line boundary in Saturn's ionosphere, and its relation to the UV auroral oval observed by the Hubble Space Telescope Directory of Open Access Journals (Sweden) E. S. Belenkaya 2007-06-01 Full Text Available We study the dependence of Saturn's magnetospheric magnetic field structure on the interplanetary magnetic field (IMF), together with the corresponding variations of the open-closed field line boundary in the ionosphere. Specifically we investigate the interval from 8 to 30 January 2004, when UV images of Saturn's southern aurora were obtained by the Hubble Space Telescope (HST), and simultaneous interplanetary measurements were provided by the Cassini spacecraft located near the ecliptic ~0.2 AU upstream of Saturn and ~0.5 AU off the planet-Sun line towards dawn. Using the paraboloid model of Saturn's magnetosphere, we calculate the magnetospheric magnetic field structure for several values of the IMF vector representative of interplanetary compression regions. Variations in the magnetic structure lead to different shapes and areas of the open field line region in the ionosphere. Comparison with the HST auroral images shows that the area of the computed open flux region is generally comparable to that enclosed by the auroral oval, and sometimes agrees in detail with its poleward boundary, though more typically being displaced by a few degrees in the tailward direction. 6. Silicon carbide optics for space and ground based astronomical telescopes Science.gov (United States) Robichaud, Joseph; Sampath, Deepak; Wainer, Chris; Schwartz, Jay; Peton, Craig; Mix, Steve; Heller, Court 2012-09-01 Silicon Carbide (SiC) optical materials are being applied widely for both space based and ground based optical telescopes. The material provides a superior weight to stiffness ratio, which is an important metric for the design and fabrication of lightweight space telescopes. The material also has superior thermal properties, with a low coefficient of thermal expansion and a high thermal conductivity. These thermal advantages are important for both space based and ground based systems, which typically need to operate under stressing thermal conditions. The paper will review L-3 Integrated Optical Systems - SSG's (L-3 SSG) work in developing SiC optics and SiC optical systems for astronomical observing systems. L-3 SSG has been fielding SiC optical components and systems for over 25 years. Space systems described will emphasize the recently launched Long Range Reconnaissance Imager (LORRI) developed for JHU-APL and NASA-GSFC. Review of ground based applications of SiC will include supporting L-3 IOS-Brashear's current contract to provide the 0.65 meter diameter, aspheric SiC secondary mirror for the Advanced Technology Solar Telescope (ATST). 7. All-Sky Interferometry with Spherical Harmonic Transit Telescopes Energy Technology Data Exchange (ETDEWEB) Shaw, J. Richard [Canadian Inst. Theor. Astrophys.]; Sigurdson, Kris [British Columbia U.]; Pen, Ue-Li [Canadian Inst. Theor. Astrophys.]; Stebbins, Albert [Fermilab]; Sitwell, Michael [British Columbia U.] 2013-02-01 In this paper we describe the spherical harmonic transit telescope, a novel formalism for the analysis of transit radio telescopes. This all-sky approach bypasses the curved sky complications of traditional interferometry and so is particularly well suited to the analysis of wide-field radio interferometers.
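A minimal sketch of the idea behind this formalism, with synthetic numbers (an illustration only, not code from the paper): a drift-scan transit telescope sees a visibility time-stream that is periodic over a sidereal day, so a Fourier transform in time yields m-modes, each coupling only to spherical harmonics of matching azimuthal order m.

import numpy as np

n = 4096                                  # samples over one sidereal day (illustrative)
t = np.arange(n) / n                      # time in units of sidereal days
# Synthetic visibility containing sky structure at azimuthal orders m = 3 and m = -7
vis = 0.5 * np.exp(2j * np.pi * 3 * t) + 0.1 * np.exp(-2j * np.pi * 7 * t)
m_modes = np.fft.fft(vis) / n             # Fourier index m isolates azimuthal order m
print(abs(m_modes[3]), abs(m_modes[-7]))  # recovers 0.5 and 0.1; all other modes ~0

Working mode-by-mode in m is what allows the compact, computationally efficient representations described next.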
The formalism enables compact and computationally efficient representations of the data and its statistics that allow new ways of approaching important problems like map-making and foreground removal. In particular, we show how it enables the use of the Karhunen-Loeve transform as a highly effective foreground filter, suppressing realistic foreground residuals for our fiducial example by at least a factor of twenty below the 21cm signal even in highly contaminated regions of the sky. This is despite the presence of the angle-frequency mode mixing inherent in real-world instruments with frequency-dependent beams. We show, using Fisher forecasting, that foreground cleaning has little effect on power spectrum constraints compared to hypothetical foreground-free measurements. Beyond providing a natural real-world data analysis framework for 21cm telescopes now under construction and future experiments, this formalism allows accurate power spectrum forecasts to be made that include the interplay of design constraints and realistic experimental systematics with twenty-first century 21cm science. 8. Hubble Space Telescope via the Web Science.gov (United States) O'Dea, Christopher P. The Space Telescope Science Institute (STScI) makes available a wide variety of information concerning the Hubble Space Telescope (HST) via the Space Telescope Electronic Information Service (STEIS). STEIS is accessible via anonymous ftp, gopher, WAIS, and WWW. The information on STEIS includes how to propose for time on the HST, the current status of HST, reports on the scientific instruments, the observing schedule, data reduction software, calibration files, and a set of publicly available images in JPEG, GIF and TIFF format. STEIS serves both the astronomical community and the larger Internet community. WWW is currently the most widely used interface to STEIS. Future developments on STEIS are expected to include larger amounts of hypertext, especially HST images and educational material of interest to students, educators, and the general public, and the ability to query proposal status. 9. Launch Will Create a Radio Telescope Larger than Earth Science.gov (United States) NASA and the National Radio Astronomy Observatory are joining with an international consortium of space agencies to support the launch of a Japanese satellite next week that will create the largest astronomical "instrument" ever built -- a radio telescope more than two-and-a-half times the diameter of the Earth that will give astronomers their sharpest view yet of the universe. The launch of the Very Long Baseline Interferometry (VLBI) Space Observatory Program (VSOP) satellite by Japan's Institute of Space and Astronautical Science (ISAS) is scheduled for Feb. 10 at 11:50 p.m. EST (1:50 p.m. Feb. 11, Japan time). The satellite is part of an international collaboration led by ISAS and backed by Japan's National Astronomical Observatory; NASA's Jet Propulsion Laboratory (JPL), Pasadena, CA; the National Science Foundation's National Radio Astronomy Observatory (NRAO), Socorro, NM; the Canadian Space Agency; the Australia Telescope National Facility; the European VLBI Network and the Joint Institute for Very Long Baseline Interferometry in Europe. Very long baseline interferometry is a technique used by radio astronomers to electronically link widely separated radio telescopes together so they work as if they were a single instrument with extraordinarily sharp "vision," or resolving power.
The wider the distance between telescopes, the greater the resolving power. By taking this technique into space for the first time, astronomers will approximately triple the resolving power previously available with only ground-based telescopes. The satellite system will have resolving power almost 1,000 times greater than the Hubble Space Telescope at optical wavelengths. The satellite's resolving power is equivalent to being able to see a grain of rice in Tokyo from Los Angeles. "Using space VLBI, we can probe the cores of quasars and active galaxies, believed to be powered by super massive black holes," said Dr. Robert Preston, project scientist for the U.S. Space Very Long 10. Development of 50 cm B-N Schmidt telescope at ARIES Science.gov (United States) Gupta, K. G.; Bangia, T.; Kumar, T. S.; Snehlata; Sharma, N.; Shukla, V. A 50/80 cm wide field (4 degree) Baker-Nunn Schmidt telescope with 4k CCD (9 micron pixel) is under development at ARIES for studying optical transients, near-earth objects (up to 20 magnitude) and supplementing ASTROSAT objectives. As a modification of the original optical design, a field corrector system consisting of a 235 mm meniscus and a 55 mm field-flattener (near focus) has to be incorporated. Detailed mechanical design has been completed with an equatorial English mount and a passive support system for the 796 mm primary mirror with 18 axial and 12 radial supports. The original B-N corrector cell assembly will be used with modifications. The electronics will consist of torque dc motors, absolute and incremental encoders for position and velocity feedback. Matlab will be used for modeling before designing the complete electronic feedback control system. The telescope is expected to see first light in March 2008. 11. Science.gov (United States) Richter, J. L. 1981-01-01 The Acme telescope is a compound telescope that resembles the familiar Cassegrain type except that the main mirror is spherical and the secondary is an achromatic doublet mangin mirror. Three 6-in. aperture f/15 telescope designs are described. With a cemented, all spherical surface achromangin mirror, there is a small amount of coma which can be eliminated by redesigning with an air space between the crown and flint elements of the achromangin mirror, or by cementing them with one of the concave external surfaces of the achromangin figured to a hyperboloid. In the examples, the spherical aberration is nil and the chromatic residual is roughly half that of an achromatic objective of the same speed, aperture, and glass types. Readily available crown and flint glasses such as Schott BK-7 and F-2 are entirely satisfactory for the achromangin mirror. Also considered are two examples of Acme-like telescopes with paraboloidal instead of spherical main mirrors. 12. Goddard Robotic Telescope (GRT) Data.gov (United States) National Aeronautics and Space Administration — Since it is not possible to predict when a Gamma-Ray Burst (GRB) occurs, the follow-up ground telescopes must be distributed as uniformly as possible all over the... 13. Telescopes in History Science.gov (United States) Bond, P.; Murdin, P. 2000-11-01 The precise origins of the optical telescope are hidden in the depths of time. In the thirteenth century Roger Bacon claimed to have devised a combination of lenses which enabled him to see distant objects as if they were near. Others who have an unsubstantiated claim to have invented the telescope in the sixteenth century include an Englishman, Leonard DIGGES, and an Italian, Giovanni Batista Po...
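The resolving-power comparisons in the space-VLBI record above (entry 9) can be checked to order of magnitude from the interferometer relation θ ≈ λ/B; the observing bands below are assumed for illustration rather than taken from the press release.

import math

EARTH_DIAMETER_M = 12_742e3                # mean Earth diameter in metres
baseline_m = 2.6 * EARTH_DIAMETER_M        # ~2.6 Earth diameters, per the record
for freq_ghz in (1.6, 5.0, 22.0):          # assumed space-VLBI observing bands
    lam_m = 3.0e8 / (freq_ghz * 1e9)       # wavelength
    theta_mas = math.degrees(lam_m / baseline_m) * 3600e3
    print(f"{freq_ghz:4.1f} GHz: fringe spacing ~{theta_mas:.2f} mas")

At the highest band this approaches a few hundredths of a milliarcsecond, a few hundred times finer than the ~0.05 arcsec optical resolution of the HST, which is the flavor of the "almost 1,000 times" comparison quoted above.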
14. Slewing Mirror Telescope and the Data-Acquisition System for the UFFO-Pathfinder DEFF Research Database (Denmark) Lim, H.; Ahmad, S.; Barrillon, P. 2013-01-01 Alert & Trigger Telescope (UBAT) measuring the X-ray/gamma-ray with a wide field of view and the Slewing Mirror Telescope (SMT) with a rapid response for the UV/optical photons. Once the UBAT detects a GRB candidate with a position accuracy of 10 arcmin, the SMT steers the UV/optical photons from ... the candidate to the telescope by the fast rotatable mirror and provides the early UV/optical photon measurements with 4 arcsec accuracy. The SMT has a modified Ritchey-Chrétien telescope with an aperture size of 10 cm diameter including the rotatable mirror and the image readout by the intensified charge-coupled device. There is a key board, called the UFFO Data Acquisition system (UDAQ), that manages the communication of each telescope and also of the satellite and the UFFO overall operation. This pathfinder is designed and built within the limited size and weight of ~20 kg and the low power consumption up to ~30... 15. Modeling and control of antennas and telescopes CERN Document Server Gawronski, Wodek 2008-01-01 The book shows, step-by-step, the design, implementation, and testing of the antenna/telescope control system, from the design stage (analytical model) to fine tuning of the RF beam pointing (monopulse and conscan). It includes wide use of Matlab and Simulink. 16. The Large Area Telescope on the Fermi Gamma-ray Space Telescope Mission Energy Technology Data Exchange (ETDEWEB) Atwood, W.B.; Abdo, Aous A.; Ackermann, M.; Anderson, B.; Axelsson, M.; Baldini, L.; Ballet, J.; Band, D.L.; Barbiellini, Guido; Bartelt, J.; Bastieri, Denis; Baughman, B.M.; Bechtol, K.; Bederede, D.; Bellardi, F.; Bellazzini, R.; Berenji, B.; Bignami, G.F.; Bisello, D.; Bissaldi, E.; Blandford, R.D.; and more authors 2009-05-15 The Large Area Telescope (Fermi/LAT, hereafter LAT), the primary instrument on the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view (FoV), high-energy γ-ray telescope, covering the energy range from below 20 MeV to more than 300 GeV. The LAT was built by an international collaboration with contributions from space agencies, high-energy particle physics institutes, and universities in France, Italy, Japan, Sweden, and the United States. This paper describes the LAT, its preflight expected performance, and summarizes the key science objectives that will be addressed. On-orbit performance will be presented in detail in a subsequent paper. The LAT is a pair-conversion telescope with a precision tracker and calorimeter, each consisting of a 4 x 4 array of 16 modules, a segmented anticoincidence detector that covers the tracker array, and a programmable trigger and data acquisition system. Each tracker module has a vertical stack of 18 (x, y) tracking planes, including two layers (x and y) of single-sided silicon strip detectors and high-Z converter material (tungsten) per tray. Every calorimeter module has 96 CsI(Tl) crystals, arranged in an eight-layer hodoscopic configuration with a total depth of 8.6 radiation lengths, giving both longitudinal and transverse information about the energy deposition pattern. The calorimeter's depth and segmentation enable the high-energy reach of the LAT and contribute significantly to background rejection. The aspect ratio of the tracker (height/width) is 0.4, allowing a large FoV (2.4 sr) and ensuring that most pair-conversion showers initiated in the tracker will pass into the calorimeter for energy measurement. Data obtained with the LAT are intended to (1) permit rapid notification of high-energy γ-ray bursts and transients and facilitate monitoring of variable sources, (2) yield an extensive catalog of several thousand high-energy sources obtained from an all-sky survey, (3 17. A system perspective on designing for field-dependent SNR in wide-angle point-source detection lenses Science.gov (United States) Olson, S. C.; Sparks, Andrew W.; Cline, Robert A.; Goodman, Timothy D. 2017-05-01 Lenses for staring-array point-source detection sensors must maintain good signal-to-noise ratio (SNR) over fields of view often exceeding 100 degrees. Such lenses typically have f-θ distortion to provide constant solid angle sampling in object space. While the relative illumination calculation is often used to describe flux transfer from a Lambertian extended object for imaging applications, maximizing SNR for point-source detection depends primarily on maximizing collected irradiance at the entrance pupil, the shape of which can vary dramatically over field. We illustrate this field-dependent SNR calculation with an example lens and outline the calculations needed to derive a simple aberration-based expression for the field dependence of point-source SNR. 18. Field Evaluation of Whole Airliner Decontamination Technologies - Wide-Body Aircraft With Dual-Use Application for Railcars National Research Council Canada - National Science Library Gale, William F; Gale, Hyacinth S; Watson, Jean 2008-01-01 ... field evaluation performed previously on a McDonnell Douglas DC-9 aircraft. The thermal decontamination system appears to be capable of reproducing temperatures needed for an efficacious antiviral process... 19. A High-resolution Multiband Survey of Westerlund 2 with the Hubble Space Telescope.
I. Is the Massive Star Cluster Double? NARCIS (Netherlands) Zeidler, P.; Sabbi, E.; Nota, A.; Grebel, E.K.; Tosi, M.; Bonanos, A.Z.; Pasquali, A.; Christian, C.; de Mink, S.E.; Ubeda, L. 2015-01-01 We present first results from a high resolution multi-band survey of the Westerlund 2 region with the Hubble Space Telescope. Specifically, we imaged Westerlund 2 with the Advanced Camera for Surveys through the F555W, F814W, and F658N filters and with the Wide Field Camera 3 in the F125W, F160W, 20. Simulation study of geometric shape factor approach to estimating earth emitted flux densities from wide field-of-view radiation measurements Science.gov (United States) Weaver, W. L.; Green, R. N. 1980-01-01 A study was performed on the use of geometric shape factors to estimate earth-emitted flux densities from radiation measurements with wide field-of-view flat-plate radiometers on satellites. Sets of simulated irradiance measurements were computed for unrestricted and restricted field-of-view detectors. In these simulations, the earth radiation field was modeled using data from Nimbus 2 and 3. Geometric shape factors were derived and applied to these data to estimate flux densities on global and zonal scales. For measurements at a satellite altitude of 600 km, estimates of zonal flux density were in error by 1.0 to 1.2%, and global flux density errors were less than 0.2%. Estimates with unrestricted field-of-view detectors were about the same for Lambertian and non-Lambertian radiation models, but were affected by satellite altitude. The opposite was found for the restricted field-of-view detectors. 1. Critical current measurements of high-temperature superconducting short samples at a wide range of temperatures and magnetic fields Science.gov (United States) Ma, Hongjun; Liu, Huajun; Liu, Fang; Zhang, Huahui; Ci, Lu; Shi, Yi; Lei, Lei 2018-01-01 High-Temperature Superconductors (HTS) are potential materials for high-field magnets, low-loss transmission cables, and Superconducting Magnetic Energy Storage (SMES) due to their high upper critical magnetic field (Hc2) and critical temperature (Tc). The critical current (Ic) of HTS, which is one of the most important parameters for superconductor application, depends strongly on the magnetic fields and temperatures. A new Ic measurement system that can carry out accurate Ic measurement for HTS short samples with various temperatures (4.2-80 K), magnetic fields (0-14 T), and angles of the magnetic field (0°-90°) has been developed. The Ic measurement system mainly consists of a measurement holder, temperature-control system, background magnet, test cryostat, data acquisition system, and DC power supply. The accuracy of temperature control is better than ±0.1 K over the 20-80 K range and ±0.05 K when measured below 20 K. The maximum current is over 1000 A with a measurement uncertainty of 1%. The system has been successfully used for Ic determination of YBa2Cu3O7-x (YBCO) tapes at different temperatures and magnetic fields. 2. An afocal telescope configuration for the ESA Ariel mission Science.gov (United States) Da Deppo, V.; Middleton, K.; Focardi, M.; Morgante, G.; Pace, E.; Claudi, R.; Micela, G. 2017-09-01 ARIEL (Atmospheric Remote-sensing Infrared Exoplanet Large-survey) is one of the three candidates for the next ESA medium-class science mission (M4) expected to be launched in 2026.
This mission will be devoted to observing spectroscopically in the infrared (IR) a large population of known transiting planets in the neighborhood of the Solar System, opening a new discovery space in the field of extrasolar planets and enabling the understanding of the physics and chemistry of these far away worlds. ARIEL is based on a 1-m class telescope ahead of two spectrometer channels covering the band 1.95 to 7.8 microns. In addition there are four photometric channels: two wide band, also used as fine guidance sensors, and two narrow band. During its 3.5 years of operations from L2 orbit, ARIEL will continuously observe exoplanets transiting their host star. The ARIEL optical design is conceived as a fore-module common afocal telescope that will feed the spectrometer and photometric channels. The telescope optical design is composed of an off-axis portion of a two-mirror classic Cassegrain coupled to a tertiary off-axis paraboloidal mirror. The telescope and optical bench operating temperatures, as well as those of some subsystems, will be monitored and fine tuned/stabilised mainly by means of a thermal control subsystem (TCU, Telescope Control Unit) working in closed-loop feedback and hosted by the main Payload electronics unit, the Instrument Control Unit (ICU). Another important function of the TCU will be to monitor the telescope and optical bench thermistors when the Payload decontamination heaters are switched on (when operating the instrument in Decontamination Mode) during the Commissioning Phase and cyclically, if required. The thermistor data will then be sent by the ICU to the On Board Computer as properly formatted telemetry. The latter (OBC) will be in charge of switching on and off the decontamination heaters on the basis of the thermistor readout 3. Cu incorporated amorphous diamond like carbon (DLC) composites: An efficient electron field emitter over a wide range of temperature Science.gov (United States) Ahmed, Sk Faruque; Alam, Md Shahbaz; Mukherjee, Nillohit 2018-03-01 The effect of temperature on the electron field emission properties of copper incorporated amorphous diamond like carbon (a-Cu:DLC) thin films has been reported. The a-Cu:DLC thin films have been deposited on indium tin oxide (ITO) coated glass and silicon substrates by the radio frequency sputtering process. The chemical composition of the films was investigated using X-ray photoelectron spectroscopy, and the microstructure was established using high resolution transmission electron microscopy. The sp2 and sp3 bonding ratio in the a-Cu:DLC has been analyzed by Fourier transform infrared spectroscopy. The material showed excellent electron field emission properties, which were optimized by varying the copper atomic percentage and the temperature of the films. It was found that the threshold field and effective emission barrier were reduced significantly by copper incorporation as well as by temperature, and a detailed explanation of the emission mechanism is provided. 4. Report of the facility definition team spacelab UV-Optical Telescope Facility Science.gov (United States) 1975-01-01 Scientific requirements for the Spacelab Ultraviolet-Optical Telescope (SUOT) facility are presented. Specific programs involving high angular resolution imagery over wide fields, far ultraviolet spectroscopy, precisely calibrated spectrophotometry and spectropolarimetry over a wide wavelength range, and planetary studies, including high resolution synoptic imagery, are recommended.
Specifications for the mounting configuration, instrument mounting system, optical parameters, and the pointing and stabilization system are presented. Concepts for the focal plane instruments are defined. The functional requirements of the direct imaging camera, far ultraviolet spectrograph, and the precisely calibrated spectrophotometer are detailed, and the planetary camera concept is outlined. Operational concepts described in detail are: the makeup and functions of the shuttle payload crew, extravehicular activity requirements, telescope control and data management, payload operations control room, orbital constraints, and orbital interfaces (stabilization, maneuvering requirements and attitude control, contamination, utilities, and payload weight considerations). 5. Scientific Performance Analysis of the SYZ Telescope Design versus the RC Telescope Design Science.gov (United States) Ma, Donglin; Cai, Zheng 2018-02-01 Recently, Su et al. proposed an innovative design, referred to as the "SYZ" design, for China's new project of a 12 m optical-infrared telescope. The SYZ telescope design consists of three aspheric mirrors with non-zero power, including a relay mirror below the primary mirror. The SYZ design yields good imaging quality and has a relatively flat field curvature at Nasmyth focus. To evaluate the science-compatibility of this three-mirror telescope, in this paper we thoroughly compare the performance of the SYZ design with that of the Ritchey–Chrétien (RC) design, a conventional two-mirror telescope design. Further, we propose the Observing Information Throughput (OIT) as a metric for quantitatively evaluating the telescopes' science performance. We find that although a SYZ telescope yields superb imaging quality over a large field of view, a two-mirror (RC) telescope design holds a higher overall throughput, a better diffraction-limited imaging quality in the central field of view (FOV < 5′), which is better for the performance of extreme Adaptive Optics (AO), and a generally better scientific performance with a higher OIT value. D. Ma & Z. Cai contributed equally to this paper. 6. The Infrared Telescope in Space (IRTS) Science.gov (United States) Murakami, H.; Bock, J.; Freund, M. M.; Guo, H.; Hirao, T.; Lange, A. E.; Matsuhara, H.; Matsumoto, T.; Matsuura, S.; McMahon, T. J.; Murakami, M.; Nakagawa, T.; Noda, M.; Noguchi, K.; Okuda, H.; Okumura, K.; Onaka, T.; Roellig, T. L.; Sato, S.; Shibai, H.; Tanabe, T.; Watabe, T.; Yagi, T.; Yajima, N.; Yui, M. 1994-06-01 The Infrared Telescope in Space (IRTS) is a cryogenically cooled small infrared telescope that will fly aboard the small space platform Space Flyer Unit. It will survey approximately 10% of the sky with a relatively wide beam during its 20 day mission. Four focal-plane instruments will make simultaneous observations of the sky at wavelengths ranging from 1 to 1000 microns. The IRTS will provide significant information on cosmology, interstellar matter, late-type stars, and interplanetary dust. This paper describes the instrumentation and mission. 7. Contributed review: camera-limits for wide-field magnetic resonance imaging with a nitrogen-vacancy spin sensor DEFF Research Database (Denmark) 2018-01-01 Sensitive, real-time optical magnetometry with nitrogen-vacancy centers in diamond relies on accurate imaging of small (≪10^−2) fractional fluorescence changes across the diamond sample.
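The camera limit at issue here reduces, at its crudest, to photon shot noise: with N photoelectrons collected, the smallest detectable fractional fluorescence change is about 1/sqrt(N). A hedged sketch with purely illustrative sensor numbers (none taken from the paper):

import math

full_well = 2.0e5        # photoelectrons per pixel per frame (illustrative)
n_pixels = 1.0e6         # pixels summed over the imaged region (illustrative)
frame_rate = 100.0       # frames per second (illustrative)
electrons_per_s = full_well * n_pixels * frame_rate
min_fraction = 1.0 / math.sqrt(electrons_per_s)   # shot-noise contrast floor in 1 s
print(f"Smallest detectable fractional change in 1 s: ~{min_fraction:.1e}")

Dividing this contrast floor by the sensor's response slope (fractional fluorescence change per unit magnetic field) converts it into a field sensitivity, which is why full-well capacity and frame rate dominate the comparison between sensor types.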
We discuss the limitations on magnetic field sensitivity resulting from the limited number of photoelectrons ... that a camera can record in a given time. Several types of camera sensors are analyzed, and the smallest measurable magnetic field change is estimated for each type. We show that most common sensors are of limited use in such applications, while certain highly specific cameras allow achieving nanotesla... 8. German wide cross sectional survey on health impacts of electromagnetic fields in the view of general practitioners DEFF Research Database (Denmark) Kowall, Bernd; Breckenkamp, Jürgen; Heyer, Kristina 2010-01-01 OBJECTIVES: The proportion of general practitioners (GPs) in Germany who assume health impacts of electromagnetic fields (EMF) is assessed. Moreover, factors associated with this risk perception are examined. METHODS: A 7% random sample was drawn from online lists of all the GPs working in Germany... 9. Automated, highly reproducible, wide-field, light-based cortical mapping method using a commercial stereo microscope and its applications. Science.gov (United States) Jiang, Su; Liu, Ya-Feng; Wang, Xiao-Min; Liu, Ke-Fei; Zhang, Ding-Hong; Li, Yi-Ding; Yu, Ai-Ping; Zhang, Xiao-Hui; Zhang, Jia-Yi; Xu, Jian-Guang; Gu, Yu-Dong; Xu, Wen-Dong; Zeng, Shao-Qun 2016-09-01 We introduce a more flexible optogenetics-based mapping system attached on a stereo microscope, which offers automatic light stimulation to individual regions of interest in the cortex that expresses light-activated channelrhodopsin-2 in vivo. Combining simultaneous recording of electromyography from specific forelimb muscles, we demonstrate that this system offers much better efficiency and precision in mapping distinct domains for controlling limb muscles in the mouse motor cortex. Furthermore, the compact and modular design of the system also yields a simple and flexible implementation to different commercial stereo microscopes, and thus could be widely used among laboratories. 10. ANTARES: An Undersea Neutrino telescope CERN Multimedia 2002-01-01 The ANTARES (Astronomy with a Neutrino Telescope and Abyss environmental RESearch) deep-sea neutrino telescope is designed to search for neutrinos of astrophysical origin. Neutrinos are unique probes of the high energy universe; being neutral they are not deflected by magnetic fields, and interacting weakly they can readily escape from the densest regions of the universe. Potential sources of neutrinos are galactic (e.g. supernova remnants, micro-quasars) and extra-galactic (e.g. active galactic nuclei, gamma-ray bursters). Annihilation of dark matter particles in the Sun or Galactic Centre is another well motivated potential source of extraterrestrial neutrinos. The ANTARES detector is located 40 km off the coast of Toulon (France) at a depth of 2475 m in the Mediterranean Sea. Being located in the Northern hemisphere, it studies the Southern sky and in particular has the Galactic Centre in its field of view. Since 2006, the detector has operated continuously in a partial configuration. The detector was compl... 11. Reflecting telescope optics CERN Document Server Wilson, Raymond N 2004-01-01 R.N. Wilson's two-volume treatise on reflecting telescope optics has become a classic in its own right. It is intended to give a complete treatment of the subject, addressing professionals in research and industry as well as students of astronomy and amateur astronomers.
This first volume, Basic Design Theory and its Historical Development, is devoted to the theory of reflecting telescope optics and systematically recounts the historical progress. The author's approach is morphological, with strong emphasis on the historical development. The book is richly illustrated including spot-diagrams a 12. The Mini-EUSO telescope on the ISS Energy Technology Data Exchange (ETDEWEB) Scotti, Valentina, E-mail: [email protected]; Osteria, Giuseppe 2017-02-11 The Mini-EUSO project aims to perform observations of the UV-light night emission from Earth. The UV background produced in the atmosphere is a key measurement for any experiment aiming at the observation of Extreme Energy Cosmic Rays (EECR) from space, the most energetic component of the cosmic radiation. The Mini-EUSO instrument will be placed within the International Space Station (ISS) in the Russian Module and measures through a UV transparent window. The instrument comprises a compact telescope with a large field of view, based on an optical system employing two Fresnel lenses for increased light collection. The light is focused onto an array of photo-multipliers, and the resulting signal is converted into digital, processed and stored via the electronics subsystems on-board. The instrument is designed and built by the members of the JEM-EUSO collaboration. JEM-EUSO is a wide-angle refractive UV telescope being proposed for attachment to the ISS, which has been designed to address basic problems of fundamental physics and high-energy astrophysics by investigating the nature of cosmic rays with energies above 10^20 eV. Mini-EUSO will be able to study, besides EECRs, a wide range of scientific phenomena including atmospheric physics, strange quark matter and bioluminescence. The mission is approved by the Italian Space Agency and the Russian Space Agency. Scientific, technical and programmatic aspects of this project will be described. 13. Large aperture wide field multi-object spectroscopy for the 2020s: the science and status of the Maunakea Spectroscopic Explorer. Science.gov (United States) Devost, Daniel; McConnachie, Alan; Chambers, Kenneth; Gallagher, Sarah; Maunakea Spectroscopic Explorer Project office, MSE Science Advisory group, MSE Science Team 2018-01-01 Numerous international reports have recently highlighted the need for fully dedicated, large aperture, highly multiplexed spectroscopy at a range of spectral resolutions in the OIR wavelength range. Such a facility is the most obvious missing link in the emerging network of international multi-wavelength astronomy facilities, and enables science from reverberation mapping of black holes to the nucleosynthetic history of the Galaxy, and will follow up discoveries from the optical through to the radio with facilities such as LSST. The only fully dedicated large aperture MOS facility that is in the design phase is the Maunakea Spectroscopic Explorer (MSE), an 11.4m segmented mirror prime focus telescope with a 1.5 square degree field of view that has 3200 fibers at low (R~2500) and moderate (R~6000) resolution, and 1000 fibers at high (R = 20,000/40,000) resolution. I will provide an overview of MSE, describing the science drivers and the current design status, as well as the international partnership, and the results of multiple, newly completed, external reviews for the system and subsystems. The anticipated cost and timeline to first light will also be presented.
14. Population-wide bias of surround suppression in auditory spatial receptive fields of the owl's midbrain OpenAIRE Wang, Yunyan; Shanbhag, Sharad J.; Fischer, Brian J.; Peña, José L 2012-01-01 The physical arrangement of receptive fields (RFs) within neural structures is important for local computations. Nonuniform distribution of tuning within populations of neurons can influence emergent tuning properties, causing bias in local processing. This issue was studied in the auditory system of barn owls. The owl's external nucleus of the inferior colliculus (ICx) contains a map of auditory space where the frontal region is overrepresented. We measured spatiotemporal RFs of ICx neurons ... 15. Dual-Element Transducer with Phase-Inversion for Wide Depth of Field in High-Frequency Ultrasound Imaging Directory of Open Access Journals (Sweden) Jong Seob Jeong 2014-08-01 Full Text Available In high frequency ultrasound imaging (HFUI), the quality of focusing is deeply related to the length of the depth of field (DOF). In this paper, a phase-inversion technique implemented by a dual-element transducer is proposed to enlarge the DOF. The performance of the proposed method was numerically demonstrated by using the ultrasound simulation program called Field-II. A simulated dual-element transducer was composed of disc- and annular-type elements, and its aperture was concavely shaped to have a confocal point at 6 mm. The area of each element was identical in order to provide the same intensity at the focal point. The outer diameters of the inner and the outer elements were 2.1 mm and 3 mm, respectively. The center frequency of each element was 40 MHz and the f-number (focal depth/aperture size) was two. When two input signals with 0° and 180° phases were applied to the inner and outer elements simultaneously, a multi-focal zone was generated in the axial direction. The total −6 dB DOF, i.e., the sum of the two −6 dB DOFs in the near and far field lobes, was 40% longer than that of the conventional single element transducer. The signal to noise ratio (SNR) was increased by about two times, especially in the far field. Point and cyst phantom simulations were conducted, and their results were identical to those of the beam pattern simulation. Thus, the proposed scheme may be a potential method to improve the DOF and SNR in HFUI. 16. Lightweight Inexpensive Ozone Lidar Telescope Using a Plastic Fresnel Lens Science.gov (United States) DeYoung, Russell J.; Notari, Anthony; Carrion, William; Pliutau, Denis 2014-01-01 An inexpensive lightweight ozone lidar telescope was designed, constructed and operated during an ozone lidar field campaign. This report summarizes the design parameters and performance of the plastic Fresnel lens telescope and shows the ozone lidar performance compared to Zemax calculations. 17. Rapid mapping of compound eye visual sampling parameters with FACETS, a highly automated wide-field goniometer. Science.gov (United States) Douglass, John K; Wehling, Martin F 2016-12-01 A highly automated goniometer instrument (called FACETS) has been developed to facilitate rapid mapping of compound eye parameters for investigating regional visual field specializations. The instrument demonstrates the feasibility of analyzing the complete field of view of an insect eye in a fraction of the time required if using non-motorized, non-computerized methods.
Faster eye mapping makes it practical for the first time to employ sample sizes appropriate for testing hypotheses about the visual significance of interspecific differences in regional specializations. Example maps of facet sizes are presented from four dipteran insects representing the Asilidae, Calliphoridae, and Stratiomyidae. These maps provide the first quantitative documentation of the frontal enlarged-facet zones (EFZs) that typify asilid eyes, which, together with the EFZs in male Calliphoridae, are likely to be correlated with high-spatial-resolution acute zones. The presence of EFZs contrasts sharply with the almost homogeneous distribution of facet sizes in the stratiomyid. Moreover, the shapes of EFZs differ among species, suggesting functional specializations that may reflect differences in visual ecology. Surveys of this nature can help identify species that should be targeted for additional studies, which will elucidate fundamental principles and constraints that govern visual field specializations and their evolution. 18. Freeform Optical Design of Two Mirror Telescopes Science.gov (United States) Howard, Joseph; West, Garrett; Trumper, Isaac; Anderson, Alex 2015-01-01 Two-mirror telescopes composed of freeform optical surfaces are investigated and surveyed to explore the usable design space. F-number and field of view are evaluated and plotted. A case study is presented to show the benefits of volume reduction using freeform surfaces. 19. Slice-based supine-to-standing posture deformation for Chinese anatomical models and the dosimetric results with wide band frequency electromagnetic field exposure: Simulation International Nuclear Information System (INIS) Wu, T.; Tan, L.; Shao, Q.; Li, Y.; Yang, L.; Zhao, C.; Xie, Y.; Zhang, S. 2013-01-01 Standing Chinese adult anatomical models are obtained from supine-postured cadaver slices. This paper presents the dosimetric differences between the supine and the standing postures over wide band frequencies and various incident configurations. Both the body level and the tissue/organ level differences are reported for plane wave and the 3T magnetic resonance imaging radiofrequency electromagnetic field exposure. The influence of posture on the whole body specific absorption rate and tissue specified specific absorption rate values is discussed. (authors) 20. Origins Space Telescope: Study Plan Science.gov (United States) Nayyeri, Hooshang; Cooray, Asantha; Origins Space Telescope Study Team 2018-01-01 The Origins Space Telescope (OST) is the mission concept for the Far-Infrared Surveyor, a study in development by NASA in preparation for the 2020 Astronomy and Astrophysics Decadal Survey. Origins is planned to be a large aperture, actively-cooled telescope covering a wide span of the mid- to far-infrared spectrum. Its spectrographs will enable 3D surveys of the sky that will discover and characterize the most distant galaxies, the Milky Way, exoplanets, and the outer reaches of our Solar System. Origins will enable flagship-quality general observing programs led by the astronomical community in the 2030s. The Science and Technology Definition Team (STDT) would like to hear your science needs and ideas for this mission. The team can be contacted at [email protected]. This presentation will provide a summary of the OST STDT, the OST Study Team based at NASA Goddard Space Flight Center, study partners, and the advisory panel to the study.
This presentation will also summarize recent activities, including the process used to reach a decision on the mission architecture, the identification of key science drivers, and the key study milestones between 2017 and 2020. 1. A Simple "Tubeless" Telescope Science.gov (United States) Straulino, S.; Bonechi, L. 2010-01-01 Two lenses make it possible to create a simple telescope with quite large magnification. The set-up is very simple and can be reproduced in schools, provided the laboratory has a range of lenses with different focal lengths. In this article, the authors adopt the Keplerian configuration, which is composed of two converging lenses. This instrument,… 2. Taiwan Automated Telescope Network Directory of Open Access Journals (Sweden) Dean-Yi Chou 2010-01-01 can be operated either interactively or fully automatically. In the interactive mode, it can be controlled through the Internet. In the fully automatic mode, the telescope operates with preset parameters without any human care, including taking dark frames and flat frames. The network can also be used for studies that require continuous observations for selected objects. 3. The Dutch Open Telescope NARCIS (Netherlands) Rutten, R.J.; Hammerschlag, R.H.; Bettonvil, F.C.M. 1997-01-01 The Dutch Open Telescope is now being installed at La Palma. It is intended for optical solar observations with high spatial resolution. Its open design aims to minimize disturbances of the local air flow and so reduce the locally-generated component of the atmospheric seeing. This paper briefly 4. Hubble Space Telescope Spies on 'Black Eye' Science.gov (United States) 2004-01-01 Residing roughly 17 million light years from Earth, in the northern constellation Coma Berenices, is a merged star system known as Messier 64 (M64). First cataloged in the 18th century by the French astronomer Messier, M64 is a result of two colliding galaxies and has an unusual appearance as well as bizarre internal motions. It has a spectacular dark band of absorbing dust in front of its bright nucleus, lending to it the nickname of the 'Black Eye' or 'Evil Eye' galaxy. Fine details of the dark band can be seen in this image of the central portion of M64 obtained by the Wide Field Planetary Camera (WFPC2) of NASA's Hubble Space Telescope (HST). Appearing to be a fairly normal pinwheel-shaped galaxy, the M64 stars are rotating in the same direction, clockwise, as in the majority of galaxies. However, detailed studies in the 1990s led to the remarkable discovery that the interstellar gas in the outer regions of M64 rotates in the opposite direction from the gas and stars in the inner region. Astronomers believe that the oppositely rotating gas arose when M64 absorbed a satellite galaxy that collided with it, perhaps more than one billion years ago. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. 5. Cosmic inquirers: Modern telescopes and their makers International Nuclear Information System (INIS) Tucker, W.; Tucker, K. 1986-01-01 An historical account is given of major, telescopic instrument-related advancements in 20th-century astronomy, with attention to the roles played by leading figures in the various fields of astronomical research involved.
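For the Keplerian arrangement in the 'A Simple "Tubeless" Telescope' record above, the governing relation is simply M = f_objective / f_eyepiece, with the two converging lenses spaced by the sum of their focal lengths; the lens values below are hypothetical classroom numbers, not figures from that article:

# Keplerian telescope built from two converging lenses (hypothetical focal lengths).
f_objective_mm = 500.0                          # long-focus objective lens
f_eyepiece_mm = 25.0                            # short-focus eyepiece lens
magnification = f_objective_mm / f_eyepiece_mm  # angular magnification
separation_mm = f_objective_mm + f_eyepiece_mm  # lens spacing for the afocal layout
print(f"Magnification {magnification:.0f}x, lenses {separation_mm:.0f} mm apart")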
These biographical treatments encompass David Heeshen and the development of the VLA; Riccardo Giacconi and the X-ray astronomy Uhuru, High Energy Astronomy Observatory, and X-ray Explorer, and Einstein Observatory satellites; Allan Jacobson and the Gamma Ray Observatory satellite; the involvements of Frank Low and Gerry Neugebauer in the development of the IR Astronomy Satellite; and C. R. O'Dell's organization of the NASA Space Telescope program. 62 references 6. The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX): Description and Early Pilot Survey Results OpenAIRE Hill, G. J.; Gebhardt, K.; Komatsu, E.; Drory, N.; MacQueen, P. J.; Adams, J.; Blanc, G. A.; Koehler, R.; Rafal, M.; Roth, M. M.; Kelz, A.; Gronwall, C.; Ciardullo, R.; Schneider, D. P. 2008-01-01 The Hobby-Eberly Telescope Dark Energy Experiment (HETDEX) will outfit the 10 m HET with a new wide field and an array of 150 integral-field spectrographs to survey a 420 sq. deg. area in the north Galactic cap. Each fiber-coupled unit spectrograph will cover 350-550 nm, simultaneously. This instrument, called VIRUS, will produce ~34,000 spectra per exposure, and will open up the emission-line universe to large surveys for the first time. The survey will detect 0.8 million Lyman-alpha emittin... 7. The Large Millimeter Telescope Science.gov (United States) Hughes, David H.; Jáuregui Correa, Juan-Carlos; Schloerb, F. Peter; Erickson, Neal; Romero, Jose Guichard; Heyer, Mark; Reynoso, David Huerta; Narayanan, Gopal; Perez-Grovas, Alfonso Serrano; Souccar, Kamal; Wilson, Grant; Yun, Min 2010-07-01 This paper describes the current status of the Large Millimeter Telescope (LMT), the near-term plans for the telescope and the initial suite of instrumentation. The LMT is a bi-national collaboration between Mexico and the USA, led by the Instituto Nacional de Astrofísica, Óptica y Electrónica (INAOE) and the University of Massachusetts at Amherst, to construct, commission and operate a 50m-diameter millimeter-wave radio telescope. Construction activities are nearly complete at the 4600m LMT site on the summit of Volcán Sierra Negra, an extinct volcano in the Mexican state of Puebla. Full movement of the telescope, under computer control in both azimuth and elevation, has been achieved. The commissioning and scientific operation of the LMT is divided into two major phases. As part of phase 1, the installation of precision surface segments for millimeter-wave operation within the inner 32m-diameter of the LMT surface is now complete. The alignment of these surface segments is underway. The telescope (in its 32-m diameter format) will be commissioned later this year with first-light scientific observations at 1mm and 3mm expected in early 2011. In phase 2, we will continue the installation and alignment of the remainder of the reflector surface, following which the final commissioning of the full 50-m LMT will take place. The LMT antenna, outfitted with its initial complement of scientific instruments, will be a world-leading scientific research facility for millimeter-wave astronomy. 8. Water Vapor, Temperature and Wind Profiles within Maize Canopy under in-Field Rainwater Harvesting with Wide and Narrow Runoff Strips Directory of Open Access Journals (Sweden) Weldemichael A. 
Tesfuhuney 2013-11-01 Full Text Available Micrometeorological measurements were used to evaluate heat and water vapor to describe the transpiration (Ev) and soil evaporation (Es) processes for wide and narrow runoff strips under the in-field rainwater harvesting (IRWH) system. The resulting sigmoid-shaped water vapor (ea) in wide and narrow runoff strips varied in the lower and upper parts of the maize canopy. In wide runoff strips, lapse conditions of ea extended from the lowest measurement level (LP) to the upper middle section (MU), and inversion was apparent at the top of the canopy. The virtual potential temperature (θv) profile showed no difference in the middle section, but the lower and upper portion (UP) had lower θv in narrow, compared to wide, strips, and LP-UP changes of 0.6 K and 1.2 K were observed, respectively. The Ev and Es within the canopy increased the ea concentration as determined by the wind order of magnitude. The ea concentration reached a peak at about 1.6 kPa at a range of wind speed values of 1.4–1.8 m∙s−1 and 2.0–2.4 m∙s−1 for wide and narrow treatments, respectively. The sparse maize canopy of the wide strips could supply more drying power of the air in response to atmospheric evaporative demand compared to narrow strips. This is due to the variation in air flow in wide and narrow runoff strips that changes gradients in ea for evapotranspiration processes. 9. A balloon borne telescope for planetary observations with a fine pointing technology Science.gov (United States) Shoji, Yasuhiro; Onishi, Tomoya; Battazzo, Steve; Yoshimura, Atsushi; Sakamoto, Yuji; Yoshida, Kazuya; Takahashi, Yukihiro; Taguchi, Makoto A balloon borne telescope is one of the effective methods for observing planets under space-like conditions. A telescope is carried up to the stratosphere at an altitude higher than 32 km, where the air density is as thin as 1/100 of that at the ground. The thin atmosphere gives a telescope better observation conditions: fine seeing, stable weather, and high transmittance, especially in the infrared region. Moreover, there is a chance that a planet can be continuously seen for a window longer than 24 hours from the polar stratosphere. The authors have been developing a balloon borne telescope system for years to take finer images of planets in the solar system. The first object is Venus, whose atmospheric motions are derived by tracking the changes of cloud patterns in UV, visible and NIR bands. Highly precise pointing control with sub-arcsecond error is required so that the balloon borne telescope achieves its diffraction-limited spatial resolution. The flight system is equipped with a three-stage attitude and pointing control system in order to realize the desired pointing control precision. In 2009, the flight system was built and tested in various ground tests and an actual balloon flight. Although the balloon experiment failed due to trouble with an onboard computer, the ground tests before the flight operation verified that the pointing control system can achieve a pointing error of less than 0.2 arcseconds. The balloon borne telescope is being redesigned for a sequential observation of Venus, Mars and Jupiter in the summer of 2011. This flight will be a step for a long-duration observation in the polar stratosphere. Additionally, an observation of the sodium tail of Mercury with a small telescope and a wide field of view has been under consideration. Mercury has a very thin atmosphere called a surface-bounded exosphere.
Past observations by spacecraft and ground-based telescopes revealed that one of the atmospheric components, gaseous 10. Nearby Exo-Earth Astrometric Telescope (NEAT) Science.gov (United States) Shao, M.; Nemati, B.; Zhai, C.; Goullioud, R. 2011-01-01 NEAT (Nearby Exo-Earths Astrometric Telescope) is a modest sized (1 m diameter) telescope. It will be capable of searching approx 100 nearby stars down to 1 Mearth planets in the habitable zone, and 200 at 5 Mearth, 1 AU. The concept addresses the major issues for ultra-precise astrometry: (1) photon noise (0.5 deg dia field of view), (2) optical errors (beam walk) with a long focal length telescope, (3) focal plane errors, with laser metrology of the focal plane, and (4) PSF centroiding errors, with measurement of the "true" PSF instead of using a "guess" of the true PSF, and correction for intra-pixel QE non-uniformities. Technology is "close" to complete: focal plane geometry to 2e-5 pixels and centroiding to approx 4e-5 pixels. 11. Can Radio Telescopes Find Axions? Science.gov (United States) Kohler, Susanna 2017-08-01 axions. Now scientists Katharine Kelley and Peter Quinn at ICRAR, University of Western Australia, have explored how we might use next-generation radio telescopes to search for photons that were created by axions interacting with the magnetic fields of our galaxy. Hope for Next-Gen Telescopes. Figure caption: potential axion coupling strengths vs. mass; the axion mass is thought to lie between a μeV and a meV; two theoretical models are shown with dashed lines; the plot shows the sensitivity of the upcoming SKA and its precursors, ASKAP and MEERKAT. [Kelley & Quinn 2017] By using a simple galactic halo model and reasonable assumptions for the central galactic magnetic field, even taking into account the time dependence of the field, Kelley and Quinn estimate the radio-frequency power density that we would observe at Earth from axions being converted to photons within the Milky Way's magnetic field. The authors then compare this signature to the detection capabilities of upcoming radio telescope arrays. They show that the upcoming Square Kilometer Array and its precursors should have the capability to detect signs of axions across large parts of parameter space. Kelley and Quinn conclude that there's good cause for optimism about future radio telescopes' ability to detect axions. And if we did succeed in making a detection, it would be a triumph for both particle physics and astrophysics, finally providing an explanation for the universe's dark matter. Citation: Katharine Kelley and P. J. Quinn 2017 ApJL 845 L4. doi:10.3847/2041-8213/aa808d 12. Augmenting WFIRST Microlensing with a Ground-Based Telescope Network Science.gov (United States) Zhu, Wei; Gould, Andrew 2016-06-01 Augmenting the Wide Field Infrared Survey Telescope (WFIRST) microlensing campaigns with intensive observations from a ground-based network of wide-field survey telescopes would have several major advantages. First, it would enable full two-dimensional (2-D) vector microlens parallax measurements for a substantial fraction of low-mass lenses as well as planetary and binary events that show caustic crossing features. For a significant fraction of the free-floating planet (FFP) events and all caustic-crossing planetary/binary events, these 2-D parallax measurements directly lead to complete solutions (mass, distance, transverse velocity) of the lens object (or lens system). For even more events, the complementary ground-based observations will yield 1-D parallax measurements.
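The reason the 2-D parallax measurements in the record above "directly lead to complete solutions" is the standard microlensing mass relation M = θ_E / (κ π_E), with κ = 4G/(c² AU) ≈ 8.14 mas per solar mass; the θ_E and π_E values below are illustrative only, not taken from the paper:

# Standard microlensing mass relation (illustrative input values, not from the paper).
KAPPA_MAS_PER_MSUN = 8.14            # kappa = 4G / (c^2 AU), in mas per solar mass
theta_E_mas = 0.30                   # angular Einstein radius (assumed)
pi_E = 0.12                          # microlens parallax amplitude (assumed)
lens_mass_msun = theta_E_mas / (KAPPA_MAS_PER_MSUN * pi_E)
pi_rel_mas = theta_E_mas * pi_E      # relative lens-source parallax, fixing distance
print(f"Lens mass ~{lens_mass_msun:.2f} M_sun, pi_rel ~{pi_rel_mas:.3f} mas")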
Together with the 1-D parallaxes from WFIRST alone, they can probe the entire mass range M > M_Earth. For luminous lenses, such 1-D parallax measurements can be promoted to complete solutions (mass, distance, transverse velocity) by high-resolution imaging. This would provide crucial information not only about the hosts of planets and other lenses, but also enable a much more precise Galactic model. Other benefits of such a survey include improved understanding of binaries (particularly with low mass primaries), and sensitivity to distant ice-giant and gas-giant companions of WFIRST lenses that cannot be detected by WFIRST itself due to its restricted observing windows. Existing ground-based microlensing surveys can be employed if WFIRST is pointed at lower-extinction fields than is currently envisaged. This would come at some cost to the event rate. Therefore the benefits of improved characterization of lenses must be weighed against these costs. 13. Evidence for non-axisymmetry in M 31 from wide-field kinematics of stars and gas Science.gov (United States) Opitsch, M.; Fabricius, M. H.; Saglia, R. P.; Bender, R.; Blaña, M.; Gerhard, O. 2018-03-01 Aim. As the nearest large spiral galaxy, M 31 provides a unique opportunity to study the structure and evolutionary history of this galaxy type in great detail. Among the many observing programs aimed at M 31 are microlensing studies, which require good three-dimensional models of the stellar mass distribution. Possible non-axisymmetric structures like a bar need to be taken into account. Due to M 31's high inclination, the bar is difficult to detect in photometry alone. Therefore, detailed kinematic measurements are needed to constrain the possible existence and position of a bar in M 31. Methods: We obtained ≈220 separate fields with the optical integral-field unit spectrograph VIRUS-W, covering the whole bulge region of M 31 and parts of the disk. We derived stellar line-of-sight velocity distributions from the stellar absorption lines, as well as velocity distributions and line fluxes of the emission lines Hβ, [O III] and [N I]. Our data supersede any previous study in terms of spatial coverage and spectral resolution. Results: We find several features that are indicative of a bar in the kinematics of the stars: we see intermediate plateaus in the velocity and the velocity dispersion, and a correlation between the higher moment h3 and the velocity. The gas kinematics is highly irregular, but is consistent with non-triaxial streaming motions caused by a bar. The morphology of the gas shows a spiral pattern, with seemingly lower inclination than the stellar disk. We also look at the ionization mechanisms of the gas, which happen mostly through shocks and not through starbursts. This paper includes data taken at The McDonald Observatory of The University of Texas at Austin. This research was supported by the DFG cluster of excellence "Origin and Structure of the Universe". Full Tables B.4-B.7 are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/611/A38 14. VLT Unit Telescopes Named at Paranal Inauguration Science.gov (United States) 1999-03-01 Southern Cross) and YEPUN (UT4; ye-poon; Sirius), respectively.
"First Light" of UT2
Following the installation of the main mirror in its cell and a 20-hour working session to put the complex secondary mirror and its support in place, the UT2, now Kueyen, achieved (technical) first light in the morning of March 1, 1999, when an image was obtained of a bright star. It showed this telescope to be in good optical shape, and further adjustments of the optical and mechanical systems are expected soon to result in some "astronomical" images. The announcement of this important event was made by the ESO Director during the opening session of the VLT Symposium that was held in Antofagasta during March 1-4, 1999. This meeting attracted over 250 scientists from all over the world. It provided a most useful opportunity to discuss future scientific programmes with the VLT and other large telescopes. The participants were left with the impression of mounting expectations, just four weeks before the first VLT Unit Telescope, Antu (UT1), will receive the first visiting astronomers.

More images from UT1
[ESO PR Photo 17c/99. Caption: This colour composite photo of the Chamaeleon I area is based on six 1-min exposures obtained with VLT UT1 + FORS1 in the V, R and I bands. The sky field measures 6.8 x 11.2 arcmin²; North is up and East is left [1].]
Despite the extensive preparations for the Paranal Inauguration and the VLT Symposium, excellent progress is being made during the final tuning of Antu (UT1) and its instruments for the "hand-over" to the astronomers on April 1, 1999. This involves exposures in many different modes and of different sky regions. Another impressive photo is shown here that was obtained some nights

15. HIGH-PRECISION ASTROMETRY WITH A DIFFRACTIVE PUPIL TELESCOPE
International Nuclear Information System (INIS)
Guyon, Olivier; Eisner, Josh A.; Angel, Roger; Woolf, Neville J.; Bendek, Eduardo A.; Milster, Thomas D.; Mark Ammons, S.; Shao, Michael; Shaklan, Stuart; Levine, Marie; Nemati, Bijan; Pitman, Joe; Woodruff, Robert A.; Belikov, Ruslan
2012-01-01
Astrometric detection and mass determination of Earth-mass exoplanets require sub-μas accuracy, which is theoretically possible with an imaging space telescope using field stars as an astrometric reference. The measurement must, however, overcome astrometric distortions, which are much larger than the photon noise limit. To address this issue, we propose to generate faint stellar diffraction spikes using a two-dimensional grid of regularly spaced small dark spots added to the surface of the primary mirror (PM). Accurate astrometric motion of the host star is obtained by comparing the position of the spikes to the background field stars. The spikes do not contribute to scattered light in the central part of the field and therefore allow unperturbed coronagraphic observation of the star's immediate surroundings. Because the diffraction spikes are created on the PM and imaged on the same focal plane detector as the background stars, astrometric distortions affect equally the diffraction spikes and the background stars and are therefore calibrated.
We describe the technique, detail how the data collected by the wide-field camera are used to derive astrometric motion, and identify the main sources of astrometric error using numerical simulations and analytical derivations. We find that the 1.4 m diameter telescope with the 0.3 deg² field we adopt as a baseline design achieves 0.2 μas single-measurement astrometric accuracy (a short photon-noise sketch appears further below). The diffractive pupil concept thus enables sub-μas astrometry without relying on the accurate pointing, external metrology, or high-stability hardware required by previously proposed high-precision astrometry concepts.

16. Combined 60° Wide-Field Choroidal Thickness Maps and High-Definition En Face Vasculature Visualization Using Swept-Source Megahertz OCT at 1050 nm.
Science.gov (United States)
Mohler, Kathrin J; Draxinger, Wolfgang; Klein, Thomas; Kolb, Jan Philip; Wieser, Wolfgang; Haritoglou, Christos; Kampik, Anselm; Fujimoto, James G; Neubauer, Aljoscha S; Huber, Robert; Wolf, Armin
2015-10-01
To demonstrate ultrahigh-speed swept-source optical coherence tomography (SS-OCT) at 1.68 million A-scans/s for choroidal imaging in normal and diseased eyes over a ∼60° field of view. To investigate and correlate wide-field three-dimensional (3D) choroidal thickness (ChT) and vascular patterns using ChT maps and coregistered high-definition en face images extracted from a single densely sampled Megahertz-OCT (MHz-OCT) dataset. High-definition, ∼60° wide-field 3D datasets consisting of 2088 × 1024 A-scans were acquired using a 1.68 MHz prototype SS-OCT system at 1050 nm based on a Fourier-domain mode-locked laser. Nine subjects (nine eyes) with various chorioretinal diseases or without ocular pathology are presented. Coregistered ChT maps, choroidal summation maps, and depth-resolved en face images referenced to either the retinal pigment epithelium or the choroidal-scleral interface were generated using manual segmentation. Wide-field ChT maps showed a large inter- and intraindividual variance in peripheral and central ChT. In only four of the nine eyes was the location with the largest ChT coincident with the fovea. The anatomy of the large-lumen vessels of the outer choroid seems to play a major role in determining the global ChT pattern. Focal ChT changes with large thickness gradients were observed in some eyes. Different ChT and vascular patterns could be visualized over ∼60° in patients for the first time using OCT. Due to focal ChT changes, a high density of thickness measurements may be favorable. High-definition depth-resolved en face images are complementary to cross sections and thickness maps and enhance the interpretation of different ChT patterns.

17. WIDE-FIELD VLBI OBSERVATIONS OF M31: A UNIQUE PROBE OF THE IONIZED INTERSTELLAR MEDIUM OF A NEARBY GALAXY
International Nuclear Information System (INIS)
Morgan, John S.; Argo, Megan K.; Trott, Cathryn M.; Macquart, Jean-Pierre; Miller-Jones, James; Tingay, Steven J.; Deller, Adam; Middelberg, Enno
2013-01-01
The Very Long Baseline Array was used at 1.6 GHz to observe a target field 50' in diameter including the core of M31. Novel very long baseline interferometry correlation techniques were used to observe 200 sources simultaneously, of which 16 were detected. We classify all 16 as background active galactic nuclei based on their X-ray properties and arcsecond- and mas-scale morphology. The detected sources were then analyzed for evidence of scatter-broadening due to the ionized interstellar medium (ISM) of M31.
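The sub-μas requirement in the diffractive-pupil entry above can be put in context with the standard photon-noise centroiding estimate, sigma ≈ sigma_PSF / sqrt(N_photons). The short Python sketch below is ours rather than the paper's analysis; the 600 nm wavelength and the Gaussian-PSF approximation are assumptions, and the paper's full error budget also contains calibration terms.

    import math

    # Photon-noise floor on a one-axis centroid: sigma ~ sigma_PSF / sqrt(N).
    # Assumed values (not from the abstract): observing wavelength 600 nm,
    # diffraction-limited Gaussian PSF core.
    wavelength = 0.6e-6                              # m
    aperture = 1.4                                   # m, from the abstract
    fwhm_uas = (wavelength / aperture) * (180 / math.pi) * 3600e6  # rad -> micro-arcsec
    sigma_psf = fwhm_uas / 2.355                     # Gaussian sigma from FWHM

    target = 0.2                                     # micro-arcsec goal from the abstract
    n_photons = (sigma_psf / target) ** 2
    print(f"PSF FWHM ~ {fwhm_uas/1000:.0f} mas; ~{n_photons:.1e} photons needed")

Run as written, this gives a PSF core of roughly 88 mas and a few times 10^10 detected photons to reach 0.2 μas, which is why bright target stars and careful systematics control both matter in such designs.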
The detection of a compact background source only 0.25 kpc projected distance from M31* places a constraint on the extent of any extreme scattering region associated with the center of M31. However, the two sources closest to the core show evidence of scatter-broadening consistent with that which would be seen for a compact source observed through the inner disk of our Galaxy, at the inclination of M31. We interpret this as a detection of the ionized ISM of M31 along two lines of sight. With the increases in bandwidth and sensitivity envisaged for future long-baseline interferometers, this should prove to be a remarkably powerful technique for understanding the ionized ISM in external galaxies.

18. Construction of the Advanced Technology Solar Telescope
Science.gov (United States)
Rimmele, T. R.; Keil, S.; McMullin, J.; Knölker, M.; Kuhn, J. R.; Goode, P. R.; Rosner, R.; Casini, R.; Lin, H.; Tritschler, A.; Wöger, F.; ATST Team
2012-12-01
The 4 m Advanced Technology Solar Telescope (ATST) will be the most powerful solar telescope and the world's leading ground-based resource for studying the solar magnetism that controls the solar wind, flares, coronal mass ejections and variability in the Sun's output. The project has entered its construction phase. Major subsystems have been contracted. As its highest-priority science driver, ATST shall provide high-resolution and high-sensitivity observations of the dynamic solar magnetic fields throughout the solar atmosphere, including the corona at infrared wavelengths. With its 4 m aperture, ATST will resolve features at 0.″03 at visible wavelengths and obtain 0.″1 resolution at the magnetically highly sensitive near-infrared wavelengths. A high-order adaptive optics system delivers a corrected beam to the initial set of state-of-the-art, facility-class instrumentation located in the Coudé laboratory facility. The initial set of first-generation instruments consists of five facility-class instruments, including imagers and spectro-polarimeters. The high polarimetric sensitivity and accuracy required for measurements of the elusive solar magnetic fields place strong constraints on the polarization analysis and calibration. Development and construction of a four-meter solar telescope presents many technical challenges, including thermal control of the enclosure, telescope structure and optics, and wavefront control. A brief overview of the science goals and observational requirements of the ATST will be given, followed by a summary of the design status of the telescope and its instrumentation, including the design status of major subsystems such as the telescope mount assembly, enclosure, mirror assemblies, and wavefront correction.

19. Australia to Build Fibre Positioner for the Very Large Telescope
Science.gov (United States)
1998-06-01
The Anglo-Australian Observatory (AAO) at Epping (New South Wales, Australia) has been awarded the contract to build a fibre positioner for the European Southern Observatory's Very Large Telescope (VLT). This new, large astronomical facility is located at the Paranal Observatory in Chile and will feature four Unit Telescopes, each with a main mirror of 8.2-m diameter. This positioner, (affectionately) known as the OzPoz, will form part of the FLAMES facility (the Fibre Large Area Multi-Element Spectrograph), to be mounted on the second Unit Telescope (UT2) of the VLT in 2001. The construction of this facility includes other institutes in Europe, e.g.
Observatoire de Genève (Switzerland) and Observatoire de Meudon (France). The ESO Instrument Division will coordinate the entire project, which will result in an observational capability that is unique in the world.

Optical fibres at astronomical telescopes
Optical fibres have come to play an increasingly important role as transmitters of information, for instance in telephone and computer networks. It may be less well known that they can be used in a similar way to transmit visible and infrared light in astronomical telescopes. Over the past decade, the AAO has been refining its skills in building optical-fibre instruments for its own telescopes, the 3.9-metre Anglo-Australian Telescope and the 1.2-m UK Schmidt Telescope (a telescope dedicated to wide-field surveys). These instruments enable astronomers to study many celestial objects simultaneously, increasing the effectiveness and productivity by enormous factors. The OzPoz positioner uses a robotic arm to set up to 560 optical fibres (developed in collaboration with the Observatoire de Meudon in France) very precisely, to match the positions of galaxies and quasars in the telescope's focal plane. The positional accuracy is about 50 µm (0.05 mm), or 0.08 arcsec on the sky. The fibres siphon the light from these very faint and distant astronomical objects and guide it

20. Robotic telescopes for education and public outreach: the TAROT experience
Science.gov (United States)
Boer, M.; Melchior, A. L.; Mottez, F.; Pennypaker, C.
The Rapid Action Telescope for Transient Objects (TAROT - Télescope à Action Rapide pour les Objets Transitoires) has been used over the past years as a support tool for the teaching of astronomy and physics within the framework of the Hands-On Universe program. TAROT is a fully autonomous 25 cm telescope located at the Calern station of the Observatoire de la Côte d'Azur in France. Since its primary objective is the detection of the optical counterpart of cosmic gamma-ray bursts (GRBs), it features a very rapid (up to 80 deg/sec) mount and a wide field of view (2 deg). Because the occurrence of GRBs is rather low, TAROT is used for other studies, including variable stars and orbital debris. For education and public outreach, TAROT may be used in two ways: 1) full control of the telescope can be taken through a web interface, including the remote monitoring of housekeeping, weather conditions, control of auxiliary equipment (lamps, temperature setting...) and direct viewing of the telescope and of its surroundings; 2) a powerful web interface allows users to send requests for observations; this enables efficient scheduling of the telescope and observation of sources in optimal conditions, including repeated observations of the same location, e.g. for variable stars. As soon as the 2k x 2k images are taken, they are processed, background searches for variability are made, and the data are available through a web interface. All these products may be used or viewed even with a 56 kbps modem connection. Getting the FITS files (instead of JPEG) requires, however, a faster connection, e.g. ADSL. TAROT allows for direct demonstrations of the possibilities of remote-controlled instruments, for the simultaneous monitoring of sources from the ground and space, and for long-term studies in the framework of a scientific project. As an example, the study of orbital debris may be an introduction to an actual problem for space policy and an explanation of the gravitation

1.
PHOTOSPHERIC FLOW FIELD RELATED TO THE EVOLUTION OF THE SUN'S POLAR MAGNETIC PATCHES OBSERVED BY HINODE SOLAR OPTICAL TELESCOPE
Energy Technology Data Exchange (ETDEWEB)
Kaithakkal, Anjali John; Suematsu, Y.; Kubo, M. [Department of Astronomical Science, Graduate University for Advanced Studies (SOKENDAI), Mitaka, Tokyo 181-8588 (Japan); Iida, Y.; Tsuneta, S. [Institute of Space and Astronautical Science (ISAS), Japan Aerospace Exploration Agency (JAXA), Sagamihara, Kanagawa 252-5210 (Japan); Shiota, D., E-mail: [email protected] [Solar-Terrestrial Environment Laboratory, Nagoya University, Nagoya 464-8601 (Japan)
2015-02-01
We investigated the role of photospheric plasma motions in the formation and evolution of polar magnetic patches using time-sequence observations with high spatial resolution. The observations were obtained with the spectropolarimeter on board the Hinode satellite. From a statistical analysis using 75 magnetic patches, we found that they are surrounded by strong converging, supergranulation-associated flows during their apparent lifetime and that the converging flow around the patch boundary is better observed in the Doppler velocity profile in the deeper photosphere. Based on our analysis, we suggest that the like-polarity magnetic fragments in the polar region are advected and clustered by photospheric converging flows, thereby resulting in the formation of polar magnetic patches. Our observations show that, in addition to direct cancellation, magnetic patches decay by fragmentation followed by unipolar disappearance, or by unipolar disappearance without fragmentation. It is possible that the magnetic patches of the existing polarity fragment or diffuse away into smaller elements and eventually cancel out with opposite-polarity fragments that reach the polar region around the solar cycle maximum. This could be one of the possible mechanisms by which the existing polarity decays during the reversal of the polar magnetic field.

2. Wide-Field Washington Photometry of the NGC 5128 Globular Cluster System. II. Large-Scale Properties of the System
Science.gov (United States)
Harris, Gretchen L. H.; Harris, William E.; Geisler, Doug
2004-08-01
Building on the CMT1 photometric database presented in Paper I, in this paper we derive the large-scale properties of the globular cluster system (GCS) in NGC 5128, the nearest giant elliptical and the dominant galaxy in the Centaurus group. In global terms, it has a smaller total population than previously thought: we estimate 980 ± 120 clusters over all magnitudes, yielding a specific frequency S_N = 1.4 ± 0.2 (a quick numerical check follows below), with a steep projected radial distribution σ ~ r^-2. The luminosity distribution of the clusters resembles that of an old, normal GC luminosity function (Gaussian-like with a peak at M_V ≈ -7.4 and a dispersion of ≈1.3 mag), but these parameters are unfortunately quite uncertain because of the system's low population and the heavy field contamination. Using the metallicity-sensitive C-T1 color index, we discuss the metallicity distribution function (MDF) for a subsample of 211 previously identified clusters, all on a homogeneous photometric system. We find the MDF to be strongly bimodal, with metallicity peaks at [Fe/H] = -1.55 and -0.55 and with nearly equal numbers of clusters in each of the metal-poor and metal-rich modes. The combined evidence from the system's low specific frequency, the MDF, and the isophotal shell features in the halo light makes a "major merger" a plausible model for the formation history of this giant E galaxy.
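The specific frequency quoted above can be sanity-checked against its standard definition, S_N = N_cl × 10^(0.4 (M_V + 15)). The snippet below is our check, not part of the paper; the galaxy's absolute magnitude is not given in the abstract, so M_V ≈ -22 is an assumed, roughly canonical value for NGC 5128.

    # Specific frequency of a globular cluster system:
    #   S_N = N_clusters * 10**(0.4 * (M_V + 15))
    n_clusters = 980      # total population estimated in the abstract
    m_v_galaxy = -22.0    # assumed absolute V magnitude (not from the abstract)
    s_n = n_clusters * 10 ** (0.4 * (m_v_galaxy + 15))
    print(f"S_N ~ {s_n:.2f}")  # ~1.6, consistent with the quoted 1.4 +/- 0.2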
However, the progenitor galaxies must have been more gas-rich than in any present-day mergers or starbursts. Finally, we present a list of 327 new cluster candidates not identified in any previous surveys; most of these are in the less well studied bulge region of the galaxy and along the minor axis.

3. Cheap and Sturdy Student Telescopes Made with Plumbing Parts
Science.gov (United States)
Edmonds, J. P.; Brandenburg, G. F.
2010-04-01
This rugged telescope design uses readily available PVC pipe and connectors to house the optics and may be constructed for under $20. The low cost, durability and portability make it ideal for individual student observations in the field.

4. A large sample of shear-selected clusters from the Hyper Suprime-Cam Subaru Strategic Program S16A Wide field mass maps
Science.gov (United States)
Miyazaki, Satoshi; Oguri, Masamune; Hamana, Takashi; Shirasaki, Masato; Koike, Michitaro; Komiyama, Yutaka; Umetsu, Keiichi; Utsumi, Yousuke; Okabe, Nobuhiro; More, Surhud; Medezinski, Elinor; Lin, Yen-Ting; Miyatake, Hironao; Murayama, Hitoshi; Ota, Naomi; Mitsuishi, Ikuyuki
2018-01-01
We present the result of searching for clusters of galaxies based on weak gravitational lensing analysis of the ~160 deg² area surveyed by Hyper Suprime-Cam (HSC) as a Subaru Strategic Program. HSC is a new prime-focus optical imager with a 1.5°-diameter field of view on the 8.2 m Subaru telescope. The superb median seeing of 0.56" on the HSC i-band images allows the reconstruction of high angular resolution mass maps via weak lensing, which is crucial for the weak-lensing cluster search. We identify 65 mass map peaks with a signal-to-noise (S/N) ratio larger than 4.7, and carefully examine their properties by cross-matching the clusters with optical and X-ray cluster catalogs. We find that all the 39 peaks with S/N > 5.1 have counterparts in the optical cluster catalogs, and only 2 out of the 65 peaks are probably false positives. The upper limits of X-ray luminosities from the ROSAT All Sky Survey (RASS) imply the existence of an X-ray underluminous cluster population. We show that the X-rays from the shear-selected clusters can be statistically detected by stacking the RASS images. The inferred average X-ray luminosity is about half that of the X-ray-selected clusters of the same mass. The radial profile of the dark matter distribution derived from the stacking analysis is well modeled by the Navarro-Frenk-White profile with a small concentration parameter value of c_500 ~ 2.5, which suggests that the selection bias on the orientation or the internal structure for our shear-selected cluster sample is not strong.

5. MID-INFRARED SELECTION OF ACTIVE GALACTIC NUCLEI WITH THE WIDE-FIELD INFRARED SURVEY EXPLORER. I. CHARACTERIZING WISE-SELECTED ACTIVE GALACTIC NUCLEI IN COSMOS
Energy Technology Data Exchange (ETDEWEB)
Stern, Daniel; Assef, Roberto J.; Eisenhardt, Peter [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Mail Stop 169-221, Pasadena, CA 91109 (United States); Benford, Dominic J. [NASA Goddard Space Flight Center, Code 665, Greenbelt, MD 20771 (United States); Blain, Andrew [Department of Physics and Astronomy, University of Leicester, LE1 7RH Leicester (United Kingdom); Cutri, Roc; Griffith, Roger L.; Jarrett, T. H.; Masci, Frank; Tsai, Chao-Wei; Yan, Lin [Infrared Processing and Analysis Center, California Institute of Technology, Pasadena, CA 91125 (United States); Dey, Arjun [National Optical Astronomical Observatory, 950 N.
Cherry Ave., Tucson, AZ 85719 (United States); Lake, Sean; Petty, Sara; Wright, E. L. [Physics and Astronomy Department, University of California, Los Angeles, CA 90095 (United States); Stanford, S. A. [Department of Physics, University of California, One Shields Avenue, Davis, CA 95616 (United States); Harrison, Fiona; Madsen, Kristin, E-mail: [email protected] [Space Radiation Laboratory, California Institute of Technology, Pasadena, CA 91125 (United States)
2012-07-01
The Wide-field Infrared Survey Explorer (WISE) is an extremely capable and efficient black hole finder. We present a simple mid-infrared color criterion, W1 - W2 ≥ 0.8 (i.e., [3.4] - [4.6] ≥ 0.8, Vega), which identifies 61.9 ± 5.4 active galactic nucleus (AGN) candidates per deg² to a depth of W2 ≈ 15.0 (a minimal sketch of applying this cut appears below). This implies a much larger census of luminous AGNs than found by typical wide-area surveys, attributable to the fact that mid-infrared selection identifies both unobscured (type 1) and obscured (type 2) AGNs. Optical and soft X-ray surveys alone are highly biased toward only unobscured AGNs, while this simple WISE selection likely identifies even heavily obscured, Compton-thick AGNs. Using deep, public data in the COSMOS field, we explore the properties of WISE-selected AGN candidates. At the mid-infrared depth considered, 160 μJy at 4.6 μm, this simple criterion identifies 78% of Spitzer mid-infrared AGN candidates according to the criteria of Stern et al., and the reliability is 95%. We explore the demographics, multiwavelength properties and redshift distribution of WISE-selected AGN candidates in the COSMOS field.

6. Simulation study of a geometric shape factor technique for estimating earth-emitted radiant flux densities from wide-field-of-view radiation measurements
Science.gov (United States)
Weaver, W. L.; Green, R. N.
1980-01-01
Geometric shape factors were computed and applied to simulated satellite irradiance measurements to estimate Earth-emitted flux densities on global and zonal scales and for areas smaller than the detector field of view (FOV). Wide-field-of-view flat-plate detectors were emphasized, but spherical detectors were also studied. The radiation field was modeled after data from the Nimbus 2 and 3 satellites. At a satellite altitude of 600 km, zonal estimates were in error by 1.0 to 1.2 percent, and global estimates were in error by less than 0.2 percent. Estimates with unrestricted-field-of-view (UFOV) detectors were about the same for Lambertian and limb-darkening radiation models. The opposite was found for restricted-field-of-view detectors. The UFOV detectors are found to be poor estimators of flux density from the total FOV and are shown to be much better estimators of flux density from a circle centered at the FOV with an area significantly smaller than that of the total FOV.

7. The COROT telescope
Science.gov (United States)
Viard, Thierry
2017-11-01
The COROT telescope, whose customer is the French INSU/CNES (Institut National des Sciences de l'Univers / Centre National des Etudes Spatiales), is in fact a very precise and stable imaging instrument, which will be pointed towards fixed areas in the sky (each containing more than 3000 target stars) for periods of at least 5 months, in order to carry out its two missions.

8. Workshop: Neutrino telescopes
International Nuclear Information System (INIS)
Anon.
1990-01-01
Despite being the most elusive of the known particles, neutrinos provide vital new physics insights.
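The WISE colour criterion in the AGN-selection entry above is simple enough to state as code. A minimal sketch, assuming arrays of Vega magnitudes named w1 and w2; the magnitude values are invented placeholders, not real catalogue entries.

    import numpy as np

    # Hypothetical W1 ([3.4 um]) and W2 ([4.6 um]) Vega magnitudes.
    w1 = np.array([14.2, 13.8, 15.1, 12.9])
    w2 = np.array([13.1, 13.5, 14.9, 12.0])

    # The published cut: W1 - W2 >= 0.8 (Vega), applied down to the
    # W2 ~ 15.0 depth considered in the paper.
    agn_candidate = ((w1 - w2) >= 0.8) & (w2 <= 15.0)
    print(agn_candidate)  # [ True False False  True]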
Most neutrino knowledge so far has come from studies using beams from reactors and accelerators, but in recent years important new contributions have resulted from the investigation of natural neutrinos from cosmic rays, nearby stars (the Sun), or distant sources, such as the 1987 supernova. The supernova observations marked the start of a new era in neutrino astronomy, but neutrino telescopes were in any case assured of an important ongoing role.

9. [Galileo and his telescope].
Science.gov (United States)
Strebel, Christoph
2006-01-01
Galileo's publication of observations made with his newly reinvented telescope provoked a fierce debate. In April 1610 Martinus Horky, a young Bohemian astronomer, had an opportunity to make his own observations with Galileo's telescope in the presence of Antonio Magini and other astronomers. Horky and the other witnesses denied the adequacy of Galileo's telescope and therefore the bona fides of his discoveries. Kepler conjectured Horky as well as all his witnesses to be myopic. But Kepler's objection could not stop the publication of Horky's Peregrinatio contra nuncium sidereum (Modena, 1610), the first printed refutation of Galileo's Sidereus nuncius. In his treatise, Horky addresses four questions: 1) Do the four newly observed heavenly bodies actually exist? Horky denies their existence on various grounds: a) God, as every astronomer teaches, has created only seven moveable heavenly bodies, and astronomical knowledge originates in God, too. b) Heavenly bodies are either stars or planets. Galileo's moveable heavenly bodies fit into neither category. c) If they do exist, why have they not already been observed by other scholars? Horky concludes that there are no such heavenly bodies. 2) What are these phenomena? They are purely artefactual, produced by Galileo's telescope. 3) What are they like? Galileo's "stars" are so small as to be almost invisible. Galileo claims that he has measured their distances from each other. This, however, is impossible due to their diminutive size and other observational problems. Hence, Galileo's claim is a further proof that he is a fraud. 4) Why are they? For Galileo they are a chance to earn money, but for astronomers like Horky they are a reason to offer thanks and honour to God. Horky's treatise was favourably received by the enemies of Galileo. But Kepler's critique was devastating. After calling on Kepler in Prague, Horky had to revoke the contents of his book.

10. The gamma-ray Cherenkov telescope for the Cherenkov telescope array
Science.gov (United States)
Tibaldo, L.; Abchiche, A.; Allan, D.; Amans, J.-P.; Armstrong, T. P.; Balzer, A.; Berge, D.; Boisson, C.; Bousquet, J.-J.; Brown, A. M.; Bryan, M.; Buchholtz, G.; Chadwick, P. M.; Costantini, H.; Cotter, G.; Daniel, M. K.; De Franco, A.; De Frondat, F.; Dournaux, J.-L.; Dumas, D.; Ernenwein, J.-P.; Fasola, G.; Funk, S.; Gironnet, J.; Graham, J. A.; Greenshaw, T.; Hervet, O.; Hidaka, N.; Hinton, J. A.; Huet, J.-M.; Jankowsky, D.; Jegouzo, I.; Jogler, T.; Kraus, M.; Lapington, J. S.; Laporte, P.; Lefaucheur, J.; Markoff, S.; Melse, T.; Mohrmann, L.; Molyneux, P.; Nolan, S. J.; Okumura, A.; Osborne, J. P.; Parsons, R. D.; Rosen, S.; Ross, D.; Rowell, G.; Rulten, C. B.; Sato, Y.; Sayède, F.; Schmoll, J.; Schoorlemmer, H.; Servillat, M.; Sol, H.; Stamatescu, V.; Stephan, M.; Stuik, R.; Sykes, J.; Tajima, H.; Thornhill, J.; Trichard, C.; Vink, J.; Watson, J.
J.; White, R.; Yamane, N.; Zech, A.; Zink, A.; Zorn, J.; CTA Consortium
2017-01-01
The Cherenkov Telescope Array (CTA) is a forthcoming ground-based observatory for very-high-energy gamma rays. CTA will consist of two arrays of imaging atmospheric Cherenkov telescopes in the Northern and Southern hemispheres, and will combine telescopes of different types to achieve unprecedented performance and energy coverage. The Gamma-ray Cherenkov Telescope (GCT) is one of the small-sized telescopes proposed for CTA to explore the energy range from a few TeV to hundreds of TeV with a field of view ≳ 8° and an angular resolution of a few arcminutes. The GCT design features dual-mirror Schwarzschild-Couder optics and a compact camera based on densely pixelated photodetectors as well as custom electronics. In this contribution we provide an overview of the GCT project with a focus on prototype development and testing that is currently ongoing. We present results obtained during the first on-telescope campaign in late 2015 at the Observatoire de Paris-Meudon, during which we recorded the first Cherenkov images from atmospheric showers with the GCT multi-anode photomultiplier camera prototype. We also discuss the development of a second GCT camera prototype with silicon photomultipliers as photosensors, and plans toward a contribution to the realisation of CTA.

11. Exploring the Universe with the WorldWide Telescope
Science.gov (United States)
Fay, J. E.
2014-12-01
Microsoft Research WorldWide Telescope is a software platform for exploring the universe. Whether you are a researcher, student or just a casual explorer, WorldWide Telescope uses cutting-edge technology to take you anywhere in the universe and visualize data collected by science programs from across the globe, including NASA great observatories and planetary probes. WWT leverages technologies such as virtual reality headsets, multi-channel full-dome projection and HTML5/WebGL to bring the WWT experience to any device and any scale. We will discuss how to use WWT to browse previously curated data, as well as how to process and visualize your own data, using examples from NASA Mars missions.

12. Last results of technological developments for ultra-lightweight, large aperture, deployable mirror for space telescopes
Science.gov (United States)
Gambicorti, Lisa; D'Amato, Francesco; Vettore, Christian; Duò, Fabrizio; Guercia, Alessio; Patauner, Christian; Biasi, Roberto; Lisi, Franco; Riccardi, Armando; Gallieni, Daniele; Lazzarini, Paolo; Tintori, Matteo; Zuccaro Marchi, Alessandro; Pereira do Carmo, Joao
2017-11-01
The aim of this work is to describe the latest results of new technological concepts for Large Aperture Telescopes Technology (LATT) using thin deployable lightweight active mirrors. This technology is developed under the European Space Agency (ESA) Technology Research Program and can be exploited in all applications based on the use of primary mirrors of space telescopes with large aperture, segmented lightweight telescopes with wide field of view (FOV) and low f/#, and LIDAR telescopes. The reference mission application is a potential future ESA mission, related to a spaceborne DIAL (Differential Absorption Lidar) instrument operating around 935.5 nm with the goal of measuring water vapor profiles in the atmosphere. An Optical BreadBoard (OBB) for LATT has been designed for investigating and testing two critical aspects of the technology: 1) control accuracy in the mirror surface shaping; 2) mirror survivability at launch.
The aim is to evaluate the effective performance of the long-stroke smart actuators used for the mirror control and to demonstrate the effectiveness and reliability of the electrostatic locking (EL) system to restrain the thin shell on the mirror backup structure during launch. The paper presents a comprehensive vision of the breadboard, focusing on how the requirements have driven the design of the whole system and of the various subsystems. The manufacturing process of the thin shell is also presented.

13. Dual-Telescope Multi-Channel Thermal-Infrared Radiometer for Outer Planet Fly-By Missions
Science.gov (United States)
Aslam, Shahid; Amato, Michael; Bowles, Neil; Calcutt, Simon; Hewagama, Tilak; Howard, Joseph; Howett, Carly; Hsieh, Wen-Ting; Hurford, Terry; Hurley, Jane
2016-01-01
The design of a versatile dual-telescope thermal-infrared radiometer spanning the spectral wavelength range 8-200 microns, in five spectral pass bands, for outer planet fly-by missions is described. The dual-telescope design switches between a narrow field of view and a wide field of view to provide optimal spatial resolution images within a range of spacecraft encounter distances to the target. The switchable dual-field-of-view system uses an optical configuration based on the axial rotation of a source-select mirror along the optical axis. The optical design, spectral performance, radiometric accuracy, and retrieval estimates of the instrument are discussed. This is followed by an assessment of the surface coverage performance at various spatial resolutions, using the planned NASA Europa Mission 13-F7 fly-by trajectories as a case study.

14. DEFF Research Database (Denmark)
Evangelista, Y.; Campana, R.; Del Monte, E.
2012-01-01
The Large Observatory For X-ray Timing (LOFT), selected by ESA as one of the four Cosmic Vision M3 candidate missions to undergo an assessment phase, will revolutionize the study of compact objects in our galaxy and of the brightest supermassive black holes in active galactic nuclei. The Large Area Detector (LAD), carrying an unprecedented effective area of 10 m², is complemented by a coded-mask Wide Field Monitor, in charge of monitoring a large fraction of the sky potentially accessible to the LAD, to provide the history and context for the sources observed by the LAD and to trigger its observations...

15. The TACTIC atmospheric Cherenkov imaging telescope
International Nuclear Information System (INIS)
Koul, R.; Tickoo, A.K.; Kaul, S.K.; Kaul, S.R.; Kumar, N.; Yadav, K.K.; Bhatt, N.; Venugopal, K.; Goyal, H.C.; Kothari, M.; Chandra, P.; Rannot, R.C.; Dhar, V.K.; Koul, M.K.; Kaul, R.K.; Kotwal, S.; Chanchalani, K.; Thoudam, S.; Chouhan, N.; Sharma, M.; Bhattacharyya, S.; Sahayanathan, S.
2007-01-01
The TACTIC (TeV Atmospheric Cherenkov Telescope with Imaging Camera) γ-ray telescope, equipped with a light collector of area ∼9.5 m² and a medium-resolution imaging camera of 349 pixels, has been in operation at Mt. Abu, India, since 2001. This paper describes the main features of its various subsystems and its overall performance with regard to (a) the tracking accuracy of its two-axis drive system, (b) the spot size of the light collector, (c) the back-end signal-processing electronics and topological trigger-generation scheme, (d) the data acquisition and control system and (e) the relative and absolute gain calibration methodology. Using a trigger field of view of 11x11 pixels (∼3.4° x 3.4°), the telescope records a cosmic-ray event rate of ∼2.5 Hz at a typical zenith angle of 15°.
Monte Carlo simulation results are also presented in the paper for comparing the expected performance of the telescope with actual observational results. The consistent detection of a steady signal from the Crab Nebula above ∼1.2 TeV energy, at a sensitivity level of ∼5.0σ in ∼25 h, along with the excellent matching of its energy spectrum with that obtained by other groups, confirms that the performance of the TACTIC telescope is quite stable and reliable. Furthermore, encouraged by the detection of strong γ-ray signals from Mrk 501 (during 1997 and 2006 observations) and Mrk 421 (during 2001 and 2005-2006 observations), we believe that there is considerable scope for the TACTIC telescope to monitor similar TeV γ-ray emission activity from other active galactic nuclei on a long-term basis.

16. A portable extruder for in situ wide angle x-ray scattering study on multi-dimensional flow field induced crystallization of polymer
Science.gov (United States)
Chang, Jiarui; Wang, Zhen; Tang, Xiaoliang; Tian, Fucheng; Ye, Ke; Li, Liangbin
2018-02-01
We have designed and constructed a portable extruder with a rotatable mandrel, which can be employed to study multi-dimensional flow field (MDFF) induced crystallization of polymers, combined with in situ wide angle x-ray scattering (WAXS). With the piston driving the melt sample to flow along the channel, a direct axial shear field is achieved. At the same time, the central mandrel keeps rotating at a stable speed, providing the sample with an additional circumferential shear field. By presetting different proportions of the two shear fields, namely axial and circumferential, various flow states of the sample can be obtained, which makes it capable of investigating the effects of MDFF on polymer crystallization. We have performed an in situ WAXS experiment on MDFF-induced crystallization of isotactic polypropylene based on the portable extruder at the beamline BL16B of the Shanghai Synchrotron Radiation Facility. The rheological and structural information is collected simultaneously, which demonstrates the viability of the portable extruder for regulating MDFF and can provide guidance for polymer processing.

17. Dark Matter Searches with the Fermi Large Area Telescope
International Nuclear Information System (INIS)
Meurer, Christine
2008-01-01
The Fermi Gamma-Ray Space Telescope, successfully launched on June 11th, 2008, is the next-generation satellite experiment for high-energy gamma-ray astronomy. The main instrument, the Fermi Large Area Telescope (LAT), with a wide field of view (>2 sr), a large effective area (>8000 cm² at 1 GeV), sub-arcminute source localization, a large energy range (20 MeV-300 GeV) and a good energy resolution (close to 8% at 1 GeV), has excellent potential to either discover or constrain a Dark Matter signal. The Fermi LAT team pursues complementary searches for signatures of particle Dark Matter in different search regions such as the galactic center, galactic satellites and subhalos, the Milky Way halo, extragalactic regions, as well as the search for spectral lines. In these proceedings we examine the potential of the LAT to detect gamma rays coming from Weakly Interacting Massive Particle annihilations in these regions, with special focus on the galactic center region.

18. Sunyaev-Zeldovich Predictions for the Atacama Cosmology Telescope
Science.gov (United States)
Menanteau, Felipe; Hughes, J. P.; Jimenez, R.; Barkhouse, W.; Berta, Z.; Hansen, S.; Hernandez-Monteagudo, C.; Kosowsky, A.; Lin, Y.
T.; Moodley, K.; Ngeow, C.; Roche, N.; Spergel, D.; Tucker, D.; Verde, L.
2007-05-01
We present predictions for the microwave sky in a low-extinction region centered near RA = 23:00 and Dec = -55:12, which will be surveyed in the coming year at 145 GHz by the Atacama Cosmology Telescope (ACT, PI: Lyman Page) and in the X-ray band by XMM-Newton (PI: Hans Boehringer). The predictions are based on Sunyaev-Zeldovich distortions drawn from optical data collected by the Blanco Cosmology Survey (BCS). We also compare the predictions with X-ray data from the ROSAT All Sky Survey. The BCS (PI: Joe Mohr) is a NOAO large, wide-field survey project that has been awarded 45 nights on the CTIO Blanco 4-meter telescope to image two 50-square-degree patches of the southern sky in four bands (griz). The survey began in 2005 and has completed two (out of three) years of data taking. A preliminary automated image reduction and analysis pipeline for the BCS data is briefly summarized. Financial support was provided by the NSF under the PIRE program (OISE-0530095).

19. Development of a mid-sized Schwarzschild-Couder Telescope for the Cherenkov Telescope Array
Energy Technology Data Exchange (ETDEWEB)
Cameron, Robert A.
2012-06-28
The Cherenkov Telescope Array (CTA) is a ground-based observatory for very high-energy (10 GeV to 100 TeV) gamma rays, planned for operation starting in 2018. It will be an array of dozens of optical telescopes, known as Atmospheric Cherenkov Telescopes (ACTs), of 8 m to 24 m diameter, deployed over an area of more than 1 square km, to detect flashes of Cherenkov light from showers initiated in the Earth's atmosphere by gamma rays. CTA will have improved angular resolution, a wider energy range, larger fields of view and an order of magnitude improvement in sensitivity over current ACT arrays such as H.E.S.S., MAGIC and VERITAS. Several institutions have proposed a research and development program to eventually contribute 36 medium-sized telescopes (9 m to 12 m diameter) to CTA to enhance and optimize its science performance. The program aims to construct a prototype of an innovative Schwarzschild-Couder telescope (SCT) design that will allow much smaller and less expensive cameras and much larger fields of view than conventional Davies-Cotton designs, and will also include design and testing of camera electronics for the necessary advances in performance, reliability and cost. We report on the progress of the mid-sized SCT development program.

20. Calibration and testing of a prototype of the JEM-EUSO telescope on the Telescope Array site
Directory of Open Access Journals (Sweden)
2013-06-01
The aim of the TA-EUSO project is to install a prototype of the JEM-EUSO telescope on the Telescope Array site at Black Rock Mesa, Utah, and perform observations of natural and artificial ultraviolet light. The detector consists of one Photo Detector Module (PDM), identical to the 137 present on the JEM-EUSO focal surface. Each PDM is composed of 36 Hamamatsu multi-anode photomultipliers (64 channels per tube), for a total of 2304 channels. Front-end readout is performed by 36 ASICs, with trigger and readout tasks performed by two FPGA boards that send the data to a CPU and storage system. Two square Fresnel lenses, 1 meter on a side, provide a field of view of 8 degrees. The telescope will be housed in a container located in front of the fluorescence detector of the Telescope Array collaboration, looking in the direction of the ELF (Electron Light Source) and CLF (Central Laser Facility).
The aim of the project is to calibrate the response function of the EUSO telescope against the TA fluorescence detector in the presence of a shower of known intensity and distribution. An initial run of about six months starting from the end of 2012 is foreseen, during which we expect to observe, triggered by the TA electronics, a few cosmic-ray events which will be used to further refine the calibration of the EUSO-Ground with TA. Medium-term plans include an increase in the number of PDMs and therefore of the field of view.

1. Analysis of polarization introduced due to the telescope optics of the Thirty Meter Telescope
Science.gov (United States)
Anche, Ramya Manjunath; Sen, Asoke Kumar; Anupama, Gadiyara Chakrapani; Sankarasubramanian, Kasiviswanathan; Skidmore, Warren
2018-01-01
An analytical model has been developed to estimate the polarization effects, such as instrumental polarization (IP), crosstalk (CT), and depolarization, due to the optics of the Thirty Meter Telescope. These are estimated for the unvignetted field of view and the wavelengths of interest. The model estimates an IP of 1.26% and a CT of 44% at the Nasmyth focus of the telescope at the wavelength of 0.6 μm, at field angle zero, with the telescope pointing to zenith (a minimal Mueller-matrix sketch of these two quantities appears below). Mueller matrices have been estimated for the primary, secondary, and Nasmyth mirrors. It is found that some of the Mueller matrix elements of the primary and secondary mirrors show a fourfold azimuthal antisymmetry, which indicates that the polarization at the Cassegrain focus is negligible. At the inclined Nasmyth mirror, there is no azimuthal antisymmetry in the matrix elements, and this results in nonzero values for IP and CT, which would negatively impact polarization measurements at the telescope focus. The averaged Mueller matrix is estimated at the Nasmyth focus at different instrument ports and various zenith angles of the telescope. The variation in the Mueller matrix elements for different coatings is also estimated. The impact of this polarization effect on the science case requirements is discussed. This analysis will help in setting precise requirements for future instruments with polarimetric capability.

2. Choosing and Using a Refracting Telescope
CERN Document Server
English, Neil
2011-01-01
The refracting telescope has a long and illustrious past. Here's what the author says about early telescopes and today's refractors: "Four centuries ago, a hitherto obscure Italian scientist turned a home-made spyglass towards the heavens. The lenses he used were awful by modern standards, inaccurately figured and filled with the scars of their perilous journey from the furnace to the finishing workshop. Yet, despite these imperfections, they allowed him to see what no one had ever seen before - a universe far more complex and dynamic than anyone had dared imagine. But they also proved endlessly useful in the humdrum of human affairs. For the first time ever, you could spy on your neighbor from a distance, or monitor the approach of a war-mongering army, thus deciding the fate of nations. "The refractor is without doubt the prince of telescopes. Compared with all other telescopic designs, the unobstructed view of the refractor enables it to capture the sharpest, highest contrast images and the wides...

3. The Future of Small Telescopes In The New Millennium. Volume II - The Telescopes We Use
Science.gov (United States)
Oswalt, T. D.
2003-06-01
An invaluable reference for any student, scientist or administrator using small telescopes for research.
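The instrumental polarization and crosstalk figures in the Thirty Meter Telescope entry above come from mirror Mueller matrices. The sketch below shows, for a single metallic reflection in one common sign convention, how both quantities are read off such a matrix; the reflectances and retardance are illustrative guesses, not the coating model used in the paper, so the printed numbers will not reproduce the paper's 1.26% and 44%.

    import numpy as np

    def mirror_mueller(Rs, Rp, delta):
        """Mueller matrix of one reflection with s/p intensity reflectances
        Rs, Rp and retardance delta (radians), in a common convention."""
        a, b, c = Rs + Rp, Rs - Rp, 2.0 * np.sqrt(Rs * Rp)
        return 0.5 * np.array([
            [a,   b,   0.0,                 0.0               ],
            [b,   a,   0.0,                 0.0               ],
            [0.0, 0.0,  c * np.cos(delta),  c * np.sin(delta) ],
            [0.0, 0.0, -c * np.sin(delta),  c * np.cos(delta) ],
        ])

    # Illustrative bare-metal-like values (assumptions, not TMT's coating):
    M = mirror_mueller(Rs=0.86, Rp=0.84, delta=np.radians(170.0))
    ip = M[1, 0] / M[0, 0]        # polarization produced from unpolarized input
    ct = abs(M[3, 2] / M[0, 0])   # magnitude of linear (U) -> circular (V) leakage;
                                  # the sign depends on the convention chosen
    print(f"IP ~ {100 * ip:.2f}%, U->V crosstalk ~ {100 * ct:.0f}%")

A full telescope model multiplies one such matrix per mirror (primary, secondary, Nasmyth), with rotations between them, which is how the inclined Nasmyth flat ends up dominating the IP and CT budgets.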
An essential collection of data and opinions for those charged with setting scientific and funding priorities. This three-volume set, The Future of Small Telescopes in the New Millennium, details the essential roles that small telescopes should play in 21st century science and how their future productivity can be maximized. Over 70 experts from all corners of the international astronomical community have created a definitive reference on the present and future of "big science with small telescopes." Despite highly publicized closures of telescopes smaller than 4 m in aperture at national facilities and their omission from national science priority studies, the oft-lamented demise of the small telescope has been greatly exaggerated. In fact, the future of these workhorses of astronomy will be brighter than ever if creative steps are taken now. This three-volume set defines the essential roles that small telescopes should play in 21st century science and the ways in which a productive future for them can be realized. A wide cross-section of the astronomical community has contributed to a definitive assessment of the present and a vision for the future. Volume 2: The Telescopes We Use. Small cost-effective optical, radio- and space-based facilities face similar problems in scientific prioritization and funding. Volume 2 highlights how current small facilities are evolving to meet the scientific priorities and economic realities of the 21st century through standardization of instrumentation, use of off-the-shelf technology, specialization, optical improvements, new modes of scheduling, automation, and internet access. The Future of Small Telescopes in the New Millennium is a fundamental resource for those looking to undertake new projects with small telescopes, for those responsible for their operation, and for those called upon to help set scientific priorities.

4. The Ooty Wide Field Array
) studies of the propagation of plasma irregularities through the inner heliosphere and (3) blind surveys for transient sources. More details on the upgrade, as well as on the expected science uses, can be found in other papers in this special ...

5. CsI Calorimeter for a Compton-Pair Telescope
Science.gov (United States)
Grove, Eric J.
We propose to build and test a hodoscopic CsI(Tl) scintillating-crystal calorimeter for a medium-energy γ-ray Compton and pair telescope. The design and technical approach for this calorimeter relies heavily on heritage from the Fermi LAT CsI Calorimeter, but it dramatically improves the low-energy performance of that design by reading out the scintillation light with silicon photomultipliers (SiPMs), making the technology developed for Fermi applicable in the Compton regime. While such a hodoscopic calorimeter is useful for an entire class of medium-energy γ-ray telescope designs, we propose to build it explicitly to support beam tests and a balloon flight of the Proto-ComPair telescope, the development and construction of which was funded in a four-year APRA program beginning in 2015 ("ComPair: Steps to a Medium Energy γ-ray Mission" with PI J. McEnery of GSFC). That award did not include funding for its CsI calorimeter subsystem, and this proposal is intended to cover that gap. ComPair is a MIDEX-class instrument concept to perform a high-sensitivity survey of the γ-ray sky from 0.5 MeV to 500 MeV (a short sketch of the underlying Compton kinematics follows).
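The Compton half of a Compton-pair telescope like ComPair rests on standard Compton kinematics: the photon's scatter angle follows from the energy deposited at the scatter site and the energy absorbed downstream. A minimal sketch, with hypothetical deposit energies of our choosing:

    import math

    M_E_C2 = 0.511  # electron rest energy, MeV

    def compton_scatter_angle(e_scatter, e_absorbed):
        """Scatter angle (degrees) reconstructed from the tracker deposit and
        the absorbed scattered-photon energy:
        cos(theta) = 1 - m_e c^2 * (1/E' - 1/E0)."""
        e0 = e_scatter + e_absorbed            # incident photon energy
        cos_theta = 1.0 - M_E_C2 * (1.0 / e_absorbed - 1.0 / e0)
        return math.degrees(math.acos(cos_theta))

    # e.g. a 1.0 MeV photon leaving 0.3 MeV at the scatter site:
    print(f"{compton_scatter_angle(0.3, 0.7):.1f} deg")  # ~38.6 deg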
ComPair is designed to provide a dramatic increase in sensitivity relative to previous instruments in this energy range (predominantly INTEGRAL/SPI and Compton COMPTEL), with the same transformative sensitivity increase, and corresponding scientific return, that the Fermi Large Area Telescope provided relative to Compton EGRET. To enable transformative science over a broad range of MeV energies and with a wide field of view, ComPair is a combined Compton telescope and pair telescope employing a silicon-strip tracker (for Compton scattering and pair conversion and tracking), a solid-state CdZnTe calorimeter (for Compton absorption) and a CsI calorimeter (for pair calorimetry), surrounded by a plastic scintillator anti-coincidence detector. Under the current proposal, we will complete the detailed design, assembly, and test of the CsI calorimeter for the risk

6. Deep Sky Diving with the ESO New Technology Telescope
Science.gov (United States)
1998-01-01
Preparations for future cosmological observations with the VLT
Within a few months, the first 8.2-meter Unit Telescope of the ESO Very Large Telescope (VLT) array will open its eye towards the sky above the Atacama desert. As documented by recent Press Photos from ESO, the construction work at the Paranal VLT Observatory is proceeding rapidly. Virtually all of the telescope components, including the giant Zerodur mirror (cf. ESO PR Photos 35a-l/97), are now on the mountain. While the integration of the telescope and its many optical, mechanical and electronic components continues, astronomers in the ESO member countries and at ESO are now busy defining the observing programmes that will be carried out with the new telescope, soon after it enters into operation. In this context, new and exciting observations have recently been obtained with the 3.5-m New Technology Telescope at the ESO La Silla Observatory, 600 km to the south of Paranal.

How to record the faintest and most remote astronomical objects
With its very large mirror surface (and correspondingly great light-collecting power), as well as an unsurpassed optical quality, the VLT will be able to look exceedingly far out into the Universe, well beyond current horizons. The best technique to record the faintest possible light, and thus the most remote celestial objects, is to combine large numbers of exposures of the same field taken with slightly different telescope pointings (a toy stacking sketch appears below). This increases the total number of photons recorded, and by imaging the stars and galaxies on different areas (pixels) of the detector, the signal-to-noise ratio and hence the visibility of the faintest objects is improved. The famous Hubble Deep Field images were obtained in this way by combining over 300 single exposures, and they show myriads of faint galaxies in the distant realms of the Universe.

The NTT as test bench for the VLT
ESO is in the fortunate situation of possessing a 'prototype' model of the Very Large Telescope, the 3.5-m New

7. The GRASP telescope
Science.gov (United States)
Bignami, G. F.; Dean, A. J.; Durouchoux, Ph.; Hurley, K.; Lund, N.; McBreen, B.; Schönfelder, V.; Swanenburg, B. N.; Tomaschek, G.; Winkler, C.
1989-01-01
The GRASP mission (Gamma-Ray Astronomy with Spectroscopy and Positioning) addresses the scientific goals of fine spectroscopy with imaging and accurate positioning of gamma-ray sources, an unexplored area within gamma-ray astronomy. The assessment of GRASP as a future space astronomy mission in the mid-1990s has led to the design of the instrument outlined in this article.
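The exposure-combination technique described in the NTT entry above can be illustrated with a toy stack: frames taken at slightly different pointings are shifted back to a common grid and averaged, beating the background noise down by roughly sqrt(N). Everything below (frame size, offsets, noise level, the single synthetic "star") is invented for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    truth = np.zeros((64, 64))
    truth[32, 32] = 100.0                         # one synthetic point source

    offsets = [(0, 0), (1, 2), (-2, 1), (3, -1)]  # dither pattern, pixels
    frames = [np.roll(np.roll(truth, dy, 0), dx, 1)
              + rng.normal(0.0, 5.0, truth.shape)  # sky noise, sigma = 5
              for dy, dx in offsets]

    # Undo the known offsets and co-add: noise drops ~ sqrt(4) = 2x here.
    stack = np.mean([np.roll(np.roll(f, -dy, 0), -dx, 1)
                     for f, (dy, dx) in zip(frames, offsets)], axis=0)
    print(f"stacked background noise ~ {stack[:16, :16].std():.1f} (single frame: 5.0)")

Real pipelines use sub-pixel registration and outlier-rejecting combinations rather than plain means, but the sqrt(N) gain in depth is the same idea.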
Thus GRASP is a third-generation gamma-ray telescope and is designed to operate as a high-quality spectral imager in the mid-1990s when, following the GRO, SIGMA, and GAMMA-1 missions, there will be a requirement for a more sophisticated instrument to maintain the momentum of advance in gamma-ray astronomy. The telescope will be capable of locating point sources with a precision of typically 1 arcmin, whilst making a fine spectral analysis (E/ΔE ~ 1000) of any gamma-ray line features. The high sensitivity of this instrument and the long (>2 year) lifetime of the mission will enable a large number (~1000) of astronomical objects to be studied. The GRASP mission has the potential to move gamma-ray astronomy from an era of basic exploration to one in which detailed and novel measurements can be used to gain a better understanding of many astrophysical problems.

8. Antares Reference Telescope System
International Nuclear Information System (INIS)
Viswanathan, V.K.; Kaprelian, E.; Swann, T.; Parker, J.; Wolfe, P.; Woodfin, G.; Knight, D.
1983-01-01
Antares is a 24-beam, 40-TW carbon-dioxide laser-fusion system currently nearing completion at the Los Alamos National Laboratory. The 24 beams will be focused onto a tiny target (typically 300 to 1000 μm in diameter) located approximately at the center of a 7.3-m-diameter by 9.3-m-long vacuum (10⁻⁶ torr) chamber. The design goal is to position the targets to within 10 μm of a selected nominal position, which may be anywhere within a fixed spherical region 1 cm in diameter. The Antares Reference Telescope System is intended to help achieve this goal for alignment and viewing of the various targets used in the laser system. The Antares Reference Telescope System consists of two similar electro-optical systems positioned in a near-orthogonal manner in the target chamber area of the laser. Each of these consists of four subsystems: (1) a fixed 9X optical imaging subsystem which produces an image of the target at the vidicon; (2) a reticle projection subsystem which superimposes an image of the reticle pattern at the vidicon; (3) an adjustable front-lighting subsystem which illuminates the target; and (4) an adjustable back-lighting subsystem which can also be used to illuminate the target. The various optical, mechanical, and vidicon design considerations and trade-offs are discussed. The final system chosen (which is being built) and its current status are described in detail.

9. Deep space telescopes
CERN Multimedia
CERN. Geneva
2006-01-01
The short series of seminars will address results and aims of current and future space astrophysics as the cultural framework for the development of deep space telescopes. It will then present such new tools, as they are currently available to, or imagined by, the scientific community, in the context of the science plans of ESA and of all major world space agencies. Ground-based astronomy, in the 400 years since Galileo's telescope, has given us a profound phenomenological comprehension of our Universe, but has traditionally been limited to the narrow band(s) to which our terrestrial atmosphere is transparent. Celestial objects, however, do not care about our limitations, and distribute most of the information about their physics throughout the complete electromagnetic spectrum. Such information is there for the taking, from millimeter wavelengths to gamma rays. Forty years of astronomy from space, covering now most of the e.m. spectrum, have thus given us a better understanding of our physical Universe than t...
10. Holographic Optical Elements as Scanning Lidar Telescopes
Science.gov (United States)
Schwemmer, Geary K.; Rallison, Richard D.; Wilkerson, Thomas D.; Guerra, David V.
2005-01-01
We have developed and investigated the use of holographic optical elements (HOEs) and holographic transmission gratings for scanning lidar telescopes. For example, rotating a flat HOE in its own plane with the focal spot on the rotation axis makes a very simple and compact conical scanning telescope. We developed and tested transmission and reflection HOEs for use at the first three harmonic wavelengths of Nd:YAG lasers. The diffraction efficiency, diffraction angle, focal length, focal spot size and optical losses were measured for several HOEs and holographic gratings and found to be suitable for use in lidar receiver telescopes; in many cases these elements could also serve as the final collimating and beam-steering optic for the laser transmitter. Two lidar systems based on this technology have been designed, built, and successfully tested in atmospheric science applications. This technology will enable future spaceborne lidar missions by significantly lowering the size, weight, power requirement and cost of a large-aperture, narrow-field-of-view scanning telescope.

11. Fusion of Telescopic and Doppler Radar Data
Science.gov (United States)
Navara, M.; Matousek, M.; Drbohlav, O.
2014-09-01
We study the possibilities of observing satellites in circular LEO orbits simultaneously with a telescope and a bistatic continuous-wave Doppler radar. Telescopic images allow for trajectory determination except for the distance (and hence the height). Assuming a circular orbit, the height can be computed from the angular speed (a minimal numerical inversion of this relation is sketched at the end of this section), but this is often impossible for LEO objects which do not remain in the field of view during the whole exposure time. To restore the missing information, we use Doppler radar data from a radio astronomy network originally designed for the detection of meteors. Using simulated perturbations of real radar data, we studied their influence on the estimates of (i) permanent parameters of the trajectory (orbital elements), (ii) instantaneous parameters of the trajectory, and (iii) distance and height estimates if the other parameters are given by the telescopic data. We derived recommendations for the optimal positions of the transmitter and receivers leading to the best resolution. We also discuss possible ways of improving this technique. Fusion results are shown on a suite of several matched radar and telescopic satellite fly-over data.

12. Cost Modeling for Space Telescope
Science.gov (United States)
Stahl, H. Philip
2011-01-01
Parametric cost models are an important tool for planning missions, comparing concepts and justifying technology investments. This paper presents on-going efforts to develop single-variable and multi-variable cost models for the space telescope optical telescope assembly (OTA). These models are based on data collected from historical space telescope missions. Standard statistical methods are used to derive CERs for OTA cost versus aperture diameter and mass. The results are compared with previously published models.

13. The Large Millimeter Telescope - Gran Telescopio Milimetrico
Science.gov (United States)
Irvine, W. M.; Schloerb, F. P.; Carramiñana, A.; Carrasco, L.
2004-11-01

The Large Millimeter Telescope/Gran Telescopio Milimetrico (LMT) project is a collaboration between the University of Massachusetts and the Instituto Nacional de Astrofisica, Óptica y Electrónica to build a 50 m diameter telescope that will have good efficiency at wavelengths as short as 1 mm. The LMT will have an overall effective surface accuracy of 70 micrometers and an ultimate pointing accuracy of better than 1 arcsec, and will thus be the largest millimeter-wavelength telescope in the world. The LMT site is Sierra Negra in the state of Puebla, at 4,640 meters above sea level in Central Mexico. At 18° 59' N latitude, it offers good sky coverage of both hemispheres. The normally low humidity will allow operation of the radio telescope at frequencies as high as 345 GHz. The LMT will make use of recent advances in structural design and active control of surface elements to achieve the required surface and pointing accuracy. At the site the alidade has been erected and the back structure for the main reflector has been assembled, while the monitor and control system has been successfully tested on another telescope. The schedule calls for acceptance tests in 2006. The initial complement of instruments will include a 32-element heterodyne focal plane array at 3 mm; a large-format focal plane bolometer array; a unique wide-band receiver and spectrometer to determine the redshifts of primordial galaxies; and a 4-element receiver for the 1 mm band. With its excellent sensitivity and mapping speed, the LMT/GTM will be a powerful facility for planetary science. In particular, it will enable key observations of comets, planetary atmospheres, asteroids and KBOs.

14. PerkinElmer Lambda 950 Measurements in Support of NASA's Hubble Space Telescope

Science.gov (United States)

Miller, Kevin H.; Quijada, Manuel A.

2014-01-01

We present visible spectroscopy measurements using the Pe
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6829801201820374, "perplexity": 6870.166101771128}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267867304.92/warc/CC-MAIN-20180624234721-20180625014721-00107.warc.gz"}
https://mailman.ntg.nl/pipermail/ntg-context/2004/007311.html
[NTG-context] more nath patches

Hans Hagen pragma at wxs.nl
Tue Oct 26 16:16:50 CEST 2004

```
Christopher Creutzig wrote:

> \setbox\nathbox\currstyle@hbox{%\vrule\!!height 2\mex\!!width 0pt

since you are going the fast and efficient way (\using \!!dimena etc),
0pt can become \zeropoint (less space interference as well)

> \!!dimenf=\fontdimen8\textfont2

\!!dimenf\fontdimen8\textfont\syfam is more independent; maybe i should
define symbolic names for the font dimens as well (no time to make a list now)

> (\!!dimeni+0.5\fracrulethickness@)\relax

is \fracrulethickness@ a nath specific thing?

> \hfill\,

maybe you need a % after this line

> I do assume the whole thing could be done with about half as many lines
> of code, but I was glad to finally understand how TeX typesets fractions
> and just ignored beautifying the code. Also note that the snippet uses

it's not that bad, tricky code seldom looks nice

Hans
```
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9717342853546143, "perplexity": 14340.141105739052}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084888135.38/warc/CC-MAIN-20180119204427-20180119224427-00729.warc.gz"}
https://nbviewer.ipython.org/github/gpeyre/numerical-tours/blob/master/matlab/sparsity_2_cs_images.ipynb
# Compressed Sensing of Images

Important: Please read the installation page for details about how to install the toolboxes. $\newcommand{\dotp}[2]{\langle #1, #2 \rangle}$ $\newcommand{\enscond}[2]{\lbrace #1, #2 \rbrace}$ $\newcommand{\pd}[2]{ \frac{ \partial #1}{\partial #2} }$ $\newcommand{\umin}[1]{\underset{#1}{\min}\;}$ $\newcommand{\umax}[1]{\underset{#1}{\max}\;}$ $\newcommand{\uargmin}[1]{\underset{#1}{argmin}\;}$ $\newcommand{\norm}[1]{\|#1\|}$ $\newcommand{\abs}[1]{\left|#1\right|}$ $\newcommand{\choice}[1]{ \left\{ \begin{array}{l} #1 \end{array} \right. }$ $\newcommand{\pa}[1]{\left(#1\right)}$ $\newcommand{\diag}[1]{{diag}\left( #1 \right)}$ $\newcommand{\qandq}{\quad\text{and}\quad}$ $\newcommand{\qwhereq}{\quad\text{where}\quad}$ $\newcommand{\qifq}{ \quad \text{if} \quad }$ $\newcommand{\qarrq}{ \quad \Longrightarrow \quad }$ $\newcommand{\ZZ}{\mathbb{Z}}$ $\newcommand{\CC}{\mathbb{C}}$ $\newcommand{\RR}{\mathbb{R}}$ $\newcommand{\EE}{\mathbb{E}}$ $\newcommand{\Zz}{\mathcal{Z}}$ $\newcommand{\Ww}{\mathcal{W}}$ $\newcommand{\Vv}{\mathcal{V}}$ $\newcommand{\Nn}{\mathcal{N}}$ $\newcommand{\NN}{\mathcal{N}}$ $\newcommand{\Hh}{\mathcal{H}}$ $\newcommand{\Bb}{\mathcal{B}}$ $\newcommand{\Ee}{\mathcal{E}}$ $\newcommand{\Cc}{\mathcal{C}}$ $\newcommand{\Gg}{\mathcal{G}}$ $\newcommand{\Ss}{\mathcal{S}}$ $\newcommand{\Pp}{\mathcal{P}}$ $\newcommand{\Ff}{\mathcal{F}}$ $\newcommand{\Xx}{\mathcal{X}}$ $\newcommand{\Mm}{\mathcal{M}}$ $\newcommand{\Ii}{\mathcal{I}}$ $\newcommand{\Dd}{\mathcal{D}}$ $\newcommand{\Ll}{\mathcal{L}}$ $\newcommand{\Tt}{\mathcal{T}}$ $\newcommand{\si}{\sigma}$ $\newcommand{\al}{\alpha}$ $\newcommand{\la}{\lambda}$ $\newcommand{\ga}{\gamma}$ $\newcommand{\Ga}{\Gamma}$ $\newcommand{\La}{\Lambda}$ $\newcommand{\Si}{\Sigma}$ $\newcommand{\be}{\beta}$ $\newcommand{\de}{\delta}$ $\newcommand{\De}{\Delta}$ $\newcommand{\phi}{\varphi}$ $\newcommand{\th}{\theta}$ $\newcommand{\om}{\omega}$ $\newcommand{\Om}{\Omega}$

This tour explores compressed sensing of natural images, using different sparsity priors over a wavelet basis.

In [2]: addpath('toolbox_signal')

## Low Pass Linear Measures

We first make use of $P$ low pass linear measurements to remove the low frequency content of the image. Natural images are not only sparse over a wavelet domain: they also exhibit a fast decay of the coefficients through the scales. The coarse (low pass) wavelets carry much of the image energy. It thus makes sense to measure the low pass coefficients directly.

We load an image $f \in \RR^{n^2}$ of $n \times n$ pixels.

In [3]: name = 'boat'; n = 256; f = rescale(load_image(name, n)); % load_image is provided by toolbox_signal

Shortcuts for the wavelet transform $\{\dotp{f}{\psi_m}\}_m$. We only compute up to a scale $J$ so that only $k_0$ sub-bands are transformed.

In [4]: k0 = 2; J = log2(n)-k0; Wav = @(f)perform_wavelet_transf(f,J,+1); WavI = @(x)perform_wavelet_transf(x,J,-1);

Compute the wavelet transform.

In [5]: fw = Wav(f);

Display the coefficients.

In [6]: clf; plot_wavelet(fw, J);

Exercise 1

Compute an approximation |fLow| using the $P=2^{2J}=(n/2^{k_0})^2$ low pass coefficients.

In [7]: exo1()

In [8]: %% Insert your code here.
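As an added illustration in Python (the tour itself is MATLAB; PyWavelets' wavelet choice and boundary handling differ from the tour's, so this is a sketch of the idea rather than the exo1() solution):

```python
import numpy as np
import pywt

def lowpass_approx(f, k0=2, wavelet="db4"):
    # Decompose k0 levels, keep only the coarse approximation
    # (about (n / 2**k0)**2 coefficients), and zero all detail bands.
    coeffs = pywt.wavedec2(f, wavelet, level=k0)
    coeffs = [coeffs[0]] + [
        tuple(np.zeros_like(d) for d in details) for details in coeffs[1:]
    ]
    return pywt.waverec2(coeffs, wavelet)
```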
## Randomized Orthogonal Measurements

We consider a compressed sensing operator that corresponds to randomized orthogonal projections.

Extract the high pass wavelet coefficients, $x_0 = \{ \dotp{f}{\psi_m} \}_{m \in I_0}$.

In [9]: A = ones(n,n); A(1:2^J,1:2^J) = 0; I0 = find(A==1); x0 = fw(I0);

Number of coefficients.

In [10]: N = length(x0);

Number $P_0 = 2^{2J}=(n/2^{k_0})^2$ of low pass measurements.

In [11]: P0 = (n/2^k0)^2;

Number of CS measurements.

In [12]: P = 4 * P0;

Generate random permutation operators $S_1,S_2 : \RR^N \rightarrow \RR^N$ so that $S_k(x)_i = x_{\sigma_k(i)}$ where $\sigma_k \in \Sigma_N$ is a random permutation of $\{1,\ldots,N\}$.

In [13]: sigma1 = randperm(N)'; sigma2 = randperm(N)'; S1 = @(x)x(sigma1); S2 = @(x)x(sigma2);

The adjoint (and also inverse) operators $S_1^*,S_2^*$ (denoted |S1S,S2S|) correspond to the inverse permutations $\sigma_k^*$ such that $\sigma_k^* \circ \sigma_k(i)=i$.

In [14]: sigma1S = 1:N; sigma1S(sigma1) = 1:N; sigma2S = 1:N; sigma2S(sigma2) = 1:N; S1S = @(x)x(sigma1S); S2S = @(x)x(sigma2S);

We consider a CS operator $\Phi : \RR^N \rightarrow \RR^P$ that corresponds to a projection on randomized atoms $$(\Phi x)_i = \dotp{x}{ \phi_{\sigma_2(i)}}$$ where $\phi_i$ is a scrambled orthogonal basis $$\phi_i(x) = c_i( \sigma_1(x) )$$ where $\{ c_i \}_i$ is the orthogonal DCT basis. This can be rewritten in compact operator form as $$\Phi x = ( S_2 \circ C \circ S_1 (x) ) \downarrow_P$$ where $S_1,S_2$ are the permutation operators, and $\downarrow_P$ selects the $P$ first entries of a vector.

In [15]: downarrow = @(x)x(1:P); Phi = @(x)downarrow(S2(dct(S1(x))));

The adjoint operator is $$\Phi^* x = S_1^* \circ C^* \circ S_2^* (x\uparrow_P)$$ where $\uparrow_P$ appends $N-P$ zeros at the end of a vector, and $C^*$ is the inverse DCT transform.

In [16]: uparrow = @(x)[x; zeros(N-P,1)]; PhiS = @(x)S1S(idct(S2S(uparrow(x))));

Perform the CS (noiseless) measurements.

In [17]: y = Phi(x0);

Exercise 2

Reconstruct an image using the pseudo inverse coefficients $\Phi^+ y = \Phi^* y$.

In [18]: exo2()

In [19]: %% Insert your code here.

## Compressed Sensing Recovery using Douglas Rachford Scheme

We consider the minimum $\ell^1$ recovery from the measurements $y = \Phi x_0 \in \RR^P$ $$\umin{\Phi x = y} \normu{x}.$$ This can be written as $$\umin{ x } F(x) + G(x) \qwhereq \choice{ F(x) = i_{\Cc}(x), \\ G(x) = \normu{x}. }$$ where $\Cc = \enscond{x}{\Phi x =y}$.

One can solve this problem using the Douglas-Rachford iterations $$\tilde x_{k+1} = \pa{1-\frac{\mu}{2}} \tilde x_k + \frac{\mu}{2} \text{rProx}_{\gamma G}( \text{rProx}_{\gamma F}(\tilde x_k) ) \qandq x_{k+1} = \text{Prox}_{\gamma F}(\tilde x_{k+1})$$

We use the following definitions for the proximal and reversed-proximal mappings: $$\text{rProx}_{\gamma F}(x) = 2\text{Prox}_{\gamma F}(x)-x$$ $$\text{Prox}_{\gamma F}(x) = \uargmin{y} \frac{1}{2}\norm{x-y}^2 + \ga F(y).$$

One can show that for any value of $\gamma>0$, any $0 < \mu < 2$, and any $\tilde x_0$, $x_k \rightarrow x^\star$ which is a solution of the minimization of $F+G$.

Exercise 3

Implement the proximal and reversed-proximal mappings of $F$ (the orthogonal projector on $\Cc$) and $G$ (soft thresholding). In Matlab, use inline functions with the |@| operator.

In [20]: exo3()

In [21]: %% Insert your code here.

Values for the $0 < \mu < 2$ and $\gamma>0$ parameters. You can use other values; this might speed up the convergence.

In [22]: mu = 1; gamma = 1;

Exercise 4

Implement the DR iterative algorithm. Keep track of the evolution of the $\ell^1$ norm $G(x_k)$.

In [23]: exo4()

In [24]: %% Insert your code here.

Exercise 5

Display the image reconstructed using the $P_0$ linear and $P$ CS measurements. The total number of used measurements is thus $P+P_0$.

In [25]: exo5()

In [26]: %% Insert your code here.
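The exercise solutions live in the exoN() helpers and are not shown on this page. As an added illustration in Python (a sketch on a synthetic problem, not the notebook's MATLAB solution: the measurement matrix here has orthonormal rows, so the projector onto $\Cc$ takes the closed form $x + \Phi^\top (y - \Phi x)$):

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 256, 64

# Measurement matrix with orthonormal rows (Phi @ Phi.T == I), so
# Prox_F, the projector onto {x : Phi x = y}, is x + Phi.T @ (y - Phi x).
Phi = np.linalg.qr(rng.standard_normal((N, P)))[0].T

x0 = np.zeros(N)                                  # sparse ground truth
x0[rng.choice(N, 8, replace=False)] = rng.standard_normal(8)
y = Phi @ x0

prox_G = lambda x, g: np.sign(x) * np.maximum(np.abs(x) - g, 0)  # soft threshold
prox_F = lambda x: x + Phi.T @ (y - Phi @ x)                     # projection on C
rprox = lambda p, x: 2 * p(x) - x                                # reversed prox

gamma, mu = 1.0, 1.0
xt = np.zeros(N)
for _ in range(500):                              # Douglas-Rachford iterations
    xt = (1 - mu / 2) * xt + (mu / 2) * rprox(lambda z: prox_G(z, gamma),
                                              rprox(prox_F, xt))
x = prox_F(xt)                                    # x_k = Prox_F(x_tilde_k)
print(np.linalg.norm(x - x0))                     # should be close to zero
```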
## Compressed Sensing Reconstruction using Block Sparsity

In order to enhance the CS reconstruction, it is possible to use more advanced priors than plain $\ell^1$. One can for instance use a block $\ell^1$ norm $$G(x) = \sum_i \norm{x_{B_i}}$$ where $(B_i)_i$ is a disjoint segmentation of the index set $\{1,\ldots,N\}$, where $x_{B} = \{ x_i \}_{i \in B} \in \RR^{|B|}$ extracts the coefficients within $B$, and $\norm{x_B}$ is the $\ell^2$ norm. The proximal operator of this block $\ell^1$ norm is a block thresholding $$\forall \, m \in B_i, \quad \text{Prox}_{\ga G}(x)_m = \max(0, 1-\ga/\norm{x_{B_i}}) x_m.$$

We use uniform blocks of size $w \times w$.

In [27]: w = 4;

Blocks position and offset in the image domain.

In [28]: v = 1:w:n; dv = 0:w-1; [dX,dY,X,Y] = ndgrid(dv,dv,v,v); q = size(X,3); dX = reshape(dX, [w*w q*q]); dY = reshape(dY, [w*w q*q]); X = reshape(X, [w*w q*q]); Y = reshape(Y, [w*w q*q]);

Remove the blocks which fall outside the image.

In [29]: I = find( sum(X+dX>n | Y+dY>n) ); X(:,I) = []; Y(:,I) = []; dX(:,I) = []; dY(:,I) = [];

Compute the indexes of the blocks in $\{1,\ldots,N\}$, i.e. not in image space but over the CS coefficients space.

In [30]: U = zeros(n,n); U(I0) = 1:N; Ind = X+dX + (Y+dY-1)*n; I = U(Ind);

Remove the indexes that correspond to low pass wavelet coefficients.

In [31]: I(:,sum(I==0)>0) = [];

A block is defined as $B_i = \{ I_{k,i} \}_{k=1}^{w^2}$. Define the energy.

In [32]: G = @(x)sum( sqrt(sum(x(I).^2)) );

Just as a check: display the block structure in coefficient space.

In [33]: [A,tmp] = meshgrid( randperm(size(I,2)) , ones(w*w,1)); x = zeros(N,1); x(I) = A; Z = zeros(n,n); Z(I0) = x; clf; imageplot(Z); colormap jet(256);

Exercise 6

Define the proximal operator $\text{Prox}_{\ga G}$ of $G$, and its reversed proximal mapping.

In [34]: exo6()

In [35]: %% Insert your code here.

Exercise 7

Implement the DR iterative algorithm. Keep track of the evolution of $G(x_k)$.

In [36]: exo7()

In [37]: %% Insert your code here.

Exercise 8

Display the image reconstructed using the $P_0$ linear and $P$ CS measurements.

In [38]: exo8()

In [39]: %% Insert your code here.
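As before, a minimal NumPy sketch of the block soft-thresholding prox defined above (an illustration I am adding; the notebook's own MATLAB solution is in the exo6() helper):

```python
import numpy as np

def prox_block_l1(x, blocks, gamma):
    """Block soft-thresholding: prox of gamma * sum_i ||x_{B_i}||_2.

    `blocks` is a list of index arrays forming a disjoint partition of x.
    """
    out = x.copy()
    for B in blocks:
        nrm = np.linalg.norm(x[B])
        # Shrink the whole block toward zero; kill it if its norm <= gamma.
        out[B] = 0.0 if nrm == 0 else max(0.0, 1.0 - gamma / nrm) * x[B]
    return out
```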
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9153195023536682, "perplexity": 1190.6043974070385}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585204.68/warc/CC-MAIN-20211018155442-20211018185442-00523.warc.gz"}
https://clay6.com/qa/115511/in-a-mass-spectrometer-used-for-measuring-the-masses-of-ions-the-ions-are-i
# In a mass spectrometer used for measuring the masses of ions, the ions are initially accelerated by an electric potential V and then made to describe semicircular paths of radius R using a magnetic field B. If V and B are kept constant, the ratio $\frac{\text{charge on the ion}}{\text{mass of the ion}}$ will be proportional to:

( A ) $\frac{1}{R}$

( B ) $R^2$

( D ) $\frac{1}{R^2}$
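A short worked derivation (added for clarity; the source page lists only the question and options): the accelerating potential gives the ion its kinetic energy, and the magnetic force supplies the centripetal force,

$$qV = \tfrac{1}{2}mv^2 \qquad\text{and}\qquad qvB = \frac{mv^2}{R} \;\Rightarrow\; v = \frac{qBR}{m}.$$

Substituting the second relation into the first,

$$qV = \frac{1}{2}\,m\,\frac{q^2B^2R^2}{m^2} \;\Rightarrow\; \frac{q}{m} = \frac{2V}{B^2R^2},$$

so with $V$ and $B$ constant, the charge-to-mass ratio is proportional to $1/R^2$, i.e. option (D).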
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9767476916313171, "perplexity": 486.08228623233646}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400208095.31/warc/CC-MAIN-20200922224013-20200923014013-00237.warc.gz"}
http://physics.stackexchange.com/questions/8289/matrix-solution-of-an-equivalent-resistance-circuit-problem
# Matrix solution of an equivalent resistance circuit problem

Start with a set of points $x_1, x_2, \ldots$ that are connected by wires with some resistance. Represent the resistance by a conductance matrix (conductance being one over the resistance), where $\mathbf{C}_{ij}$ is the conductance between points $i$ and $j$ if the points are connected by a wire, and otherwise $\mathbf{C}_{ij}=0$. Can one solve for the equivalent resistance between two points by some matrix transform of $\mathbf{C}$?

EDIT The comments bring up some interesting points - and suggest an alternate phrasing: Can you compute the resistance distance for a graph when the resistances are not all unit values using matrix operations?

Neat question :-) I suspect that there may be something like this, since you can use the Fourier transform (essentially an infinite-dimensional linear transformation) to solve the problem in xkcd.com/356. – David Z Apr 8 '11 at 20:58

It should probably be pointed out that $C_{ij}$ is measured when all the other resistors are absent, while the sought-for equivalent resistance $R_{ij}$ is measured when all the resistors are present. Or do you have something else in mind? – Qmechanic Apr 8 '11 at 21:18

@David: you can use the Fourier transform for that because there the graph is a lattice and the resistance has translational symmetry. In general the underlying graph will have no such structure. It might not even make sense to talk about embedding into $k$-dim space (which we often take for granted). – Marek Apr 8 '11 at 21:55

I think it's misleading to use the word "matrix" for this table of numbers because there is no natural linear structure on this space, as far as I can see. So the formulae to invert the conductances to resistances won't be a natural linear algebra formula - it won't be a "function" of the matrix; in particular, it won't be the inverse matrix, I guess. The most striking deviation from the "matrix logic" is that the entries $C_{ii}$ are either zero or infinite. – Luboš Motl Apr 9 '11 at 4:40

A more natural question is to first ask if there is a known general algorithm to find the equivalent resistance for two points, given such a network. There is at least an analogous thing for an arbitrary network of equal resistances: mathworld.wolfram.com/ResistanceDistance.html – user1708 Apr 9 '11 at 4:54

Write $U_i$ for the potential at the site $i$ and $I_i$ for the external current flowing into the site $i$. Then the continuity equation gives us $I_i = \sum_{j \neq i} C_{ij} (U_i - U_j)$, which can be rewritten as $I_i = \sum_j A_{ij} U_j$ with $$A_{ij} = \begin{cases} \sum_{k \neq i} C_{ik} & i = j \\ -C_{ij} & i \neq j \end{cases}$$ Now one can proceed directly to solve for $U_i$, given external current flows. But it turns out that thanks to special properties of the matrix $A$ (notice that the entries of each row sum to zero) more can be said. It turns out (read the paper for details) one can express the equivalent resistance between points $k$ and $l$ as $$R_{kl} = {{\rm det}A^{(kl)} \over {\rm det} A^{(l)}}$$ where the indexed matrices are obtained by removing the said rows and columns from the matrix $A$.
Last remark (not related directly to your question, but it would be a shame not to mention it now): those determinants can be interpreted naturally as spanning tree polynomials in $C_{ij}$ on the given graph $G$ (with or without the $(kl)$ edge), and this in turn can be computed directly from the partition function of the $q \to 0$ limit of the $q$-state Potts model on the said graph $G$, with weights on the edges related to their resistances.
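A quick numerical check of the determinant formula above (a sketch added for illustration; the node indexing and the 1 Ω triangle test case are my own choices, not from the thread):

```python
import numpy as np

def equivalent_resistance(C, k, l):
    """R_kl = det A^(kl) / det A^(l), where A is built from the
    conductance matrix C (C[i][j] = 1/R_ij, or 0 if no wire)."""
    C = np.asarray(C, dtype=float)
    A = np.diag(C.sum(axis=1)) - C            # A_ii = sum_k C_ik, A_ij = -C_ij
    Al = np.delete(np.delete(A, l, 0), l, 1)              # drop row/col l
    Akl = np.delete(np.delete(A, [k, l], 0), [k, l], 1)   # drop rows/cols k, l
    return np.linalg.det(Akl) / np.linalg.det(Al)

# Triangle of three 1-ohm resistors: resistance between any two nodes is 2/3.
C = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
print(equivalent_resistance(C, 0, 1))  # ~0.6667
```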
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9262612462043762, "perplexity": 260.6825851656038}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860113541.87/warc/CC-MAIN-20160428161513-00097-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.freemathhelp.com/forum/threads/93320-Chain-rule-partial-derivative?p=381970
# Thread: Chain rule partial derivative

1. ## Chain rule partial derivative

(1 pt) Suppose that $\,x(s,\, t)\, =\, -4s^2\, -\, 2t^2,\,$ and that $\,y\,$ is a function of $\,(s,\, t)\,$ with $\, y(1,\, 1)\, =\, 1\,$ and $\, \dfrac{\partial y}{\partial t}\, (1,\, 1)\, =\, -2.$ Suppose that $\, u\, =\, xy,\,$ and that $\, v\,$ is a function of $\, x,\, y\,$ with $\, \dfrac{\partial v}{\partial y}\, (-6,\, 1)\, =\,4.$ Now suppose that $\, f(s,\,t)\, =\, u(x(s,\, t),\, y(s,\, t))\,$ and $\, g(s,\, t)\, =\, v(x(s,\, t),\, y(s,\, t)).\,$ You are given:

$\dfrac{\partial f}{\partial s}\, (1,\, 1)\, =\, -32,\qquad \dfrac{\partial f}{\partial t}\, (1,\, 1)\, =\, 8,\qquad \dfrac{\partial g}{\partial s}\, (1,\, 1)\, =\, -16.$

The value of $\, \dfrac{\partial g}{\partial t}\, (1,\, 1)\,$ must be:

dv/dt = dv/dx*dx/dt + dv/dy*dy/dt
dx/dt = -4t -> evaluate at (1,1) = -4
dv/dt = -4 dv/dx + 4(-2)
dv/dt = -4 dv/dx - 8

How can I find the missing dv/dx in order to get a value for dv/dt? Thanks!
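A worked solution, added here since the scraped thread ends at the question. Since $u = xy$, the chain rule for $f$ gives, at $(s,t)=(1,1)$ where $x = -6$, $y = 1$, $\partial x/\partial s = -8s = -8$:

$$\frac{\partial f}{\partial s} = y\,\frac{\partial x}{\partial s} + x\,\frac{\partial y}{\partial s} \;\Rightarrow\; -32 = (1)(-8) + (-6)\,\frac{\partial y}{\partial s} \;\Rightarrow\; \frac{\partial y}{\partial s} = 4.$$

(As a consistency check, $\partial f/\partial t = (1)(-4) + (-6)(-2) = 8$, matching the given value.) Applying the chain rule to $g$ in the $s$ direction recovers the missing $\partial v/\partial x$:

$$\frac{\partial g}{\partial s} = \frac{\partial v}{\partial x}\frac{\partial x}{\partial s} + \frac{\partial v}{\partial y}\frac{\partial y}{\partial s} \;\Rightarrow\; -16 = -8\,\frac{\partial v}{\partial x} + 4\cdot 4 \;\Rightarrow\; \frac{\partial v}{\partial x}(-6,1) = 4.$$

Finally, $$\frac{\partial g}{\partial t}(1,1) = 4\cdot(-4) + 4\cdot(-2) = -24.$$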
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9967904090881348, "perplexity": 1214.2520243835459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-22/segments/1526794863967.46/warc/CC-MAIN-20180521063331-20180521083331-00292.warc.gz"}
https://scottlocklin.wordpress.com/2016/03/14/on-beating-roulette-part-3/
# Locklin on science

## On beating roulette: part 3

Posted in econo-blasphemy, Gambling systems by Scott Locklin on March 14, 2016

This is third in a four part series. Part 1 here, part 2 here.

To my mind, the most mathematically interesting thing about roulette is the betting system you should use to maximize your wins. Bet sizing systems are important in all probabilistic games, and the types of lessons learned from a winning game of roulette are the same types of lessons you need to learn in betting on other things, like success in trading, or having an edge on the wiener dog races. The nice thing about a game of roulette is it is relatively easy to characterize your edge. Most people's edge over the roulette wheel is negative, so you should not bet. If you built one of the computer gizmos I went over in part 2, you have a positive edge over the roulette wheel.

We know from results in information theory that sequential bets in the presence of an edge should be sized according to the Kelly Criterion to maximize bankroll growth rate:

$\text{betsize} = \frac{\text{bankroll} \times \text{edge}}{\text{house odds}}$

or, in more probabilistic terms,

$\text{betsize} = \frac{p \cdot \text{odds} + p - 1}{\text{odds}}$

where $p$ is the probability of success.

It's probably not immediately obvious why this is so, but consider a biased coin toss at even odds (a \$1 payoff for a \$1 bet). If your coin's edge is 100%, you gain money fastest by betting your whole bankroll. If you have 0% edge, you shouldn't bet anything. If you have a 1% edge, you should bet 1% of your bankroll. Daniel Bernoulli came up with the same fraction a long time before by maximizing the geometric mean. Kelly's original paper figured this out by modeling how a bettor would place bets assuming he had insider information transmitted over a noisy wire transmitting a binary code; a beautiful way of thinking about predictions in the presence of noise.

Kelly is a guy I wish had lived longer. He dropped dead at the young age of 41; in his short life he was a Naval aviator in WW-2, invented computer speech synthesis, made huge contributions to information theory, mentored important mathematicians (Elwyn Berlekamp, who went on to found Axcom/Rentech, based in part on Kelly's insights) and had the kind of life that would be considered hyperbole if he was in a science fiction novel. They make big men in Texas. Kelly was a giant.

I'm pretty sure his testicles smoked unfiltered camels

I've been known to take sadistic glee in making fun of economists. One of the most mockable economists in American history is (Nobelist - the Swedes have dry humor) Paul Samuelson. One could write entire books on the ways in which Samuelson was a scoundrel and a numskull who set back human knowledge by decades. One fact will suffice for this essay: Samuelson didn't believe in Kelly betting. Explaining why he thought this, and why he's wrong, would be pointless; debugging an economist's faulty thought processes is as pointless as explaining why a crazy lady is breaking dishes in the kitchen. If you're interested, Ed Thorp is your man here also.

Ed Thorp is the man, period
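A minimal Python sketch of the Kelly sizing just described (my own illustration, using the edge and payout numbers quoted in the error-budget discussion below):

```python
def kelly_fraction(p, odds):
    """Fraction of bankroll to stake on a bet paying odds:1 that wins
    with probability p; zero when the edge is non-positive."""
    edge = p * odds + p - 1        # expected profit per unit staked
    return max(edge, 0.0) / odds

# Even-odds biased coin: a 1% edge means betting 1% of bankroll.
print(kelly_fraction(p=0.505, odds=1))   # -> 0.01

# Thorp/Shannon roulette: a 44% edge on a 35:1 single-number payout.
bankroll = 10_000.0
fraction = 0.44 / 35                     # ~0.0126 of bankroll per spin
print(bankroll * fraction)               # stake per spin, ~126
```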
Following Ed Thorp's original essay in the Gambling Times, as good little experimental physicists, we need to build up an error budget to figure out our edge. Thorp breaks down the errors in his and Shannon's roulette system into several kinds:

1. E1 Rotor speed measurement error
2. E2 Ball speed measurement error
3. E3 Ball rotor path randomness
4. E4 Ball stator path randomness
5. E5 Fret scatter
6. E6 Rotor tilt (discovered by Shannon and Thorp)

Uncorrelated errors add up as the sum of squares, so the total error budget is

$Error = \sqrt{\sum_{n=1}^{n=6}{E_n^2} }$

The Thorp/Shannon roulette system had a 44% edge on the most favored number; single number payouts in Vegas are 35:1, making the correct bet on one number 0.44/35 ≈ 0.0126. Since nobody in 1960s Vegas suspected the mathematical machinators of having a physics edge on the wheel, they were able to place larger bets on parts of the quadrant. Thorp describes it as "diversification" in his exposition; another way of thinking about it is that he's just playing more games at once. A friend and former customer explained his trend following method as working in much the same way: the more bets you place, the more likely you'll hit a winning trend.

Kelly betting isn't a perfect solution in all cases; fixed fraction betting has certain disadvantages when you can't exactly characterize your edge, or the payout odds, or you have a limited number of bets before you have to cash in your chips. However, in the case of a machine to beat roulette, it's difficult to think of a better technique. Of course, Kelly betting and things like it figure in other sorts of betting; people do use it in markets where it is appropriate. Supposedly it was part of Axcom/Rentech's early secret sauce, and certainly folks who have thought about trading need a bet sizing and risk management strategy that makes sense. Kelly is often a good place to start, depending on your situation. But that's a topic for another blog post.

One more coming on modern techniques to beat roulette, including the one I came up with in 2010 (which, in case you were holding your breath, didn't really work, which is why I have to work, and am willing to talk about such things in blogs).

Kelly criterion resources

Kelly's original paper: http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6771227

Thorp's explanation: http://edwardothorp.com/sitebuildercontent/sitebuilderfiles/TheKellyMoneyManagementSystem.pdf

Thorp's website: http://edwardothorp.com/id10.html

### 6 Responses

1. Oleh Danyliv said, on March 14, 2016 at 2:38 pm

Great article and nice to have the original Kelly's paper. I derived this formula myself for equal pay-outs after I lost some significant money on the stock market. Scott, I don't share your enthusiasm regarding roulette. Since the gambling industry is mostly run by gangsters, you are asking for trouble. I stopped researching roulette once I realised that the edge is not possible (unless you cheat).

• flanagan314 said, on March 14, 2016 at 4:51 pm

I don't think that's entirely true. Edge is possible, but only temporarily. Anywhere edge can be found (such as counting cards in Blackjack), it can be exploited. The important insight here, though, is that the people running the casinos can CHANGE THE RULES. There are a number of reasons they are generally reluctant to do so except when they really, absolutely must, to avoid losses. This does not mean that exploiting such edges is a good idea. As gangsters go, these are pretty gentle guys, who have big profitable businesses to run, and they don't want to catch any heat if they don't have to. But they are still gangsters at heart, so pissing them off remains a bad idea.
One might flippantly say "but if you can make enough money off of them, you can stand the heat", but the degree of rage they will experience (and likely visit upon you) scales superlinearly with the money you make off them, so at some point it becomes the Worst Idea You Ever Had. I have no interest in gambling, but I did make a good chunk of money trading foreign currencies about 10 years ago, and that's a very similar kind of "respectable gangster" business. You take too much money from the banks when you do that, and the banks shut you out. Very analogous to being banned from casinos, really.

• Scott Locklin said, on March 14, 2016 at 4:54 pm

Various courts have decided it's not cheating if you can find an edge in roulette (Nevada has decided otherwise, because it is more or less a casino), and a few teams have had some success at this over the years, including Thorp and Shannon back in the day, as I described in parts 1-2 of this series. When I was thinking about this in detail, I had maps of places where I wouldn't go to jail or die if I won at roulette (Ukraine was not on the map), as well as various ideas about avoiding detection. Ultimately though, you're correct: writing about this under my real name is a way of avoiding actually doing something dumb.

2. Brian Skourup said, on March 19, 2016 at 3:04 pm

Scott, have you read Hans Reichenbach? "The Rise of Scientific Philosophy" seems like something you should read if you haven't. Many thanks for producing both this blog and your Amazon reviews.

3. EW said, on April 5, 2016 at 5:03 pm

How's part 4 going? The anticipation is killing me 🙂

• Scott Locklin said, on April 5, 2016 at 5:21 pm

Well, part 2 was written 5 years ago, so don't hold your breath!
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 4, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.32443878054618835, "perplexity": 1866.9792050896292}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886109157.57/warc/CC-MAIN-20170821152953-20170821172953-00410.warc.gz"}
https://cvgmt.sns.it/seminar/546/
## The 1-harmonic flow

### Lorenzo Giacomelli (SBAI Department, Sapienza University of Rome)

created by paolini on 12 Oct 2016

18 Oct 2016 -- 14:30

Aula Dal Passo, Dipartimento di Matematica, Roma "Tor Vergata"

SEMINARIO DI ANALISI DI EQUAZIONI DIFFERENZIALI

Abstract. The 1-harmonic flow is the formal gradient flow -- with respect to the $L^2$-distance -- of the total variation of a manifold-valued unknown function. The problem originates from image processing and has an intrinsic analytical interest as a prototype of constrained and vector-valued evolution equations in BV-spaces. For the resulting PDE, I will introduce a notion of solution and I will discuss existence and uniqueness results for two specific manifolds: the hyper-octant of an N-dimensional sphere and a connected sub-arc of a regular Jordan curve. I will also present possible extensions to general manifolds, together with related open questions and conjectures. Based on joint works with Agnese Di Castro, José Mazón, and Salvador Moll.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5149391889572144, "perplexity": 3220.8713189132386}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946584.94/warc/CC-MAIN-20230326235016-20230327025016-00641.warc.gz"}
http://tmdag.com/vopraytracer-pt5/
# Houdini VOP raytracer part 5

Posted on October 2, 2013

# Specular Highlight

A specular highlight is a bright spot of light that appears on shiny objects when illuminated. The term specular means that light is perfectly reflected in a mirror-like way from the light source to the viewer. We are going to cover two specular reflection models.

## Phong reflection model

$K_{\mathrm{spec}} = \big\|\hat{R}\big\| \big\|\hat{V}\big\| \cos^n \beta =(\hat{R} \cdot \hat{V})^n$

Where $\hat{R}$ is the normalised mirror reflection of the light vector off the surface, and $\hat{V}$ is the normalised viewpoint vector. The number $n$ is called the Phong exponent, and is a user-chosen value that controls the apparent smoothness of the surface. When calculating $(\hat{R} \cdot \hat{V})$, we will again get negative values like we had in the diffuse calculation, so we have to clamp it: $\max(0,(\hat{R} \cdot \hat{V}))$. Note in the example above that we also get specular on the back of our sphere. There are a few ways to fix that issue; e.g., we can multiply the sampled position by the shadow that we have already calculated (shadowed area == 0, non-shadowed == 1), or make another dot product test with the normal vector.

Below is a Phong model representation in Houdini nodes. First, calculate the reflection vector $\hat{R}$. As the law of reflection states, the direction of incoming light (the incident ray) and the direction of outgoing reflected light (the reflected ray) make the same angle with respect to the surface normal. In this situation the reflect vex node comes in handy. It takes as input the vector to be reflected - in our case the vector of the light towards the surface (it is very important to check your vector direction) - and the normalised surface normal vector. As output we get the reflected $R$ vector, which should be normalised ($\hat{R}$). Vector $\hat{V}$ is the normalised Houdini ("I") vector towards the eye (camera). After calculating the dot product between those two vectors, $(\hat{R} \cdot \hat{V})$, we need to make sure that we clamp negative values. I use a clamp vex node instead of maxing it with zero, as I know that values from a dot product of normalised vectors won't exceed 1. The last part is to use a power function to control the exponent and a simple multiplication for overall intensity.

## Phong-Blinn reflection model

$K_{\mathrm{spec}} = \big\|\hat{N}\big\| \big\|\hat{H}\big\| \cos^n \beta =(\hat{N} \cdot \hat{H})^n$

Where $\hat{N}$ is the normalised smooth surface normal vector off the surface, and $\hat{H}$ is the half-angle direction (the direction vector midway between L, the vector to the light, and V, the viewpoint vector). The first problem we will find here is that our halfway vector $\hat{H}$ will jump as soon as the angle between L and V is larger than 180°. Here is an example with non-clamped values of a light rotating by 360°.

H - half-angle direction between L and V
N - smooth surface normal
R - mirror reflection of the light vector
L - light vector towards surface
V - viewpoint vector (eye/camera)

Here you can see both specular models with clamped values. As you will have noticed, the Blinn reflection model gives us a wider specular, as the angle between R and V changes more aggressively than that between N and H. We can compensate for the difference by adjusting the exponent. Maxing (or clamping, in this example) the dot product is very important, as we would get an undesirable effect when raising negative values to the exponent.

First let's calculate the $\hat{H}$ vector between the viewer and light-source vectors.
$H = \frac{L + V} {|L + V|}$

The halfway vector ${H}$ equals the sum of the vectors ${L}$ and ${V}$ divided by the norm of that sum. In many internet examples you will see the calculation of the halfway vector presented like this:

$H = \frac{L + V} {2}$

This is not a mathematically correct way of producing a unit ${H}$ vector: even when ${L}$ and ${V}$ are normalised, $|L + V| = 2$ only when the two vectors are parallel, so the midpoint form still needs an explicit normalise. The next step is a dot product calculation with the normalised surface normal. The rest of the nodes are exactly the same as in the Phong model. To get rid of the jumping $\hat{H}$ vector, we can add a second dot product calculation for the negated halfway vector.

## Mantra shader version of specular model

Mantra has a built-in specular vex node with a few specular models. To create your own model, we need an Illuminance Loop vex node that will iterate through all active lights in the scene file. To get the $L$ vector, you can create a global variables vex node (the global variables vex node is different inside and outside an illuminance loop). The $L$ vector provided by global variables is a "Direction From Surface to Light", so we need to negate it (reverse its direction). The global $I$ vector (eye) is the "Direction From Eye to Surface" (the equivalent of our $V$ - viewpoint vector), and it also needs to be negated for a proper dot product calculation. This setup should produce exactly the same effect as the built-in Phong model from the specular vex node.
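For readers outside Houdini, a compact NumPy sketch of the two models (my own illustration, not the VOP network; the conventions follow the legend above, with L pointing from the light toward the surface and V from the surface toward the eye):

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(L, V, N, n):
    # Mirror-reflect the incoming light direction about the normal,
    # then compare against the view direction: (R.V)^n, clamped at 0.
    R = normalize(L - 2.0 * np.dot(L, N) * N)
    return max(0.0, np.dot(R, normalize(V))) ** n

def blinn(L, V, N, n):
    # Half-angle vector between the direction to the light (-L) and V,
    # normalised by the length of the sum: (N.H)^n, clamped at 0.
    H = normalize(-L + V)
    return max(0.0, np.dot(normalize(N), H)) ** n

N = np.array([0.0, 1.0, 0.0])                  # surface normal
L = normalize(np.array([1.0, -1.0, 0.0]))      # light toward surface
V = normalize(np.array([1.0, 1.0, 0.0]))       # eye sits in the mirror direction
print(phong(L, V, N, 40), blinn(L, V, N, 40))  # both peak at 1.0 here
```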
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 29, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8114871978759766, "perplexity": 1977.2671143443538}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-22/segments/1495463611569.86/warc/CC-MAIN-20170528220125-20170529000125-00367.warc.gz"}
https://efinancemanagement.com/economics/law-of-diminishing-marginal-utility
# Law of Diminishing Marginal Utility

## What is the Law of Diminishing Marginal Utility?

The law of diminishing marginal utility is an economic concept that helps to explain human buying behavior. As per this law, the amount of satisfaction from consuming every additional unit of a good or service drops as we increase the total consumption. Or, we can say that as consumption increases, the additional or marginal utility goes down with each additional unit.

In economics, the term utility refers to satisfaction or happiness. Total utility is also an economic term that tells the total satisfaction after consuming a number of units of a product or service. Marginal utility is the change (increase or decrease) in the total utility after consuming an extra unit of the good or service.

## Explanation of Law of Diminishing Marginal Utility

In general, consuming the first unit of a commodity gives us the highest enjoyment or happiness. But as we consume more of the same commodity, after some time we don't feel the urge to consume more of it. This is what the law of diminishing marginal utility is all about. The first unit of a product gives the highest level of utility, but the marginal utility drops with each subsequent intake of the commodity. Moreover, when consumers compare the marginal utility from spending the same dollars on various commodities, this is described by the law of equi-marginal utility, which is also called an extension of the law of marginal utility.

We can also use a graph to explain the law of diminishing marginal utility. In the graph below, the X-axis shows the number of units of a commodity that a consumer consumes, and the Y-axis shows the marginal utility of each unit. The graph below shows the total utility and the marginal utility curves. The graph clearly shows that the total utility is maximum when the marginal utility is zero. This is because the MU is the slope of the total utility curve. After zero MU, the total utility starts to drop, and MU goes negative. A negative marginal utility implies that consuming more units will lead to dissatisfaction for the consumer.

The law of diminishing marginal utility also has a direct relation with the concept of diminishing prices. As the marginal utility of a good goes down, a consumer will be willing to pay a lower price for the product or service. For example, a consumer is willing to pay \$20 for the first burger, but since he is no longer as hungry after the first burger, he will be willing to pay less for the second burger.

## Example of Law of Diminishing Marginal Utility

Let us understand the law of diminishing MU with the help of a simple example. Mr. A is very hungry, and he goes to a pizza store where he buys a very large pizza with 6 slices. The first slice of pizza gives him the maximum satisfaction, say 10. However, with the subsequent slices, Mr. A's satisfaction keeps reducing as his stomach fills up. Or, we can say that the marginal utility for Mr. A diminishes as he consumes more pizza slices. The following table shows Mr. A's total utility and marginal utility.
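For illustration, one possible set of numbers consistent with the example (the numbers below are assumed: the first slice yields a utility of 10, total utility peaks where marginal utility reaches zero, and a sixth slice detracts, matching the graph described above):

| Pizza slice | Marginal utility | Total utility |
|---|---|---|
| 1 | 10 | 10 |
| 2 | 8 | 18 |
| 3 | 6 | 24 |
| 4 | 3 | 27 |
| 5 | 0 | 27 |
| 6 | -2 | 25 |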
## Law of Diminishing Marginal Utility: Assumptions

The law of diminishing marginal utility makes the following assumptions:

• Consumers need to behave rationally to maximize their utility with their limited income. It means that a consumer always makes sound decisions.

• Consumers need to continuously consume each extra unit of a commodity. It means that there shouldn't be a pause between consumption. This is because if there is a gap in the consumption of two units, then the marginal utility may not drop. For example, in the above pizza example, suppose Mr. A eats the third slice after a break or when he feels hungry again; the marginal utility in this case will increase.

• It is crucial that each unit of a product is standardized. This implies that the size, quality, and volume of each unit should be the same. In case of a deviation, the marginal utility may change. In the pizza example, if the second slice is smaller than the first one, then it is possible that Mr. A would get the same level of satisfaction from it as from the first.

• One can easily measure utility. Also, a consumer can state their satisfaction level in absolute values, such as 1, 2, 3, etc.

• A consumer must consume the product in a reasonable quantity. For example, if a thirsty person drinks water with a spoon, then every additional spoon will increase the satisfaction level.

• The marginal utility of money remains constant. Usually, after spending money on the first unit, consumers are left with less money. The remaining money is generally dearer to the consumer, which would increase the MU of money for the consumer. But the law assumes there is no change in the MU of money.

• The income of the consumer and the price of the commodity don't change.

• There is no change in the taste and fashion of the consumer.

## Exceptions

It is important to note that the law of diminishing marginal utility does not always hold. There are a few scenarios when this law doesn't hold, and these are:

• In the case of addictions/hobbies, this law doesn't hold. For example, a person who loves to paint may not witness a drop in marginal utility after a new painting. Similarly, for an alcoholic, an extra glass of alcohol may not decrease the marginal utility.

• Items that are rare or valuable are also an exception to this law. For example, if someone loves to collect limited-edition watches, then they could continue collecting such items indefinitely without any drop in the marginal utility.

Apart from these two exceptions, there is criticism against this law as well. The criticism is that the assumptions of the law of diminishing marginal utility may not always hold. For instance, a consumer may not always make a rational decision. And there can also be cases where there is a gap between the consumption of two units of a commodity.

## Final Words

Despite the exceptions and criticism, the law of diminishing marginal utility is a very popular economics concept. Economists and experts widely use this law to explain why consumers get less satisfaction with every additional unit. Moreover, companies also use this concept to increase the marginal utility of their products and services for consumers. This, in turn, helps companies to increase their sales.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8717380166053772, "perplexity": 679.1355748435956}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337432.78/warc/CC-MAIN-20221003200326-20221003230326-00410.warc.gz"}
http://savae.net/6kc4x0h/lda-feature-selection-in-r-54e896
# lda feature selection in r

How to deactivate embedded feature selection in caret package? In this study, we discuss several frequently-used evaluation measures for feature selection, and then survey supervised, unsupervised, and semi …

Automatic feature selection methods can be used to build many models with different subsets of a dataset and identify those attributes that are and are not required to build an accurate model. Linear Discriminant Analysis takes a data set of cases (also known as observations) as input. Feature selection algorithms could be linear or non-linear; non-linear methods assume that the data of interest lie on an embedded non-linear manifold within the higher-dimensional space. Feature selection majorly focuses on selecting a subset of features from the input data which can effectively describe the input data. Technology has developed some powerful methods which can be used to mine through the data and fetch the information that we are looking for.

Feature selection on a full training set: does information leak if using filter-based feature selection or linear discriminant analysis?

Line Clemmensen, Trevor Hastie, Daniela Witten, Bjarne Ersbøll: Sparse Discriminant Analysis (2011). The Feature Selection Problem: Traditional Methods and a New Algorithm, Proc. Tenth National Conference on Artificial Intelligence, MIT Press, 129-134. Kononenko, I., Simec, E., and Robnik-Sikonja, M. (1997): Overcoming the myopia of induction learning algorithms with RELIEFF.

Then a stepwise variable selection is performed. Classification and prediction by support vector machines (SVM) is a widely used and one of the most powerful supervised classification techniques, especially for high-dimension data. SVM works well in high-dimensional space and in the case of text or image classification; if we use the kernel trick on non-linearly separable data, it performs very well. Disadvantages of SVM in R … I have not yet found documentation about this, so this is more about giving a possible idea to follow than a straightforward solution.

The R package lda (Chang 2010) provides collapsed Gibbs sampling methods for LDA and related topic model variants, with the Gibbs sampler implemented in C. All models in package lda are fitted using Gibbs sampling for determining the posterior probability of the latent variables.

I'm running a linear discriminant analysis on a few hundred variables and am using caret's 'train' function with the built-in model 'stepLDA' to select the most 'informative' variables.
LDA is defined as a dimensionality reduction technique by au… The feature selection function takes the following arguments:

- dataset: the dataset for which feature selection will be carried out
- nosample: the number of instances drawn from the original dataset
- threshold: the cutoff point to select the features
- repet: the number of repetitions (it is recommended to use at most 10 repetitions)

So given some measurements about a forest, you will be able to predict which type of forest a given observation belongs to. Discriminant analysis is used to predict the probability of belonging to a given class (or category) based on one or multiple predictor variables; it works with continuous and/or categorical predictor variables. The classification "method" must be able to deal with matrices, as in method(x, grouping, ...). Related questions: Specify number of linear discriminants in R MASS lda function; Proportion of explained variance in PCA and LDA.

Feature selection provides an effective way to solve this problem by removing irrelevant and redundant data, which can reduce computation time, improve learning accuracy, and facilitate a better understanding of the learning model or data. Feature selection can enhance the interpretability of the model, speed up the learning process and improve the learner performance. Using the terminology of John, Kohavi, and Pfleger (1994): wrapper methods evaluate multiple models using procedures that add and/or remove predictors to find the optimal combination that maximizes model performance.

What are the individual variances of your 27 predictors? Do they differ a lot between each other? Your out$K is 4, and that means you have 4 discriminant vectors. @amoeba - They vary slightly, as below (provided for the first 20 features). How about making sure of your input data x and y?

I am working on the Forest type mapping dataset, which is available in the UCI machine learning repository. So the output I would expect is something like this imaginary example.

r feature-selection interpretation discriminant-analysis
feature selection function in caret package. Feature Selection in R, 14 Feb 2016.

This blog post is about feature selection in R, but first a few words about R. R is a free programming language with a wide variety of statistical and graphical techniques. It was created by Ross Ihaka and Robert Gentleman at the University of Auckland, New Zealand, and is currently developed by the R Development Core Team.

In my last post, I started a discussion about dimensionality reduction, where the matter was the real impact on the results of using principal component analysis (PCA) before performing a classification task (https://meigarom.github.io/blog/pca.html). In this post, I am going to continue discussing this subject, but now talking about the Linear Discriminant Analysis (LDA) algorithm. There are various classification algorithms available, like Logistic Regression, LDA, QDA, Random Forest, SVM, etc. Here I am going to discuss Logistic regression, LDA, and QDA. Previously, we described logistic regression for two-class classification problems, that is, when the outcome variable has two possible values (0/1, no/yes, negative/positive). Classification methods play an important role in data analysis in a wide range of scientific applications.

Initially, I used to believe that machine learning is going to be all about algorithms - know which one to apply when, and you will come out on top. When I got there, I realized that was not the case: the winners were using the same algorithms which a lot of other people were using. One of the best ways I use to learn machine learning is by benchmarking myself against the best data scientists in competitions. It gives you a lot of insight into how you perform against the best on a level playing field.

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

Feature Scaling.

So, let us see which packages and functions in R you can use to select the critical features. Lda models are used to predict a categorical variable (factor) using one or several continuous (numerical) features. On the other hand, feature selection can largely reduce negative impacts from noise or irrelevant features: dependent features provide no extra information and thus just serve as noised dimensions for the classification. The technique of extracting a subset of relevant features is called feature selection. The general idea of this method is to choose the features that can be most distinguished between classes.

GA in Feature Selection: every possible solution of the GA, i.e. the selected variables, is considered as a whole; it will not rank variables individually against the target.

First, we need to keep our model simple, and there are a couple of reasons for which we need to ensure that the model is simple. Second, including insignificant variables can significantly impact your model performance.
Answer: If it does not need to be vanilla LDA (which is not supposed to select from its input features), there is, e.g., Sparse Discriminant Analysis, which is a LASSO-penalized LDA: Line Clemmensen, Trevor Hastie, Daniela Witten, Bjarne Ersbøll, "Sparse Discriminant Analysis" (2011). It uses a discrete subset of the input features via the LASSO regularization. In R, the penalizedLDA package runs a penalized linear discriminant analysis in order to select the "most meaningful" variables (I have searched here and on other sites for help in accessing the output from the penalized model, to no avail). One practical note when plugging a classifier into a selection wrapper: the classification "method" (e.g. 'lda') must have its own 'predict' method (like 'predict.lda' for 'lda') that either returns a matrix of posterior probabilities or a list with an element 'posterior' containing that matrix, and it must be able to deal with matrices as in method(x, grouping, ...). In my opinion, you could also consider canonical discriminant analysis (CDA) rather than plain LDA when interpretation of the original variables is the goal.

Answer: Before applying an LDA model, you have to determine which features are relevant to discriminate the data. A simple screen is to apply an ANOVA model to each numerical variable. In each of these ANOVA models, the variable to explain (Y) is the numerical feature, and the explanatory variable (X) is the categorical class that you want the LDA model to predict. This will tell you, for each forest type, whether the mean of the numerical feature stays the same across classes or not. If a feature has the same mean in every class, it will not give you any information to discriminate the data; therefore it will not be relevant to the model, and you will not use it. If, on the other hand, the mean of a numerical feature differs depending on the forest type, it will help you discriminate the data, and you keep it in the LDA model. To get a rough idea of how the samples of the classes are distributed, it also helps to visualize the distribution of each feature in one-dimensional histograms. A per-feature ANOVA screen of this kind is sketched below.
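A minimal Python version of that screen (scipy's f_oneway; the synthetic data and the 0.01 cutoff are placeholders, not part of the original answer):

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
y = rng.integers(0, 4, size=400)      # 4 forest types
X = rng.normal(size=(400, 27))        # 27 candidate features
X[:, 0] += y                          # feature 0 genuinely differs by class

# one-way ANOVA per feature: does its mean differ across the 4 classes?
for j in range(X.shape[1]):
    groups = [X[y == k, j] for k in range(4)]
    F, p = f_oneway(*groups)
    if p < 0.01:
        print(f"feature {j}: F = {F:.1f}, p = {p:.3g} -> keep")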
Some general background collected from the answers. Linear discriminant analysis is most commonly used as a dimensionality reduction technique in the pre-processing step for pattern classification and machine learning applications: the goal is to project a dataset onto a lower-dimensional space with good class separability, in order to avoid overfitting (the "curse of dimensionality") and to reduce computational costs. Ronald A. Fisher formulated the linear discriminant in 1936. Feature selection, by contrast, is the process of choosing original variables that are useful in predicting the response (Y). It is considered good practice to identify which features are important when building predictive models, for two reasons: first, we need to keep the model simple and interpretable; second, including insignificant variables can significantly impact model performance. Removing irrelevant and redundant data also reduces computation time, improves learning accuracy, and facilitates a better understanding of the learning model. We often visualize the input data as a matrix, with each case (observation) being a row and each variable a column; discriminant analysis takes such a data set of cases as input, with one categorical variable defining the class and several numeric predictor variables.

Apart from models with built-in feature selection, most approaches for reducing the number of predictors can be placed into two main categories. Using the terminology of John, Kohavi, and Pfleger (1994), wrapper methods evaluate multiple models using procedures that add and/or remove predictors to find the optimal combination that maximizes model performance, while filter methods score each predictor on its own. A popular automatic wrapper method provided by the caret R package is Recursive Feature Elimination (RFE). Genetic algorithms are another wrapper approach (see e.g. "Feature Selection using Genetic Algorithms in R", Pablo Casas, January 2019): every possible solution of the GA, i.e. a candidate set of selected variables, is considered as a whole, so the GA will not rank variables individually against the target. A typical GA feature-selection routine takes the following arguments:

dataset - the dataset for which feature selection will be carried out
nosample - the number of instances drawn from the original dataset
threshold - the cutoff point to select the features
repet - the number of repetitions (it is recommended to use at most 10)

Filter-style relevance measures include RELIEFF (Kononenko, Simec and Robnik-Sikonja, "Overcoming the myopia of induction learning algorithms with RELIEFF", Applied Intelligence 7(1), 39-55, 1997; the original RELIEF algorithm appeared in Proc. Tenth National Conference on Artificial Intelligence, MIT Press, 129-134) and mutual information. In feature selection for document classification with LDA, one lets $d$ denote a document, $t$ a term and $c$ a class, and calculates the expected log ratio of joint to independent occurrence of term and class (the mutual information),

$$I(t, c) = \sum_{e_t \in \{0,1\}} \sum_{e_c \in \{0,1\}} P(e_t, e_c) \, \ln \frac{P(e_t, e_c)}{P(e_t)\,P(e_c)},$$

keeping the terms that are most informative about the class. A scikit-learn version of this screen is sketched below.
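A minimal sketch with scikit-learn's nonparametric estimator (k = 10 is an arbitrary choice here; mutual_info_classif estimates I(feature; class) from the data rather than from the discrete counts above):

import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = make_classification(n_samples=400, n_features=27, n_informative=8,
                           n_classes=4, random_state=0)
# keep the 10 features carrying the most mutual information about the class
selector = SelectKBest(mutual_info_classif, k=10).fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))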
A few closing notes from the thread. There are various classification algorithms available besides LDA, such as logistic regression, QDA, random forest and SVM, and they are worth benchmarking against each other; SVM in particular works well in high-dimensional spaces and for text or image classification, performs very well with the kernel trick on non-linearly separable data, and does not suffer from multicollinearity problems. Even if you end up with a single discriminant feature out of LDA, you can figure out whether it is good or not by how well it classifies. Note also that in the field of text mining, "LDA" usually refers to Latent Dirichlet Allocation, a topic-modelling technique (offered by scikit-learn alongside LSI and non-negative matrix factorization), not to linear discriminant analysis. One of the best ways to learn this material is by benchmarking yourself against the best data scientists in competitions, which gives a lot of insight into how you perform on a level playing field. Finally, whatever selection method you use, split your data before fitting and evaluating:

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

As was the case with PCA, we need to perform feature scaling for LDA too; a sketch follows.
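The scaling step itself, with scikit-learn's StandardScaler (a sketch continuing the split above; fitting on the training fold only avoids test-set leakage):

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler().fit(X_train)   # statistics from the training set only
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)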
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.20233194530010223, "perplexity": 1859.3363731048562}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 5, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991685.16/warc/CC-MAIN-20210512070028-20210512100028-00620.warc.gz"}
https://arxiv-export-lb.library.cornell.edu/abs/2110.11922v1
gr-qc

# Title: A singularity theorem for evaporating black holes

Abstract: The classical singularity theorems of General Relativity rely on energy conditions that are easily violated by quantum fields. Here, we provide motivation for an energy condition obeyed in semiclassical gravity: the smeared null energy condition (SNEC), a proposed bound on the weighted average of the null energy along a finite portion of a null geodesic. Using SNEC as an assumption, we proceed to prove a singularity theorem. This theorem extends the Penrose singularity theorem to semiclassical gravity and has interesting applications to evaporating black holes.

Comments: Contribution to the Proceedings of the 16th Marcel Grossmann Meeting (MG16), 9 pages, 2 figures. arXiv admin note: text overlap with arXiv:2012.11569
Subjects: General Relativity and Quantum Cosmology (gr-qc); High Energy Physics - Theory (hep-th)
Cite as: arXiv:2110.11922 [gr-qc] (or arXiv:2110.11922v1 [gr-qc] for this version)

## Submission history
From: Eleni-Alexandra Kontou
[v1] Fri, 22 Oct 2021 17:06:43 GMT (915kb,D)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8307042717933655, "perplexity": 3098.9484318134614}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304515.74/warc/CC-MAIN-20220124054039-20220124084039-00198.warc.gz"}
http://tex.stackexchange.com/questions/2132/how-to-define-a-command-that-takes-more-than-9-arguments?answertab=votes
# How to define a command that takes more than 9 arguments

I have a mathematical transformation that takes 16 parameters (grouped into 3+8+5) and would like to make a LaTeX command for it, so that I can easily change the notation for it if the need arises. As far as I know, both \def and \newcommand take a maximum of 9 arguments; is there any (recommended) way to extend this?

-
Perhaps you might show us the detail of what is wanted. This sounds like a question where the best answer will be to think carefully about the input you really require. – Joseph Wright Aug 21 '10 at 6:43
I edited the question to make it clear the parameters are not programmatic, but rather an unavoidable part of the maths that I'm using. – Simon Aug 21 '10 at 7:02
I wonder if there's a magic solution involving currying. – Seamus Feb 21 '13 at 16:19

You are going to have to parse the arguments some at a time and store them into temporary registers or macros. For example

\newcommand\foo[9]{%
  \def\tempa{#1}%
  \def\tempb{#2}%
  \def\tempc{#3}%
  \def\tempd{#4}%
  \def\tempe{#5}%
  \def\tempf{#6}%
  \def\tempg{#7}%
  \def\temph{#8}%
  \def\tempi{#9}%
  \foocontinued
}
\newcommand\foocontinued[7]{%
  % Do whatever you want with your 9+7 arguments here.
}

-
Thanks TH - that's the same solution as supplied in the "black TeX magic" link provided by mindcorrosive. I think that I'll use the xargs package, since it will make my code clearer and I like the simple default arguments. – Simon Aug 21 '10 at 6:55

There's the xargs package, and there's also some black TeX magic. As for myself, being conditioned in Python, I prefer the key-value parameter syntax provided by the keyval/xkeyval packages. On an unrelated note, if I find myself needing more than 9 parameters, that usually means that my macro/def/code organization is not very good, and I'd try to improve that first. But of course, there are legitimate situations where 9 parameters are perfectly okay, especially if you try to build a definition with a lot of knobs and tweaks.

-
Thanks, I don't know how my googling did not turn up the first option you gave. The 16 parameters define a nonlinear transformation; they're not options in the macro. – Simon Aug 21 '10 at 6:51
Actually, xargs does not allow more than 9 arguments; it only gives a neat interface for optional arguments. I'll have to use the TeX hack. – Simon Aug 21 '10 at 7:10
That's correct. Until you clarified what you need so many parameters for, I assumed it was for a macro, and you'd use the keyval interface. But of course in that case it's better with plain TeX. – Martin Tapankov Aug 21 '10 at 7:15

In a response to "How to use variables inside a command when generating a table?" I mention how the stringstrings package has a \getargs command that will parse large numbers of arguments that are passed within a single { }. To recap that reply,

\documentclass{article}
\usepackage{stringstrings}
\begin{document}
\getargs{1 2 3 4 5 6 7 8 9 10 11 12 FinalArgument}
There are \narg~arguments. The thirteenth is \argxiii
\end{document}

The result of this example is:

There are 13 arguments. The thirteenth is FinalArgument

EDIT: A much more efficient version of \getargs is available in the readarray package, called \getargsC (in deference to David Carlisle's help). Thus, the same task can be accomplished more quickly with

\documentclass{article}
\usepackage{readarray}% assumed here, since \getargsC lives in readarray
\begin{document}
\getargsC{1 2 3 4 5 6 7 8 9 10 11 12 FinalArgument}
There are \narg~arguments.
The thirteenth is \argxiii
\end{document}

-
Since it's a different technique, I also present the following: local macro definitions.

\documentclass{article}
\def\NineteenArgs#1#2#3#4#5#6#7#8#9{%
  \def\ArgsTenAndFurther##1##2##3##4##5##6##7##8##9{%
    \def\ArgNineteen####1{%
      ####1##9##8##7##6##5##4##3##2##1#9#8#7#6#5#4#3#2#1%
    }%
    \ArgNineteen%
  }%
  \ArgsTenAndFurther%
}
\begin{document}
%1234567890123456789
\NineteenArgs abcdefghijklmnopqrs
\end{document}

-
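To tie this back to the first answer's pattern: the original poster's 16-parameter transformation would be invoked by supplying all 16 brace groups in a row, the first 9 consumed by \foo and the next 7 by \foocontinued (an untested sketch; the p1..p16 names are placeholders):

\foo{p1}{p2}{p3}{p4}{p5}{p6}{p7}{p8}{p9}
    {p10}{p11}{p12}{p13}{p14}{p15}{p16}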
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8253761529922485, "perplexity": 1292.9858410082097}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042988458.74/warc/CC-MAIN-20150728002308-00095-ip-10-236-191-2.ec2.internal.warc.gz"}
http://www.numdam.org/item/ASNSP_2009_5_8_2_333_0/
Approximation of complex algebraic numbers by algebraic numbers of bounded degree

Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Série 5, Tome 8 (2009) no. 2, pp. 333-368.

To measure how well a given complex number $\xi$ can be approximated by algebraic numbers of degree at most $n$, one may use the quantities $w_n(\xi)$ and $w_n^*(\xi)$ introduced by Mahler and Koksma, respectively. The values of $w_n(\xi)$ and $w_n^*(\xi)$ have been computed for real algebraic numbers $\xi$, but up to now not for complex, non-real algebraic numbers $\xi$. In this paper we compute $w_n(\xi)$, $w_n^*(\xi)$ for all positive integers $n$ and algebraic numbers $\xi \in \mathbb{C} \setminus \mathbb{R}$, except for those pairs $(n, \xi)$ such that $n$ is even, $n \ge 6$ and $n+3 \le \deg \xi \le 2n-2$. It is known that every real algebraic number of degree $> n$ has the same values for $w_n$ and $w_n^*$ as almost every real number. Our results imply that for every positive even integer $n$ there are complex algebraic numbers $\xi$ of degree $> n$ which are unusually well approximable by algebraic numbers of degree at most $n$, i.e., have larger values for $w_n$ and $w_n^*$ than almost all complex numbers. We consider also the approximation of complex non-real algebraic numbers $\xi$ by algebraic integers, and show that if $\xi$ is unusually well approximable by algebraic numbers of degree at most $n$ then it is unusually badly approximable by algebraic integers of degree at most $n+1$. By means of Schmidt's Subspace Theorem we reduce the approximation problem of computing $w_n(\xi)$, $w_n^*(\xi)$ to an algebraic problem which is trivial if $\xi$ is real but much harder if $\xi$ is not real. We give a partial solution to this problem.

Classification: 11J68

@article{ASNSP_2009_5_8_2_333_0,
author = {Bugeaud, Yann and Evertse, Jan-Hendrik},
title = {Approximation of complex algebraic numbers by algebraic numbers of bounded degree},
journal = {Annali della Scuola Normale Superiore di Pisa - Classe di Scienze},
pages = {333--368},
publisher = {Scuola Normale Superiore, Pisa},
volume = {Ser. 5, 8},
number = {2},
year = {2009},
zbl = {1176.11031},
mrnumber = {2548250},
language = {en},
url = {http://www.numdam.org/item/ASNSP_2009_5_8_2_333_0/}
}

Bugeaud, Yann; Evertse, Jan-Hendrik. Approximation of complex algebraic numbers by algebraic numbers of bounded degree. Annali della Scuola Normale Superiore di Pisa - Classe di Scienze, Série 5, Tome 8 (2009) no. 2, pp. 333-368. http://www.numdam.org/item/ASNSP_2009_5_8_2_333_0/
| EuDML 207401 | MR 1760090 [6] J. W. S. Cassels, “An Introduction to the Geometry of Numbers”, Springer Verlag, 1997. | MR 1434478 [7] H. Davenport and W. M. Schmidt, Approximation to real numbers by quadratic irrationals, Acta Arith. 13 (1967), 169–176. | EuDML 204825 | MR 219476 | Zbl 0155.09503 [8] H. Davenport and W. M. Schmidt, A theorem on linear forms, Acta Arith. 14 (1967/1968), 209–223. | EuDML 204859 | MR 225728 | Zbl 0179.07303 [9] H. Davenport and W. M. Schmidt, Approximation to real numbers by algebraic integers, Acta Arith. 15 (1969), 393–416. | EuDML 204905 | MR 246822 | Zbl 0186.08603 [10] H. Davenport and W. M. Schmidt, Dirichlet’s theorem on Diophantine approximation II, Acta Arith. 16 (1970), 413–423. | EuDML 204938 | MR 279040 | Zbl 0201.05501 [11] J.-H. Evertse and H.P. Schlickewei, A quantitative version of the Absolute Parametric Subspace Theorem, J. Reine Angew. Math. 548 (2002), 21–127. | MR 1915209 | Zbl 1026.11060 [12] A. Ya. Khintchine, Über eine Klasse linearer diophantischer Approximationen, Rend. Circ. Mat. Palermo 50 (1926), 170–195. | JFM 52.0183.01 [13] J. F. Koksma, Über die Mahlersche Klasseneinteilung der transzendenten Zahlen und die Approximation komplexer Zahlen durch algebraische Zahlen, Monatsh. Math. Phys. 48 (1939), 176–189. | JFM 65.0180.01 | MR 845 [14] K. Mahler, Zur Approximation der Exponentialfunktionen und des Logarithmus. I, II, J. Reine Angew. Math. 166 (1932), 118–150. | EuDML 183466 | JFM 58.0207.01 | MR 1581302 [15] K. F. Roth, Rational approximations to algebraic numbers, Matematika 2 (1955), 1–20; corrigendum, 168. | MR 72182 | Zbl 0064.28501 [16] D. Roy, Approximation simultanée d’un nombre et son carré, C. R. Acad. Sci. Paris 336 (2003), 1–6. | MR 1968892 [17] D. Roy, Approximation to real numbers by cubic algebraic numbers, I, Proc. London Math. Soc. 88 (2004), 42–62. | MR 2018957 | Zbl 1035.11028 [18] D. Roy, Approximation to real numbers by cubic algebraic numbers, II, Ann. of Math. 158 (2003), 1081–1087. | MR 2031862 | Zbl 1044.11061 [19] D. Roy and M. Waldschmidt, Diophantine approximation by conjugate algebraic integers, Compositio Math. 140 (2004), 593–612. | MR 2041771 | Zbl 1055.11043 [20] W. M. Schmidt, Simultaneous approximation to algebraic numbers by rationals, Acta Math. 125 (1970), 189–201. | MR 268129 | Zbl 0205.06702 [21] W. M. Schmidt, Linearformen mit algebraischen Koeffizienten. II, Math. Ann. 191 (1971), 1–20. | EuDML 162115 | MR 308062 | Zbl 0198.07103 [22] W. M. Schmidt, “Approximation to Algebraic Numbers”, Monographie de l’Enseignement Mathématique 19, Genève, 1971. | MR 327672 | Zbl 0226.10033 [23] W. M. Schmidt, “Diophantine Approximation”, Lecture Notes in Math. 785, Springer, Berlin, 1980. | MR 568710 | Zbl 0421.10019 [24] V. G. Sprindžuk, “Mahler’s Problem in Metric Number Theory”, Izdat. “Nauka i Tehnika” , Minsk, 1967 (in Russian). English translation by B. Volkmann, Translations of Mathematical Monographs, Vol. 25, American Mathematical Society, Providence, R.I., 1969. [25] E. Wirsing, Approximation mit algebraischen Zahlen beschränkten Grades, J. Reine Angew. Math. 206 (1961), 67–77. | EuDML 150474 | MR 142510 | Zbl 0097.03503
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 33, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.744683027267456, "perplexity": 1198.4469609839991}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585209.43/warc/CC-MAIN-20211018190451-20211018220451-00630.warc.gz"}
http://blog.jverkamp.com/2012/12/22/nested-primes/
# Nested Primes

Yesterday's post from Programming Praxis poses an interesting problem: find the largest prime n such that the result of repeatedly removing each digit of n from left to right is also always prime. For example, 6317 would be such a number, as not only is it prime, but so are 317, 17, and 7.

This is actually a surprisingly straightforward problem if, instead of starting at the largest such number and checking to make sure that each part along the way is prime, you start with the single-digit primes (2, 3, 5, and 7) and repeatedly add digits on the left so long as the result is prime. For example, 3 would become 13, 23, 43, 53, 73, and 83. 13 in turn would become 113, 313, and 613.

It turns out though that this list doesn't grow without bound. Eventually each branch will terminate (a proof of why this happens would be interesting, but I'm not sure how it would work, or even if such a thing is provable short of proof by enumerating all of the solutions).

Enough discussion though, it's code time! (Only Racket today as I'm running a bit short on time for my writeup. If anyone would like a Python version as well, let me know in the comments and I'll write one up.)

; find the largest prime such that each number produced by
; removing digits from the left side is still prime
; #:root - the prime to start at, 0 for all primes
; #:limit - the largest prime to check
; (digits and prime? are small helpers from the full source linked below)
(define (largest-nested-prime #:root [root 0] #:limit [limit +inf.0])
  ; loop starting at the root
  (let loop ([n root])
    (cond
      ; if we're over the limit, no largest prime
      [(> n limit) 0]
      ; otherwise, try each prefixed digit while still prime
      [else
       (define multiplier (expt 10 (digits n)))
       ; use for/fold to emulate what for/max would do
       (for/fold ([best n])
                 ([i (in-range 1 10)]
                  #:when (prime? (+ (* multiplier i) n)))
         ; return the best of the current and the nested
         (max best (loop (+ (* multiplier i) n))))])))

That's a whopping 10 lines of code if you remove the comments and collapse the define back into a single line. Not too bad for what it's calculating. It takes rather a while to run, but eventually you'll get an answer:

> (largest-nested-prime)
357686312646216567629137

What I'm curious about, though, is whether there's a better way to do this. I used the trial-division method of determining if a number was prime, but I wonder if either a statistical method or perhaps a sieve would work better. Perhaps it'll be something to try one day if I get bored. :)

One additional thing I did was to work out the nested tree structure that you get with something like this. The code is available at GitHub, but here's a sample run just to whet your appetite:

> (nested-primes #:root 3 #:limit 1000)
'(3 (13 (113) (313) (613))
    (23 (223) (523) (823))
    (43 (443) (643) (743))
    (53 (353) (653) (853) (953))
    (73 (173) (373) (673) (773))
    (83 (283) (383) (683) (883) (983)))

Here's a nice chart of that: [tree diagram of the nested primes rooted at 3]

My goal is to generate a list of all such primes and make a graph of them all, just to see if there's any sort of visual pattern. I was going to include that in this post, but I have a feeling that code is going to take more than a little time to generate (and actually returning the list directly isn't probably a good idea, as that way it all has to exist in memory at the same time). So I'll probably optimize this and post it again later. Graph visualization is fun!

If you'd like to download the full code for today's post, you can do so here: nested primes source
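For anyone who wants the Python version mentioned above, here is a rough sketch of the same breadth-first prepend-a-digit search (it leans on sympy's isprime instead of a hand-rolled trial-division test):

from sympy import isprime

def largest_nested_prime():
    # start from the single-digit primes and prepend digits while prime
    best = 0
    frontier = [2, 3, 5, 7]
    multiplier = 10
    while frontier:
        best = max(best, max(frontier))
        frontier = [d * multiplier + n
                    for n in frontier
                    for d in range(1, 10)
                    if isprime(d * multiplier + n)]
        multiplier *= 10
    return best

print(largest_nested_prime())  # 357686312646216567629137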
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.30635684728622437, "perplexity": 889.4204798137856}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414637900031.50/warc/CC-MAIN-20141030025820-00061-ip-10-16-133-185.ec2.internal.warc.gz"}
https://www.lessonplanet.com/teachers/expanded-form-e-4th-6th
Expanded Form [E]

In this expanded form worksheet, students write a set of 12 numbers in expanded form; all numbers are 3 digits, and answers are included on page 2.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9472760558128357, "perplexity": 2249.574457949731}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-39/segments/1505818689779.81/warc/CC-MAIN-20170923213057-20170923233057-00303.warc.gz"}
http://www.physicsgre.com/viewtopic.php?f=19&t=5588
## A discussion problem of thermodynamics from University Physics

Himanshu_Shukla
Posts: 5
Joined: Sat Jul 19, 2014 10:42 am

### A discussion problem of thermodynamics from University Physics

Q19.17. The prevailing winds on the Hawaiian island of Kauai blow from the northeast. The winds cool as they go up the slope of Mt. Waialeale (elevation 1523 m), causing water vapor to condense and rain to fall. There is much more precipitation at the summit than at the base of the mountain. In fact, Mt. Waialeale is the rainiest spot on earth, averaging 11.7 m of rainfall a year. But what makes the winds cool?

Q19.18. Applying the same considerations as in Question 19.17, explain why the island of Niihau, a few kilometers to the southwest of Kauai, is almost a desert and farms there need to be irrigated.

blighter
Posts: 256
Joined: Thu Jan 26, 2012 6:30 pm

### Re: A discussion problem of thermodynamics from University Physics

Himanshu_Shukla wrote: [quotes Q19.17 and Q19.18 above]

Chiron
Posts: 9
Joined: Tue Sep 01, 2015 10:37 am

### Re: A discussion problem of thermodynamics from University Physics

I know that this is an older post, but in the hope that others may also read this, I'm posting my take on these questions.

For the first question, one way to look at this is that the hot air will rise. However, as it rises the pressure decreases. Thus, assuming the gas to be ideal and using the equation $PV = N k_B T$, we see that if the volume of the air stays relatively constant and the pressure decreases, then the temperature must decrease as well.

For the second problem, at least one way to look at it is that much of the moisture has already been expelled. Using the same principle as before, as the air goes down in elevation the pressure increases, causing the temperature to increase. Therefore the water vapor will not condense, and you find a lack of rain.
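For completeness, a standard way to reach the same conclusion without assuming constant volume (a rising parcel actually expands) is to treat the ascent as adiabatic:

$$P V^{\gamma} = \text{const} \quad\text{and}\quad PV = N k_B T \quad\Longrightarrow\quad T \propto P^{(\gamma - 1)/\gamma},$$

so the parcel's temperature falls as the ambient pressure drops with altitude, and rises again on the descent toward Niihau.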
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 1, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8644367456436157, "perplexity": 1545.469791247763}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560279933.49/warc/CC-MAIN-20170116095119-00540-ip-10-171-10-70.ec2.internal.warc.gz"}
https://arxiv.org/abs/1703.05358
astro-ph.SR

# Title: Unstable standard candles. Periodic light curve modulation in fundamental mode classical Cepheids

Authors: R. Smolec

Abstract: We report the discovery of periodic modulation of pulsation in 51 fundamental mode classical Cepheids of the Magellanic Clouds observed by the Optical Gravitational Lensing Experiment. Although the overall incidence rate is very low, about 1 per cent in each of the Magellanic Clouds, in the case of the SMC and pulsation periods between 12 and 16d the incidence rate is nearly 40 per cent. On the other hand, in the LMC the highest incidence rate is 5 per cent for pulsation periods between 8 and 14d, and the overall amplitude of the effect is smaller. It indicates that the phenomenon is metallicity dependent. Typical modulation periods are between 70 and 300d. In nearly all stars the mean brightness is modulated, which, in principle, may influence the use of classical Cepheids for distance determination. Fortunately, the modulation of mean brightness does not exceed 0.01 mag in all but one star. Also, the effect averages out in typical observations spanning a long time base. Consequently, the effect of modulation on the determination of the distance moduli is negligible. The relative modulation amplitude of the fundamental mode is also low and, with one exception, it does not exceed 6 per cent. The origin of the modulation is unknown. We draw a hypothesis that the modulation is caused by the 2:1 resonance between the fundamental mode and the second overtone that shapes the famous Hertzsprung bump progression.

Comments: 13 pages, 14 figures, accepted for publication in MNRAS
Subjects: Solar and Stellar Astrophysics (astro-ph.SR); Cosmology and Nongalactic Astrophysics (astro-ph.CO)
Journal reference: MNRAS, 468(4): 4299-4310 (2017)
DOI: 10.1093/mnras/stx679
Cite as: arXiv:1703.05358 [astro-ph.SR] (or arXiv:1703.05358v1 [astro-ph.SR] for this version)
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9067738056182861, "perplexity": 1771.5290377947747}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320491.13/warc/CC-MAIN-20170625115717-20170625135717-00110.warc.gz"}
http://mathematica.stackexchange.com/questions?page=115&sort=newest
# All Questions

172 views

### Using Mathematica's Position Function

I have a list a = {-1, 2/3, -2 - 4 I, -2 + 7 I, 2 I} and when I use the Position[a, I] it returns ...

91 views

### None of the NMinimize methods leads to the answers of the equation

The following 10 equations have one set of exact answers using NSolve in Mathematica 9.0.0.0 (for positive c2, alpha1, alpha3, beta1, beta3, A1 and A3): ...

58 views

### Annotating output without changing the value [duplicate]

I like to annotate output, using something like the following: ...

110 views

### Depicting a specific element of a table (knowing its position) in a different color

There are some 20-by-20 tables; in each step we choose one element randomly. How can we show this randomly selected element in a different color?

70 views

### Getting error from CommunityGraphPlot

Situation: I'd like to analyse my web site with CommunityGraphPlot as follows: Step 1: Define a function to scrape all the webpages from the expected website. ...

34 views

### Specifying values in a system of equations

Suppose I have a system of equations, for example

Clear[a, b, c]
Reduce[{RCI7[{a, b, c}] > 0, RCI7[{b, c, a}] > 0, RCI7[{c, a, b}] > 0}, {a, b, c}] // N ...

49 views

### How can I prevent exponents from appearing in an expression?

I've found posts dealing with showing/hiding exponents for numbers, like in scientific notation. But what if I have a general expression? I have an expression like this, ...

165 views

### Animating magnetic field lines on a circle

So I want to create a changing vector as a function of theta and have its origin stay on the path of a circle. The vector I want to plot is: ...

53 views

### Please, can someone solve this equation [closed]

Solve the equation e^(at) + e^(a(t-T)) = 2. Please can someone help me solve this; I need a in terms of t and T.

65 views
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6870643496513367, "perplexity": 2356.9896639237963}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-22/segments/1464049270555.40/warc/CC-MAIN-20160524002110-00208-ip-10-185-217-139.ec2.internal.warc.gz"}
https://docs.astropy.org/en/latest/_modules/astropy/cosmology/flrw/wpwazpcdm.html
# Source code for astropy.cosmology.flrw.wpwazpcdm

# Licensed under a 3-clause BSD style license - see LICENSE.rst

from numpy import exp

import astropy.units as u
from astropy.cosmology import units as cu
from astropy.cosmology.parameter import Parameter
from astropy.cosmology.utils import aszarr

from . import scalar_inv_efuncs
from .base import FLRW

__all__ = ["wpwaCDM"]

__doctest_requires__ = {"*": ["scipy"]}


class wpwaCDM(FLRW):
    r"""
    FLRW cosmology with a CPL dark energy equation of state, a pivot
    redshift, and curvature.

    The equation for the dark energy equation of state uses the CPL form as
    described in Chevallier & Polarski [1]_ and Linder [2]_, but modified to
    have a pivot redshift as in the findings of the Dark Energy Task
    Force [3]_:
    :math:`w(a) = w_p + w_a (a_p - a) = w_p + w_a (1/(1+z_p) - 1/(1+z))`.

    Parameters
    ----------
    H0 : float or scalar quantity-like ['frequency']
        Hubble constant at z = 0. If a float, must be in [km/sec/Mpc].

    Om0 : float
        Omega matter: density of non-relativistic matter in units of the
        critical density at z=0.

    Ode0 : float
        Omega dark energy: density of dark energy in units of the critical
        density at z=0.

    wp : float, optional
        Dark energy equation of state at the pivot redshift zp. This is
        pressure/density for dark energy in units where c=1.

    wa : float, optional
        Negative derivative of the dark energy equation of state with
        respect to the scale factor. A cosmological constant has wp=-1.0
        and wa=0.0.

    zp : float or quantity-like ['redshift'], optional
        Pivot redshift -- the redshift where w(z) = wp

    Tcmb0 : float or scalar quantity-like ['temperature'], optional
        Temperature of the CMB z=0. If a float, must be in [K].
        Default: 0 [K]. Setting this to zero will turn off both photons
        and neutrinos (even massive ones).

    Neff : float, optional
        Effective number of Neutrino species. Default 3.04.

    m_nu : quantity-like ['energy', 'mass'] or array-like, optional
        Mass of each neutrino species in [eV] (mass-energy equivalency
        enabled). If this is a scalar Quantity, then all neutrino species
        are assumed to have that mass. Otherwise, the mass of each species.
        The actual number of neutrino species (and hence the number of
        elements of m_nu if it is not scalar) must be the floor of Neff.
        Typically this means you should provide three neutrino masses
        unless you are considering something like a sterile neutrino.

    Ob0 : float or None, optional
        Omega baryons: density of baryonic matter in units of the critical
        density at z=0. If this is set to None (the default), any
        computation that requires its value will raise an exception.

    name : str or None (optional, keyword-only)
        Name for this cosmological object.

    meta : mapping or None (optional, keyword-only)
        Metadata for the cosmology, e.g., a reference.

    Examples
    --------
    >>> from astropy.cosmology import wpwaCDM
    >>> cosmo = wpwaCDM(H0=70, Om0=0.3, Ode0=0.7, wp=-0.9, wa=0.2, zp=0.4)

    The comoving distance in Mpc at redshift z:

    >>> z = 0.5
    >>> dc = cosmo.comoving_distance(z)

    References
    ----------
    .. [1] Chevallier, M., & Polarski, D. (2001). Accelerating Universes
           with Scaling Dark Matter. International Journal of Modern
           Physics D, 10(2), 213-223.
    .. [2] Linder, E. (2003). Exploring the Expansion History of the
           Universe. Phys. Rev. Lett., 90, 091301.
    .. [3] Albrecht, A., Amendola, L., Bernstein, G., Clowe, D.,
           Eisenstein, D., Guzzo, L., Hirata, C., Huterer, D., Kirshner,
           R., Kolb, E., & Nichol, R. (2009). Findings of the Joint Dark
           Energy Mission Figure of Merit Science Working Group. arXiv
           e-prints, arXiv:0901.0721.
    """

    wp = Parameter(
        doc="Dark energy equation of state at the pivot redshift zp.",
        fvalidate="float",
    )
    wa = Parameter(
        doc="Negative derivative of dark energy equation of state w.r.t. a.",
        fvalidate="float",
    )
    zp = Parameter(doc="The pivot redshift, where w(z) = wp.", unit=cu.redshift)

    def __init__(
        self,
        H0,
        Om0,
        Ode0,
        wp=-1.0,
        wa=0.0,
        zp=0.0 * cu.redshift,
        Tcmb0=0.0 * u.K,
        Neff=3.04,
        m_nu=0.0 * u.eV,
        Ob0=None,
        *,
        name=None,
        meta=None,
    ):
        super().__init__(
            H0=H0,
            Om0=Om0,
            Ode0=Ode0,
            Tcmb0=Tcmb0,
            Neff=Neff,
            m_nu=m_nu,
            Ob0=Ob0,
            name=name,
            meta=meta,
        )
        self.wp = wp
        self.wa = wa
        self.zp = zp

        # Please see :ref:`astropy-cosmology-fast-integrals` for discussion
        # about what is being done here.
        apiv = 1.0 / (1.0 + self._zp.value)
        if self._Tcmb0.value == 0:
            self._inv_efunc_scalar = scalar_inv_efuncs.wpwacdm_inv_efunc_norel
            self._inv_efunc_scalar_args = (
                self._Om0,
                self._Ode0,
                self._Ok0,
                self._wp,
                apiv,
                self._wa,
            )
        elif not self._massivenu:
            self._inv_efunc_scalar = scalar_inv_efuncs.wpwacdm_inv_efunc_nomnu
            self._inv_efunc_scalar_args = (
                self._Om0,
                self._Ode0,
                self._Ok0,
                self._Ogamma0 + self._Onu0,
                self._wp,
                apiv,
                self._wa,
            )
        else:
            self._inv_efunc_scalar = scalar_inv_efuncs.wpwacdm_inv_efunc
            self._inv_efunc_scalar_args = (
                self._Om0,
                self._Ode0,
                self._Ok0,
                self._Ogamma0,
                self._neff_per_nu,
                self._nmasslessnu,
                self._nu_y_list,
                self._wp,
                apiv,
                self._wa,
            )

    def w(self, z):
        r"""Returns dark energy equation of state at redshift ``z``.

        Parameters
        ----------
        z : Quantity-like ['redshift'], array-like, or `~numbers.Number`
            Input redshift.

        Returns
        -------
        w : ndarray or float
            The dark energy equation of state.
            Returns `float` if the input is scalar.

        Notes
        -----
        The dark energy equation of state is defined as
        :math:`w(z) = P(z)/\rho(z)`, where :math:`P(z)` is the pressure at
        redshift z and :math:`\rho(z)` is the density at redshift z, both
        in units where c=1. Here this is
        :math:`w(z) = w_p + w_a (a_p - a)` where :math:`a = 1/(1+z)` and
        :math:`a_p = 1/(1 + z_p)`.
        """
        apiv = 1.0 / (1.0 + self._zp.value)
        return self._wp + self._wa * (apiv - 1.0 / (aszarr(z) + 1.0))

    def de_density_scale(self, z):
        r"""Evaluates the redshift dependence of the dark energy density.

        Parameters
        ----------
        z : Quantity-like ['redshift'], array-like, or `~numbers.Number`
            Input redshift.

        Returns
        -------
        I : ndarray or float
            The scaling of the energy density of dark energy with redshift.
            Returns `float` if the input is scalar.

        Notes
        -----
        The scaling factor, I, is defined by :math:`\rho(z) = \rho_0 I`,
        and in this case is given by

        .. math::

           a_p = \frac{1}{1 + z_p}

           I = \left(1 + z\right)^{3 \left(1 + w_p + a_p w_a\right)}
               \exp \left(-3 w_a \frac{z}{1+z}\right)
        """
        z = aszarr(z)
        zp1 = z + 1.0  # (converts z [unit] -> z [dimensionless])
        apiv = 1.0 / (1.0 + self._zp.value)
        return zp1 ** (3.0 * (1.0 + self._wp + apiv * self._wa)) * exp(
            -3.0 * self._wa * z / zp1
        )
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7473253011703491, "perplexity": 14639.621804080456}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710462.59/warc/CC-MAIN-20221128002256-20221128032256-00689.warc.gz"}
http://physics.stackexchange.com/questions/4147/covariant-description-of-light-scattering-at-a-fastly-rotating-cylinder/22591
# Covariant Description of Light Scattering at a Rapidly Rotating Cylinder

Let us consider the following Gedankenexperiment: a cylinder rotates symmetrically around the $z$ axis with angular velocity $\Omega$, and a plane wave with $\mathbf{E}\text{, }\mathbf{B} \propto e^{\mathrm{i}\left(kx - \omega t \right)}$ gets scattered by it. We assume that we know the isotropic permittivity $\epsilon(\omega)$ and permeability $\mu(\omega)$ of the cylinder's material at rest. Furthermore, the cylinder is infinitely long in the $z$-direction.

The static problem ($\Omega = 0$) can be treated in terms of Mie theory - here, however, one will need a covariant description of the system for very fast rotations (which are assumed to be possible), causing nontrivial transformations of $\epsilon$ and $\mu$. Hence my question:

### What is the scattering response of a rapidly rotating cylinder to a plane wave?

-
What is "infinite" on that cylinder? – Georg Jan 29 '11 at 13:35
Thank you @Georg for pointing out the misleading formulation. I mean infinite in the $z$-direction. I will change it in a second :) Greets – Robert Filter Jan 29 '11 at 13:38
@Carl: You might consider that there are still some things in classical electrodynamics which are somehow basic but not standard homework problems. To my mind, the covariant description of electrodynamics in media belongs to this class. Greets – Robert Filter Jan 30 '11 at 12:24
Robert, it might be useful to begin with the case of light impinging on a moving half-infinite medium (i.e. an infinite plane dividing space into two different materials). That case solves trivially (just boost the case for the non-moving material), and can (I think) be summed up to give a limiting case for the rotating cylinder (in the limit of small wavelength). But it's been 30 years since I took E&M and it was never my "best" subject. – Carl Brannen Jan 30 '11 at 23:51
@Carl: Thank you for the hint. The difference to the reflection problem at a half-space is the rotational character of the system. One attempt to solve the problem is to go into a co-rotating coordinate system and transform the plane wave accordingly - in this case I am not sure if such a framework is physically correct. The other way would be to just covariantly transform the medium - this is much more general, since we would learn about the special relativistic relation of $\epsilon$ and $\mu$. Greets – Robert Filter Jan 31 '11 at 9:13

First of all, I don't quite understand the following phrase: "The static problem ($\Omega=0$) can be treated in terms of Mie Theory". The Mie theory is for diffraction on a homogeneous sphere, not a cylinder. The complete solution of the problem of diffraction of electromagnetic waves on an infinite homogeneous cylinder was obtained in J. R. Wait, Can. Journ. of Phys. 33, 189 (1955) (or you may find an outline of Wait's solution for a cylindrical wave in http://arxiv.org/abs/physics/0405091, Section III). This solution is rather complex, so I suspect your problem can only be solved numerically, as it is significantly more complex still. Wait's problem is a special case of your problem, so the solution of the latter cannot be simpler than Wait's. In particular, it seems advisable to expand your plane wave into cylindrical waves, following Wait. It seems that the material equations for the rotating cylinder can be obtained following http://arxiv.org/abs/1104.0574 (Am. J. Phys. 78, 1181 (2010)).
However, the cylinder will not be homogeneous (the material properties will depend on the distance from the axis and may be anisotropic). I suspect the problem can be solved using numerical solution of an ordinary differential equation for the parameters of the cylindrical waves.

-
Can you at least solve the problem analytically for some special cases where still $\Omega\neq 0$? – Alexey Bobrick Mar 18 '12 at 15:10
Probably. For example, for a perfectly conducting cylinder, the radiation will not penetrate significantly into the cylinder, so the problem would be pretty much equivalent to that for a homogeneous cylinder. This case may look relatively trivial though. Anyway, I am afraid I don't have much time or motivation to solve this problem. For example, I am not enthusiastic about studying the Am. J. Phys. article trying to determine the electric properties of the rotating cylinder. With all due respect, the author of the question may be in a better position to do that. – akhmeteli Mar 18 '12 at 17:19
Thank you @akhmeteli for your input. I however think that the determination of the properties of the rotating cylinder is at the core of this problem - how do $\epsilon$ and $\mu$ transform? Greets – Robert Filter Apr 22 '12 at 10:06
@Robert Filter: I agree. However, this issue is discussed, e.g., in the Am. J. Phys. article I cited. I am not sure though that it would be possible to find an exact solution of the diffraction problem for the inhomogeneous cylinder (or the exact solution may be too complex to be useful). – akhmeteli Apr 22 '12 at 16:07

Look here for some details: Some remarks on scattering by a rotating dielectric cylinder. Also see the articles that cite it.
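A side note on the cylindrical-wave expansion suggested in the first answer: the decomposition of the incident plane wave rests on the Jacobi-Anger identity, which is easy to verify numerically. A minimal sketch (assuming numpy and scipy are available; the values of k, r and phi are arbitrary test values):

    import numpy as np
    from scipy.special import jv  # Bessel function of the first kind

    # Jacobi-Anger: exp(i*k*r*cos(phi)) = sum_n i^n J_n(k*r) exp(i*n*phi)
    k, r, phi = 2.0, 1.3, 0.7
    lhs = np.exp(1j * k * r * np.cos(phi))
    n = np.arange(-40, 41)  # the terms decay rapidly once |n| > k*r
    rhs = np.sum(1j ** n * jv(n, k * r) * np.exp(1j * n * phi))
    print(abs(lhs - rhs))  # ~1e-16: the truncated series reproduces the plane wave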
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.863249659538269, "perplexity": 329.8814125801043}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398471441.74/warc/CC-MAIN-20151124205431-00340-ip-10-71-132-137.ec2.internal.warc.gz"}
http://mathhelpforum.com/discrete-math/140159-subscripts-p.html
# Math Help - Subscripts with a P?

1. ## Subscripts with a P?

I'm really sorry if this doesn't belong here. I just registered and I'm new. I don't know if it belongs here because I haven't seen it before, so I don't know what classification of math it falls under. The problem is: "By how much does 6P3 exceed 6P2?"

2. Originally Posted by DPooch
"The problem is: 'By how much does 6P3 exceed 6P2?'"

$^nP_k = \frac{n!}{(n-k)!}$

Therefore

$^6P_3-^6P_2 = \frac{6!}{(6-3)!}- \frac{6!}{(6-2)!}$

Can you finish it?

3. Originally Posted by pickslides
"Can you finish it?"

720/6 - 720/24 = 120 - 30 = 90? Thanks for the help. Though, what is this called and what section should it be in?

4. Originally Posted by DPooch
"Though, what is this called and what section should it be in?"

It's called a permutation (the number of ordered arrangements of k objects taken from n). It's fine where it is, but more correctly could be put in "Basic Statistics and Probability" or even "Discrete Mathematics". Moderator edit: Moved to Discrete Maths.
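For anyone who wants to check the arithmetic, Python's standard library has had a permutation function since version 3.8; a one-line verification:

    import math

    # 6P3 - 6P2 = 720/6 - 720/24 = 120 - 30
    print(math.perm(6, 3) - math.perm(6, 2))  # 90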
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 4, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9017917513847351, "perplexity": 1746.3041087340548}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447548655.55/warc/CC-MAIN-20141224185908-00081-ip-10-231-17-201.ec2.internal.warc.gz"}
http://stackapps.com/questions/3373/mathjax-buttons/3382
# MathJax Buttons

Adds some math/science buttons to the editor on science SE sites. These buttons are useful for converting selected text to math, formatting SI units, and formatting chemical equations. There are also keyboard shortcuts for them, which are IMO more useful than the buttons themselves. (The SI units and chem buttons are only enabled on certain sites.)

Currently, it runs on the SE sites listed on the site matrix here. There also is an exit-inline-math-mode hotkey (Alt-Z). This one moves the cursor just after the next instance of $. It's useful if you want to keep the flow of typing. For example, on Chem.SE, formatting H2O becomes Alt-C + type H2O + Alt-Z, and you can immediately continue typing (no need to rightarrow out of math mode). On CStheory/CS, typing "I like NP-complete problems!", where "NP" is sans-serif, becomes: type 'I like' + Alt-S + type NP + Alt-Z + type '-complete problems!'. Once you're used to the shortcut, typing math fluidly becomes much easier!

If you want your mathjax-enabled SE site to be supported, please let me know! My long term goal is to make this script a part of it (I'm still working on that script, though -- so it will be a while).

## Installation

• Click here to install (requires Greasemonkey on Firefox). If you have trouble installing (like Chrome blocking the install), or if you wish to install it on another browser, please see here for full step-by-step instructions on installation.
• Source

## Buttons supported

### Dollarify ($)
• Encloses selection in $...$
• Enabled on all supported sites
• Keyboard shortcut: Alt-M

### Double Dollarify ($$)
• Encloses selection in $$...$$
• Enabled on all supported sites
• Keyboard shortcut: Alt-D

### SI-ify (SI)
• Encloses selection in \:\mathrm{...} (upright text with an extra separator space for SI units)
• Enabled on Physics, Chemistry, and Biology
• Keyboard shortcut: Alt-S

### chem-ify (O2)
• Encloses selection in \ce{...} (mhchem chemical equation formatter)
• Use the $ button on the \ce'd text to make this a block element (use Alt-M)
• Enabled on Chemistry only
• Keyboard shortcut: Alt-C

(Full button list below for more site-specific buttons.)

## Screenshots

(Before/after screenshots for the Chem, Dollarify and SI-ify buttons; images not reproduced here.)

-
Can I use this script on websites other than StackExchange? What kind of modifications are needed if it is possible? – LifeH2O Nov 26 '13 at 11:29
@LifeH2O You would need to find another place for the buttons to be kept. In addition, you would have to remove the StackExchange-specific code in the keyboard event handler. – Manishearth Nov 26 '13 at 12:59
Is there any way I can just use the editor from math.stackexchange on my site? Both PageDown/WMD and MathJax are available separately. Is the combination which is (probably) used on math.stackexchange available to use? – LifeH2O Nov 26 '13 at 13:06
@LifeH2O No, I've used $..$ and $$..$$. For EE.SE the dollars must be slash-escaped. Re: Math.SE: I don't know, just loading PageDown/WMD/MathJax should do. – Manishearth Nov 26 '13 at 13:08
My apologies. I actually tested this kodershaven.blogspot.com/2011/10/adding-mathjax-toolbar.html and it has those issues. Now going to try yours. – LifeH2O Nov 26 '13 at 13:17
Recently my Chrome browser refuses to enable the script, because it is not from the store. Browsing Google, I did not find a solution for how to use a custom-built extension. Do you have any workaround for that? Or could you make it available via the Chrome store?
– Martin Jun 26 at 7:43
@Martin check the full step-by-step link in the installation portion of this post. I may make it a web store app once I clean up some of the code. – Manishearth Jun 26 at 8:55
@Manishearth The installation is absolutely not the problem. It is a new feature of Chrome that hard-disables the script. When I try to run it with Tampermonkey, the whole toolbar is gone. – Martin Jun 26 at 9:23
@Martin you have to install it in developer mode then. I'll try to get this on the store when my schedule settles down. – Manishearth Jun 26 at 9:44
Well, I tried that, but Chrome still refuses. Upon restart it was deactivated again, even in developer mode. – Martin Jun 26 at 9:58
The spaces behind the thousand separators are not perfect; you can use {,} to avoid that space. Maybe the script could take care of that as well. – queueoverflow Jul 24 at 19:50
No matter what you say, "Ugly" and "Pretty!" are both ugly. :P – mypal125 Sep 6 at 18:03

## 6 Answers

Place to dump the list of all MathJax sites. I will take a look at these and determine which configuration to give to each one, and whether they need extra buttons.

http://stats.stackexchange.com Dollars (implemented 1.0.3)
http://meta.stats.stackexchange.com Ditto
http://math.stackexchange.com Dollars only (implemented)
http://meta.math.stackexchange.com Dollars only (implemented)
http://cstheory.stackexchange.com Dollars, big O, sans-serif (implemented 2.2)
http://meta.cstheory.stackexchange.com Ditto
http://electronics.stackexchange.com Dollars and SI (implemented)
http://meta.electronics.stackexchange.com Ditto
http://physics.stackexchange.com Dollars and SI (implemented). Also \mathbf{..} (1.0.3).
http://quant.stackexchange.com Dollars. Implemented 1.0.3
http://meta.quant.stackexchange.com Dollars. Implemented 1.0.3
http://crypto.stackexchange.com Dollars and \mathcal{O}(...)?
http://dsp.stackexchange.com Dollars (implemented 2.0.0)
http://scicomp.stackexchange.com Dollars (implemented 2.2.2)
http://mathematica.stackexchange.com Dollars (implemented)
http://cogsci.stackexchange.com Dollars (implemented 2.1.1)
http://cs.stackexchange.com Dollars, big O, sans-serif (implemented 2.2)
http://chemistry.stackexchange.com Dollars, SI, chem (implemented)

- I shall make a matrix and add it to the Git wiki once I finish processing these sites – Manishearth May 5 '12 at 15:46

Great script! However, I've got a bug report: on Electrical Engineering, where the $ button creates slash-escaped \$..\$ delimiters (thanks for that exception, by the way), the SansSerif button still creates unescaped $..$ delimiters, which do not work. I'm not sure what makes the Sans Serif font look so bad; it may be an issue with the configuration on my machine. Don't worry about that, it's not your problem.
This seems to be caused by line 108, the initialization for window.buttonconfig:

"6 (SansSerif)":['NP',clickButtonEventLambda("\\mathsf{","}"),"serify","","s",/(cstheory|cs\.stack)/ig,"","Enclose selection in \\mathsf{..}"],

A quick fix would be to add Electronics to the ignored sites for this button and create a new button for Electronics SansSerif:

- "6 (SansSerif)":['NP',clickButtonEventLambda("\\mathsf{","}"),"serify","","s",/(cstheory|cs\.stack)/ig,"","Enclose selection in \\mathsf{..}"],
+ "6 (SansSerif)":['NP',clickButtonEventLambda("\\mathsf{","}"),"serify","","s",/(cstheory|cs\.stack)/ig,"/electronics/ig","Enclose selection in \\mathsf{..}"],
+ "7 (ElectronicsSansSerif)":['NP',clickButtonEventLambda("\\\\mathsf{","}\\"),"serify","","s",/electronics/ig,"","Enclose selection in \\\\mathsf{..}\\"],

I think I've got all the \ escapes correct there, but be sure to check before pushing the change.

Feel free to ignore what follows, just some rambling thoughts after reading the code: if you want to fix this more thoroughly, then a refactoring of the code would be helpful. You have a ton of static strings in the script. This makes the script harder to modify and makes it more difficult to deal with exceptions. If you modified the code to create static strings like inlineDelimiter and displayedDelimiter for $ and $$ respectively, you could select that delimiter in one place for each site and then use it in multiple places (like dollarify and serify). It could be used for both functions, for the callbacks, and for the tooltips.

If you wanted to be extra future-proof, you could have separate right and left delimiters; MathJax supports e.g. \(..\) as delimiters. In fact, the defaults for MathJax are $$...$$ and \[...\] for displayed mathematics, and \(...\) for in-line mathematics, per http://www.mathjax.org/docs/2.0/start.html?highlight=delimiters#tex-and-latex-input. The defaults are $$..$$ and $..$ for Stack Exchange.

Declaring all strings in one place is natural in some languages like C and C++, where you have to allocate storage for a string, but it is less enforced in languages like Javascript, which make it easy to declare a string anywhere. To be clear: I'm a C developer, and scattering string literals would be an antipattern in C. I have no idea if it's a good or bad idea in Javascript; the language certainly makes it easy to do, the script is quite readable as written, and the costs of concatenating the strings at runtime and the extra download size from the added lines aren't issues in C. It's also quite likely that this sort of future-proofing is completely unnecessary for a little script like this. You may want to consult a Javascript developer (not me!) before proceeding.

-
Aah, electroniCS.stack. Exactly why I kept the "stack" in the regex -- "cs" is too short and will match half the SE universe (I'll improve the regex). EE.SE wasn't supposed to get the NP button (you can keep it if you guys do use mathsf like the CS folks, though). – Manishearth May 8 '12 at 2:20
Being an amateur programmer, any best-practice tips are really appreciated by me -- I've never formally learnt programming (and never had a code review), so my code tends to be icky sometimes. I never thought I'd need different dollar signs, since I never thought about SE sites that talk about both moolah and math ;-). Interesting concept, and it definitely makes the script more customizable. – Manishearth May 8 '12 at 2:30
Later on I'll remove the dollar literals and add an if at the beginning which sets them based on the site.
Also, I'm planning to remove everything from the window global object and add it to some other MathJaxButtons object for cleanliness. Thanks for the code review! :D – Manishearth May 8 '12 at 2:30
Oh, yeah, and the mathsf-looks-bad isn't your fault either; MathJax uses its own fonts. The font does look like that -- the CS folks use it in all caps, which looks OK. By the way, did you see what Alt-Z does? It may not be useful on your site (it's extremely useful in conjunction with the custom buttons), but since there is no button for it, it may escape your mind. Do you know any way to make this functionality (just a hotkey, no button) evident to others without cluttering the button bar with another button? – Manishearth May 8 '12 at 2:37
Changes made. Now it supports arbitrary delimiters, and also I put everything in a wrapper object. – Manishearth May 8 '12 at 11:35

Version history:

## 1.0

• 1.0.1: Fix the \ce{} highlighting bug. It now highlights properly even when you haven't selected anything. (Makes it useful to just hit Alt-C/whatever and start typing the formula.)
• 1.0.2: Add electronics.SE (with config $, $$, SI -- special treatment of $ due to EE.SE requiring backslashes). Also crypto.SE (config $, $$, O -- big O notation)
• 1.0.3: Add stats.SE, quant.SE. Also a vector-field button on Physics.SE.

## 2.0

The script now does not need to be updated manually; it will fetch the script once every day (it relies on your browser cache to not re-fetch; it reloads a new script by modifying a query string).

• 2.0.0: Add dsp.SE ($, $$)
• 2.1: Tooltips, huzzah!
• 2.1.1: Add cogsci.SE (dollars)
• 2.2: Add an enter-exit math mode shortcut (Alt-Z). Extremely useful for keeping the flow of typing.
• 2.2.0: Add cstheory.SE and CS.SE (dollars, \mathsf{...}, and big O)
• 2.2.1: Stupid Firefox doesn't support window.location.origin
• 2.2.2: Add scicomp ($, $$). Improve internals -- inline math/block math delimiters are more easily specified. Also, the entire script is wrapped in the MathJaxButtons object; now nothing is directly in the window object.

Full list of supported buttons:

## Buttons used in many sites

### Dollarify ($)
• Encloses selection in $...$
• Enabled on all supported sites
• Keyboard shortcut: Alt-M
• Works differently on electronics.SE (gives \$...\$ because these people talk about money as well and MathJax needs escaping)

### Double Dollarify ($$)
• Encloses selection in $$...$$
• Enabled on all supported sites
• Keyboard shortcut: Alt-D

### Exit math mode
• Not an actual button, only a hotkey
• Enabled on all supported sites
• Finds the next dollar symbol and puts the cursor ahead of it. Useful for keeping the flow of typing.
• Keyboard shortcut: Alt-Z

### SI-ify (SI)
• Encloses selection in \:\mathrm{...} (upright text with an extra separator space for SI units)
• Enabled on Physics, Chemistry, and Electronics
• Keyboard shortcut: Alt-S

### Big O notation (only a hotkey)
• Encloses selection in $\mathcal{O}(...)$
• Enabled on Crypto.SE, CS.SE, and cstheory.SE only
• Keyboard shortcut: Alt-O

### Sans-serif (NP)
• Encloses selection in $\mathsf{...}$
• Enabled on CS/CStheory.SE only
• Keyboard shortcut: Alt-S

## Site-specific buttons

### chem-ify (O2)
• Encloses selection in $\ce{...}$ (mhchem chemical equation formatter)
• Use the $ button on the \ce'd text to make this a block element (use Alt-M)
• Enabled on Chemistry only
• Keyboard shortcut: Alt-C

### Vector Fields (E)
• Encloses selection in \mathbf{...}
• Enabled on Physics.SE only
• Keyboard shortcut: Alt-V

### Dirac-ify (〈 | 〉)
• Encloses selection in \langle ... | (for a bra) or | ... \rangle (for a ket)
• Enabled on Physics.SE only
• Keyboard shortcuts: Alt-B for bra and Alt-K for ket

-
I hope this isn't considered hijacking, but if you're just putting units and chemical formulas in plain text, you can use Unicode instead of MathJax. In Windows, I use a lot of shortcuts in AutoHotkey to make these easy to type: "The resistor is 10 kohm +-10%" becomes "The resistor is 10 kΩ ±10%" as I type. "I poured CuSOsub4 into Hsub2O" becomes "I poured CuSO₄ into H₂O". I don't type chemical equations much, so this is a little clunky, but you could easily create shortcuts for common things, so that h2o instantly becomes "H₂O". And this method works the same way in any text field in any application, not just on Stack Exchange.

-
Well, many SE sites have MathJax built in, so it's easier to just use it. In fact, we convert sub/sups to MathJax -- it's easier to read. Since everyone is using MJ anyway, it's better to make life easy for them. And trust me, these have been extremely useful to me on chem.SE :) – Manishearth May 15 '12 at 18:31
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.29745227098464966, "perplexity": 5586.85169115745}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1419447559763.4/warc/CC-MAIN-20141224185919-00030-ip-10-231-17-201.ec2.internal.warc.gz"}
https://scicomp.stackexchange.com/questions/31054/normalization-of-polynomials-for-discontinuous-galerkin-methods-dgm
# Normalization of polynomials for discontinuous Galerkin methods (DGM)

I was curious if someone could share their opinion on this matter. I have noticed that some people in the literature normalize their Legendre polynomials, i.e. divide or multiply the polynomial by $$\sqrt{\frac{2n+1}{2}},$$ where $$n$$ is the order of the polynomial. I am quite new to DG, but I am experiencing better results when I normalize my Legendre polynomials than when I don't. Is there a particular reason why people normalize?

(Figure: density at $$T = 0.3$$ with the normalized Legendre polynomials.)
(Figure: density at $$T = 0.3$$ with the unnormalized Legendre polynomials.)

In terms of resolution, I am running on a $$50\times 50$$ grid.

Polynomial subroutine:

function legendre (x,n)
  integer :: n
  real(kind=8) :: x
  real(kind=8) :: legendre
  ! clamp the argument to [-1,1] (note: this also modifies the caller's x)
  x = min(max(x,-1.0),1.0)
  select case(n)
  case(0)
    legendre = 1.0
  case(1)
    legendre = x
  case(2)
    legendre = 0.5*(3*x**2-1)
  case(3)
    legendre = 0.5*(5.0*x**3-3.0*x)
  case(4)
    legendre = 0.125*(35.0*x**4-30.0*x**2+3.0)
  case(5)
    legendre = 0.125*(63.0*x**5-70.0*x**3+15.0*x)
  case(6)
    legendre = 1.0/16.0*(231.0*x**6-315.0*x**4+105.0*x**2-5.0)
  end select
  ! scale so that the basis is orthonormal on [-1,1]
  legendre = sqrt((2.0*dble(n)+1.0)/2.0)*legendre
  return
end function legendre

function legendre_prime (x,n)
  integer :: n
  real(kind=8) :: x
  real(kind=8) :: legendre_prime
  x = min(max(x,-1.0),1.0)
  select case(n)
  case(0)
    legendre_prime = 0.0
  case(1)
    legendre_prime = 1.0
  case(2)
    legendre_prime = 3.0*x
  case(3)
    legendre_prime = 0.5*(15.0*x**2-3.0)
  case(4)
    legendre_prime = 0.125*(140.0*x**3-60.0*x)
  case(5)
    legendre_prime = 0.125*(315.0*x**4-210.0*x**2+15.0)
  case(6)
    legendre_prime = 1.0/16.0*(1386.0*x**5-1260.0*x**3+210.0*x)
  end select
  ! apply the same normalization to the derivative
  legendre_prime = sqrt((2.0*dble(n)+1.0)/2.0)*legendre_prime
  return
end function legendre_prime

• Are you using explicit or implicit schemes? For explicit schemes, it does not matter. For implicit schemes, the matrix conditioning may depend on this scaling. Can you elaborate on in what way your results are worse/better, and for what problem? Feb 13, 2019 at 11:22
• What problem are you solving? Are you doing modal or nodal DG? Feb 13, 2019 at 16:20
• Sorry for the late replies. I am solving the 2D compressible Euler equations for an ideal gas. In this case I am solving a four-shock Riemann problem as a means to benchmark my code. In terms of the time integration, I am using a 3rd-order SSP Runge-Kutta method. I have edited my post to include solutions at T = 0.3 for the normalized and unnormalized polynomials. Feb 19, 2019 at 2:54
• Are you using a limiter? If yes, then you have to be careful with its implementation, since it depends on the normalization used. Feb 19, 2019 at 5:12
• Given I am dealing with a Riemann problem, as far as I know I have to use a limiter to ensure stability. The type of limiter I am using is the TVB minmod limiter. In terms of the normalization, does it matter if it's multiplicative or not? I have edited my post to share the way I am doing the normalization. Feb 21, 2019 at 18:07

## 1 Answer

You have to be careful when applying the TVD limiter and account for the normalization of your basis functions. With reference to [1], if your solution is written as $$u_h(x,y) = \bar{u} + u_x \phi_i(x) + u_y \psi_j(y)$$ where $$\phi_i(x) = \frac{x - x_i}{\Delta x_i/2}, \qquad \psi_j(y) = \frac{y - y_j}{\Delta y_j/2}$$ then $$\bar{u}, u_x, u_y$$ are the dofs (solution variables) and $$\bar{u}$$ is the cell average value.
You limit the slope as $$u_x = \operatorname{minmod}(u_x,\ \bar{u}_{i,j} - \bar{u}_{i-1,j},\ \bar{u}_{i+1,j} - \bar{u}_{i,j})$$ Due to your normalization, your solution is instead of the form $$u_h(x,y) = \bar{u} \sqrt{1/2} + u_x \sqrt{3/2}\, \phi_i(x) + u_y \sqrt{3/2}\, \psi_j(y)$$ Now $$\bar{u}$$ is not the cell average value. You have to account for these differences and modify the limiter step accordingly.

[1] Bernardo Cockburn and Chi-Wang Shu, The Runge–Kutta Discontinuous Galerkin Method for Conservation Laws V: Multidimensional Systems, Journal of Computational Physics, 141, 1998.
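In code, the answer's prescription amounts to rescaling the stored dof before the minmod comparison. Below is a minimal sketch in Python (the function and variable names are hypothetical; the factor sqrt(3/2) is the normalization on the slope mode from the expansion above, and the ubar arguments are meant to be true cell averages, i.e. the stored average dof multiplied by sqrt(1/2)):

    import math

    def minmod(a, b, c):
        # smallest-in-magnitude argument if all three share a sign, zero otherwise
        if a > 0 and b > 0 and c > 0:
            return min(a, b, c)
        if a < 0 and b < 0 and c < 0:
            return max(a, b, c)
        return 0.0

    def limit_x_slope(ux_dof, ubar_left, ubar_center, ubar_right):
        s = math.sqrt(3.0 / 2.0)   # normalization factor on the slope mode
        slope = s * ux_dof         # recover the physical slope coefficient
        slope = minmod(slope, ubar_center - ubar_left, ubar_right - ubar_center)
        return slope / s           # convert back to the stored (normalized) dof

The same rescaling applies to the y-slope; forgetting it makes the limiter either too aggressive or too lax, which may explain part of the difference between normalized and unnormalized runs.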
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5528590679168701, "perplexity": 944.4952794853663}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949701.0/warc/CC-MAIN-20230401032604-20230401062604-00084.warc.gz"}
http://svn.haxx.se/tsvnusers/archive-2009-10/0433.shtml
# Re: Tortoiseproc.exe crashing when attempting new checkout

From: Stefan Küng <tortoisesvn_at_gmail.com>
Date: Tue, 27 Oct 2009 19:09:20 +0100

On 27.10.2009 03:37, George Orwell wrote:
> Hey guys,
>
> When attempting a checkout on the below:
>
> https://nhibernate.svn.sourceforge.net/svnroot/nhibernate

You should use the URL https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk -- otherwise you're checking out the *whole* repository, including all tags and branches! That can easily add up to several gigabytes!

> to the following folder:
>
> C:\SVN\nhibernate.svn.sourceforge.net\svnroot\nhibernate

No need to include the URL in the path to your working copy. But that shouldn't be the problem.

> I get the following error:
>
> TortoiseProc.exe has encountered a problem and needs to close.

When do you get this dialog? Right at the start? After you click the OK button of the checkout dialog? When the progress dialog pops up? During the checkout? At the end of the checkout?

Stefan

--
   ___
  oo  // \\      "De Chelonian Mobile"
 (_,\/ \_/ \     TortoiseSVN
   \ \_/_\_/>    The coolest Interface to (Sub)Version Control
   /_/   \_\     http://tortoisesvn.net

Received on 2009-10-27 19:09:27 CET. This is an archived mail posted to the TortoiseSVN Users mailing list.
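For reference, the command-line equivalent of the suggested checkout looks like this (the local target folder here is arbitrary):

    svn checkout https://nhibernate.svn.sourceforge.net/svnroot/nhibernate/trunk C:\SVN\nhibernate

Checking out trunk alone avoids pulling down every tag and branch in the repository.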
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8518186211585999, "perplexity": 12341.8769525271}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368698958430/warc/CC-MAIN-20130516100918-00083-ip-10-60-113-184.ec2.internal.warc.gz"}
https://gianlubaio.blogspot.co.uk/2012_11_01_archive.html
## Friday, 30 November 2012

### Marking your own homework

I quite like the way in which Brian Leveson (who has led the famous public inquiry into the media, in the UK) has summarised his recommendations: you guys [ie the press/media] should not "mark your own homework". I'm afraid that's exactly what the Prime Minister Kirk Cameron will allow. By the way: isn't it kind of neat that in the picture I've put here he quite looks like vice-PM Nick Clegg instead? Perhaps, despite the apparent argument the pair are having about whether the over-2,000-page Leveson inquiry report should be used to replace the use of Cushelle $-$ which incidentally would free us from the unbelievably stupid advert they have $-$ they are actually morphing into a single person!

And also, if that's how it goes, maybe we should consider letting our students mark their own homework; it would come in quite handy, given that I have a nice pile to mark right in front of me...

## Tuesday, 27 November 2012

### Grant (and missing data)

Today has been quite an interesting day. First of all, we finally heard from the MRC and we got the Grant! (I actually mean money to do our research on the regression discontinuity design, not fellow Fulham FC fan Hugh $-$ but he arguably looks better than a graph with dots and lines...). We had put quite a lot of work into the write-up (actually since the moment Sara pitched the idea and we started discussing it), especially as we were really close to being funded when we first submitted it last year. The panel liked the idea but asked us to change a few things and re-submit, which we did. This time around we've been luckier and I'm really pleased. The research group is fantastic and I'm really looking forward to starting!

As for the rest of the day, I spent it in seminars/workshops, mainly about missing data. In the morning I went to one of our monthly seminars with the PRIMENT people; today's talk presented the statistical plan for a trial that they are developing. Some of the discussion was on how to deal with the expected, relatively large proportion of missing data. This linked nicely with the LSE workshop I went to in the afternoon (I nearly managed to make it on time for lunch but, as it turned out, I got there a bit too late, so I ended up not eating). The focus of the workshop was on linking survey weights and methods for missing data (specifically multiple imputation); this is interesting as I'm trying to include missing data in my revised lectures for Social Statistics (which will be in the next term).

## Sunday, 25 November 2012

### The perks (and quirks) of being a referee

The other day I was talking to a friend at work, who was rather annoyed that one of his papers had been rejected by a journal, given the negative comments of the reviewers. This is, of course, part of the game, so you don't really get annoyed just because a paper gets rejected. From what I hear, though, I think my friend was quite right in being angry. The paper was submitted to a medical journal; the editor had sent it out for review to 3 referees, two of whom were, allegedly, statistical experts. I hadn't read the paper, nor the reviews, so I can't comment in great detail. But from what I hear, the reviewers' comments were just wrong. In practice, they told off the authors for using wrong statistical methods, while it looks like they just didn't understand the valid statistical point.
For example, one of the referees had criticised the following: for some reason (I can't remember the details), the authors had fitted a regression model and then regressed the resulting linear predictor on the covariates, which obviously leads to $R^2=1$. Now, you can certainly debate whether the methods used by the authors were the most appropriate for their purpose, but their point was not wrong $-$ you can easily check in R with the following commands:

# Simulates some covariates
x1 <- rnorm(100,0,1)
x2 <- rpois(100,2)
x3 <- rbinom(100,1,.6)

# (Arbitrarily) sets the coefficients for each covariate
beta <- c(1.43,-.14,.97,1.1)

# Computes the "true" linear predictor
mu <- cbind(rep(1,100),x1,x2,x3) %*% beta

# (Arbitrarily) sets the population standard deviation
sigma <- 2.198

# Simulates the response
y <- rnorm(100,mu,sigma)

# Fits a linear regression & shows the results
m <- lm(y~x1+x2+x3)
summary(m)

Call:
lm(formula = y ~ x1 + x2 + x3)

Residuals:
    Min      1Q  Median      3Q     Max
-5.0186 -1.4885 -0.0434  1.4007  5.7971

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  1.86280    0.50344   3.700 0.000359 ***
x1          -0.03908    0.24307  -0.161 0.872618
x2           1.05753    0.15927   6.640 1.88e-09 ***
x3           0.41025    0.45461   0.902 0.369090
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 2.244 on 96 degrees of freedom
Multiple R-squared: 0.3154,  Adjusted R-squared: 0.294
F-statistic: 14.74 on 3 and 96 DF,  p-value: 5.692e-08

Of course, because of sampling variability, the coefficients are estimated with error; in addition, the overall model fit (as measured by $R^2$) is not perfect, with only 32% of the total variability explained by the regression. If, however, we regress the fitted values on the same set of covariates:

m1 <- lm(m$fitted.values~x1+x2+x3)
summary(m1)

Call:
lm(formula = m$fitted.values ~ x1 + x2 + x3)

Residuals:
       Min         1Q     Median         3Q        Max
-1.560e-15 -2.553e-16 -1.035e-17  2.161e-16  2.699e-15

Coefficients:
              Estimate Std. Error    t value Pr(>|t|)
(Intercept)  1.863e+00  1.193e-16  1.562e+16   <2e-16 ***
x1          -3.908e-02  5.758e-17 -6.786e+14   <2e-16 ***
x2           1.058e+00  3.773e-17  2.803e+16   <2e-16 ***
x3           4.103e-01  1.077e-16  3.809e+15   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 5.315e-16 on 96 degrees of freedom
Multiple R-squared:     1,  Adjusted R-squared:     1
F-statistic: 2.627e+32 on 3 and 96 DF,  p-value: < 2.2e-16

this now implies perfect fit $-$ which just makes sense, as the linear predictor is given by exactly that combination of covariates.

I'm not defending my friend's paper for the sake of it $-$ to reiterate, I haven't read it and I don't really know whether it should have got published. And maybe there were other issues that the reviewers rightly picked up. But it is certainly wrong that it was judged as statistically flawed, and I think I would probably write a response letter to the editor to argue my case. Of course this is a very delicate issue, and people often voice their strong opinions about the state of peer-reviewing; Larry Wasserman even goes as far as to argue that we should completely dispense with them.

## Saturday, 24 November 2012

### No junk mail, please

The last two comments I received (on this post) were quite odd. Somebody had left a comment saying something like "Thanks!
Good to see an update" and then a link to some random page advertising some stuff that had absolutely nothing to do with the content of the original post. I decided to let the first instance go, although I was a bit annoyed. This morning I found a second comment by the same person, so I took action and deleted both. I have also sent the guy an email to explain that I don't think this is cool, but I doubt he'll ever actually read it (or care).

## Saturday, 17 November 2012

### (Nearly) sold out

The other day I spoke with the publisher, who told me that the book is officially out in the US; usually it takes a few weeks to stock in Europe, so it'll probably be available here by early December. Obviously, this is all very exciting (well, at least for me). So I checked how it was doing on amazon.com. Apparently, there are only 2 more copies left! I don't know if they had only stocked 3 copies and have sold 1 since the book came out earlier this week, but thankfully they say that there are more on the way...

### Porn economics?

Since it's Saturday and Marta is on holiday at IKEA with her mum, this morning I took it really easy [NB: wait until you read the whole thing before you judge me from the title and the premise $-$ it's definitely not what it looks like!]. I had a spot of breakfast, shaved, showered and then took a look at the newspapers. While I was reading the Guardian, I saw this article. I didn't know anything about this, but the gist of the story is that, in addition to re-electing President Obama, earlier in November the people of California also voted in favour of Measure B, a new law requiring the use of condoms in adult movies shot in LA County. The author of the article is Stoya, a "performer for adult production studio Digital Playground" [I thought of posting a link but, from her own profile on the Guardian's website, she "recommends that you refrain from googling her at work", and so I thought better not]. Her argument against the new law is that the porn industry has not seen a single case of performer-to-performer HIV transmission in the last 8 years (which I suppose is impressive). On the other hand, there is some evidence that movies in which the performers wear condoms are less well received by the audience, leading to a drop in sales. This, she continues, will have an effect on the whole industry; at the moment, the performers are continuously tested for STDs (including HIV), which of course is quite expensive. Thus, if the industry's profits are reduced further (in addition to the losses inflicted by piracy, mostly on the web), this will lead to fewer performers being tested and thus potentially produce unintended negative effects.
Sure, HIV is probably the worst outcome, but other diseases (for example HPV) may be also very relevant, both from the health and the financial perspective. And then there is the "educational" issue: given the large number of people watching porn, perhaps there is an explicit utility in showing performers wearing condoms. I absolutely have no idea or evidence that I could easily access here, but could this be even stretched as far as to say that it is cost-effective to invest public money in helping companies implementing this policy? Finally, some of the comments to the article, suggest that one of the obvious implications will be that some of the productions will probably move away from California (or LA, at least). This reminds me of a kind of similar issue with financial banks, here in London. The argument against increasing their fiscal contribution to the UK government is that in that case the corporations may choose to leave for countries with lighter taxations. Again, I think this is just one side of the story, as there is an implicit utility to being based in London (or in LA, if you're an actor/producer). While I understand that moving a business somewhere else could lead to losses to the central government (LA county in this case), I also think that this is not an entirely genuine threat. May be the implementation of the policy should have been subject to the results of a full cost-effectiveness analysis (I doubt it was $-$ but that's just a very uninformed opinion). I wonder if the NIH (or someone like them) would fund someone to do it? ## Thursday, 15 November 2012 ### When in Rome... Yesterday I was in Rome to teach in a short course on Bayesian methods in health economics The 6.45am flight from London was actually on time, which was impressive, considering that the last time I flew Alitalia I never made it to Rome $-$ we stopped in Milan but, because of "technical problems", the flight was cancelled. I had to give a talk via Skype from the airport and then got back to London with the very last flight, although I was originally supposed to be back with the 7.30pm flight. I arrived at Fiumicino at about 10.30am and after the passport control I headed to the train station. Unfortunately, there was no train scheduled to go into central Rome in the near future (or, for that matters, even in the distant future, according to the electronic board). So I walked back to the coach station. Signs on either side and on the front of the coach, as well as on the actual ticket said €4 one-way. The driver however said that it was €5. After a few minutes we left. Before we were even out of the parking lot, the driver was already shouting on his mobile phone (the first call was to "Fra") $-$ needless to say, he did not have blue-tooth or headphones; although to be fair he was remarkably good at handling the steering wheel with his knees. The first part of the journey into Rome is on the motorway and it wasn't very busy, so he was just happy to chat away, mostly boosting to his friends that they got stuck in traffic because they didn't ask his advice; but there also was a call to his mum (who, apparently, had annoyingly failed to talk to dad, which means he would have to do it himself) and a failed attempt to contact his wife, Barbara. When we actually got to the outskirts of town and off the motorway, the traffic became a bit worse. At that point, his friend "Fra" called again, and again the driver started to playfully (I think) insult them because they were stuck in traffic. 
Until we also got stuck in a big jam, that is. In response to this, he started to honk the horn and shout at everybody, including a policeman who was handling the traffic (well, he wasn't really doing a good job, but still...). Finally, a good 90 minutes after we left the airport, we arrived in central Rome, which unfortunately was still not where I needed to go. Because of a strike and a couple of demonstrations, the traffic in the area was mental. There was no official queue for the taxis, but I still managed to wave at and stop one, so I suppose I shouldn't complain too much that covering the remaining 6km took me another 45 minutes.

After a quick lunch, we started the course; the turnout was all right (about 20 people) and I think it went reasonably well $-$ although, as I suspected, we had planned a bit too much. I have given this lecture a few times in recent months (although in slightly different formats, and this time it included more bits about the general principles of Bayesian statistics) and there are a couple of things that I think are really interesting. The first is that people seem to be genuinely surprised to hear about the controversy between Neyman-Pearson and Fisher, and that they couldn't even bear to be in the same department (the usual reaction from the audience is to think I'm joking). The second is the reaction to my point that the prior information and the prior distribution are two different things, which I always stress. I think people generally take this well, and I think it makes it a bit easier to come to terms with the idea of formulating the prior as just one possible probabilistic "translation" of some knowledge, which can generally be expressed in words.

At the end of the course, I took a taxi to the airport. It was still pretty busy, but that didn't bother me that much (by then I'd sort of given up). I got to the airport in time for a quick (and incredibly expensive) sandwich before boarding the flight, only to discover that they had assigned seat 4A to 4 people $-$ of course I was one of them. The plane was not very full, but they still spent a good 15 minutes frantically calling I-don't-know-who on the phone to try and "sort it out". Which they did, in the end $-$ by telling three of us to just find another seat.

## Monday, 12 November 2012

### You can't play a broken link

Just like James Morrison and Nelly Furtado say, you really can't play a broken string. And quite similarly, you just can't use a broken link. I always find it very annoying when, while browsing a website, I find a broken link. You fall victim to the promise of interesting things to come, only to be disappointed by the error message filling up your screen $-$ something like when you're trying to call somebody on their phone and you get the "Sorry. The number you've dialled is not recognised" message.

But I should really stop moaning about all this because, as it turns out (thanks to semi-anonymous George from Canada), I'm guilty of this crime myself and there were some broken links on the book webpage. Some of the files with the R code for the examples in chapters 4 and 5 (which I've already discussed here and here) were pointing to the wrong addresses and therefore weren't downloadable. This should be fixed now, so hopefully it will all work.

## Wednesday, 7 November 2012

### Gotcha!

I should start this with a disclaimer, ie that I'm not really claiming any "success" with this post.
But I find it quite interesting that the estimates I produced with this very, very simple model turned out to be quite good. The idea was to use the existing polls (that was a few days ago, even before the super-storm), which had been collated and presented in terms of an estimate of the proportion of voters for either party, together with some measure of uncertainty. Based on these, I constructed informative prior distributions, which I then propagated to estimate the election results. As it turns out, according to the projections of the final results, the prediction was accurate, as the following graph shows: the dots and lines indicate the average prediction and the 50% (darker) and 90% (lighter) credible intervals; the crosses are the observed proportions for Obama. In all states, the prediction was "correct" (in the sense that the right "colour" was estimated). In some cases, the observed results were a bit more extreme than the predicted ones, eg in Washington (WA) the actual proportion of votes for Obama is substantially larger than predicted $-$ but this has no real consequence for the final estimate of the election results, as WA was already estimated to be a safe Democratic state; and this is true for all other under/over-estimated cases. My final estimate was that, based on the model, I expected Obama to get 304 EVs. At the moment, the Guardian is reporting 303 $-$ so pretty good!

But, as I said, this is really not to brag, but rather to reflect on the point that, while the race was certainly close, it probably wasn't as close as the media made it. Famously, Nate Silver gave Obama a probability of winning the election exceeding 80%, a prediction which gave rise to some controversy $-$ but he was spot on. Also, I think it's interesting that, at least in this case, the polls were quite representative of the "true" population, and what most people said they would do was in fact very similar to what most people actually did.

## Monday, 5 November 2012

### Mapping in health economics

Last Friday, I went to one of the health economics seminars that are organised at UCL; the format is that one of the people in the group suggests a paper (typically something that they are working on) but, instead of having them lead the discussion, one of the others takes responsibility for preparing a few slides to highlight what they think are the main points. The author/person who suggested the paper is usually in the room; they respond to the short presentation and then the discussion is opened to the group at large. I missed a couple since they started last summer, but the last two I've been to have been really interesting.

Last time the main topic was mapping of utility measures; in a nutshell, the idea is that there are some more or less standardised measures of "quality of life" (QoL $-$ the most common probably being the EQ5D and the SF6D). However, they are not always reported. For example, you may have a trial that you want to analyse in which data have been collected on a different scale (and I'm told that there are plenty); or, and that's perhaps even more interesting, as Rachael pointed out at the seminar, sometimes you're interested in a disease area that is not quite covered by the standard QoL measures, and therefore you want to derive some induced measure from what is actually observed.
In the paper that was discussed on Friday, the authors had used a Beta-Binomial regression and were claiming that the results were more reasonable than when using standard linear regression $-$ which is probably sensible, given that these measures are far from symmetrical or "normally distributed" (in fact the EQ5D is defined between $-\infty$ and 1).

I don't know much about mapping (so it is likely that what I'm about to say has been thoroughly investigated already $-$ although it didn't come up in the seminar, where people were much more clued up than I am), but this got me thinking that this is potentially a problem that one can solve using (Bayesian) hierarchical models. The (very rough) way I see it is that effectively there are two compartments to this model: the first one (typically observed) is made up of data on some non-standard QoL measure and possibly some relevant covariates; then one can think of a second compartment, which can be built separately to start with, in which the assumptions underlying the standard measure of QoL are spelt out (eg in terms of the impact of some potential covariates, or something). The whole point, I guess, is to find a way of connecting these two compartments, for example by assuming (in a more or less confident way) that each of them is used to estimate some relevant parameter representing some form of QoL. These in turn have to be linked in some (theory-based, I should think) way. A Bayesian approach would allow for the exchange of information and "feedback" between the two components, which would be potentially very helpful, for example if there was a subset of individuals for whom observations on both compartments were available. I'll try to learn more on this $-$ but I think this could be interesting...

## Sunday, 4 November 2012

### Hand picking

Yesterday we were out with our friends; we met for a drink late in the afternoon and spent quite some time trying to figure out what we wanted to eat. First, we approached the problem by area, trying to think where we wanted to go and then looking on the internet on our phones (well, at least those of us who have modern phones, ie not Christian or Vale), but that didn't work out. So we decided to browse by type of food, and I think it was Marta who found an Eritrean place close to where we were. Because of the bonfire, it took us nearly twice as long as normal to get there, but in the end we made it. And it was really good! They bring you a big tray covered with a flat pancake, over which they then put all the food; and the fun part is that you are supposed to pick it up with your hands, eventually eating the pancake as well. At the end of the dinner, they also offer a typical coffee ceremony, which we tried. It takes forever (in all fairness, they roast and grind the coffee), but to keep you busy they also bring some popcorn, which is unusual as well as actually nice.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6462326049804688, "perplexity": 1158.3974082800755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948512054.0/warc/CC-MAIN-20171211014442-20171211034442-00022.warc.gz"}
https://tex.meta.stackexchange.com/questions/172/how-newbie-friendly-should-the-site-be/704
# How newbie-friendly should the site be?

I guess this is the counterpart of the elitist question, but asking about the other side of the spectrum. How friendly should the site be to complete LaTeX and even operating system (say, command line) beginners?

This question came out after some discussion of currently the most voted answer here, where the OP further asked how she is supposed to run some command. Are we supposed to answer this kind of question?

Also, I was playing roles and pretending to be a beginner to see the kind of responses I would get. With some early disappointment. I was given an answer but no knowledge as to why this was an answer, even though I was clearly confused (in my role) and had no clue about what I was doing. (I have to admit, maybe the whole role-playing thing also confused the people answering my question, as they assumed that I knew at least the basics of LaTeX.) But the point is that, in the end, I was given a fish when probably what I needed was to learn how to fish for myself.

I actually think this is a rather big problem with the TeX/LaTeX culture at large. The web is plagued with lots of snippets of TeX hacks and the like, which people have been happily copy/pasting for years without knowing what they're doing. This results, as I've mentioned in that thread, in a lot of people scattering \\'s and \noindent's everywhere in their documents. I was once an editor for the proceedings of a small conference, and I don't want to tell you about the kind of horrible code abuses I saw there.

So from the beginning I have been trying to push forward this TeX-StackExchange initiative, because I think it could be a solution to that problem. Because here we can edit and keep the answers current and up to date (some people out there are still using eepic!). Because if someone has a question about an existing answer they can ask and get a clarification. And that's why I also think, particularly when the question is asked by a clear beginner, that we should make the extra effort to be more friendly and educate the people who will later also be asking and answering the questions on this site.

Edit: Just to clarify, I'm not suggesting that we should babysit new LaTeX users, nor that we should tolerate lazy users who can't be bothered to read or follow instructions. Probably all that we need to provide is just enough information that an inexperienced but diligent user could use to figure out the way by themselves.

• Oops! Julian silently goes and removes all the \noindents from the paper he's working on right now. – Julian Lamas-Rodriguez Jul 29 '10 at 13:39

As the guilty party in both cases, I'd like to say that I suspect that I was being over-concise in both cases, but through thoughtlessness rather than deliberately. On your question, I will admit to being a little confused by the claim of being a newbie and disregarding it. Had it been Ola Nordmann then I would have been more expansive. I think that, for me, it is because I'm unconsciously acting as I do on mathoverflow, where the baseline for participation is much higher and someone asking "What is 'x'?" would get (politely!) referred elsewhere.

I agree (as it seems most here do, including Scott Morrison!) that there is enough of a barrier to wanting to use (La)TeX in the first place that we don't need any more here. That said, I think that we should draw the line somewhere around "How do I run that command?".
The problem with that is that answering it helpfully requires knowing a lot about the person asking and their computer, and the number of variations is so large that trying to anticipate them is impossible. I should (and will) go back and edit my answer to the picture question to include (something like):

On a unix system, you would run

pdflatex myGreatPicture.tex
pdfcrop myGreatPicture.pdf

and a link to pdfcrop on CTAN could be useful, but more than that I think is over the line. In particular, going on to explain (in detail, I would mention the commands that I use if asked) how to turn the pdf into a png (for example) is something that I wouldn't expect to see here.

The underlying principle that I'm trying to apply here is this: Maximum help for minimum effort. The first part is obvious, but the minimum effort might need explaining. The point is that for (almost?) all of us, this site isn't our day job. It's part of a key resource (ie (La)TeX) and something we really value, but we should (to be fair to our employers, if no-one else) only take a small part of our day for helping out here. So if it takes a lot of time and effort to answer questions here, due to the amount of information I need to give, then I'll not be able to participate at the level that I would like and the level that this site needs us to in order to have a critical mass of people willing to answer questions.

• Completely agree with the Maximum help for minimum effort principle. In the pdfcrop example, as you say, providing a link to the package on CTAN (which should be automagic!) is probably all that a diligent but inexperienced user needs to know. – Juan A. Navarro Jul 29 '10 at 20:08

I don't have any objection to us explaining things like how to run a shell command in the comments, as long as we're not explaining the same thing to the same person over and over again. Basically, if someone happens to lack a particular small piece of prerequisite knowledge needed to use an answer, it seems silly to refuse to provide it on principle. But if someone asks an actual question that boils down to "how do I use the command prompt?", we'd be justified in closing that, because it's really about a general way of using computing tools, not a question specific to TeX usage. Those sorts of things could perhaps be referred to Super User.

• I agree, though I'll add that ideally we'd not have to explain things like how to run a shell command in the comments — since it's going to come up often enough, we could add such a section (or pointers to elsewhere) in the FAQ. – ShreevatsaR Jul 30 '10 at 2:26
• @ShreevatsaR: Good point, I think some links might be appropriate for the FAQ. Probably not the kind of thing we should devote a whole lot of the FAQ space to, though. – David Z Jul 30 '10 at 3:39

I have a comment on part of the original question. Although there are many people on this forum who know much more than I do about the inner workings of *TeX, I doubt that there are many who have actually transitioned as many people as I have from, say, MS Word to *TeX. Our lab instituted a policy 5 years ago that all documents we produce are LaTeX documents. It fell on me to make this happen, since I was the only one who knew LaTeX. I developed a system of gradual immersion whose goal was to make the learning curve as flat as possible. This meant tons of cutting and pasting and almost no understanding and learning at first. That's the key. Make them LaTeX users first, and later LaTeX programmers.
Users don't have to understand, they just have to know what to cut and paste where. Here's the typical transition flow:

1) horror at having to abandon Word and having to learn a (gasp!) programming language.
2) relief that most of the work has been done for them and all they have to do is write stuff, cut and paste some code for things like figures and tables, and learn some basic math commands, which they can easily find on the lab wiki.
3) amazement at how beautiful their doc looks, and how really easy it turned out to be. They just, basically, write text. They realize how much time they used to spend screwing with the formatting.
4) the desire to do something beyond what they know, so I teach them some basic macro programming. This is where most start to really like *TeX. Suddenly they're programmers. Many start writing macros relentlessly for every little thing.
5) they've reached a competency level where they feel easy about using *TeX and will tell others that it's not that hard, the docs look much better, etc. They're over the hard part, and a good percentage go on from there on their own.

Anyway, that's my experience. If we're serious about expanding the user base for *TeX, we have to create a transitional method that focuses on a flat learning curve for beginners by reducing the amount of learning and programming a newbie has to do to make nice docs. It's not a question of WYSIWYG or editors or TeXnicCenter. They don't address the main problem. It's having downloadable templates, example files, and simple instructions on how to use those things after you cut and paste them. This hasn't been done in the past because there's been this feeling that using *TeX without understanding is bad and wrong. I disagree with this for newbies.

• At UK-TUG, we've tried to collect up a few simple templates at uk.tug.org/training/templates. Other examples (with lots of comments) gratefully received! – Joseph Wright Nov 20 '10 at 14:35
• I thoroughly agree with the approach, but most people that hit item 4 in your list would come here thirsty for more - see also my response to this post tex.stackexchange.com/questions/5688/… . There is also great interest from many people in hard programming; currently there is very little of that on the web, and by encouraging good answers and questions we can build a very good resource here. By the way I like your costume! – Yiannis Lazarides Nov 20 '10 at 20:46
• @Joseph - I'm happy that UK-TUG is on board with this approach. If that was an invitation for me to send you some of the training docs I've made, you don't have to ask twice. I'll remove some of the lab-specific stuff and send them on. – bev Nov 20 '10 at 21:49
• Great, thanks. Most of the examples are currently my own, with one or two where I've asked someone else for their source and have then added some comments. More ideas very welcome! – Joseph Wright Nov 20 '10 at 22:00
• @Yiannis - yes, I agree that once people start to want to go deeper into *TeX, sites like this one are great. And the need for places to go to get answers to hard programming questions is greater than ever, IMO. And thanks for the pointer to your answer to Q5688. That was wonderful on many levels. Its congruence with my path was pretty accurate. It's a keeper. (oh, and I really look like that, I'm told) – bev Nov 20 '10 at 22:18

I think we should allow for TeX beginners, but not beginners at everything else. If you don't know how to run a shell command, you need to ask elsewhere (SuperUser, for example).
But if, for example, you're confused about the basic syntax of invoking pdflatex, I think you should be able to get it clarified here.

Of course, we can't, and shouldn't, enforce this kind of helpfulness. People answer with the level of detail they want, and put as much effort as they want into their answers. I don't want to give people a guilt trip over not being helpful enough when answering beginners' questions. I expect that a lot of users here are only going to bother answering more advanced questions, or if they answer beginners' questions, they'll do so very tersely. And that's ok. I just don't want us to start closing questions for being "too basic" either. As long as the question is TeX-related, I think it's fine, and anyone who cares can answer it.

A comparison might be The LaTeX Community, where there are a lot of beginners. As a result, I tend to aim to be more 'helpful' there than on c.t.t, where the general level of TeX knowledge is higher (particularly among the regulars).
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6096619367599487, "perplexity": 745.2927614541331}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-51/segments/1575540514893.41/warc/CC-MAIN-20191208202454-20191208230454-00311.warc.gz"}
https://regularize.wordpress.com/2019/07/
In my previous post I illustrated why it is not possible to compute the Jordan canonical form numerically (i.e. in floating point numbers). The simple reason: for every matrix ${A}$ and every ${\epsilon>0}$ there is a matrix ${A_{\epsilon}}$ which differs from ${A}$ by at most ${\epsilon}$ (e.g. in every entry – but all norms for matrices are equivalent, so this does not really play a role) such that ${A_{\epsilon}}$ is diagonalizable. So why should you bother about computing the Jordan canonical form anyway? Or even learning or teaching it?

Well, the prime application of the Jordan canonical form is to calculate solutions of linear systems of ODEs, i.e. the equation

$\displaystyle y'(t) = Ay(t),\quad y(0) = y_{0}$

with matrix ${A\in {\mathbb R}^{n\times n}}$ and initial value ${y_{0}\in{\mathbb R}^{n}}$ (both could also be complex). This system has a unique solution which can be given explicitly with the help of the matrix exponential as

$\displaystyle y(t) = \exp(At)y_{0}$

where the matrix exponential is

$\displaystyle \exp(At) = \sum_{k=0}^{\infty}\frac{A^{k}t^{k}}{k!}.$

It is not always simple to work out the matrix exponential by hand. The straightforward way would be to calculate all the powers of ${A}$, weight them by ${1/k!}$ and sum the series. This may be a challenge, even for simple matrices. My favorite example is the matrix

$\displaystyle A = \begin{bmatrix} 0 & 1\\ 1 & 1 \end{bmatrix}.$

Its first powers are

$\displaystyle A^{2} = \begin{bmatrix} 1 & 1\\ 1 & 2 \end{bmatrix},\quad A^{3} = \begin{bmatrix} 1 & 2\\ 2 & 3 \end{bmatrix}$

$\displaystyle A^{4} = \begin{bmatrix} 2 & 3\\ 3 & 5 \end{bmatrix},\quad A^{5} = \begin{bmatrix} 3 & 5\\ 5 & 8 \end{bmatrix}.$

You may notice that the Fibonacci numbers appear (and this is pretty clear on a second thought). So, finding an explicit form for ${\exp(A)}$ leads us to finding an explicit form for the ${k}$-th Fibonacci number (which is possible, but I will not treat this here).

Another way is diagonalization: if ${A}$ is diagonalizable, i.e. there is an invertible matrix ${S}$ and a diagonal matrix ${D}$ such that

$\displaystyle S^{-1}AS = D\quad\text{or, equivalently}\quad A = SDS^{-1},$

you see that

$\displaystyle \exp(At) = S\exp(Dt)S^{-1}$

and the matrix exponential of a diagonal matrix is simply the exponential function applied to the diagonal entries.
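As a quick numerical sanity check of the diagonalization route (my addition, assuming numpy and scipy are available), one can compare it against scipy's general-purpose matrix exponential on the "Fibonacci" matrix above:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [1.0, 1.0]])   # the "Fibonacci" matrix from above
t = 2.0

# general-purpose matrix exponential
E_direct = expm(A * t)

# via diagonalization: A = S D S^{-1}  =>  exp(At) = S exp(Dt) S^{-1}
w, S = np.linalg.eig(A)
E_diag = S @ np.diag(np.exp(w * t)) @ np.linalg.inv(S)

print(np.allclose(E_direct, E_diag))  # True
```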
But not all matrices are diagonalizable! The solution that is usually presented in the classroom is to use the Jordan canonical form instead and to compute the matrix exponential of Jordan blocks (using that you can split a Jordan block ${J = D+N}$ into the sum of a diagonal matrix ${D}$ and a nilpotent matrix ${N}$, and since ${D}$ and ${N}$ commute one can calculate ${\exp(J) = \exp(D)\exp(N)}$, and both matrix exponentials are quite easy to compute).

But in light of the fact that there are diagonalizable matrices arbitrarily close to any matrix, one may ask: what about replacing a non-diagonalizable matrix ${A}$ with a diagonalizable one (with a small error) and then using this one? Let's try this on a simple example: we consider

$\displaystyle A = \begin{bmatrix} -1 & 1\\ 0 & -1 \end{bmatrix}$

which is not diagonalizable. The linear initial value problem

$\displaystyle y' = Ay,\quad y(0) = y_{0}$

has the solution

$\displaystyle y(t) = \exp( \begin{bmatrix} -t & t\\ 0 & -t \end{bmatrix}) y_{0}$

and the matrix exponential is

$\displaystyle \begin{array}{rcl} \exp( \begin{bmatrix} -t & t\\ 0 & -t \end{bmatrix}) & = &\exp(\begin{bmatrix} -t & 0\\ 0 & -t \end{bmatrix})\exp(\begin{bmatrix} 0 & t\\ 0 & 0 \end{bmatrix})\\& = &\begin{bmatrix} \exp(-t) & 0\\ 0 & \exp(-t) \end{bmatrix}\begin{bmatrix} 1 & t\\ 0 & 1 \end{bmatrix}\\ &=& \begin{bmatrix} \exp(-t) & t\exp(-t)\\ 0 & \exp(-t) \end{bmatrix}. \end{array}$

So we get the solution

$\displaystyle y(t) = \begin{bmatrix} e^{-t}(y^{0}_{1} + ty^{0}_{2})\\ e^{-t}y^{0}_{2} \end{bmatrix}.$

Let us take a close-by matrix which is diagonalizable. For some small ${\epsilon}$ we choose

$\displaystyle A_{\epsilon} = \begin{bmatrix} -1 & 1\\ 0 & -1+\epsilon \end{bmatrix}.$

Since ${A_{\epsilon}}$ is upper triangular, it has its eigenvalues on the diagonal. Since ${\epsilon\neq 0}$, there are two distinct eigenvalues and hence ${A_{\epsilon}}$ is diagonalizable. Indeed, with

$\displaystyle S = \begin{bmatrix} 1 & 1\\ 0 & \epsilon \end{bmatrix},\quad S^{-1}= \begin{bmatrix} 1 & -\tfrac1\epsilon\\ 0 & \tfrac1\epsilon \end{bmatrix}$

we get

$\displaystyle A_{\epsilon} = S \begin{bmatrix} -1 & 0 \\ 0 & -1+\epsilon \end{bmatrix}S^{-1}.$

The matrix exponential of ${A_{\epsilon}t}$ is

$\displaystyle \begin{array}{rcl} \exp(A_{\epsilon}t) &=& S\exp( \begin{bmatrix} -t & 0\\ 0 & -t(1-\epsilon) \end{bmatrix} )S^{-1}\\ &=& \begin{bmatrix} e^{-t} & \tfrac{e^{-(1-\epsilon)t} - e^{-t}}{\epsilon}\\ 0 & e^{-(1-\epsilon)t} \end{bmatrix}. \end{array}$

Hence, the solution of ${y' = A_{\epsilon}y}$, ${y(0) = y_{0}}$ is

$\displaystyle y(t) = \begin{bmatrix} e^{-t}y^{0}_{1} + \tfrac{e^{-(1-\epsilon)t} - e^{-t}}{\epsilon}y^{0}_{2}\\ e^{-(1-\epsilon)t}y^{0}_{2} \end{bmatrix}.$

How is this related to the solution of ${y'=Ay}$? How far away is it? Of course, the lower right entry of ${\exp(A_{\epsilon}t)}$ converges to ${e^{-t}}$ for ${\epsilon \rightarrow 0}$, but what about the upper right entry? Note that the entry

$\displaystyle \tfrac{e^{-(1-\epsilon)t} - e^{-t}}{\epsilon}$

is nothing else than the (negative) difference quotient for the derivative of the function ${f(a) = e^{-at}}$ at ${a=1}$. Hence

$\displaystyle \tfrac{e^{-(1-\epsilon)t} - e^{-t}}{\epsilon} \stackrel{\epsilon\rightarrow 0}{\longrightarrow} -f'(1) = te^{-t}$

and we get

$\displaystyle \exp(A_{\epsilon}t) \stackrel{\epsilon\rightarrow 0}{\longrightarrow} \begin{bmatrix} e^{-t} & te^{-t}\\ 0 & e^{-t} \end{bmatrix} = \exp(At)$

as expected. It turns out that a fairly big $\epsilon$ is already enough to get quite a good approximation and even the correct asymptotics: the blue curve is the first component of the exact solution (initialized with the second standard basis vector), the red one corresponds to $\epsilon = 0.1$, and the yellow one (pretty close to the blue one) is for $\epsilon = 0.01$.
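The experiment behind the figure can be reproduced numerically along these lines (a sketch of mine, not the author's code; it again assumes scipy):

```python
import numpy as np
from scipy.linalg import expm

t = 3.0
A = np.array([[-1.0, 1.0], [0.0, -1.0]])  # not diagonalizable
exact = expm(A * t)                       # equals [[e^-t, t e^-t], [0, e^-t]]

for eps in (0.1, 0.01, 0.001):
    A_eps = np.array([[-1.0, 1.0], [0.0, -1.0 + eps]])
    # A_eps has two distinct eigenvalues, so the diagonalization route works
    w, S = np.linalg.eig(A_eps)
    approx = S @ np.diag(np.exp(w * t)) @ np.linalg.inv(S)
    print(eps, np.max(np.abs(approx - exact)))
```

The printed error shrinks roughly proportionally to $\epsilon$, matching the difference-quotient argument above, and already at $\epsilon = 0.01$ the perturbed solution is visually indistinguishable from the exact one.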
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 59, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9354787468910217, "perplexity": 180.30906425503275}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887046.62/warc/CC-MAIN-20200705055259-20200705085259-00112.warc.gz"}
https://hydrogenaud.io/index.php?action=topic;sa=printpage;topic=85135.0
# Hydrogenaudio Forums

## Lossy Audio Compression => AAC => AAC - General => Topic started by: twinspex on 22 November, 2010, 01:10:23 AM

Title: QAAC: discussion, questions, feature requests, etc.
Post by: twinspex on 22 November, 2010, 01:10:23 AM

I have been producing my own modified version of REACT for use in ripping my CDs and have nearly finished the task. I wanted to be able to produce iTunes-compatible and tagged .m4a files for individual tracks from the REACT image.cfg script, having ripped to a one-big-file album image with cuefile. I managed to do this with the Nero AAC encoder, acdir and AtomicParsley. I then came across QTAACENC and QAAC, which looked very promising as they allow use of the QuickTime dlls.

I totally failed to get either encoder working with ACDIR passing the split wave files into the encoder via STDIN, in the manner I successfully managed with the Nero encoder. No great loss though, because I found QAAC alone can process an OBF wave image and associated cuefile into separate .m4a tracks - and it even tags the .m4a files with the information taken from the cuefile. Fantastic!

I am left with a problem though. When processing from a cuefile, QAAC outputs the track files in the format tracknumber[space]tracktitle.m4a, and this cannot be changed. I like my files to be formatted thus: tracknumber-tracktitle.m4a. I think I am a competent enough batch file programmer to handle the necessary file rename (just!), but the problem arises with various-artists compilations. In such cases I really would like to use the naming format tracknumber-trackartist-tracktitle.m4a. I have looked online for a commandline-based rename utility that would read the artist field from the embedded tag and allow me to use that field to rename the files QAAC produces. Unfortunately I haven't found one. (I know I could use something like mp3tag to do the job from a GUI later, but I would like to be able to do it as part of the REACT-based rip operation. I have looked at using the cuefile to extract track-based wave files, which come out correctly named, and then encoding the track-based wave files without using the cuefile. This results in no embedded tags and quite complicated scripting.)

I have no idea how difficult it would be to implement the output of m4a files to a specified filename format with QAAC cuefile processing, but if it is possible, could the author please consider this as an enhancement request? I had a look at the source code, but it is way beyond my skills to make myself an amended version! Thank you very much in advance.

(By the way, I plan to use the NeroAacTag utility to add cover art and albumArtist and compilation meta-user tags to the QAAC-encoded .m4a files to finish the ripping task. I will be using OBF flac with embedded cover art and cuefile for archiving, and m4a for day-to-day usage.)

Incidentally, if one uses QAAC cuesheet processing to produce .m4a files and then inspects the embedded tags with the neroAacTag.exe -meta-list command on *.m4a, I have noticed that the tools field may have a bug. On a three-track CD single, using --ignorelength and --tvbr=125, the note of the tvbr setting is repeated one more time in the field for every successive track in the cuesheet. I don't think this is a problem with the neroAacTag.exe utility, because the same repeated data is visible with the Audio Shell v1.3.5 utility! This problem is not a showstopper so far as I can see, but I haven't tried a sizable encoding task, and I wondered if it might cause a field overflow in such a situation.

Example from Audio Shell v1.3.5 for the m4a file produced as track "03 Tracktitle" from the cuesheet:

Encoder: qaac 0.20, Quicktime 7.6.8 TVBR Quality 125,  TVBR Quality 125,  TVBR Quality 125,

I am using Windows XP SP3 (up-to-date fixes), QAAC 0.20 and QuickTime 7.6.8. The quality setting was merely an experiment.

To bring a verbose post to an end: thank you for producing this utility. I will find a workaround if you can't help.
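[Editorial aside, not part of the thread: the tag-based rename the poster is after can also be scripted outside of batch. Here is a minimal Python sketch using the third-party mutagen library (pip install mutagen); run it in the album folder after qaac has written the tagged tracks. The 'Unknown' fallbacks and the filename-sanitising rule are my own choices, not anything qaac or REACT provides.]

```python
import glob
import os
import re

from mutagen.mp4 import MP4  # third-party: pip install mutagen

def safe(s):
    # strip characters that are illegal in Windows filenames
    return re.sub(r'[\\/:*?"<>|]', '_', s)

for path in glob.glob('*.m4a'):
    tags = MP4(path).tags
    artist = tags.get('\xa9ART', ['Unknown'])[0]   # iTunes-style artist atom
    title = tags.get('\xa9nam', ['Unknown'])[0]    # title atom
    track = tags.get('trkn', [(0, 0)])[0][0]       # (track, total) tuple
    new_name = f'{track:02d}-{safe(artist)}-{safe(title)}.m4a'
    if new_name != path:
        os.rename(path, new_name)
```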
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 November, 2010, 04:40:38 AM

Hi. I'm the author of qaac. I'm Japanese, so please excuse my poor English.

Quote
I have noticed that the tools field may have a bug. On a three-track CD single, using --ignorelength and --tvbr=125, the note of the tvbr setting is repeated one more time in the field for every successive track in the cuesheet.

Thank you for letting me know that. I fixed the bug and released 0.21. Now I have opened qaac's repository at github, so you can use its built-in issue tracker, or you can just email me.

Quote
I have no idea how difficult it would be to implement the output of m4a files to a specified filename format with QAAC cuefile processing, but if it is possible, could the author please consider this as an enhancement request?

This is not too difficult, but will take some time. I will consider it. For now, I cited your request in the github as a "feature request", so I won't forget about it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 November, 2010, 11:05:12 AM

Released 0.22 with a new --fname-format option. The spec is fairly simple and not rich, but I hope you'll like it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: twinspex on 22 November, 2010, 02:19:17 PM

Thank you for your efforts. I will try out v0.22 and I will give you feedback. Your English is infinitely better than my Japanese! But courtesy of Google translate - Arigatō! All the best, Twinspex

Title: QAAC: discussion, questions, feature requests, etc.
Post by: twinspex on 22 November, 2010, 03:48:10 PM

I just found out I don't have permission to edit my last post, so here is the revised version!

Thank you for your efforts. I downloaded v0.23 and here is some feedback after a quick initial test: the --fname-format option does just what I wanted. --fname-format ${tracknumber}-${artist}-${title} seems to work fine. I can use the internal REACT2 variable @various@ as a flag for my REACT-image.cfg script to determine whether to use the above for a various-artists compilation, or to use the --fname-format ${tracknumber}-${title} format for a single-artist CD rip. (For the benefit of others, the .m4a extension is added automatically.)

I will be doing some more extensive rips in the future with greater file numbers and will give more feedback then, but for the moment I am happy to report that the tools/encoder tag problem reported earlier seems to be resolved.
It would be nice to be able to set the compilation tag from the cuesheet, but I can see this would not be an easy thing to do unless there were a specific "REM COMPILATION TRUE" or "REM COMPILATION FALSE" line in a standard EAC cuesheet, the reason being the wide variation in how the PERFORMER field is set for compilations - Various Artists, Various, VA, and so on - which would make the program logic to recognise a compilation rather hit and miss.

OT: I actually use "@" to indicate various artists in cuesheets, folder names and tags, because I wanted to keep folder names short due to display problems with long folder names on now-obsolete mp3 devices. It has caused me no end of batch file rewrites and enforced workarounds. Some day I will get around to retagging and renaming compilations as "VA" - always assuming that a pop group with that as a name does not turn up! It is a shame that there seemed to be no particular standard when I started ripping CDs, when EAC first became available. I recount this as advice to anyone just starting out with ripping a CD collection - try to find out what the established standards are and stick to them, and you will save yourself from problems in the future.

It would also be nice to be able to pick up and embed a cover thumbnail jpeg from the source folder (if present). Expecting the file to be called cover.jpg or folder.jpg might make the task simpler. I don't see this as a top-priority request though, as neroAacTag.exe can do this task when run from a REACT2mod REACT-image.cfg or REACT-track.cfg script after the QAAC encoding.

I am also keen to include Apple Lossless (ALAC) in the modified REACT scripts as an option, now I have seen that QAAC supports it. In general I prefer to look at using the QuickTime DLLs via QAAC because of the Apple encoder's support for gapless playback on supporting AAC players, which I believe is not currently supported by the Nero AAC encoder. Also, it is great that because of QAAC I no longer have to use itunesencode.exe and have iTunes installed and popping up maximised whenever I do an encode.

Your English is infinitely better than my Japanese! But, courtesy of Google translate - Arigatō! All the best, Twinspex
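[Editorial aside, not part of the thread: the compilation-detection problem described above can at least be approximated in script. The sketch below (Python, mine) first looks for an explicit REM COMPILATION line, for those who maintain one themselves, and otherwise falls back to matching common "various artists" spellings in the disc-level PERFORMER field, which is exactly the hit-and-miss part the poster mentions.]

```python
import re
import sys

# spellings of "various artists" seen in the wild (incl. the "@" convention above)
VA_NAMES = {'various artists', 'various', 'va', '@'}

def is_compilation(cue_path):
    text = open(cue_path, encoding='utf-8', errors='replace').read()
    m = re.search(r'^REM COMPILATION (TRUE|FALSE)', text, re.M)
    if m:  # explicit flag, if you add one to your own cuesheets
        return m.group(1) == 'TRUE'
    m = re.search(r'^PERFORMER "(.*)"', text, re.M)  # first (disc-level) PERFORMER
    return bool(m) and m.group(1).strip().lower() in VA_NAMES

print(is_compilation(sys.argv[1]))
```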
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 01 July, 2011, 06:29:32 PM

Hello, can I choose a better level of compression for ALAC mode? If I simply state qaac -A -o out.m4a in.wav, I get a pretty big file, almost as big as the wav. I believe the Apple lossless codec can compress better, but I did not figure out how to change the level.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: mixminus1 on 01 July, 2011, 08:58:09 PM

I don't think anyone can figure out how to change the level. AFAIK, Apple doesn't expose a compression level parameter in their ALAC encoder. For instance, even under OS X with XLD, which has direct access to CoreAudio, there's no setting for compression level.

IME, ALAC usually compresses comparably to FLAC -5, maybe -4, so if you're getting an ALAC file that's close to the original WAV, I would guess that it's just some material that's difficult to compress, like metal or industrial music with lots of strong, uncorrelated high-frequency content.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 02 July, 2011, 01:23:49 AM

OK thanks. Dealing with QAAC, I also wanted to know whether the writing library for AAC is the very same as the iTunes UI uses, since I can see different strings in the metadata (QuickTime vs. iTunes), and, as a last thing, whether there's a conversion table between the Nero and QAAC true VBR quality factors, or between average bitrate and tvbr value.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 04 July, 2011, 09:51:53 PM

It should be the same at the AAC bitstream level (that is, you obtain the same audio). However, the MP4 container (where metadata and the like go) is written in a different way. As far as I know, there's no documented way to directly write an AAC bitstream into an MP4 container via QuickTime. The AudioFile API is not usable on Windows. Therefore, qaac uses the opensource mp4v2 library (http://code.google.com/p/mp4v2/) for the work.

About the Nero/QuickTime comparison... No. The topic for qtaacenc (http://www.hydrogenaudio.org/forums/index.php?showtopic=78072) might help you.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: dpr on 23 August, 2011, 10:27:50 AM

Hi, I'd like to request that support is added to allow artwork to be embedded into the m4a file. I know it is possible to have artwork in the file, as React2 is able to use itunesencode to add artwork. I'm assuming that the QuickTime APIs support this. Thanks, Dpr

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 23 August, 2011, 09:05:44 PM

Hi, Thanks for the suggestion. Technically, qaac uses not QuickTime but an open source mp4v2 library for MP4 muxing (of course, AAC encoding is done with QuickTime). Artwork support is possible through the library, but I will need some time to implement that.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: dpr on 25 August, 2011, 07:07:20 AM

Thanks for considering the request.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ShotCaller on 07 September, 2011, 02:15:20 AM

Does anyone know why QAAC comes with Speex for resampling? Is there any advantage or situation where it should be used over the native resampler?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: subinbar on 08 September, 2011, 03:22:25 AM

Just a heads up - looks like qaac now reorders multichannel files correctly for QT 7.7, so you don't have to bother with older DLLs: http://sites.google.com/site/qaacpage/news/qaacrelease058 (http://sites.google.com/site/qaacpage/news/qaacrelease058)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: no404error on 08 September, 2011, 06:27:34 AM

Quote
Just a heads up - looks like qaac now reorders multichannel files correctly for QT 7.7, so you don't have to bother with older DLLs: http://sites.google.com/site/qaacpage/news/qaacrelease058 (http://sites.google.com/site/qaacpage/news/qaacrelease058)

[qaac] release 0.61
Sorry for the inconvenience, fixed bugs in previous releases.
* Channel remapping of 0.58 was insufficient for 7.1ch, and is fixed now. 7.1 audio has two similar mappings, (L R C LFE Ls Rs Lc Rc) and (L R C LFE Ls Rs Rls Rrs), and only the latter, with an explicit channel mask, was working.
http://sites.google.com/site/qaacpage/news (http://sites.google.com/site/qaacpage/news)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: subinbar on 10 September, 2011, 03:18:41 AM

nu774 - is it possible to use qaac in dbPowerAmp? nevermind - figured it out

Code: [Select]
-V 45 -o "[outfile]" - --no-optimize

That works well for about 96kbps.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 10 September, 2011, 05:07:18 AM

Quote
Does anyone know why QAAC comes with Speex for resampling? Is there any advantage or situation where it should be used over the native resampler?

Hi, Personally I don't think QuickTime's native resampler has particularly good quality. It seems that on QuickTime 7.7 it's still unchanged. I once picked the Secret Rabbit Code resampler, and its quality was decent (in exchange for very slow resampling speed). Then I dropped SRC due to a license issue (qaac cannot choose GPL), and picked the speex resampler. It seems that the speex resampler is not as good as SRC in quality, but it's acceptable (still better than QT native) and runs fast.

However, I don't wanna force this on you. Therefore I made it optional, so that you can always choose which resampler to use, and I think you'd better check and see resampler quality by yourself:

1. Encode some sweep sample with qaac like this: qaac -A --rate=44100 sweep96k.wav
2. Decode the resulting ALAC file.
3. Check the spectrogram of the result with software like sox.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 10 September, 2011, 06:00:45 AM

According to my tests, QT resamples with different quality for lossless and lossy encoding. I encoded a sweep sine file with the following command lines:

qaac -A --rate=44100 sweep96.wav -o A.m4a
qaac --cbr 320 --rate=44100 sweep96.wav -o cbr320.m4a

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 10 September, 2011, 06:34:58 AM

Quote
According to my tests, QT resamples with different quality for lossless and lossy encoding. I encoded a sweep sine file with the following command lines:
qaac -A --rate=44100 sweep96.wav -o A.m4a
qaac --cbr 320 --rate=44100 sweep96.wav -o cbr320.m4a

Oh, I didn't know that. Thanks for letting me know.
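[Editorial aside, not part of the thread: to try the sweep test described above you first need a sweep file. Here is one minimal way to generate a 10-second linear sine sweep at 16-bit/96 kHz with Python's standard library plus numpy (my sketch; any sweep generator, e.g. sox, works just as well).]

```python
import numpy as np
import wave

rate, dur = 96000, 10.0
t = np.arange(int(rate * dur)) / rate
# linear chirp: instantaneous frequency rises from 0 Hz to 48 kHz (Nyquist)
f_max = 48000.0
phase = 2.0 * np.pi * (f_max / (2.0 * dur)) * t ** 2
pcm = (0.8 * np.sin(phase) * 32767).astype('<i2')  # 16-bit little-endian samples

with wave.open('sweep96k.wav', 'wb') as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(rate)
    w.writeframes(pcm.tobytes())
```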
Title: QAAC: discussion, questions, feature requests, etc.
Post by: ShotCaller on 10 September, 2011, 03:55:35 PM

Perhaps the quality is better, but I am having a problem with Speex when I use this command in the Subsonic streamer:

qaac -V 127 --adts --no-optimize -o - %s

For some reason this is failing when streaming 24 bit 96 kHz FLAC, and yes, I do have libFLAC and libsndfile.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 10 September, 2011, 04:31:55 PM

When qaac uses Speex it creates a temporary file with the resampled content (and it takes time) and then encodes it to AAC.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 10 September, 2011, 11:28:18 PM

[qaac] release 0.63
Fixed a problem in the --downmix option. Thanks to b66pak, as always, for reporting this. This was found only when you specified --downmix to downmix a multichannel source into mono/stereo, and the source contained an explicit channel mask (for WAV files, this means that it's in extensible format). Also I have uploaded a fixed qaac_sample.reg, which contains qaac.reg. The previous one didn't work for HE encoding.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 15 October, 2011, 06:22:12 PM

While testing qaac 0.91 with the commandline

Code: [Select]
eac3to.exe "d:\temp\test.dts" stdout.wav | qaac.exe --tvbr 87 --quality 2 --ignorelength - f:\temp\captures\test_qaac.m4a

I got this error:

Code: [Select]
initializing QTML...done
qaac 0.91, QuickTime 7.7.0
<stdin>
5.1 (L R C LFE Lsd Rsd) -> 5.1 (C L R Ls Rs LFE)
01:58:36.992 (9.0x)
341615616/-1 samples processed in 13:12.360
Overall bitrate: 480.431kbps
7099/7099 chunks written (optimizing)
test_qaac.m4a
_wfopen: f:\temp\captures\test_qaac.m4a: No such file or directory

It seems that the AAC file is encoded as stdin.m4a in the eac3to directory but was never moved to the final destination and renamed.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 15 October, 2011, 06:36:52 PM

Use the -o switch for the output filename:

qaac.exe --tvbr 87 --quality 2 --ignorelength - -o f:\temp\captures\test_qaac.m4a

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 15 October, 2011, 06:38:29 PM

Thanks!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: xekon on 23 October, 2011, 12:10:11 AM

Why such different results? Do the two applications really work that differently when using more or less the same arguments? I assumed that since they both use QuickTime the results would be similar, just with different methods of getting the job done. Some of the tagging is different for the container, and a lot of that I don't even understand, but there is an obvious difference in the average bitrate for the audio stream between the two. Both qtaacenc and qaac use a range of 0-127 for tvbr, so I am a little confused... example:

Code: [Select]
E:\AviSynth\Wavi bd.avs - | qtaacenc --tvbr 90 --highest --samplerate keep --ignorelength - qtaacenc.m4a
E:\AviSynth\Wavi bd.avs - | qaac --tvbr 90 --quality 2 --rate keep --ignorelength - -o qaac.m4a

cmd window: [screenshot]

qaac encoded m4a mediainfo:

Code: [Select]
General
Complete name : E:\AviSynth\qaac.m4a
Format : MPEG-4
Format profile : Apple audio with iTunes info
Codec ID : M4A
File size : 1.17 MiB
Duration : 33s 259ms
Overall bit rate mode : Variable
Overall bit rate : 295 Kbps
Encoded date : UTC 2011-10-23 03:57:01
Tagged date : UTC 2011-10-23 03:57:05
Writing application : qaac 0.91, QuickTime 7.7.0, MPEG-4 AAC Encoder 1.7.1, Variable Bit Rate q91, Best

Audio
ID : 1
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Duration : 33s 259ms
Bit rate mode : Variable
Bit rate : 288 Kbps
Maximum bit rate : 355 Kbps
Channel(s) : 6 channels
Channel positions : Front: L C R, Side: L R, LFE
Sampling rate : 48.0 KHz
Compression mode : Lossy
Stream size : 1.16 MiB (99%)
Encoded date : UTC 2011-10-23 03:57:01
Tagged date : UTC 2011-10-23 03:57:05

qtaacenc encoded m4a mediainfo:

Code: [Select]
General
Complete name : E:\AviSynth\qtaacenc.m4a
Format : MPEG-4
Format profile : Base Media / Version 2
Codec ID : mp42
File size : 1.62 MiB
Duration : 33s 259ms
Overall bit rate mode : Variable
Overall bit rate : 409 Kbps
Encoded date : UTC 2011-10-23 04:56:58
Tagged date : UTC 2011-10-23 04:56:58
Writing application : qtaacenc 20110816, QuickTime 7.7.0, True VBR Quality 90
Encoding Params : (Binary)

Audio
ID : 1
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Duration : 33s 259ms
Bit rate mode : Variable
Bit rate : 407 Kbps
Maximum bit rate : 428 Kbps
Channel(s) : 6 channels
Channel positions : Front: L C R, Side: L R, LFE
Sampling rate : 48.0 KHz
Compression mode : Lossy
Stream size : 1.61 MiB (99%)
Language : English
Encoded date : UTC 2011-10-23 04:56:58
Tagged date : UTC 2011-10-23 04:56:58

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 23 October, 2011, 01:00:50 AM

Hi, As far as I know, 5.1ch encoding with qtaacenc is not working properly, due to a QuickTime-side mixer problem. qaac has a built-in workaround for it. Listen to the result, and confirm the channel layout.

However, I also confirmed that qtaacenc doesn't produce the same bitstream as qaac, even on 2ch stereo. I tried on qtaacenc version 20110816 by tmkk; try --cvbr 256 on qaac, and compare them with "iTunes Plus" encoding by iTunes. Also, try the same with qtaacenc. On qaac, at least --cvbr 256 produces exactly the same bitstream as iTunes Plus. You can use the foobar2000 "bit compare" menu or something for it. Also, if you have a working Python interpreter (version 2.x), you can use my silly script (mdatcmp.py) to simply compare the mdat in mp4 files: https://github.com/nu774/mdatcmp (https://github.com/nu774/mdatcmp)

However, I couldn't get the same result with qtaacenc. Maybe I'm missing something...
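[Editorial aside, not part of the thread: for readers without nu774's script to hand, the comparison mdatcmp.py performs can be approximated like this. This is my own simplified Python 3 sketch, not the actual script: it hashes the payload of the top-level 'mdat' boxes, which is where the encoded audio lives, so two files with identical digests carry the same bitstream even if their containers differ. The rare "box size 0 = to end of file" case is not handled.]

```python
import hashlib
import struct
import sys

def mdat_digest(path):
    """MD5 over the payload of all top-level 'mdat' boxes in an MP4 file."""
    h = hashlib.md5()
    with open(path, 'rb') as f:
        while header := f.read(8):
            size, box = struct.unpack('>I4s', header)
            if size == 1:  # 64-bit extended box size
                size = struct.unpack('>Q', f.read(8))[0] - 8
            body = f.read(size - 8)  # fine for small files; stream for big ones
            if box == b'mdat':
                h.update(body)
    return h.hexdigest()

a, b = sys.argv[1], sys.argv[2]
print('identical mdat' if mdat_digest(a) == mdat_digest(b) else 'different mdat')
```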
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 23 October, 2011, 01:20:30 AM

I'm sorry, forget about the 2ch difference in the previous post. It seems there was a problem in my environment / testing.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 23 October, 2011, 02:06:59 AM

It seems that the new iTunes 10.5.0 (which doesn't require QT) produces a different result for iTunesPlus compared to iTunes 10.4.1... When I tested before, qaac -v256 used to produce the same result as iTunes 10.4.1, but not as 10.5.0; I tried to downgrade iTunes to re-confirm it, but unfortunately my iTunes library is already upgraded to 10.5.0, so iTunes refused to start up. Anyway, this is what I have encoded with both versions: https://sites.google.com/site/qaacpage/cabi...rects=0&d=1 (https://sites.google.com/site/qaacpage/cabinet/sample.zip?attredirects=0&d=1) Could someone confirm this?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Larson on 23 October, 2011, 04:09:43 AM

I found a bug with QAAC when converting to tvbr with -q value 1 (since there's a bug with the latest QuickTime version and -q 2, as far as I've read). For instance, with TVBR 113 (around 256 kbps) the resulting bitrate is much lower than it should be. There's no problem with CVBR instead.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 23 October, 2011, 04:44:51 AM

Quote
I found a bug with QAAC when converting to tvbr with -q value 1 (since there's a bug with the latest QuickTime version and -q 2, as far as I've read). For instance, with TVBR 113 (around 256 kbps) the resulting bitrate is much lower than it should be. There's no problem with CVBR instead.

Thanks for reporting this. I could confirm it; running directly from the command line, it's visible that the displayed TVBR quality value is already broken when you specify -q1 (q0 and q2 seem OK). This is very strange, since qaac just passes the -q value (0, 1, 2) to QuickTime, and it's transparent to qaac (the displayed value is re-fetched from QuickTime after configuration of the encoder has finished). However, it seems that 0.90 doesn't suffer from this. So, please use 0.90 for now. I'm currently investigating another strange problem on 0.91. I will release a new version (0.92) when it is settled and fixed.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: xekon on 23 October, 2011, 07:11:02 PM

Quote
Hi, As far as I know, 5.1ch encoding with qtaacenc is not working properly, due to a QuickTime-side mixer problem. qaac has a built-in workaround for it. Listen to the result, and confirm the channel layout.

I believe you are correct. I encoded a 2-channel file with the same cmd as I did for the 6-channel file, and both qaac and qtaacenc output the same bitrate; thank you so much for clearing that up.

edit: testing the file you sent me now. For whatever reason QuickTime is reporting as the correct version now. I cannot even seem to reproduce the problem where it reported as 0.9.13.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 25 October, 2011, 09:50:22 PM

Hi, forget about this post. It seems that iTunes 10.5 installs a new CoreAudioToolBox.dll. Its version is 7.9.7.3, the same as QT 7.7.0; however, it's a bit bigger in size, and it seems the AAC encoder is updated. I confirmed qaac -v256 produces an identical bitstream to iTunesPlus of iTunes 10.5.

Quote
It seems that the new iTunes 10.5.0 (which doesn't require QT) produces a different result for iTunesPlus compared to iTunes 10.4.1... When I tested before, qaac -v256 used to produce the same result as iTunes 10.4.1, but not as 10.5.0; I tried to downgrade iTunes to re-confirm it, but unfortunately my iTunes library is already upgraded to 10.5.0, so iTunes refused to start up. Anyway, this is what I have encoded with both versions: https://sites.google.com/site/qaacpage/cabi...rects=0&d=1 (https://sites.google.com/site/qaacpage/cabinet/sample.zip?attredirects=0&d=1) Could someone confirm this?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 26 October, 2011, 03:02:56 AM

Quote
Hi, forget about this post. It seems that iTunes 10.5 installs a new CoreAudioToolBox.dll. Its version is 7.9.7.3, the same as QT 7.7.0; however, it's a bit bigger in size, and it seems the AAC encoder is updated.

I just downloaded the 32-bit installer of iTunes 10.5 and got CoreAudioToolBox.dll version 7.9.7.8; according to the file properties it was modified on Sept 27. It would be really nice to know what has changed between the many versions, to help decide whether to upgrade or not.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 26 October, 2011, 05:01:30 AM

Hi, Oh, really? From the explorer property view, in my environment it has the same time stamp as yours (Sept 27 2011, 7:22:30), the file size is 4,880,232 bytes, and the product version is displayed as 7.9.7.3. FYI, the md5sum is caae337ba3baa5590a595dc57e6c5d16. This is apparently updated, so probably yours is right. I wonder why explorer says 7.9.7.3 in my environment...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 26 October, 2011, 05:32:08 AM

Seems that my version also has the same md5 sum, so we have the same file.
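[Editorial aside, not part of the thread: the md5sum comparison used above can be reproduced on Windows without extra tools, via Python's standard library.]

```python
import hashlib
import sys

h = hashlib.md5()
with open(sys.argv[1], 'rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):  # read in 1 MiB pieces
        h.update(chunk)
print(h.hexdigest())
```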
Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 26 October, 2011, 10:35:44 AM

Quote
I wonder why explorer says 7.9.7.3 in my environment...

It depends on the system locale. The file contains the following strings (in unicode encoding): "7.9.7.7", "7.9.7.3", "7.9.7.8"

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 26 October, 2011, 11:34:39 AM

Quote
It depends on the system locale. The file contains the following strings (in unicode encoding): "7.9.7.7", "7.9.7.3", "7.9.7.8"

Hi, thanks. I opened up CoreAudioToolbox.dll with Visual Studio, and confirmed that FILEVERSION (and PRODUCTVERSION) for most locales are 7,9,7,3 (7,9,7,7 for ar-SA, and 7,9,7,8 for en-US). Really confusing...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Larson on 31 October, 2011, 04:52:03 AM

nu774, could you please tell me how refalac works? What is the command line setting to use to convert to ALAC with dbpoweramp, for example? -A -o [outfile] - doesn't work

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 October, 2011, 05:12:16 AM

Quote
nu774, could you please tell me how refalac works? What is the command line setting to use to convert to ALAC with dbpoweramp, for example? -A -o [outfile] - doesn't work

Code: [Select]
-o [outfile] -

will work.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Larson on 31 October, 2011, 05:20:42 AM

Still not working for me, but thanks for answering!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 October, 2011, 05:34:24 AM

Quote
Still not working for me, but thanks for answering!

Can you run refalac directly from the command prompt?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Pusherman on 31 October, 2011, 08:27:39 AM

Quote
Still not working for me, but thanks for answering!

Code: [Select]
' -A -o "[outfile]" - --no-optimize '

Try with spaces at the beginning and end (--no-optimize last).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Larson on 31 October, 2011, 08:52:51 AM

EDIT: it works now, thank you!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: overfloater on 05 November, 2011, 05:43:23 PM

Quote
I confirmed qaac -v256 produces an identical bitstream to iTunesPlus of iTunes 10.5.

Interesting. I'm not getting identical files with qaac -v256 and iTunes Plus via iTunes 10.5.0.142. CoreAudioToolbox.dll is version 7.9.7.8. Disclaimer: I'm no expert. (I only just registered to post this!) Source material was a standard CD-quality WAV ripped by EAC, then manually pushed through iTunes and qaac separately. I assumed I'd see identical output files from each, hence I stumbled across this thread. But perhaps I'm making a newbie mistake, overlooking something obvious, or simply don't know what I'm talking about.

Edit: Ignore me. I keyed in on the "bitstream" part and realized that the files can be non-identical while the bitstream is the same. (Though I'm not 100% sure I understand where the file differences actually are.) Downloaded and used the foobar Bit Compare component to confirm the qaac/iTunes bitstreams are, in fact, identical. My bad! Told you I was a newbie...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kareha on 06 November, 2011, 08:38:58 AM

Has anyone managed to get the --artwork option to work when using refalac with foobar? I'm struggling atm; any advice would be great. I'm using

Code: [Select]
-o %d -

just to do the normal conversion, but whenever I add in --artwork folder.jpg to add my artwork it just doesn't want to know.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 06 November, 2011, 09:05:27 AM

Copy folder.jpg to the destination folder and see if it works.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 06 November, 2011, 09:07:38 AM

Hi, It will work only if you have folder.jpg in the encoding destination folder. Since foobar spawns the CLI encoder with the destination folder as its current directory, it won't work if you have folder.jpg somewhere else. I should have mentioned it more clearly in the usage page; I have updated it now.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kareha on 06 November, 2011, 09:24:18 AM

Still can't get it to work. I'm encoding from the directory that has both the FLAC files and the folder.jpg, but it won't work. This is what I'm using, maybe I've got it wrong:

Code: [Select]
-o --artwork folder.jpg %d -

These are the errors I'm getting within foobar:

Code: [Select]
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\01. only my railgun.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\02. LEVEL5 -judgelight-.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\03. everlasting.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\04. late in autumn.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\05. future gazer.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\06. kanashii seiza.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\07. crossing over.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\08. closest love.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\09. meditations.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\10. trusty snow.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\11. lost answer.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\12. eternal pain.m4a"
Could not load info (Object not found) from: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\13. stay with you.m4a"

Code: [Select]
13 out of 13 tracks converted with major problems.

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\01. only my railgun.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\01. only my railgun.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "01. only my railgun.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\02. LEVEL5 -judgelight-.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\02. LEVEL5 -judgelight-.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "02. LEVEL5 -judgelight-.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\03. everlasting.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\03. everlasting.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "03. everlasting.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\04. late in autumn.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\04. late in autumn.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "04. late in autumn.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\05. future gazer.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\01. only my railgun.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "01. only my railgun.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\06. kanashii seiza.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\02. LEVEL5 -judgelight-.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "02. LEVEL5 -judgelight-.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\07. crossing over.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\03. everlasting.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "03. everlasting.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\08. closest love.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\04. late in autumn.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "04. late in autumn.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\09. meditations.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\01. only my railgun.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "01. only my railgun.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\10. trusty snow.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\02. LEVEL5 -judgelight-.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "02. LEVEL5 -judgelight-.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\11. lost answer.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\03. everlasting.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "03. everlasting.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\12. eternal pain.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\04. late in autumn.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "04. late in autumn.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Source: "D:\Music\Lossless\fripSide\infinite synthesis (2010)\13. stay with you.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "D:\Music\Lossless\fripSide\infinite synthesis (2010)\01. only my railgun.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "K:\Stuff\Foobar Components\qaac\refalac.exe" -o --artwork folder.jpg "01. only my railgun.m4a" -
Working folder: D:\Music\Lossless\fripSide\infinite synthesis (2010)\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 06 November, 2011, 09:44:59 AM

Quote
Still can't get it to work. I'm encoding from the directory that has both the FLAC files and the folder.jpg, but it won't work. This is what I'm using, maybe I've got it wrong:
Code: [Select]
-o --artwork folder.jpg %d -

Please try again with

Code: [Select]
-o %d --artwork folder.jpg -

-o is an option to specify the output file name. Therefore, %d (the placeholder for the output filename used by foobar) must come just after -o.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kareha on 06 November, 2011, 10:06:36 AM

Quote
Please try again with
Code: [Select]
-o %d --artwork folder.jpg -
-o is an option to specify the output file name. Therefore, %d (the placeholder for the output filename used by foobar) must come just after -o.

Perfect, thank you very much for the help, very much appreciated.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ainou on 07 November, 2011, 07:01:49 PM

Hello all, Can someone explain to me why this command

Code: [Select]
C:\conv\qaac_0.96>qaac.exe --cbr 96 01.WAV -o 01_AAC.mp4
initializing QTML...done
qaac 0.96, QuickTime 7.7.0
01.WAV
Estéreo (E D) -> Estéreo (E D)
Codificador MPEG-4 AAC 1.7.1, Taxa de bits constante 96kbps, Óptima
[25.0%] 3:33.840/14:15.360 (32.2x), ETA 0:19.919
9430344/37721376 samples processed in 0:06.660
Overall bitrate: 96.019kbps
210/210 chunks written (optimizing)

is producing mp4 files with this information (as returned by MediaInfo)?

Audio
ID : 1
Format : AAC
Format/Info : Advanced Audio Codec
Format profile : LC
Codec ID : 40
Duration : 3mn 33s
Bit rate mode : Variable
Bit rate : 96.0 Kbps

Whatever option I use with -c, I seem not to be able to get any Constant Bit Rate file. Any clues? Thanks, AM

Title: QAAC: discussion, questions, feature requests, etc.
Post by: YumeYao on 07 November, 2011, 10:01:32 PM

I've encountered a problem: I'm running Linux (Ubuntu 10.04 64-bit) and using foobar2k and qaac in a VM (VMWare Player 4.0.0) with Windows XP SP3 installed. On Windows, D:\ is mounted as a network drive, and all music is stored on D:\. So when I'm using foobar2k to convert a music file, if the destination directory is on D:\, the resulting file is corrupted, no matter whether --no-optimize is specified or not. But if the destination directory is on C:\ (not a network drive), the converted file is OK. The QT version is the latest, 7.7.1.
Then I ran several tests from the command line (to bypass foobar2k's touching of the mp4 container), using the commands below:
Code: [Select]
type xxx.wav | qaac.exe -i -o "d:\xxx.m4a" -
type xxx.wav | qaac.exe -i --no-optimize -o "d:\xxx.m4a" -
qaac.exe -i -o "d:\xxx.m4a" xxx.wav
qaac.exe -i --no-optimize -o "d:\xxx.m4a" xxx.wav
So there seems to be something wrong when qaac is used with foobar2k, because when I use faac or neroaacenc, I never encounter such problems.

Post by: nu774 on 08 November, 2011, 01:07:21 AM
Quote
Whatever option I use with -c, I seem not to be able to get any constant bit rate file. Any clues?
It's because they are not constant bit rate files. The "CBR" mode of the QuickTime AAC encoder is actually just a more constrained ABR mode; therefore each AAC audio frame can differ in size. It's technically possible to pretend to be CBR for MediaInfo (it's just looking at the stsd decConfig descriptor), but qaac rather sets these to the actual values.

Post by: nu774 on 08 November, 2011, 01:24:09 AM
Quote
When I use foobar2k to convert a track, if the destination directory is on D:\, the resulting file is corrupted whether or not --no-optimize is specified. But if the destination directory is on C:\ (not a network drive), the converted file is fine.
Could you try the following?
Code: [Select]
-V64 -o %d.m4a --log %d.txt -
If you set it like this, the output filename is different from what foobar2000 expects (an extra ".m4a" is appended at the end). Therefore foobar2000 will show an error, but the result will actually be in the destination folder, untouched by the foobar side. With the --log option, a log file will be written by qaac. Also, the console messages of foobar2000 will be useful (View->Console).

Post by: YumeYao on 09 November, 2011, 12:42:37 AM
Sorry, in my last post I meant to say that all 4 CLI tests are fine; I just missed that. I have followed your instructions to get the logs and the foobar console, finding that it should be some problem with foobar. The log seems fine (I'm Chinese, so the parameters are in Chinese):
Quote
initializing QTML...done
qaac 0.96, QuickTime 7.7.1
<stdin>
立体声 (L R) -> 立体声 (L R)
MPEG-4 AAC 编码器 1.7.1, 可变位速率 q91, 最佳
8313144/-1 samples processed in 0:14.281
Overall bitrate: 187.008kbps
There are problematic messages in the foobar console:
Quote
CLI encoder: qaac.exe
Destination file: D:\02 Rolling star.m4a
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "D:\WinTools\foobar2k\qaac.exe" -V 90 -q 2 --no-optimize -i -o "02 Rolling star.m4a" -
Working folder: D:\
Encoder process still running, waiting...
Encoder process terminated cleanly.
Track converted successfully.
AAC decode error when analyzing first frame
could not enumerate tracks (Unsupported format or corrupted file) on: D:\02 Rolling star.m4a
Total encoding time: 0:12.828, 14.69x realtime
So it seems foobar is to blame. So I tried faac....
Quote
CLI encoder: faac.exe
Destination file: D:\02 Rolling star.m4a
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "D:\WinTools\foobar2k\faac.exe" -q 700 - -o "02 Rolling star.m4a"
Working folder: D:\
Encoder process still running, waiting...
Encoder process terminated cleanly.
Track converted successfully.
AAC decode error when analyzing first frame
could not enumerate tracks (Unsupported format or corrupted file) on: D:\02 Rolling star.m4a
Total encoding time: 0:12.265, 15.36x realtime
So the bug definitely belongs to this version of foobar (1.1.1). Sorry for mis-reporting.

Post by: subinbar on 10 November, 2011, 05:59:19 PM
nu774, would it be possible to use only qaac.exe along with the Apple Application Support DLLs in the same folder, in a portable manner, without the need for registry keys and a separate installation? (Now that it bypasses QuickTime.)

Post by: nu774 on 10 November, 2011, 07:53:31 PM
Quote
nu774, would it be possible to use only qaac.exe along with the Apple Application Support DLLs in the same folder, in a portable manner, without the need for registry keys and a separate installation? (Now that it bypasses QuickTime.)
Yes. They are searched in the following order. No registry setting is required.
1) The directory where qaac.exe is placed
2) The Windows system directory
3) The "QTfiles" sub-directory
4) The directory in the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Apple Inc.\Apple Application Support" (this can be overridden with qaac.reg)
5) Directories in the PATH environment variable

Post by: subinbar on 10 November, 2011, 10:01:54 PM
Quote
Yes. They are searched in the following order. No registry setting is required.
1) The directory where qaac.exe is placed
2) The Windows system directory
3) The "QTfiles" sub-directory
4) The directory in the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Apple Inc.\Apple Application Support" (this can be overridden with qaac.reg)
5) Directories in the PATH environment variable
Awesome, thanks! Do you mind listing the exact DLLs that are needed?

Post by: nu774 on 10 November, 2011, 11:44:16 PM
Quote
Awesome, thanks! Do you mind listing the exact DLLs that are needed?
Direct dependencies: CoreAudioToolbox.dll, CoreFoundation.dll.
Indirect dependencies: ASL.dll, icudt46.dll, libdispatch.dll, libicuin.dll, libicuuc.dll, objc.dll, pthreadVC2.dll.
Of course, these might change in the future (especially the ICU version). You can check the dependencies rather easily with tools like Dependency Walker (http://www.dependencywalker.com/).

Post by: Xenion on 11 November, 2011, 03:27:32 AM
I'm very impressed by the continuity with which qaac is developed and want to thank nu774 once again.

Post by: DARcode on 11 November, 2011, 08:53:23 AM
I'm pretty impressed too; I just switched to qaac for my lossy encodes. Do you accept donations, nu774?

Post by: nu774 on 11 November, 2011, 09:08:40 AM
Quote
I'm pretty impressed too; I just switched to qaac for my lossy encodes. Do you accept donations, nu774?
Thank you, but currently I'm not accepting donations. (It seems that PayPal private donations are somewhat restricted in my country, for legal reasons.)
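Putting nu774's search order and DLL list from the posts above together, a fully portable install might look like the sketch below. This is only an illustrative sketch: the top-level folder name is arbitrary, the DLL set is the one listed above and may change with future Apple updates, and the MSVC runtime DLLs mentioned a few posts below may also be needed.
Code: [Select]
qaac\
    qaac.exe
    QTfiles\
        ASL.dll
        CoreAudioToolbox.dll
        CoreFoundation.dll
        icudt46.dll
        libdispatch.dll
        libicuin.dll
        libicuuc.dll
        objc.dll
        pthreadVC2.dll
No registry keys or installer are involved; per search rule 3) above, qaac.exe finds the DLLs in the QTfiles sub-directory next to it.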
Post by: DARcode on 11 November, 2011, 09:33:42 AM
Quote
Thank you, but currently I'm not accepting donations. (It seems that PayPal private donations are somewhat restricted in my country, for legal reasons.)
A wish list on Amazon.co.jp maybe? Alternatively, anything I can send you from Italy?

Post by: lvqcl on 11 November, 2011, 09:38:32 AM
Quote
Direct dependencies: CoreAudioToolbox.dll, CoreFoundation.dll.
Indirect dependencies: ASL.dll, icudt46.dll, libdispatch.dll, libicuin.dll, libicuuc.dll, objc.dll, pthreadVC2.dll.
And also msvcr80.dll / msvcp80.dll (ver. 8.0.50727.6195).

Post by: nu774 on 11 November, 2011, 10:04:35 PM
Quote
And also msvcr80.dll / msvcp80.dll (ver. 8.0.50727.6195).
Yes, thanks. They are the latest security-update version of the MSVC 2005 SP1 runtime. If you have the Visual C++ 2005 SP1 redistributable package installed and run MS Update, it's probably already installed on your system.

Post by: Larson on 12 November, 2011, 04:36:20 AM
nu774, first of all thank you for the impressive work you're doing! I wanted to ask you one thing: when converting with -q 2, the tag reports it as "Quality 96", which is "Better" according to this scheme on the Apple developer site:
Better  kAudioCodecQuality_High  96 (0x60)
Best    kAudioCodecQuality_Max   127 (0x7F)
Is it the same as Max, or is there another setting for that?

Post by: nu774 on 12 November, 2011, 05:39:10 AM
Yes, it is the same as kAudioCodecQuality_Max, as far as the AAC codec is concerned. If you set a larger value than this, it's just rounded down to 96.

Post by: awx on 04 December, 2011, 10:50:02 AM
Hi,
Quote
It will work only if you have folder.jpg in the encoding destination folder. Since foobar2000 spawns the CLI encoder with the destination folder as its current directory, it won't work if folder.jpg is somewhere else. I should have mentioned it more clearly in the usage page; I have updated it now.
Quote
However, if you have folder.jpg (or something) in your source folder and it's different from the destination folder, you have to pass the full path of folder.jpg to qaac. Therefore, your setting option will get more complex.
Hi, and what must the command line be if I have folder.jpg in my source folder?

Post by: lvqcl on 04 December, 2011, 10:58:34 AM
AFAIK it's not possible to pass the name of the source folder from foobar2000 to an encoder.

Post by: LigH on 13 December, 2011, 03:51:18 AM
ExactAudioCopy 1.0 b2 supports qaac well; the option set was improved since EAC 0.9x and almost completely covers qaac's options (except "--compilation"):
Code: [Select]
-V 80 -o %dest% --title "%title%" --artist "%artist%" --band "%albumartist%" --album "%albumtitle%" --track "%tracknr%/%numtracks%" --disk "%cdnumber%/%totalcds%" --genre "%genre%" --date "%year%" --comment "%comment%"%hascover% --artwork %coverfile%%hascover%%haslyrics% --lyrics %lyricsfile%%haslyrics% %source%
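For illustration, here is roughly what that EAC option line could expand to for one track after EAC substitutes its placeholders. Every tag value and path below is invented for the example, and the %hascover%...%hascover% and %haslyrics%...%haslyrics% pairs are assumed to drop the options they enclose when no cover or lyrics file exists (here, a cover is present and lyrics are not):
Code: [Select]
-V 80 -o "C:\EAC\Some Album\01 - Some Title.m4a" --title "Some Title" --artist "Some Artist" --band "Some Album Artist" --album "Some Album" --track "1/12" --disk "1/1" --genre "Rock" --date "2011" --comment "Ripped with EAC" --artwork "C:\EAC\Some Album\folder.jpg" "C:\EAC\Some Album\01 - Some Title.wav"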
Post by: nu774 on 13 December, 2011, 08:23:38 AM
Quote
ExactAudioCopy 1.0 b2 supports qaac well; the option set was improved since EAC 0.9x and almost completely covers qaac's options (except "--compilation"):
I'm thinking that the current --compilation option design is not friendly to GUI front-end usage. Since it takes no argument and is controlled just by the presence of the --compilation option, it might be difficult or sometimes impossible to use from a GUI front-end (it will require some conditional control). It would be much simpler if you could just use --compilation=%compilation% or something.

Post by: LigH on 13 December, 2011, 08:38:30 AM
Regarding the philosophy of the EAC 1.0 option set, its support would probably become (a guess into the blue): %ifvarious%--compilation%ifvarious%
I'm more curious if any player is interested in that flag at all ... possibly Apple iTunes only.

Post by: nu774 on 13 December, 2011, 08:44:41 AM
Quote
I'm more curious if any player is interested in that flag at all ... possibly Apple iTunes only.
Yes, I don't know of any other than that. Since iTunes already supports sorting/grouping by album artist like other players, it probably won't be so important for iTunes users either.

Post by: Soren on 19 December, 2011, 01:04:26 AM
I don't know what I'm doing wrong, but I can't produce a non-corrupted ALAC file using Foobar. I always get this error in the console:
Quote
An error occurred while finalizing the encoding process (Unsupported format or corrupted file) : "C:\Users\Philippe\Desktop\El Camino\11. The Black Keys - Mind Eraser.m4a"
Conversion failed: Unsupported format or corrupted file
could not enumerate tracks (Unsupported format or corrupted file) on: C:\Users\Philippe\Desktop\El Camino\11. The Black Keys - Mind Eraser.m4a
I use the default command line (-A -o %d -). Does someone have a clue about what's going on?

Post by: nu774 on 19 December, 2011, 03:15:43 AM
Please post the foobar2000 console message (View->Console).

Post by: marc2003 on 19 December, 2011, 05:57:34 AM
The previous post says that message is from the console... Perhaps it's an outdated version of foobar without foo_input_alac? Old versions need that component to tag files. The current version can play and tag Apple Lossless without extra components, as it uses the recently released source code.

Post by: nu774 on 19 December, 2011, 06:40:50 AM
Quote
The previous post says that message is from the console...
Oh, sorry... I missed that sentence. I just wanted to see more detailed information, such as the exact command line, which should have been printed to the console.

Post by: Soren on 19 December, 2011, 09:01:32 AM
Thanks for replying, guys, but the problem is solved! The problem must have been on the Foobar side, since I updated from 1.0.9 to 1.1.10 and everything works now. Thanks for the nice piece of software (QAAC); I'm now converting my FLAC collection to ALAC with much more ease!
Soren
Post by: holyrevenger on 23 December, 2011, 09:15:00 PM
@nu774
On my system (i5-2410m / Win7 x64 / fb2k 1.1.10 / QT Lite 4.10 / qaac 1.13), qaac.exe will "stop working" when converting with "thread detect" set to 0 in fb2k. But qaac 0.99 works fine. Any suggestions?

Post by: nu774 on 23 December, 2011, 11:28:55 PM
Quote
On my system (i5-2410m / Win7 x64 / fb2k 1.1.10 / QT Lite 4.10 / qaac 1.13), qaac.exe will "stop working" when converting with "thread detect" set to 0 in fb2k. But qaac 0.99 works fine. Any suggestions?
I have no idea, though I haven't tried qaac with QT Lite. Could you add the --log option like the following, and let me see the log file as well as foobar2000's console log?
Code: [Select]
-o %d - --log %d.txt

Post by: lvqcl on 24 December, 2011, 04:41:23 AM
I installed QT Lite 4.10 (into a virtual machine). It contains CoreAudioToolbox 7.9.5.0, btw.
For qaac 1.14: the error window pops up in the middle of encoding, but qaac continues to encode the input file; its log file is fine. The pop-up window contains the following:
Quote
Error signature:
AppName: qaac.exe  AppVer: 0.0.0.0  ModName: msvcr80.dll
ModVer: 8.0.50727.4053  Offset: 000029e1

Post by: Anakunda on 24 December, 2011, 05:21:05 AM
Hi, is it possible to encode 2.1 audio without losing any channel information? I get an unsupported format error.

Post by: nu774 on 24 December, 2011, 06:11:24 AM
Quote
Hi, is it possible to encode 2.1 audio without losing any channel information? I get an unsupported format error.
If you don't want to lose the LFE, you have to upmix to 5.1. Try a matrix like the following:
Code: [Select]
1 0 0
0 1 0
0 0 0
0 0 1
0 0 0
0 0 0

Post by: nu774 on 24 December, 2011, 06:24:59 AM
Quote
I installed QT Lite 4.10 (into a virtual machine). It contains CoreAudioToolbox 7.9.5.0, btw.
Thanks for testing.
Quote
For qaac 1.14: the error window pops up in the middle of encoding, but qaac continues to encode the input file; its log file is fine.
Sounds like the background thread created by CoreAudio is crashing. As for a regular installation of CoreAudioToolbox 7.9.7.3 -- though you might see it as 7.9.7.8 or something ;-) -- it creates a new background thread when the DLL is attached to the process (from qaac's point of view, this thread is spawned via a call to LoadLibraryW(L"CoreAudioToolbox.dll")). I don't know what the background thread is for, but it looks like it just sits there waiting for something. In fact, I just tried killing it: the qaac main thread simply continued as if nothing had happened, and terminated properly, as you say. Very interesting, but there's nothing I can do here.
@holyrevenger
I have to say that you'd better use the regular Apple Application Support. Not to mention the license issue, 7.9.5.0 is just too old, and the HE encoder is not available (it was available via QuickTime, but cannot be used directly from CoreAudioToolbox). If you don't want to install them, just search and read this thread.

Post by: nu774 on 24 December, 2011, 09:12:06 AM
I quickly investigated this further. It seems this thread is owned by libdispatch, which is used by Apple Application Support.
Documented here: http://developer.apple.com/library/mac/#documentation/Performance/Reference/GCD_libdispatch_Ref/Reference/reference.html
Open-sourced implementation here: http://libdispatch.macosforge.org/
The fact that qaac runs fine without this dispatcher thread bound to the main queue of libdispatch probably means that it's simply not used, at least by the AAC encoder... but I don't know. If you have this kind of trouble with a regular installation, please let me know. However, if the problem happens there, there's probably nothing I can do; it's completely internal to CoreAudio's job.

Post by: holyrevenger on 25 December, 2011, 03:59:24 AM
@nu774
Thanks for your advice. Thinking that my version of QT Lite was too old, I installed Apple Application Support and it works fine now.

Post by: Bostedclog on 29 December, 2011, 02:51:05 PM
Hello hydrogenaudio. I'm a newbie on here and this is my first post, so please be gentle. I have a couple of simple questions regarding qaac which I'm sure you experts can help me out with. Firstly, I'm running Windows 7 32-bit. I have downloaded the latest qaac from the link in the first post, but in the x64 folder, which I think is for me, all I see is refalac64 and a few DLLs. Where is the qaac.exe file? Thank you.
Also, for it to work in Foobar, would these settings be OK: --tvbr 60 --highest - %d? I just want to use it for my iriver H340. And what does --highest on the command line mean? Many thanks in advance, and keep up the good work.
P.S. All I have downloaded is QuickTime and foobar. Is this correct, or is there anything better or anything else I need?

Post by: LigH on 29 December, 2011, 03:07:05 PM
Are you trying to execute a 64-bit application on a 32-bit Windows? That wouldn't work anyway.

Post by: nu774 on 29 December, 2011, 08:24:58 PM
Quote
Firstly, I'm running Windows 7 32-bit. I have downloaded the latest qaac from the link in the first post, but in the x64 folder, which I think is for me, all I see is refalac64 and a few DLLs. Where is the qaac.exe file? Thank you.
qaac is 32-bit only. Anything inside the x64 folder won't run in your environment anyway. Just ignore it, and copy the contents of the x86 folder to somewhere you like.
Quote
Also, for it to work in Foobar, would these settings be OK: --tvbr 60 --highest - %d?
That doesn't work. The following is identical to that qtaacenc command line:
Code: [Select]
-V60 -q2 - -o%d
However, you had better read the usage page for more details.

Post by: Bostedclog on 30 December, 2011, 03:07:22 AM
So x86 is the 32-bit exe? God, I'm dumb. Thank you very much for your advice. Do you know what --highest meant? If not, it doesn't matter, and thanks again.

Post by: LigH on 30 December, 2011, 04:00:26 AM
"x86" once had the meaning of "Intel 80x86 family compatible", to be separated from Motorola and RISC processors with a different machine code. That abbreviation already existed in 8-bit and 16-bit times, long before AMD made the first Athlon 64 with a 64-bit architecture mode. Therefore, "x86" still means "compatible with 32-bit addressing" too.
Option "--highest" of qtaacenc (using the highest amount of efforts to encode efficiently, taking the longest time) is equal to "-2" of qaac. Title: QAAC: discussion, questions, feature requests, etc. Post by: Bostedclog on 30 December, 2011, 05:43:11 AM "x86" once had the meaning of "intel 80x86 family compatible", to be separated from Motorola and RISC processors with a different machine code. That abbreviation existed already in 8-bit and 16-bit times, even long before AMD made the first Athlon 64 with 64-bit architecture mode. Therefore, "x86" still means "compatible with 32-bit addressing" too. Option "--highest" of qtaacenc (using the highest amount of efforts to encode efficiently, taking the longest time) is equal to "-2" of qaac. Superb. Title: QAAC: discussion, questions, feature requests, etc. Post by: Boulder on 30 December, 2011, 03:26:50 PM Was the message that showed channel remapping lost at some version? I updated to the latest QAAC but I don't see the message anymore. Title: QAAC: discussion, questions, feature requests, etc. Post by: LigH on 30 December, 2011, 03:36:41 PM Good to see that qaac is supported quite well; my question about qtaacenc (http://www.hydrogenaudio.org/forums/index.php?showtopic=78072&view=findpost&p=778777) is still unreplied... Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 30 December, 2011, 09:16:19 PM Was the message that showed channel remapping lost at some version? I updated to the latest QAAC but I don't see the message anymore. Yes, it's not displayed since 1.00 due to the drastic underlying API change. On 1.00 branch, I dropped QuickTime7 and moved to CoreAudio. --remix option was also dropped, and now is superceded by --matrix-preset and --matrix-file. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 01 January, 2012, 06:05:53 AM Released 1.18 now. Enabled channel layout printing with --verbose on. Title: QAAC: discussion, questions, feature requests, etc. Post by: Boulder on 01 January, 2012, 06:38:53 AM Released 1.18 now. Enabled channel layout printing with --verbose on. Thanks for adding the option without me even asking for it Title: QAAC: discussion, questions, feature requests, etc. Post by: Bostedclog on 02 January, 2012, 05:01:36 PM Before I decide to encode all of my music to qaac .Does anyone know if there will be any significant improvements in the near future regarding sound quality.Many thanks.. Title: QAAC: discussion, questions, feature requests, etc. Post by: Kohlrabi on 02 January, 2012, 05:09:59 PM Before I decide to encode all of my music to qaac .Does anyone know if there will be any significant improvements in the near future regarding sound quality.Many thanks.. Sound quality is of course the same as files generated with Quicktime/iTunes directly, since qaac interfaces Apple's encoders. So you should bug Apple about their encoders if you deem them too bad. At relatively low bitrates, Quicktime was recently found to produce the highest quality AAC encodes, though (http://listening-tests.hydrogenaudio.org/igorc/aac-96-a/results.html). Title: QAAC: discussion, questions, feature requests, etc. Post by: Bostedclog on 03 January, 2012, 05:09:16 AM Before I decide to encode all of my music to qaac .Does anyone know if there will be any significant improvements in the near future regarding sound quality.Many thanks.. Sound quality is of course the same as files generated with Quicktime/iTunes directly, since qaac interfaces Apple's encoders. 
So you should bug Apple about their encoders if you deem them too bad. At relatively low bitrates, QuickTime was recently found to produce the highest-quality AAC encodes, though (http://listening-tests.hydrogenaudio.org/igorc/aac-96-a/results.html).

Post by: Bostedclog on 03 January, 2012, 05:09:16 AM
Quote
Sound quality is of course the same as files generated with QuickTime/iTunes directly, since qaac interfaces Apple's encoders. So you should bug Apple about their encoders if you deem them too bad. At relatively low bitrates, QuickTime was recently found to produce the highest-quality AAC encodes, though.
I don't know whether my question came across wrong, but I think the qtaac/qaac encoders are brilliant. I was just wondering if any big changes are expected; if not, I'll start encoding 11,000 tracks.

Post by: LigH on 03 January, 2012, 05:45:13 AM
Sounds a bit like sending a letter praising the powerful engine and smoothly shifting gearbox to the manufacturer of the car chassis. The author of qaac is not responsible for the quality of the AAC encoder developed for Apple QuickTime. He "only" gave you the ability to use it without the QuickTime Player application. But I'd like to thank him a lot for this essential job, because it is a prerequisite for using it with different user interfaces.

Post by: sneaker on 06 February, 2012, 05:54:23 AM
Bug report: it seems that the "iTunSMPB" tag is written incorrectly for HE output. Instead of writing the correct delay value, it always writes the standard LC value (840). I haven't checked the padding value yet.
Post by: nu774 on 06 February, 2012, 06:23:30 AM
Quote
Bug report: it seems that the "iTunSMPB" tag is written incorrectly for HE output. Instead of writing the correct delay value, it always writes the standard LC value (840). I haven't checked the padding value yet.
As far as I know, iTunes uses exactly the same value, including the padding. Why did you think it is incorrect?

Post by: sneaker on 06 February, 2012, 06:49:41 AM
Quote
As far as I know, iTunes uses exactly the same value, including the padding. Why did you think it is incorrect?
I compared the value to the actual delay that I can see in Audacity after decoding to wav. For LC, the delay value stored in the tag is 840 (hex), which equals 2112 samples in decimal notation. The source file was 44.1 kHz: 2112 / 44100 Hz ≈ 48 ms. Comparing the original source wav and the decoded wav, I can see that this is correct.
Now for HE, qaac also writes 840, which would mean 2112 / 22050 Hz ≈ 96 ms. But comparing the files in Audacity, I can see that the actual delay is 117 ms. When I encode with NeroAacEnc, it correctly adapts the value to the output profile.

Post by: nu774 on 06 February, 2012, 07:10:04 AM
If you think it's a problem of the Apple encoder, you should report it to Apple. qaac is just using the value obtained via the CoreAudioToolbox API. Probably the same for iTunes (leading/trailing frames of AudioConverterPrimeInfo).

Post by: sneaker on 06 February, 2012, 07:37:58 AM
Sorry, I don't know what part of the work is Apple's and what is yours. Can anyone with iTunes confirm that it has the same problem? Another suggestion: as the iTunes value is wrong, just write your own value. It is always fixed, isn't it? 117 ms would equal "A14", roughly.

Post by: Alex B on 06 February, 2012, 12:08:43 PM
This iTunes/QT HE issue is interesting. Last July I helped the JRiver developers add support for iTunSMPB to their recently introduced new decoder: http://yabb.jriver.com/interact/index.php?topic=65076. Gapless HE and HE+PS decoding didn't initially work correctly because the doubled output sample rate was not taken into account. But this was a decoder issue, not an iTunSMPB tag issue. AFAIK, it is correct to write the tag values that work with the LC part. If a decoder creates the reconstructed higher-sample-rate output, it should adjust the values accordingly.
Though, I had earlier created a set of useful sample files: http://yabb.jriver.com/interact/index.php?topic=63215.msg422862#msg422862 . In that package my iTunes-encoded HE-AAC files indeed had incorrect iTunSMPB tags. I discovered that an early version of iTunes 10 didn't do this right.
From: http://yabb.jriver.com/interact/index.php?topic=63215.msg423115#msg423115
Quote
FYI, I just noticed that the ITUNSMPB tag values in my "iTunes HE-AAC (SBR)" sample files are slightly off. This was caused by a buggy iTunes version (an early v.10 build). I just installed the latest iTunes version (10.2.1.1) and the bug seems to be fixed. It produces values that are identical to the Nero and QuickTime values. For encoding the "QuickTime LC-AAC" set I used the qaac (http://sites.google.com/site/qaacpage/) frontend and it created correct ITUNSMPB tags.
So, if you are considering adding gapless support, don't use the "iTunes HE-AAC (SBR)" file set for testing the behavior.

Post by: Alex B on 06 February, 2012, 12:52:30 PM
sneaker, what decoder did you use? Could it be possible that the decoder uses the older Nero/FAAC gapless method (and does it correctly) if that data is present, but does not interpret iTunSMPB correctly if the format is HE-AAC?

Post by: sneaker on 06 February, 2012, 03:58:31 PM
Quote
sneaker, what decoder did you use? Could it be possible that the decoder uses the older Nero/FAAC gapless method (and does it correctly) if that data is present, but does not interpret iTunSMPB correctly if the format is HE-AAC?
I'm using the AviSynth plug-in ffmpegsource. It does not seem to read the iTunSMPB tag or the old Nero info at all, and is thus ideal for finding out the encoder delay. If it were just an issue of not taking the halved sample rate into consideration, the delay would be off by a factor of two, which is not the case (117 ms / 2 ≠ 96 ms).

Post by: nu774 on 07 February, 2012, 07:58:21 AM
I confirmed your findings with avconv. Also found:
• The actual content seems trimmed by 20 ms or so.
• QuickTime Player decodes it without the leading zeros. Instead, zero padding is appended to the end. The whole length is the same as with avconv, and the actual content is trimmed here, too. Probably it is already trimmed when it is encoded.
When iTunSMPB says:

    2112 + A + T  (A: length of input, T: length of trailing zeros)

it actually looks like:

    L + A' + T  (L: actual leading delay, A': length of encoded samples, T: length of trailing zeros)

where:

    L ≈ 2580, A' = A - (L - 2112)

Note:

    2580 / 22050 ≈ 0.117
    (2580 - 2112) / 22050 ≈ 0.02

I sent a PM to squo asking about this.

Post by: Alex B on 07 February, 2012, 09:35:36 AM
Something is definitely not right. For a practical test, I created a short 44.1 kHz wav file. It contains 102400 samples of an 8820 Hz sine wave. I converted the sample to HE-AAC and LC-AAC with qaac. Then I decoded the resulting m4a files with foobar2000 and iTunes.
Foobar2000 produced accurate file durations for both decoded files (102400 samples), but the decoded HE sample is delayed by a bit over 20 ms.
iTunes didn't even get the durations right. The decoded LC file is 102312 samples and the decoded HE file is 103358. The latter contains a bit over 20 ms of some quieter stuff at the end (quieter by about -20 dB).
I'll upload the samples and add a link here.
EDIT: The sample package is available here: http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=93310&view=findpost&p=785135

Post by: sneaker on 07 February, 2012, 04:33:43 PM
Quote
When iTunSMPB says:
    2112 + A + T  (A: length of input, T: length of trailing zeros)
it actually looks like:
    L + A' + T  (L: actual leading delay, A': length of encoded samples, T: length of trailing zeros)
where:
    L ≈ 2580, A' = A - (L - 2112)
Did you swap A and T, i.e. shouldn't iTunSMPB be L + T + A(')? At least that seems to be correct in the case of LC.

Post by: nu774 on 07 February, 2012, 08:33:51 PM
Quote
Did you swap A and T, i.e. shouldn't iTunSMPB be L + T + A(')? At least that seems to be correct in the case of LC.
Yes, as for iTunSMPB your order is correct. I described it here in the order in which they appear in the actual audio stream. That might have been misleading.

Post by: Xenion on 08 February, 2012, 07:40:19 AM
With --concat-cuesheet, foobar2000 reads the chapters perfectly, but iTunes does not. Does iTunes not support chapters?

Post by: nu774 on 08 February, 2012, 09:03:13 AM
Quote
With --concat-cuesheet, foobar2000 reads the chapters perfectly, but iTunes does not. Does iTunes not support chapters?
If I remember correctly, iTunes used to ignore ALAC chapters. However, it looks like iTunes now supports chapters in ALAC m4a files. In my environment (iTunes 10.5.3.3), a "chapter" menu appears when a file including chapters is being played.

Post by: Xenion on 08 February, 2012, 02:58:30 PM
Quote
If I remember correctly, iTunes used to ignore ALAC chapters. However, it looks like iTunes now supports chapters in ALAC m4a files. In my environment (iTunes 10.5.3.3), a "chapter" menu appears when a file including chapters is being played.
Hm, with AAC it works, but not with ALAC here (iTunes 10.5.3.3).
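Since chapter support differs between players, it can help to check which chapter type a file actually contains before blaming the player. One way to do that (an assumption on my part, not something suggested in the thread at this point) is the mp4chaps tool from the mp4v2 project, which lists the chapters stored in an MP4 file; the file name here is just a placeholder:
Code: [Select]
mp4chaps -l "audiobook.m4b"
If the listing shows only Nero chapters and no QuickTime chapters, that would explain a file that foobar2000 reads but iTunes does not.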
Post by: yuker on 11 February, 2012, 01:07:16 PM
I've created a tool for using qaac without installing QuickTime.
http://www.mediafire.com/?kk96ytd933bf119

Post by: nu774 on 11 February, 2012, 08:32:52 PM
Is this a stripped version of lvqcl's package (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=78072&view=findpost&p=769999)? Ah, I see, that was on Megaupload (and is already lost).

Post by: Soulvomit on 22 February, 2012, 07:23:06 PM
Would it be possible to disable normalization when downmixing? If not, could the option be added? Thank you, nu, for this great tool.

Post by: nu774 on 22 February, 2012, 10:57:43 PM
Quote
Would it be possible to disable normalization when downmixing? If not, could the option be added? Thank you, nu, for this great tool.
When you want a different gain for each channel, you might well want to disable the automatic normalization of matrix coefficients. Since automatic normalization of the coefficients is usually convenient, I will leave it as the default and add another option. Thanks.

Post by: lvqcl on 23 February, 2012, 02:32:28 AM
Quote
Is this a stripped version of lvqcl's package?
My version uses 7z.exe (and 7z.dll) for unpacking, and yuker's uses 7za.exe and system32\msiexec.exe (interesting idea).
Also, it seems that the biggest DLL file (icudt46.dll) contains only data for the ICU library. It is possible to compile a 'dummy' icudt46.dll that doesn't contain any data. This version is sufficient for qaac encoding: [attachment=6942:icudt46_dummy.zip]

Post by: Boulder on 26 February, 2012, 09:05:13 AM
I'm getting this error with qaac v1.31:
ERROR: libmp4v2: mp4v2::impl::MP4File::WriteBytes: write failed: errno: 28 (..\..\mp4v2\src\mp4file_io.cpp,163)
What does it mean? I was trying to encode a 5.1-channel DTS-HD MA track to 5.1-channel AAC using this command line:
eac3to.exe d:redline.dts stdout.wav | qaac.exe -i -N -V 82 --verbose --tmpdir d:\ - -o f:redline_audio.m4a
The error always occurs at the exact same spot. By the way, qaac says that it's version 1.30.

Post by: nu774 on 26 February, 2012, 11:45:20 AM
Quote
ERROR: libmp4v2: mp4v2::impl::MP4File::WriteBytes: write failed: errno: 28 (..\..\mp4v2\src\mp4file_io.cpp,163)
What does it mean?
errno 28 means ENOSPC (No space left on device).
Quote
By the way, qaac says that it's version 1.30.
Thanks, updated the files.

Post by: Boulder on 26 February, 2012, 12:34:34 PM
Weird, there is enough space on drive F... or does it actually run out of space on drive D (tmpdir)?

Post by: nu774 on 26 February, 2012, 07:49:44 PM
Quote
Weird, there is enough space on drive F... or does it actually run out of space on drive D (tmpdir)?
The tempfile for -N can be quite big (it is in the form of 32-bit float raw PCM). If your input format is 48000 Hz / 6 ch, it will consume about 1 MB per second of audio. You also need room for the intermediate m4a file, which is about the same size as the resulting m4a.
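To put rough numbers on that estimate (the figures follow from the 48000 Hz / 6 ch / 32-bit float format mentioned above; the two-hour duration is only an assumed example):
Code: [Select]
48000 samples/s x 6 channels x 4 bytes = 1,152,000 bytes/s  (about 1.1 MB per second of audio)
2 hours = 7200 s  ->  7200 x 1,152,000 = 8,294,400,000 bytes  (roughly 7.7 GiB)
So a long 5.1 movie track can need several gigabytes free in the --tmpdir location, on top of the space for the intermediate m4a.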
Post by: kareha on 11 March, 2012, 04:36:47 PM
I wonder if anyone knows how to fix this problem I'm having with QAAC and dBpoweramp. For some reason, when converting stuff from either CD or lossless to AAC, dBpoweramp will not add the iTunSMPB tag. This means that any mix CDs that I rip won't play gaplessly in Foobar, which is my main music program. However, if I untick Write Tags it does add the tag, but unfortunately none of my music gets tagged with anything else.

Post by: nu774 on 12 March, 2012, 05:09:36 AM
Quote
dBpoweramp will not add the iTunSMPB tag
It's not dBpoweramp but the encoder that can (and does) write it. Therefore, the problem seems to be not that dBpoweramp doesn't write iTunSMPB, but that it actually removes the existing iTunSMPB when it edits the resulting file to append tags.

Post by: Soulvomit on 17 March, 2012, 08:59:38 PM
What should the two bit depth settings be set to using fb2k? I ask this because I notice that when converting WAV to AAC with qaac using different combinations of those settings, the sizes of the output AAC files are the same and identical to what QuickTime (demuxing MOV) gives, yet if done with AC-3, they aren't. What am I missing here?

Post by: nu774 on 18 March, 2012, 03:59:23 AM
Quote
What should the two bit depth settings be set to using fb2k?
As is written in the help message, it is meant only for ALAC or WAV output, and has no effect on AAC encoding.

Post by: Soulvomit on 20 March, 2012, 02:14:20 AM
Never mind. Lossy and 16 are the correct settings.
Is matrix mixing enabled for ALAC conversions? I get a code 2 error using:
Code: [Select]
-A --no-matrix-normalize --matrix-preset=62 - %d
Thanks again. And also for mp4fpsmod; such a great tool.

Post by: nu774 on 20 March, 2012, 05:56:32 AM
Quote
Is matrix mixing enabled for ALAC conversions? I get a code 2 error using:
-A --no-matrix-normalize --matrix-preset=62 - %d
You can use the matrix mixer for ALAC. What error message is displayed when you run it directly from the command prompt?

Post by: Soulvomit on 20 March, 2012, 04:24:56 PM
Quote
You can use the matrix mixer for ALAC. What error message is displayed when you run it directly from the command prompt?
"Not supported sample format for ALAC: F32LE".

Post by: nu774 on 20 March, 2012, 08:45:17 PM
Quote
"Not supported sample format for ALAC: F32LE".
Oh, I see. qaac/refalac converts the internal sample format to 32-bit float when DSP options such as --matrix-preset are specified. However, ALAC doesn't support float. This is the cause of that error. In short, you have to specify the -b option to convert it to 16- or 24-bit integer format before encoding.
In the past, qaac/refalac converted automatically to 16 or 24 bit for ALAC, depending on the source bit depth.
However, I dropped the feature when the -b option was introduced, so that the user can control the resulting output bit depth.

Post by: nu774 on 21 March, 2012, 08:55:53 AM
Updated to 1.32. Now it will automatically convert to 16-bit integer format by default in this case, instead of just showing an error message. You can still use the -b option, but when you don't specify -b, this new default conversion is applied.
https://sites.google.com/site/qaacpage/news/qaacrelease132refalac043
Anyway, thanks for reporting. I think this was a matter of usability (or bad documentation).

Post by: Soulvomit on 21 March, 2012, 10:45:36 AM
Thank you for explaining; however, the error still persists with fb2k.

Post by: nu774 on 21 March, 2012, 11:36:49 AM
Quote
Thank you for explaining; however, the error still persists with fb2k.
Try
Code: [Select]
--log %d.txt
or something, and let's see what is shown in the log. Generally speaking, --log is useful when you are running from a GUI frontend and cannot see the error message.

Post by: Soulvomit on 21 March, 2012, 12:28:40 PM
Quote
Try --log %d.txt or something, and let's see what is shown in the log.
Code: [Select]
qaac 1.32, CoreAudioToolbox 7.9.7.8
22.m4a
ERROR: C:\Users\Soulvomit\Desktop\22.m4a: No such file or directory
I don't have this problem using the matrix mixer with AAC conversions or converting stereo files to ALAC.

Post by: edwrap on 21 March, 2012, 08:26:27 PM
Hey nu774, first off, thanks for creating this great tool. Two minor issues with the "writing application" metadata value created by qaac:
1) The TVBR value doesn't correspond to the encoder settings, i.e. -V 90 (or the default) registers as q91, -V 75 as q73, etc.
2) -q2 (or the default) translates to Quality 96, though in the technical note (http://developer.apple.com/library/mac/#technotes/tn2237/_index.html) you reference, that setting actually goes to 127.

Post by: nu774 on 21 March, 2012, 10:37:03 PM
Quote
1) The TVBR value doesn't correspond to the encoder settings, i.e. -V 90 (or the default) registers as q91, -V 75 as q73, etc.
At the interface level, the TVBR quality parameter accepts values from 0 to 127. However, the QuickTime AAC encoder actually has only 15 quality steps, so the parameter gets rounded to the nearest functional value, which is saved into the "tool" tag. (With 15 steps spread over 0-127, the functional values lie about 127/14 ≈ 9 apart, which is why -V 90 lands on q91 and -V 75 on q73.)
Quote
2) -q2 (or the default) translates to Quality 96, though in the technical note you reference, that setting actually goes to 127.
Same story as the TVBR quality: only 32, 64, and 96 are actually functional values.
I could have chosen a 0-127 option style, but qaac uses 0-2 for historical reasons (0-2 was more natural when qaac was using the QuickTime API).

Post by: nu774 on 22 March, 2012, 06:19:45 AM
Quote
-A --no-matrix-normalize --matrix-preset=62 - %d
Sorry, I should have looked at your command line more closely. You just need "-o" before %d.

Post by: edwrap on 23 March, 2012, 12:50:30 AM
Quote
At the interface level, the TVBR quality parameter accepts values from 0 to 127. However, the QuickTime AAC encoder actually has only 15 quality steps, so the parameter gets rounded to the nearest functional value, which is saved into the "tool" tag.
Same story as the TVBR quality: only 32, 64, and 96 are actually functional values. I could have chosen a 0-127 option style, but qaac uses 0-2 for historical reasons (0-2 was more natural when qaac was using the QuickTime API).
Ah, I see. Thanks for the explanation, and apologies for the needless concerns!

Post by: Soulvomit on 23 March, 2012, 11:57:13 PM
Quote
Sorry, I should have looked at your command line more closely. You just need "-o" before %d.
Thanks. I got it working with "- -o".

Post by: Soulvomit on 24 March, 2012, 05:59:42 AM
Do you think you can add an m4a muxer to qaac for ADTS AAC and raw ALAC (demuxed M4A with MP4Box)? Thanks again.
Post by: nu774 on 24 March, 2012, 08:25:28 AM
Quote
Do you think you can add an m4a muxer to qaac for ADTS AAC and raw ALAC (demuxed M4A with MP4Box)?
No, I don't think that is the encoder's task, and I have no plan for it. As for "raw ALAC"... why do you want that? As far as I know, there's no defined elementary stream format for ALAC as in the MPEG family. MP4Box seems to be able to just extract the mdat content without headers or anything. Since it lacks the ALACDecoderConfig (which holds information mandatory for decoding, such as Rice parameters), I don't think this "raw" ALAC bitstream is very useful for anything.

Post by: silverbear on 20 April, 2012, 05:56:47 PM
I hope I'm posting this in the correct place. Re qaac and iPod chapters: I'm using qaac to join several FLAC files with embedded cue sheets (an audiobook) into a single m4b file with chapters. If I run qaac from the command line, the chapters appear in foobar, iTunes and my iPod. However, if I try to do this processing from within foobar, only foobar will recognise the chapters. In iTunes and on my iPod, the file appears as one huge chapter.
I know it's been mentioned that when mp4 files are processed in foobar, foobar will 'touch' the headers as part of the final process. Do you think that this is wiping out the iTunes version of the chapters and leaving only the Nero-type chapter headings intact?
By the way, many, many thanks to the developer of qaac; most impressive and gratefully received.

Post by: nu774 on 20 April, 2012, 09:30:37 PM
Quote
If I run qaac from the command line, the chapters appear in foobar, iTunes and my iPod. However, if I try to do this processing from within foobar, only foobar will recognise the chapters. In iTunes and on my iPod, the file appears as one huge chapter.
That's because fb2k writes Nero-style chapters, and QuickTime/iTunes/iPod doesn't read them (when you encode from fb2k, metadata and chapters are basically written by fb2k, except for a few encoder-specific tags). mp4chaps.exe will be your friend... You can inspect/import/export/convert MP4 chapters with it.

Post by: Mix3dmessagez on 21 April, 2012, 04:00:43 AM
Hey, does TVBR support 16-bit or 24-bit depth?

Post by: silverbear on 21 April, 2012, 05:00:20 AM
Quote
That's because fb2k writes Nero-style chapters, and QuickTime/iTunes/iPod doesn't read them (when you encode from fb2k, metadata and chapters are basically written by fb2k, except for a few encoder-specific tags). mp4chaps.exe will be your friend... You can inspect/import/export/convert MP4 chapters with it.
Ahh, I was hoping to get around this by calling qaac as a post-process in foobar. Never mind, mp4chaps it is. Many thanks for replying so quickly.

Post by: nu774 on 21 April, 2012, 05:31:13 AM
Quote
Hey, does TVBR support 16-bit or 24-bit depth?
The concept of bit depth is not defined for AAC, at least not in the same sense as for PCM. The --bits-per-sample option of qaac is for WAV/ALAC output only; it will simply be ignored for AAC.

Post by: pururin on 22 April, 2012, 11:23:08 AM
Does anyone know whether Apple's AAC (CoreAudioToolbox.dll) got any updates recently? I read the changelog and found nothing for a long time. Thanks.

Post by: nu774 on 22 April, 2012, 12:31:31 PM
Quote
Does anyone know whether Apple's AAC (CoreAudioToolbox.dll) got any updates recently? I read the changelog and found nothing for a long time. Thanks.
In 7.9.7.8 (QuickTime 7.7.1 / iTunes 10.5), there were updates to the AAC encoder, and a function named ACMP4AACHighEfficiencyEncoderFactory was also added to the DLL, which enables iTunes 10.5 to run the HE-AAC codec without a QuickTime installation. AFAIK that was the last update to the AAC encoder so far.

Post by: pururin on 22 April, 2012, 02:22:28 PM
Thanks a lot for the fast reply, nu774.

Post by: kareha on 22 April, 2012, 06:53:07 PM
Hmm, I've got a CoreAudioToolbox.dll version 7.9.7.9 dated 20.02.12; not sure what was updated in this one, though.

Post by: nu774 on 22 April, 2012, 09:14:01 PM
Quote
Hmm, I've got a CoreAudioToolbox.dll version 7.9.7.9 dated 20.02.12; not sure what was updated in this one, though.
Yes, it is the latest. Actually, there are at least two different binaries under the same version number "7.9.7.9". Same for 7.9.7.8. As far as I can see, the AAC encoder is not updated, though.
By the way, I sent a PM to squo (a developer of QuickTime AAC) about the HE-AAC iTunSMPB problem in the past, but I haven't received a reply. At the time of the 7.9.7.8 update he was still there, but he might have been transferred/retired or something.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: pururin on 23 April, 2012, 12:35:07 AM
skuo retired? That's sad. Thinking about the pre-echo on sharp transients issue: has it been fixed in the latest version?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 23 April, 2012, 04:34:01 AM
Quote:
Thinking about the pre-echo on sharp transients issue: has it been fixed in the latest version?
I only know that the output of the Avisynth ColorBars() function was a killer sample for QT AAC in the past, and was fixed in the 7.9.7.8 update. ColorBars() mainly outputs SMPTE color bars (a video test pattern), but also outputs an audio test tone, which is repeatedly turned on/off once every second. QT AAC was producing quite obvious artifacts at the boundaries even at the highest settings.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: pururin on 23 April, 2012, 08:58:49 AM
Quote:
I only know that the output of the Avisynth ColorBars() function was a killer sample for QT AAC in the past, and was fixed in the 7.9.7.8 update.
It seems some issues that were reported got fixed in 7.9.7.8. I appreciate that the developers are quite active. (qaac too, many thanks to nu774) I do wonder a little when the next update will come. About pre-echo, I may have to ask someone like /mnt.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: 2304p on 23 April, 2012, 11:25:38 AM
Quote:
I only know that the output of the Avisynth ColorBars() function was a killer sample for QT AAC in the past, and was fixed in the 7.9.7.8 update.
Quote:
It seems some issues that were reported got fixed in 7.9.7.8. I appreciate that the developers are quite active. (qaac too, many thanks to nu774) I do wonder a little when the next update will come. About pre-echo, I may have to ask someone like /mnt.
I have iTunes 10.6.1.7 and still coreaudiotoolbox 7.9.7.3. Question: how do I get the current coreaudiotoolbox version? How do I update it?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 23 April, 2012, 11:48:05 AM
Quote:
I have iTunes 10.6.1.7 and still coreaudiotoolbox 7.9.7.3. Question: how do I get the current coreaudiotoolbox version? How do I update it?
Try
Code: [Select]
qaac --check
from the command prompt, instead of the explorer property window.
CoreAudioToolbox.dll is internationalized, and contains multiple resources for different locales.
The problem is, for many locales Apple has not been properly updating the version resources, and for such locales Windows explorer will show the file version as 7.9.7.3; on the other hand, qaac --check will always pick the version number from the en-US locale resource.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 23 April, 2012, 11:52:55 AM
Basically speaking, if you are using the qaac 1.x branch and you can use the HE-AAC encoder, your CoreAudioToolbox.dll is 7.9.7.8 or newer.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: 2304p on 23 April, 2012, 11:53:26 AM
Quote:
I have iTunes 10.6.1.7 and still coreaudiotoolbox 7.9.7.3. Question: how do I get the current coreaudiotoolbox version? How do I update it?
Quote:
Try
Code: [Select]
qaac --check
from the command prompt, instead of the explorer property window. CoreAudioToolbox.dll is internationalized, and contains multiple resources for different locales. The problem is, for many locales Apple has not been properly updating the version resources, and for such locales Windows explorer will show the file version as 7.9.7.3; on the other hand, qaac --check will always pick the version number from the en-US locale resource.
thanks

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Dario on 26 April, 2012, 08:30:19 AM
Is there a place where I can download the CoreAudioToolbox library independently? I am unable to download iTunes at the moment, as my connection is very slow.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kareha on 26 April, 2012, 08:46:24 AM
Quote:
Is there a place where I can download the CoreAudioToolbox library independently? I am unable to download iTunes at the moment, as my connection is very slow.
Uploaded 7.9.7.9 to my Dropbox: http://dl.dropbox.com/u/26532689/CoreAudioToolbox.dll

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Dario on 26 April, 2012, 09:19:27 AM
Thank you very much!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 15 May, 2012, 06:30:24 AM
Hiyas, is there any approximate quality relation table between QuickTime TVBR values and LAME V x values, or possibly Vorbis Q values? Something that looks like:
Lame V4 ≈ TVBR 40
Lame V2 ≈ TVBR 60
Lame V0 ≈ TVBR 80
etc..

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 15 May, 2012, 06:55:50 AM
Not really. The three algorithms differ in specific cases, it may be hard to rate them generally, and the only "reliable" way would be a mass ABX test in quality ranges most people would call "transparent".
As listed in this post about Vorbis acceleration (http://www.hydrogenaudio.org/forums/index.php?act=findpost&pid=793859), my own humble opinion is that qaac -V 80 might be comparable to oggenc2 -q 5, based on the assumption that they both are close in some bitrate/quality quotient. The MP3 technology is less efficient compared to both of them, and may need about 20-30% more bitrate for a similar quality; I found that lame -V 2 matches this range quite well.
The opinion above is personal and weak. Other members here may have more founded facts.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 31 May, 2012, 04:30:36 AM
I exchanged the CoreAudioToolbox.dll from a freshly installed QuickTime 7.7.2 (DLL file date: Feb. 20 2012) with the one kareha uploaded. "qaac --check" still reports version 7.9.7.3 (on a German Windows XP SP3). When I realized they have the same size, I did a byte comparison, and guess what ... I found they were identical.
I opened it in ResHacker and checked the "Version Info" tables. There are several for different language codes: 1033 = "7.9.7.9"; 1025 = "7.9.7.7"; 1028,1030,1031,... = "7.9.7.3" (and on top, they swap InternalName and OriginalFileName). So ... don't care about the numbers if you don't use an English Windows.
__
P.S.: Oh, I did the check with an older qaac version; qaac 1.37 reports CoreAudioToolbox version 7.9.7.9 correctly.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 09 June, 2012, 02:21:06 PM
1.38 released. I finally completely switched from every other encoding software to just foobar2000 + qaac (TVBR 91/100), thank you dev/s!
Changelog:
Updated libsoxrate to 0.21 (merged upstream update on SoX rate effect).
Fixed not to write WAVE_FORMAT_EXTENSIBLE header unnecessarily on WAV output.
https://sites.google.com/site/qaacpage/cabinet

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 09 June, 2012, 02:29:59 PM
Quote:
I finally completely switched from every other encoding software to just foobar2000 + qaac (TVBR 91/100), thank you dev/s!
Is there a noticeable difference between TVBR 91 and 100, and can you ABX TVBR 91 from the original?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 09 June, 2012, 02:36:10 PM
Quote:
Is there a noticeable difference between TVBR 91 and 100, and can you ABX TVBR 91 from the original?
You'll be amazed how low you can go with AAC. I keep ~192/224 just to be safe

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 09 June, 2012, 02:43:01 PM
IMHO, TVBR 82 is similar to Vorbis q5; that seems to be enough for "easy listening"; see the Vorbis CPU optimization thread for an explanation...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 09 June, 2012, 02:51:55 PM
Quote:
IMHO, TVBR 82 is similar to Vorbis q5; that seems to be enough for "easy listening"; see the Vorbis CPU optimization thread for an explanation...
I don't go lower than 160 kbps. TVBR 91 is the lowest I go. I keep ~1000 songs on my iPhone, no space problem.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 09 June, 2012, 03:02:02 PM
Don't care about bitrates. 160 kbps silence is not "better" than 32 kbps silence. There is easily compressible sound which you can't directly compare with harder "noise". If the quality mode ensures a threshold of maximum loss, trust in it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 09 June, 2012, 03:11:51 PM
Quote:
Don't care about bitrates. 160 kbps silence is not "better" than 32 kbps silence. There is easily compressible sound which you can't directly compare with harder "noise". If the quality mode ensures a threshold of maximum loss, trust in it.
Where is the "Like" button?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: IgorC on 09 June, 2012, 03:39:45 PM
Constrained VBR (CVBR) might be a good solution. It doesn't go as low as TVBR occasionally does, which can cause some artifacts, as here: http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=90403&view=findpost&p=768156
LigH, it's strange. Don't You watch the football right now?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 09 June, 2012, 03:46:18 PM
I do. And I am not really satisfied... But that's quite usual.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 15 July, 2012, 06:48:31 PM
[qaac] release 1.39 (refalac 0.50) posted 6 hours ago by nu 774
- Support "REM DISCNUMBER" "REM TOTALDISCS" in cuesheet.
- Flush stdio buffer when stdout is connected to a pipe.
- Update mp4v2 to svn rev 496.
https://sites.google.com/site/qaacpage/cabinet

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Meeko on 24 July, 2012, 10:47:31 AM
Took me a while to get this to work, but once I got it working in foobar, it's great. The new change to no longer need QuickTime helped, because I could never get QuickTime to work properly on my machine. Thanks for all your hard work nu774.
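For reference, a standalone encode at the TVBR levels discussed above can be run directly from the command line. A minimal sketch, assuming qaac.exe and its DLLs are on the PATH and using placeholder file names:
Code: [Select]
qaac -V 91 input.flac -o output.m4a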
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 29 July, 2012, 04:05:15 PM
So, I was trying to transcode with qaac on an older laptop and it didn't work, and I don't know why precisely. The laptop has Windows 7 32bit, both iTunes and QuickTime latest versions are installed, and I've tried to copy CoreAudioToolbox.dll and CoreFoundation.dll to the same folder where qaac.exe is, but nothing. The software I am using to convert with qaac is foobar2000 1.1.14 beta 1 and the error message I receive is: "Conversion failed: The encoder has terminated prematurely with code -1073741515 (0xC0000135); please re-check parameters". I have .NET 4.0 installed, do I need an older version as well? I've not yet tested qtaacenc. Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 29 July, 2012, 04:15:09 PM
Quote:
I've tried to copy CoreAudioToolbox.dll and CoreFoundation.dll to the same folder where qaac.exe is, but nothing.
Remove them, and copy msvcp100.dll and msvcr100.dll to the qaac folder.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 29 July, 2012, 04:36:57 PM
Quote:
Remove them, and copy msvcp100.dll and msvcr100.dll to the qaac folder.
It worked, thanks. Shouldn't Visual C++ be added to the list of requirements on the qaac homepage?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 29 July, 2012, 04:40:33 PM
It may have been part of earlier packages...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 29 July, 2012, 04:57:08 PM
The qaac 1.39 package contains: qaac.exe, refalac.exe, libsoxrate.dll, msvcp100.dll, msvcr100.dll.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 31 July, 2012, 03:32:58 PM
Hello, why can't qaac use HE with TVBR; is it therefore less suitable for VBR mode at ~64kbps than NeroAAC? Probably --cvbr 64 --he will produce something.. but is the quality at least as good as Nero's -q 0.25?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 July, 2012, 11:53:27 PM
Quote:
Hello, why can't qaac use HE with TVBR
Because the HE-AAC encoder of Apple CoreAudio/QuickTime doesn't have a TVBR mode.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: IgorC on 01 August, 2012, 06:34:00 AM
Quote:
Hello, why can't qaac use HE with TVBR; is it therefore less suitable for VBR mode at ~64kbps than NeroAAC? Probably --cvbr 64 --he will produce something.. but is the quality at least as good as Nero's -q 0.25?
What makes You think that Nero's VBR is any better than Apple CVBR? And why do You think CVBR is necessarily worse than TVBR?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 01 August, 2012, 06:43:32 AM
Quote:
Hello, why can't qaac use HE with TVBR; is it therefore less suitable for VBR mode at ~64kbps than NeroAAC? Probably --cvbr 64 --he will produce something.. but is the quality at least as good as Nero's -q 0.25?
Quote:
What makes You think that Nero's VBR is any better than Apple CVBR? And why do You think CVBR is necessarily worse than TVBR?
Thanks. Is CVBR HE at 64kb better than Nero VBR HE at ~64kb? I've been using Nero q 0.25 for a long time, but it might increase the bitrate when necessary, while CVBR always keeps the given bitrate, which may lead to worse quality on complex music?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: IgorC on 01 August, 2012, 06:56:01 AM
Quote:
I've been using Nero q 0.25 for a long time, but it might increase the bitrate when necessary, while CVBR always keeps the given bitrate, which may lead to worse quality on complex music?
CVBR isn't CBR, and it does increase the bitrate where it's necessary. Is the variation of bitrate the most important and only indicator of quality for You?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 01 August, 2012, 07:01:18 AM
I'm not sure. Does it mean that QuickTime's CVBR at 64k is always better than Nero at q 0.25?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 01 August, 2012, 07:11:13 AM
This is an entirely subjective question. Listen to hundreds of samples and decide yourself...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: IgorC on 01 August, 2012, 08:47:56 AM
Quote:
This is an entirely subjective question. Listen to hundreds of samples and decide yourself...
Or google for the latest public test (2011) of Apple and Nero AAC encoders at 64 kbps. It would be understandable to mention Nero if it was at least an average AAC encoder. But Nero came last, worse than the Coding Technologies, FhG/Winamp and Apple encoders, in the last AAC public test. Furthermore, Nero is outdated. The last fixes were made in 2009. The last quality improvements date from 2007. It's gone. Understand it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: neothe0ne on 13 August, 2012, 07:28:12 PM
Quote:
1) the TVBR value doesn't correspond to the encoder settings. ie. -V 90 (or default) registers as q91, -V 75 as q73, etc.
Quote:
At the interface level, the TVBR quality parameter accepts values from 0 to 127. However, the QuickTime AAC encoder actually has only 15 quality steps. Therefore, the parameter gets rounded to the nearest functional value, which is saved into the "tool" tag.
Can you share what those 15 functional values are? I could test myself (since I obviously won't ever be using all 15) but if you have the information handy it would save me some time

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 14 August, 2012, 02:31:37 AM
Here is a graph by 'kamedo2': qaac tvbr number-bitrate relations (http://cdn-ak.f.st-hatena.com/images/fotolife/k/kamedo2/20120225/20120225015649.png) (Hatena Fotolife gallery: http://f.hatena.ne.jp/kamedo2/20120225015649)
I know I saw an even better one, but don't remember where. Probably on the qaac site or even here; someone explained the change in the quality steps between specific generations of the QuickTime AAC codec.
__
IgorC posted it in a thread about qtaacenc (http://www.hydrogenaudio.org/forums/index.php?showtopic=78072&view=findpost&p=682393).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 14 August, 2012, 03:27:17 AM
I have not tried all presets, but they seem to go in steps of 9: 1, 10, 19, 28, 37, 46, 55, 64, 73, 82, 91, 100, 109, 118, 127
Btw, is it true that TVBR 82 corresponds to Vorbis q 5.0, TVBR 91 corresponds to Vorbis q 6.0, TVBR 100 corresponds to Vorbis q 7.0, etc? Bitrate-wise they're approximately comparable:
tvbr 91 ~192kbps
tvbr 100 ~224kbps
tvbr 109 ~272kbps
tvbr 118 ~320kbps
tvbr 127 ~352kbps

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 14 August, 2012, 03:40:06 AM
Quote:
Btw, is it true that TVBR 82 corresponds to Vorbis q 5.0, TVBR 91 corresponds to Vorbis q 6.0, TVBR 100 corresponds to Vorbis q 7.0, etc?
In general ... rather "no", because Vorbis is a quite different algorithm; it does not work in the same way as AAC, and I doubt that Apple made efforts to synchronize their quality levels to any other software. But subjectively you will probably be close in quality and bitrate. And remember, you can fine-tune Vorbis with fractional quality values.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: neothe0ne on 14 August, 2012, 06:59:03 AM
I've encoded about a thousand tracks and haven't batted an eye at TVBR (got lots of stuff from 170 to 200 with V91), but I just encoded PSY's 6th album (Korean electronic/dance pop) and WOW. Just looking at the title track, Gangnam Style:
tvbr 91 ~171 kbps
iTunes Plus ~260 kbps
tvbr 109 ~233 kbps
I realize that -91 doesn't guarantee anything near 192, nor -109 around 270+, but then I compared the bitrate distribution:
CVBR 256 starts at ~140 kbps and builds to ~200 kbps, and at 6 seconds in, hits ~450 kbps. (6 seconds in it changes from a low/bass beat to add a vocal)
tvbr 91 starts at ~100 and is constant until it hits ~130-160 at 6 seconds in.
tvbr 109 starts ~130 and is constant until it hits ~170 at 6 seconds in.
(studied in foobar2000 with 3 VBR-updates per second)
My first thought was tvbr had a lower frequency cutoff than iTunes Plus, but it turns out that isn't correct. (tvbr 91 does have a soft wall at ~19.4 khz, whereas the other two go up to 22 khz, but the first 6 seconds in question don't have any (visible) frequencies going high enough to have been cut off anyway)
CVBR has a higher floor than TVBR, but does anyone have any idea what accounts for the nearly 300 kbps difference in the ceiling, which I hadn't believed would differ significantly between CVBR and TVBR?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 14 August, 2012, 02:37:08 PM
Out of curiosity I've made spectral graphs of a TVBR versus a Vorbis encode, both at almost the same bitrate (192k), and from these the Vorbis one looks much better, as it maintains a higher cutoff range. Can anybody confirm that the Vorbis encode keeps more of the original?
TVBR 91 (full / zoomed / frequency): [spectrogram screenshots: http://content.screencast.com/users/nobody5/folders/Snagit/media/9dd27fba-d6a4-406d-b9f1-a0d1632113f7/everlong.png , http://content.screencast.com/users/nobody5/folders/Snagit/media/002ea456-0ed6-4391-83e4-6be3c4807f03/everlong.png , http://content.screencast.com/users/nobody5/folders/Snagit/media/04c74df2-823e-4fef-b215-711580756634/everlong.png ]
Vorbis q6 (full / zoomed / frequency): [spectrogram screenshots: http://content.screencast.com/users/nobody5/folders/Snagit/media/5c86686d-c9be-429f-8c52-f7c7c3ac3fd7/everlong.png , http://content.screencast.com/users/nobody5/folders/Snagit/media/bba13ed9-db56-4f5b-87c3-9b7012acc86e/everlong.png , http://content.screencast.com/users/nobody5/folders/Snagit/media/082579f2-c442-460b-84a9-9d37a983b99d/everlong.png ]

Title: QAAC: discussion, questions, feature requests, etc.
Post by: jetpower on 14 August, 2012, 04:15:58 PM
Quote:
Out of curiosity I've made spectral graphs of a TVBR versus a Vorbis encode, both at almost the same bitrate (192k), and from these the Vorbis one looks much better, as it maintains a higher cutoff range. Can anybody confirm that the Vorbis encode keeps more of the original?
[spectrograms quoted as above]
AAC has less "holes" in high freq, also are you a bat?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 14 August, 2012, 04:24:11 PM
Quote:
AAC has less "holes" in high freq, also are you a bat?
No, not a bat. Does "less holes in high freq" mean a worse encode quality? I think the hard cutoff at ~19.5kHz by AAC is no good, but I'm not sure about that. I'm curious whether iTunes does their encodes using the same QuickTime library or something else. I looked at an iTunes original track, which admittedly had a noticeably higher bitrate, but there wasn't such a hard cutoff. Finally, I'm not too impressed by the spectrum of this TVBR encode.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: jetpower on 14 August, 2012, 04:36:32 PM
Quote:
No, not a bat. Does "less holes in high freq" mean a worse encode quality? I think the hard cutoff at ~19.5kHz by AAC is no good, but I'm not sure about that. I'm curious whether iTunes does their encodes using the same QuickTime library or something else. I looked at an iTunes original track, which admittedly had a noticeably higher bitrate, but there wasn't such a hard cutoff. Finally, I'm not too impressed by the spectrum of this TVBR encode.
I can hear up to 17khz, so this 19.5khz is more than enough for me. Holes/darker parts mean less energy.
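For anyone who wants to reproduce this kind of comparison, roughly bitrate-matched encodes can be made from the same source and then inspected in a spectrogram viewer. A sketch, assuming oggenc2 accepts FLAC input and using the sample name from the screenshots above; the output file names are placeholders:
Code: [Select]
qaac -V 91 everlong.flac -o everlong_tvbr91.m4a
oggenc2 -q 6 everlong.flac -o everlong_q6.ogg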
Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 15 August, 2012, 02:49:33 AM
"A higher cutoff frequency" is not the only criterion for "subjectively better quality"; that would rather be a general "less annoying loss". What if the high frequency range prevents the codec from preserving quality in lower, more audible ranges? A more variable frequency spectrum may be a sign of a more "optimistic" psycho-acoustic model; this is neither a guarantee of better nor of worse quality on its own. Just proof of a different algorithm. It may sound more or less lossy, depending on many factors. If subjective impression could be measured and quantified more directly than in a "mass ABX test", we could have an objective quality metric.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 15 August, 2012, 03:01:58 AM
Vorbis achieves higher frequencies, but probably at the cost of worse reproduction in the higher range. So I guess it's not possible to say if the QAAC model is better or worse.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: pururin on 22 August, 2012, 04:56:36 AM
Is it normal that encoding 5.1 audio in multi-threading mode is slower than single threading? The total time used by multi-thread encoding (either 2 or 4 instances at the same time) is longer than consecutive single-thread encoding every time I've tried. And total CPU usage is quite low during multi-thread encoding, with plenty of RAM and HDD power left. This doesn't happen with stereo audio; with 2.0 tracks CPU usage skyrockets to a full 50 or 100% and the work's blazing fast. Simply put, one-by-one work is faster for 5.1 audio. Kinda weird? (My system is i5-2540m Sandybridge, Win7 32bit)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 August, 2012, 08:46:35 PM
The QuickTime/CoreAudio encoder itself always works on a single thread. When the --threading option of qaac is set, the encoder and the decoder/DSP run on two different threads (one thread for each). Therefore, --threading will give you faster speed only when you do some heavy work on the decoder/DSP side.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 27 August, 2012, 01:34:08 PM
[qaac] release 1.40 (refalac 0.51) posted 5 hours ago by nu 774 [updated 4 hours ago]
Update libsoxrate to 0.30 (merged update on rate effect of SoX: speed up on downsampling). Due to an ABI change in libsoxrate.dll, the new qaac/refalac is incompatible with older libsoxrate.dll versions.
https://sites.google.com/site/qaacpage/home
Thanks for the update.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: neothe0ne on 31 August, 2012, 09:50:13 PM
I'm using qaac 1.39 and am having random problems with encoding FLAC to AAC. I see 26x+ speed most of the time (I saw a sustained 40x for a bit before it slowed down), but on some files the encoder will progress at a sluggish ~1.2 to ~1.7x speed, and the real kicker is that the resulting file is silence. This is completely random, and an unknown number of retries / accessing data from a different part of the disk after X minutes (that's a really long time) will "fix" the problem and let qaac actually encode something that's not silence. foobar2000 can play the FLAC file fine, of course, and the FLAC file is not corrupted. Anyone know what's going on? Non-ASCII/long filenames/locations, FLAC tags, and inside/outside archives don't seem to be the problem after some testing.
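One way to narrow a failure like this down is to take the converter front end out of the equation and run qaac directly on a file that misbehaved, capturing the console output. A sketch with placeholder file names; as explained further down in the thread, '2>' redirects the standard error handle (where console messages go) into a log file:
Code: [Select]
qaac -V 91 problem.flac -o test.m4a 2> qaac.log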
Title: QAAC: discussion, questions, feature requests, etc.
Post by: neothe0ne on 01 September, 2012, 03:24:44 PM
Quote:
I'm using qaac 1.39 and am having random problems with encoding FLAC to AAC. I see 26x+ speed most of the time (I saw a sustained 40x for a bit before it slowed down), but on some files the encoder will progress at a sluggish ~1.2 to ~1.7x speed, and the real kicker is that the resulting file is silence. This is completely random, and an unknown number of retries / accessing data from a different part of the disk after X minutes (that's a really long time) will "fix" the problem and let qaac actually encode something that's not silence. foobar2000 can play the FLAC file fine, of course, and the FLAC file is not corrupted. Anyone know what's going on? Non-ASCII/long filenames/locations, FLAC tags, and inside/outside archives don't seem to be the problem after some testing.
Apparently, this slow-speed silence bug has something to do with using --lowpass 19700? (I'm only using this for TVBR 100+... from a glance at the spectrographs it doesn't seem to change the distribution of bits at all, just imposes a hard wall which saves a couple hundred KB per song.) Is this a known bug of qaac and/or Core Audio?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 01 September, 2012, 03:29:50 PM
Quote:
Is this a known bug of qaac and/or Core Audio?
Can you share the original file you are converting from?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 01 September, 2012, 11:05:09 PM
It's impossible to fix a bug (if any) if I cannot reproduce it. If you can reproduce it, please share the problematic file and write the precise steps to reproduce it (command line, etc).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: neothe0ne on 02 September, 2012, 02:14:24 AM
Quote:
It's impossible to fix a bug (if any) if I cannot reproduce it. If you can reproduce it, please share the problematic file and write the precise steps to reproduce it (command line, etc).
It's not a file - it's every file, FLAC or WAV, any location. When the bug kicks in and I let the process finish encoding at ~2x instead of 40-50x, the resulting file is ~22 kbps of the correct length, but complete silence (instead of ~200 kbps). As I said, after an unknown number of playback / disk reading changes, the said files can be encoded properly again. It's completely random but affects every single file. I've now even had it encode the first 2 tracks of an album as slow silence (with proper length), but successfully encode the other 10 tracks of the album.
My foobar2000 arguments are
Code: [Select]
-V 109 --lowpass 19700 - -o %d
I don't think this bug happens when I use -V 91 and don't set a lowpass (the default lowpass for -91 is lower than 19.7). So my thoughts are it's the lowpass.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 02 September, 2012, 03:30:31 AM
Quote:
I don't think this bug happens when I use -V 91 and don't set a lowpass (the default lowpass for -91 is lower than 19.7). So my thoughts are it's the lowpass.
Your command line isn't producing any faulty output here. Check whether qaac is using the proper libsoxrate dll. Also try appending --ignorelength to the command line if encoding through a pipe.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: neothe0ne on 05 September, 2012, 12:58:41 AM
Quote:
I don't think this bug happens when I use -V 91 and don't set a lowpass (the default lowpass for -91 is lower than 19.7). So my thoughts are it's the lowpass.
Quote:
Your command line isn't producing any faulty output here. Check whether qaac is using the proper libsoxrate dll. Also try appending --ignorelength to the command line if encoding through a pipe.
If I try a wrong libsoxrate.dll, qaac crashes after 1 second and won't encode anything at all. What does "encoding through a pipe" mean? A filepath containing the "|", aka flacs inside zip/rar packages?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 05 September, 2012, 04:15:40 AM
Using the command line interpreter and the "pipe symbol" (|) is one way to make a pipe: a connected redirection of the output of one application (>) into the input of another application (<). It can also be built with OS functions. "Pipe aware" CLI applications usually support a single dash (-) as a placeholder for the standard input and output file handles instead of file names for disk files. In this case, they "print the output file content to the standard output" instead of using disk file operations; this requires specific options under Windows to support correct binary output.
STDIN = file handle 0 (usually assigned to the console, keyboard)
STDOUT = file handle 1 (usually assigned to the console, monitor)
STDERR = file handle 2 (usually assigned to the console, monitor)
The standard error handle can be redirected too, using its number 2 preceding the redirector (e.g. '2>'), which can be useful to store error messages into a log file.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sundance on 05 September, 2012, 05:50:34 AM
@nu774
Is there a technical reason why e.g. MP3 files are not supported as input for QAAC? I thought that libsndfile does support MP3...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 05 September, 2012, 06:07:06 AM
Why recode a worse format which already suffers from psychoacoustic loss?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 05 September, 2012, 06:10:20 AM
Quote:
Is there a technical reason why e.g. MP3 files are not supported as input for QAAC?
Transcoding from lossy formats is forbidden.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sundance on 05 September, 2012, 06:12:17 AM
Quote:
Why recode a worse format which already suffers from psychoacoustic loss?
E.g. to recode an MP3 audiobook @ 192 kbps / multiple files to an iPod compatible M4B file with chapters @ 48 kbps...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sundance on 05 September, 2012, 06:17:58 AM
Quote:
Is there a technical reason why e.g. MP3 files are not supported as input for QAAC?
Quote:
Transcoding from lossy formats is forbidden.
Forbidden? By law? LOL! I asked for technical, not ideological/political/religious reasons...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 05 September, 2012, 06:24:45 AM
HydrogenAudio has higher ideological standards. A workaround could be piping {lame --decode *.mp3 - | qaac -i ... -o *.m4a -}.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sundance on 05 September, 2012, 06:39:40 AM
Quote:
A workaround could be piping {lame --decode *.mp3 - | qaac -i ... -o *.m4a -}.
I've already considered that, but I couldn't find a way to use piping in combination with QAAC's --concat switch. So I ended up first converting all the MP3 stuff to WAV and then running "QAAC --concat *.wav -o audiobook.m4b", which uses more time and temp disc space than necessary...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 05 September, 2012, 07:22:22 AM
Quote:
@nu774
Is there a technical reason why e.g. MP3 files are not supported as input for QAAC? I thought that libsndfile does support MP3...
AFAIK libsndfile doesn't support MP3, but CoreAudio does. As far as the CoreAudio built-in decoder is used, license/patent would not matter. I don't want to support MP3 input mainly because it's not important at all for a command line AAC encoder, while implementing it is a pain for me (handling of encoder delays, etc.).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sundance on 05 September, 2012, 08:08:27 AM
Quote:
I don't want to support MP3 input (...) implementing it is a pain for me (handling of encoder delays, etc.).
I can perfectly understand your point of view. And unless another work-around for my "single audiobook conversion problem" surfaces, I'll stick to my intermediate WAV solution...
Or, does anyone have a solution (command line tool for Windows) to concat multiple m4a files into a single m4b file with chapter marks at file boundaries?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: thespacepope72 on 10 September, 2012, 04:03:36 AM
SITUATION: I ripped all of my CDs to FLAC about 4 years ago using EAC. I felt like FLAC was the best option at the time, and it still may be; however, I have recently moved to iPhones (wife's decision) with iOS 5.1.1, so syncing FLAC files to the iPhone using foobar2000 is not possible.
PROPOSED SOLUTION: I am thinking about converting my FLAC to ALAC using foobar2000 and qaac so I can use iTunes to sync the files to the iPhone.
CONCERN: It is my understanding that prior to November 2011, ALAC was a closed format and ALAC encoders were reverse engineered.
QUESTIONS: Is qaac based on the reverse engineered ALAC encoder or is it based on the "official" specification? I would be disappointed if I converted my FLAC collection to an "unofficial" ALAC implementation only to have something incompatible about the files at some point in the future. I am not technical enough to understand some of the specifications, but I know I want my files archived in some lossless format.
OTHER CONSIDERATIONS: I realize that I could use MediaMonkey to sync the files, but that may only last until Apple changes something in their software to break syncing with MediaMonkey. Additionally, I don't think that the free version of MediaMonkey does transcoding on the fly. I know I could also just convert my FLAC to mp3 and keep both copies of the files, but maintaining two versions of the same file sounds like torture to me.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 10 September, 2012, 04:26:06 AM
QAAC is only a user interface for the original Apple QuickTime AAC and ALAC encoders.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 13 September, 2012, 07:00:03 AM
Any info about what's new in CoreAudioToolbox 7.9.8.1?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 13 September, 2012, 08:06:07 AM
Quote:
Any info about what's new in CoreAudioToolbox 7.9.8.1?
I'd like to know also. I just checked to see if there was a QuickTime update, only to find the Apple site says the latest is 7.7.1, whereas I have been using 7.7.2 for a while now??

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 13 September, 2012, 11:53:24 AM
http://www.hydrogenaudio.org/forums/index.php?showtopic=96970

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 13 September, 2012, 12:09:52 PM
Quote:
http://www.hydrogenaudio.org/forums/index.php?showtopic=96970
Thanks eahm, I'm still wondering why on the Apple site the latest QuickTime says 7.7.1 when 7.7.2 has been out for a while?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 13 September, 2012, 12:24:19 PM
Quote:
I'm still wondering why on the Apple site the latest QuickTime says 7.7.1 when 7.7.2 has been out for a while?
I was thinking the same yesterday, I guess 7.7.1 was the latest included with iTunes? Now you have to install it separately, no idea btw.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kevinsham on 13 September, 2012, 09:31:24 PM
I cannot encode mono files to AAC using qaac now. The output is always stereo no matter what parameters I tried. It even outputs an HE-AAC v2 file if I specify --he. I've installed iTunes 10.7.0.21.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 14 September, 2012, 04:00:24 AM
Quote:
I cannot encode mono files to AAC using qaac now. The output is always stereo no matter what parameters I tried. It even outputs an HE-AAC v2 file if I specify --he.
Open with QuickTime Player -> window menu -> movie inspector. Or, check with Mediainfo (http://mediainfo.sourceforge.net/) (install the latest version). ffmpeg/libav seems to judge mono HE-AAC files generated by the Apple encoder to be stereo.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kevinsham on 14 September, 2012, 07:25:10 AM
iTunes says so: http://postimage.org/image/n3e3k99yl/
So does foobar: http://postimage.org/image/t7as83wgn/
Anyway, it seems that the result is the same when I do "Create AAC Version" within iTunes 10.7: it is encoded as HE-AAC v2 in stereo

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kevinsham on 14 September, 2012, 07:40:25 AM
It seems that iTunes can keep it mono if I encode as LC. Encoding as HE will make it HEv2, which is of course in stereo. Encoding as HE / LC in qaac both results in stereo.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 14 September, 2012, 08:30:18 AM
Quote:
Encoding as HE / LC in qaac both results in stereo.
Not reproducible here. The result of qaac is the same as iTunes. It's funny that iTunes reports mono HE-AAC files created by itself as stereo HE-AAC v2. It actually is not HE-AAC v2 and is not using any SBR extensions including PS. I confirmed this by looking into the bitstream (using a debugger). In this case, QuickTime Player and Mediainfo are correct.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kevinsham on 14 September, 2012, 08:46:52 AM
Quote:
Not reproducible here. The result of qaac is the same as iTunes. It's funny that iTunes reports mono HE-AAC files created by itself as stereo HE-AAC v2. It actually is not HE-AAC v2 and is not using any SBR extensions including PS. I confirmed this by looking into the bitstream (using a debugger). In this case, QuickTime Player and Mediainfo are correct.
Oh yeah, for the LC file, foobar says stereo and iTunes / QuickTime say mono.
So the behaviour of qaac is consistent with iTunes, and any "issue" is caused by iTunes only. I think QuickTime actually always says HEv2 files are mono: I tested one encoded with Nero AAC.
BTW, I notice that for the Nero HEv2 files foobar says SBR+PS stereo; for the iTunes HE file (mono source), foobar says SBR stereo.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 14 September, 2012, 10:01:43 AM
Quote:
Your HE AAC file IS mono, the decoder just outputs upmixed stereo. This is because of parametric stereo, which is always signalled implicitly. Because of that, the decoder upmixes ALL mono files to stereo, even if there is no PS in the file.
Menno

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kevinsham on 14 September, 2012, 10:59:45 AM
Quote:
Your HE AAC file IS mono, the decoder just outputs upmixed stereo. This is because of parametric stereo, which is always signalled implicitly. Because of that, the decoder upmixes ALL mono files to stereo, even if there is no PS in the file.
Menno
What "decoder" is that quote referring to? If Mediainfo is correct, all of the below behaviors are wrong:
An iTunes encoded LC file with mono source is reported as LC stereo in foobar.
An iTunes encoded HE file with mono source is reported as HE stereo in foobar.
An iTunes encoded HE file with mono source is reported as HEv2 stereo in iTunes.
In all 3 cases the software actually decodes to 2 channels, as verified by their "convert to wav" output. I think it is a problem with foobar and iTunes then.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kevinsham on 14 September, 2012, 11:07:38 AM
Quote:
What "decoder" is that quote referring to? If Mediainfo is correct, all of the below behaviors are wrong:
An iTunes encoded LC file with mono source is reported as LC stereo in foobar.
An iTunes encoded HE file with mono source is reported as HE stereo in foobar.
An iTunes encoded HE file with mono source is reported as HEv2 stereo in iTunes.
In all 3 cases the software actually decodes to 2 channels, as verified by their "convert to wav" output. I think it is a problem with foobar and iTunes then.
The last test I am doing: Nero encoded HE(v1) with mono source: iTunes reports it as HEv2 stereo. So it seems that iTunes is just reporting any HE files as "v2" and trying to decode them as stereo. Issues should be reported to iTunes and foobar then. Thanks for all the prompt replies. QAAC is a great tool.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 20 September, 2012, 05:18:22 PM
nu774, sorry I have to ask this, is there an option to remove the "Tool" tag while converting? I have a few iOS projects and I would like to use Apple/TVBR for music files, but I don't want to show how I converted them. If not, could you create a new version with a "-notool" or similar option? Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 20 September, 2012, 09:38:17 PM
Quote:
If not, could you create a new version with a "-notool" or similar option?
It's technically possible, but looks too special to be implemented as a generic option. Can't you just remove the tool tag after encoding?
In case you don't know how to do it, try mp4tags (a CLI app) of the mp4v2 project: mp4v2 download page (http://code.google.com/p/mp4v2/downloads/list?can=1&q=&colspec=Filename+Summary+Uploaded+ReleaseDate+Size+DownloadCount)
The following command line will remove the tool tag from foo.m4a.
Code: [Select]
mp4tags -rE foo.m4a
Currently only a deprecated pre-built binary for Windows is out there, and it probably has issues with non-ASCII text handling, but it will be enough for this rather simple usage.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 20 September, 2012, 09:53:14 PM
mp4tags works perfectly. An option in your CLI would be amazing for mass encoding without it, but it's perfect for now. Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 08 October, 2012, 01:44:16 PM
Code: [Select]
[qaac] release 1.42 (refalac 0.53) posted 3 hours ago by nu 774
MP4 container minor fix: added "isom" to compatible brands.
Also I started a new project, cafmux (https://github.com/nu774/cafmux), at github, which is a simple command line audio remuxer using CoreAudioToolbox.
https://sites.google.com/site/qaacpage/cabinet
Thanks for the update.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 17 October, 2012, 03:43:13 AM
There's a new version: https://sites.google.com/site/qaacpage/cabinet/qaac_1.43.zip?attredirects=0&d=1
Quote:
Fixed passband of --rate (sample rate converter of libsoxrate). Regression on 1.40. Default passband was not properly set, and resulted in audibly muffled sound when you do --rate=32000 or something.
Modified transition band width of --lowpass (was unnecessarily small).
Support CAF tag reading (qaac only).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 17 October, 2012, 04:03:45 AM
Quote:
There's a new version: https://sites.google.com/site/qaacpage/cabinet/qaac_1.43.zip?attredirects=0&d=1
Yes, but please link to the cabinet's url (https://sites.google.com/site/qaacpage/cabinet); people know what to download by themselves.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 19 October, 2012, 11:33:05 AM
Thanks for the 1.44 update and the iTunSMPB fix. One question, why does qtaacenc never need updates?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 19 October, 2012, 11:57:01 AM
Quote:
Thanks for the 1.44 update and the iTunSMPB fix. One question, why does qtaacenc never need updates?
Well, is that the kind of question I can answer? I don't know. I'm sorry for the long standing iTunSMPB problem; I didn't notice it until now since I'm a fb2k user, and nobody reported it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 19 October, 2012, 11:58:04 AM
Don't qaac and qtaacenc use the same "engine"? Sorry, I'm not a programmer.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 19 October, 2012, 12:07:32 PM
Quote:
Don't qaac and qtaacenc use the same "engine"? Sorry, I'm not a programmer.
Yes, ultimately the same Apple AAC encoder is used, but that's all. These two are completely different programs written by different authors. For example, qaac doesn't use QuickTime at all.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 25 October, 2012, 09:51:18 PM
nu774, did you test qaac with Windows 8? I am having problems converting.
Code: [Select]
Conversion failed: The encoder has terminated prematurely with code -1073741701 (0xC000007B); please re-check parameters
update1: Is this Visual C++ again?
update2: It works with the 32bit files inside the qaac ZIP, NOT with the 64bit.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 26 October, 2012, 03:03:50 AM
Quote:
nu774, did you test qaac with Windows 8? I am having problems converting.
No, I haven't.
Quote:
It works with the 32bit files inside the qaac ZIP, NOT with the 64bit.
Do you mean refalac64 is still having the problem?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 26 October, 2012, 03:12:17 AM
Quote:
Yes, ultimately the same Apple AAC encoder is used, but that's all. These two are completely different programs written by different authors. For example, qaac doesn't use QuickTime at all.
qtaacenc is updated also, but not so frequently. Disregarding which engine each of them uses, does qtaacenc offer some advantage over qaac?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 26 October, 2012, 03:39:36 AM
Quote:
qtaacenc is updated also, but not so frequently. Disregarding which engine each of them uses, does qtaacenc offer some advantage over qaac?
Well, it's much simpler and the executable size is much smaller than qaac's.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 31 October, 2012, 11:01:03 AM
[qaac] release 1.46 (refalac 0.57) posted 3 hours ago by nu 774
Fixed a regression on 1.45: due to a subtle bug in option parsing code, --concat-cuesheet and --native-resampler were misinterpreted and not working.
Automatic RF64 output on -D (wav output), when the file size is unknown or larger than 4G, and output is not connected to a pipe. In these cases, at first a "JUNK" chunk is written where the ds64 chunk goes, and if the file is actually beyond the 4G limit at the end of writing, it is rewritten to a ds64 chunk. Therefore, if it was actually smaller than 4G, the JUNK chunk remains before the "fmt " chunk. This is completely valid and legal in the RIFF/WAV spec, but some software might get confused when seeing this.
Slight improvement of channel mapping code.
Additional search for mingw DLLs (libFLAC-8.dll and libwavpack-1.dll). When official win32 binaries are not found, these names are also searched.
https://sites.google.com/site/qaacpage/cabinet

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 31 October, 2012, 12:12:45 PM
Quote:
It works with the 32bit files inside the qaac ZIP, NOT with the 64bit.
Quote:
Do you mean refalac64 is still having the problem?
I only use qaac. I used to copy qaac.exe from the x86 and all the other files from x64, but it doesn't work with Windows 8. It works now when I copy every file from x86.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 October, 2012, 10:50:35 PM
Quote:
I only use qaac. I used to copy qaac.exe from the x86 and all the other files from x64, but it doesn't work with Windows 8. It works now when I copy every file from x86.
OK. Actually it has nothing to do with Windows 8.
Generally speaking, a 32bit process can only load 32bit DLLs, and a 64bit process can only load 64bit DLLs. Therefore, qaac (a 32bit application) requires the 32bit versions of the DLLs (in the x86 folder). In your case, you probably had the MSVC runtime installed in your previous environment/OS; therefore, qaac worked without installing the MSVC runtime from the qaac zip archive.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 01 November, 2012, 12:52:54 AM
Perfect, thank you; for some stupid reason I thought qaac was x86 and x64.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 08 November, 2012, 03:26:18 AM
QuickTime 7.7.3 was updated today, releasing CoreAudioToolbox 7.9.8.2. I'm curious what was updated in this version?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 08 November, 2012, 06:04:07 AM
Quote:
QuickTime 7.7.3 was updated today, releasing CoreAudioToolbox 7.9.8.2. I'm curious what was updated in this version?
As far as I can see, the output of the AAC encoder is identical to 7.9.8.1. I also tested whether ExtAudioFileRead() still crashes on HEv2 files, and now it doesn't. However, even if I replace CoreAudioToolbox.dll with 7.9.8.1, it doesn't crash. So I cannot reproduce the behavior on the previous version. Maybe it depends on other components... I don't know.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 08 November, 2012, 10:37:32 AM
Quote:
QuickTime 7.7.3 was updated today, releasing CoreAudioToolbox 7.9.8.2. I'm curious what was updated in this version?
Sorry, I just saw this post after I opened a new one for 7.9.8.2 (http://www.hydrogenaudio.org/forums/index.php?showtopic=97782).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 15 November, 2012, 04:51:30 AM
[qaac] release 2.00 (refalac 1.00) posted 41 minutes ago by nu 774
This is an experimental (might be unstable) release with many updates, so the version was bumped up to 2.00.
Enabled MP3 decoding.
--concat + --adts now accepts multiple inputs with different sample formats. Explained later.
Removed --concat-cuesheet, since it's mostly similar to --concat.
Added --no-dither, which turns off automatic dither on quantization.
-b now accepts arbitrary values in the 2-32 range. -b32 for WAV output means float format. All other cases are integer.
-N (--normalize) now doesn't use a temporary file if the input is seekable.
FLAC files with an ID3v2 tag are now accepted (the ID3 tag is just skipped and ignored).
Fix crash on reading TAK files with a binary tag.
Improve ID3v2 tag handling.
Much refactoring of the source code has been done.
Multiple format streams generated by --concat and --adts: since this requires a complete reset of the encoder, zero padding is added at the stream change point. As far as I know, almost no software player on PC can continue to play such a file after the stream format change. In my environment, Windows Media Player 12 is the only exception I know of.
https://sites.google.com/site/qaacpage/cabinet

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 17 November, 2012, 12:26:01 PM
[qaac] release 2.01 (refalac 1.01) posted Nov 16, 2012 4:31 AM by nu 774
Fixed a regression on 2.00: --threading was broken.
https://sites.google.com/site/qaacpage/cabinet

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Mix3dmessagez on 18 November, 2012, 11:09:59 AM Hello all. Since the new QuickTime update, albums encoded from CD would skip at a certain point in each song to the next song in the album, for every song on the album. This didn't happen with needledropped files or MP3 files, just newly encoded QT files. I used qaac to encode the FLAC files to ALAC; does this fix that issue? Title: QAAC: discussion, questions, feature requests, etc. Post by: deluge on 18 November, 2012, 01:55:48 PM Hi, I have found a possible bug in QAAC or in AppleApplicationSupport. I have encoded this sample: eig.flac --> http://www.hydrogenaudio.org/forums/index....st&p=622980 (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=70598&view=findpost&p=622980) with QAAC 2.01 ( qaac.exe *.flac -c112 ) and CoreAudioToolbox 7.9.8.2, and the resulting sample has really bad distortion in the first 2 seconds. Title: QAAC: discussion, questions, feature requests, etc. Post by: lvqcl on 18 November, 2012, 02:44:50 PM Try CoreAudioToolbox 7.9.7.9 from iTunes 10.6.3. But why do you want to use CBR? Title: QAAC: discussion, questions, feature requests, etc. Post by: deluge on 18 November, 2012, 03:02:38 PM I have used it for testing purposes only. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 19 November, 2012, 12:17:46 AM I quickly tested with the following set, and it looks like only the combination of 7.9.8.2 (or 7.9.8.1) and CBR 112k has that problem. Anyway, it's interesting, and thanks for sharing it -- although it's not an issue in qaac and there's nothing I can do about it. 7.9.8.2, -c112 7.9.8.2, -c96 7.9.8.2, -c128 7.9.8.2, -a112 7.9.8.2, -v112 7.9.7.9, -c112 Title: QAAC: discussion, questions, feature requests, etc. Post by: sluggy on 27 November, 2012, 08:17:05 AM [qaac] release 2.03 (refalac 1.03) posted 2 hours ago by nu 774 Fixed the box layout of iTunes custom metadata (long tags). It was written as name -> mean -> data (should be mean -> name -> data). This was a long-standing bug, and I am somewhat surprised that no one has ever reported it to me. This should fix the interoperability problem with TagLib. https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet) Title: QAAC: discussion, questions, feature requests, etc. Post by: Anakunda on 27 November, 2012, 08:22:59 AM Should I worry that tags written by previous qaac versions are somehow broken? Title: QAAC: discussion, questions, feature requests, etc. Post by: eahm on 27 November, 2012, 09:02:29 AM This was a long-standing bug, and I am somewhat surprised that no one has ever reported it to me. Because people know less than they actually think they do. I consider myself an expert in many things, but this is all new to me. I come here every day and get screamed at for something I am sure I am doing right, or for something I say that means absolutely nothing. It's a relief of stress to see that you actually don't know everything; of course I know I don't, but sometimes, when you know and manage a lot of things like I do, you think you know everything. Thanks to the people of this forum for waking me up every single day. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 27 November, 2012, 09:18:33 AM Should I worry that tags written by previous qaac versions are somehow broken? If you have had no trouble concerning m4a tags so far, you don't have to worry about it. Especially if you are running qaac from some front end such as fb2k and letting it write the tags.
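For readers unfamiliar with the box layout being discussed: an iTunes freeform ("----") tag atom nests three child atoms, and the 2.03 fix concerns their ordering. A minimal Python sketch of the correct layout (an illustration only, not qaac's actual code; the tag name and value are made up):
Code: [Select]
import struct

def atom(fourcc, payload):
    # An MP4 atom is a 4-byte big-endian size (header included) + fourcc + payload.
    return struct.pack(">I", 8 + len(payload)) + fourcc + payload

def itunes_freeform_tag(name, value):
    # Correct child order inside "----" is mean -> name -> data;
    # pre-2.03 qaac emitted name -> mean -> data, which confused TagLib.
    mean = atom(b"mean", b"\x00\x00\x00\x00" + b"com.apple.iTunes")
    nam = atom(b"name", b"\x00\x00\x00\x00" + name.encode("utf-8"))
    data = atom(b"data", struct.pack(">II", 1, 0) + value.encode("utf-8"))  # 1 = UTF-8 text
    return atom(b"----", mean + nam + data)

print(itunes_freeform_tag("Encoding Params", "example").hex())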
Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 27 November, 2012, 09:37:04 AM Because people know less than they actually think they do. I consider myself an expert in many things, but this is all new to me. I come here every day and get screamed at for something I am sure I am doing right, or for something I say that means absolutely nothing. It's a relief of stress to see that you actually don't know everything; of course I know I don't, but sometimes, when you know and manage a lot of things like I do, you think you know everything. Thanks to the people of this forum for waking me up every single day. Touching the tags afterwards with fb2k (for ReplayGain or something) is enough to fix the tag-related issue. Therefore, I think it's just because most people are using fb2k with qaac. I don't assume people take the trouble to look inside the MP4 container. Since taglib shows some warnings when opening a generated m4a file, it could have been noticed if someone had tried it once. Title: QAAC: discussion, questions, feature requests, etc. Post by: sluggy on 27 November, 2012, 12:35:44 PM [qaac] release 2.04 (refalac 1.04) posted an hour ago by nu 774 Fixed broken pipe input (regression on 2.00). When feeding from a pipe, there was always a chance that the output from some arbitrary point onward becomes white-noise-like. This was due to a switch to a lower-level I/O routine in 2.00, which can result in a "partial read" in the case of pipe input. When it is still aligned to a sample-size boundary, it does no harm. However, when it is not aligned, the succeeding samples get completely out of sync and result in white noise or something. The probability of this problem depends on how the sender pushes audio to the pipe, and on the sample size (16bit, 24bit, etc). I didn't notice it until today, but I could reproduce it using the cat command as the feeder. https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet) Title: QAAC: discussion, questions, feature requests, etc. Post by: Anakunda on 27 November, 2012, 02:25:27 PM Long awaited qaac 2.5 is out! https://sites.google.com/site/qaacpage/news...se205refalac105 (https://sites.google.com/site/qaacpage/news/qaacrelease205refalac105) Quote Sorry, the 2.04 fix was flawed. Re-fixed it. BTW, the problem on 2.00 was usually quite audible. If you are anxious about it, the apparent evidence of the bug is a lower number of samples compared to the original. If you were using simply 16/32bit 2ch input, you might not have met any troubles so far (like me). In this case, the sample size (in bytes) is a multiple of 2, and there's probably less chance of a partial read breaking in the middle of a sample boundary. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 27 November, 2012, 07:53:10 PM If you were using simply 16/32bit 2ch input, you might not have met any troubles so far (like me). In this case, the sample size (in bytes) is a multiple of 2, and there's probably less chance of a partial read breaking in the middle of a sample boundary. Sorry, I meant to say power of two. If the sample size is a power of two, there's a high possibility that the I/O buffer size (usually a power of two) is divisible by the sample size, so the problematic partial read doesn't happen. One user noticed this when encoding from a 24bit source yesterday. In fact, another user had already reported this to me, but at that time I couldn't reproduce it and thought it to be an environment-specific problem (he was using Linux and wine).
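To make the partial-read failure mode concrete, here is a sketch (illustrative Python, not qaac's code) of a pipe reader that re-aligns short reads to whole samples, which is essentially what the 2.04/2.05 fix has to guarantee:
Code: [Select]
import sys

def read_whole_samples(stream, bytes_per_sample, chunk=65536):
    # A raw read() on a pipe may return fewer bytes than requested. If the
    # cut falls inside a sample (e.g. 3-byte 24-bit frames), treating the
    # buffer as whole samples shifts every later byte -- decoded as noise.
    leftover = b""
    while True:
        buf = stream.read(chunk)
        if not buf:
            break  # EOF; a non-empty leftover would be a truncated final sample
        data = leftover + buf
        usable = len(data) - (len(data) % bytes_per_sample)  # align down
        leftover = data[usable:]  # carry the partial sample into the next read
        if usable:
            yield data[:usable]

for block in read_whole_samples(sys.stdin.buffer, bytes_per_sample=6):  # 24-bit stereo
    pass  # feed only whole samples to the encoder
This also shows why power-of-two sample sizes rarely trigger the bug: a 64 KiB buffer is already a multiple of a 4-byte (16-bit stereo) frame, but not of a 6-byte (24-bit stereo) one.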
Title: QAAC: discussion, questions, feature requests, etc. Post by: eahm on 27 November, 2012, 08:32:22 PM Long awaited qaac 2.5 is out! 2.05 Title: QAAC: discussion, questions, feature requests, etc. Post by: sluggy on 06 December, 2012, 06:53:56 PM [qaac] release 2.06 (refalac 1.06) posted 13 hours ago by nu 774 Fixed a bug: when opening an unsupported input file, there was a chance that a ridiculously large amount of memory got allocated and the OS hung (refalac only). This is a regression on 2.00, but it basically comes from a weakness of libmp4v2, which can allocate HUGE amounts of memory when the mp4 box structure is corrupt. Rewritten 24bit PCM bit packing/unpacking code. qaac -D 24bit.wav -o - >NUL is about 3 times faster than before. https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet) Title: QAAC: discussion, questions, feature requests, etc. Post by: sluggy on 07 December, 2012, 11:53:31 AM [qaac] release 2.07 (refalac 1.07) posted 3 hours ago by nu 774 Fixes for 2.00 regressions again. The WAV parser was ignoring the data chunk length even if --ignorelength was not specified. A bogus total length was printed on libsndfile input due to an int64_t -> int8_t typo. https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet) Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 10 December, 2012, 10:40:02 AM I thought the old qtaacenc portable script may have been too bloated, so I created a new one for qaac. http://www.mediafire.com/?1ysj4ph04vt3b8g (http://www.mediafire.com/?1ysj4ph04vt3b8g) You need to put either the QuickTime or iTunes installer into the same directory as the script, and it should extract only the files necessary for qaac without the need to install anything. Files it extracts: Code: [Select]
CoreAudioToolbox.dll
CoreFoundation.dll
ASL.dll
icudt46.dll
libdispatch.dll
libicuin.dll
libicuuc.dll
objc.dll
pthreadVC2.dll
Did I miss anything? Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 10 December, 2012, 10:56:19 AM Thanks. Alternatively, you can try makeportable.zip at the cabinet page (bat file only, 7z.exe is not included). I haven't mentioned/recommended it in public since it's hackish, and might be against Apple's policy. Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 10 December, 2012, 11:07:41 AM Ah, damn, I totally missed your portable script. Also, yours has "7z e -y -oQTfiles\Microsoft.VC80.CRT -i!msvcp80.dll.* -i!msvcr80.dll.* -i!manifest.* AppleApplicationSupport.msi" and much more error handling. Title: QAAC: discussion, questions, feature requests, etc. Post by: eahm on 13 December, 2012, 02:56:11 AM Code: [Select]
[qaac] release 2.08 (refalac 1.08)
posted 1 minute ago by nu 774
Now copies chapters from ALAC/m4a input (when available).
Delays Nero style chapter points by as much as the encoder delay (2112 samples). It seems that the Nero AAC encoder was previously using Nero style chapters to signal encoder delay this way, and fb2k honors it.
Note that Nero style chapters are a list of <title, start time> pairs, so the first chapter can start at an arbitrary point, while the last chapter runs until the end of the track.
On the other hand, QuickTime style chapters are a list of <title, duration> pairs, so the first chapter always starts from the beginning of the track, while the last chapter can end at an arbitrary point.
qaac will write chapters in both styles (for the sake of compatibility), but the two have subtle differences and incompatibilities.
Now writes the actual duration into edts. This is done mainly for QuickTime, which doesn't look at the iTunSMPB thingy. Now QuickTime can trim the zero padding and decode sample-accurately (the whole song / each sub-chapter).
Technically, there's no way to tell QuickTime Player the value of the encoder delay. It just silently assumes the implicit AAC delay of 2112 samples and automatically crops that amount from the beginning --- it just works with qaac because qaac is using their encoder. edts is used here just to let them trim the trailing zero padding.
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)
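To illustrate the difference between the two chapter styles just described, a small Python sketch converting a Nero style list of <title, start time> pairs into QuickTime style <title, duration> pairs (the titles and times are made up):
Code: [Select]
def nero_to_qt(chapters, total):
    # chapters: [(title, start_seconds)], total: track length in seconds.
    # QuickTime chapters implicitly start at 0 and carry durations instead,
    # so a Nero first chapter starting after 0 cannot be represented exactly.
    starts = [start for _, start in chapters] + [total]
    return [(title, starts[i + 1] - starts[i])
            for i, (title, _) in enumerate(chapters)]

nero = [("Intro", 0.0), ("Verse", 12.5), ("Outro", 200.0)]
print(nero_to_qt(nero, 240.0))
# [('Intro', 12.5), ('Verse', 187.5), ('Outro', 40.0)]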
Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 13 December, 2012, 09:38:31 AM If qaac were to also use edts for trimming the encoder delay, would that break gapless playback in QuickTime, since it always skips 2112 samples? Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 13 December, 2012, 09:58:12 AM If qaac were to also use edts for trimming the encoder delay, would that break gapless playback in QuickTime, since it always skips 2112 samples? Exactly. For that very reason I cannot do it. QuickTime File Format Specification Appendix G (http://developer.apple.com/library/mac/#documentation/QuickTime/qtff/QTFFAppenG/QTFFAppenG.html) defines a way to signal encoder delay using edts and the (new) sgpd atom, but it's a spec for QT/MOV, so it cannot be applied to MP4, and QT7 for Windows doesn't seem to implement or support it anyway. Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 13 December, 2012, 10:02:17 AM I see. It would be nice if Apple were to stick to a single proprietary implementation of this stuff while they're at it. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 13 December, 2012, 10:26:08 AM BTW, QuickTime Pro was useful when handling chapters for testing. When I wanted to extract the second chapter, the following did the job. - Move to the second chapter head by selecting the second chapter - Press I (selection start) - Move to the third chapter head by selecting the third chapter - Press O (selection end) - Trim to selection - Export, done. This way I could losslessly export a chapter to MP4, or decode/re-encode to another format. Too bad that it's almost abandoned (Mac OS X has already moved to QuickTime X, a different piece of software with the same branding) and crashes quite easily. I also wanted to test sample accuracy with iTunes, but I couldn't figure out if it's even possible for iTunes to extract a specific sub-chapter. Title: QAAC: discussion, questions, feature requests, etc. Post by: sluggy on 18 December, 2012, 04:50:07 AM [qaac] release 2.09 (refalac 1.09) posted 7 hours ago by nu 774 Fixed a regression on 2.06, which resulted in failure when a non-canonical path was passed via the -o option (reported by this post at HA).
Added a --fname-from-tag option to generate output file names based on the tags of the input files. You can configure the output file name more precisely by additionally using --fname-format (which was previously an option for cuesheet input only). https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet) Title: QAAC: discussion, questions, feature requests, etc. Post by: o-l-a-v on 25 December, 2012, 06:53:14 AM I did a pretty small ABX today between wav and qaac tvbr 127. Got 5 out of 5. I thought I could hear some difference. Hard to describe what, but kind of the richness of the audio was different. And when I got 5/5, I don't think it was a coincidence. It was a hardstyle tune by Headhunterz. In other genres, though, I can't seem to hear any difference at all. Code: [Select]
foo_abx 1.3.4 report
foobar2000 v1.1.18
2012/12/25 12:36:05
File A: C:\Users\Olav\Desktop\3929430_Lessons_In_Love_feat__Neon_Trees_Headhunterz_Remix.wav
File B: C:\Users\Olav\Desktop\QAAC TVBR 127\3929430_Lessons_In_Love_feat__Neon_Trees_Headhunterz_Remix.m4a
12:36:05 : Test started.
12:37:08 : 01/01 50.0%
12:38:06 : 02/02 25.0%
12:38:57 : 03/03 12.5%
12:39:42 : 04/04 6.3%
12:42:34 : 05/05 3.1%
12:43:14 : Test finished.
----------
Total: 5/5 (3.1%)
Pretty interesting, I think. Setup was: foo_abx 1.3.4 -> Belkin USB cable -> HRT Music Streamer II -> Steelseries 5HV2. QAAC v2.09. BTW, new qaac uploaded, 2.10 https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet) No changelog yet Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 25 December, 2012, 07:12:08 AM BTW, new qaac uploaded, 2.10 https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet) No changelog yet I was just writing it, and finished it now Quote Changed the --delay option spec. --delay now accepts either a duration in time or a number of samples. If you are used to the timespec of sox, you should already be familiar with it. The format is as follows: --delay=[hh:[mm:]]ss[.sss]... --delay=<integer>s In the first case, the parts surrounded by brackets can be omitted. So, --delay=100 means 100 seconds, --delay=-10.72 means -10.72 seconds, --delay=02:53.1 means 2 minutes and 53.1 seconds, and so on. The second case is for a number of samples. You just put an integer followed by "s" (meaning "samples"): --delay=-2112s or something. HTOA support. Now index 00 of the first track in a cue sheet is encoded into track 0. Fixed a bug in the cue sheet parser: the last line of a cue was ignored if it ended with a whitespace character other than LF. Title: QAAC: discussion, questions, feature requests, etc. Post by: usikpa on 25 December, 2012, 04:40:54 PM nu774, I am new to qaac; I just downloaded its command line module and read the documentation page. I was wondering why you have no example of the mp3 conversion syntax to preserve the quality of the input as much as possible. My objective is to be able to import the resulting m4a as a sound track into an m4v video (using mp4box and handbrake). I tried conversion with one of ffmpeg's aac encoders but did not quite like the quality of the output. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 26 December, 2012, 09:07:22 AM I was wondering why you have no example of the mp3 conversion syntax to preserve the quality of the input as much as possible. Well, if you want higher quality, just raise the bitrate or something. Period.
However, if you encode 64kbps MP3 into 320kbps AAC, the result gets 5 times larger than the input in file size, but is still worse than the input in quality. Lossy-to-lossy conversion works like that. Sounds ridiculous? So you have to make a compromise somewhere on the quality/size trade-off. People use lossy codecs that way. I cannot tell you "the best setting" for you. Only you can decide it, using your own ears. And I think you will be able to understand why I cannot write every possible example for every single task you might think of. Title: QAAC: discussion, questions, feature requests, etc. Post by: usikpa on 26 December, 2012, 11:10:56 AM Well, if you want higher quality, just raise the bitrate or something. Period. However, if you encode 64kbps MP3 into 320kbps AAC, the result gets 5 times larger than the input in file size, but is still worse than the input in quality. Lossy-to-lossy conversion works like that. Sounds ridiculous?.. Agreed. I tried an HE-AAC file out of the 320 kbps mp3; the result was almost a quarter of the size, but practically of the same sound quality. Then I ran qaac -v256, and got the same quality and almost the same file size. (Though I must admit the perception of 'quality' of the sound file largely depends on the decoder in the system. As I use ffdshow, all of my m4a files appear to sound better in Windows Media Player than in QT.) What's great about qaac is that, unlike ffmpeg, which sometimes defaults to ADTS, it produces real QT/iTunes-compatible m4a on the spot! I wonder how the QAAC encoder compares to libvo_aacenc? Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 26 December, 2012, 11:26:22 AM What's great about qaac is that, unlike ffmpeg, which sometimes defaults to ADTS, it produces real QT/iTunes-compatible m4a on the spot! As far as I know, ffmpeg/avconv will decide the container format from the file extension you give, and can mux into MP4 if properly directed. Quote I wonder how the QAAC encoder compares to libvo_aacenc? The quality of libvo_aacenc is not comparable to the state-of-the-art commercial encoders from Apple or FhG. If you can, use libfdk_aac instead with ffmpeg (although you have to build it yourself). Title: QAAC: discussion, questions, feature requests, etc. Post by: Stop the Noise on 26 December, 2012, 11:49:36 AM Hi nu774, I'm using QAAC through the latest TAudioConverter (0.6.2.459), and there's a problem with the metadata from the flac files not being completely transferred to the m4a file. The 'date' is missing! The rest of the metadata transfers correctly (album, artist, title, etc.); when I read the log, the 'date' does show in the metadata. Is there anything that can be done? Thanks for all your work on QAAC. !UPDATE! I just received word from the developer of TAudioConverter that it's an issue there, which will be fixed in the next release. Title: QAAC: discussion, questions, feature requests, etc. Post by: usikpa on 30 December, 2012, 05:56:56 PM Thank you, nu774, for your suggestion. Although I have never compiled an executable under Windows, I may give it a try... Now I am trying to convert a movie's track, which is 6-channel ac3 audio, to the m4a file format. When I ran Code: [Select] qaac -v256 --verbose audio.ac3 -o audio.m4a I got an error that there is no such input format. Is there something that I need to do with the ac3 file first? Also, I understand that I need to mix down from 6 channels to 2. How do I do that? Is 2 channels a default setting? Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 30 December, 2012, 08:01:12 PM qaac does not support AC3 input. So, you have to decode it before passing it to qaac (with ffmpeg, avconv or something). As for 5.1ch -> stereo mixdown by qaac, read the following: https://github.com/nu774/qaac/wiki/Matrix-mixer (https://github.com/nu774/qaac/wiki/Matrix-mixer) Title: QAAC: discussion, questions, feature requests, etc. Post by: LigH on 30 December, 2012, 08:17:13 PM MeGUI supports transcoding from AC3 to AAC using QAAC if you enable its support in the latest development update (and restart it). It also supports downmixing with e.g. a Dolby ProLogic II matrix. Please note that MeGUI won't ensure the presence of the Apple CoreAudioToolbox DLLs; the user is responsible for fulfilling this prerequisite. Title: QAAC: discussion, questions, feature requests, etc. Post by: usikpa on 31 December, 2012, 01:55:58 AM qaac does not support AC3 input. So, you have to decode it before passing it to qaac (with ffmpeg, avconv or something). As for 5.1ch -> stereo mixdown by qaac, read the following: https://github.com/nu774/qaac/wiki/Matrix-mixer (https://github.com/nu774/qaac/wiki/Matrix-mixer) MeGUI supports transcoding from AC3 to AAC using QAAC if you enable its support in the latest development update (and restart it). It also supports downmixing with e.g. a Dolby ProLogic II matrix. Please note that MeGUI won't ensure the presence of the Apple CoreAudioToolbox DLLs; the user is responsible for fulfilling this prerequisite. Thank you for the suggestions. If I understood correctly, I will first try to decode the ac3 to raw pcm, and then run the encoder. Happy New YEAR to all of you! Title: QAAC: discussion, questions, feature requests, etc. Post by: kode54 on 31 December, 2012, 08:49:09 PM The quality of libvo_aacenc is not comparable to the state-of-the-art commercial encoders from Apple or FhG. If you can, use libfdk_aac instead with ffmpeg (although you have to build it yourself). Speaking of libfdk_aac, it would be really cool if someone could make a portable version of qaac that uses libfdk_aac as the encoder, as neither ffmpeg nor avconv appears to support gapless encoding. I'm only guessing that the only non-portable binary-blob code that qaac uses is the CoreAudioToolbox library, and only for AAC encoding. Well, and then I did look into some of the headers when I considered an attempt, and noticed a lot of Win32 header and function usage, so maybe it won't be so easy after all. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 31 December, 2012, 10:13:33 PM Yeah, *some* parts of qaac have been written more or less with portability in mind, but others have not. When qaac moved from QuickTime to CoreAudioToolbox, I dropped gcc (MinGW) because I wanted to use DLL delay loading for CoreAudioToolbox.dll. Since then, I have stopped caring about portability at all. So, now qaac even lacks cross-compiler support on Win32, and I don't think it's a good base for making something useful. If I find time, I *might* think of writing a much simpler implementation for libfdk_aac. Title: QAAC: discussion, questions, feature requests, etc. Post by: usikpa on 01 January, 2013, 04:27:51 PM nu774 Sorry to bother you with this; I am a bit confused about multichannel handling by QAAC.
I now have an LPCM file (wav or au format) that has the channels, according to MediaInfo, in the following order: Front: LCR, Side: LsRs, LFE. Now, when QAAC does its first (Microsoft) reordering, does it re-order the channels incorrectly, as the channel mask specified above is not mentioned in the channel input layout table? How does one let QAAC know the input channel layout? Also, just in case, am I right to assume the Microsoft layout when writing a matrix file? Title: QAAC: discussion, questions, feature requests, etc. Post by: LigH on 01 January, 2013, 04:39:54 PM @ usikpa: MeGUI uses AviSynth, usually a scriptable video frameserver, but it is also able to serve audio in RAM to the encoder. This technique was first used by BeHappy, specifically an audio converter, an alternative to the abandoned BeSweet. Title: QAAC: discussion, questions, feature requests, etc. Post by: lvqcl on 01 January, 2013, 05:01:53 PM nu774 Front: LCR, Side: LsRs, LFE. Now, when QAAC does its first (Microsoft) reordering, does it re-order the channels incorrectly, as the channel mask specified above is not mentioned in the channel input layout table? How does one let QAAC know the input channel layout? The channel order for the WAV format is fixed. Your layout is equal to "FL FR FC LFE SL SR" Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 02 January, 2013, 12:08:29 AM Use --verbose if you are uncertain about channel layout recognition by qaac. It will show something like the following: Code: [Select]
Input layout: 5.1 (L R C LFE Lsd Rsd)
Output layout: 5.1 (C L R Ls Rs LFE)
Also, just in case, am I right to assume the Microsoft layout when writing a matrix file? Correct. Title: QAAC: discussion, questions, feature requests, etc. Post by: usikpa on 02 January, 2013, 04:36:23 PM Use --verbose if you are uncertain about channel layout recognition by qaac. It will show something like the following: Code: [Select]
Input layout: 5.1 (L R C LFE Lsd Rsd)
Output layout: 5.1 (C L R Ls Rs LFE)
Well, I did, and this is what I got: Code: [Select]
...audio.m4a
Using default channel layout.
Output layout: 5.1 (C L R Ls Rs LFE)
...
As advised above, I will resort to the .wav file format instead of .au. Thank you for your replies Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 04 January, 2013, 11:28:48 AM Speaking of libfdk_aac, it would be really cool if someone could make a portable version of qaac that uses libfdk_aac as the encoder, as neither ffmpeg nor avconv appears to support gapless encoding. I'm only guessing that the only non-portable binary-blob code that qaac uses is the CoreAudioToolbox library, and only for AAC encoding. Well, and then I did look into some of the headers when I considered an attempt, and noticed a lot of Win32 header and function usage, so maybe it won't be so easy after all. Finally written a frontend from scratch: https://github.com/nu774/fdkaac (https://github.com/nu774/fdkaac) A very simple program (compared to qaac); only WAV input is available. Gapless playback is supported, at least for LC. I hope it is portable (tested on mingw-w64 and Linux). Title: QAAC: discussion, questions, feature requests, etc. Post by: sluggy on 04 January, 2013, 11:40:14 AM Excellent, will you be building a version to work with fb2k?? Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 04 January, 2013, 11:45:46 AM Excellent, will you be building a version to work with fb2k??
I could distribute a binary of my frontend, but I will not be able to distribute libfdk-aac anyway. Title: QAAC: discussion, questions, feature requests, etc. Post by: sluggy on 04 January, 2013, 11:59:28 AM Finally written a frontend from scratch: https://github.com/nu774/fdkaac (https://github.com/nu774/fdkaac) A very simple program (compared to qaac); only WAV input is available. Gapless playback is supported, at least for LC. I hope it is portable (tested on mingw-w64 and Linux). I have no idea how to build this or how to get it to work with fb2k. Title: QAAC: discussion, questions, feature requests, etc. Post by: eahm on 04 January, 2013, 02:59:17 PM nu774, two quick questions: 1) Why should people use 1.x instead of 2.x? Older OSes? 2) Why is 2.09 still available in "Cabinet" and not in "old"? Is 2.10 a testing build? Thanks. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 04 January, 2013, 08:08:37 PM 1) Why should people use 1.x instead of 2.x? Older OSes? I don't recommend one over the other. Although 2.x made improvements in functionality, you can continue to use 1.x if you think 2.x is not stable enough. 2) Why is 2.09 still available in "Cabinet" and not in "old"? Is 2.10 a testing build? I just forgot to move it; thanks for pointing it out. Title: QAAC: discussion, questions, feature requests, etc. Post by: bat_guano on 10 January, 2013, 10:00:57 AM I could distribute a binary of my frontend, but I will not be able to distribute libfdk-aac anyway. Hi What is the position with fdkaac.exe? When it's compiled, is it OK to share it on this forum or some file-hosting site? (Just asking) Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 10 January, 2013, 10:20:22 PM What is the position with fdkaac.exe? When it's compiled, is it OK to share it on this forum or some file-hosting site? (Just asking) As for Fraunhofer's FDK AAC library, read here: https://raw.github.com/mstorsjo/fdk-aac/master/NOTICE (https://raw.github.com/mstorsjo/fdk-aac/master/NOTICE) Title: QAAC: discussion, questions, feature requests, etc. Post by: Sparktank on 12 January, 2013, 04:44:21 PM Hi! Thank you for this wonderful program! I'm very new to the AAC formats (AAC/M4A). So far I've been using just Foobar with NeroAACenc. But with this getting frequent updates, I've decided to look into it and see how it fares for my typical usage. Right now it's just PC playback and a Coby portable media player. Windows 7 (64bit)/Windows XP SP3 (32bit). My Coby media player is the MP828-8GB. I've delayed trying this out due to the CoreAudioToolbox conundrum. Thankfully sneaker created a wonderful batch script to extract all the necessary files and make things easier. :) It took me several tests to figure out the quality values for TVBR [127=highest] and for the -q [0-2] quality modes [0=highest], as their values are not listed in the help and I'm completely new to the AAC field. ^__^" Thank you all for your contributions, and I look forward to using this more frequently and checking out more updates in the future. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 12 January, 2013, 08:14:50 PM The documentation is at https://github.com/nu774/qaac/wiki (https://github.com/nu774/qaac/wiki) -q controls the quality/speed trade-off, and 2 is the highest in quality, slowest in speed. Title: QAAC: discussion, questions, feature requests, etc.
Post by: Sparktank on 12 January, 2013, 10:37:45 PM The documentation is at https://github.com/nu774/qaac/wiki (https://github.com/nu774/qaac/wiki) -q controls the quality/speed trade-off, and 2 is the highest in quality, slowest in speed. Bookmarked, thanks! Title: QAAC: discussion, questions, feature requests, etc. Post by: Anakunda on 13 January, 2013, 06:38:53 AM It's 2.11 now Quote [qaac] release 2.11 (refalac 1.11) posted 47 minutes ago by nu 774 Changed the --tag option behavior to be strict. Formerly, when the fourcc passed by --tag was unknown, qaac accepted it and wrote it as a UTF-8 string tag. Now --tag accepts only known tags. This is considered more foolproof, since iTunes is known to refuse to edit tags when a file contains unknown tag atoms. Read the vorbis comment "WAVEFORMATEXTENSIBLE_CHANNEL_MASK" of FLAC and treat it as the channel layout. Fixed a bug: a mono AIFF/CAF file with kAudioChannelLabel_Mono in the chan chunk could not be read. https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet) Title: QAAC: discussion, questions, feature requests, etc. Post by: robertcollier4 on 15 January, 2013, 07:53:02 AM Why does qaac seem to be slightly changing the duration of my audio file? This is the command I am running: Quote qaac.exe --tvbr 90 --quality 2 --rate keep --ignorelength S01E01.wav -o S01E01.aac I am using "--ignorelength" because usually I run this command with piped WAV input from eac3to. The result I get with MediaInfo (showing milliseconds with Debug > Advanced Mode): S01E01.wav - Duration: 1h 26mn 25s 664ms S01E01.aac - Duration: 1h 26mn 25s 728ms The audio file duration grew by 64 milliseconds - now I understand that this isn't very much --- but it is very slightly noticeably out of sync when people are talking in my remuxed video file. Why should qaac be changing the duration of a source wav file? How can I get qaac to give me an AAC file with the exact same number of milliseconds as my source WAV file? Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 15 January, 2013, 08:12:33 AM Read this: http://lame.sourceforge.net/tech-FAQ.txt (http://lame.sourceforge.net/tech-FAQ.txt) This is the FAQ of LAME, but it's mostly the same for ALL MDCT-based lossy codecs. To allow gapless playback, iTunes introduced a special tag (metadata) named "iTunSMPB" to declare the amount of encoder delay, the valid number of samples, and the padding. Players supporting it (such as fb2k or Rockbox) can play the resulting m4a files gaplessly (without any amount of delay or padding). In short, that is not a problem of qaac. Title: QAAC: discussion, questions, feature requests, etc. Post by: robertcollier4 on 15 January, 2013, 09:11:55 AM Read this: http://lame.sourceforge.net/tech-FAQ.txt (http://lame.sourceforge.net/tech-FAQ.txt) Okay, thank you for the info and the great app.
Since the documentation indicated that these encoders usually add the delay at the beginning of the file, I added a delay of "-64" to mkvmerge to compensate, and the resultant video seems to be in perfect sync again. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 15 January, 2013, 09:18:28 AM FYI, the amount of delay of the Apple LC-AAC encoder is 2112 samples (= 44ms at 48000Hz). Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 15 January, 2013, 12:02:30 PM Read this: http://lame.sourceforge.net/tech-FAQ.txt (http://lame.sourceforge.net/tech-FAQ.txt) Okay, thank you for the info and the great app. Since the documentation indicated that these encoders usually add the delay at the beginning of the file, I added a delay of "-64" to mkvmerge to compensate, and the resultant video seems to be in perfect sync again. Just as a note: mkvmerge should be able to automatically read and apply the delay from the aforementioned iTunSMPB tag. You have to output to m4a instead of ADTS AAC, though. (There have been some changes to mkvmerge's mp4 handling recently, so I suggest you test the result once to see if it still works reliably. Get the newest pre from here (http://www.bunkus.org/videotools/mkvtoolnix/win32/pre/).) @nu774 Maybe you could add a "--nodelay" parameter to qaac. Since you have already implemented a delay switch, I guess it's trivial for you to implement, and it could be useful for video encoding, where gapless playback does not matter. Title: QAAC: discussion, questions, feature requests, etc. Post by: robertcollier4 on 15 January, 2013, 02:18:03 PM Just as a note: mkvmerge should be able to automatically read and apply the delay from the aforementioned iTunSMPB tag. You have to output to m4a instead of ADTS AAC, though. (There have been some changes to mkvmerge's mp4 handling recently, so I suggest you test the result once to see if it still works reliably. Get the newest pre from here (http://www.bunkus.org/videotools/mkvtoolnix/win32/pre/).) Okay, great. I found that the file created by qaac did have an iTunSMPB tag; how to decipher it is explained here (http://yabb.jriver.com/interact/index.php?topic=63215). tool qaac 2.11, CoreAudioToolbox 7.9.7.9, AAC-LC Encoder, TVBR q91, Quality 96 iTunSMPB 00000000 00000840 000003C0 000000000ED61800 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 Encoder Delay = 0x00000840 = 2112 samples = 44ms Padding = 0x000003C0 = 960 samples = 20ms Original Sample Count = 0x000000000ED61800 = 248911872 samples = 5185664ms So the m4a file produced by qaac does contain all the information, perfectly encoded. And the original sample count matches the original wav file. The 64ms extra is accounted for by the "Encoder Delay" and "Padding". I tried the latest mkvtoolnix-unicode-5.9.0-build20130108-490. When adding the m4a file with this iTunSMPB tag into mmg.exe (mkvmerge GUI), it does not auto-populate the "Delay (in ms)" field in the "Format specific options" tab. The "Delay (in ms)" field remains blank even when loading an m4a file with the above iTunSMPB tag. I have to manually type in "-64" to have it compensate for the 44ms+20ms delays introduced. It would be nice if mkvmerge would automatically add a negative delay corresponding to "Encoder Delay" + "Padding". Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 15 January, 2013, 02:31:15 PM 1. Neither mkvmerge nor mkvmerge GUI displays any information about applying the delay, but they still do it in the background. Also note that mkvmerge applies negative delays by dropping all packets that would start before 0 and delaying the first packet that does not get dropped if necessary, so tools like MediaInfo might show a small (smaller than the length of a packet) positive delay. 2. You must not add the padding to the delay value. Padding is at the end of the file. 3. The iTunSMPB tag info is only correct for LC.
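Decoding those iTunSMPB fields can be reproduced in a few lines of Python (field layout as deciphered above; values taken from the post, 48 kHz assumed):
Code: [Select]
fields = "00000000 00000840 000003C0 000000000ED61800".split()

delay = int(fields[1], 16)    # encoder delay in samples -> 2112
padding = int(fields[2], 16)  # trailing padding in samples -> 960
length = int(fields[3], 16)   # valid (original) sample count -> 248911872

rate = 48000
print(delay * 1000 // rate)    # 44 ms
print(padding * 1000 // rate)  # 20 ms
print(length * 1000 // rate)   # 5185664 ms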
Title: QAAC: discussion, questions, feature requests, etc. Post by: robertcollier4 on 15 January, 2013, 03:20:02 PM Thank you sneaker, very helpful. I just found that qaac was always outputting to the m4a container, but that I can force it to output ADTS AAC with the "--adts" option. If I output to ADTS AAC with qaac's "--adts" option, does the encoder delay also get added in that scenario? I can't find a tool that can give me millisecond-precise length information about an ADTS AAC file to see whether my created ADTS AAC file is longer than my WAV or not. I would prefer to create ADTS AAC and just use that directly inside mkvmerge if it means that I do not have to muck around with any delays or paddings and am assured I am retaining perfect millisecond alignment in my resulting video file. For some reason I feel that mkvmerge was not auto-applying the 44ms delay (negatively) from the iTunSMPB field of the m4a file. Also, when I didn't manually specify a negative delay, mkvmerge created an mkv file for me that had slightly different video and audio durations as shown in MediaInfo, and the audio seemed a tiny bit off. The best scenario would be if I could give mkvmerge an audio and a video file with the exact same millisecond duration. Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 15 January, 2013, 04:03:47 PM ADTS files also suffer from delay and padding, but since there is no container to write the delay and padding information into, you have to take care of it yourself by doing one of the following things: 1.) enter the delay manually into mkvmerge 2.) cut a bit from the input file beforehand, e.g. cut the first 44 ms (*) with a wave editor, or use the new --delay function of qaac. (Don't do either of these with m4a output, though!) (*) it is 2112 samples for LC, so you have to calculate the delay for different sample rates: 2112 / 48 (kHz) = 44 (ms) 2112 / 44.1 (kHz) = 48 (ms) etc. I would ignore the information about the mkv durations for now, as they are more irritating than helpful. Quote For some reason I feel that mkvmerge was not auto-applying the 44ms delay (negatively) from the iTunSMPB field of the m4a file. Feeling does not help; you have to make proper tests. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 15 January, 2013, 08:14:01 PM Maybe you could add a "--nodelay" parameter to qaac. Since you have already implemented a delay switch, I guess it's trivial for you to implement, and it could be useful for video encoding, where gapless playback does not matter. Well, by --nodelay, what kind of implementation do you have in mind? Is that the same as "--delay=-2112"? (--delay just chops the beginning / prepends silence before feeding input to the encoder.) Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 15 January, 2013, 08:22:25 PM Yes, basically. Of course, also use the correct value for non-LC and write a 0 ms delay into the iTunSMPB tag (or none at all). Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 15 January, 2013, 08:26:31 PM Yes, basically. Of course, also use the correct value for non-LC and write a 0 ms delay into the iTunSMPB tag (or none at all). OK. However, the "correct value for non-LC" is not that simple. Please read: http://www.hydrogenaudio.org/forums/index....showtopic=98450 (http://www.hydrogenaudio.org/forums/index.php?showtopic=98450) Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 15 January, 2013, 08:30:26 PM Thanks for the link.
I remember bringing up the issue some time ago, when we still regarded this as a simple error on Apple's and Fraunhofer's part and only Nero seemed to make any sense. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 15 January, 2013, 10:18:06 PM For --no-delay, I'm thinking of implementing it as follows: 1. Prepend 960 samples of silence to the beginning (before sending to the encoder). Then the total amount of delay becomes 960 + 2112 = 3072 = 3 * 1024. For SBR, these numbers are doubled. 2. Drop the first 3 AAC frames after encoding. This method has a danger of introducing some pops/clicks, but it can reduce the zeroes at the beginning when decoded, and the beginning of the input can (hopefully) be more or less restored (instead of just being discarded). Any comments? You can try this with the attached experimental implementation: [attachment=7286:qaac.zip] Title: QAAC: discussion, questions, feature requests, etc. Post by: robertcollier4 on 16 January, 2013, 01:54:53 AM For --no-delay, I'm thinking of implementing it as follows: 1. Prepend 960 samples of silence to the beginning (before sending to the encoder). Then the total amount of delay becomes 960 + 2112 = 3072 = 3 * 1024. For SBR, these numbers are doubled. 2. Drop the first 3 AAC frames after encoding. This method has a danger of introducing some pops/clicks, but it can reduce the zeroes at the beginning when decoded, and the beginning of the input can (hopefully) be more or less restored (instead of just being discarded). Any comments? You can try this with the attached experimental implementation: (Attachment) @nu774 - Just tested qaac 2.11_NO_DELAY_TESTING, and it is working great. The m4a file produced with the "--no-delay" flag is the exact same number of milliseconds as the source WAV. Remuxing then gives me a perfectly synced and working video. Also, it seems that the resultant file no longer has end padding, since the beginning silence you are introducing + encoder delay results in a size aligned to a byte(?) boundary, thus QuickTime not adding end padding anymore, correct? Perhaps a bit more detail in the flag description would help, referencing that this is about the delay that iTunSMPB talks about. Such as follows: Quote --no-delay QuickTime normally adds an encoder delay that is recorded in the iTunSMPB tag. This flag compensates for the encoder delay by trimming a few frames from the beginning, and then does not write iTunSMPB. This option is mainly intended for video; don't use it if you don't understand what you are doing. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 16 January, 2013, 03:18:29 AM Well, you might be misunderstanding. There's no way to remove trailing padding from the AAC bitstream itself. The resultant length will always be a multiple of 1024 (1024 = the frame length of AAC). If you get the exact length, it is because the decoder/demultiplexer you use takes care of iTunSMPB and removes the trailing padding. Having said that, --no-delay surely produces a zero-delay AAC bitstream, and that would be enough for video even with trailing padding. To be on the safer side, use LC-AAC only. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 16 January, 2013, 03:27:58 AM If you get the exact length, it is because the decoder/demultiplexer you use takes care of iTunSMPB and removes the trailing padding. Well, or it might be because of edts... qaac doesn't put enough information into edts to achieve gapless playback, but the valid length (in samples) is written there.
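The arithmetic behind the scheme nu774 describes a few posts up fits in a couple of lines of Python (numbers straight from his description; illustrative only, not qaac's code):
Code: [Select]
FRAME = 1024  # AAC frame length in samples
DELAY = 2112  # Apple LC-AAC encoder delay

pad = -DELAY % FRAME                     # silence to prepend -> 960
frames_to_drop = (DELAY + pad) // FRAME  # whole frames to discard -> 3
print(pad, frames_to_drop)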
Title: QAAC: discussion, questions, feature requests, etc. Post by: robertcollier4 on 16 January, 2013, 04:37:30 AM My source comes from ac3 5.1 files which are downmixed to 2.0 through eac3to. In the case of source ac3 files, it seems that the sample counts are already divisible by 1024, probably because ac3 has a similar frame size requirement. So now I can do input.ac3 > eac3to -downDpl > qaac --no-delay > output.m4a with the exact same length as the original ac3. Before, if the input sample count was already divisible by 1024 (as it already is for many files which may be used as input), it was adding 2112 samples to the beginning and 960 samples to the end. Now, with --no-delay, if the input sample count is already divisible by 1024, then the output file is the exact same length as the input. Otherwise, if the input file sample count is not divisible by 1024, the output file will have a few extra samples of silence padded onto the end until the sample count is divisible by 1024. Either way, the beginning sync with the video stays intact. If a person wants to produce an output file with the exact same length as the input file, they know that it is now possible by making the source have a sample count divisible by 1024 and running qaac with "--no-delay". Thanks for helping me get perfectly aligned audio! Cheers. Title: QAAC: discussion, questions, feature requests, etc. Post by: robertcollier4 on 16 January, 2013, 05:30:35 AM Just a suggestion: it would be helpful to add the following text to this page (https://github.com/nu774/qaac/wiki/Installation). For a long time I was having to run QuickTimeInstaller.exe or AppleApplicationSupport.msi on my computers to use qaac; only recently did I find the fully portable solution via this forum. It would be helpful if text such as the following were included in the documentation's Installation page. Quote In lieu of installing Apple Application Support, you may use qaac in a completely portable manner by including the following required files in a directory named "QTfiles" in the same location as qaac.exe: QTfiles\ASL.dll QTfiles\CoreAudioToolbox.dll QTfiles\CoreFoundation.dll QTfiles\icudt46.dll QTfiles\icuin40.dll QTfiles\icuuc40.dll QTfiles\libdispatch.dll QTfiles\libicuin.dll QTfiles\libicuuc.dll QTfiles\objc.dll QTfiles\pthreadVC2.dll QTfiles\Microsoft.VC80.CRT\Microsoft.VC80.CRT.manifest QTfiles\Microsoft.VC80.CRT\msvcp80.dll QTfiles\Microsoft.VC80.CRT\msvcr80.dll There is a script available on the download page named makeportable.cmd that will extract these necessary files for you from the installer package QuickTimeInstaller.exe, available from Apple. To use it, place both makeportable.cmd and QuickTimeInstaller.exe in a common directory and run makeportable.cmd; it will extract the required files into a QTfiles subdirectory. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 16 January, 2013, 06:38:56 AM Please read from #280 in this thread. I know many feel comfortable with the portable version, and you are free to use it... but I hesitate to recommend it officially. It's Apple's encoder, they created it, and qaac is merely utilizing it (although I'm not an Apple fan-boy or something). Title: QAAC: discussion, questions, feature requests, etc. Post by: sluggy on 16 January, 2013, 08:24:47 AM [qaac] release 2.12 (refalac 1.12) posted 16 minutes ago by nu 774 Add --no-delay option. (Read the discussion in the HA thread from here.)
--no-delay will compensate for the encoder delay (2112 samples) by prepending 960 samples of silence before sending input to the encoder, then trimming 3 AAC frames at the beginning (2112 + 960 = 3072 = 1024 * 3, where 1024 is the frame length of AAC, so the total amount of delay will exactly equal the length of 3 AAC frames). Note that these numbers are doubled in the case of SBR. This option is meant for video, as a means to resolve A/V sync issues. The resultant AAC will have exactly zero delay, but might have pops/clicks at the beginning. Use with care. Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 16 January, 2013, 05:47:50 PM Actually, I'd rather have it implemented as cutting off the delay from the input, so as not to have any clicks/pops, since we can't have gapless playback with this anyway. But this is also OK - I guess I just didn't answer fast enough. Title: QAAC: discussion, questions, feature requests, etc. Post by: gottogo99 on 16 January, 2013, 10:00:28 PM I tried the --no-delay option on a video encode using VirtualDub 1.10.3 with x264 and qaac as external encoders and MP4Box as the muxer, all 32 bit, on a Windows 7 64 bit system. Per MediaInfo 0.7.61, the original video length was 23.790 secs. Without the --no-delay option, the encoded video is 23.872 sec. With the --no-delay option it is 23.807 sec. I was expecting it to exactly match the original. Thoughts? (Great tool, btw.) Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 16 January, 2013, 11:29:03 PM Per MediaInfo 0.7.61, the original video length was 23.790 secs. Without the --no-delay option, the encoded video is 23.872 sec. With the --no-delay option it is 23.807 sec. I was expecting it to exactly match the original. Thoughts? Your expectation is simply wrong. Read post #335. Title: QAAC: discussion, questions, feature requests, etc. Post by: robertcollier4 on 17 January, 2013, 03:32:06 AM I tried the --no-delay option on a video encode using VirtualDub 1.10.3 with x264 and qaac as external encoders and MP4Box as the muxer, all 32 bit, on a Windows 7 64 bit system. Per MediaInfo 0.7.61, the original video length was 23.790 secs. Without the --no-delay option, the encoded video is 23.872 sec. With the --no-delay option it is 23.807 sec. I was expecting it to exactly match the original. Thoughts? Since the AAC LC frame size is 1024 samples, the "qaac --no-delay" encoded audio file will have silence padding added onto the end of it until the last AAC frame is full (so that the sample count becomes divisible by 1024). If you give "qaac --no-delay" an input file with a sample count already divisible by 1024, then the duration exactly matches the original. You can get "Samples count" in MediaInfo in "Debug > Advanced Mode".
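The trailing padding robertcollier4 describes is just rounding up to the next frame boundary; a two-line Python illustration (the sample count is a made-up example):
Code: [Select]
def padding(samples, frame=1024):
    # Samples of silence appended so the last AAC frame is full.
    return -samples % frame

print(padding(1000000))  # -> 448 extra samples at the end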
Title: QAAC: discussion, questions, feature requests, etc. Post by: robertcollier4 on 17 January, 2013, 04:19:48 AM @nu774 - I am getting a discrepancy which I think may be a bug in qaac 2.12 when you use "--no-delay" with an input that doesn't have a sample count already divisible by 1024. The numbers are correct if my input file has a sample count divisible by 1024, but the numbers don't add up when "qaac --no-delay" is given a file with a sample count not divisible by 1024. See below the test case:
Input: S01E12.wav Samples count : 127772160 Duration(ms) : 2661920
Console Command >qaac.exe --tvbr 127 --quality 2 --rate keep --no-delay "S01E12.wav" -o "S01E12.m4a"
qaac 2.12, CoreAudioToolbox 7.9.8.2 S01E12.m4a AAC-LC Encoder, TVBR q127, Quality 96 [100.0%] 44:21.940/44:21.940 (28.8x), ETA 0:00.000 127773120/127773120 samples processed in 1:32.468 Overall bitrate: 250.912kbps
Output: S01E12.m4a Samples count : 127772688 Duration(ms) : 2661931
iTunSMPB(hex) 00000000 00000000 00000200 00000000079DA600
iTunSMPB(dec) 00000000 00000000 512 127772160
PROBLEM: The number of samples in S01E12.m4a should be 127772160 + 512 = 127772672, which is divisible by 1024. Instead, S01E12.m4a has a sample count of 127772688, which is wrong and also not divisible by 1024. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 17 January, 2013, 04:54:44 AM Thanks for reporting; I will look into it. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 17 January, 2013, 05:40:13 AM Ah, it seems that I missed your point. I don't know why MediaInfo shows something like that as "Sample Count", but I think it's not a problem of qaac. You can dump the mp4 file structure with tools like mp4box or boxdumper (from L-SMASH), which are FAR more reliable than MediaInfo when you want to inspect MP4. You can see the AAC frame count with the following command in the case of mp4box: Code: [Select] mp4box -std -diso foo.m4a | grep SampleCount (On Windows, use "findstr" instead of grep. "Sample" is a bit confusing, but this "Sample" means an AAC frame.) For duration, you can similarly do the following: Code: [Select] mp4box -std -diso foo.m4a | grep Duration This will give you several lines. The durations of MovieHeaderBox (mvhd), MediaHeaderBox (mdhd) and TrackHeaderBox (tkhd) will all be the same, and will be equal to 1024 * SampleCount. You will also get an EditListEntry, whose duration gives you the actual valid length, and which is equal to the value inside iTunSMPB. Title: QAAC: discussion, questions, feature requests, etc. Post by: Anakunda on 17 January, 2013, 05:45:11 AM Should I understand this as qaac breaking gapless playback? I did not encounter any length mismatches so far (using qaac via pipe from foobar) Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 17 January, 2013, 05:54:50 AM Should I understand this as qaac breaking gapless playback? I did not encounter any length mismatches so far (using qaac via pipe from foobar) Don't use --no-delay; without --no-delay, everything is the same as before. --no-delay is just a HACK which tries to resolve the A/V sync issue of video, which should properly be solved by the container/demultiplexer of the video in the first place, and has nothing to do with an AAC encoder like qaac. Title: QAAC: discussion, questions, feature requests, etc. Post by: Anakunda on 17 January, 2013, 06:02:12 AM Should I understand this as qaac breaking gapless playback? I did not encounter any length mismatches so far (using qaac via pipe from foobar) Don't use --no-delay; without --no-delay, everything is the same as before. --no-delay is just a HACK which tries to resolve the A/V sync issue of video, which should properly be solved by the container/demultiplexer of the video in the first place, and has nothing to do with an AAC encoder like qaac.
I have always used a custom positive or negative delay for AAC conversions from AC3 or DTS, as the audio was always badly out of sync with the default delay (50-200ms in most cases). Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 17 January, 2013, 06:29:08 AM I have always used a custom positive or negative delay for AAC conversions from AC3 or DTS, as the audio was always badly out of sync with the default delay (50-200ms in most cases). If your input has some amount of DTS/PTS offset, of course you have to take it into account. However, what --no-delay does has nothing to do with an offset already in the source. --no-delay kills the additional (fixed amount of) delay introduced by the AAC encoder, and that's all. You shouldn't use this for ordinary music encoding, or when you can solve the A/V sync issue at the video container level (by specifying the amount of audio delay or something) and your player (demultiplexer) properly supports it. Title: QAAC: discussion, questions, feature requests, etc. Post by: robertcollier4 on 17 January, 2013, 06:46:54 AM I don't know why MediaInfo shows something like that as "Sample Count", but I think it's not a problem of qaac. You can dump the mp4 file structure with tools like mp4box or boxdumper (from L-SMASH), which are FAR more reliable than MediaInfo when you want to inspect MP4. Thanks so much for the explanation. "mp4box -diso" does show the correct value of 127772672, whereas MediaInfo shows 127772688 for the same file. It seems that MediaInfo is reporting the wrong values, and I have reported this bug on the MediaInfo forum (https://sourceforge.net/p/mediainfo/discussion/297609/thread/8bbad9a2/). "mp4box.exe -diso" shows the millisecond duration under "SampleCount" and the sample count under "Duration" in its XML output. The XML tag names are switched in mp4box, but the values are right. Title: QAAC: discussion, questions, feature requests, etc. Post by: sluggy on 25 January, 2013, 07:09:45 AM [qaac] release 2.13 (refalac 2.13) posted 4 hours ago by nu 774 Gracefully shut down on a console interrupt event (such as Ctrl+C, Ctrl+Break or closing the console window). "Gracefully" means that it stops encoding immediately, as if it were the end of input, and properly finalizes the container, so the resulting file will be playable (up to that point). Of course, it is not the case that qaac can terminate gracefully in every possible situation. You can always forcefully kill qaac using the task manager or something. https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet) Title: QAAC: discussion, questions, feature requests, etc. Post by: qupfer on 28 January, 2013, 10:24:23 AM Can somebody give me some quick help? I want to convert 5.1 AC3 to 5.1 AAC, but I don't know how to "convert" the ac3 file to a supported input file for qaac. Can somebody tell me a working command line (like ffmpeg ----many options---- | qaac ---some more options---)? Thanks Title: QAAC: discussion, questions, feature requests, etc. Post by: LigH on 28 January, 2013, 10:40:20 AM More graphical with MeGUI: • External Program Configuration: Enable QAAC • Restart MeGUI, update • (Input, Audio) Audio Input [...] *.ac3 • (Input, Audio) Encoder settings: QAAC - [Config] ... • (Input, Audio) [Queue] • (Queue) Start Title: QAAC: discussion, questions, feature requests, etc. Post by: sneaker on 28 January, 2013, 10:44:36 AM Either that, or: ffmpeg -i input.ac3 -f wav - | qaac -i --adts --no-delay - -o output.aac The "--adts --no-delay" part is kinda optional, but works well for movies.
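The same pipeline can also be driven programmatically; a minimal Python sketch of sneaker's command line, assuming ffmpeg and qaac are on the PATH and the file names are placeholders:
Code: [Select]
import subprocess

# Mirrors: ffmpeg -i input.ac3 -f wav - | qaac -i --adts --no-delay - -o output.aac
ffmpeg = subprocess.Popen(
    ["ffmpeg", "-i", "input.ac3", "-f", "wav", "-"],
    stdout=subprocess.PIPE,
)
subprocess.run(
    ["qaac", "-i", "--adts", "--no-delay", "-", "-o", "output.aac"],
    stdin=ffmpeg.stdout,
    check=True,
)
ffmpeg.stdout.close()
ffmpeg.wait()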
Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 28 January, 2013, 10:49:48 AM
You can also have QAAC output *.m4a by omitting "--adts"; I would prefer it in an MP4 container if it will be multiplexed later.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 28 January, 2013, 11:30:31 AM
@sneaker
You might want to specify -acodec pcm_f32le to avoid unnecessary quantization to pcm_s16le by ffmpeg. qaac can read float WAV just fine.
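
Putting sneaker's pipe and this tip together gives something like the following; a sketch with placeholder file names, not a command taken verbatim from the thread:

Code: [Select]
ffmpeg -i input.ac3 -acodec pcm_f32le -f wav - | qaac -i --adts --no-delay - -o output.aac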
Title: QAAC: discussion, questions, feature requests, etc.
Post by: qupfer on 28 January, 2013, 11:32:51 AM
Quote
More graphical with MeGUI: [...]
Thanks, but this creates a corrupt audio file (MediaInfo Bitrate 2kbit/s), and I think the reason could be that I have some 2.0 parts in the 5.1 audio source. Can I "force" 5.1 in megui like -ac 6 in ffmpeg? Or can I convert the ac3 source losslessly to a complete 5.1 ac3 file?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LastSilmaril on 28 January, 2013, 12:17:22 PM
Quote
Thanks, but this creates a corrupt audio file (MediaInfo Bitrate 2kbit/s), and I think the reason could be that I have some 2.0 parts in the 5.1 audio source. Can I "force" 5.1 in megui like -ac 6 in ffmpeg? Or can I convert the ac3 source losslessly to a complete 5.1 ac3 file?
You could also get eac3to (recently updated) (http://forum.doom9.org/showthread.php?t=125966) and do this:
Code: [Select]
eac3to input.ac3 stdout.wav | qaac - -o "output.mp4" -i --verbose --threading
The last two options on the qaac side are of course optional. I would recommend --verbose at least, as qaac then displays the amount of channels you're dealing with. "-i" tells it to ignore wav headers (which would give you the wrong length and muck things up). On the eac3to side I didn't specify any flags, but you might want to use -down16 when using TrueHD input (let eac3to handle the dithering). If you're converting from 6.1 or 7.1 and want 5.1 output, you should also use -down6.
Quote
and i think the reason could be, that I have some 2.0 parts in the 5.1 audio source.
I'm not sure if that's technically possible or makes any sense? Having silence in every channel except for front L/R does not a 2.0 source make.
Note that eac3to uses ffmpeg to decode ac3 by default. I'm not sure if using it will help you at all. Alternatively, I have also had success with this tool: http://forum.doom9.org/showthread.php?t=165577 (http://forum.doom9.org/showthread.php?t=165577)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sneaker on 28 January, 2013, 11:52:56 PM
Quote
You can also have QAAC output *.m4a by omitting "--adts"; I would prefer it in an MP4 container if it will be multiplexed later.
Yeah, it really depends on the muxer. I sometimes use l-smash and it only supports adts, while mkvmerge supports both. And because of the "--no-delay" switch no gapless info is needed. I prefer that as it does not rely on the player using the proper delay, because some players just don't.
Quote
@sneaker
You might want to specify -acodec pcm_f32le to avoid unnecessary quantization to pcm_s16le by ffmpeg. qaac can read float WAV just fine.
Thanks for the tip, though it does not seem to make a difference for ffmpeg's ac3 decoder. Usually I use eac3to as suggested by LastSilmaril because the author does keep proper bit depth conversions in mind, though it "only" uses 24 bit by default for the output.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 29 January, 2013, 12:40:54 AM
Quote
Thanks for the tip, though it does not seem to make a difference for ffmpeg's ac3 decoder. Usually I use eac3to as suggested by LastSilmaril because the author does keep proper bit depth conversions in mind, though it "only" uses 24 bit by default for the output.
Hmm, I tried with ffmpeg just now, and it seems you are correct. Actually I was using avconv instead of ffmpeg, and it gives you a different result. Tested with the attached AC3 (encoded with intentionally high gain; it clips when quantized to int, giving you an audibly different result) [attachment=7297:sin.zip]

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 02 February, 2013, 12:04:35 PM
[qaac] release 2.14 (refalac 1.14)
posted an hour ago by nu 774
Add --cue-track option to limit tracks to extract from a cuesheet, and fixed several minor bugs.
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 05 February, 2013, 05:30:02 AM
[qaac] release 2.15 (refalac 2.15)
posted 23 minutes ago by nu 774 [ updated a minute ago ]
Fixed an awful bug of refalac of the 2.xx branch. It wasn't encoding in the correct frame length (4096 samples) in some cases. I noticed it when I encoded directly from lossyFLAC (not piped input), which resulted in a 512-samples-per-frame ALAC file. It seemed playable, but apparently is not a normal/sane ALAC file; WAV input will be fine (including piped input). Direct input from FLAC or other formats might be affected, and re-encoding is recommended. Only refalac of the 2.xx branch is affected. qaac is fine.
Use a stricter sharing mode when opening files. Now qaac/refalac doesn't allow other processes to open the output file while qaac/refalac is writing to it. Reading can be shared, but now qaac/refalac cannot open a file for reading when another process is writing to it.
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 05 February, 2013, 06:12:24 AM
Quote
Thanks for the tip, though it does not seem to make a difference for ffmpeg's ac3 decoder. Usually I use eac3to as suggested by LastSilmaril because the author does keep proper bit depth conversions in mind, though it "only" uses 24 bit by default for the output.
Quote
Hmm, I tried with ffmpeg just now, and it seems you are correct. Actually I was using avconv instead of ffmpeg, and it gives you a different result.
Tried this with the latest ffmpeg binary from http://ffmpeg.zeranoe.com/builds/ (http://ffmpeg.zeranoe.com/builds/), and now ffmpeg can correctly output without integer clipping with -acodec pcm_f32le. The version I tried seems just too old (it was built on Nov 2012 or so, therefore it was not VERY old, but ffmpeg is really a moving target).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 05 February, 2013, 06:22:33 AM
The refalac bug I fixed in 2.15 was because I changed the source or filter layer not to repeatedly pull or generate samples until it reaches the requested amount, but at the same time I didn't modify the ALAC encoder to compensate for the source/filter layer modification. As a result, nothing ensured that the ALAC encoder processed each frame as 4096 samples. Silly me! For usual cases (WAV input, with or without pipe, without DSP), the source filter would read up to the requested samples (=4096) anyway, so it did no harm.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: sneaker on 08 February, 2013, 08:14:24 AM
Quote
Tried this with the latest ffmpeg binary from http://ffmpeg.zeranoe.com/builds/ (http://ffmpeg.zeranoe.com/builds/), and now ffmpeg can correctly output without integer clipping with -acodec pcm_f32le. The version I tried seems just too old (it was built on Nov 2012 or so, therefore it was not VERY old, but ffmpeg is really a moving target).
ffmpeg also supports pcm_f64le. Would it make any sense to use it with qaac?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 08 February, 2013, 08:55:57 AM
Quote
ffmpeg also supports pcm_f64le. Would it make any sense to use it with qaac?
qaac can read f64, but I don't think it will make any sense.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sneaker on 08 February, 2013, 09:00:31 AM
Quote
ffmpeg also supports pcm_f64le. Would it make any sense to use it with qaac?
Quote
qaac can read f64, but I don't think it will make any sense.
But it won't break anything even if there's zero practical advantage?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 08 February, 2013, 09:06:23 AM
If the pipe output is stored as a temporary file somewhere with low disk space, it may break... Somehow I remember that pipes under Windows can be inefficient. But I won't swear to it. It may have been in the times of Windows 9x.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 08 February, 2013, 09:18:56 AM
Quote
But it won't break anything even if there's zero practical advantage?
It won't break anything or lose quality. However, the native decoding format of ffmpeg (for MDCT codecs) will be f32, and the same goes for the CoreAudio AAC codec. Therefore, it will simply waste time on an unnecessary float<->double conversion + increased I/O size.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sneaker on 08 February, 2013, 09:19:55 AM
I see.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 11 February, 2013, 09:40:19 AM
Will I get proper channel order for my 5.1-channel AAC file if I first decode the DTS-HD MA track to a multichannel wav file with eac3to and then encode that one with qaac? I need to decode to wav as there's sometimes clipping in the decoded output and eac3to needs a second pass to handle it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 11 February, 2013, 07:50:38 PM
Quote
Will I get proper channel order for my 5.1-channel AAC file if I first decode the DTS-HD MA track to a multichannel wav file with eac3to and then encode that one with qaac? I need to decode to wav as there's sometimes clipping in the decoded output and eac3to needs a second pass to handle it.
Read https://github.com/nu774/qaac/wiki/Multichannel--handling (https://github.com/nu774/qaac/wiki/Multichannel--handling). IIRC DTS-HD MA is a 24bit lossless format, and I don't understand why you worry about clipping. If, for some reason, eac3to detects DTS-HD MA input as clipped and lowers the gain, the process is not lossless, by definition.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 11 February, 2013, 10:36:31 PM
Sorry, I should have been a bit more clear. Clipping is sometimes detected when downmixing 7.1ch or 6.1ch to 5.1ch. I do that for movies that I put on my media player, and put the movie with the original audio track in my archive.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 11 February, 2013, 11:23:01 PM
Quote
Sorry, I should have been a bit more clear. Clipping is sometimes detected when downmixing 7.1ch or 6.1ch to 5.1ch.
Oh, I see. Makes sense.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lunkhead on 19 February, 2013, 12:00:34 AM
Duration:
qaac 1.47 q73: 3wk 0d 8:34:06.898 (81 375 388 218 samples)
FLAC and LAME: 3wk 0d 8:34:06.899 (81 375 388 229 samples)
I noticed foobar2000 incorrectly adds the total sample size of a large number of items. It's always very slightly less. It's not noticeable when the list is small. Just an observation.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 19 February, 2013, 09:45:52 AM
Do you have HE-AAC files?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 28 February, 2013, 07:44:22 PM
[qaac] release 2.16 (refalac 1.16)
posted 14 hours ago by nu 774
Read and handle multichannel layout of TAK files.
Write fact chunk when decoding into WAVEFORMATEXTENSIBLE format. As far as I can see, even WMP does not honor the fact chunk, so this would be pretty much useless. However, since it looks like the RIFF/WAV spec requires it in WAVEFORMATEXTENSIBLE, this was implemented to be more spec compliant. The fact chunk is not written on piped output or in WAVEFORMATEX format.
Automatically kill the progress message when stderr is connected to nothing.
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: hellside on 03 March, 2013, 09:46:56 PM
I just ran into QAAC, and I am wondering which is better, TVBR127 or CVBR256? Some say that CVBR sometimes reaches 400 kbps. Just need a quick word, thanks!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 03 March, 2013, 10:02:39 PM
TVBR 127 = ~320kbps. You should ABX, you'll be surprised how low you can go with AAC. I use -V73 (~150kbps) just because I want to use "transparent for me" +1. I like AAC @~128kbps and since -V63 gives ~135kbps (almost all the time around ~100kbps with my music) I use -V63 + 1 (just in case) = -V73.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 04 March, 2013, 03:07:22 AM
I know what you mean ... everything was fine with "oggenc2 -q 4" until I heard echoes in the break beats of "Prodigy: Your Love (Remix) — Experience". There are always extraordinary exceptions.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: o-l-a-v on 19 March, 2013, 09:02:05 AM
[qaac] release 2.17 (refalac 1.17)
posted 13 minutes ago by nu 774
Can't find the changelog yet
"Fixed github issue 27 (regression on 1.26). --decode was writing an invalid wav file.
Added --gapless-mode option (same as fdkaac). Interestingly, iTunes seems to support both iTunSMPB and the ISO standard gapless mode. QuickTime supports only the latter. In the past, I thought QT silently assumes 2112 samples of delay. However, it turned out that QT actually looks at elst media_time when sbgp and sgpd are present, so it can be used generally (as described in the QTFF spec). As far as I know, iTunes is the only music player that supports gapless playback in both ways."
Download: https://sites.google.com/site/qaacpage/cabi...rects=0&d=1 (https://sites.google.com/site/qaacpage/cabinet/qaac_2.17.zip?attredirects=0&d=1)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ktf on 19 March, 2013, 10:02:35 AM
Quote
Fixed github issue 27 (regression on 1.26).
--decode was writing an invalid wav file.
I reported that issue a few hours ago and apparently it's already fixed? That's really, really fast!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 19 March, 2013, 10:19:43 AM
Quote
Fixed github issue 27 (regression on 1.26). --decode was writing an invalid wav file.
I reported that issue a few hours ago and apparently it's already fixed? That's really, really fast!
Thanks for reporting. It was simply due to the missing initialization of one new boolean flag variable introduced on 1.26, and was quite easy to fix. Maybe it could have been caught earlier by a compiler warning if I had killed the tons of stupid C4244 warnings or something (casting to a smaller type).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 19 March, 2013, 11:26:20 AM
1.26 or 2.16?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 19 March, 2013, 12:22:42 PM
Quote
1.26 or 2.16?
Oh, sorry. I meant 2.16.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: me7 on 19 March, 2013, 02:42:03 PM
...so, if I want gapless playback to work across most players/devices, what --gapless-mode should I choose? The default is "iTunSMPB only".
EDIT: To be clearer: Why is "iTunSMPB only" preferred over "both"?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 19 March, 2013, 09:10:32 PM
Quote
...so, if I want gapless playback to work across most players/devices, what --gapless-mode should I choose? The default is "iTunSMPB only".
EDIT: To be clearer: Why is "iTunSMPB only" preferred over "both"?
It's the default not because it's considered superior, but because it's backward compatible and conservative. Probably "both" will do no harm, but I cannot assure that. Other encoders (including iTunes) only write iTunSMPB.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: me7 on 20 March, 2013, 07:32:47 AM
You've got a point. Even if the ISO specification says something else, it might be better to do "what all other encoders do"
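
For reference, a sketch of how the option might be used. The mode numbering here (0 for iTunSMPB only, the default; 1 for ISO edts/sgpd; 2 for both) is taken from fdkaac's matching option, which nu774 says this one mirrors, and should be verified against qaac --help:

Code: [Select]
qaac --gapless-mode 2 --tvbr 91 input.wav -o output.m4a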
Title: QAAC: discussion, questions, feature requests, etc.
Post by: db1989 on 21 March, 2013, 10:00:47 AM
I just removed blatant spam, and a full quote thereof by a regular member, from a user whose name began with "laptop" and whose post consisted of a 1.5-year-old paragraph by nu774 copied and pasted, with a link to an external website added. Please think about things such as usernames and links before replying!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 21 March, 2013, 10:19:39 AM
Some ad bots use this behaviour (quoting a previous posting) to cover their activity by pretending to refer to it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: robertcollier4 on 31 March, 2013, 04:59:45 PM
Is it possible to use qaac as an audio encoder in GraphEdit? I would like to connect the ffdshow audio decoder output to the qaac encoder. I am currently having to do ffdshow audio decoder --> WAV Dest --> File writer, then convert the WAV file on the hard drive with qaac. Given that qaac can already accept piped input - is there any way to set this up in graphedit to pass its output to a command-line program? If not, then please add it as a feature request to have a qaac output filter for GraphEdit. Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 March, 2013, 10:19:15 PM
Quote
Given that qaac can already accept piped input - is there any way to set this up in graphedit to pass its output to a command-line program?
It might be possible by creating a special audio writer/renderer filter or something to stream audio to a pipe, but I've never heard of such things.
Quote
If not, then please add it as a feature request to have a qaac output filter for GraphEdit. Thanks.
Why don't you just pipe from ffmpeg instead of graphedit+ffdshow? I don't feel like writing dshow encoder filters.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 01 April, 2013, 10:21:09 PM
Code: [Select]
-V63 --adts --no-delay -o %d -
gives me "Conversion failed: Unsupported format or corrupted file". What am I missing?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 02 April, 2013, 12:22:24 AM
Quote
gives me "Conversion failed: Unsupported format or corrupted file". What am I missing?
Try with an older version of fb2k.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 02 April, 2013, 01:32:53 AM
I went all the way back to 1.1.18 but nothing, same error. (http://i.imgur.com/oSiSUoW.png)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 02 April, 2013, 02:28:28 AM
Quote
I went all the way back to 1.1.18 but nothing, same error.
It seems fb2k tries to copy tags onto the resulting ADTS file after encoding, and fails. You can try converting first to an intermediate file, then converting it to ADTS. Use AIFF (for example) as the intermediate file format, since fb2k doesn't support tags on AIFF. Otherwise, you can cheat fb2k by passing qaac "-o %d.aac" or something as the output filename (an extra ".aac" extension is appended to the output filename). fb2k will show an error, but the resulting file will not be removed by fb2k, since it's named differently from what fb2k expects.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sohm on 09 April, 2013, 11:13:55 AM
nu774, firstly many thanks for qaac! I've encountered a rather strange glitch trying to encode in HE-AAC: the output file's length (in seconds) is exactly 50% of the input. Meanwhile, the qaac.exe command line output displays the correct processed length in seconds; also no complaints in --verbose mode, except that it runs suspiciously fast.
Code: [Select]
..\qaac_2.17\qaac>qaac --verbose --he test1.wav
qaac 2.17, CoreAudioToolbox 7.9.8.2
test2.m4a
Using default channel layout.
Output layout: Stereo
AAC-HE Encoder, CVBR 80kbps, Quality 96
[100.0%] 7:11.506/7:11.506 (28.8x), ETA 0:00.000
19029444/19029444 samples processed in 0:15.000
Overall bitrate: 88.272kbps
423/423 chunks written (optimizing)

..\qaac_2.17\qaac>qaac --verbose test2.wav
qaac 2.17, CoreAudioToolbox 7.9.8.2
test2.m4a
Using default channel layout.
Output layout: Stereo
AAC-LC Encoder, TVBR q91, Quality 96
[100.0%] 7:11.506/7:11.506 (25.1x), ETA 0:00.000
19029444/19029444 samples processed in 0:17.234
Overall bitrate: 213.081kbps
423/423 chunks written (optimizing)
test1.m4a 4.57MB (4 802 231 bytes) 3:35.753 (9514722 samples)
test2.m4a 11.0MB (11 571 769 bytes) 7:11.507 (19029444 samples)
9514722*2=19029444 )))
Apart from this issue with --he, AAC-LC performs flawlessly in both --tvbr and --cvbr modes. Any suggestions much appreciated))

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Big_Berny on 09 May, 2013, 12:12:06 PM
Hi, I'm trying to use qaac for transcoding in subsonic (http://subsonic.org). I want to transcode my 256Kbit/s AAC to something like 96 KBit/s AAC.
I know that's evil, but I need it for streaming when I have a bad mobile connection. I tried this transcoding command:
Step 1: faad -w %s
Step 2: qaac -a %bk --adts - -
It seems to work, but at the end of transcoding I always get an error message "Cannot seek back the input". Does someone have an idea why this happens?
Code: [Select]
...SHORTENED...
[5/9/13 5:57:11 PM CEST] DEBUG InputStreamReaderThread (c:\subsonic\transcode\faad) 99% decoding D:\Users\Admin\Music\iTunes\iTunes Media\Music\Absolute Beginner\The Early Years 1992-1994\05 Planet 2000.m4a.
[5/9/13 5:57:11 PM CEST] DEBUG InputStreamReaderThread (c:\subsonic\transcode\faad) Decoding D:\Users\Admin\Music\iTunes\iTunes Media\Music\Absolute Beginner\The Early Years 1992-1994\05 Planet 2000.m4a took: 6.40 sec. 45.19x real-time.
[5/9/13 5:57:11 PM CEST] DEBUG InputStreamReaderThread (c:\subsonic\transcode\qaac) 4:46.511 (45.3x)
[5/9/13 5:57:11 PM CEST] DEBUG InputStreamReaderThread (c:\subsonic\transcode\qaac) 4:49.297 (45.2x)
[5/9/13 5:57:11 PM CEST] DEBUG InputStreamReaderThread (c:\subsonic\transcode\qaac) 12758016/-1 samples processed in 0:06.395
[5/9/13 5:57:11 PM CEST] DEBUG InputStreamReaderThread (c:\subsonic\transcode\qaac) Overall bitrate: 110.335kbps
[5/9/13 5:57:11 PM CEST] DEBUG InputStreamReaderThread (c:\subsonic\transcode\qaac)
[5/9/13 5:57:11 PM CEST] DEBUG InputStreamReaderThread (c:\subsonic\transcode\qaac) stdin.aac
[5/9/13 5:57:11 PM CEST] DEBUG InputStreamReaderThread (c:\subsonic\transcode\qaac) ERROR: Cannot seek back the input

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 09 May, 2013, 12:31:28 PM
Try "qaac -a %bk --adts - -o -" (if you want qaac to output to stdout)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Big_Berny on 09 May, 2013, 07:18:49 PM
Great, thanks a lot! The only problem I have now is that I can't seek. The streaming app dsub shows the correct length but the progress bar doesn't work.
Ok, this one is fixed too. The problem is solved when the song is fully loaded/cached.
Last thing: afaik replaygain is lost when decoding aac, right? Is it possible to change the volume when decoding mp3 and aac by adding the replaygain correction directly to the raw stream? As I read, it's possible to change the volume when decoding a flac to wave with the flac codec, but is this also possible with a decoder for mp3 and aac?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Big_Berny on 10 May, 2013, 05:42:56 AM
Quote
Last thing: afaik replaygain is lost when decoding aac, right? Is it possible to change the volume when decoding mp3 and aac by adding the replaygain correction directly to the raw stream? As I read, it's possible to change the volume when decoding a flac to wave with the flac codec, but is this also possible with a decoder for mp3 and aac?
Made a new topic as it has nothing to do with qaac anymore. (Unfortunately I can't edit my last post anymore.)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Pulstar on 23 May, 2013, 08:29:38 PM
I've noticed that with Apple Application Support v2.3.4 the encoding is slower even with multithreading enabled. Not that big of an issue considering audio takes fewer cycles to encode than video, but I'm curious as to why. Perhaps it's just my machine?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: deej_1977 on 15 June, 2013, 04:01:07 PM
--- Problems that solve themselves are great, carry on, as you were :-) ---

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 03 July, 2013, 10:52:38 AM
I'm having problems encoding the audio track of "Edward Scissorhands" with qaac. eac3to reports it as 3/1, which I believe means FL, C and FR channels for the front and BC for the back. qaac assumes "FL FR BL BR", which causes incorrect channel mapping. I couldn't find what chanmask parameter I should use, so can you help me with this?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 03 July, 2013, 11:10:48 AM
Quote
I'm having problems encoding the audio track of "Edward Scissorhands" with qaac. eac3to reports it as 3/1, which I believe means FL, C and FR channels for the front and BC for the back. qaac assumes "FL FR BL BR", which causes incorrect channel mapping. I couldn't find what chanmask parameter I should use, so can you help me with this?
If it is actually "FL FR FC BC" but the channel mask is just missing in the wav header, use --chanmask=0x107.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 03 July, 2013, 11:21:38 AM
You can calculate the channel mask by summing the following values. In this case, FL + FR + FC + BC == 1 + 2 + 4 + 256 == 263 == 0x107. You can give either a hex value (with 0x prefix) or a decimal value to the --chanmask option.
Code: [Select]
FL  == 1<<0  == 1    == 0x1
FR  == 1<<1  == 2    == 0x2
FC  == 1<<2  == 4    == 0x4
LF  == 1<<3  == 8    == 0x8
BL  == 1<<4  == 16   == 0x10
BR  == 1<<5  == 32   == 0x20
FLC == 1<<6  == 64   == 0x40
FRC == 1<<7  == 128  == 0x80
BC  == 1<<8  == 256  == 0x100
SL  == 1<<9  == 512  == 0x200
SR  == 1<<10 == 1024 == 0x400

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 03 July, 2013, 01:51:50 PM
Thanks, that table is really useful. Got it working now
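
Tying the flag table together: a hypothetical helper (mine, not from the thread) that ORs the per-channel bits from nu774's table to produce the --chanmask value for Boulder's layout.

Code: [Select]
# Python
FLAGS = {"FL": 0x1, "FR": 0x2, "FC": 0x4, "LF": 0x8, "BL": 0x10, "BR": 0x20,
         "FLC": 0x40, "FRC": 0x80, "BC": 0x100, "SL": 0x200, "SR": 0x400}
mask = 0
for ch in ("FL", "FR", "FC", "BC"):   # the 3/1 layout discussed above
    mask |= FLAGS[ch]
print(hex(mask))                      # 0x107 -> use --chanmask=0x107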
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 13 July, 2013, 12:32:59 AM
[qaac] release 2.19 (refalac 1.19)
posted 2 hours ago by nu 774
Fixed: an attempt to set one of the stik, rtng, akID, sfID tags caused qaac to hang. Well, actually not hanging but waiting for console input in vain... due to a silly bug calling scanf() instead of sscanf().
Fixed: --tag akID:fra was writing the USA country code (not France).
https://sites.google.com/site/qaacpage/ (https://sites.google.com/site/qaacpage/)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 24 July, 2013, 12:55:58 PM
qaac.exe --tvbr 87 --quality 2 --ignorelength - -o f:\temp\captures\test_qaac.m4a
Is the ignorelength switch ever really needed?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 25 July, 2013, 08:29:11 AM
Quote
Is the ignorelength switch ever really needed?
Well, it depends. If you are encoding from fb2k, and if the blockalign (length of one sample, in bytes) of the input is even (usually it will be even), qaac will work as if --ignorelength is specified, and surely you don't need to explicitly specify that switch. More precisely, qaac will work in ignorelength mode in the following cases:
• Length of data (in the header) is equal to zero.
• Length of data is not divisible by blockalign.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Carsi on 05 August, 2013, 07:38:50 AM
Hi. I wanted to ask if I can convert from FLAC/ALAC to m4a without creating a temporary wav first? It really slows down the process. My parameters:
--tvbr 53 --no-optimize -o %d %s
I tried to remove %s but then it displays an error message? And yes, I went down to tvbr 53; I think it sounds great, and I want to store as much as possible on my 32gb iphone 5.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: detmek on 05 August, 2013, 10:16:25 AM
If you are using foobar try this:
--tvbr 53 --no-optimize --ignorelength - -o %d
You need to replace %s with -.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 05 August, 2013, 12:06:42 PM
And the number is 54 anyway, to be even more precise.
Code: [Select]
AAC (Apple True VBR / qaac)
Q0 - Q4 (0) = ~40 kbps
Q5 - Q13 (9) = ~45 kbps
Q14 - Q22 (18) = ~75 kbps
Q23 - Q31 (27) = ~80 kbps
Q32 - Q40 (36) = ~95 kbps
Q41 - Q49 (45) = ~105 kbps
Q50 - Q58 (54) = ~115 kbps
Q59 - Q68 (63) = ~135 kbps
Q69 - Q77 (73) = ~150 kbps
Q78 - Q86 (82) = ~165 kbps
Q87 - Q95 (91) = ~195 kbps
Q96 - Q104 (100) = ~225 kbps
Q105 - Q113 (109) = ~255 kbps
Q114 - Q122 (118) = ~285 kbps
Q123 - Q127 (127) = ~320 kbps

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 16 August, 2013, 07:23:52 AM
Has anyone tested the compression efficiency of multichannel ALAC? I was thinking of switching from FLAC to ALAC for my movie archives (because mkvtoolnix sometimes breaks FLAC files when segmented mkv files are created), but some multichannel ALAC files are huge. For example, the audio track of Notting Hill is ~970MB with FLAC but 2,9GB with ALAC. Some multichannel (5.1ch) tracks are very close to the FLAC file in size.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: db1989 on 16 August, 2013, 08:08:35 AM
Quote
the audio track of Notting Hill is ~970MB with FLAC but 2,9GB with ALAC.
This seems peculiar, well beyond the usual expected variance between lossless codecs. Did you encode both of these yourself, yes? Which encoder was used to create the ALAC? Have you observed this trend on other files compressed with ALAC compared to other lossless codecs? I just find it hard to believe this can be normal. And I cannot recall reading of any such large bugs in ALAC.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 16 August, 2013, 08:12:20 AM
Yes, they are my encodes. I used refalac, which is in the QAAC package. There's nothing wrong with the file when it comes to playback; there's no static or anything else that could explain the huge size.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 16 August, 2013, 10:25:43 AM
Quote
Yes, they are my encodes. I used refalac, which is in the QAAC package. There's nothing wrong with the file when it comes to playback; there's no static or anything else that could explain the huge size.
Wow, I have to try a multichannel track. Two days ago I converted ~31000 audio files from FLAC to ALAC. If they come up with a 128GB iPhone I am done going lossy.
edit: Downloading the first, Mozart (FLAC 470MB) from here: http://www.2l.no/hires/ (http://www.2l.no/hires/)
edit2: ALAC is 494MB (TAK -p1 456MB, TAK -p2 454MB, TAK -p4m 447MB, WavPack normal 469MB, TTA 509MB, WMA lsl 494MB, WAV 929MB)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 16 August, 2013, 10:36:28 AM
Maybe the file is 24 bit but the real bit depth is less?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 16 August, 2013, 11:15:13 AM
That could be the case here. This is what eac3to reports when decoding the FLAC file:
Original audio track: max 24 bits, average 17 bits, most common 16 bits.
Apparently ALAC is not as efficient in such cases.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: testyou on 16 August, 2013, 02:06:24 PM
But 3 times as large? I'm guessing there's something else going on there.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 August, 2013, 09:01:51 PM
Simply put, when the valid bit depth is 16/24 and the compression ratio is around 25% for both codecs on the effective 16-bit part, the ALAC file will become 3 times larger than the FLAC one. A 24bit ALAC file always stores the LSB-side 8 bits uncompressed, while FLAC won't consume bits on them in this case thanks to its wasted-bits feature.
Code: [Select]
16×0.25 : 16×0.25+8 = 1 : 3
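
A back-of-the-envelope check of this explanation (my arithmetic, not from the thread): the per-sample cost in bits when the effective 16 bits compress to ~25% and ALAC stores the constant low byte of a 16/24 track raw.

Code: [Select]
# Python
flac_bits = 16 * 0.25         # wasted-bits feature skips the all-zero low byte
alac_bits = 16 * 0.25 + 8     # low byte stored uncompressed
print(alac_bits / flac_bits)  # 3.0 -- consistent with ~970MB (FLAC) vs ~2,9GB (ALAC) above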
Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 06 September, 2013, 11:26:00 AM
[qaac] release 2.20 + refalac 1.20
posted 2 hours ago by nu 774
Add optional libsoxr support. It's basically the same as libsoxrate (both derive from SoX), but is more optimized (fast), and is maintained by a developer of SoX. When libsoxr.dll is present, qaac/refalac will now use it for sample rate conversion. The libsoxr binary is on the cabinet page. Note that you still need libsoxrate when you want --lowpass or the mixing options. A libsoxr binary built with GCC doesn't usually work with qaac/refalac due to a few ABI compatibility issues. The binary at the cabinet page is built with GCC _with care_. I have reported this issue to the author of libsoxr, so it might be fixed in the future.
Decreased the refresh rate of the progress display on the title bar of the console window.
Explicitly check the presence of a BOM when reading text files, since MLang often guesses wrong even when a BOM is present.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: testyou on 06 September, 2013, 01:26:41 PM
When I have libsoxr.dll in the directory and run "--check", it fails and says libgcc_s_sjlj-1.dll is missing.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 07 September, 2013, 03:42:14 AM
Quote
When I have libsoxr.dll in the directory and run "--check", it fails and says libgcc_s_sjlj-1.dll is missing.
Oh, thanks for reporting it. Updated the libsoxr archive on the cabinet page. libsoxr is a pure C DLL and I can't understand why it needs to be dependent on that DLL, but this might be the case: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57120 (http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57120).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 07 September, 2013, 03:55:55 AM
Thanks for the update nu774. Simple question: is libsoxr conversion just faster? Does it give the very same result as libsoxrate?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 07 September, 2013, 04:56:43 AM
Quote
Thanks for the update nu774. Simple question: is libsoxr conversion just faster? Does it give the very same result as libsoxrate?
Well, it's not "very same". The rate module of original SoX (and libsoxrate) always uses double precision floating point numbers for internal calculation. On the other hand, libsoxr offers both single and double precision implementations. The single precision version can be faster by using SIMD, but is not as precise as the double version. Single/double mode is automatically chosen by the quality parameter for libsoxr. As for qaac, when the incoming signal is double or 32bit integer, "very high quality" mode is selected, which leads to double precision resampling in libsoxr. For other cases, qaac will choose "high quality", which leads to single precision resampling. I think "high quality" mode is usually enough, and it's the default of libsoxr.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: bandpass on 07 September, 2013, 10:00:27 AM
Quote
For other cases, qaac will choose "high quality", which leads to single precision resampling. I think "high quality" mode is usually enough, and it's the default of libsoxr.
libsoxr single-precision is clean to –120dB, and IIRC, here at HA, we've yet to find a recording with a noise floor below –96dB (even somewhat above that, but I can't remember the figure). So I concur.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: mrgou on 10 September, 2013, 12:40:30 PM
I'd like to confirm my understanding of the dependencies on CoreAudioToolbox.dll. If I get it right, on a system with iTunes 10 (because I don't like iTunes 11), if I extract CoreAudioToolbox.dll from the latest iTunes installer (as of now, 11.0.5) and put it in the same folder as qaac.exe, QAAC will use the DLL from the latest release, and not from the iTunes release installed on my PC. Is this correct?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 10 September, 2013, 12:58:50 PM
Quote
If I get it right, on a system with iTunes 10 (because I don't like iTunes 11), if I extract CoreAudioToolbox.dll from the latest iTunes installer (as of now, 11.0.5) and put it in the same folder as qaac.exe, QAAC will use the DLL from the latest release, and not from the iTunes release installed on my PC. Is this correct?
Yes, but you need not only CoreAudioToolbox.dll but also other dependencies. Alternatively you can place them under a "QTfiles" sub folder under where qaac is installed.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: mrgou on 10 September, 2013, 03:08:52 PM
Quote
Yes, but you need not only CoreAudioToolbox.dll but also other dependencies. Alternatively you can place them under a "QTfiles" sub folder under where qaac is installed.
OK, so I've run makeportable.cmd, and it's all set up. It's really great that it takes these files over the iTunes install, so you can update one but not the other!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 10 September, 2013, 09:43:04 PM
nu774, why does the Windows system folder have priority over the QTFiles folder? I think it should be:
1) Same folder
2) QTFiles
3) Windows system (look for something installed after the portable option)
Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 11 September, 2013, 04:52:13 AM
Quote
I think it should be: 1) Same folder 2) QTFiles 3) Windows system (look for something installed after the portable option)
As is written in http://msdn.microsoft.com/en-us/library/wi...6(v=vs.85).aspx (http://msdn.microsoft.com/en-us/library/windows/desktop/ms682586(v=vs.85).aspx), it is by design of Microsoft. qaac just pushes QTfiles and the standard AppleApplicationSupport directory to the top of the process-internal PATH environment variable so that they are searched before any other directories in the PATH. Also, as is written in that document, Windows has a known security issue that a DLL in the "current directory" (not the same as the application directory) gets loaded, but that is prevented via a call to SetDllDirectory(). This could be changed by manually searching for DLL candidates and then calling LoadLibrary() with the full path name, but I don't feel it's worth doing in the case of qaac.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 11 September, 2013, 11:27:36 PM
Not worth.
Thanks for the explanation.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: kurosu_ on 13 September, 2013, 08:40:28 AM
Hello, even with the recent 2.21 (although I'm not using the --threading option), I'm still getting random crashes when feeding multichannel (e.g. 5.1) audio to qaac through fb2k, depending on the coreaudio version. For the record, the command line I'm using:
Code: [Select]
-V 80 --no-optimize --verbose --quality 2 -n --no-delay --log "%d.txt" -o %d -
If using coreaudio 7.5.5.0 (old, from qtlite 4.1 or something), crashes are very frequent. If using 7.9.8.3 (I extracted the dlls around 2 months ago), it seems to be fine.
My problem though is that there are posts from 2011 (for instance this one (https://sites.google.com/site/qaacpage/news/qaacdontuseformultichannelaudioencoding)) stating that the channel mapping is wrong. I don't have a multichannel AAC setup to actually verify this, but my tv set can decode those multichannel aacs to stereo (not multichannel spdif output though), except the sounds are muffled (in particular people speaking with various amounts of audio around etc). Whether this is DRC or matrixing gone wrong, I don't know, but I'd prefer to make sure whether this is an issue of the past. The qaac wiki (https://github.com/nu774/qaac/wiki/Multichannel--handling/_history#) does not list this issue.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 13 September, 2013, 09:46:49 AM
Quote
If using coreaudio 7.5.5.0 (old, from qtlite 4.1 or something), crashes are very frequent. If using 7.9.8.3 (I have extracted the dlls around 2 months ago), it seems to be fine. My problem though is that there are posts from 2011 (for instance this one (https://sites.google.com/site/qaacpage/news/qaacdontuseformultichannelaudioencoding)) stating that the channel mapping is wrong.
Well, please don't use THAT old a CoreAudioToolbox. That channel mapping issue was already solved in the past on the qaac side, and it should work fine.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 13 September, 2013, 11:55:41 AM
kurosu_, why would you use something that old? Lossy improves every time; you should update every time.
Thanks again for your hard work nu774, let's announce it:
[qaac] release 2.21 (refalac 1.21)
posted 9 hours ago by nu 774
Fixed an issue with the --threading option. There was a possibility of a non-sample-aligned read on the pipe, similar to the problem that was fixed in 2.04 and 2.05.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ScionicReaver on 20 September, 2013, 05:20:32 PM
Can someone help me get the QAAC encoder to work with Foobar? I keep getting errors whenever I try to input a command. https://github.com/nu774/qaac/wiki/Command-Line-Options (https://github.com/nu774/qaac/wiki/Command-Line-Options)
I'm really lost and have no idea how to use it for Foobar.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 20 September, 2013, 05:41:02 PM
Quote
Can someone help me get the QAAC encoder to work with Foobar? I keep getting errors whenever I try to input a command.
http://www.hydrogenaudio.org/forums/index....st&p=845439 (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=102715&view=findpost&p=845439)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ScionicReaver on 20 September, 2013, 05:43:45 PM
Thanks! I forgot to include the -o %d - commands at the end! I got it to work now!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: otonvm on 22 September, 2013, 12:39:49 PM
Hello nu774, great job with this frontend, very easy to use. But I cannot wrap my head around mixer matrices, I never could...
I'm currently using an ffmpeg->sox->qaac pipe to downmix 5.1 to DPL2 2.0 audio. By using flac as an input format I could remove all the other utilities. In sox I'm using this for remix:
Code: [Select]
1v0.2646,3v0.1870,4v0.1870,5v0.2991,6v0.1323
2v0.2646,3v0.1870,4v0.1870,5v-0.1323,6v-0.2291
Is this the correct translation to be used with matrix-file?
Code: [Select]
0.2646 0      0.1870 0.1870  0.2991  0.1323
0      0.2646 0.1870 0.1870 -0.1323 -0.2291

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 September, 2013, 02:05:45 PM
Quote
In sox I'm using this for remix:
Code: [Select]
1v0.2646,3v0.1870,4v0.1870,5v0.2991,6v0.1323
2v0.2646,3v0.1870,4v0.1870,5v-0.1323,6v-0.2291
Is this the correct translation to be used with matrix-file?
Code: [Select]
0.2646 0      0.1870 0.1870  0.2991  0.1323
0      0.2646 0.1870 0.1870 -0.1323 -0.2291
Looks ok, in that it's equivalent to the sox remix option you are using.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: otonvm on 22 September, 2013, 02:20:00 PM
Quote
Looks ok, in that it's equivalent to the sox remix option you are using.
Ok thanks! What about this: I took the values from this page: Wikipedia: Dolby Pro Logic (http://en.wikipedia.org/wiki/Dolby_Pro_Logic#Dolby_encoding_matrices) and converted them to this:
Code: [Select]
1 0 0.7071 0 -0.8718j -0.4899j
0 1 0.7071 0  0.4899j  0.8718j
EDIT: Aaaand of course those are the values you use in your wiki. Thanks again for the prompt answer and your work.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 September, 2013, 02:58:08 PM
Code: [Select]
1 0 0.7071 0 -0.8718j -0.4899j
0 1 0.7071 0  0.4899j  0.8718j
That will work, but it's actually taken from http://en.wikipedia.org/wiki/Dolby_Pro_Logic (http://en.wikipedia.org/wiki/Dolby_Pro_Logic). As is written in the SoX documentation, a 90 degree phase shift (hilbert transform) has a bandpass characteristic, and it's far more complex than the simple 180 degree phase shift you are using. Although I cited it from the wiki, I don't know if it's actually worth doing.
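
For anyone wanting to try these coefficients: assuming the matrix is saved as a plain text file laid out like the examples above (the file name here is a placeholder), the encode would look something like the following sketch. FLAC input works directly, as otonvm notes, so the ffmpeg/sox stages of the pipe can be dropped:

Code: [Select]
qaac --matrix-file dpl2.txt --verbose input.flac -o output.m4a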
Title: QAAC: discussion, questions, feature requests, etc.
Post by: otonvm on 22 September, 2013, 03:45:11 PM
Quote
That will work, but it's actually taken from http://en.wikipedia.org/wiki/Dolby_Pro_Logic (http://en.wikipedia.org/wiki/Dolby_Pro_Logic). As is written in the SoX documentation, a 90 degree phase shift (hilbert transform) has a bandpass characteristic, and it's far more complex than the simple 180 degree phase shift you are using. Although I cited it from the wiki, I don't know if it's actually worth doing.
Right... I really don't understand this but it sounds ok; L and R seem to be in balance overall. Why not "worth doing"? Computationally? More complex?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 September, 2013, 09:09:48 PM
Quote
Computationally? More complex?
Yes, computationally more complex, and it's not an exact / lossless transform like the 180 degree phase shift. Highs/lows will be attenuated to a certain degree, because it acts like a bandpass filter. Channel separation may be better when heard using a DPL2 system, but I don't know. I'm not very familiar with surround audio.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: otonvm on 23 September, 2013, 04:30:19 PM
OK, I did some research and it seems that the latter matrix has been accepted as the closest to what the spec probably looks like, and I think it's what has been implemented in most tools today. The only point of contention remains the phase shift. It looks like if it's used then a proper DPL2 decoder can rebuild the original channels, but it's also relative to how the recording has been done in the first place.
I have tried a manual encode from single channels with a "reference" encoder and I think this matrix sounds almost identical to that.
Again thanks for your work!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 23 September, 2013, 09:16:59 PM
Quote
OK, I did some research and it seems that the latter matrix has been accepted as the closest to what the spec probably looks like, and I think it's what has been implemented in most tools today.
"the latter matrix" = the 90 degree phase shift version on the wiki?
Quote
I have tried a manual encode from single channels with a "reference" encoder and I think this matrix sounds almost identical to that.
Nice to hear that

Title: QAAC: discussion, questions, feature requests, etc.
Post by: otonvm on 24 September, 2013, 02:13:40 AM
Quote
"the latter matrix" = the 90 degree phase shift version on the wiki?
Yes, that one.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 28 September, 2013, 05:34:30 PM
nu774, why don't you pack libsoxr in the same zip as qaac? Not that it's so hard to unpack two zip files, but it would be just one more reason for people not to forget to use it instead of libsoxrate. I guess it can be used for other applications, so better to keep them apart.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 01 October, 2013, 03:14:18 AM
Is libsoxrate.dll anywhere else in the system? qaac just converted a 24/96 test file to a 16/48 with only qaac.exe in the encoder folder. I've tried with the dlls as well and the result is bit-for-bit equal to the one without.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 01 October, 2013, 04:34:17 AM
Quote
Is libsoxrate.dll anywhere else in the system? qaac just converted a 24/96 test file to a 16/48 with only qaac.exe in the encoder folder.
Try qaac --check, then add --verbose when encoding, and see what you get.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 01 October, 2013, 01:02:33 PM
Code: [Select]
D:\foobar2000\Encoders>qaac --check
qaac 2.21, CoreAudioToolbox 7.9.8.3
Code: [Select]
D:\foobar2000\Encoders>qaac.exe "Simple Symphony op. 4 - Boisterous Bourree.wav" --verbose
qaac 2.21, CoreAudioToolbox 7.9.8.3
Simple Symphony op. 4 - Boisterous Bourree.m4a
Using default channel layout.
Output layout: Stereo
96000Hz -> 48000Hz
AAC-LC Encoder, TVBR q91, Quality 96
[100.0%] 3:01.746/3:01.746 (32.4x), ETA 0:00.000
17447680/17447680 samples processed in 0:05.609
Overall bitrate: 196.915kbps
182/182 chunks written (optimizing)
Only qaac.exe is in the folder. I have iTunes (latest) installed.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 01 October, 2013, 01:24:04 PM
So, it's just that the CoreAudio resampler is (naturally) being used.
Quote
I've tried with the dlls as well and the result is bit-for-bit equal to the one without.
Are you sure? Try again with --check, --verbose, with libsoxrate installed. Make sure not to use libsoxrate64.dll with qaac.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 01 October, 2013, 01:33:20 PM
Or, if you are trying to use libsoxr (instead of libsoxrate), make sure to download libsoxr_0.1.1_20130907.zip, and copy libgcc_s_sjlj-1.dll along with libsoxr.dll (both are under the x86 directory in the archive).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 01 October, 2013, 01:41:23 PM
Code: [Select]
D:\foobar2000\Encoders>qaac --check
qaac 2.21, CoreAudioToolbox 7.9.8.3
libsoxrate 0.4.1
libsoxr-0.1.1
Code: [Select]
D:\foobar2000\Encoders>qaac "Simple Symphony op. 4 - Boisterous Bourree.wav" --verbose
qaac 2.21, CoreAudioToolbox 7.9.8.3
Simple Symphony op. 4 - Boisterous Bourree.m4a
Using default channel layout.
Output layout: Stereo
96000Hz -> 48000Hz
Using libsoxr SRC: single-precision-SIMD
AAC-LC Encoder, TVBR q91, Quality 96
[100.0%] 3:01.746/3:01.746 (47.3x), ETA 0:00.000
8723840/8723840 samples processed in 0:03.843
Overall bitrate: 197.332kbps
182/182 chunks written (optimizing)
Code: [Select]
Comparing:
"D:\foobar2000\Encoders\Simple Symphony op. 4 - Boisterous Bourree.m4a"
"D:\foobar2000\Encoders\Simple Symphony op. 4 - Boisterous Bourree dlls.m4a"
Differences found: 13364978 sample(s), starting at 0.0840000 second(s), peak: 0.0176514 at 124.5878958 second(s), 2ch
I must have been tired last night, I swear they were bit-for-bit identical :/

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 01 October, 2013, 01:47:26 PM
Quote
I must have been tired last night, I swear they were bit-for-bit identical :/
It happens

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 04 October, 2013, 11:47:30 AM
[qaac] release 2.22 (refalac 1.22)
posted 56 minutes ago by nu 774
- Fixed not to write a tag when the value of the tag is empty.
- Support loading of libFLAC_dynamic.dll (this name is used by the v1.3.0 DLL distributed at www.rarewares.org). Currently, qaac searches for the libFLAC dll in the following order: libFLAC_dynamic.dll -> libFLAC.dll -> libFLAC-8.dll. Since the 1.3.0 and 1.2.1 DLLs are binary compatible within qaac's range of use, you can use any of them.
- Tags given by command line option now take precedence over the default tool tag written by qaac. As a result, you can override the tool tag if you want to (with --tag too:value).
- Updated TagLib to github current HEAD.
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)
Thank you very much nu774, you truly rock!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 14 October, 2013, 12:31:14 PM
[qaac] release 2.23 (refalac 1.23)
posted 39 minutes ago by nu 774
Switched from libsoxrate to libsoxr and the new libsoxconvolver. Like libsoxr, libsoxconvolver uses a SIMD optimized DFT/convolution routine when SSE is available.
This library is used for --lowpass, --matrix-preset and --matrix-file. Unlike libsoxrate, libsoxconvolver is 32bit float based.
Add --peak and --play options. Neither produces output to a file, and they cannot be used with other encoding options such as -V, -v, -a, -c, -A, and -D. However, DSP options such as --rate or --lowpass can be used. --peak just scans the input and prints the peak. Might be useful when you apply some DSP (especially mixing) and want to know the resulting peak value before encoding. --play does what its name implies (plays files using the Wave Mapper device). Since qaac is an encoder and not a music player, don't expect much from it. It's just intended for cases when you want to test new custom matrix coefficients or something. --play doesn't automatically convert the sample format, nor does it remix.
Changed the random number generator (used for TPDF dither) to an LCG, which is known to be poor in randomness but quite fast, and is enough for just generating white noise for dither.
Don't flush immediately after writing the WAV header when writing a WAV file to a pipe. This makes the pipe rewinding hack of SoX happier, but it seems not perfect. Basically speaking, SoX's pipe rewinding on win32 is nothing but a hack, so don't expect SoX's automatic format detection to always work. Just use -t wav - or something to avoid unnecessary pipe rewinding.
Some code cleanup. Fix help messages. Updated taglib (again).
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: paperskyline on 21 October, 2013, 09:38:52 AM
Can somebody please explain these things:
1. --adts - ADTS output (AAC only)
What is the meaning of "ADTS output"?
2. -s, --silent - Suppress console messages. --verbose - More verbose console messages.
What does "verbose" mean here? Should I use either of these options? Is it ok to use both these options simultaneously?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LifeWOutMilk on 21 October, 2013, 04:46:13 PM
Quote
1. --adts - ADTS output (AAC only)
What is the meaning of "ADTS output"?
http://wiki.multimedia.cx/index.php?title=ADTS (http://wiki.multimedia.cx/index.php?title=ADTS)
http://en.wikipedia.org/wiki/Advanced_Audi...ntainer_formats (http://en.wikipedia.org/wiki/Advanced_Audio_Coding#Container_formats)
Quote
2. -s, --silent - Suppress console messages. --verbose - More verbose console messages.
What does "verbose" mean here? Should I use either of these options? Is it ok to use both these options simultaneously?
Silent and verbose don't make sense when used together. Either you want console messages, or not. Verbose will add additional information to what's printed in the console.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: paperskyline on 22 October, 2013, 11:13:26 AM
Quote
Silent and verbose don't make sense when used together. Either you want console messages, or not. Verbose will add additional information to what's printed in the console.
Thanks for the info.
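
To make the ADTS point concrete, a hedged pair of commands (placeholder file names, not taken from the thread): by default qaac writes an MP4 (.m4a) file, while --adts writes a raw ADTS .aac stream of the kind some muxers mentioned earlier in the thread expect:

Code: [Select]
qaac --tvbr 91 input.wav -o output.m4a
qaac --tvbr 91 --adts input.wav -o output.aac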
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 22 October, 2013, 11:59:26 AM
[qaac] release 2.24 (refalac 1.24)
posted 4 hours ago by nu 774
- Fix crash on reading unsigned 8bit PCM through libsndfile (for example, Wave64 format).
- Fix bogus (non-compliant) sgpd box written on --gapless-mode 1 or 2. However, I still don't recommend using it. As far as I know, only iTunes is known to support it well. VLC also supports edts, but it seems VLC decodes the first few frames of HE-AAC without SBR when edts is being used.
- Support float16 and float24 WAV and WavPack files. float16 is assumed to be normalized in the range [-65536, 65536], which is different from the normal [-1, 1] for floating point PCM. For details, read this thread on HA: http://www.hydrogenaudio.org/forums/index....90770&st=50 (http://www.hydrogenaudio.org/forums/index.php?showtopic=90770&st=50)
- Show the PCM sample format (int8 or something) when --verbose is specified. Both the input format and the resulting format are shown; the latter might be different due to the DSP chain.
- Disabled automatic quantization to integer when the sample format is converted to float by the DSP chain and encoding to ALAC.
- Repackaged the 64bit libsoxr.dll as libsoxr64.dll. Now refalac64 supports both names (of course it cannot use the 32bit version of the DLL, so be careful).
- Show a more meaningful message on write error (MSVCRT assigns EINVAL for a broken pipe error, resulting in an "invalid parameter" message, which is not quite helpful).
- Some code clean up.
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)
Thanks again nu774 for your dedication.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 October, 2013, 12:32:48 PM
As for the new wavpack float16/24bit files created with --store-floats-as-int: they can be decoded by the 4.60.1 DLL. Of course you can upgrade it to the new 4.70 DLL. Both are binary compatible. And it seems that --store-floats-as-int is possible for normal float32 WAV files. Although I can't think of a practical use for it, qaac can decode such a file, too.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 24 October, 2013, 01:29:59 PM
nu774, does libsoxr64.dll need libgcc_s_sjlj-1.dll? Just asking because the x64 folder doesn't have the file in it; I only use the x86 binary.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 24 October, 2013, 01:39:48 PM
Quote
nu774, does libsoxr64.dll need libgcc_s_sjlj-1.dll?
No. Basically speaking, if refalac64 --check shows libsoxr then it's OK.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 24 October, 2013, 05:12:21 PM
Thank you, and thank you for the fdkaac changelog.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 26 October, 2013, 04:48:13 PM
[qaac] release 2.25 (refalac 1.25)
posted 10 hours ago by nu 774
Fixed HE-AAC gapless playback issue
It seems that the CoreAudio HE-AAC encoder finishes too early on HE-AAC encoding, and can result in too small an amount of enc_padding. When enc_padding is smaller than 481 (which is the additional decoder delay for SBR), there's no way for the decoder to reproduce the complete samples, so the decoded result will get shorter. This is actually a bug of the CoreAudio encoder.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 24 October, 2013, 05:12:21 PM
Thank you, and thank you for the fdkaac changelog.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 26 October, 2013, 04:48:13 PM
[qaac] release 2.25 (refalac 1.25) posted 10 hours ago by nu 774
Fixed an HE-AAC gapless playback issue.
It seems that the CoreAudio HE-AAC encoder finishes too early on HE-AAC encoding, which can result in too small an amount of enc_padding. When enc_padding is smaller than 481 (the additional decoder delay for SBR), there is no way for the decoder to reproduce the complete samples, so the decoded result gets shorter. This is actually a bug in the CoreAudio encoder. I can see the very same behavior using iTunes: when the resulting HE-AAC file is decoded by iTunes, the decoded result is shorter than the original. You can see this with an arbitrary 44.1 kHz, 11-second sample (number of samples = 485100), which results in 74 padding samples, and about 10 ms at the end is dropped (the decoder comes up 481 - 74 = 407 samples short, and 407 / 44100 ≈ 9.2 ms). Since I'm using a somewhat older iTunes (10.5.3.3) and I'm reluctant to upgrade it, I don't know whether this still holds for newer iTunes, but I believe so, since I'm using a recent CoreAudioToolbox.
As a workaround, qaac now feeds an additional 2048 samples of silence to the HE-AAC encoder and edits iTunSMPB so that it reflects the original length. As a result, HE-AAC files encoded by qaac will now play gaplessly in iTunes, but they are no longer bit-identical with the iTunes result in the HE-AAC case. Note that you still cannot play HE-AAC files gaplessly outside Apple software because of this: http://www.hydrogenaudio.org/forums/index....&pid=817997 (http://www.hydrogenaudio.org/forums/index.php?showtopic=98450&mode=threaded&pid=817997). I'm sorry, I should have noticed this earlier.
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 30 October, 2013, 04:28:12 AM
nu774, probably a stupid question: how do I create a multi-word tag? E.g. --tag too:word1 word2. Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 30 October, 2013, 05:57:38 AM
All of the following examples will work:
--tag too:word1" "word2
--tag too:word"1 wor"d2
--tag too:"word1 word2"
--tag "too:word1 word2"

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 30 October, 2013, 10:21:29 AM
As I said, a stupid question; it didn't even cross my mind to use quotes. I should go to sleep earlier. The third option is good enough, thank you.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 06 November, 2013, 11:59:32 AM
[qaac] release 2.26 (refalac 1.26) posted an hour ago by nu 774
Disabled --no-delay on SBR. --no-delay removes several frames at the beginning to compensate for encoder delay, but this turns out to be problematic for SBR, since sbr_header is not present in every frame and the decoder cannot decode SBR until it has seen one. Anyway, you should be using LC when delay is important. I guess VLC cannot decode SBR at the beginning of an HE-AAC file that has edts for a similar reason: it really needs to read sbr_header, but it probably just skips to the position described in edts.
Added --drc for dynamic range compression.
About the new dynamic range compressor: it is implemented based on "Digital Dynamic Range Compressor Design -- A Tutorial and Analysis", JAES 2012. It takes 5 parameters (threshold, ratio, knee width, attack, release), all of which are common among compressors, so you should be familiar with them. Note that in this implementation the actual release time is approximately equal to attack + release; this comes from the "smooth, decoupling peak detector" described in the paper, which this compressor uses. If you want what is called "makeup gain" to compensate for the attenuation introduced by this compressor, just use --gain.
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)
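A hedged sketch of the new option (this assumes the five values are passed colon-separated in the order threshold:ratio:knee width:attack:release; the values, units and file name are illustrative only, so check qaac --help before relying on them):
Code: [Select]
:: moderate compression with a little makeup gain
qaac --drc -30:4:6:20:100 --gain 3 -V 91 input.wav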
Title: QAAC: discussion, questions, feature requests, etc.
Post by: obazavil on 07 November, 2013, 01:06:15 PM
Hi, I have been reading about the parameters, and the more I read, the more confused I get. I want something similar to LAME -V2. What parameters should I pass to qaac? What parameters do you use? Is there a doc or site that explains what they mean? Currently I'm using LAME 320 kbps, but I want to save some space while keeping very good quality. Thanks a lot!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: testyou on 07 November, 2013, 01:13:19 PM
Quote
What parameters should I pass to qaac?
Try
Code: [Select]
qaac inputname.wav
first.
Quote
Is there a doc or site that explains what they mean?
Yes: qaac documentation (https://github.com/nu774/qaac/wiki).
Quote
I want something similar to LAME -V2. [...] Currently I'm using LAME 320 kbps, but I want to save some space while keeping very good quality.
Have you tried using LAME -V2?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: o-l-a-v on 07 November, 2013, 04:26:11 PM
Quote
I want something similar to LAME -V2. What parameters should I pass to qaac? [...]
Convert with many different TVBR settings, then ABX each against MP3 CBR 320 (which you are currently using and are happy with) or, preferably, against the lossless source. Find the lowest setting/bitrate that seems transparent to you, and use it for your music (a batch sketch follows this post). Remember:
- What seems transparent to you is not necessarily transparent to others (which is why nobody can tell anyone else what bitrate is transparent).
- I find that some genres of music require a higher bitrate to be transparent to my ears than others. Test using a variety of music.
Tips:
- An easy-to-use tool/GUI for converting with QAAC (and other codecs): TAudioConverter (http://sourceforge.net/projects/taudioconverter/)
- ABX testing can be done with foobar2000 (http://www.foobar2000.org/) and its external plugin named ABX Comparator (http://www.foobar2000.org/components/view/foo_abx)
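As a concrete sketch of that workflow (hypothetical file names; inside a batch file use %%q, directly at the prompt use %q):
Code: [Select]
:: encode one test track at several TVBR settings, then ABX the results against the source
for %%q in (45 54 63 73 82 91) do qaac -V %%q sample.wav -o sample_q%%q.m4a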
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 10 November, 2013, 02:08:06 AM
@nu774: Regarding the new DRC switch in qaac 2.26, do you have plans to add any presets that might mimic the effects of the "light", "normal", and "heavy" DRC options present in Dolby Digital? I expect the layman won't know how to use the five parameters effectively, so having a few demonstration modes might be beneficial. Thanks for continuing to work on the frontend!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 10 November, 2013, 04:17:42 AM
While your request is reasonable, I don't think it's possible to mimic the Dolby ATSC DRC profiles. As far as I know, Dolby has 5 thresholds and does both upward and downward compression, while qaac provides only a very simple single threshold plus downward compression. Chaining multiple qaac instances with different --drc and --gain options might make it possible to achieve somewhat similar effects, but I'm not sure. To tell the truth, I'm not confident about the usefulness of this new option. If somebody has any thoughts on this, I'd like to hear the feedback.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 11 November, 2013, 08:16:02 PM
I was actually looking through qaac's documentation a few weeks ago to see if it had such functionality, so it is interesting that you decided to add the feature now. Does the -N switch use the ReplayGain algorithm, or does it use peak normalisation? My guess is that it uses the latter, since the file I tried normalising ended up clipping.
You might be able to implement both normalisation and DRC using a methodology similar to MP3Gain/AACGain (http://mp3gain.sourceforge.net/), i.e. rather than scaling the audio prior to encoding, you could adjust the global gain values of each frame. Not only would this allow you to implement lossless gain adjustment (since MP3Gain writes APEv2 tags to provide the information necessary to undo the volume adjustment), but it would also potentially allow qaac to normalise or apply DRC to existing AAC files without transcoding them. The 1.5 dB steps might not allow the same precision of volume adjustment as Dolby's DRC implementation, but I suspect it would be more than capable of achieving the desired effect.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 11 November, 2013, 09:36:48 PM
Quote
Does the -N switch use the ReplayGain algorithm, or does it use peak normalisation? My guess is that it uses the latter
Correct.
Quote
You might be able to implement both normalisation and DRC using a methodology similar to MP3Gain/AACGain [...] you could adjust the global gain values of each frame.
Your idea is reasonable, and actually I've heard similar requests for "ReplayGain in qaac" several times. HOWEVER, as you say, what that does is not encoding but modifying an already encoded AAC bitstream, and it could very well be implemented as an independent tool. It shares almost zero functionality with an encoder: to implement it, qaac would have to start by decoding the AAC bitstream it had just encoded, which is no better than other tools like aacgain. So I'd rather see it implemented in a different tool, not in qaac. I WON'T implement an MP4 tag editor in qaac for the very same reason. And don't forget that qaac is not only an AAC LC encoder but also supports SBR, ALAC, and PCM output.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 11 November, 2013, 10:08:07 PM
As for dynamic range compression, at least 3 choices come to mind:
1. Directly compress the PCM signal before encoding (what qaac --drc does)
2. Modify the global gain values (the aacgain way; works for LC only)
3. Set DRC metadata (present in the AAC spec)
The third option is similar to what AC3 and the like use, and would look best IF it were supported by a wide range of decoders... but as far as I know it is not. Since 2 and 3 work on a per-frame basis, it's impossible for them to achieve control as fine-grained as 1, and they are probably not suitable for the subtle compression done in mastering. However, since we are not mastering but just encoding here, that shouldn't be a big problem.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 12 November, 2013, 02:12:40 AM
Quote
I'd rather see it implemented in a different tool.
My current solution is to run files through WaveGain prior to encoding if I need RG adjustment, which works well enough.
Quote
Since 2 and 3 work on a per-frame basis, it's impossible for them to achieve control as fine-grained as 1 [...]
I would tend to agree. If users desired a mastering-calibre compression algorithm, they probably wouldn't be targeting a lossy format.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 12 November, 2013, 09:38:22 AM
[qaac] release 2.27 (refalac 1.27) posted 48 minutes ago by nu 774
You can now give the --drc option twice or more, with different parameters. This can be used to obtain more complex effects. For example, you can use one --drc for normal compression, then another as a limiter (--drc with a high threshold + high ratio + zero attack/release works something like a limiter, effectively killing the remaining peaks).
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)
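A hedged illustration of that chained usage, under the same assumed colon-separated threshold:ratio:knee:attack:release form as before (values and file name are made up; verify the syntax against qaac --help):
Code: [Select]
:: first --drc: gentle compression; second --drc: limiter-like settings (high threshold, high ratio, zero attack/release)
qaac --drc -25:3:6:20:100 --drc -3:20:1:0:0 -V 91 input.wav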
Title: QAAC: discussion, questions, feature requests, etc.
Post by: zerowalker on 15 November, 2013, 07:01:09 AM
Does it have the feature that I know many would like: a "transparent" option? Or rather, a setting where the encoder tries to achieve transparency. Currently, let's say a certain music file can easily be transparent at 164 kbps. But if you set "variable at 320 kbps", it will probably come out at 300-340 kbps, even though that is unnecessary. Of course, transparency is not something that is achieved at one precise point, but I think there has been discussion about it. I know people asked for it in Opus, me included.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LithosZA on 15 November, 2013, 09:13:23 AM
I don't think it is possible. How would the encoder know whether it is reaching transparency for your ears?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: zerowalker on 15 November, 2013, 09:21:11 AM
Quote
How would the encoder know whether it is reaching transparency for your ears?
Yeah, as you say, that is the fundamental problem. But I imagine some sort of "sensor" that tries to allow only a certain magnitude of distortion; a clever search implementation, something that knows "if it is this much distorted, then it will surely be noticeable". Think about lossless vs lossy for a moment: let's say FLAC can make a file 200 kbps. If you encode the same track with anything lossy, like QAAC set to variable 320 kbps, it will surely use more than 200 kbps. It doesn't know that the track can be lossless at 200 kbps, which means total transparency should be achievable at 200 kbps or less. I know the comparison isn't really fair, since lossy can never actually reach lossless, just as JPEG can never be PNG. I just wanted to point out that "problem", though I also think it's easier said than done.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: AndersHu on 15 November, 2013, 11:20:01 AM
If you don't want constrained VBR, use the true VBR mode.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: zerowalker on 15 November, 2013, 12:26:16 PM
That works similarly to CRF in x264, if I'm not too far off? I haven't really tried it, but it goes up to 127 in quality, I think. And I don't quite get it: is 127 supposed to mean "guaranteed top quality possible with this encoder", or something like that? For x264 that would roughly be CRF 1, as 0 means lossless. (x264 is an H.264 encoder; any H.264 encoder works the same way, but I usually write "x264" out of habit.)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 15 November, 2013, 12:38:46 PM
Quote
And I don't quite get it: is 127 supposed to mean "guaranteed top quality possible with this encoder"?
Q0 - Q4 (0) = ~40 kbps
Q5 - Q13 (9) = ~45 kbps
Q14 - Q22 (18) = ~75 kbps
Q23 - Q31 (27) = ~80 kbps
Q32 - Q40 (36) = ~95 kbps
Q41 - Q49 (45) = ~105 kbps
Q50 - Q58 (54) = ~115 kbps
Q59 - Q68 (63) = ~135 kbps
Q69 - Q77 (73) = ~150 kbps
Q78 - Q86 (82) = ~165 kbps
Q87 - Q95 (91) = ~195 kbps
Q96 - Q104 (100) = ~225 kbps
Q105 - Q113 (109) = ~255 kbps
Q114 - Q122 (118) = ~285 kbps
Q123 - Q127 (127) = ~320 kbps

Title: QAAC: discussion, questions, feature requests, etc.
Post by: zerowalker on 15 November, 2013, 01:16:42 PM
So in other words, it's constrained variable bitrate, but in a different currency, so to speak?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 15 November, 2013, 05:22:42 PM
No, those are merely the general bitrate ranges that the encoder often produces on the tracks people have tested. Other tracks can go above or below those averages, and the bitrate fluctuates during a track as necessary to achieve the desired quality. You are correct that the VBR scale in Apple's encoder is similar to the CRF scale in x264. What you seem to be missing is that both the VBR scale and the ratefactor scale are in arbitrary units that have no numerical quality equivalent. That is to say, the 8-bit CRF scale of 0 - 51 is arbitrary, as is the VBR scale of 0 - 127. The way you use both scales is the same: you start at a "bad" quality that you can easily ABX and keep increasing the quality setting until you can't ABX the results anymore. Once you reach transparency, you know that you should use that setting for most, if not all, of your encodes. The only difference between testing x264 CRF values and AAC VBR values is that it's harder to perform ABX tests on video than on audio, as I am not aware of any video ABX comparators. The fact that all listeners must perform these tests to determine their individual transparency thresholds is the reason why qaac does not have a "transparency" VBR setting: transparency is achieved at different bitrates by different people, and thus it cannot be standardised into a qaac preset.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 15 November, 2013, 08:12:59 PM
@zerowalker: If you think VBR is not as variable in bitrate as you expect, try some extraordinary case like the following:
Code: [Select]
sox -n -t wav - synth 5 sin 1000 | qaac -V127 - -o foo.m4a
This pipeline encodes a simple sine wave and results in 18 kbps or so at true VBR quality 127. With CVBR it goes a bit higher, but you might notice that it is not as constrained as you would think. And if you find VBR is not as constant in quality as you expect (yes, some difficult samples can be easier to ABX than others), just accept it; although a VBR encoder tries its best to achieve constant quality, the world is not that ideal. Finally, remember that qaac is merely a frontend to Apple's encoder: qaac cannot do anything better than set the parameters for quality control that are already provided as command line options.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 16 November, 2013, 05:09:40 AM
Regarding CVBR and TVBR, has there been any wide-ranging comparison between the two modes? I've always used TVBR @ q82 for my movie audio track re-encodes.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: zerowalker on 16 November, 2013, 08:08:00 AM
Ah, well then I understood them fairly correctly. And as you say, video is on a different level: the difficulty of the content varies a lot there, while for audio the content doesn't matter that much once you set a quality level. I learned something about doing the tests to find your "transparent setting", even though I'm bad at noticing distortions as long as the bitrate is above 128 kbps or so. So I often just pick the highest possible setting to be sure it will be transparent, barring some extremely rare cases, of course.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Kamedo2 on 16 November, 2013, 08:29:50 AM
Quote
Regarding CVBR and TVBR, has there been any wide-ranging comparison between the two modes?
The overall quality difference between CVBR and TVBR is very small.
http://listening-tests.hydrogenaudio.org/i...-96-a/index.htm (http://listening-tests.hydrogenaudio.org/igorc/aac-96-a/index.htm)
http://www.hydrogenaudio.org/forums/index....showtopic=97913 (http://www.hydrogenaudio.org/forums/index.php?showtopic=97913)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ChronoSphere on 08 December, 2013, 11:51:37 AM
What is it with AAC and my ears, I wonder.
Code: [Select]
foo_abx 1.3.4 report
foobar2000 v1.2.9
2013/12/08 17:40:37
File A: R:\03 蒼空にくちづけたら_-v256.m4a
File B: R:\03. ゆうまお - 蒼空にくちづけたら.flac
17:40:37 : Test started.
17:42:03 : 01/01 50.0%
17:42:49 : 02/02 25.0%
17:43:29 : 02/03 50.0%
17:43:55 : 03/04 31.3%
17:44:17 : 04/05 18.8%
17:44:58 : 05/06 10.9%
17:45:34 : 05/07 22.7%
17:45:57 : 06/08 14.5%
17:46:18 : 07/09 9.0%
17:47:04 : 08/10 5.5%
17:47:07 : Test finished.
----------
Total: 8/10 (5.5%)
If I had to describe it, AAC sounds kind of "electronic" or "metallic" to my ears. I struggle to ABX MP3 at this bitrate, but AAC is so obvious. Though I'd say Nero's AAC encoder did an even poorer job. Or maybe this song is just a problem sample.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Kamedo2 on 08 December, 2013, 11:58:52 AM
Quote
If I had to describe it, AAC sounds kind of "electronic" or "metallic" to my ears. [...] Or maybe this song is just a problem sample.
It's a very interesting sample, as Apple AAC is a mighty encoder and the likelihood of defects is very small. Could you upload the problematic section of the song in FLAC, so that we can reproduce the result?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ChronoSphere on 08 December, 2013, 12:44:00 PM
I uploaded the sample here (http://www.hydrogenaudio.org/forums/index.php?showtopic=103770). First time doing this; is this how I was supposed to do it?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 08 December, 2013, 12:58:56 PM
ChronoSphere, my compliments; I can't even ABX at ~96 kbps with Vorbis, AAC or Opus. I guess it also depends on how hard you try. I realized a long time ago that if I have to try for more than 10 minutes, the codec is good enough for me.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Kamedo2 on 08 December, 2013, 01:02:39 PM
Quote
First time doing this; is this how I was supposed to do it?
Exactly. And I reproduced the result, although at CVBR 192 kbps. This is a good critical sample, because of the sharp transient attacks, the crystal-clear vocal, and the background silence.
Code: [Select]
foo_abx 1.3.4 report
foobar2000 v1.2.9
2013/12/09 02:52:06
File A: 蒼空にくちづけたら.wav
File B: 蒼空にくちづけたら.mp4
02:52:06 : Test started.
02:52:47 : 01/01 50.0%
02:53:52 : 02/02 25.0%
02:54:10 : 03/03 12.5%
02:54:40 : 03/04 31.3%
02:54:59 : 04/05 18.8%
02:55:23 : 05/06 10.9%
02:55:56 : 06/07 6.3%
02:56:24 : 06/08 14.5%
02:56:51 : 07/09 9.0%
02:57:21 : 08/10 5.5%
02:57:24 : Test finished.
----------
Total: 8/10 (5.5%)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ChronoSphere on 08 December, 2013, 01:56:09 PM
Quote
ChronoSphere, my compliments; I can't even ABX at ~96 kbps with Vorbis, AAC or Opus.
Thank you, though I didn't think my hearing was that good. Incidentally, I also tested Opus and Vorbis on this track in the past, and I stopped trying at 192 kbps average for both of them. 160 kbps was already near my limit, actually, as you can see below, which is why I'm so surprised that AAC behaves so badly in comparison. Each test takes me about 3-4 minutes, I guess.
Code: [Select]
foo_abx 1.3.4 report
foobar2000 v1.2.9
2013/12/08 17:54:25
File A: R:\03. ゆうまお - 蒼空にくちづけたら.flac
File B: R:\03 蒼空にくちづけたら_160.opus
17:54:25 : Test started.
17:55:27 : 01/01 50.0%
17:55:54 : 01/02 75.0%
17:56:20 : 01/03 87.5%
17:56:47 : 02/04 68.8%
17:57:04 : 03/05 50.0%
17:57:52 : 04/06 34.4%
17:58:40 : 04/07 50.0%
17:59:08 : 05/08 36.3%
17:59:25 : 06/09 25.4%
17:59:40 : 07/10 17.2%
18:00:24 : Test finished.
----------
Total: 7/10 (17.2%)
Title: QAAC: discussion, questions, feature requests, etc.
Post by: o-l-a-v on 08 December, 2013, 02:15:38 PM
Quote
160 kbps was already near my limit, actually, as you can see below [...]
Well, from earlier posts, I easily managed to ABX QAAC TVBR 127:
Quote
Code: [Select]
foo_abx 1.3.4 report
foobar2000 v1.1.18
2012/12/25 12:36:05
File A: C:\Users\Olav\Desktop\3929430_Lessons_In_Love_feat__Neon_Trees_Headhunterz_Remix.wav
File B: C:\Users\Olav\Desktop\QAAC TVBR 127\3929430_Lessons_In_Love_feat__Neon_Trees_Headhunterz_Remix.m4a
12:36:05 : Test started.
12:37:08 : 01/01 50.0%
12:38:06 : 02/02 25.0%
12:38:57 : 03/03 12.5%
12:39:42 : 04/04 6.3%
12:42:34 : 05/05 3.1%
12:43:14 : Test finished.
----------
Total: 5/5 (3.1%)
Setup was: foo_abx 1.3.4 -> Belkin USB cable -> HRT Music Streamer II -> Steelseries 5HV2. QAAC v2.09.
So I'm not surprised you guys managed to ABX at a lower bitrate. My sample was a completely different genre, though: hardstyle. I could upload a sample if folks are interested.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Brazil2 on 09 December, 2013, 04:23:42 AM
Quote
Though I'd say Nero's AAC encoder did an even poorer job. Or maybe this song is just a problem sample.
Did you try other AAC encoders, like Winamp FhG and FDK?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ChronoSphere on 09 December, 2013, 09:09:05 AM
No, I have not. Considering qaac is seen as the "best" among AAC encoders, I just assumed it would be the same with the other ones.
BTW, eahm, with qaac I get a runtime of 11:44:08 on my Clip+, which gives me the following runtimes:
Code: [Select]
mpc -192: 16:13:47
Flac -8: 16:09:49
vorbis q5: 12:17:56
mp3 v0: 11:48:20
qaac -v256: 11:44:08
Wv -hh -b384Mx: 11:17:12
opus 160: 10:38:05
The settings for the lossless formats are what I would choose for archival; for the lossy ones, my transparency settings (except for qaac). Someone said my runtimes are shorter than usual; that is probably because I have crossfeed activated.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 09 December, 2013, 11:14:16 AM
Thanks for testing. -v256 is kind of a lot; I use -V63, but I'm sure it won't change much. MPC is insane! Now, just to waste another day of yours, you should test ALAC and TAK as well.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ChronoSphere on 09 December, 2013, 06:29:51 PM
Well, considering AAC is still not transparent on that song for me, I'd even have to go higher (as I did with MP3). Rockbox can't play TAK, unless that changed recently. BTW, why ALAC? I don't really know of an advantage unless you're already using the Apple ecosystem and want to stay native.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 10 December, 2013, 03:14:31 AM
Quote
BTW, why ALAC? I don't really know of an advantage unless you're already using the Apple ecosystem and want to stay native.
Codec-wise, I don't see any advantage in ALAC either. The MP4/M4A container might be attractive for some (iTunes-style tags natively supported by Mac and Microsoft Windows, the ability to multiplex video tracks, text tracks for subtitles / karaoke / chapters, and so on). Who knows?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: halb27 on 10 December, 2013, 09:07:16 AM
I'm not sure what you're actually considering, but in case you want to change codec, and Rockbox support for the new choice is a major concern: Musepack is a good candidate IMO. Or lossyWAV | FLAC, in case you can allow for higher bitrates.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 10 December, 2013, 09:21:52 AM
[qaac] release 2.28 (refalac 1.28) posted 5 hours ago by nu 774
Added a new option: --caf. As the name implies, --caf tells qaac to write output to the CAF container. (HE-)AAC, ALAC, and PCM (-D) are supported. Pipe streaming is supported in the PCM case, which can be used to pass audio as well as tags to fdkaac through a pipeline.
(Hopefully) better handling of metadata. Non-standard tags such as performer or ISRC are now copied from the input (however, some tags, such as ReplayGain-related metadata, ripping logs, and cuesheets, are blacklisted and not copied).
Support ALAC-in-CAF input from libsndfile. This will only be used by refalac plus a very recent libsndfile; qaac has already been supporting input of ALAC in CAF through the CoreAudio API.
Fix: take care of zero-byte text file input, which resulted in an MLang text encoding detection failure.
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 10 December, 2013, 11:21:46 AM
Thanks for the update, nu774.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ChronoSphere on 10 December, 2013, 12:57:22 PM
Quote
I'm not sure what you're actually considering, but in case you want to change codec [...] Musepack is a good candidate IMO.
I posted my runtimes with different codecs in the Opus thread and eahm asked me to test qaac; I posted the result here so as not to clog the Opus thread with off-topic. It was just an "FYI" post. I'm personally using FLAC, for the convenience of only having to copy the files over and not having to re-scan with ReplayGain etc. Battery-wise, MPC is only slightly better than FLAC, but I'd currently go for it if I had space restrictions.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: vozer on 11 December, 2013, 05:01:03 AM
Could anyone here help me solve this problem? I used to use these command line options with previous versions of qaac (2.23 & 2.25) and they worked well, but since updating to version 2.27 or 2.28 they no longer work. This error message is displayed:
Quote
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters
and here are my command line options:
Quote
-s --cbr 256 --ignorelength --rate keep -q 2 - -o %d
Quote
-s --tvbr 100 --ignorelength --rate keep -q 2 - -o %d
and here is my conversion window:

Title: QAAC: discussion, questions, feature requests, etc.
Post by: ChronoSphere on 11 December, 2013, 06:03:45 AM
I was having the same issues until I ran it from the command line and saw it complaining about not being able to find CoreAudioToolbox.dll (or similar). Are you sure you have iTunes installed / made portable and in the same folder as qaac?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: vozer on 11 December, 2013, 10:28:02 AM
Oops, I removed iTunes two weeks ago. I thought CoreAudioToolbox didn't require iTunes to be installed. Everything seems to be OK once I re-install iTunes. Thanks.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 11 December, 2013, 10:42:27 AM
Quote
Oops, I removed iTunes two weeks ago. [...] Everything seems to be OK once I re-install iTunes.
Strictly speaking you don't need iTunes, but installing it should be the simplest way, if you don't mind it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 11 December, 2013, 11:03:06 AM
It's OK even without iTunes; you missed something when you set up the portable version.
edit: Don't copy the QTfiles folder itself to where qaac.exe is, but all the files inside QTfiles: http://www.hydrogenaudio.org/forums/index....st&p=844462 (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=85135&view=findpost&p=844462)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: vozer on 12 December, 2013, 07:38:50 AM
I cannot explain why, but after re-installing iTunes everything is OK; qaac is working again.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lock67ca on 12 December, 2013, 09:26:57 PM
I just used WinRAR to extract the Apple Application Support .msi from the iTunes installer. It's a separate installer, and you don't need to fully install iTunes at all.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 12 December, 2013, 10:02:41 PM
You don't need to install AppleApplicationSupport.msi, either. You can extract its contents with msiexec:
Code: [Select]
msiexec /a foo.msi /qn TARGETDIR=C:\bar
This extracts the contents of foo.msi to the directory C:\bar without showing any GUI elements. Wait ten seconds for the extraction to complete, then enter the directory, select the DLL files that qaac needs, copy them to the QTfiles folder, and delete everything else you extracted from AppleApplicationSupport.msi.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 12 December, 2013, 11:28:01 PM
Quote
I just used WinRAR to extract the Apple Application Support .msi from the iTunes installer. It's a separate installer, and you don't need to fully install iTunes at all.
You can't install it on its own; the only way is to use something like CopyTrans Drivers Installer (http://download.cnet.com/CopyTrans-Drivers-Installer/3000-18546_4-75300288.html), or to install iTunes and then uninstall everything but that one component. Did you even try before suggesting?
The portable way (using makeportable) is actually easier anyway.
Quote
You don't need to install AppleApplicationSupport.msi, either. You can extract its contents with msiexec [...]
Download makeportable.zip from here: https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 13 December, 2013, 12:53:17 AM
That script seems to run a few unnecessary commands. You only need to extract the relevant DLLs and put them in one of the locations where qaac looks for them.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 13 December, 2013, 01:24:06 AM
Quote
That script seems to run a few unnecessary commands.
Care to elaborate on what exactly you think is unnecessary?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 13 December, 2013, 01:25:55 AM
.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: testyou on 13 December, 2013, 02:09:18 AM
Quote
That script seems to run a few unnecessary commands. You only need to extract the relevant DLLs and put them in one of the locations where qaac looks for them.
What do you think it does?
Code: [Select]
@echo off
setlocal
if not "%~1" == "" (
    set installer=%~1
) else if exist iTunes64Setup.exe (
    set installer=iTunes64Setup.exe
) else if exist iTunesSetup.exe (
    set installer=iTunesSetup.exe
) else if exist QuickTimeInstaller.exe (
    set installer=QuickTimeInstaller.exe
) else (
    echo installer executable not found
    goto end
)
7z e -y %installer% AppleApplicationSupport.msi
if not %errorlevel% == 0 (
    echo cannot extract AppleApplicationSupport.msi from installer
    goto end
)
mkdir QTfiles\Microsoft.VC80.CRT
7z e -y -oQTfiles -i!ASL.dll -i!CoreAudioToolbox.dll -i!CoreFoundation.dll -i!*icu*.dll -i!libdispatch.dll -i!objc.dll -i!pthreadVC2.dll AppleApplicationSupport.msi
if not %errorlevel% == 0 (
    echo error on extracting AppleApplicationSupport.msi
    goto end
)
7z e -y -oQTfiles\Microsoft.VC80.CRT -i!msvcp80.dll.* -i!msvcr80.dll.* -i!manifest.* AppleApplicationSupport.msi
if not %errorlevel% == 0 (
    echo error on extracting AppleApplicationSupport.msi
    goto end
)
del AppleApplicationSupport.msi
pushd QTfiles\Microsoft.VC80.CRT
rem strip assembly version number from filenames of msvc runtime dlls
for %%f in (msvcr80.dll.*) do move /Y %%f msvcr80.dll
for %%f in (msvcp80.dll.*) do move /Y %%f msvcp80.dll
rem find needless one out of the two manifests and remove it
for /F "delims=:" %%t in ('findstr win32-policy manifest.*') do del %%t
rem rename manifest
for %%f in (manifest.*) do move /Y %%f Microsoft.VC80.CRT.manifest
popd
:end
endlocal
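For reference, a hedged usage sketch of that script (per the listing above, it takes an installer as its first argument or picks one up from the current directory; the installer file name is hypothetical and 7z.exe must be on the PATH):
Code: [Select]
makeportable.cmd iTunes64Setup.exe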
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 13 December, 2013, 03:23:56 AM
Since it would appear that I am in the wrong, perhaps somebody could enlighten me as to the purpose of extracting the msvcr80.dll and msvcp80.dll files. Is it just in case the user doesn't have the VC++ 2008 runtimes installed already?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 13 December, 2013, 03:39:47 AM
Quote
Is it just in case the user doesn't have the VC++ 2008 runtimes installed already?
Exactly.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: the_weirdo on 13 December, 2013, 04:55:00 AM
Quote
Is it just in case the user doesn't have the VC++ 2008 runtimes installed already?
Actually, those are VC++ 2005 runtimes.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lock67ca on 13 December, 2013, 10:57:37 AM
Quote
You can't install it on its own [...] Did you even try before suggesting?
But that's exactly what I did do. It worked, and I'm still using it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 13 December, 2013, 11:21:00 AM
Quote
[qaac] release 2.29 (refalac 1.29)
* Fixed regression in 2.28: tags were not properly copied when --concat was specified with cuesheet input.
* Fixed not to exit with a failure requesting an output filename when --concat was specified together with --peak or --play (in which case an "output filename" is nonsense).
* Some minor improvements and code refactoring.
Thanks nu774

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sundance on 22 December, 2013, 12:45:51 PM
@nu774, thanks for the latest version of QAAC:
Quote
[qaac] release 2.31 (refalac 1.31) posted 2 hours ago by nu 774
Just one note: qaac.exe (32-bit) still claims to be v2.30... (couldn't test 64-bit)
.sundance.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 22 December, 2013, 01:22:01 PM
Quote
Just one note: qaac.exe (32-bit) still claims to be v2.30... (couldn't test 64-bit)
A 64-bit version of qaac doesn't exist, and yes, I confirm that 2.31 shows 2.30.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 December, 2013, 07:55:04 PM
Sorry, uploaded now.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 10 January, 2014, 03:32:47 AM
No one posted it yet, but:
Code: [Select]
[qaac] release 2.32 (refalac 1.32)
posted Dec 22, 2013, 5:28 PM by nu 774
- Fixed: --tag apID and --tag akID were written in the long tag format.
nu774, does refalac use ONLY msvcr120.dll and msvcp120.dll (when used with foobar2000)? Does it need any of the other files (libgcc_s_sjlj-1.dll, libsoxconvolver.dll, libsoxr.dll)? Thanks

Title: QAAC: discussion, questions, feature requests, etc.
Post by: aztec_mystic on 10 January, 2014, 03:37:24 AM
Quote
nu774, does refalac use ONLY msvcr120.dll and msvcp120.dll (when used with foobar2000)?
You can use Dependency Walker to figure this out on your own.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 10 January, 2014, 03:46:35 AM
Thanks for the software; I tested and didn't see any of the last three DLLs. I didn't ask because of the dependency question, though; I asked about the features refalac may need from those DLLs. I don't know if I'm explaining it well enough. Put more simply: to open refalac? Yes, it needs the first two DLLs (dependencies). To downsample or upsample? Maybe it needs the other three DLLs, but they are not dependencies (or are they still called dependencies in this case?).
I'm really tired right now, sorry.
edit: I think I just answered my own question. Of course it does; that's how it downsamples and upsamples. That's also why there are 64-bit versions of those files even though qaac itself has no 64-bit version. Now that I've figured that out like a genius, I'm going to get some sleep. Goodnight.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 10 January, 2014, 05:52:54 AM
Quote
nu774, does refalac use ONLY msvcr120.dll and msvcp120.dll (when used with foobar2000)? Does it need any of the other files (libgcc_s_sjlj-1.dll, libsoxconvolver.dll, libsoxr.dll)?
1. You need msvc*120.dll to run refalac.
2. You don't need the others just to run refalac, but some options (--rate, --lowpass, --matrix-*) don't work without them.
3. You can see whether these DLLs were loaded with refalac --check.
Technically, msvc*120.dll is implicitly linked to refalac. When you invoke refalac, the OS's loader/linker does the job of loading and linking the dependent DLLs. When an implicit dependency is not satisfied (due to a missing DLL or something), the attempt fails and the OS shows an error dialog; in other words, the executable doesn't even start. You can easily track down this kind of implicit dependency with Dependency Walker.
On the other hand, libsox*.dll is explicitly linked. In this case the OS does nothing automatically; instead, the DLLs are loaded by refalac at runtime. This kind of linkage is typically used by plugin systems (for example, fb2k loads every plugin DLL this way, so it can run without them). You cannot see this kind of dependency in Dependency Walker by default (you have to "profile" the process).
Finally, the 32-bit libsoxr.dll is itself implicitly dependent on libgcc_s_sjlj-1.dll, so the attempt to load libsoxr.dll will fail without it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 10 January, 2014, 06:12:59 AM
BTW, the main reason for using the SoX-related code through DLLs is the SoX license. SoX is LGPL, but qaac cannot be. Although it's not clear whether statically linking to an LGPLed library forces the same license on the derivative work, module separation seemed simpler and safer to me.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 10 January, 2014, 10:53:45 AM
Thanks nu774, your replies are always thorough.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 10 January, 2014, 11:53:31 AM
Quote
... SoX is LGPL, but qaac cannot be...
Why can QAAC not be LGPL? Aren't you the author of QAAC?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 10 January, 2014, 12:34:13 PM
Quote
Why can QAAC not be LGPL? Aren't you the author of QAAC?
Well, I'm not a lawyer, so maybe I'm wrong, but qaac depends on Apple's proprietary software and on libmp4v2 (MPL 1.1). Neither of them seems to be compatible with the LGPL.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 10 January, 2014, 12:46:57 PM
Quote
... Well, I'm not a lawyer ...
I am not a lawyer either. I thought that you, being the author, could choose the license.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 10 January, 2014, 05:09:16 PM
Quote
I thought that you, being the author, could choose the license.
You can, but you need to choose a license whose terms you can abide by.
I think that qaac could be LGPL, because the main purpose of the LGPL is to allow free software and freeware to coexist by keeping the open and closed parts of the software separate, such as by using DLLs. Since qaac loads all of Apple's proprietary software through external DLLs, qaac should be compatible with the LGPL. Using the GPL would be impossible, because all aspects of a program must be open source to use the GPL, and Apple's proprietary code isn't. The only way nu774 would get in trouble with the LGPL is if qaac incorporated Apple's code directly, but nu774 already avoids doing that in order to keep Apple from taking legal action against him.
The normal LGPL model allows developers of proprietary software to include LGPL software by keeping the open components as separate DLLs, so that users can clearly tell which elements of the program are open and which are closed. The qaac model is the reverse of this (the open components are the program, while the closed components are loaded from separate DLLs), but I don't see why that would make any difference, since the line between the open and closed components remains. I could be wrong, though, as I'm not a lawyer either. As such, nu774's cautious approach is the safest one to take, at least until someone with legal expertise can advise him.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 10 January, 2014, 08:37:34 PM
Well, my choice was not based on well-studied thought, and some of it has probably been unnecessary. As for the dependency on the Apple library: seeing that ffmpeg (LGPL) requires the "non-free" configuration to enable some encoders such as libfaac, I simply thought it better to avoid the LGPL. That's all.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 11 January, 2014, 03:48:01 AM
nu774, I wanted to test these two libraries with refalac, but I get an error. The command I'm trying is "refalac (or refalac64) -r 32000 (or -r 96000) file.wav", and I get "ERROR: ALAC: Not supported format". Why?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 11 January, 2014, 04:13:16 AM
Quote
The command I'm trying is "refalac (or refalac64) -r 32000 (or -r 96000) file.wav", and I get "ERROR: ALAC: Not supported format".
The error message is indeed odd and not quite helpful, but you can see what's going on if you turn on --verbose. Due to the sample rate conversion, the sample format is converted to 32-bit float, which is not supported by ALAC. You have to add "-b16" or something.
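A hedged example of that fix (hypothetical file name; -b16 quantizes the DSP chain's float output back to 16-bit, per the explanation above):
Code: [Select]
refalac --verbose -r 96000 -b16 file.wav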
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 14 January, 2014, 02:07:01 PM
Hi, I'm interested in what the factual difference is between the .m4b format and .m4a, and whether I can use QAAC to convert to this format (and if special software is needed, what software that is). I'm sure m4b stands for audiobooks, so there is probably a need for something extra, like internal chaptering (though m4a contains chapters for songs by default, too).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: o-l-a-v on 14 January, 2014, 04:57:01 PM
Quote
I'm interested in what the factual difference is between the .m4b format and .m4a [...]
AAC is the actual audio format; MP4 is a container. Changing the extension to m4a, m4b or m4r does not change the fact that it is an MP4.
http://en.wikipedia.org/wiki/MPEG-4_Part_14#.MP4_versus_.M4A (http://en.wikipedia.org/wiki/MPEG-4_Part_14#.MP4_versus_.M4A)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 14 January, 2014, 05:01:42 PM
Good; the AAC part I already knew. But I'd rather like to know what the difference between the m4b and m4a formats is, and whether QAAC is able to generate a full-featured m4b too (or whether I need different software for this). As for chaptering, I think this is possible by merging several files into one multi-track file with internal chapters.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 14 January, 2014, 05:06:14 PM
The M4B format is just MP4 with chapters, such as for audiobooks. There is a --chapter switch in qaac to load chapters from a file, but I have never used it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 14 January, 2014, 05:09:41 PM
Quote
The M4B format is just MP4 with chapters [...] There is a --chapter switch in qaac to load chapters from a file.
So all I need is to use an external chapters file and change the output extension to .m4b?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Zarggg on 14 January, 2014, 09:32:34 PM
You need the .m4b extension for iTunes (and maybe other software) to recognize it as an MPEG-4 Part 14 container with chapters. The container format is the same regardless of the extension; the extensions are just conventions. I.e., you can have a .m4a file with chapters or a .m4b file without, but the software you're using might not recognize them as such.
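A hedged sketch of that workflow (the --chapter switch is mentioned above; the chapter file name and its expected format are assumptions to verify against the qaac documentation):
Code: [Select]
qaac -V 73 --chapter chapters.txt audiobook.wav -o audiobook.m4b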
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 16 January, 2014, 09:26:36 AM
[qaac] release 2.33 (refalac 1.33) posted 2 hours ago by nu 774
- Implemented smart padding (same as fdkaac), which minimizes the possibility of gapless playback issues. You can disable this feature with the new option --no-smart-padding. However, --no-smart-padding also disables the additional padding at the end of an HE-AAC stream that was implemented as a workaround for the CoreAudio encoder bug. Although I don't recommend using --no-smart-padding, it is mandatory when you want bitstream output bit-identical to iTunes (including its bugs).
- Fixed the fallback sample rate conversion used when libsoxr is not present (it was not working exactly as intended).
- Improved the error messages for attempts to encode an unsupported PCM format to ALAC.
- Minor fixes and rewriting.
https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 16 January, 2014, 06:43:35 PM
refalac64.exe (1.33) crashes when converting multiple files from foobar2000; it opens multiple error windows. refalac.exe (1.33, 32-bit) works fine.
Error:
Code: [Select]
Problem signature:
  Problem Event Name: BEX64
  Application Name: refalac64.exe
  Application Version: 0.0.0.0
  Application Timestamp: 52d7c1a3
  Fault Module Name: refalac64.exe
  Fault Module Version: 0.0.0.0
  Fault Module Timestamp: 52d7c1a3
  Exception Offset: 000000000008e62c
  Exception Code: c0000409
  Exception Data: 0000000000000002
  OS Version: 6.3.9600.2.0.0.256.48
  Locale ID: 1033
  Additional Information 1: 0195
  Additional Information 2: 01957bf7c2d1c23bb30701b39e430e81
  Additional Information 3: 6171
  Additional Information 4: 6171d6e585b1eae8c9cbbac37e14d099
Read our privacy statement online: http://go.microsoft.com/fwlink/?linkid=280262
If the online privacy statement is not available, please read our privacy statement offline: C:\Windows\system32\en-US\erofflps.txt
edit: BTW, even a single file from the CLI crashes.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 January, 2014, 08:08:27 PM
Thanks for reporting. It seems the incremental build for refalac64 was broken. I uploaded a re-built binary just now.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 16 January, 2014, 09:02:14 PM
Thanks nu774. Another thing I didn't notice before: when the conversion is done, refalac reports a weird bitrate value. For example, with this file it gave me "Overall bitrate: 2.70187e-009kbps":
Code: [Select]
01. Atom Heart Mother.m4a
[100.0%] 23:45.160/23:45.160 (130.7x), ETA 0:00.000
62849556/62849556 samples processed in 0:10.907
Overall bitrate: 2.70187e-009kbps
With refalac64 I get "Overall bitrate: 636.169kbps":
Code: [Select]
01. Atom Heart Mother.m4a
[100.0%] 23:45.160/23:45.160 (151.8x), ETA 0:00.000
62849556/62849556 samples processed in 0:09.390
Overall bitrate: 636.169kbps
I am sure the real bitrate is fine; this is just a test of the command line encoder.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 January, 2014, 09:38:33 PM
Thanks; its build seems to be broken in the same way, and it probably avoided crashing only by pure luck. Updated to v3.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 16 January, 2014, 11:47:23 PM
Perfect, thank you very much!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: akin0780 on 17 January, 2014, 11:08:27 PM
Hi nu774, I'm getting the following error when trying to convert some FLAC files to AAC, using QAAC 2.33:
Quote
An error occurred while writing to file (The encoder has terminated prematurely with code -1073741515 (0xC0000135); please re-check parameters)
Can you look into this for me?
Alex

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 17 January, 2014, 11:28:02 PM
akin0780, are you 100% sure you're using v3? First of all, I think the problem was only in refalac/refalac64. Everything seems fine here; can you please download qaac right now and try again? Does an older version of qaac (2.31, 2.32) work fine? I think it's actually a library issue.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: akin0780 on 17 January, 2014, 11:51:36 PM
I am 100% sure that I'm using 2.33. I've also tried 2.32: same problem, same error code.
Interestingly, 2.31 works perfectly. By the way, foobar2000 serves as my frontend.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 18 January, 2014, 12:12:05 AM
Quote
I am 100% sure that I'm using 2.33. I've also tried 2.32: same problem, same error code. Interestingly, 2.31 works perfectly.
2.32 and later require MSVCR120.dll and MSVCP120.dll. You can extract them together with qaac from qaac_2.33.zip, or download & install them from http://www.microsoft.com/en-us/download/de...s.aspx?id=40784 (http://www.microsoft.com/en-us/download/details.aspx?id=40784).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: akin0780 on 18 January, 2014, 12:20:44 AM
That did the trick! Thanks nu774 and eahm.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 22 January, 2014, 05:47:45 PM
nu774, makeportable no longer works with the new iTunes 11.1.4.62.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: moob2014 on 22 January, 2014, 07:46:19 PM
nu774, you need to update qaac, because it does not work properly with the new CoreAudioToolbox.dll 7.9.8.4. With the new CoreAudioToolbox, qaac adds a lot of silence at the end of the converted song. Maybe a padding problem with the new CoreAudioToolbox? And makeportable no longer works.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 22 January, 2014, 08:57:06 PM
Quote
... the new CoreAudioToolbox.dll 7.9.8.4 ...
The new one is 7.9.8.5 (both file and product version).
Quote
And makeportable no longer works.
Did you read the post right before yours?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: moob2014 on 22 January, 2014, 09:16:56 PM
Update: as someone else already stated, the DLLs are inside AppleApplicationSupport.msi, now renamed with an "AppleApplicationSupport_" prefix (AppleApplicationSupport_NAME.DLL), and the Microsoft runtime folder is still inside AppleMobileDeviceSupport(64).msi. qaac actually works fine; I made a mistake in the test, sorry for that.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 22 January, 2014, 09:28:58 PM
makeportable works if you change the extraction string to:
Code: [Select]
7z e -y -oQTfiles -i!*ASL*.dll -i!*CoreAudioToolbox*.dll -i!*CoreFoundation*.dll -i!*icu*.dll -i!*libdispatch*.dll -i!*objc*.dll -i!*pthreadVC2*.dll AppleApplicationSupport.msi
Now nu774 needs to modify qaac to read these renamed files.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 January, 2014, 09:35:54 PM
Updated makeportable (it should work for both the old and the new packaging style). CoreAudioToolbox.dll and the others now seem to be linked against the Microsoft Visual Studio 2010 C/C++ runtime, which is not included in AppleApplicationSupport.msi.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 22 January, 2014, 09:41:09 PM

Thanks. Where do we get the MS runtime to make it fully portable? Does the old one from qaac work? Also, what should be the new name of the folder "Microsoft.VC80.CRT"?

edit: nu774, typo in makeportable.cmd line 25: the iTunes version number is wrong, 1.11.4 instead of 11.1.4.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 January, 2014, 10:12:16 PM

Quote
Updated makeportable (should work for both the old and the new packaging style). CoreAudioToolbox.dll and the others now seem to be linked against the Microsoft Visual Studio 2010 C/C++ runtime, which is not included in AppleApplicationSupport.msi.

Quote
Thanks. Where do we get the MS runtime to make it fully portable? Does the old one from qaac work? Also, what should be the new name of the folder "Microsoft.VC80.CRT"?

It's likely that you already have them on the system, but you can download them from http://support.microsoft.com/kb/2019667 ("Microsoft Visual C++ 2010 Service Pack 1 Redistributable Package MFC Security Update" is the latest one you need now). The files in qaac_2.30.zip should be fine. For a local (portable) install, just copy MSVC*.dll under the QTportable directory (at the same level as CoreAudioToolbox.dll and the others). Unlike VC80 (VS2005), the VC10 (MSVS2010) C/C++ runtime does not require a special directory and manifests.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 22 January, 2014, 10:27:12 PM

Perfect, done, thanks. Remember the typo in makeportable

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 23 January, 2014, 11:13:34 AM

Quote
CoreAudioToolbox.dll and the others now seem to be linked against the Microsoft Visual Studio 2010 C/C++ runtime, which is not included in AppleApplicationSupport.msi.

AppleApplicationSupport.msi contains F_CENTRAL_msvcr100_x86.someGUID (=msvcr100.dll)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 23 January, 2014, 11:18:56 AM

lvqcl, can we use the 3kB version without any issue? Don't really care anyway, too much work; just run makeportable and done.

edit: Thanks for the new makeportable, nu774. Nothing crazy, but the typo is still there: "rem iTunes 1.11.4 and onwards appends "AppleApplicationSupport_" prefix."

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 23 January, 2014, 11:36:09 AM

Quote
AppleApplicationSupport.msi contains F_CENTRAL_msvcr100_x86.someGUID (=msvcr100.dll)

Thanks for pointing it out. Updated makeportable.cmd to extract them.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 30 January, 2014, 11:59:55 AM

[qaac] release 2.34 (refalac 1.34) posted an hour ago by nu774

- Added experimental option --num-priming to specify an arbitrary number of priming samples between 0 and 2112. This option is only applicable to AAC LC.
- 2112 is the default number of priming samples (delay) of the Apple encoder. By specifying a smaller value, you get a shorter delay. --num-priming=0 is equivalent to --no-delay, and in fact --no-delay is now re-implemented via --num-priming.
- 1024 or greater should be safe. In many cases, it seems that you can go as low as 576 (=448 + 128, where 448 is the number of samples borrowed from the previous frame in the short block case, and 128 is the size of a short block) and still be able to achieve perfect gapless playback.
However, considering the long block case, and also the fact that faad (CLI frontend) discards the first 1024 samples, setting a value smaller than 1024 cannot be said to be always safe.
- When the number of priming samples is X, where X < 576, the decoder will not be able to reconstruct at least the first 576 - X samples. Therefore, you should avoid it unless that portion of the input is known to be silent.

https://sites.google.com/site/qaacpage/cabinet

nu774, are you going to remove "--no-delay"?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: tvholic on 07 February, 2014, 11:34:00 AM

qaac.exe 2.34 x86 doesn't run on Windows XP. It's flagged to require OS version 6.0 (Vista) or higher.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: bbrabant on 07 February, 2014, 03:47:01 PM

Quote
qaac.exe 2.34 x86 doesn't run on Windows XP. It's flagged to require OS version 6.0 (Vista) or higher.

I am also unable to run qaac 2.34 x86 on Windows XP Pro.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 07 February, 2014, 04:32:26 PM

Probably because of the new Visual Studio 2013 runtime used since 2.31? You still have XP, guys? Sorry, but it's time to upgrade.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 07 February, 2014, 05:32:28 PM

Microsoft claims that the VS 2013 runtime is compatible with XP: http://www.microsoft.com/en-ca/download/details.aspx?id=40784 so you may be able to get the latest qaac to run. If the x86 runtime works, please let us know.

If you're going to continue using XP, though, you should get used to the idea of never upgrading any software unless it's absolutely necessary; otherwise, you're just going to break things as new programs become less and less compatible with the OS. As for qaac, unless you need the specific features introduced in the latest versions, you can just use an older one if the runtime doesn't work. There have been no changes to my knowledge that would affect the AAC output beyond using the new switches, so unless you know what the new versions offer and need one of their features, don't upgrade. Problem solved.

That being said, if there is some irreversible change in current and future qaac versions that will break XP compatibility, it would be good to note that somewhere on the qaac site so XP users will know which version to get.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: tvholic on 07 February, 2014, 07:39:50 PM

It's not the MS runtime, it's the linker setting for qaac. Editing the qaac.exe PE header and changing MajorSubsystemVersion from 6 to 5 makes it work in XP.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 07 February, 2014, 08:00:12 PM

In that case, nu774 should be able to fix it in the next version.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 07 February, 2014, 08:24:30 PM

Thanks for reporting. In fact, I was informed by a HA user about not being able to run qaac 2.34, but I didn't hit on this. 2.34 was built using toolset v120 by accident; uploaded a fixed build as 2.34.1 just now.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 07 February, 2014, 09:41:43 PM

nu774, 2.34.1 reports as 2.34, is it the new one anyway? Thanks.

ps: Are you going to completely remove --no-delay in one of the future versions?
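[Editor's note: to make the 2.34 release notes above concrete, a minimal sketch of the new option. The file names and the quality setting are placeholders; the option values and equivalences are the ones stated in the notes.]

Code: [Select]
rem default behaviour: 2112 priming samples, as described above
qaac --tvbr 91 in.wav -o out.m4a
rem shorter delay; 1024 or greater is stated to be safe
qaac --tvbr 91 --num-priming=1024 in.wav -o out.m4a
rem per the notes, these two are equivalent
qaac --tvbr 91 --num-priming=0 in.wav -o out.m4a
qaac --tvbr 91 --no-delay in.wav -o out.m4a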
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 07 February, 2014, 11:32:55 PM

Quote
nu774, 2.34.1 reports as 2.34, is it the new one anyway?

It's just a rebuild under a different build configuration, without any source code change.

Quote
Are you going to completely remove --no-delay in one of the future versions?

Maybe, but not in the near future. It's redundant and still there just for backward compatibility, but there's also no strong reason to remove it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: bbrabant on 08 February, 2014, 06:04:02 AM

Quote
It's just a rebuild under a different build configuration, without any source code change.

The rebuild is running fine on my old Windows XP. I will upgrade ASAP just to keep on using qaac.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 11 February, 2014, 08:18:05 PM

Regarding system requirements, would you only need the x86 VC++ 2013 runtime to run qaac on Windows 7 x64, since qaac is a 32-bit application, or do you need the x64 runtime for a 64-bit OS? (I assume it's only needed to run 64-bit applications.)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: the_weirdo on 12 February, 2014, 02:41:57 AM

Quote
Regarding system requirements, would you only need the x86 VC++ 2013 runtime to run qaac on Windows 7 x64, since qaac is a 32-bit application

Yes. 32-bit applications only need 32-bit runtime libs to run.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: EagleScout1998 on 09 March, 2014, 07:42:41 PM

I have decided to experiment a little with QAAC. I am trying to figure out how to configure foobar2000 to work with QAAC. Perhaps it's something I might use for my iPod in place of MP3 @ -V 2.

I'll confess that I didn't read through every single post in this topic. I am confused about all the different command line options, what they do, and whether they're actually needed. I am basically trying to find a parameter that I can plug into foobar2000 without having to worry about the technical details. Below is a command I copied and pasted from some other topic.

--tvbr 82 --no-optimize --ignorelength - -o %d

I do know that 82 refers to the quality setting; in this case, it's about 165 kbps. The rest of the string is Greek to me.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 10 March, 2014, 03:01:28 AM

Quote
... --tvbr 82 --no-optimize --ignorelength - -o %d
I do know that 82 refers to the quality setting. In this case, it's about 165 kbps. The rest of the string is Greek.

https://github.com/nu774/qaac/wiki/Command-Line-Options

Title: QAAC: discussion, questions, feature requests, etc.
Post by: EagleScout1998 on 10 March, 2014, 06:58:45 AM

I have seen this. My problem is that I don't understand it. Most of these command line options are too technical for me to wrap my head around. When converting to MP3 in foobar2000, I never had to worry about encoder settings; I just slid the quality bar to the level I wanted (usually -V 2, or -Q 0.50 if using Nero) and accepted whatever other parameters foobar2000 gave to it. With QAAC, it's different.

I would like to know if there is any "recommended" set of parameters to include when configuring foobar2000 to use QAAC. This is what I currently have, which I copied from here (http://www.hydrogenaudio.org/forums/index.php?s=&showtopic=91422&view=findpost&p=817602). Would you recommend any changes be made?
Encoder file: qaac.exe
Extension: m4a
Parameters: --tvbr 82 --no-optimize --ignorelength - -o %d
Format is: lossy
Highest BPS mode supported: 32

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Frankie on 16 March, 2014, 06:05:45 PM

Folks, I could also use some help with setting up foobar to use qaac. First I used this command line:

Code: [Select]
%s -V 72 --ignorelength - -o %d

Then I tried the command line EagleScout1998 posted here:

Code: [Select]
%s --tvbr 72 --no-optimize --ignorelength - -o %d

Unfortunately, neither works for me; I always get the same error: (screenshot: http://i.imgur.com/XGgzz9d.png)

What am I doing wrong?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 16 March, 2014, 06:50:07 PM

Remove %s from your command line

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Frankie on 17 March, 2014, 03:25:48 PM

Quote
Remove %s from your commandline

When I remove %s, it doesn't work at all and gives me an error before conversion even starts: (screenshot: http://i.imgur.com/2fkqrb8.png)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 17 March, 2014, 03:42:44 PM

Frankie, do you have iTunes installed, or did you create the portable package? Both are OK, I just want to make sure you have all the libraries. Follow the commands here: https://github.com/nu774/qaac/wiki/Examples#foobar2000

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Frankie on 17 March, 2014, 03:54:12 PM

Quote
Frankie, do you have iTunes installed, or did you create the portable package? Both are OK, I just want to make sure you have all the libraries. Follow the commands here: https://github.com/nu774/qaac/wiki/Examples#foobar2000

No, I don't have iTunes installed (and I don't want to install it). I just downloaded qaac from this link: https://sites.google.com/site/qaacpage/cabinet/qaac_2.35.zip?attredirects=0&d=1

Then I unzipped it and put all the files in my foobar directory. When I follow the instructions on the site you linked and use the command lines posted there, I get the exact same error I already posted (the second one). So without "%s" conversion doesn't even start and I get an error immediately; with "%s" conversion runs but I get an error after the conversion.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 17 March, 2014, 03:59:03 PM

Frankie, qaac requires Apple libraries from iTunes or QuickTime.
You don't need to install iTunes or QuickTime, but you have to download one of the two together with makeportable.zip (https://sites.google.com/site/qaacpage/cabinet) to create a portable version of all the necessary libraries, to be copied where qaac.exe is.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Frankie on 17 March, 2014, 04:15:20 PM

Quote
Frankie, qaac requires Apple libraries from iTunes or QuickTime. You don't need to install iTunes or QuickTime, but you have to download one of the two together with makeportable.zip (https://sites.google.com/site/qaacpage/cabinet) to create a portable version of all the necessary libraries, to be copied where qaac.exe is.

Hm, at https://sites.google.com/site/qaacpage/ it says "You only have to download qaac-x.xx.zip". But OK, I downloaded makeportable.zip; can you please explain what exactly I have to do with this file now?

*edit* OK, maybe I should have read everything on the page I linked. I have now downloaded the QT installer, used the file from makeportable.zip on it, and copied all the resulting files to my f2k folder. And now it works! Thanks for your help, folks!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: the_weirdo on 18 March, 2014, 04:12:30 AM

Quote
Hm, at https://sites.google.com/site/qaacpage/ it says "You only have to download qaac-x.xx.zip". But OK, I downloaded makeportable.zip; can you please explain what exactly I have to do with this file now?

If you read more carefully, you would see at least this sentence: "However, Apple Application Support is required."

"You only have to download qaac-x.xx.zip" means you only need to download the file qaac-x.xx.zip at the cabinet page, not all of them, if qaac is what you want. For legal reasons, the qaac developer cannot include the Apple Application Support files in the package.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: polemon on 30 March, 2014, 11:18:28 AM

Quote
You still have XP, guys? Sorry, but it's time to upgrade.

Sometimes this is not an option. Depending on other circumstances, such as drivers or other software, you're stuck with WinXP for the time being. To put this into perspective: the military uses software that requires DOS 3.0 or even CP/M in some places. Using this for audio coding is a very big stretch, of course, but eh, you get the idea, I suppose...

However, since qaac is a modern, experimental program, yeah, I see no reason why supporting an outdated OS makes sense...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 30 March, 2014, 11:35:34 AM

Fortunately, QAAC is more or less a stand-alone encoder. In contrast, x265 is also a library linkable in e.g. ffmpeg; when the developers decided to use a Windows 6.x native feature for thread-safe classes, one not supported by the XP kernel (no, not simply a linker flag), many "users" (should rather read "testers", to be honest) switched to a kind of "muslim rage" mode. Even before that, they realized that linking a Vista+ libx265 also made ffmpeg crash when trying to encode HEVC with it... So the developers had to adopt a solution which already worked in x264. 32-bit builds will soon be XP compatible again.

Which doesn't mean they will get far with it. HEVC is so complex that the 2 GB for a 32-bit process will barely be sufficient for 1080 HD video, not to mention 4K UHD video. And the encoding speed on a PC so old that only XP would run on it is another matter. Enough off-topic...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sven_Bent on 30 March, 2014, 03:35:41 PM

Sorry for not reading the whole thread, but a 2-pass true VBR option would be wonderful. Sometimes TVBR just makes too big steps:

V 73 = 108 kbps
V 82 = 118 kbps
V 91 = 141 kbps
...
I'm targeting 128 kbps. I'm not sure how much of a quality "loss" there is going from TVBR to ABR mode.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 30 March, 2014, 03:42:59 PM

QAAC can only provide the features the Apple AAC encoder supports. If you want a different core feature, try to ask Apple to implement it...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 30 March, 2014, 04:54:08 PM

Quote
Im targeting 128kbits

If you want some specific bitrate, then you should use CBR or ABR modes

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 31 March, 2014, 02:25:32 AM

But you should not want specific bitrates anymore. VBR has the advantage that the codec uses as much as it needs to maintain a stable quality. Modern container formats like MP4 or MKV are able to handle VBR audio.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sven_Bent on 31 March, 2014, 06:51:46 PM

Quote
Im targeting 128kbits

Quote
If you want some specific bitrate, then you should use CBR or ABR modes

Which I already said above... but I'm unsure how much of a quality "loss" there will be with the bit distribution being less flexible. 2-pass VBR (like in Nero Digital), if it had been possible, would give me the benefit of both worlds. CBR seems to be hitting pretty close to my target bitrate, though. So in reality it boils down to the quality of QAAC CVBR vs Nero Digital 2-pass VBR, which is off-topic for this thread.

@LigH: Totally agree, but having a size limit I need to adjust for size when encoding.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 March, 2014, 11:07:12 PM

In some extreme cases, even ABR can give you a resulting bitrate that is far from the one you requested. For instance:

qaac -a 128 fatboy.wv --> 187.877kbps

By concatenating fatboy.wv (which is very short) multiple times, the bitrate decreases, but you will still get 152 kbps or so and no less.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sven_Bent on 06 April, 2014, 06:08:11 PM

Any possibility of implementing a low-CPU-priority switch, which would start qaac/coretools at lower-than-normal CPU priority? I know I can do it from the command line:

Code: [Select]
start /belownormal qaac.exe

but that pops up a new window that steals focus, which is annoying when multitasking. Adding /b to not open a new window just screws it all up. This is a part of my batch file:

Code: [Select]
start /belownormal /wait Qaac.exe -V 73 "%~n1_track2.wav"
ren "%~n1_track2.m4a" "%~n1_track2.V73.m4a"
start /belownormal /wait Qaac.exe -V 82 "%~n1_track2.wav"
ren "%~n1_track2.m4a" "%~n1_track2.V82.m4a"

If I add /b to make it not start in a new window, it ignores the /wait flag for some reason, which results in over-seeking/disk thrashing and reduces performance.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 06 April, 2014, 08:51:05 PM

Code: [Select]
$ qaac | grep nice
-n, --nice            Give lower process priority.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sven_Bent on 08 April, 2014, 12:44:50 PM

Quote
$ qaac | grep nice
-n, --nice            Give lower process priority.

Thank you, I had completely missed that option.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 01 May, 2014, 05:48:56 AM

[qaac] release 2.36 (refalac 1.36) posted 6 minutes ago by nu774

2.36 includes some minor fixes:
- Improved accuracy of seeking on MP3 files by an increased amount of preroll.
Still doesn't count how many frames are required due to the bit reservoir, but prerolling 9 frames should be enough...
- Fixed bitrate formatting in --format. It had been printing in decimals for 3ch only.
- Fixed --stat. Incorrect values were written at the beginning (regression introduced by --num-priming or something).
- Updated taglib.

https://sites.google.com/site/qaacpage/cabinet

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 01 May, 2014, 06:48:43 AM

I was somehow thinking that the bit reservoir is mainly for making CBR more ABR-ish, so I was surprised to see that lame -V5 (VBR) regularly generates files having a dependency on at most the 9 preceding frames due to the bit reservoir; that is what made me fix the preroll distance for MP3. I didn't think it was being abused for VBR THAT much. Could anybody tell me why?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: halb27 on 01 May, 2014, 03:50:59 PM

If VBR only changed the frame bitrate, a maximum local bitrate of just 320 kbps would be possible. By actively including the bit reservoir, the local bitrate can be significantly higher. In order to do so, and because there is no look-ahead mechanism, the bit reservoir must be kept at a high level no matter whether it's actually used or not.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 01 May, 2014, 04:47:34 PM

Reservoir usage for VBR -V5 in two versions of LAME:

3.98.4 (screenshot: http://i.imgur.com/q7OEJoQ.png)
3.99.5 (screenshot: http://i.imgur.com/rvTRxz8.png)

(3.96 and 3.97 are similar to 3.98.4)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 01 May, 2014, 09:08:51 PM

halb27: Thanks, that makes sense. What confused me was that VBR looked to be using the reservoir more aggressively than CBR.

lvqcl: Wow, I wasn't aware of that. Astonishing difference. What is the software in the picture?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: IgorC on 01 May, 2014, 09:53:37 PM

It's encspot

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 02 May, 2014, 01:07:51 AM

Quote
It's encspot

Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Brazil2 on 02 May, 2014, 08:39:25 AM

Quote
EncSpot

Which version of EncSpot is it? Mine doesn't show this tab

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 02 May, 2014, 10:27:25 AM

Quote
Which version of EncSpot is it? Mine doesn't show this tab

Encspot 2.1 Pro (http://www.hydrogenaudio.org/forums/index.php?showtopic=49769) build 494, and license file (http://wayback.archive.org/web/20070306173418/http://guerillasoft.co.uk/encspot/encspotprolicense.txt)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 02 May, 2014, 12:53:39 PM

Indeed... I remember that GuerillaSoft officially released this license code when it abandoned EncSpot.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Brazil2 on 03 May, 2014, 07:25:14 AM

Quote
Encspot 2.1 Pro build 494, and license file

I have v2.2 Pro beta 2 with the free license; I guess that's the one you're talking about, because v2.2 is build 494 while the latest v2.1 beta 1 is build 489. Anyway, I'm missing three tabs: Bit Graph, Reservoir Usage and Big Values, tested with files encoded with LAME 3.98.4 and 3.99.5, both coming from RareWares, VBR and CBR. Any idea what I'm doing wrong?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 03 May, 2014, 08:15:47 AM

True, EncSpot 2.2 beta 2 does not contain these tabs anymore. But you can still get at least max. and avg. values as directory table columns.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 03 May, 2014, 09:07:54 AM

google for "http://www.hydrogenaudio.org/forums/index.php?showtopic=45738" (with quotes)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Brazil2 on 03 May, 2014, 09:34:48 AM

So there is a v2.1 Pro build 494, and it's showing all of the tabs. Thanks for the tip

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 10 May, 2014, 12:58:27 PM

[qaac] release 2.37 (refalac 1.37) posted 3 hours ago by nu774

- Fixed a bug: AAC in CAF generated by qaac --caf was not playable due to a bogus kuki chunk (descriptors inside the esds box are expected, but qaac was writing a bare AudioSpecificConfig).

https://sites.google.com/site/qaacpage/cabinet

Thanks nu774, as always, for your great work.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: tahaa7 on 15 May, 2014, 11:57:10 AM

I don't have much experience with AAC and lossy encoders in general, since I don't usually use them, but I am now inclined to do so because I do want to have a few more tracks on my poor 16GB iPhone than I would be able to with lossless codecs. Having said that, I ask: what is the best AAC encoding option with qaac, quality-wise? I don't care about efficiency, etc., I just want the closest thing to lossless without actually using lossless. From what I've read so far on the QuickTime AAC encoder, it seems like the best option quality-wise would be CVBR @ 320 kbps with quality 96. Is that correct? Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: o-l-a-v on 15 May, 2014, 12:45:12 PM

I'd go TVBR 127. AFAIK CVBR is made for when you want to achieve a given bitrate, i.e. quality is not the first priority.
TVBR, on the other hand, has quality as its primary goal.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 16 May, 2014, 01:47:05 AM

The C in CVBR is a little confusing compared to the C in Constant Bitrate. Comparing constant and variable bitrate modes, your result will have either a rather constant bitrate or a rather constant quality. The former is only interesting for narrow-bandwidth transmissions, or where an obsolete container format requires it (and such a format would possibly not even support AAC at all). So wherever possible, one should prefer variable bitrate, and if it doesn't need to be Constrained, go for True Variable Bitrate.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: tahaa7 on 16 May, 2014, 03:31:55 AM

Quote
I'd go TVBR 127. AFAIK CVBR is made for when you want to achieve a given bitrate, i.e. quality is not the first priority. TVBR, on the other hand, has quality as its primary goal.

Quote
Comparing constant and variable bitrate modes, your result will have either a rather constant bitrate or a rather constant quality. The former is only interesting for narrow-bandwidth transmissions, or where an obsolete container format requires it. So wherever possible, one should prefer variable bitrate, and if it doesn't need to be Constrained, go for True Variable Bitrate.

But from my experiments so far, CVBR produces the largest files of all modes at 320 kbps, and the overall bitrate of CVBR-encoded files is routinely above 320 (and never below); on some tracks I've encoded so far it goes up to 380 kbps. So shouldn't that mean best quality? Isn't True VBR all about having the best efficiency? From what I've read so far, Constrained VBR actually sets a minimum bitrate and is allowed to go up if necessary, whereas True VBR has no minimum bitrate. Isn't that correct?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 16 May, 2014, 04:01:01 AM

"The highest average bitrate" does not necessarily mean "the highest possible quality", but possibly rather "the same quality may have been achieved with less waste". The explanation regarding a minimum bitrate may be true; there is a chance that it is achieved by stuffing the file with useless junk.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 May, 2014, 05:21:00 AM

Quote
From what I've read so far, Constrained VBR actually sets a minimum bitrate and is allowed to go up if necessary, whereas True VBR has no minimum bitrate. Isn't that correct?

Not exactly. For example, try encoding a sine wave with CVBR 320kbps. If it doesn't really need bits, like in this case, the bitrate will go much lower. However, also try encoding with --gain -50 or something like that (this makes the input _VERY_ quiet). TVBR will aggressively cut the bitrate for this ("Hey, this is too quiet and barely audible. Let's give the bits to other parts!"), but CVBR does not so much.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 17 May, 2014, 05:06:42 AM

[qaac] release 2.38 (refalac 1.38) posted an hour ago by nu774

- Allow nesting of ${} in --fname-format. Now you can write something like ${albumartist|${artist}}, for example, which means: when the album artist tag is present and non-empty, it evaluates to the album artist tag's value; otherwise, it evaluates to the artist tag's value.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 20 May, 2014, 11:31:31 AM

[qaac] release 2.39 (refalac 1.39) posted 26 minutes ago by nu774

- Decoding AAC-LC, MP1/2/3, ALAC (qaac), ALAC (refalac): perfectly supports files with iTunSMPB and multiple edits.
- In files with multiple edits, there exist multiple valid spans to be played back. In other words, there are multiple gaps to be skipped. As far as I know, there is NO other software that supports this properly.
- If you are interested, try https://sites.google.com/site/qaacpage/cabinet/Multiple-edits.zip. This file contains 3 edits. When properly decoded/played, it contains exactly 30 seconds of music. However, there are two very short gaps to be skipped in the middle, as well as the initial delay and the end padding. Therefore, no software other than qaac should play it correctly, without pops/clicks, in exactly 30 seconds.
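[Editor's note: a minimal sketch of the ${} fallback described in the 2.38 notes above. The nested expression is the one quoted in the release note; the input file name and the quality setting are placeholders, and other tag keys are not shown here.]

Code: [Select]
rem name the output after the album artist tag when present, else after the artist tag
qaac --tvbr 91 --fname-format "${albumartist|${artist}}" in.flac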
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 20 May, 2014, 09:41:43 PM

[qaac] release 2.40 (refalac 1.40) posted an hour ago by nu774

- Fixed the new MP4 decoder introduced in 2.39 (found a seek-related bug, and ALAC was being trimmed too much).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: jml90 on 22 May, 2014, 04:01:08 PM

I don't know if I can ask a question here. The QAAC site sends me to this thread, and I don't want to start a new topic for this question. I'm new to the digital audio world, so don't kill me. Also, English is not my native language, so excuse my grammar errors.

Well, I'll start. I'm tired of ripping my CDs every time I decide to change format or codec. I've been using QAAC for a year. When the source is lossless I use TVBR 109 for Audio CD (44.1 kHz) and TVBR 127 for DVD-Video with LPCM tracks (48 kHz). When the source is lossy, I try different TVBR qualities, comparing the original to the new one with Spek.

Now I want to keep my CDs in a lossless format, so I don't have to keep re-ripping every time I change my mind. I chose ALAC. I know that FLAC has a better compression ratio, but I'm used to the iTunes world. Since my first iPod in 2007, iTunes has been my favorite music player. So ALAC is the smartest decision.

This is the situation: I usually use normalize in QAAC to keep the max volume per track of the same album. To do this with ALAC, I put the WAVs of the uncompressed audio from the Audio CD in the same folder as QAAC. I run something like this:

Code: [Select]
qaac.exe --alac --concat --normalize -b 16 --verbose -o "1.m4a" *.wav

I delete the WAVs. I open 1.m4a with foobar2000, which detects the tracks as chapters. I convert to 16-bit WAV with no dither, to the same folder, one file per track. And I run:

Code: [Select]
qaac.exe --alac --verbose *.wav

I delete 1.m4a, move the m4a files to my iTunes folder, and start filling in tags and that stuff.

Now my question is this: normalize converts the audio to 32-bit float; that's why I put -b 16. But I don't really know what this means. Either:
a) the normalize filter is applied with 16-bit int input and the output is also 16-bit int, or
b) 16-bit int is transformed to 32-bit float, the filter is applied, and then the bit depth is reduced back to 16 bits using TPDF dither.

If the answer is a), it's perfect. But if it's b), I think I shouldn't use normalize, since it applies dither and causes noise. What do you think? Thank you.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 May, 2014, 08:46:54 PM

--normalize will multiply each sample by some floating-point number. For example, when the original peak was -3dBFS and you want the new peak to be at around 0dBFS, you have to multiply each sample by 1.4 or so. Multiplying by 1.4 also yields a float, not an integer. -b16 will apply TPDF dither when it scales down the bit depth. If you don't want that behavior, use --no-dither.

Having said that, I don't recommend --normalize for lossless archiving unless you have a special reason to do so. Once you have normalized, the result is NOT a lossless (= identical) copy of the original CD, even if you are using a lossless codec, which means that you cannot check the validity of the results against the AccurateRip DB or the like.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 23 May, 2014, 02:12:43 AM

If you want to play different songs at a similar "loudness" but keep a lossless archive, I believe that techniques like ReplayGain may be more suitable. But they possibly can't help to equalize rather linear (e.g.
classic) with dynamic-compressed (usually pop) sources either.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: jml90 on 23 May, 2014, 01:00:36 PM

Quote
--normalize will multiply each sample by some floating-point number. ... -b16 will apply TPDF dither when it scales down the bit depth. If you don't want that behavior, use --no-dither. Having said that, I don't recommend --normalize for lossless archiving unless you have a special reason to do so. Once you have normalized, the result is NOT a lossless (= identical) copy of the original CD, even if you are using a lossless codec, which means that you cannot check the validity of the results against the AccurateRip DB or the like.

Thank you. I didn't understand why --normalize has to transform the bit depth to float. I won't be using that filter again, since it applies dither and I don't want that, except in special cases. For example, I have a DVD of Madonna's S&S Tour concert released around 2010. I demuxed the PCM track, encoded to ALAC with QAAC using the --raw options and applied --normalize, and the analysis showed a peak around 0.4. In that case I think it is useful to use --normalize, no? If the answer is yes, should I let the dithering happen or should I use --no-dither? Thank you.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: jml90 on 23 May, 2014, 01:18:09 PM

Quote
If you want to play different songs at a similar "loudness" but keep a lossless archive, I believe that techniques like ReplayGain may be more suitable. But they possibly can't help to equalize rather linear (e.g. classic) with dynamic-compressed (usually pop) sources either.

Thank you. I use ReplayGain. iTunes has its own way: instead of writing the appropriate ReplayGain tag it writes an iTunNORM tag. But I use beaTunes, which writes both tags. Other players like foobar2000 use the ReplayGain tag.

ReplayGain makes every song sound at the same volume. It's useful for a party, a gathering, or whatever, when you play a list of songs and don't want to keep raising or lowering the volume. But what it mostly does is decrease the volume of almost all songs, except when a song is not very loud. The reason I used --normalize is not for listening on my computer but on my iPod and my cellphone, where the volume is insufficient. I think I won't be using --normalize for ALAC. When I want to transfer these songs to the iPod or cellphone, I will encode to AAC with the normalize filter. I believe that in lossy formats the bit depth is not important.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Sixth Street on 03 June, 2014, 02:59:45 PM

I'm trying to configure HE-AAC transcoding via QAAC for Subsonic, but I can't seem to figure out the HE-AAC part. I've got LC working via:

Step 1 - ffmpeg -i %s -f wav -
Step 2 - qaac -a %bk --adts - -o -

Can someone help with how to configure it for HE-AAC?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 03 June, 2014, 05:55:14 PM

Sixth Street, is something like "qaac -v128 --he %bk --adts - -o -" working? Please change what comes after "--he" as you like; I don't even know if you can do --adts with --he. Also, as quoted from the CLI, "HE AAC mode (TVBR is not available)", so no -V, only -v, -c and -a.
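[Editor's note: a hedged sketch of the two-step Subsonic pipeline being discussed, splicing the HE-AAC switch from eahm's reply into Sixth Street's working LC setup. %s and %bk are Subsonic's own placeholders; whether --adts works together with --he is, as eahm says, unverified here, and the exact -v/%bk combination is an assumption.]

Code: [Select]
rem step 1 decodes to WAV on stdout; step 2 encodes HE-AAC to an ADTS stream on stdout
ffmpeg -i %s -f wav - | qaac --he -v %bk --adts - -o -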
Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 04 June, 2014, 02:38:03 AM

Checking a 5.1 M4A file created by QAAC with MediaInfo, you may notice a certain difference between two distinct values related to the "channel count". I would assume that MediaInfo just reports header values as they are found (is there another AAC / M4A analysis tool to compare?). Is this a "bug" in QAAC (putting a wrong value in a header field), or based on a peculiarity of the AAC format (e.g. multichannel encoding being an extension to stereo since MPEG-2 BC audio)? ... Well, I may be wrong here; AAC is also known as MPEG-2 NBC (non backward compatible), so I hope there was no similar mistake.

Code: [Select]
General
Complete name                  : ToS_QAAC.m4a
Format                         : MPEG-4
Format profile                 : Apple audio with iTunes info
Codec ID                       : M4A
File size                      : 37.4 MiB
Duration                       : 12mn 14s
Overall bit rate mode          : Variable
Overall bit rate               : 427 Kbps
Encoded date                   : UTC 2014-06-04 06:30:09
Tagged date                    : UTC 2014-06-04 06:31:16
Writing application            : qaac 2.38, CoreAudioToolbox 7.9.8.3, AAC-LC Encoder, TVBR q91, Quality 96

Audio
ID                             : 1
Format                         : AAC
Format/Info                    : Advanced Audio Codec
Format profile                 : LC
Codec ID                       : 40
Duration                       : 12mn 14s
Bit rate mode                  : Variable
Bit rate                       : 426 Kbps
Maximum bit rate               : 578 Kbps
Channel count                  : 2 channels
Original Channel count         : 6 channels
Channel positions              : Front: L C R, Side: L R, LFE
Sampling rate                  : 48.0 KHz
Compression mode               : Lossy
Stream size                    : 37.3 MiB (100%)
Encoded date                   : UTC 2014-06-04 06:30:09
Tagged date                    : UTC 2014-06-04 06:31:16

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 04 June, 2014, 10:08:36 AM

mp4a.channelcount is intentionally set to either 1 or 2, since in the former spec (ISO 14496-12) only 1 or 2 was allowed. However, it seems that no such restriction is present in the new spec (4th ed., 2012-07-15). Therefore, maybe I should fix qaac to write the actual number of channels.

On the other hand, I can say for sure that nothing in AudioSampleEntry (mp4a) matters, and it cannot be taken seriously. For example, the samplesize field (typically 16) is nonsense for AAC. The samplerate field is 16.16 fixed-point, so it is not enough to represent a high sample rate such as 96kHz. I don't understand why MediaInfo looks at it at all.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 07 June, 2014, 05:04:04 PM

[qaac] release 2.41 (refalac 1.41) posted 6 hours ago by nu774

- Added --limiter. It applies a smart limiter that softly clips portions where the peak exceeds (nearly) 0dBFS. "Softly" means that it applies a non-linear filter to the surrounding half cycles (nearest zero-crossing point to zero-crossing point) so that the result fits under 0dBFS but is still smoothly connected to the other parts, resulting in much smaller audible distortion than dumb hard clipping.
- For CVBR/ABR/CBR mode, a bitrate value less than 8 is now treated as "bits per sample". The bitrate is computed as follows:

Bitrate = bits_per_sample * number_of_channels * sample_rate

For example, --cvbr 2 is now equivalent to --cvbr 192 (= 2 * 2 * 48000) for the 2ch, 48kHz case. This can be useful when you want to use CVBR/ABR/CBR with a constant quality setting across varying numbers of channels or sample rates.
- Other minor changes.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 12 June, 2014, 01:17:06 AM

nu774, why was foo_input_caf moved to "Old"?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 12 June, 2014, 08:19:49 AM

Quote
nu774, why was foo_input_caf moved to "Old"?

Now it's here: https://github.com/nu774/foo_input_caf/releases

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Carsi on 12 June, 2014, 11:12:09 AM

Has anyone tried the new limiter option? Am I correct in thinking that it acts like a declip feature... kind of?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 12 June, 2014, 06:39:08 PM

Quote
Has anyone tried the new limiter option? Am I correct in thinking that it acts like a declip feature... kind of?

No. --limiter doesn't magically repair already hard-clipped input (that is, peaks beyond 0dBFS that have already been cut off / thrown away). Instead, --limiter works on floating-point input that can hold peaks beyond 0dBFS, and it softly controls the amplitude so that the output amplitude doesn't exceed the threshold. Compare the following:

Code: [Select]
qaac --play --gain=6 -b16 foo.wav            # hard clip
qaac --play --gain=6 -b16 --limiter foo.wav  # soft clip

In this example, the input is boosted by 6dB in the floating-point domain, then converted into 16-bit int. This will cause clipping if the peak of foo.wav is higher than -6dBFS, but with --limiter you will get a much less audible defect. Note that the non-linearity of --limiter is only capable of handling up to +9dBFS or so; beyond that, the signal is simply hard clipped.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: AiZ on 23 June, 2014, 02:34:00 PM

Hello everybody,

On C.R.Helmrich's and IgorC's advice, I'm posting about a problematic intro for the Apple/iTunes HE-AAC encoder. The upload is here (http://www.hydrogenaud.io/forums/index.php?showtopic=106106); anything under 80kbps is easily caught. Just for info, the sample is less than 5 seconds of the well-known (?) remix of Mr Probz's "Waves" by Robin Schulz.

Have a nice day,

AiZ

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 03 July, 2014, 02:01:50 PM

nu774, is there a particular reason the TVBR setting numbers are what they are? Can we use a "standard" set like I suggested in the other thread? For example: 0, 10, 20, 30, 40, 45, 50, 60, 70, 80, 90, 100, 110, 115, 125? Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 03 July, 2014, 02:24:46 PM

I think these numbers are from the Apple encoder.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 03 July, 2014, 02:26:42 PM

Quote
I think these numbers are from the Apple encoder.

The range I know, but even the precise value that the CLI selects? For example: Q5 - Q13 = 9. I understand this is Apple, but is 9 always picked because nu774 decided so? Can he just change it to 10?
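[Editor's note: stepping back to the 2.41 "bits per sample" note at the top of this page, a worked example using the formula exactly as stated there; the file names are placeholders, and the figures are plain arithmetic, not measured encoder output.]

Code: [Select]
rem 2 bits/sample * 2 channels * 44100 Hz = 176400 bps, i.e. roughly a 176 kbps target
qaac --cvbr 2 in_44k_stereo.wav -o out.m4a
rem the same setting scales itself for 6ch 48kHz input: 2 * 6 * 48000 = 576 kbps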
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Brazil2 on 03 July, 2014, 02:35:10 PM

https://github.com/nu774/qaac/wiki/Encoder-configuration

Quote
TVBR quality steps
Although the TVBR option allows an arbitrary value in the 0-127 range, internally the AAC codec has only 15 actually functional quality steps, therefore the value gets rounded to one of the following:
0, 9, 18, 27, 36, 45, 54, 63, 73, 82, 91, 100, 109, 118, 127

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 03 July, 2014, 02:39:49 PM

Brazil2, useless and time-wasting comment, please read here: http://www.hydrogenaud.io/forums/index.php?s=&showtopic=106199&view=findpost&p=868861

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Brazil2 on 03 July, 2014, 02:53:31 PM

Not useless, because it's the reason why it must be Q63, Q73, Q82 and so on, and not Q60, Q70, Q80: the encoder always falls back to the default values.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 03 July, 2014, 02:55:12 PM

Quote
Not useless, because it's the reason why it must be Q63, Q73, Q82 and so on, and not Q60, Q70, Q80: the encoder always falls back to the default values.

No shit? That's the reason I'm asking if nu774 can change them. Sorry for the misunderstanding if someone didn't get that

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Brazil2 on 03 July, 2014, 03:15:59 PM

As I understand it, it's an encoder (CoreAudioToolbox.dll) limitation, not a QAAC.exe one. But I might be wrong

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 03 July, 2014, 03:26:11 PM

Quote
As I understand it, it's an encoder (CoreAudioToolbox.dll) limitation, not a QAAC.exe one. But I might be wrong

Yes, I think so, but I am trying to understand if it's only the range or the exact fallback numbers as well.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: The Seeker on 03 July, 2014, 03:32:18 PM

Some nice news on the foobar2000 site:

Quote
Latest news 2014-07-03: New release: foobar2000 v1.3.3 beta 1. Free Encoder Pack now includes QAAC, allowing the new foobar2000 version to use the iTunes AAC encoder.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 03 July, 2014, 03:36:16 PM

Quote
Latest news 2014-07-03: New release: foobar2000 v1.3.3 beta 1. Free Encoder Pack now includes QAAC, allowing the new foobar2000 version to use the iTunes AAC encoder.

At this point he could even add ALAC with refalac; no Apple software needed.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: The Seeker on 03 July, 2014, 03:37:45 PM

Quote
At this point he could even add ALAC with refalac; no Apple software needed.

Apart from Apple Application Support, of course.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 03 July, 2014, 03:38:58 PM

Nope, refalac is standalone.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: The Seeker on 03 July, 2014, 03:43:21 PM

Quote
Nope, refalac is standalone.

I took your post to mean "with QAAC and refalac, no Apple software needed". You're quite right, of course.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 09 July, 2014, 12:25:39 PM

Hi all,

Searching for a way to convert my FLAC library to AAC, I stumbled across qaac / foobar. This is exactly what I was looking for. Great converter; thanks a lot to the developer. I have tried out different conversions and noticed the following behaviour when converting FLAC files to M4A and then playing the M4A in foobar:

Source FLAC file = 44.1 kHz -> converted M4A plays back at 44.1 kHz in foobar
Source FLAC file = 88.2 kHz -> converted M4A plays back at 48 kHz in foobar
Source FLAC file = 176.4 kHz -> converted M4A plays back at 48 kHz in foobar
Source FLACs of 48, 96 and 192 kHz -> converted M4A plays back at 48 kHz in foobar

It seems to me it would be optimal to have 88.2 -> 44.1 and 176.4 -> 44.1 kHz, as these are integer transformations. Is there a way to configure qaac to do the encoding this way?

Thanks and regards,
Rudi

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 09 July, 2014, 12:36:37 PM

rudolffischer, I'm sure you can play around with the -r / --rate switch, but really, let the encoder do what it does best. They are meaningless, temporary lossy files anyway, which you will probably delete/change in a few months; who cares if they don't end up being exactly how you think they should be.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 09 July, 2014, 03:29:11 PM

Thanks, eahm. I will use the M4A files on my laptop and iPod, where I cannot put a 3TB lossless library. So I would like the M4As to sound as good as possible, as they will stay on those devices for quite a while. From the sound of it, the "-r keep" switch should do the job. I will give it a try. How can I modify the qaac command line parameters in foobar 1.3.3 beta 2?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 09 July, 2014, 03:32:07 PM

Quote
How can I modify the qaac command line parameters in foobar 1.3.3 beta 2?

In Convert > ... > Output Format > Add New: select the one you'd like to modify, then select Custom; you will see the full command right there.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: AiZ on 09 July, 2014, 03:41:19 PM

Hello,

If you look at the 'qaac --formats' output, you'll see that it can only handle up to 48kHz for the LC-AAC format. On the other hand, HE-AAC can go up to 96kHz. fdkaac can encode up to 96kHz in LC-AAC format.

AiZ

Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 09 July, 2014, 03:54:31 PM

Found it, thanks eahm. AiZ, you're right; I will need to convert all the 88.2kHz files in one go and set -r to 44.1. I'll do some experimenting.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: audiophool on 09 July, 2014, 04:09:54 PM

Quote
It seems to me it would be optimal to have 88.2 -> 44.1 and 176.4 -> 44.1 kHz, as these are integer transformations.

You might want to read one of the threads on HA about resampling. From your statement, I think you might learn something.
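[Editor's note: a minimal sketch of the rate switch being discussed; the quality setting and file names are placeholders, and "-r keep" / a numeric rate in Hz are taken from the posts around this point of the thread.]

Code: [Select]
rem keep the source sample rate instead of letting the encoder pick one
qaac --tvbr 91 -r keep in.flac
rem or force a fixed target rate for 88.2/176.4 kHz sources
qaac --tvbr 91 -r 44100 in.flac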
Title: QAAC: discussion, questions, feature requests, etc.
Post by: AiZ on 10 July, 2014, 03:08:34 AM

rudolffischer,

If I can suggest something not too complicated, as you want to use foobar2000, you could:
- download foo_resampler_mod2 (http://www.hydrogenaud.io/forums/index.php?showtopic=67373) (the mod2 version, I insist) and install it in foobar2000,
- configure the foobar2000 Converter's Processing (screenshot: http://aiz.free.fr/images/rudolffischer_01.png),
- add Resampler (SoX) mod2 to the Active DSPs (screenshot: http://aiz.free.fr/images/rudolffischer_02.png),
- configure the resampler like this (screenshot: http://aiz.free.fr/images/rudolffischer_03.png),
- and finally save your preset.

As audiophool gently said, don't bother with "integer transformations". Any sampling rate above 48kHz can effectively and safely be downsampled to 44.1kHz, period.

Have a nice day,

AiZ

Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 10 July, 2014, 03:31:41 AM

@audiophool - always happy to learn. I found a few threads on resampling which suggest that artifacts are below -100dB, so given the lossy conversion that follows, that should not cause an issue. Could you point to the specific thread you were referring to?

@AiZ - very elegant solution, works like a breeze, thanks a lot. qaac seems to use SoX, so putting a conditional SoX resampling step before the qaac step should basically achieve the same thing, with the flexibility of configuring the processing step. I have set SoX mod2 to only resample 176400 and 88200 to 44100. Everything else is then converted to 48000 by qaac, and 44100 is left at 44100. Brilliant.

Please don't get me wrong, I am not trying to be difficult here. I always try to avoid complex processing if there is a less complex way. More a question of principle than of actually audible differences. Thanks a lot for the help.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: taketoo on 24 July, 2014, 05:37:07 PM

Hi, I am having problems ripping to ALAC in the latest version of foobar. I get the following error text after trying to rip a CD:

Code: [Select]
10 out of 10 tracks converted with major problems.
Source: "cdda://00AD051F" / index: 1
An error occurred while writing to file (The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters) : "C:\Users\IOIOI\Music\George Duke\From Me To You\From me to you.m4a"
Additional information:
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "C:\Program Files (x86)\foobar2000\components\qaac.exe" --ignorelength -s --no-optimize --alac -o "From me to you.m4a" -
Working folder: C:\Users\IOIOI\Music\George Duke\From Me To You\
Conversion failed: The encoder has terminated prematurely with code 2 (0x00000002); please re-check parameters

Title: QAAC: discussion, questions, feature requests, etc.
Post by: TomasPin on 24 July, 2014, 08:52:45 PM

I thought you ought to use refalac.exe instead? It's in the same zip file (https://docs.google.com/viewer?a=v&pid=sites&srcid=ZGVmYXVsdGRvbWFpbnxxYWFjcGFnZXxneDo0YzUwZTNhZjhmOTFmZTgw).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Case on 25 July, 2014, 03:24:15 AM

taketoo, qaac requires components from iTunes for its operation. The easiest way to make it work is to install iTunes on your machine.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 25 July, 2014, 03:47:25 AM

Installing "bloatware" can be avoided, but then you may have to manually extract libraries, which was certainly not intended by Apple...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: taketoo on 25 July, 2014, 04:06:44 PM

Quote
Installing "bloatware" can be avoided, but then you may have to manually extract libraries, which was certainly not intended by Apple...

Any tips on that anywhere? If it's a PITA, I may as well just use FLAC or WMA Lossless (easiest to use with WMC).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 25 July, 2014, 04:15:46 PM

[Search Topic] for "makeportable" (see bottom left of this page).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: TomasPin on 25 July, 2014, 06:00:56 PM

Quote
Any tips on that anywhere?

Just download the normal iTunes or QuickTime installer from the website, change the extension to .zip and extract AppleApplicationSupport.msi. Install that, ???, PROFIT.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: taketoo on 25 July, 2014, 06:33:59 PM

Quote
Just download the normal iTunes or QuickTime installer from the website, change the extension to .zip and extract AppleApplicationSupport.msi. Install that, ???, PROFIT.

LOL, some kind of conspiracy: I try to download QuickTime and nothing happens. Is there any way to get WMC to play FLAC & ALAC? I would use foobar as an MC, but I don't know how to get remotes to work with it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 25 July, 2014, 07:05:01 PM

Quote
Just download the normal iTunes or QuickTime installer from the website, change the extension to .zip and extract AppleApplicationSupport.msi. Install that, ???, PROFIT.

With makeportable from nu774 you don't need to install anything.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: taketoo on 25 July, 2014, 07:10:20 PM

Quote
With makeportable from nu774 you don't need to install anything.

Sorry for being really f*in stupid, but where do you download it? I looked but can't find it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 26 July, 2014, 02:09:19 AM

Quote
Thanks. Alternatively, you can try makeportable.zip at the cabinet page (bat file only, 7z.exe is not included).
QAAC cabinet page (https://sites.google.com/site/qaacpage/cabinet)
If you don't find a "makeportable.zip" there, I don't know what your meaning of "searching" is. Unpack "makeportable.cmd" to a separate directory, copy the QuickTimeInstaller.exe (http://support.apple.com/downloads/#quicktime) next to it, have 7-zip installed, run the CMD. The result will be unpacked to a folder "QTFiles". Its content should be copied to the QAAC x86 installation folder, I believe....
Title: QAAC: discussion, questions, feature requests, etc.
Post by: taketoo on 26 July, 2014, 07:10:59 PM
Thanks,
Alternatively, you can try makeportable.zip at the cabinet page (bat file only, 7z.exe is not included).
QAAC cabinet page (https://sites.google.com/site/qaacpage/cabinet)
If you don't find a "makeportable.zip" there, I don't know what your meaning of "searching" is. Unpack "makeportable.cmd" to a separate directory, copy the QuickTimeInstaller.exe (http://support.apple.com/downloads/#quicktime) next to it, have 7-zip installed, run the CMD. The result will be unpacked to a folder "QTFiles". Its content should be copied to the QAAC x86 installation folder, I believe....
Thanks, I didn't 'click' the cabinet, all working perfectly according to your instructions, thanks very much!!!
Title: QAAC: discussion, questions, feature requests, etc.
Post by: TomasPin on 27 July, 2014, 01:45:14 PM
With makeportable from nu774 you don't need to install anything.
Wasn't aware of that, thanks!
Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 31 July, 2014, 03:59:16 AM
Hi, I would like to keep my lossless (FLAC) and my lossy (m4a) library in sync, the lossless one obviously being the master. Using qaac with foobar allows me to select that files marked for conversion which already exist in the target directory are skipped. I could not find any way to delete files in the target directory that do not have a corresponding file in the source directory (in case I renamed a directory in the lossless library, for example). Is there a way to do this with qaac / foobar? Thanks and regards, Rudi
Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 31 July, 2014, 04:08:15 AM
This will certainly not be the responsibility of QAAC, which is an encoder, not a file manager... But you may be able to handle that in a batch file. Briefly (not complete, without warranty):
Code: [Select]
FOR %%a IN (*.aac) DO IF NOT EXIST "%%~na.flac" DEL "%%a"
But I would be afraid of automatically deleting files.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 03 August, 2014, 02:45:50 PM
Hello, I have a new PC :-) But when I want to make AAC files from FLAC files by e.g.
Quote
"PATH\qaac.exe" -V91 -q2 -r keep --verbose "PATH\FILENAME.flac"
I get the following error message:
Code: [Select]
qaac 2.41, CoreAudioToolbox 7.9.8.6
ERROR: Not available input file format
What is wrong? AFAIK the only thing which is new is 64-bit vs 32-bit Windows 7 on the old PC.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: azaqiel on 03 August, 2014, 03:13:16 PM
Sounds like you're missing libFLAC. If I remember correctly, just put the DLLs from a libFLAC compile into the same folder as QAAC and try again. Rarewares has the libFLAC DLLs for download under lossless. Or, decode your FLAC files to WAV files first.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 03 August, 2014, 03:42:03 PM
Sounds like you're missing libFLAC. If I remember correctly, just put the DLLs from a libFLAC compile into the same folder as QAAC and try again. Rarewares has the libFLAC DLLs for download under lossless.
Of course I downloaded x64DLLs_20120131.zip and extracted it into the same folder as QAAC before I tried it, so libFLAC.dll was already in the same folder as QAAC! I can't compile myself as I don't have a compiler. I downloaded flac_dll-1.3.0-x64-icl.zip from rarewares, extracted, but get "libFLAC_dynamic.dll" and "libFLAC_dynamic.lib"?
Quote
... or, decode your FLAC files to WAV files first.
I know, but I want to avoid this additional step and it worked without it on the old PC.
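If the decode-to-WAV step is ever needed, the temporary file can be avoided by piping; a minimal sketch, assuming flac.exe is on the PATH and reusing the -V91 -q2 settings quoted above (file names are placeholders):
Code: [Select]
rem decode FLAC to stdout and feed qaac via stdin ("-"); no intermediate WAV on disk
flac -d -c "FILENAME.flac" | qaac --ignorelength -V 91 -q 2 - -o "FILENAME.m4a"
Note that piping loses the source tags that qaac's native FLAC input would have copied over.
Title: QAAC: discussion, questions, feature requests, etc.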
Post by: lvqcl on 03 August, 2014, 03:50:40 PM
qaac is 32-bit, so it can't use 64-bit dlls.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 03 August, 2014, 03:56:02 PM
qaac is 32-bit, so it can't use 64-bit dlls.
Oh, I thought on a 64-bit OS you need 64-bit dlls... I'll try the 32-bit versions and report later.
Edit: It works! Although the files have names like "libFLAC_dynamic.dll" and "libFLAC_dynamic.lib". Thank you!
Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 03 August, 2014, 04:59:08 PM
This will certainly not be the responsibility of QAAC, which is an encoder, not a file manager... But you may be able to handle that in a batch file. Briefly (not complete, without warranty):
Code: [Select]
FOR %%a IN (*.aac) DO IF NOT EXIST "%%~na.flac" DEL "%%a"
But I would be afraid of automatically deleting files.
Thanks, LigH, I'll try that on one or two subdirectories. I can always encode the files again if something goes wrong.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 12 August, 2014, 01:20:25 PM
If I use qaac.exe from the Free Encoder Pack will I need to copy the three non-MS DLLs from the official qaac zip or will they be integrated as well?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 12 August, 2014, 03:56:38 PM
If I use qaac.exe from the Free Encoder Pack will I need to copy the three non-MS DLLs from the official qaac zip or will they be integrated as well?
I noticed the qaac.exe from the encoder pack is larger in size than the one from nu774
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 12 August, 2014, 04:05:03 PM
... I noticed the qaac.exe from the encoder pack is larger in size than the one from nu774
BTW: Which encoder pack are you talking about?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 12 August, 2014, 04:41:26 PM
... I noticed the qaac.exe from the encoder pack is larger in size than the one from nu774
BTW: Which encoder pack are you talking about?
The fb2k encoder pack, it includes qaac and fhgaac.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 12 August, 2014, 04:43:00 PM
I noticed the qaac.exe from the encoder pack is larger in size than the one from nu774
http://www.hydrogenaud.io/forums/index.php...st&p=868999 (http://www.hydrogenaud.io/forums/index.php?s=&showtopic=106199&view=findpost&p=868999)
I am NOT talking about the MS runtimes but about the other three DLLs.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 12 August, 2014, 05:00:59 PM
... I am NOT talking about the MS runtimes but about the other three DLLs.
Why don't you simply give it a try without them? If it works, you know you don't need these other three DLLs. If it doesn't work, install them and you are done.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 12 August, 2014, 05:04:43 PM
Why don't you simply give it a try without them? If it works, you know you don't need these other three DLLs. If it doesn't work, install them and you are done.
The other three are only for resampling; qaac will resample anyway (not using SoX) if the three are not used, but I'm not sure how to test pushing them. That's why I need nu774 or Case to reply
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 12 August, 2014, 05:08:52 PM
... I'm not sure how to test pushing them. ...
What do you mean with "test pushing them"?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 12 August, 2014, 05:13:54 PM
What do you mean with "test pushing them"?
Sorry, I meant forcing them (the three SoX DLLs) and not the integrated resampler (I guess Apple's?). I don't resample anyway, I'd just like to know how Case's qaac.exe is made. My guess is that it is not even possible to integrate the SoX DLLs and he modified qaac to work with older, integrated MS runtimes compatible with XP.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 12 August, 2014, 05:21:31 PM
... I meant forcing them (the three SoX DLLs) and not the integrated resampler ...
I got it now :-) BTW: libgcc_s_sjlj-1.dll is also a SoX DLL?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 12 August, 2014, 05:31:15 PM
I got it now :-) BTW: libgcc_s_sjlj-1.dll is also a SoX DLL?
The last line is about that DLL: "Finally, 32bit libsoxr.dll is implicitly dependent on libgcc_s_sjlj-1.dll. So attempt to load libsoxr.dll should fail without it.". This is about the three DLLs: "2. You don't need others in order just to run refalac, but some options (--rate, --lowpass, --matrix-*) don't work without them.". Is there a way to see if Case's qaac.exe has them inside? I should learn some basics of programming.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 12 August, 2014, 06:29:57 PM
... Here http://www.hydrogenaud.io/forums/index.php...st&p=855028 (http://www.hydrogenaud.io/forums/index.php?s=&showtopic=85135&view=findpost&p=855028) asking about refalac. The last line is about that DLL: "Finally, 32bit libsoxr.dll is implicitly dependent on libgcc_s_sjlj-1.dll. So attempt to load libsoxr.dll should fail without it.". ...
Thank you. I looked at libsoxr.dll with Dependency Walker and I could see libgcc_s_sjlj-1.dll.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: kode54 on 13 August, 2014, 02:13:16 AM
... Here http://www.hydrogenaud.io/forums/index.php...st&p=855028 (http://www.hydrogenaud.io/forums/index.php?s=&showtopic=85135&view=findpost&p=855028) asking about refalac. The last line is about that DLL: "Finally, 32bit libsoxr.dll is implicitly dependent on libgcc_s_sjlj-1.dll. So attempt to load libsoxr.dll should fail without it.". ...
Thank you. I looked at libsoxr.dll with Dependency Walker and I could see libgcc_s_sjlj-1.dll.
Congratulations on repeating exactly what he just said that nu774 said a year ago.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Case on 13 August, 2014, 02:42:16 AM
The qaac binary bundled in the Free Encoder Pack doesn't include SoX. I didn't verify, but I'm pretty sure including it would have required code changes, and it's not needed by most users. Also, the built-in AAC encoder profiles do not deal with resampling settings. If users want high-quality resampling I recommend using lvqcl's SoX component.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 13 August, 2014, 03:38:33 AM
I don't think the explicit --rate option is regularly needed or used by users, but be careful of the source sample rate. When the given sample rate isn't supported by the encoder, sample rate conversion will implicitly take place inside of qaac. When libsoxr is not present, the rate converter of the CoreAudio codec component is picked up, which gets the job done, but not as well as libsoxr. OTOH, fdkaac doesn't support this automatic sample rate conversion.
When the encoder doesn't support the given sample rate, it simply fails.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 13 August, 2014, 04:03:58 AM
... Congratulations on repeating exactly what he just said that nu774 said a year ago. ...
Sorry, but my use of Dependency Walker and eahm's answer happened in parallel, and eahm was faster... BTW: Using Dependency Walker, I couldn't find why icuin40.dll, icuuc40.dll and pthreadvc2.dll in the QTfiles subdirectory are needed.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 13 August, 2014, 04:25:15 AM
Thanks Case. drSeehas, my bad, I misread the question, nu774 already replied.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 13 August, 2014, 04:38:21 AM
BTW: Using Dependency Walker, I couldn't find why icuin40.dll, icuuc40.dll and pthreadvc2.dll in the QTfiles subdirectory are needed.
IIRC they were required in the past (one of the CoreAudio-related DLLs was dependent on them), but yes, it seems they are not required anymore.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 13 August, 2014, 04:48:53 AM
I couldn't find why icuin40.dll, icuuc40.dll and pthreadvc2.dll in the QTfiles subdirectory are needed.
You need Apple's libraries for qaac to run, from qaac's homepage: "Since 1.00, qaac directly uses CoreAudioToolbox.dll. Therefore, QuickTime installation is no more required. However, Apple Application Support is required.". You can automatically extract all the necessary files from iTunes or QuickTime with makeportable: https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet) Please read carefully the instructions on the homepage, we are repeating everything every few months?
These three are not Apple's libraries. I have installed neither QuickTime nor iTunes. I have extracted the files with (a modified) makeportable. Where can I find an answer to my question: why are these three libraries needed?
BTW: Using Dependency Walker, I couldn't find why icuin40.dll, icuuc40.dll and pthreadvc2.dll in the QTfiles subdirectory are needed.
IIRC they were required in the past (one of the CoreAudio-related DLLs was dependent on them), but yes, it seems they are not required anymore.
So we can safely delete them and can modify makeportable?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 13 August, 2014, 05:22:53 AM
So we can safely delete them and can modify makeportable?
I think so, although they do no harm.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 13 August, 2014, 06:07:48 AM
So we can safely delete them and can modify makeportable?
I think so, although they do no harm.
I always want a clean system, so I deleted them.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: sundance on 15 August, 2014, 10:36:34 AM
After deleting those DLLs I get an error (missing pthreadvc2.dll) when I try to run "qaac --check" (x86). Maybe my CoreAudioToolbox DLLs are outdated...
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 15 August, 2014, 10:43:05 AM
After deleting those DLLs I get an error (missing pthreadvc2.dll) when I try to run "qaac --check" (x86). Maybe my CoreAudioToolbox DLLs are outdated...
When I do a "qaac --check" (without pthreadvc2.dll), I get the following:
Code: [Select]
qaac 2.41, CoreAudioToolbox 7.9.8.6
libsoxconvolver 0.1.0
libsoxr-0.1.1
libFLAC 1.3.0
Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 15 August, 2014, 02:06:48 PM
[qaac] release 2.42 (refalac 1.42), posted by nu774:
Add --start and --end option to specify start, end point of the input for partial encode. --start, --end (and --delay) supports 3 ways to describe the point.
[[hh:]mm:]ss[.sss..] : Timestamp described in hours, minutes, and seconds. Parts enclosed by brackets can be omitted. Seconds are parsed as double precision number (64bit float), and you can place arbitrary numbers of digits under the decimal point. You will need enough digits to achieve sample accuracy, depending on the sample rate.
ns : Number of samples, followed by 's'.
mm:ss:fff : Cuepoint in minutes, seconds, and frames (1/75 second), followed by 'f'.
Re-linked 32bit libsoxr.dll not to depend on libgcc_s_sjlj-1.dll. Now it is not included in the archive and you don't need it anymore.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: drSeehas on 15 August, 2014, 02:37:36 PM
There is also a new makeportable.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: mrgou on 16 August, 2014, 05:13:09 AM
Hi, Could someone please help me understand what the "-q" setting does? The documentation only describes it as "aac encoding quality", with no additional details. What does a value of 2 do compared to 0? How does it differ from a high or low value of the "-V" setting? Thanks! R.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: sundance on 16 August, 2014, 06:50:09 AM
@drSeehas: Updated to latest CoreAudioToolbox; thanks a lot, everything looks good now:
Code: [Select]
qaac 2.42, CoreAudioToolbox 7.9.8.6
libsoxconvolver 0.1.0
libsoxr-0.1.1
libsndfile-1.0.25
libFLAC 1.3.0
tak_deco_lib 2.3.0 compatible
Title: QAAC: discussion, questions, feature requests, etc.
Post by: wottha on 15 September, 2014, 06:06:08 AM
QAAC feature request: Clip and intersample peak readout like AFCLIP on Mac that prints totals of all the Clip levels and intersample peaks at or above 0dbfs for a given file to the command window.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 15 September, 2014, 10:29:08 AM
Clip and intersample peak readout like AFCLIP on Mac that prints totals of all the Clip levels and intersample peaks at or above 0dbfs for a given file to the command window.
I don't know much about afclip since I don't have a Mac box, but is it something like EBUR128 true peak scanner? As far as I know, recent ffmpeg has EBUR128 filter (GPL-enabled build is required) with true peak support: https://www.ffmpeg.org/ffmpeg-filters.html#toc-ebur128 (https://www.ffmpeg.org/ffmpeg-filters.html#toc-ebur128) Like this:
Code: [Select]
ffmpeg -nostats -i foo.wav -filter_complex ebur128=peak=true -f null -
Title: QAAC: discussion, questions, feature requests, etc.
Post by: wottha on 18 September, 2014, 04:44:55 PM
Clip and intersample peak readout like AFCLIP on Mac that prints totals of all the Clip levels and intersample peaks at or above 0dbfs for a given file to the command window.
I don't know much about afclip since I don't have a Mac box, but is it something like EBUR128 true peak scanner?
As far as I know, recent ffmpeg has EBUR128 filter (GPL-enabled build is required) with true peak support: https://www.ffmpeg.org/ffmpeg-filters.html#toc-ebur128 (https://www.ffmpeg.org/ffmpeg-filters.html#toc-ebur128) Like this:
Code: [Select]
ffmpeg -nostats -i foo.wav -filter_complex ebur128=peak=true -f null -
Thank you nu774. I'll have to try that.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 03 October, 2014, 07:54:33 AM
nu774: Would you possibly consider supporting AviSynth scripts as input directly (either via the AVIFile VfW function or via the AviSynth interface) so that it is not necessary to use a WAV piping auxiliary tool (like: avs2pipemod -wav)? Not "important", just "nice to have".
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 03 October, 2014, 10:01:14 AM
LigH: I might implement avs importing in the future, but don't hold your breath. At present, I'm not very willing to do so because:
- (I think) the main advantage of native input support by qaac is the ability to copy tags from the input, which is not easily achieved by chaining pipes. As for AVS scripts, they don't have tags so you can just use piping. It's just that you need more typing.
- It doesn't seem very difficult, but at the same time, I don't know much about avisynth. I don't know details about ABI compatibility between versions (2.5, 2.6, MT, Avisynth+...), whether "Distributor()" should be automatically inserted or not, unicode capability on path name handling, or something like that (Maybe I could use VFW instead of native avisynth library functions, but it should bring some limitations).
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 05 October, 2014, 08:45:17 AM
AVS input was implemented:
Quote
Support avisynth script (avs) input. Like x264 or avs2pipemod, AVS is directly supported through the avisynth C interface, not via VfW.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: gottogo99 on 05 October, 2014, 05:49:56 PM
AVS input was implemented: ...
Will simplify my batch files. Looking forward to trying it out. Thanks nu774.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: sundance on 06 October, 2014, 02:18:04 AM
@nu774: I'd like to suggest a new feature for qaac, which (I think...) would be quite easy to implement:
>> Allow playlist files (.m3u) as input for qaac.
This could come in very handy if you encode a complete album or your favourite playlist. Or, in my case, encoding an audiobook and concatenating it into a single m4b file. In that scenario, my NAS sometimes lists files in random order when running "dir *.flac". Which would result in an audiobook with random chapters when I use "*.flac" as input arg for qaac... Or could I use (in that particular case) this option:
Code: [Select]
--sort-args            Sort filenames given by command line arguments.
How could that be used to force qaac to sort all files matching a wildcard input file argument by e.g.
filename in ascending order?
As is written in the message, --sort-args will sort the filenames given by the command line. The result will be ordered in ascending order, and the process (encoding) is done in this order. So, if you name files like "01 foo.flac", "02 bar.flac"..., it will work as you intended. This option was introduced by a request which was exactly the same as yours.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: sundance on 06 October, 2014, 04:28:58 AM
Thanks for the explanation. I did search this thread before posting but didn't find anything for "sorting" and "filenames". I was thinking that I needed additional parameters for the "--sort-args" option to specify the sort options... .sundance.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 07 October, 2014, 04:33:20 PM
So, well, thank you very much for the surprisingly quick implementation of AviSynth support. It will help those who create movie backups with e.g. AVC+AAC in MP4 containers.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 15 October, 2014, 04:45:57 AM
I just noticed a problem with QAAC's limiter option. I decoded the 5.1ch AC3 track of David Bowie's Glass Spider concert (DVD) to a 32-bit float wav with eac3to and ran qaac.exe -V 91 --verbose --no-delay --limiter -o "xxx.m4a" "xxx.wav". eac3to had already noticed that there was clipping and thus had adjusted the volume accordingly. I checked the peak with qaac and it showed 0.99. I started wondering why the encoding took so long (estimated to take almost 30 minutes) and looked at the Task Manager. The qaac.exe process was consuming one CPU at maximum but the I/O throughput was very small. When I dropped the limiter parameter, the encoding proceeded normally. Hope this helps in finding something out, I can also provide a small sample file if needed.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 15 October, 2014, 08:21:13 AM
Hope this helps in finding something out, I can also provide a small sample file if needed.
Usually --limiter is not that slow. Please provide a sample that reproduces the issue.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: freddy687 on 19 October, 2014, 11:26:38 PM
Wondering if anyone could help me. I use a program called LPRipper to cut up and label recorded vinyl tracks. It has a built-in encoder option that allows the use of command line encoders. I have been using the Nero encoder but would like to try the QAAC encoder, but I am struggling to understand the command lines for this program. I want to be able to bulk encode files using the same file name in the same folder but encode from WAV to M4A with high quality, 192-256 VBR. Can't seem to get the infile and outfile commands right. Can anyone recommend a sample command line that I might try? Also, is there a requirement that certain of the files need to be in the same folder as the qaac.exe? Does QAAC automatically find the Apple files or do they need to be there? Using Windows 7 and Windows 8 64-bit.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: marc2003 on 20 October, 2014, 12:04:39 AM
@nu774, you probably aren't interested in a 13 year old obsolete OS but i'm stuck running windows xp. qaac itself works fine but i notice your makeportable batch file checks for 7-zip by looking under the HKCU registry key. on my system, it's actually located under HKLM. obviously i've got it running by myself but i just thought i'd mention it....
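As a footnote to marc2003's report, a small batch sketch of a hive-agnostic check, querying HKCU first and then HKLM for the 7-Zip install path (the key and value names are assumptions based on what 7-Zip's installer typically writes):
Code: [Select]
@echo off
rem Find 7-Zip whether it was installed per-user (HKCU) or machine-wide (HKLM)
set "SEVENZIP="
for %%H in (HKCU HKLM) do (
    if not defined SEVENZIP (
        for /f "tokens=2,*" %%a in ('reg query "%%H\SOFTWARE\7-Zip" /v Path 2^>nul ^| findstr /i "Path"') do set "SEVENZIP=%%b"
    )
)
if defined SEVENZIP (echo 7-Zip found at: %SEVENZIP%) else (echo 7-Zip not found)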
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 20 October, 2014, 12:36:50 AM
Can anyone recommend a sample command line that I might try?
If you really want an average bitrate of ~224 kbps (which is probably overkill), try:
Code: [Select]
for %1 in (*.wav) do qaac.exe -v 224 %1 %1.m4a
Also, is there a requirement that certain of the files need to be in the same folder as the qaac.exe? Does QAAC automatically find the Apple files or do they need to be there?
The "QTfiles" folder should be in the same folder as qaac.exe. Unless qaac.exe is in a folder included in the system's PATH variable, you need to cd into that folder before you can run the qaac command.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 20 October, 2014, 12:54:16 AM
@ freddy687: Regarding the last point: If QuickTime or even iTunes is installed, then the required Apple CoreAudioToolBox DLLs will be in a directory which is added to the PATH, so they can be found wherever they are installed. If you don't like Apple software to be installed, and you prefer to extract the DLLs out of QuickTime or iTunes installers using makeportable.cmd, then they have to be put together with qaac.exe to ensure that they are available specifically for QAAC; no other application will need them anywhere else in this case.
You may use a bitrate for QAAC, but it is not recommended; best quality will be achieved in "True VBR" mode. I would suggest using parameters like:
--tvbr 81 --threading -o outfile.m4a infile.wav (for M4A output with True VBR quality mode, quality level 81 – there are some common values for discrete steps, like: 72, 81, 90)
--abr % --adts --threading -o outfile.aac infile.wav (for ADTS AAC output with ABR mode, using the maximum bitrate from the slider in LP Ripper options)
^ untested. Try and report either version in different combinations.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 20 October, 2014, 01:38:27 AM
@nu774, you probably aren't interested in a 13 year old obsolete OS but i'm stuck running windows xp. qaac itself works fine but i notice your makeportable batch file checks for 7-zip by looking under the HKCU registry key. on my system, it's actually located under HKLM. obviously i've got it running by myself but i just thought i'd mention it....
Thanks, updated to HKLM.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Hex144 on 24 October, 2014, 12:31:30 PM
qaac can't use libsoxconvolver64.dll for lowpass, only the 32-bit version. Why is this?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 24 October, 2014, 12:40:30 PM
qaac can't use libsoxconvolver64.dll for lowpass, only the 32-bit version. Why is this?
When you download qaac you can see two folders inside the archive, x86 and x64. There is no qaac.exe inside the x64; is it too hard to understand qaac.exe is only 32-bit?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 25 October, 2014, 03:23:26 AM
Question on the "-r keep" switch. I have tried the "-r keep" switch and seen that it works in the following way:
Input    Output
44.1     44.1
48       48
88.2     48
96       48
176.4    48
192      48
I know that from a sound quality point of view it probably does not make any difference, but would it be possible to change the behavior so that the division of sample rate is an integer?
Input    Output
44.1     44.1
48       48
88.2     44.1
96       48
176.4    44.1
192      48
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 25 October, 2014, 05:33:32 AM
I know that from a sound quality point of view it probably does not make any difference, but would it be possible to change the behavior so that the division of sample rate is an integer?
When the input rate is 64kHz (which is not supported), what do you want? 32kHz? If you stick to integer division for some magical reason, yes, you have to pick 32kHz in this case, instead of 48kHz. However, I don't think it's a good choice. Do you? The current behavior of qaac is quite simple. It just picks the nearest value that is supported by the encoder. And I want to keep it simple. If you want to resample to a specific rate for your needs, you can simply use --rate=44100 or something.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 25 October, 2014, 05:50:15 AM
Hi nu774. Not meaning to be a pain, apologies if I have been. qaac is simply the best and easiest way to convert to aac, that's why I use it. My question only concerns the sample rates which are multiples of 44.1 or 48. For everything else the behavior is fine as it is (using 48 for anything >48 I guess). In my view integer transformation is simpler than non-integer, less processor intensive. I simply like the "efficiency" of the integer division vs. the non-integer one. If this adaptation is not easily possible, I am sure I can find a way to detect the input sample rate before the conversion and set the -r parameter (which is what I have done manually in the past, but I would like to automate the process).
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 25 October, 2014, 06:12:50 AM
My question only concerns the sample rates which are multiples of 44.1 or 48. For everything else the behavior is fine as it is (using 48 for anything >48 I guess).
Strictly speaking, your concern looks to be only about multiples of 44.1. I suppose you are ok when multiples of 32/48 result in 48.
Quote
If this adaptation is not easily possible, I am sure I can find a way to detect the input sample rate before the conversion and set the -r parameter
It's possible; in pseudo-code it should look like this:
Code: [Select]
output_rate = UNDEF
for n in 2..MAX:
    if 44100 * n == input_rate:
        output_rate = 44100
        break
if output_rate == UNDEF:
    output_rate = nearest_supported_value(input_rate)
As you can see, you have to treat multiples of 44100 as a "special case". I just don't think it deserves the special treatment like this.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: [JAZ] on 25 October, 2014, 08:19:39 AM
My question only concerns the sample rates which are multiples of 44.1 or 48. For everything else the behavior is fine as it is (using 48 for anything >48 I guess). In my view integer transformation is simpler than non-integer, less processor intensive. I simply like the "efficiency" of the integer division vs. the non-integer one.
Which is your knowledge of resampling algorithms? More so, which is your knowledge of the specific resampler that qaac (in fact Apple's codec) has?
Either your knowledge is limited and you think in terms of "rounding" and "cutting" (as if resampling 88.2 to 44.1 consisted only of dropping one of each two samples), or you think in terms of an upsample/filter/decimator combination, and think that a 2:1 ratio requires fewer computations than a 160:147 ratio (but that ignores that such an algorithm, to be efficient, does not generate the samples that it doesn't need, and it might, or might not, be the type of resampler used in this case).
So given these facts... can you explain again why you think it is better to have integer divisions for resampling?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 25 October, 2014, 01:07:49 PM
Hi jaz, I admit to knowing next to nothing about resampling algorithms, except that one cannot cheat maths. More complex algorithms take more processing power, no matter how you slice and dice them. A conversion between two sampling rates that are non-integer or closely spaced will need to calculate more complex filters than an integer one (if the same conversion quality is to be achieved). I have done two conversions and looked at the time needed for the process. Same 88.2kHz FLAC file, once to 48 and once to 44.1 sample rate. The conversion times were 6.9 and 5.9 seconds. Processor load in both cases was around 30% during the conversion. (http://s25.postimg.org/s0161yzmj/Conversion_88_2_44.1.jpg) (http://s25.postimg.org/s0161yzmj/Conversion_88_2_48.jpg) So it seems to me that there may indeed be a difference in processor load between integer and non-integer sample rate conversion of close to 20%. No scientific test, I admit. Regards, Rudi
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Octocontrabass on 25 October, 2014, 04:07:50 PM
More complex algorithms take more processing power, no matter how you slice and dice them.
If you slice and dice any sample rate conversion to only work on a single input:output ratio, it will be faster than the general case. It will also be much less useful. When the algorithm is complex enough, the benefits of a single-ratio optimized case outweigh the cost of maintaining separate versions for each ratio. The algorithm in SoX, for example, is complex enough that I doubt it has separate versions for specific ratios.
A conversion between two sampling rates that are non-integer or closely spaced will need to calculate more complex filters than an integer one (if the same conversion quality is to be achieved).
The filters may be more difficult for you to calculate with paper and a pencil, but the computer doesn't know how to take shortcuts unless it's been programmed to do so. (See above.)
I have done two conversions and looked at the time needed for the process. Same 88.2kHz FLAC file, once to 48 and once to 44.1 sample rate. The conversion times were 6.9 and 5.9 seconds. Processor load in both cases was around 30% during the conversion.
Assuming the same input rate and a shortcut-free algorithm, the required processing power will be linearly correlated with the output sample rate. A lower output rate means fewer samples need to be calculated, which takes less CPU time.
(Additionally, you may be bound by disk I/O - in which case, a lower output sample rate means less time needs to be spent waiting for the disk.)
No scientific test, I admit.
At least you're willing to admit it. To make it more scientific, you'd need to run the same test many times and average the results (with a bit of statistical analysis thrown in) to see what kind of differences are typical. You might also try longer/shorter files, to get a good idea of the per-run overhead, or different output sample rates to get a better idea of how speed correlates with output sample rate.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: [JAZ] on 25 October, 2014, 04:55:49 PM
I didn't expect such an important difference, so I did some tests. qaac 2.41, toolbox: 7.9.8.6
88 -> 48 : 11.9x ( 6:12 minutes )
88 -> 44 : 14.5x ( 5:04 minutes )
48 -> 48 : 15.0x ( 4:55 minutes )
44 -> 44 : 15.8x ( 4:40 minutes )
So, there is a difference between resampling and not resampling, there is a difference in the codec working at 44 or 48, and there is a difference in resampling from 88 to 44 compared to 88 to 48. In the end, it all adds up, but it is true that in this case, the biggest difference comes from the 88 -> 48 resampling.
I did one more test. Now, using the foobar resampler (PPHS):
88->48->48 : 14.0x ( 5:17 minutes )
88->44->44 : 15.1x ( 4:57 minutes )
Which is more in line with what I expected.
Soo....... You are right that when using qaac/Apple codec's resampler, it is better to use integer resampling, because the penalty of using non-integer is big. I am right in saying that resampling to non-integer factors should not have a big penalty. And in any case, I've demonstrated another thing... It is always faster to use foobar's resampler
Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 25 October, 2014, 06:16:19 PM
I did one more test the other way around. 96kHz file conversion to 48 and 44.1:
96 -> 48: 9.64 seconds
96 -> 44.1: 9.59 seconds
This would also support the hypothesis that integer is faster than non-integer (if one takes into account that around 8% fewer samples need to be calculated in the 96 -> 44.1 conversion). This case is much less clear than the other one though. Given nu774's inputs a few posts back, I think I'll rest my case. I feel my request is too niche and may have unintended side effects that I am not aware of at this time. Be it as it may, thanks a lot for your inputs and discussion.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 25 October, 2014, 09:48:55 PM
First, qaac can use 3 types of resampler for AAC encoding: soxr, CoreAudio, CoreAudio (codec builtin). When libsoxr (included in the qaac distribution) is present, it's chosen by default.
Second, when you do a speed comparison of the resampler alone, you'd better use the -D switch and output to NUL like this:
qaac -D --rate=44100 input.88200.wav -o NUL
Third, as for soxr, it looks like integer-ratio downsampling is indeed faster (probably requires smaller numbers of filter stages). Here are the results in my environment:
88200 -> 48000: 117x
88200 -> 44100: 380x
88200 -> 36000: 195x
Finally, if you care about the speed, you can benefit from --threading. When --threading is on, resampling runs on the input thread, encoding runs on another thread. Since encoding is a lot slower than input + resampling, encoding speed dominates the whole process, and the total speed shouldn't be affected by the resampling efficiency.
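To reproduce measurements like these, a small sketch following the advice above; it assumes an 88.2 kHz test file named input.88200.wav and qaac on the PATH (qaac prints its speed multiplier on the console):
Code: [Select]
@echo off
rem benchmark the decode+resample path alone: -D is decode-only, output goes to NUL
for %%R in (48000 44100 36000) do (
    echo Target rate: %%R Hz
    qaac -D --rate=%%R input.88200.wav -o NUL
)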
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 26 October, 2014, 05:16:14 AM
Is there any comparison regarding quality between the different options in qaac?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 26 October, 2014, 07:40:52 AM
HI !! Does qaac support downmixing channels to mono regardless of the input channel configuration? I mean:
- convert 5.1 to mono
- convert 2.0 to mono
- leave mono intact
--matrix-preset mono crashes for mono files ;(
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 26 October, 2014, 08:02:30 AM
Is there any comparison regarding quality between the different options in qaac?
What quality? qaac offers only a few options regarding AAC encoding quality. You choose an encoding strategy (TVBR or something), choose TVBR quality or bitrate. You have another option -q that controls the quality/speed trade-off but usually you don't have to touch it, and that's all.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 26 October, 2014, 08:17:35 AM
HI !! Does qaac support downmixing channels to mono regardless of the input channel configuration? I mean: convert 5.1 to mono, convert 2.0 to mono, leave mono intact. --matrix-preset mono crashes for mono files ;(
No. You have to create a file that contains the mixing matrix spec and let qaac know your spec. --matrix-preset expects that you have stored those spec files in certain pre-defined directories, and qaac just searches for them. Since the shape of the matrix is different when the number of input channels is different, you cannot share the same "catch-all" matrix for 2.0 input and 5.1 input.
For mono output, the stereo->mono matrix will be like this:
Code: [Select]
1 1
The 5.1 surround->mono matrix will be like this (this one discards the LFE channel):
Code: [Select]
1 1 1 0 1 1
https://github.com/nu774/qaac/wiki/Matrix-mixer (https://github.com/nu774/qaac/wiki/Matrix-mixer)
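To make the matrix usage concrete, a minimal sketch (the --matrix-file switch as documented on the wiki page linked above; the file name is just an example): save the coefficients to a plain text file and point qaac at it.
Code: [Select]
rem stereo-to-mono.txt contains the single line:  1 1
qaac --matrix-file stereo-to-mono.txt -V 91 input.wav -o output_mono.m4a
Title: QAAC: discussion, questions, feature requests, etc.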
Post by: Anakunda on 26 October, 2014, 02:03:31 PM
For mono output, the stereo->mono matrix will be like this:
Code: [Select]
1 1
Good, I have this file but it only processes on stereo input, while I sometimes need to convert a bunch of sources of different formats. Do U plan to implement a generic downmix-to-mono switch?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 27 October, 2014, 12:54:37 AM
Good, I have this file but it only processes on stereo input, while I sometimes need to convert a bunch of sources of different formats. Do U plan to implement a generic downmix-to-mono switch?
No.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Boulder on 27 October, 2014, 04:41:31 AM
Is there any comparison regarding quality between the different options in qaac?
What quality? qaac offers only a few options regarding AAC encoding quality. You choose an encoding strategy (TVBR or something), choose TVBR quality or bitrate. You have another option -q that controls the quality/speed trade-off but usually you don't have to touch it, and that's all.
I meant the resampling quality as that was being discussed (forgot to quote, hence the confusion). I've been wondering whether it's better to resample outside qaac or use some internal method for resampling 96 kHz material to 48 kHz.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 27 October, 2014, 08:24:30 AM
I meant the resampling quality as that was being discussed (forgot to quote, hence the confusion). I've been wondering whether it's better to resample outside qaac or use some internal method for resampling 96 kHz material to 48 kHz.
soxr: flawless
CoreAudio --native-resampler=norm: has aliasing
CoreAudio --native-resampler=bats: no aliasing, VERY SLOW
AAC default: worse than soxr, but I guess the difference is usually inaudible
ALAC default: looks equivalent to --native-resampler=norm
I recommend soxr, which is enabled by default when libsoxr is present. However, the AAC default will also be enough.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 27 October, 2014, 11:08:40 AM
First, qaac can use 3 types of resampler for AAC encoding: soxr, CoreAudio, CoreAudio (codec builtin). When libsoxr (included in the qaac distribution) is present, it's chosen by default.
Second, when you do a speed comparison of the resampler alone, you'd better use the -D switch and output to NUL like this:
qaac -D --rate=44100 input.88200.wav -o NUL
Third, as for soxr, it looks like integer-ratio downsampling is indeed faster (probably requires smaller numbers of filter stages). Here are the results in my environment:
88200 -> 48000: 117x
88200 -> 44100: 380x
88200 -> 36000: 195x
Finally, if you care about the speed, you can benefit from --threading. When --threading is on, resampling runs on the input thread, encoding runs on another thread. Since encoding is a lot slower than input + resampling, encoding speed dominates the whole process, and the total speed shouldn't be affected by the resampling efficiency.
It is probably more of a (objectively unfounded, I know) wish to keep files within the same multiple of 44.1/48 after compression and re-expansion, than a worry about speed. It would be nice if -r=auto or -r=keep would stay in the same "base" samplerate family for samplerates > 48. Probably comes from a hardware perspective, where DACs use separate oscillators for multiples of 44.1 and 48. I know my concern has absolutely no scientific basis.
;-)
Title: QAAC: discussion, questions, feature requests, etc.
Post by: freddy687 on 28 October, 2014, 11:12:04 PM
Would appreciate help again with command lines for bulk encoding WAV files with QAAC. Trying to use it with a program, LPRipper, that I have successfully used with NeroAAC. I created a separate folder for QAAC and extracted the AppleApplicationSupport.msi to the folder with qaac.exe. Then set the Program File to direct to C:\Windows\System32\cmd.exe. Tried two suggested commands: C:\QAAC\qaac_2.44\x86 qaac.exe --tvbr 81 --threading -o outfile.m4a infile.wav . And C:\QAAC\qaac_2.44\x86 qaac.exe -v 224 %1 %1.m4a . Start the command to encode, the CMD box opens and another box that says "Encoding" and "Elapsed Time". After several minutes of time has elapsed with no apparent results, I finally terminate the process and find that there is no converted file in the directory. Are there additional files I need? Or am I just not understanding the command syntax? Would appreciate any help. Thanks.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 28 October, 2014, 11:39:51 PM
And C:\QAAC\qaac_2.44\x86 qaac.exe -v 224 %1 %1.m4a .
You need -o before the output filename.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 28 October, 2014, 11:43:19 PM
freddy687:
3) Download 7-Zip Portable (http://portableapps.com/apps/utilities/7-zip_portable) and extract it.
5) Extract makeportable.zip
6) Copy iTunes where makeportable.cmd is, same folder
7) Copy 7z.exe where makeportable.cmd is, same folder
8) Run makeportable.cmd
9) Extract qaac_x.xx.zip
10) Copy the folder QTFiles where qaac.exe is
11) Launch qaac.exe and convert whatever you want.
Ok, or just do what nu774 said... I guess
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 29 October, 2014, 02:49:46 AM
Using % placeholders will probably work better if you do it like so:
Code: [Select]
for %1 in (*.wav) do qaac.exe --tvbr 81 -o "%1.m4a" "%1"
If you need to use the full path to qaac, be sure to enclose it in quotes if the path contains spaces.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: audiophool on 29 October, 2014, 06:23:24 AM
3) Download 7-Zip Portable (http://portableapps.com/apps/utilities/7-zip_portable) and extract it.
5) Extract makeportable.zip
6) Copy iTunes where makeportable.cmd is, same folder
7) Copy 7z.exe where makeportable.cmd is, same folder
8) Run makeportable.cmd
I have a thought. It may be stupid, I don't know much about Windows shell scripting. I suppose a substantial fraction of people who dislike the idea of having a full-blown QuickTime or iTunes on their Windows box may have 7-Zip already installed on their system. (Because they may prefer it over the bloatier and closed-source competition such as WinZip and WinRAR.) Wouldn't it be relatively easy for the makeportable script to check whether 7z x86 or x64 is present in its default location? And if so, you just do a "set path=%PATH%;C:\Program Files\7-Zip" or so. In my opinion, it would make usage of makeportable more convenient.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 29 October, 2014, 06:32:11 AM
Wouldn't it be relatively easy for the makeportable script to check whether 7z x86 or x64 is present in its default location? And if so, you just do a "set path=%PATH%;C:\Program Files\7-Zip" or so.
This is already done in makeportable.cmd (it searches the registry for the 7-Zip location under HKLM and adds it to the PATH environment variable).
Title: QAAC: discussion, questions, feature requests, etc.
Post by: audiophool on 29 October, 2014, 07:09:46 AM
This is already done in makeportable.cmd (it searches the registry for the 7-Zip location under HKLM and adds it to the PATH environment variable).
Interesting, it doesn't work for me. I have 7-Zip (64 bit) installed in its standard location on a Win 7 Pro x64 box. I am at work right now and cannot double-check right away, but I'm fairly sure the script aborts saying it cannot execute 7z.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 29 October, 2014, 12:12:29 PM
I suppose a substantial fraction of people who dislike the idea of having a full-blown QuickTime or iTunes on their Windows box may have 7-Zip already installed on their system. (Because they may prefer it over the bloatier and closed-source competition such as WinZip and WinRAR.)
They are not that "full blown" like many people still think; they are pretty light and fast actually. Many use WinRAR or the default Zip extractor of Windows; some don't even know or need 7-Zip. There isn't an official list of apps that everyone installs only because they are open and small
This is already done in makeportable.cmd (it searches the registry for the 7-Zip location under HKLM and adds it to the PATH environment variable).
It never worked for me either. I have no idea why I thought it never worked, but now it works; thanks for this.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: jacobacci on 29 October, 2014, 01:47:03 PM
RE: Command Line Options Reference (https://github.com/nu774/qaac/wiki/Command-Line-Options)
I noticed that on the qaac command line options page on github, the line breaks are not quite right, which leaves some single characters hanging over to the next line. This looks a bit confusing. @nu774, could you possibly correct this, thanks.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: freddy687 on 29 October, 2014, 06:50:36 PM
Still not having any success. And when I follow the extraction instructions per eahm and run makeportable, I get an error that says the system can not find the registry key or value. Installer executable not found.
Title: QAAC: discussion, questions, feature requests, etc.
Makeportable would place the DLLs into your QTfiles directory, but you should probably save yourself some trouble and install the runtime normally, so that other programs can use it, too. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 29 October, 2014, 08:17:18 PM Still not having any success.  And when I follow the extraction instructions per EAHM,  and run makeportable I get an error that says system can not find the registry key or value.  Installer executable not found. > system can not find the registry key or value When 7-zip is not installed on your system, this is normal and can be ignored if 7z.exe exists. You need either iTunesSetup.exe or iTunesSetup64.exe or QuickTimeInstaller.exe placed under current folder, or run makeportable like this: C:\foo\bar> makeportable path\to\iTunesSetup.exe Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 29 October, 2014, 08:29:06 PM RE: Command Line Options Reference (https://github.com/nu774/qaac/wiki/Command-Line-Options) I noticed that on the qaac command line options page on github, the line breaks are not quite right, which leads some single characters hanging over to the next line. This looks a bit confusing. @nu774, could you possibly correct this, thanks. Yes, it seems so. However, that is because current CSS of github that provides not enough space for 80 chars per line. To fix it, I have to completely build up the markup from the start, instead of just copy/pasting from qaac help message and using pre tag. I don't want to do it. Maybe I should remove the page... you can view it anyway by invoking qaac from command prompt. Title: QAAC: discussion, questions, feature requests, etc. Post by: freddy687 on 29 October, 2014, 10:32:29 PM Yea, got it working and great results!! Thanks so much to all for your help!! Title: QAAC: discussion, questions, feature requests, etc. Post by: jacobacci on 30 October, 2014, 03:31:42 AM RE: Command Line Options Reference (https://github.com/nu774/qaac/wiki/Command-Line-Options) I noticed that on the qaac command line options page on github, the line breaks are not quite right, which leads some single characters hanging over to the next line. This looks a bit confusing. @nu774, could you possibly correct this, thanks. Yes, it seems so. However, that is because current CSS of github that provides not enough space for 80 chars per line. To fix it, I have to completely build up the markup from the start, instead of just copy/pasting from qaac help message and using pre tag. I don't want to do it. Maybe I should remove the page... you can view it anyway by invoking qaac from command prompt. Could you make the text available as a download? I have solved the problem for myself by copying / pasting the text to a local text file. Not really a big deal, but I need to check manually whether there have been changes. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 30 October, 2014, 04:49:16 AM Could you make the text available as a download? I have solved the problem for myself by copying / pasting the text to a local text file. Not really a big deal, but I need to check manually whether there have been changes. It should be possible, but I cannot assure of keeping the online command line options document up to date. What is shown by qaac.exe will always be the original and correct. Is it so difficult just to run "qaac | more" or "qaac >help.txt" or something in the command prompt? I think there are two types of qaac users. 
One just uses qaac from some GUI app and rarely runs it from command prompt. The other often uses qaac from command prompt. Help message shown by qaac.exe itself should be enough for the latter. Indivisual document might be useful for the former, but I guess most of them just set up for GUI app once, reading and copy/pasting the command line that is provided somewhere on the net. Title: QAAC: discussion, questions, feature requests, etc. Post by: Carsi on 01 November, 2014, 07:57:06 PM Why is there no x64 version available for qaac? I can see the refalac is in x64. Btw thanks for your work, I use it quite often! Title: QAAC: discussion, questions, feature requests, etc. Post by: Aleron Ives on 01 November, 2014, 11:18:17 PM There would be no performance benefit from a 64-bit qaac, and nu774 probably doesn't want to spend time maintaining redundant versions of the same program. It doesn't need more RAM or hardware registers, and 32-bit programs have equal performance to 64-bit programs under Windows, so leaving qaac as 32 bit makes it more compatible, since x86 Windows can't run 64-bit programs, but x64 Windows can run 32-bit programs. Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 02 November, 2014, 12:13:04 AM Why is there no x64 version available for qaac? I can see the refalac is in x64. Btw thanks for your work, I use it quite often! Because CoreAudioToolbox is 32 bit only, and it's impossible to load 32 bit DLL from 64 bit executable. Title: QAAC: discussion, questions, feature requests, etc. Post by: Aleron Ives on 02 November, 2014, 02:24:49 AM That's also a good reason. Title: QAAC: discussion, questions, feature requests, etc. Post by: jacobacci on 17 November, 2014, 05:27:15 PM Hi I have been using qaac on audio files (flac) that have long path names (>256 chars). I used dbpoweramp batch converter and Xiklone as conversion front ends. With both programs qaac reported conversion errors and did not convert the files correctly. Using both dbpoweramp batch converter or Xiklone with mp3 or other converters on files with long paths worked without any problems. Is anyone aware that qaac would have issues handling files with long path names? Title: QAAC: discussion, questions, feature requests, etc. Post by: nu774 on 17 November, 2014, 07:22:21 PM Hi I have been using qaac on audio files (flac) that have long path names (>256 chars). I used dbpoweramp batch converter and Xiklone as conversion front ends. With both programs qaac reported conversion errors and did not convert the files correctly. Using both dbpoweramp batch converter or Xiklone with mp3 or other converters on files with long paths worked without any problems. Is anyone aware that qaac would have issues handling files with long path names? Aren't they configured to use piping? When piping is used, path name of the source file shouldn't matter in any way, since they are not passed to the encoder. That being said... Windows requires a special prefix "\\.\" for very a long path name (longer than 256 chars) to be opened. Therefore, qaac will internally insert the prefix when input path name given by the command line is longer than 256 chars, but it doesn't see if the prefix is already included. Therefore, if the front ends are not configured to use piping, and if they feed the encoder with the already prefixed path name of the source file, qaac will fail to open it. Title: QAAC: discussion, questions, feature requests, etc. 
Post by: nu774 on 17 November, 2014, 07:28:09 PM
Or maybe the issue is on the output side. If an already-prefixed output path name is given, qaac will fail to open it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: redsandvb on 21 January, 2015, 10:03:44 PM
I had a working foobar2000 converter setup for QAAC, though I haven't used it for a very long time, and just recently tried to convert a FLAC track to m4a and got these errors:

"F:\New\Converted\Kalapana - The Very Best Of Kalapana\18 - When The Morning Comes.m4a"
1 out of 5 tracks converted with major problems.
Source: "F:\New\Kalapana - The Very Best Of Kalapana (FLAC)\Kalapana - The Very Best Of - 18 - When The Morning Comes.flac"
An error occurred while writing to file (The encoder has terminated prematurely with code -1073741515 (0xC0000135); please re-check parameters) : "F:\New\Converted\Kalapana - The Very Best Of Kalapana\18 - When The Morning Comes.m4a"
Encoder stream format: 44100Hz / 2ch / 16bps
Command line: "C:\Program Files (x86)\foobar2000\qaac.exe" -V 100 -o "18 - When The Morning Comes.m4a" - --no-optimize
Working folder: F:\New\Converted\Kalapana - The Very Best Of Kalapana\
Conversion failed: The encoder has terminated prematurely with code -1073741515 (0xC0000135); please re-check parameters

The command line I used was: -V 100 -o %d - --no-optimize

What am I missing? Thanks!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 January, 2015, 12:37:32 AM
You need the 32-bit versions of msvcr120.dll and msvcp120.dll. They are included in the x86 folder in the qaac zip archive. You can copy them to the same directory as qaac.exe.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: redsandvb on 22 January, 2015, 02:10:59 AM
You need the 32-bit versions of msvcr120.dll and msvcp120.dll. They are included in the x86 folder in the qaac zip archive. You can copy them to the same directory as qaac.exe.

Thank you so much, that did the trick! Strange, I don't remember deleting those files; is this a new requirement?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Case on 22 January, 2015, 02:46:21 AM
You were probably using the qaac.exe bundled with foobar2000's Free Encoder Pack (http://www.foobar2000.org/encoderpack) previously. It doesn't require these files.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: redsandvb on 22 January, 2015, 03:07:11 AM
You were probably using the qaac.exe bundled with foobar2000's Free Encoder Pack (http://www.foobar2000.org/encoderpack) previously. It doesn't require these files.

I don't think I had that installed, at least I don't remember installing it. But anyway, things are OK now.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 30 January, 2015, 08:55:14 AM
What's new in QAAC 2.45:
* Added qaac64.exe that works with iTunes 64bit (ver 12.1).
* Switched to static C runtime linking. Now you don't need msvcr120.dll and msvcp120.dll anymore.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 30 January, 2015, 09:08:45 AM
qaac64 can't find CoreAudioToolbox.dll, even though I have installed 64-bit iTunes. CoreAudioToolbox 7.9.9.4 gives different output!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 30 January, 2015, 10:53:12 AM
What's new in QAAC 2.45:
* Added qaac64.exe that works with iTunes 64bit (ver 12.1).
* Switched to static C runtime linking. Now you don't need msvcr120.dll and msvcp120.dll anymore.
* Minor bug fixes.
Thx nu774!

Thanks for the update, almost as fast as FhG now! In fact, I think I'll go back to QAAC. FhG VBR 4 (~128) gave me ~133 kbps avg. on one album and Q63 much higher, ~144 kbps. Did Apple change anything with the latest version?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 30 January, 2015, 01:16:57 PM
It was the file, optimization etc. Other files are more normal, around ~95-115 for that TVBR setting.

Also, they changed one setting: Q63 now gives Q64. Everything else is the same:
Q: 0, 9, 18, 27, 36, 45, 54, 64, 73, 82, 91, 100, 109, 118, 127

So, please update foobar2000 as well with the new Q64 (-V 64 or -V64) and (qaac.exe;qaac64.exe) in the AAC (Apple) settings. Also, again on foobar2000, please add Apple Lossless (refalac) (refalac.exe;refalac64.exe), no need for libraries. Thanks!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 30 January, 2015, 04:11:00 PM
nu774, can you add -A on refalac (hidden feature?) so foobar2000 can use only "Apple Lossless" in its settings with (qaac.exe;qaac64.exe;refalac.exe;refalac64.exe) instead of "Apple Lossless (qaac)" and "Apple Lossless (refalac)".

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sPeziFisH on 31 January, 2015, 12:16:25 PM
What's new in QAAC 2.45:
* Added qaac64.exe that works with iTunes 64bit (ver 12.1).
...

..not to forget to mention that makeportable (https://sites.google.com/site/qaacpage/cabinet (https://sites.google.com/site/qaacpage/cabinet)) has also been updated to handle the 64-bit packages. Thx nu774 !

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 31 January, 2015, 01:18:04 PM
And if someone doesn't like to have the 19MB icudt49.dll in their portable install:

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Sixth Street on 02 February, 2015, 02:29:02 AM
Does anyone use QAAC with Ampache? Trying to get transcoding to work but having no luck. Thanks in advance!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 02 February, 2015, 10:04:59 AM
Today I got a very strange message when using qaac with piped input:

QAAC64 can't find DevIL.dll.

WTH, this DLL has never been bundled with QAAC. What does it actually do, and why does qaac demand it?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 03 February, 2015, 02:06:05 AM
Today I got a very strange message when using qaac with piped input:

QAAC64 can't find DevIL.dll.

WTH, this DLL has never been bundled with QAAC. What does it actually do, and why does qaac demand it?

I can't investigate now, but the only thing I can think of is something Avisynth-related. I guess you have a 64-bit Avisynth that is not working correctly due to missing dependencies, but please post the console error message as well as the result of qaac --check. Does it always happen?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 03 February, 2015, 07:11:28 AM
OK, I confirmed that when you have avisynth.dll without devil.dll (which Avisynth depends on), the OS shows that dialog. Since qaac works without avisynth.dll (it's merely optional), this interference by the OS is really unnecessary and annoying.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 03 February, 2015, 07:25:31 AM
The issue is fixed in 2.46. However, you had better re-install 64-bit Avisynth or remove it anyway.
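For readers following along: the --check run nu774 asks for above is a real qaac option, and it should print qaac's and CoreAudioToolbox's versions along with the optional libraries qaac managed to load, which makes it a quick way to confirm whether a DLL like avisynth.dll is being picked up (the exact output format is not shown here, so treat that description as approximate).

Code: [Select]
rem Prints version information and which optional libraries qaac could load
qaac --check

Title: QAAC: discussion, questions, feature requests, etc.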
Post by: eahm on 03 February, 2015, 10:08:22 AM
Thanks for the -A switch too nu774.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 03 February, 2015, 11:56:01 AM
Sorry if this sounds like a silly question, but I don't understand what the new -A option does. Anyone care to explain?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 03 February, 2015, 12:01:18 PM
Sorry if this sounds like a silly question, but I don't understand what the new -A option does. Anyone care to explain?

It was just a request so apps like foobar2000 wouldn't have to add different encoder parameters for qaac and refalac. Now the parameter is always -A, and only the .exe changes.

See here another request, for Peter now: http://www.hydrogenaud.io/forums/index.php...=108274 (http://www.hydrogenaud.io/forums/index.php?showtopic=108274)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 03 February, 2015, 12:09:15 PM
Sorry if this sounds like a silly question, but I don't understand what the new -A option does. Anyone care to explain?

It was just a request so apps like foobar2000 wouldn't have to add different encoder parameters for qaac and refalac. Now the parameter is always -A, and only the .exe changes.

See here another request, for Peter now: http://www.hydrogenaud.io/forums/index.php...=108274 (http://www.hydrogenaud.io/forums/index.php?showtopic=108274)

Ah, so you can access the lossless encoder using qaac.exe with -A? I didn't realise that. I always just use refalac.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: M on 03 February, 2015, 08:52:56 PM
My apologies if this has already been covered somewhere in the thread (yes, I searched; no, I didn't find it), but does the following parameter directly map to the "good," "better," and "best" options in QuickTime?

Quote
-q, --quality <n>      AAC encoding Quality [0-2]

I ask because the Quicktime 7 User's Guide (http://www.apple.com/quicktime/pdf/QuickTime7_User_Guide.pdf) includes the following note in the MPEG-4 Audio Export Options, at the top of page 51:

Quote
Encoding Quality: Available only with AAC audio. The Good setting is optimized for the highest-speed encoding, for higher-quality, choose Best for 16-bit audio, or Better if your audio source is 24-bit.

As a quick experiment, I encoded a 24-bit audio source via qaac/CoreAudioToolbox 7.9.9.4 to produce two separate files, one of which used --quality 2 (assuming that to be "Best"), and the other of which used --quality 1 (assuming that to be "Better"). I then converted each of those files back to WAV via foobar2000. Next I loaded the original into Audacity, and inverted the signal, prior to loading one of the decoded WAV files into Audacity, so that I could mix and render a residual signal. From there, I loaded both residual signals in Audacity, and mapped both channels of the --quality 1 ("Better") output Left, with both channels of the --quality 2 ("Best") output mapped Right.

Playing the synchronized pair of residual signals resulted in an experience that was significantly—as in, no subtlety about it, it was that easy to discern—noisier on the "Best" side, and that counter-intuitively seems to imply the "Better" setting preserved an audio signal closer to the original, 24-bit source than the "Best" setting managed... which, if correct, matches the advice in the Quicktime 7 User's Guide.
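For anyone wanting to reproduce M's experiment, only the -q value needs to change between the two encodes; -q/--quality is the real qaac option quoted above, while the 0/1/2 → Good/Better/Best mapping is the assumption under test, and the bitrate mode and file names here are made up since M did not state them.

Code: [Select]
rem Assumed mapping under test: -q 0 = Good, -q 1 = Better, -q 2 = Best
qaac --tvbr 91 --quality 1 source24bit.wav -o better.m4a
qaac --tvbr 91 --quality 2 source24bit.wav -o best.m4a

Title: QAAC: discussion, questions, feature requests, etc.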
Post by: nu774 on 04 February, 2015, 02:26:52 AM
does the following parameter directly map to the "good," "better," and "best" options in QuickTime?

Quote
-q, --quality <n>      AAC encoding Quality [0-2]

Yes, I think so.

Quote
Encoding Quality: Available only with AAC audio. The Good setting is optimized for the highest-speed encoding, for higher-quality, choose Best for 16-bit audio, or Better if your audio source is 24-bit.

I dunno. Only an Apple dev would be able to explain it, but the codec itself works on 32-bit float, so why would the input bit depth matter? I'm somewhat skeptical.

Quote
As a quick experiment, I encoded a 24-bit audio source via qaac/CoreAudioToolbox 7.9.9.4 to produce two separate files, one of which used --quality 2 (assuming that to be "Best"), and the other of which used --quality 1 (assuming that to be "Better"). I then converted each of those files back to WAV via foobar2000. Next I loaded the original into Audacity, and inverted the signal, prior to loading one of the decoded WAV files into Audacity, so that I could mix and render a residual signal.

I don't think your test procedure is valid for comparing perceptual encoding.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: M on 04 February, 2015, 06:37:00 AM
I don't think your test procedure is valid for comparing perceptual encoding.

Nor did I suggest that it was. What I did suggest was that, based on Apple's own advice, the possibility merited technical examination... hence the reason I began with the "quick experiment" I described, rather than jumping straight to a double-blind test of perceptual output.

At any rate, the information seemed interesting enough to mention, so that any other HA members who might also be inclined to experiment with alternative methods of encoding 24-bit source material could do so. (And should any then do so, surely multiple individuals contributing results of their own double-blind tests would be more indicative than anecdotal information from a single individual?)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Mix3dmessagez on 08 February, 2015, 01:32:18 PM
My apologies if this has already been covered somewhere in the thread (yes, I searched; no, I didn't find it), but does the following parameter directly map to the "good," "better," and "best" options in QuickTime?

Quote
-q, --quality <n>      AAC encoding Quality [0-2]

I ask because the Quicktime 7 User's Guide (http://www.apple.com/quicktime/pdf/QuickTime7_User_Guide.pdf) includes the following note in the MPEG-4 Audio Export Options, at the top of page 51:

Quote
Encoding Quality: Available only with AAC audio. The Good setting is optimized for the highest-speed encoding, for higher-quality, choose Best for 16-bit audio, or Better if your audio source is 24-bit.

As a quick experiment, I encoded a 24-bit audio source via qaac/CoreAudioToolbox 7.9.9.4 to produce two separate files, one of which used --quality 2 (assuming that to be "Best"), and the other of which used --quality 1 (assuming that to be "Better"). I then converted each of those files back to WAV via foobar2000. Next I loaded the original into Audacity, and inverted the signal, prior to loading one of the decoded WAV files into Audacity, so that I could mix and render a residual signal. From there, I loaded both residual signals in Audacity, and mapped both channels of the --quality 1 ("Better") output Left, with both channels of the --quality 2 ("Best") output mapped Right.
Playing the synchronized pair of residual signals resulted in an experience that was significantly—as in, no subtlety about it, it was that easy to discern—noisier on the "Best" side, and that counter-intuitively seems to imply the "Better" setting preserved an audio signal closer to the original, 24-bit source than the "Best" setting managed... which, if correct, matches the advice in the Quicktime 7 User's Guide.

Interesting. Over here https://developer.apple.com/library/mac/tec...237/_index.html (https://developer.apple.com/library/mac/technotes/tn2237/_index.html) it says:

Encoding Quality: Good, Better or Best. The Good setting is optimized for the highest-speed encoding, for higher-quality choose Better or Best (optimal for 24-bit source). The tradeoff is between encoding speed and audio quality.

This suggests to me that Better and Best could both apply to both 16- and 24-bit files, with Best resulting in the highest quality setting (which would make sense). It also has the distinction of mentioning 24-bit source without proclaiming it to be the only intended input, simply the optimal one.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Munchulax on 08 February, 2015, 05:27:22 PM
Does anyone know what the default lowpass settings for QAAC's CVBR and TVBR modes in the higher bitrate range are (256 kbps and up for CVBR and Q 100 and up for TVBR)?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Steve Forte Rio on 12 February, 2015, 08:30:05 AM
Hello, nu774. Thank you so much for 64-bit support; for me it runs about 20% faster than x86.

But can you tell us why qaac64 gives a considerably different stream? I encoded the same file with x64 and x86 (both qaac 2.46/7.9.9.4) and got a differential file with peaks up to -48 dBFS. I don't think that's normal (I wouldn't care if they were less than -96 dBFS, but actually they're much higher).

How could you explain it? Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 12 February, 2015, 09:15:16 AM
got a differential file with peaks up to -48 dBFS. I don't think that's normal (I wouldn't care if they were less than -96 dBFS, but actually they're much higher).

Well, not being the developer of the codec, I have nothing to explain. However, to me it doesn't look as abnormal as you say. Have you tried the same with other lossy encoders (LAME, Vorbis, Opus,...)?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Steve Forte Rio on 12 February, 2015, 01:31:03 PM
Have you tried the same with other lossy encoders (LAME, Vorbis, Opus,...)?

For other codecs the differential signal generally has much lower peaks, so it's almost inaudible. So you are right, it's rather a question for Apple developers. And, anyway, I'm almost sure that even such differences as QAAC's will be non-ABXable.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 12 February, 2015, 01:50:14 PM
The difference between SSE vs. non-SSE builds of "Oggenc2.87 using libVorbis v1.3.4 (http://www.rarewares.org/ogg-oggenc.php#oggenc-libvorbis)" is comparable or even bigger than the difference between qaac vs. qaac64.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 15 February, 2015, 02:01:00 AM

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 15 February, 2015, 06:27:22 AM
New in QAAC 2.47:
- Large file (>= 4GB) output is now supported.
Very long duration (beyond the 32-bit limit) is also supported, but the latter is not compatible with QuickTime 7.
- On very large files, container optimization can take several minutes. You can disable it with --no-optimize.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 15 February, 2015, 02:49:12 PM
Thanks for the -A switch too nu774.

Just noticed that Peter used --alac for foobar2000 and not -A. I don't know if he cares enough to change it, and if he doesn't, -A won't resolve anything for the normal user. Here is what he uses: "--ignorelength -s --no-optimize --alac -o %d -"

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lonelyroads on 15 February, 2015, 06:16:14 PM
While I was finding out what setting to encode my collection at, I discovered a nasty artifact that I only seem to get with Apple AAC but not with other formats. I can make a sample if anyone wants to hear it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 15 February, 2015, 06:48:02 PM
While I was finding out what setting to encode my collection at, I discovered a nasty artifact that I only seem to get with Apple AAC but not with other formats. I can make a sample if anyone wants to hear it.

Of course, please do, and thank you. Many say they prefer MP3 to AAC because of the different artifact behaviour they are used to, I guess. Open a new thread, this one is about QAAC, not AAC, and the sample must be less than 30 secs.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lonelyroads on 16 February, 2015, 05:49:40 AM
While I was finding out what setting to encode my collection at, I discovered a nasty artifact that I only seem to get with Apple AAC but not with other formats. I can make a sample if anyone wants to hear it.

Of course, please do, and thank you. Many say they prefer MP3 to AAC because of the different artifact behaviour they are used to, I guess. Open a new thread, this one is about QAAC, not AAC, and the sample must be less than 30 secs.

Will do, and thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 16 February, 2015, 10:15:41 PM
Thanks for the -A switch too nu774.

Just noticed that Peter used --alac for foobar2000 and not -A. I don't know if he cares enough to change it, and if he doesn't, -A won't resolve anything for the normal user. Here is what he uses: "--ignorelength -s --no-optimize --alac -o %d -"

nu774, sorry for always requesting, but can you add --alac to refalac too? I don't even know if refalac is compatible with the other commands; let's see what Peter is going to do with the new foobar2000. Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 17 February, 2015, 01:41:27 AM
nu774, sorry for always requesting, but can you add --alac to refalac too? I don't even know if refalac is compatible with the other commands; let's see what Peter is going to do with the new foobar2000. Thanks.

I think --alac is already allowed.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 17 February, 2015, 03:37:21 AM
I think --alac is already allowed.

Perfect, sorry, I didn't have the file to check. Just checked.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: iListener on 02 March, 2015, 07:59:57 AM
nu774, could you please add support for Sound Check?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 02 March, 2015, 08:26:42 AM
Not using iTunes, I did not even know what Sound Check (https://support.apple.com/en-us/HT201724) means; apparently it is similar to ReplayGain.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: iListener on 02 March, 2015, 09:13:44 AM
Not using iTunes, I did not even know what Sound Check (https://support.apple.com/en-us/HT201724) means; apparently it is similar to ReplayGain.

Yes, it is similar to RG. I don't use iTunes either, but my iPod doesn't support RG, only SC.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 03 March, 2015, 02:31:38 AM
I will not implement a ReplayGain scanner in qaac since:
1) A ReplayGain scanner shares no functionality with an encoder, and it can be implemented as a separate, independent program (a scanner would use an AAC decoder, not an encoder).
2) There are multiple choices of algorithm (R128 or something) and multiple ways of implementing it (the aacgain way / metadata only).
3) qaac can be executed in many ways. What "Album" means is not always clear to qaac.
The same holds true for Sound Check, but it is even worse than ReplayGain.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Neroldy on 05 March, 2015, 11:07:12 AM
Hi, I have a question about --ignorelength.

I use eac3to and get a 3.85 GB WAV file from a video. Then I use qaac to convert it to AAC.

First I use --ignorelength and get an m4a file; then I try without --ignorelength and I also get an m4a file. I listened to these 2 files and didn't find any problem, but the 2 files are different (MD5).

So, do I need to use --ignorelength? Thanks~

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 05 March, 2015, 11:18:14 AM
This option is required if you want the encoder to read from a pipe (decoder infile - | encoder - outfile). The length of the data chunk is often only known once the decoder has finished decoding the audio stream, but the WAV header it wrote when it started writing into the pipe has already gone out. A pipe can only be written once, sequentially: the output cannot rewind at the end and change something at the beginning. Therefore an encoder reading from a pipe must ignore the length field in the WAV header, because the connected decoder cannot have written the correct value into it yet.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Neroldy on 05 March, 2015, 11:26:47 AM
This option is required if you want the encoder to read from a pipe (decoder infile - | encoder - outfile). The length of the data chunk is often only known once the decoder has finished decoding the audio stream, but the WAV header it wrote when it started writing into the pipe has already gone out. A pipe can only be written once, sequentially: the output cannot rewind at the end and change something at the beginning. Therefore an encoder reading from a pipe must ignore the length field in the WAV header, because the connected decoder cannot have written the correct value into it yet.

Thank you! You mean that even if the WAV file is > 4GB, I needn't use --ignorelength, am I right?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 05 March, 2015, 10:30:52 PM
Hi, I have a question about --ignorelength.

I use eac3to and get a 3.85 GB WAV file from a video. Then I use qaac to convert it to AAC.
First I use --ignorelength and get an m4a file; then I try without --ignorelength and I also get an m4a file. I listened to these 2 files and didn't find any problem, but the 2 files are different (MD5).

So, do I need to use --ignorelength? Thanks~

How did you create the MD5 sums? The resulting file should always be different, even if you specify identical options, since the MP4 container stores the creation time or something. You have to compare the AAC bitstream only. The binary comparator of fb2k should be enough.
However, in your case, what you really have to compare is the duration of the resulting file. If it is the same as the input, then it is fine. Period.

Quote
You mean that even if the WAV file is > 4GB, I needn't use --ignorelength, am I right?

You really don't have to worry about that option as long as the duration of the output is fine.
--ignorelength forces qaac to ignore the length declared in the WAV header. If the WAV file doesn't fit within the 32-bit limit, it's not a valid WAV file, and its length field has to be ignored. In some cases, qaac can detect the incorrectness of the header and automatically switches into ignorelength mode. In others, you need --ignorelength.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Neroldy on 05 March, 2015, 11:51:31 PM
Hi, I have a question about --ignorelength.

I use eac3to and get a 3.85 GB WAV file from a video. Then I use qaac to convert it to AAC.

First I use --ignorelength and get an m4a file; then I try without --ignorelength and I also get an m4a file. I listened to these 2 files and didn't find any problem, but the 2 files are different (MD5).

So, do I need to use --ignorelength? Thanks~

How did you create the MD5 sums? The resulting file should always be different, even if you specify identical options, since the MP4 container stores the creation time or something. You have to compare the AAC bitstream only. The binary comparator of fb2k should be enough.
However, in your case, what you really have to compare is the duration of the resulting file. If it is the same as the input, then it is fine. Period.

Quote
You mean that even if the WAV file is > 4GB, I needn't use --ignorelength, am I right?

You really don't have to worry about that option as long as the duration of the output is fine.
--ignorelength forces qaac to ignore the length declared in the WAV header. If the WAV file doesn't fit within the 32-bit limit, it's not a valid WAV file, and its length field has to be ignored. In some cases, qaac can detect the incorrectness of the header and automatically switches into ignorelength mode. In others, you need --ignorelength.

Oh, I see. Thank you very much!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 06 March, 2015, 02:06:37 AM
If the WAV file can be expected to exceed 4 GB (or even 2 GB, as there may be tools that incorrectly interpret the size fields as signed), one should prefer the Wave64 format.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: stax on 12 March, 2015, 05:56:34 PM
Hi, I got two reports of StaxRip users' qaac not accepting FLAC. What could be the reason?
Here is the log:

Code: [Select]
------------------------------------------------------------
             Convert to WAV/FLAC using ffmpeg
------------------------------------------------------------
"C:\Program Files (x86)\StaxRip_1.2.0.5\Applications\ffmpeg\ffmpeg.exe" -i "D:\Down.Temp\foobar\foobar temp files\foobar - ID2 - German.ac3" -y -ac 2 "D:\Down.Temp\foobar\foobar temp files\foobar - ID2 - German.flac"
ffmpeg version N-70599-gc8372f8 Copyright (c) 2000-2015 the FFmpeg developers
built with gcc 4.9.2 (GCC)
configuration: --enable-gpl --enable-version3 --disable-w32threads --enable-avisynth --enable-bzlib --enable-fontconfig --enable-frei0r --enable-gnutls --enable-iconv --enable-libass --enable-libbluray --enable-libbs2b --enable-libcaca --enable-libfreetype --enable-libgme --enable-libgsm --enable-libilbc --enable-libmodplug --enable-libmp3lame --enable-libopencore-amrnb --enable-libopencore-amrwb --enable-libopenjpeg --enable-libopus --enable-librtmp --enable-libschroedinger --enable-libsoxr --enable-libspeex --enable-libtheora --enable-libtwolame --enable-libvidstab --enable-libvo-aacenc --enable-libvo-amrwbenc --enable-libvorbis --enable-libvpx --enable-libwavpack --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs --enable-libxvid --enable-lzma --enable-decklink --enable-zlib
libavutil      54. 20.100 / 54. 20.100
libavcodec     56. 26.100 / 56. 26.100
libavformat    56. 25.101 / 56. 25.101
libavdevice    56.  4.100 / 56.  4.100
libavfilter     5. 12.100 /  5. 12.100
libswscale      3.  1.101 /  3.  1.101
libswresample   1.  1.100 /  1.  1.100
libpostproc    53.  3.100 / 53.  3.100
[ac3 @ 0036b920] Estimating duration from bitrate, this may be inaccurate
Input #0, ac3, from 'D:\Down.Temp\foobar\foobar temp files\foobar - ID2 - German.ac3':
  Duration: 00:42:22.18, start: 0.000000, bitrate: 192 kb/s
    Stream #0:0: Audio: ac3, 48000 Hz, stereo, fltp, 192 kb/s
[flac @ 042a9560] encoding as 24 bits-per-sample
Output #0, flac, to 'D:\Down.Temp\foobar\foobar temp files\foobar - ID2 - German.flac':
  Metadata:
    encoder         : Lavf56.25.101
    Stream #0:0: Audio: flac, 48000 Hz, stereo, s32 (24 bit), 128 kb/s
    Metadata:
      encoder         : Lavc56.26.100 flac
Stream mapping:
  Stream #0:0 -> #0:0 (ac3 (native) -> flac (native))
Press [q] to stop, [?] for help
video:0kB audio:372972kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: 0.002170%
Start:    10:18:38
End:      10:18:49
Duration: 00:00:10

General
Complete name                 : D:\Down.Temp\foobar\foobar temp files\foobar - ID2 - German.flac
Format                        : FLAC
Format/Info                   : Free Lossless Audio Codec
File size                     : 364 MiB
Duration                      : 42mn 22s
Overall bit rate mode         : Variable
Overall bit rate              : 1 202 Kbps
Writing application           : Lavf56.25.101

Audio
Format                        : FLAC
Format/Info                   : Free Lossless Audio Codec
Duration                      : 42mn 22s
Bit rate mode                 : Variable
Bit rate                      : 1 202 Kbps
Channel(s)                    : 2 channels
Sampling rate                 : 48.0 KHz
Bit depth                     : 24 bits
Stream size                   : 364 MiB (100%)
Writing library               : Lavf56.25.101

------------------------------------------------------------
                 Audio encoding using qaac
------------------------------------------------------------
"C:\Program Files (x86)\StaxRip_1.2.0.5\Applications\qaac\qaac.exe" -o "D:\Down.Temp\foobar\foobar temp files\foobar - ID2 - German_out.m4a" --tvbr 75 --normalize "D:\Down.Temp\foobar\foobar temp files\foobar - ID2 - German.flac"

------------------------------------------------------------
              Error Audio encoding using qaac
------------------------------------------------------------
Audio encoding using qaac failed with exit code 2
qaac 2.47, CoreAudioToolbox 7.9.8.3
ERROR: Not available input file format

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 12 March, 2015, 11:22:53 PM
I got two reports of StaxRip users' qaac not accepting FLAC. What could be the reason?

qaac needs the external libFLAC DLL to decode FLAC files.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: stax on 13 March, 2015, 08:33:08 AM
Thanks, but then I don't understand why it worked for me and most other users without libFLAC?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 13 March, 2015, 11:08:46 AM
Thanks, but then I don't understand why it worked for me and most other users without libFLAC?

Because libFLAC is installed somewhere in your system.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 13 March, 2015, 11:14:25 AM
Another possibility is the presence of libsndfile (which is statically or dynamically linked to libFLAC). Either way, qaac cannot read FLAC without one of them.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: stax on 13 March, 2015, 01:39:25 PM
Indeed, it was in System32. Maybe I put it there and forgot about it, or some setup or application copied it there (though no programs were installed on its creation date). Thanks for everything.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 27 April, 2015, 02:15:34 PM
And if someone doesn't like to have the 19MB icudt49.dll in their portable install:

Why is this file still necessary if a dummy is fine as well?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 28 April, 2015, 12:03:24 AM
And if someone doesn't like to have the 19MB icudt49.dll in their portable install:

Why is this file still necessary if a dummy is fine as well?

If you are saying that makeportable.zip should install the stub as a replacement for the real icudt... No, I will not do that. The required version of ICU is not constant, and it would cost me some effort for almost no real gain.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 28 April, 2015, 12:15:54 AM
If you are saying that makeportable.zip should install the stub as a replacement for the real icudt... No, I will not do that. The required version of ICU is not constant, and it would cost me some effort for almost no real gain.

No, of course not. I don't understand why Apple keeps that huge file even though lvqcl's dummy works as well. What's the purpose of that DLL?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 28 April, 2015, 01:56:02 AM
No, of course not. I don't understand why Apple keeps that huge file even though lvqcl's dummy works as well. What's the purpose of that DLL?

ICU is an open source library that provides Unicode / internationalization support, and libicudt contains many kinds of data about Unicode characters, languages, calendars and so on. Apparently it has nothing to do with the audio signal processing provided by CoreAudioToolbox, but CoreFoundation has many features based on ICU.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: OxygenSupply on 30 April, 2015, 08:38:49 PM
Hello. I am using 64-bit Windows 7. The "makeportable.zip" method (https://forum.doom9.org/showthread.php?p=1718529#post1718529) created two folders: QTfiles and QTfiles64. Do I:
• move QTfiles to the location of qaac.exe?
• move QTfiles64 to the location of qaac?
• rename the QTfiles64 folder to QTfiles, then move it to the location of qaac?
• move both?

nu774, would it be possible to use only qaac.exe along with the Apple Application Support .dll's in the same folder in a portable manner, without the need for registry keys and a separate installation? (now that it bypasses QuickTime)

Yes. They are searched in the following order. No registry setting is required.
1) The directory where qaac.exe is placed
2) Windows system directory
3) "QTfiles" sub directory
4) The directory in the registry key "HKEY_LOCAL_MACHINE\SOFTWARE\Apple Inc.\Apple Application Support" (this can be overridden with qaac.reg)
5) Directories in the PATH environment variable

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 30 April, 2015, 09:33:47 PM
Don't rename them. They are named like this so that they can be installed under the same directory without collision. QTfiles contains 32-bit libraries and is for qaac.exe. QTfiles64 contains 64-bit libraries and is for qaac64.exe. qaac64 searches QTfiles64 instead of QTfiles.
Since you are using 64-bit Windows 7, you only need the 64-bit version, unless you need TAK decoding support (it's 32-bit only).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: stax on 02 May, 2015, 04:34:13 AM
Hello, I ported my GUI to 64-bit. All the 64-bit applications I redirect work and behave exactly like the previously used 32-bit applications, except qaac, where redirection fails and a command shell window pops up instead.
I use exactly identical code for all applications, always redirecting both stdout and stderr. My application is rather old and large, so it uses quite a few different command line tools; qaac is really the only one causing a problem, and it's only happening with qaac64.exe. 32-bit was fine.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 02 May, 2015, 09:20:58 PM
qaac is really the only one causing a problem, and it's only happening with qaac64.exe. 32-bit was fine.

All I can say is that qaac64 should run once it is correctly set up.
Can you run qaac64 directly (from the command prompt)?
Have you set up the 64-bit CoreAudio libs and the other dependent (see below) 64-bit libs?
Have you tried the same invocation (arguments/redirection/pipes) from the command prompt?
IIRC, you were using the FLAC decoding feature via libFLAC without even knowing it. If so, you need a 64-bit libFLAC.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: stax on 03 May, 2015, 05:34:17 AM
It was a stupid bug on my side. I hope it wasn't too much of a headache, I'm so sorry.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Anakunda on 08 May, 2015, 06:20:55 AM
QAAC 2.48 fixes an issue on MP4Source: trailing samples were discarded under certain conditions.

Thanks nu774

Title: QAAC: discussion, questions, feature requests, etc.
Post by: decollated on 12 May, 2015, 04:00:40 PM
QAAC 2.48 fixes an issue on MP4Source: trailing samples were discarded under certain conditions.

Thanks nu774

Can I ask what the certain conditions were? I encoded a lot of files with 2.47, and am wondering if this would justify re-encoding.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 12 May, 2015, 08:53:55 PM
Can I ask what the certain conditions were? I encoded a lot of files with 2.47, and am wondering if this would justify re-encoding.

Well, if you don't encode from MP4/m4a files, you don't need re-encoding. The bug was inside the MP4 reader, which was first introduced in 2.39.
If you DO encode from MP4/m4a files, then you may have encountered a loss of a certain amount of trailing samples (shorter than one frame length), but it did not always happen.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: decollated on 12 May, 2015, 09:15:12 PM
Well, if you don't encode from MP4/m4a files, you don't need re-encoding. The bug was inside the MP4 reader, which was first introduced in 2.39.
If you DO encode from MP4/m4a files, then you may have encountered a loss of a certain amount of trailing samples (shorter than one frame length), but it did not always happen.

Ah, thank you. Some of my encodes were indeed from M4A source files (ALAC). So I may re-encode, just to be safe.

BTW, much appreciation for this excellent tool.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 12 May, 2015, 09:22:52 PM
Ah, thank you. Some of my encodes were indeed from M4A source files (ALAC). So I may re-encode, just to be safe.

Needless to say, you don't need re-encoding if you are using qaac from fb2k or other GUI frontends, since they decode ALAC into WAV and pass WAV to qaac.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: decollated on 12 May, 2015, 09:25:36 PM
Needless to say, you don't need re-encoding if you are using qaac from fb2k or other GUI frontends, since they decode ALAC into WAV and pass WAV to qaac.

Yes, that occurred to me soon after posting. I did use fb2k, so all is well!
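As an aside, nu774's point about GUI frontends is exactly the piping scenario LigH described earlier: the frontend decodes to WAV and pipes it to qaac, so the WAV header cannot carry a correct length. A sketch of such a pipe using the stock flac decoder; the flac and qaac options are real, while the file names and TVBR setting are made up:

Code: [Select]
rem flac: -d decode, -c write to stdout, -s silent; qaac reads "-" from stdin
flac -d -c -s input.flac | qaac --ignorelength --tvbr 91 - -o output.m4a

Title: QAAC: discussion, questions, feature requests, etc.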
Post by: Zarggg on 14 May, 2015, 12:02:18 PM
I have a question regarding QAAC's ability to create chaptered files. I've recently been converting some audiobooks I have on CD to iPod format. The files for the previous audiobook I converted had chapter stops in them when I played them back on my iPod. However, I must have forgotten the process I used (even though it was only two months ago), since the current one I'm working on doesn't seem to have them. When I check the files with MediaInfo, the relevant "Menu" section seems to be set up properly, but the iPod does not recognize the chapter stops. I've tried ripping the CDs with both foobar2000 and CUERipper, but the end result is always the same.

Here is the MediaInfo output for two files in question:

Code: [Select]
General
Complete name            : L:\Audiobooks\Wheel of Time\01 - Eye of the World\Robert Jordan - The Eye of the World, Part 02.m4a
Format                   : MPEG-4
Format profile           : Apple audio with iTunes info
Codec ID                 : M4A
File size                : 47.4 MiB
Duration                 : 1h 12mn
Overall bit rate mode    : Variable
Overall bit rate         : 90.9 Kbps
Album                    : The Wheel of Time
Track name               : The Eye of the World, Part 02
Track name/Position      : 2
Track name/Total         : 25
Performer                : Robert Jordan
Genre                    : Audiobook
Recorded date            : 1990
Encoded date             : UTC 2015-03-13 18:25:57
Tagged date              : UTC 2015-03-13 18:28:05
Writing application      : qaac 2.46, CoreAudioToolbox 7.9.9.6, AAC-LC Encoder, TVBR q73, Quality 96
Cover                    : Yes
replaygain_track_gain    : +4.13 dB
replaygain_track_peak    : 0.965155

Audio
ID                       : 1
Format                   : AAC
Format/Info              : Advanced Audio Codec
Format profile           : LC
Codec ID                 : 40
Duration                 : 1h 12mn
Bit rate mode            : Variable
Bit rate                 : 89.3 Kbps
Maximum bit rate         : 114 Kbps
Channel(s)               : 2 channels
Channel positions        : Front: L R
Sampling rate            : 44.1 KHz
Compression mode         : Lossy
Stream size              : 46.6 MiB (98%)
Encoded date             : UTC 2015-03-13 18:25:57
Tagged date              : UTC 2015-03-13 18:28:05

Menu #1
ID                       : 2
Codec ID                 : text
Duration                 : 1h 12mn
Encoded date             : UTC 2015-03-13 18:28:05
Tagged date              : UTC 2015-03-13 18:28:05
00:00:00.000             : Chapter - 2 - Strangers E
00:03:42.266             : Chapter - 2 - Strangers F
00:06:29.120             : Chapter - 2 - Strangers G
00:10:06.813             : Chapter - 2 - Strangers H
00:12:59.173             : Chapter - 2 - Strangers I
00:14:46.280             : Chapter - 3 - The Peddler A
00:17:25.653             : Chapter - 3 - The Peddler B
00:21:17.866             : Chapter - 3 - The Peddler C
00:24:17.906             : Chapter - 3 - The Peddler D
00:27:37.653             : Chapter - 3 - The Peddler E
00:31:40.386             : Chapter - 3 - The Peddler F
00:34:33.746             : Chapter - 3 - The Peddler G
00:37:48.866             : Chapter - 3 - The Peddler H
00:41:36.813             : Chapter - 3 - The Peddler I
00:45:15.173             : Chapter - 4 - The Gleeman A
00:49:45.493             : Chapter - 4 - The Gleeman B
00:54:23.760             : Chapter - 4 - The Gleeman C
00:58:37.653             : Chapter - 4 - The Gleeman D
01:03:13.093             : Chapter - 4 - The Gleeman E
01:07:33.400             : Chapter - 4 - The Gleeman F
01:11:51.786             : Chapter - 4 - The Gleeman G
Bit rate mode            : VBR

Menu #2
00:00:00.047             : The Eye of the World, Part 02

Code: [Select]
General
Complete name            : L:\Audiobooks\Wheel of Time\02 - The Great Hunt\The Great Hunt, Part 01.m4b
Format                   : MPEG-4
Format profile           : Apple audio with iTunes info
Codec ID                 : M4A
File size                : 61.9 MiB
Duration                 : 1h 14mn
Overall bit rate mode    : Variable
Overall bit rate         : 117 Kbps
Album                    : The Great Hunt, Part 01
Performer                : Robert Jordan
Genre                    : Audiobook
Recorded date            : 1991
Encoded date             : UTC 2015-05-14 15:31:27
Tagged date              : UTC 2015-05-14 15:33:10
Writing application      : qaac 2.46, CoreAudioToolbox 7.9.9.6, AAC-LC Encoder, TVBR q91, Quality 96
Cover                    : Yes

Audio
ID                       : 1
Format                   : AAC
Format/Info              : Advanced Audio Codec
Format profile           : LC
Codec ID                 : 40
Duration                 : 1h 14mn
Bit rate mode            : Variable
Bit rate                 : 115 Kbps
Maximum bit rate         : 148 Kbps
Channel(s)               : 2 channels
Channel positions        : Front: L R
Sampling rate            : 44.1 KHz
Compression mode         : Lossy
Stream size              : 60.8 MiB (98%)
Encoded date             : UTC 2015-05-14 15:31:27
Tagged date              : UTC 2015-05-14 15:33:10

Menu
00:01:31.834             : Prologue A
00:04:22.274             : Prologue B
00:07:36.954             : Prologue C
00:10:27.727             : Prologue D
00:14:50.461             : Prologue E
00:17:45.194             : Prologue F
00:21:20.287             : Prologue G
00:23:52.154             : Prologue H
00:27:19.874             : Prologue I
00:30:37.514             : Prologue J
00:35:07.141             : Chapter 01 - The Flame of Tar Valon A
00:38:45.061             : Chapter 01 - The Flame of Tar Valon B
00:41:33.034             : Chapter 01 - The Flame of Tar Valon C
00:45:15.367             : Chapter 01 - The Flame of Tar Valon D
00:48:45.807             : Chapter 01 - The Flame of Tar Valon E
00:52:06.394             : Chapter 01 - The Flame of Tar Valon F
00:55:39.594             : Chapter 01 - The Flame of Tar Valon G
00:59:02.887             : Chapter 01 - The Flame of Tar Valon H
01:00:44.021             : Chapter 02 - The Welcome A
01:03:32.834             : Chapter 02 - The Welcome B
01:07:06.981             : Chapter 02 - The Welcome C
01:10:39.701             : Chapter 02 - The Welcome D

It looks like the "Menu" section of the second group of files is getting truncated (note the lack of the "ID", "Codec ID", "Duration", and "Bit rate mode" subsections and the "Menu #2" section). Does anyone have a clue what I'm doing wrong, or if there is a step of the process I am forgetting? These results are the same regardless of whether I use foobar2000's "Edit MP4 chapters" or let QAAC handle it automatically.
If it is relevant, here is the CUE file generated by CUERipper:

Code: [Select]
REM DISCID 4E116317
PERFORMER "Robert Jordan"
TITLE "The Great Hunt, Part 01"
REM DATE 1991
REM GENRE "Audiobook"
REM COMMENT "CUERipper v2.1.6 Copyright © 2008-13 Grigory Chudov"
FILE "Robert Jordan - The Great Hunt, Part 01.flac" WAVE
  TRACK 01 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Introduction"
    INDEX 01 00:00:00
  TRACK 02 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Prologue A"
    INDEX 01 01:31:59
  TRACK 03 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Prologue B"
    INDEX 01 04:22:17
  TRACK 04 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Prologue C"
    INDEX 01 07:36:68
  TRACK 05 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Prologue D"
    INDEX 01 10:27:51
  TRACK 06 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Prologue E"
    INDEX 01 14:50:31
  TRACK 07 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Prologue F"
    INDEX 01 17:45:11
  TRACK 08 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Prologue G"
    INDEX 01 21:20:18
  TRACK 09 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Prologue H"
    INDEX 01 23:52:08
  TRACK 10 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Prologue I"
    INDEX 01 27:19:62
  TRACK 11 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Prologue J"
    INDEX 01 30:37:35
  TRACK 12 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 01 - The Flame of Tar Valon A"
    INDEX 01 35:07:07
  TRACK 13 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 01 - The Flame of Tar Valon B"
    INDEX 01 38:45:01
  TRACK 14 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 01 - The Flame of Tar Valon C"
    INDEX 01 41:32:74
  TRACK 15 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 01 - The Flame of Tar Valon D"
    INDEX 01 45:15:24
  TRACK 16 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 01 - The Flame of Tar Valon E"
    INDEX 01 48:45:57
  TRACK 17 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 01 - The Flame of Tar Valon F"
    INDEX 01 52:06:26
  TRACK 18 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 01 - The Flame of Tar Valon G"
    INDEX 01 55:39:41
  TRACK 19 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 01 - The Flame of Tar Valon H"
    INDEX 01 59:02:63
  TRACK 20 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 02 - The Welcome A"
    INDEX 01 60:43:73
  TRACK 21 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 02 - The Welcome B"
    INDEX 01 63:32:59
  TRACK 22 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 02 - The Welcome C"
    INDEX 01 67:06:70
  TRACK 23 AUDIO
    PERFORMER "Robert Jordan"
    TITLE "Chapter 02 - The Welcome D"
    INDEX 01 70:39:49

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 14 May, 2015, 08:50:33 PM
First, there are two incompatible styles of chapters:
1. Apple style: written as an MP4 text track, shown in your first sample (track ID, codec and so on are shown, meaning that it is a track).
2. Nero style: written in the 'chpl' box under udta, shown in your second sample.
For iPods you need Apple style chapters, and qaac can create both of them. However, qaac creates chapters only when you feed:
1. multiple input files with the --concat option
2. a cuesheet with the --concat option
In other words, if you are running qaac from fb2k, qaac will never create chapters. Finally, IIRC fb2k supports only Nero style chapters.
You can try mp4chaps.exe (of the mp4v2 project) for importing/exporting chapters and converting chapters from/to Apple/Nero style.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Zarggg on 14 May, 2015, 11:21:12 PM
I'll give that a go.
I'm completely baffled how I did the first batch, though. I'm pretty sure the only tools I had at my disposal were CueTools/Ripper and fb2k. I was kind of surprised the chapters were added at all, since I wasn't expecting it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 18 May, 2015, 12:36:56 PM
[qaac] release 2.49
posted 3 hours ago by nu774

Fixed issues on MP4Source:
Fixed handling of Nero style chapters starting from a non-zero timestamp (typically inserted by fb2k and old neroaacenc).
Fixed handling of reading MP4 files with multiple elst entries.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Steve Forte Rio on 26 May, 2015, 03:30:57 AM
Why can't QAAC decode audio encoded by itself?

Quote
qaac --decode out24.m4a -o out.wav
qaac 2.49, CoreAudioToolbox 7.9.9.6
ERROR: Not available input file format

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 26 May, 2015, 04:08:43 AM
Why can't QAAC decode audio encoded by itself?

In the past, qaac didn't support lossy input at all. Now it supports MP1/2/3 and AAC-LC, but not HE-AAC.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Steve Forte Rio on 26 May, 2015, 07:34:24 AM
Well... so what would you recommend for sample-accurate decoding of HE-AAC? FFmpeg can't do it accurately.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 26 May, 2015, 08:42:05 AM
Well... so what would you recommend for sample-accurate decoding of HE-AAC? FFmpeg can't do it accurately.

Well, you mean gapless? It's very difficult because there are two incompatible usages of iTunSMPB for HE-AAC.
http://www.hydrogenaud.io/forums/index.php?showtopic=98450 (http://www.hydrogenaud.io/forums/index.php?showtopic=98450)
As far as I can see, current fb2k seems to be trying to support both ways by treating Apple HE-AAC files specially. I don't know how well it's working.
If you want sample accuracy at all, just don't use HE-AAC. Seriously.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: HonestAbe on 15 June, 2015, 05:55:25 PM
Hi, I'm kind of a noob at this, but I've been using TVBR for a while and I can't figure out why it is changing the DR and Peak of the source material.
Here's an example. Here's the original FLAC:

--------------------------------------------------------------------------------
Analyzed: Otis Redding / The Dock Of The Bay
--------------------------------------------------------------------------------
DR        Peak        RMS     Duration  Track
--------------------------------------------------------------------------------
DR9      -0.80 dB  -11.50 dB      2:44  01-(Sittin' On) The Dock Of The Bay
DR10     -0.80 dB  -12.67 dB      2:53  02-I Love You More Than Words Can Say
DR10     -0.80 dB  -12.81 dB      2:56  03-Let Me Come On Home
DR10     -0.80 dB  -13.84 dB      2:26  04-Open The Door
DR10     -0.80 dB  -12.01 dB      2:31  05-Don't Mess With Cupid
DR9      -0.80 dB  -14.29 dB      2:38  06-The Glory Of Love
DR11     -0.80 dB  -12.96 dB      3:01  07-I'm Coming Home To See About You
DR11     -0.80 dB  -13.45 dB      3:00  08-Tramp
DR10     -0.80 dB  -11.94 dB      3:02  09-The Huckle-Buck
DR10     -0.80 dB  -13.44 dB      3:08  10-Nobody Knows You (When You're Down And Out)
DR12     -0.80 dB  -14.85 dB      2:36  11-Ole Man Trouble
--------------------------------------------------------------------------------
Number of tracks:  11
Official DR value: DR10
Samplerate:        192000 Hz

And here's the QAAC TVBR transcode. Parameters used: --ignorelength -s --no-optimize --tvbr 91 --quality 2 -o %d -

--------------------------------------------------------------------------------
Analyzed: Otis Redding / The Dock Of The Bay
--------------------------------------------------------------------------------
DR        Peak        RMS     Duration  Track
--------------------------------------------------------------------------------
DR9      -0.31 dB  -11.52 dB      2:44  01-(Sittin' On) The Dock Of The Bay
DR10     -0.55 dB  -12.68 dB      2:53  02-I Love You More Than Words Can Say
DR11     -0.11 dB  -12.84 dB      2:56  03-Let Me Come On Home
DR11     -0.35 dB  -13.86 dB      2:26  04-Open The Door
DR11     -0.03 dB  -12.04 dB      2:31  05-Don't Mess With Cupid
DR10     -0.16 dB  -14.31 dB      2:38  06-The Glory Of Love
DR11     -0.43 dB  -12.98 dB      3:01  07-I'm Coming Home To See About You
DR11     -0.33 dB  -13.48 dB      3:00  08-Tramp
DR11      0.00 dB  -11.97 dB      3:02  09-The Huckle-Buck
DR10     -0.15 dB  -13.46 dB      3:08  10-Nobody Knows You (When You're Down And Out)
DR13     -0.02 dB  -14.88 dB      2:36  11-Ole Man Trouble
--------------------------------------------------------------------------------
Number of tracks:  11
Official DR value: DR11
Samplerate:        48000 Hz
Channels:          2

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Zarggg on 15 June, 2015, 09:49:23 PM
I don't know enough to go into the details of exactly what is happening under the hood, but AAC is a lossy codec.
Similar to MP3, it uses a psychoacoustic model to determine what data can be effectively "thrown away" without impacting what you actually hear. These changes can and often do affect the apparent dynamic range. If you examine the results closely, you'll see that it is mostly the peak values that differ. The RMS of the AAC-encoded tracks is still very close to that of the original FLAC encodes.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: HonestAbe on 16 June, 2015, 03:52:29 AM
Similar to MP3, it uses a psychoacoustic model to determine what data can be effectively "thrown away" without impacting what you actually hear. These changes can and often do affect the apparent dynamic range. If you examine the results closely, you'll see that it is mostly the peak values that differ. The RMS of the AAC-encoded tracks is still very close to that of the original FLAC encodes.

I noticed it doesn't happen as much with MP3, and the thing that confuses me the most is how it adds DR value.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 16 June, 2015, 04:24:58 AM
and the thing that confuses me the most is how it adds DR value.

http://www.hydrogenaud.io/forums/index.php?showtopic=102895 (http://www.hydrogenaud.io/forums/index.php?showtopic=102895)

DR values aren't very reliable.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sluggy on 26 June, 2015, 06:11:41 AM
[qaac] release 2.50
posted 21 hours ago by nu774

Better support for embedded cuesheets. When a cuesheet is embedded in an input file, qaac used to encode it into a single output file with chapters. From this version, qaac now splits it into multiple tracks by default (same as with an external cuesheet). If you still want a single output, use --concat.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 28 June, 2015, 11:29:34 PM
[qaac] release 2.51
posted 13 hours ago by nu774

1. Always write zero into the avgBitrate field in the esds decConfigDescriptor to be spec compliant. Was writing the actual average bitrate before (this was automatically done by libmp4v2). The spec says that in case of VBR, it should be zero.
2. Write an iTunes compatible "Encoding Params" tag. Details on the Encoding Params tag: in this (binary) tag, the encoding mode (CBR/ABR/CVBR/TVBR), bitrate, and the codec version are written. As far as I know, this tag is only used by iTunes to show the bitrate and show whether it is VBR or not. For the sake of compatibility with iTunes, qaac writes the nominal (target) bitrate into this tag, and iTunes will show this value when the "Encoding Params" tag is present. Therefore, the result of a -v 256 encoding will now always look like "256kbps (VBR)" in iTunes. On the other hand, other (spec compliant) tools will show the actual bitrate.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Makm on 01 July, 2015, 03:47:27 PM
[qaac] release 2.51
posted 13 hours ago by nu774

1. Always write zero into the avgBitrate field in the esds decConfigDescriptor to be spec compliant. Was writing the actual average bitrate before (this was automatically done by libmp4v2). The spec says that in case of VBR, it should be zero.
2. Write an iTunes compatible "Encoding Params" tag. Details on the Encoding Params tag: in this (binary) tag, the encoding mode (CBR/ABR/CVBR/TVBR), bitrate, and the codec version are written. As far as I know, this tag is only used by iTunes to show the bitrate and show whether it is VBR or not. For the sake of compatibility with iTunes, qaac writes the nominal (target) bitrate into this tag, and iTunes will show this value when the "Encoding Params" tag is present. Therefore, the result of a -v 256 encoding will now always look like "256kbps (VBR)" in iTunes. On the other hand, other (spec compliant) tools will show the actual bitrate.

So I'm facing some weird problems. After encoding a FLAC (using v2.51 at -v 256), dBpoweramp doesn't show any bit rate info! The bit rate field is just blank in the audio properties page.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Makm on 01 July, 2015, 03:47:27 PM
Quote
2. Write an iTunes-compatible "Encoding Params" tag. ...
So I'm facing some weird problems. After encoding a FLAC (using v2.51 at -v 256), dBpoweramp doesn't show any bitrate info! The bitrate field is just blank in the audio properties page. Also, I've noticed that if the original file has some tags in it, the encoding parameter doesn't appear in the info after encoding, and iTunes doesn't recognise the file as 256 VBR — it shows the original bitrate instead. But if the original file has no tags in it, the encoded file has the encoding parameter written and iTunes shows it as 256 VBR.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 01 July, 2015, 08:44:06 PM
Quote
So I'm facing some weird problems. After encoding a FLAC (using v2.51 at -v 256), dBpoweramp doesn't show any bitrate info!
Most likely it is because the avgBitrate field is now set to zero. ISO 14496-1 says:
Quote
avgBitrate – is the average bitrate in bits per second of this elementary stream. For streams with variable bitrate this value shall be set to zero.
See the strong word shall. Therefore, we have to set it to zero (although everybody else seems to break the spec here).
Quote
Also, I've noticed that if the original file has some tags in it, the encoding parameter doesn't appear in the info after encoding...
Currently, dBpa and fb2k are known to break "Encoding Params" when writing tags. Maybe others.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Makm on 02 July, 2015, 04:39:50 AM
I also noticed that MediaInfo no longer shows the 'Maximum bit rate' field after encoding with 2.51 — it only shows the 'Bit rate' field, whereas I could see both fields after encoding with previous versions of qaac. For me, iTunes is not recognizing my files as '256 VBR' unless I encode a file with no tags in it, and the bitrate field in dBpoweramp's audio properties is blank (I encoded my files using both dBpoweramp and foobar2000). So until this issue is fixed I'm gonna stick with v2.50. Apart from the mentioned changelog, there are no encoding quality differences between 2.50 and 2.51, right?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 02 July, 2015, 04:51:28 AM
Quote
So until this issue is fixed I'm gonna stick with v2.50.
Since those "issues" are not qaac's fault, they won't be fixed on qaac's side.
Quote
Apart from the mentioned changelog, there are no encoding quality differences between 2.50 and 2.51, right?
No.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 05 July, 2015, 10:07:10 AM
I can't seem to get QAAC to work with dBpoweramp 15.3? I have tried using:
--ALAC -o "[outfile]"
-v256 -q2 -o "[outfile]"
--tvbr 63 -o "[outfile]"
--abr 128 -o "[outfile]"
--cvbr 128 -o "[outfile]"
But I'm getting "CLI Encoder: Error writing audio data to StdIn Pipe [dEncoder::EncodeBlock]".

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 05 July, 2015, 10:54:02 AM
It seems a "-" (for stdin input) is missing from your command line.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 05 July, 2015, 11:06:37 AM
I don't quite follow? Do you mean the [clistring] should be - -o "[outfile]" instead of -o "[outfile]"? Because I've tried both and they give the same results.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 05 July, 2015, 11:36:28 AM
Yes, a command line argument for the input file is mandatory. It's strange that it still fails when "-" is appended.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 05 July, 2015, 12:20:44 PM
Seems to be a dBpoweramp 15.3 bug; I reverted back to 15.1 and it's working again.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 09 July, 2015, 01:56:17 PM
Is the bitrate you set using CVBR (constrained VBR) the maximum or the minimum that will be used? I thought constrained VBR worked so that if I, for example, choose 128 kbps, that would be the minimum, but as it is variable it's allowed to go beyond 128 kbps if need be. Or is it the other way around, so the bitrate I choose for CVBR is the maximum allowed, and the VBR algorithm will not go above the specified bitrate?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 09 July, 2015, 02:47:47 PM
CVBR is just a slightly improved ABR (allowing more bitrate fluctuation than ABR, but less than VBR). The bitrate of your files will still vary, but they'll always come out at approximately the bitrate you specify. From moment to moment, the bitrate can go above or below the value you set.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: rudolf.gelpke on 09 July, 2015, 02:54:23 PM
Difference between CVBR and ABR in AAC (http://www.hydrogenaud.io/forums/index.php?s=&showtopic=104234&view=findpost&p=855350)
Take, for example, a song like Lalena by Donovan, where not much bitrate is needed. I encoded it with qaac at ~256 kbps in every mode. Results:
ABR: 5 301 KB (240 kbps)
CVBR: 5 628 KB (255 kbps)
VBR: 2 729 KB (123 kbps)
You can see that the encoder in VBR mode realizes that it doesn't need very much bitrate for the song and produces the smallest file, while ABR and CVBR try to stay close to my requested ~256 kbps. So no, the bitrate you give to CVBR is neither the minimum nor the maximum bitrate. From the perspective of quality, VBR > CVBR > ABR. So if you don't trust VBR, or want your files to have the same bitrate, go for CVBR. At lower bitrates VBR should beat CVBR and ABR, but at higher ones like 256 kbps I don't think there would be that much difference to CVBR. (VBR has a bigger bit reservoir for problematic places in songs.) And of course you can set a managed bitrate and use minimum and maximum settings if you really need to; at least this works in LAME and Vorbis. Just my experience — hope I didn't get too much wrong.
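A comparison like the one above can be repeated directly with qaac's three modes — a sketch with a hypothetical input file (--tvbr 109 is the quality step that tends to land near 256 kbps, though the actual rate depends on the material):

Code: [Select]
qaac --abr 256 "Lalena.wav" -o "lalena-abr.m4a"
qaac --cvbr 256 "Lalena.wav" -o "lalena-cvbr.m4a"
qaac --tvbr 109 "Lalena.wav" -o "lalena-tvbr.m4a"

Comparing the resulting file sizes shows how tightly each mode tracks the requested bitrate.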
Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 09 July, 2015, 04:05:29 PM
I was going for TVBR, but as I'm ripping audiobooks and TVBR does not support HE, it's a no-go. So I've been using CVBR 80 kbps thus far and it seems to do fine.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: decollated on 09 July, 2015, 05:27:56 PM
Quote
From the perspective of quality, VBR > CVBR > ABR.
There was a listening test that ranked CVBR slightly higher than TVBR at ~96 kbps: http://wiki.hydrogenaud.io/index.php?title=Hydrogenaudio_Listening_Tests#AAC_Tests

Title: QAAC: discussion, questions, feature requests, etc.
Post by: halb27 on 10 July, 2015, 12:08:28 AM
And recently there was a thread here giving pre-echo samples where both CVBR and ABR provided better quality than TVBR did. I think there is a bias here on HA towards TVBR which is not based on listening tests.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: rudolf.gelpke on 10 July, 2015, 12:22:35 AM
ABR was better too? I remember some problem samples where CVBR was better, but thought it was not relevant. I thought VBR should be better theoretically, but I guess I was wrong. I even remember that listening test now. Would be interesting to know if this is the same for Opus.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Case on 10 July, 2015, 02:47:54 AM
Quote
There was a listening test that ranked CVBR slightly higher than TVBR at ~96 kbps...
Simple reason: TVBR has the lowest bitrate of all the codecs tested there. On one sample it uses almost 30% less bitrate than CVBR. When you use TVBR with settings that don't result in such a low bitrate, it won't dip behind anything and can use the extra bits for difficult parts.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 10 July, 2015, 06:03:04 AM
How about Apple's "optimise for voice" / "voice filtering" option in iTunes? Is there some way to get this using QAAC?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: rudolf.gelpke on 10 July, 2015, 05:27:50 PM
I guess it just uses HE-AAC. Just encode one file in iTunes and check it with MediaInfo.
Edit: Tested it myself. It still uses AAC LC but resamples the file to 22.05 kHz and uses a lower bitrate. So you would need to use a resampler, like: qaac ... -r 22050 ...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 10 July, 2015, 06:00:42 PM
Are you sure? I tried qaac64.exe with --cvbr 48 -q2 --he --rate 32000 --native-resampler=bats,127 and compared it to iTunes - AAC Custom - 48 kbps - 32.000 kHz - Stereo - VBR - HE - Optimise for Voice, and they don't seem to be identical? Another thing I've noticed: when using HE I'm not able to use -r 22050 — I can't go lower than 32000?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: rudolf.gelpke on 10 July, 2015, 11:26:09 PM
iTunes seems to choose the sample rate depending on the bitrate. At 24 kbps my test file is resampled to 8000 Hz; at other bitrates it was 32000 Hz, like you said. So "optimise for voice" is just resampling, and HE can be chosen separately.
Preferred Sample Rates (http://www.sonnoxplugins.com/pub/plugins/products/Pro-Codec-Spec.htm)
I would encode some files with iTunes at your preferred bitrate, look at which sample rate iTunes uses, and then use that sample rate in qaac. Result:
Code: [Select]
File 1 (using qaac):
Bit rate mode     : Variable
Bit rate          : 32.9 Kbps
Channel(s)        : 2 channels
Sampling rate     : 32.0 KHz / 16.0 KHz
Stream size       : 959 KiB (98%)

File 2 (using iTunes):
Bit rate mode     : Variable
Bit rate          : 32.0 Kbps
Maximum bit rate  : 42.9 Kbps
Channel(s)        : 2 channels
Sampling rate     : 32.0 KHz / 16.0 KHz
Stream size       : 958 KiB (98%)
It comes close, but still not completely identical. And yes, if I choose 8000 Hz and 80 kbps, qaac will pick a higher sample rate automatically, so you would need to choose a lower bitrate for 8000 Hz to work.
PS: Depending on the bitrate, resampling may not really be necessary, since HE-AAC uses spectral band replication for the higher frequencies. But I guess the guys at Apple know what they are doing.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: rudolf.gelpke on 11 July, 2015, 01:31:20 AM
Your command line looks fine, btw. Strange that it worked for me. Are your files that much different?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 11 July, 2015, 05:27:53 AM
Ignore all that... It seems iTunes reports HEv2 mono as stereo, and the same goes for dBpoweramp. How do I downmix stereo to mono using QAAC?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 11 July, 2015, 06:51:01 AM
Never mind, I figured it out. Now it's time to compare CVBR 24-64 kbps HE stereo to mono and figure out whatever settings I find best for my audiobook library.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Apollo89 on 13 July, 2015, 06:49:28 AM
Hello, I've been using QAAC for quite some time and I am very happy with it, but in the latest version (2.51) something really annoys me. Files encoded with TVBR q91 all show up as 192 kbps (VBR) in iTunes. Since TVBR has no target bitrate (only a target quality level), the 192 kbps is meaningless, and it should show the average bitrate instead (like it used to do).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 13 July, 2015, 09:56:37 AM
Quote
Files encoded with TVBR q91 all show up as 192 kbps (VBR) in iTunes.
The following option will remove the Encoding Params tag, and iTunes will then show the actual average bitrate:
Code: [Select]
--long-tag="Encoding Params:"
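Applied at encode time, that might look like the following — a sketch with hypothetical file names, using the TVBR setting from the question above:

Code: [Select]
qaac64 --tvbr 91 -q 2 --long-tag="Encoding Params:" "input.flac" -o "output.m4a"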
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Seren on 14 July, 2015, 09:15:49 AM
Quote
The following option will remove the Encoding Params tag, and iTunes will then show the actual average bitrate: --long-tag="Encoding Params:"
I'm sorry, I must have missed it, but what's the advantage of showing the Encoding Params by default? I just deleted the whole tag...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 14 July, 2015, 10:04:41 AM
The "Encoding Params" tag only affects how the bitrate is shown in iTunes, and that's all.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 14 July, 2015, 02:47:32 PM
Since the tag is a proprietary form of metadata for iTunes, and qaac is for people who want access to Apple's encoder without having to use iTunes, it seems to me that the tag should be disabled by default, so the burden of using an extra switch is on the few people who care what iTunes says about their AAC files.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: marc2003 on 14 July, 2015, 03:20:02 PM
Quote
...qaac is for people who want access to Apple's encoder without having to use iTunes
that's a very narrow-minded view. i despise itunes myself, but i'm sure plenty of itunes users use qaac for the many other features it provides, like being able to pipe in any input you want, spawning multiple instances with other front ends, usage with bit-perfect CD rippers, and so on...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 14 July, 2015, 09:55:57 PM
Quote
...it seems to me that the tag should be disabled by default...
Strictly speaking, it's just that we all use the proprietary "iTunes Metadata Format" for tagging m4a, but most of us (including developers) don't know / care about the "Encoding Params" tag. For non-iTunes users, the presence of "Encoding Params" doesn't make any practical difference other than having one obscure 32-byte binary tag. It sounds like you're saying that the large majority of non-iTunes users hate having this tag so much that they must kill it using an extra switch — but why?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 15 July, 2015, 07:13:42 PM
Are you saying that checking TVBR-encoded files in iTunes is meaningless? I was testing various bitrates for my audiobooks, trying TVBR at various quality levels, and I always checked the bitrate reported by iTunes. It always seemed to be the same for every single track of a given audiobook, but when I compared across audiobooks the reported bitrate in iTunes was very different? I don't have the files at hand right now, but if I remember correctly, Harry Potter og De Vises Stein (Harry Potter book 1 in Norwegian) ended up with a reported bitrate of around 100-120 kbps using --tvbr 127, while with Harry Potter og Fangen fra Azkaban (Harry Potter book 3 in Norwegian) each track ended up with a reported bitrate of about 190-200 kbps. If iTunes is not reading TVBR bitrates correctly, how come it didn't report the same for both audiobooks? Is it because iTunes is reporting the maximum bitrate used while encoding or something, instead of the actual average bitrate?

It doesn't really matter too much for me; I've ended up going with --cvbr 32 -q2 --he --rate 32000 --native-resampler=bats,127 --matrix-preset mono for my audiobooks, and I guess TVBR is of no use when going that low on the bitrate, as it cannot be used with HE/HEv2 encoding. After comparing the first 15 minutes of Harry Potter og Fangen fra Azkaban (I selected this one as it was the one ending up with the highest reported bitrate from iTunes when using --tvbr 127, so I figured it would be the one demanding the most bandwidth of all the audiobooks) using --cvbr 24, 32, 48, 64 and 80 in both stereo and mono (HEv2 with parametric stereo), it was very hard to really tell them apart when I tested on an iPhone 6 Plus with Westone 4R IEMs. Listening very closely I could notice a few differences, but it was very hard. The only exception was --cvbr 24: it sounded horrible, with lots of echo on the voice, even in mono with HEv2.

I guess I could save even more space by going from 32,000 Hz down to 22,050 Hz, from what I'm reading online. But it doesn't seem like the Apple encoder and QAAC allow anything lower than 32,000 Hz when using HE/HEv2, for some reason? If I run the same settings without --he, it allows downsampling to a 22,050 Hz sampling rate.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 July, 2015, 12:09:59 AM
Quote
Are you saying that checking TVBR-encoded files in iTunes is meaningless?
If you want to see the actual bitrate, then yes. Bitrate can easily be computed from the size and duration of the stream, but iTunes prefers the values in the metadata, which might be wrong.
Quote
I've ended up going with --cvbr 32 -q2 --he --rate 32000 --native-resampler=bats,127 --matrix-preset mono
--native-resampler is terribly inefficient compared to the default libsoxr, with no practical gain. Also, Apple's HE encoder might not be as good as their LC encoder, and HEv2 is not supported. Have you compared with FhG, CT (Dolby) or Nero?
Quote
I guess I could save even more space by going from 32,000 Hz down to 22,050 Hz ... But it doesn't seem like the Apple encoder and QAAC allow anything lower than 32,000 Hz when using HE/HEv2?
qaac --formats will show you all the available combinations of profile/samplerate/channels/bitrate. 22050 Hz means frequency components above 5.5 kHz are encoded by SBR, which might be too low for SBR, I guess.
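That listing can be consulted before settling on settings; the exact output depends on the installed CoreAudioToolbox version, but it enumerates, per profile, the sample rates, channel layouts and bitrates the encoder will accept:

Code: [Select]
qaac64 --formats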
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 July, 2015, 12:45:43 AM
http://listening-tests.hydrogenaud.io/igorc/results.html
So, Apple HE-AAC was at least better than Nero in the 64 kbps test.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 16 July, 2015, 03:02:32 AM
I'm pretty sure that the Apple encoder / QAAC supports HEv2? Everything I have encoded below 64 kbps in mono with HE (both in QAAC and in iTunes itself) has been reported as "Profile: High-Efficiency v2" and is still reported as "Stereo" — which might be because iTunes counts parametric stereo as stereo? dBpoweramp reports it as AAC (LC) + SBR + PS.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 July, 2015, 03:47:13 AM
Quote
I'm pretty sure that the Apple encoder / QAAC supports HEv2?
It's quite normal for an AAC decoder to treat mono HE-AAC as HE-AACv2 and even decode it as stereo, since detecting PS (which is implicitly signaled) requires full parsing of the AAC bitstream beforehand. IIRC, MediaInfo does a full parse of the AAC frames to show the profile / number of channels / sample rate. Go get MediaInfo and see what it says.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 16 July, 2015, 04:09:47 AM
This is getting rather confusing. I did a test encode of Harry Potter og Fangen fra Azkaban using FDK AAC with VBR Q1 settings and HEv2 to compare, but iTunes does not seem to recognize it at all. It's reported as "Profile: Low Complexity" and "Mono" instead of stereo, and as 22,050 Hz, even though dBpoweramp reports it as AAC (LC) + SBR + PS at 44,100 Hz. So it seems it's using HEv2 and iTunes is simply not able to play it back as HEv2, so it falls back to playing only the Low Complexity portion of the file — hence it being reported as Profile: Low Complexity, Mono and only a 22,050 Hz sampling rate? I thought Apple supported decoding of HEv2 in both iTunes and iOS these days? How come Apple tags their own SBR + PS encodes as High-Efficiency v2, and yet iTunes seems unable to recognize HEv2 from the FDK AAC encoder at all?

What's even funnier is that in this small test, where iTunes plays back the audiobook as mono instead of stereo or parametric stereo, I find it actually sounds better. I actually like the sound of this 40 kbps AAC LC mono file from FDK AAC more than the Apple Lossless source file I'm converting from. My only problem is that the Apple encoder / QAAC seems to automatically enforce parametric stereo when I use --matrix-preset mono in combination with --he? How do I end up with HE/HEv2 in actual mono instead of parametric stereo? Do I need to downmix the Apple Lossless beforehand and then convert with --cvbr 32 -q2 --he?
EDIT: I will try and grab MediaInfo.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 16 July, 2015, 04:22:24 AM
According to MediaInfo, this is what I can gather [MediaInfo output for the qaac encode omitted], compared to my test with FDK AAC [MediaInfo output omitted]. What I can gather from all this is that you are absolutely correct. It does not seem like the Apple encoder / QAAC is using HEv2, but HE with parametric stereo (I thought PS was a part of HEv2? But I might be mistaken). Why is Apple reporting their own encoded HE with parametric stereo as "High Efficiency v2" if they are not HEv2 at all? Seems like a rather confusing and stupid thing to do. And shouldn't iTunes and iOS devices be able to decode HEv2 anyway?
Considering the FDK AAC HEv2 file is clearly being played back without the HEv2 portion, and without parametric stereo.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 16 July, 2015, 05:19:07 AM
Hmm, it seems it's not possible to go from ALAC stereo to ALAC mono using QAAC? So I used dBpoweramp to go from ALAC stereo to AIFF mono, then used the *.aif files to make Apple Lossless, and finally had my audiobooks playing in mono — which I actually prefer compared to stereo. But sadly, when I take Apple Lossless mono and convert using --cvbr 32 -q2 --he, I still end up with parametric stereo...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 July, 2015, 06:03:59 AM
Quote
It does not seem like the Apple encoder / QAAC is using HEv2, but HE with parametric stereo (I thought PS was a part of HEv2? But I might be mistaken).
No. The Apple encoder doesn't support PS. HE + PS = HEv2.
Quote
Why is Apple reporting their own encoded HE with parametric stereo as "High Efficiency v2" if they are not HEv2 at all?
I explained the reason. To reliably detect implicitly signaled PS, you need full parsing. And they just don't.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 16 July, 2015, 06:08:57 AM
Is there any way to make QAAC and iTunes / iOS not upsample my AAC HE mono files to stereo using parametric stereo? It sounds much worse compared to just playing it in mono for audiobooks...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 July, 2015, 06:09:30 AM
Quote
Hmm, it seems it's not possible to go from ALAC stereo to ALAC mono using QAAC?
It's possible, but you need to explicitly add "-b 16" or something if you want ALAC, since floating point formats are not supported by ALAC.
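Put together, the stereo-to-mono ALAC conversion described here might look like this — a sketch with hypothetical file names, where --alac selects the ALAC encoder, --matrix-preset mono does the downmix, and -b 16 forces integer samples because the mixing pipeline otherwise outputs floating point:

Code: [Select]
qaac64 --alac -b 16 --matrix-preset mono "book-stereo.m4a" -o "book-mono.m4a"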
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 July, 2015, 06:39:31 AM
Quote
Is there any way to make QAAC and iTunes / iOS not upsample my AAC HE mono files to stereo using parametric stereo?
To make it clear: it's the decoder that converts mono HE-AAC to stereo, even when PS is not present in the stream. Therefore, there's nothing qaac can do about it. And it seems that libavcodec (ffmpeg), faad and Apple CoreAudio all decode mono HE-AAC to stereo.
Quote
It sounds much worse compared to just playing it in mono for audiobooks...
What are you comparing it to? Do you have a decoder that decodes mono HE-AAC to mono and are comparing against that? Or what? There's no reason for sound quality to degrade when mono HE-AAC is decoded as stereo instead of mono.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 16 July, 2015, 06:53:23 AM
I'm only comparing to having AAC at about the same bitrate output in mono. I just compared AAC LC @ 32 kbps mono to AAC HE @ 32 kbps PS stereo, and the first sounds way better for spoken word in my opinion. I seem to notice a tad more artefacts with the AAC LC @ 32 kbps than with the AAC HE @ 32 kbps, but the added halo / echo effect of the parametric stereo is far more noticeable than a few hiccups and pops when the audiobook reader starts to crank up his voice. That said, I don't blame it all on the parametric stereo itself; as I said earlier, I find downmixing from stereo to mono on the source (lossless) files to be an improvement as well. The spoken words become more focused and easier to hear when output in mono instead of stereo. I gave my girlfriend (not tech-heavy at all) a small portion of the audiobook to compare:
#1: CVBR 32 kbps HE (PS stereo playback)
#2: CVBR 32 kbps LC mono (mono playback)
#3: CVBR 32 kbps LC stereo (stereo playback)
And she couldn't really tell #1 and #3 apart, but preferred the second one without question.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 16 July, 2015, 07:34:02 AM
Your wording of "PS stereo" is confused. A mono HE-AAC file created by qaac has nothing to do with PS. Although it is decoded into a stereo stream containing two channels, the PS tool is not used on decoding, since PS is not present in the stream. It's just that the mono output is copied into 2 channels, and it should sound exactly like mono. In other words, if you are listening to the encoded result of qaac, what you are hearing is not a PS artifact but an SBR artifact. Having said that, if you prefer the LC result, then just stick to it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 16 July, 2015, 07:39:25 AM
I'm not that good with the technical side of audio and encoders, so it might be that I'm confusing things. With SBR / PS out of the question, I'm testing TVBR instead (I had ruled out TVBR as I thought HE was preferred below 64 kbps), and it seems that --tvbr 9 -q2 --matrix-preset mono provides very good performance for audiobooks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 27 July, 2015, 02:36:20 AM
Just a small heads-up: qaac.exe and qaac64.exe do not seem to be working on Windows 10 RTM. I tried using them with dBpoweramp as well as calling the .exe directly from an elevated command prompt, and it claims that I'm lacking CoreAudioToolbox — but I have the latest iTunes 64-bit and QuickTime installed. I tried manually copying CoreAudioToolbox.dll into the qaac64.exe folder, but then it starts giving me various CoreAudioToolbox errors instead.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Aleron Ives on 27 July, 2015, 03:58:40 AM
You need to put the required DLLs in the QTfiles folder, not the same folder as qaac.exe. Does doing that fix the error?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 27 July, 2015, 07:12:32 AM
Quote
I tried manually copying CoreAudioToolbox.dll into the qaac64.exe folder, but then it starts giving me various CoreAudioToolbox errors instead.
CoreAudioToolbox.dll is not enough. You need 10 DLLs from the Apple Application Support package. It should be easier to use makeportable.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 27 July, 2015, 07:13:07 AM
Quote
You need to put the required DLLs in the QTfiles folder, not the same folder as qaac.exe.
Well actually, both should work.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 27 July, 2015, 02:57:46 PM
What am I supposed to do with the makeportable cmd? Simply running it as administrator didn't seem to do much.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: juza on 27 July, 2015, 04:41:40 PM
Quote
What am I supposed to do with the makeportable cmd?
Hi. Download iTunesSetup.exe or QuickTimeInstaller.exe, put it into a folder, copy makeportable.cmd into that folder, right-click makeportable.cmd and select "Open". After a few seconds you will have a new folder called "QTfiles"; if you used the 64-bit iTunes executable, you'll also have the folder "QTfiles64". Inside the qaac folder there are two folders called "x64" and "x86": copy the folder "QTfiles" into "x86" and "QTfiles64" into "x64". Now you can use your favourite software to encode your tracks to AAC :-) qaac works with Windows 10; I'm using it without problems. Excuse my English. Cheers.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sPeziFisH on 27 July, 2015, 04:49:26 PM
Quote
What am I supposed to do with the makeportable cmd? Simply running it as administrator didn't seem to do much.
You also have to copy itunes6464setup.exe / QuickTimeInstaller.exe into the extracted folder, directly next to makeportable.cmd. 7-Zip needs to be installed, or 7z.exe has to be copied there too. Then fire off makeportable.cmd; the 10 DLLs get extracted to newly created subdirectories.
Code: [Select]
folder_xyz/
    makeportable.cmd
    itunes6464setup.exe
    7z.exe

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 28 July, 2015, 02:53:55 AM
Thanks! Will try that when I get home. Hopefully I will get QAAC working again to convert my ALAC-ripped audiobooks without needing to transfer everything to my HTPC running Windows 8.1.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 28 July, 2015, 04:41:36 AM
I was able to run makeportable and got the QTfiles folder, but it's still not working? http://bildr.no/view/Rm5HVXY0

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 28 July, 2015, 08:30:32 AM
Quote
I was able to run makeportable and got the QTfiles folder, but it's still not working?
You need QTfiles64 (created from iTunes6464Setup.exe) for qaac64, and place it like this:
Code: [Select]
C:\Path\To\qaac64.exe
C:\Path\To\QTfiles64\ASL.dll
C:\Path\To\QTfiles64\CoreAudioToolbox.dll
          :
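Once the DLLs are in place like that, a quick sanity check is qaac's built-in diagnostic, which should print the qaac and CoreAudioToolbox versions (plus any optional DLLs it finds) instead of an error:

Code: [Select]
qaac64 --check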
Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 28 July, 2015, 09:32:27 AM
makeportable does not give me QTfiles64, only QTfiles? Renaming the folder to QTfiles64 only gets me: ERROR: 193 CoreAudioToolbox.dll

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 28 July, 2015, 12:00:50 PM
Quote
makeportable does not give me QTfiles64, only QTfiles?
Are you trying with iTunes 64-bit?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 28 July, 2015, 12:42:26 PM
I'm using itunes64setup.exe.
EDIT: Extracting the installer using 7-Zip shows that it actually lacks the AppleApplicationSupport64.exe that makeportable is looking for. Seems like Apple removed it from the latest itunes64setup.exe?
EDIT2: It seems one should never rely on Filehippo.com for installers... The download directly from Apple contains the AppleApplicationSupport64.exe. I have no clue why Filehippo would fiddle with the installer.
EDIT3: It was the installer from Filehippo that was causing the issues. It installs iTunes 64-bit, but did not contain the AppleApplicationSupport64.exe for whatever reason. When installing with an itunes64setup.exe that actually contains the AppleApplicationSupport64.exe, everything works as it should. No need for makeportable or anything.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: sPeziFisH on 28 July, 2015, 02:25:16 PM
Quote
When installing with an itunes64setup.exe that actually contains the AppleApplicationSupport64.exe, everything works as it should. No need for makeportable or anything.
Yeah — guess what, 'makeportable' is for low-footprint, installation-free setups.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 28 July, 2015, 08:05:30 PM
Quote
I'm using itunes64setup.exe.
You need iTunes6464Setup.exe.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 31 July, 2015, 02:45:26 AM
I'm here yet again, facing some new issues... I've ripped all my audiobooks using dBpoweramp and Apple Lossless, and now I'm trying to encode them using --tvbr 18 and combine them all into a single *.m4b. This works for some of them, but not all. I keep receiving this error for a number of the audiobooks: http://bildr.no/view/RmdqVE0y
"ERROR: D:\Music\Apple Lossless\J.K. Rowling\Harry Poter and the Deathly Hallows\CD17-Spor17.m4a: The operation completed successfully."
Using this command:
Code: [Select]
qaac64.exe "D:\Music\Apple Lossless\J.K. Rowling\Harry Poter and the Deathly Hallows\*.m4a" --tvbr 18 --quality=2 --rate 32000 --native-resampler=bats,127 --matrix-preset mono --concat -o "D:\Music\Apple Lossless\Harry Poter and the Deathly Hallows.m4b"
There shouldn't be anything wrong with the command, as the exact same command works with other audiobooks? Is there a limit to how many *.m4a files it can combine, or some kind of size restriction or anything?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 July, 2015, 04:43:46 AM
It seems that opening that input file (CD17-Spor17.m4a) failed for an unknown reason (_wfsopen() failed, but the OS errno is not set). Maybe a Windows 10 bug, I dunno. I haven't seen such a case, and there's nothing qaac can do about it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 31 July, 2015, 07:20:46 AM
I will try the same on Windows 8.1 and see what happens.
EDIT: I keep getting the same error under Windows 8.1 and Windows Server 2012 R2. I guess I need to find some other way to join all the M4A files and then use qaac for the TVBR encoding.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 July, 2015, 08:24:25 AM
Quote
I keep getting the same error under Windows 8.1 and Windows Server 2012 R2.
Can you upload the file?
Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 31 July, 2015, 08:57:58 AM
It plays back just fine in iTunes and Windows Media Player (or Music.app or whatever it's called in Windows 10), and I was also able to convert everything to WAV using dBpoweramp. I then tried qaac64.exe with --concat on the WAV files, but it reports the same error, just on CD17-Spor17.wav instead.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 July, 2015, 09:20:25 AM
Can't reproduce your issue. qaac64 can read CD17-Spor17.m4a just fine here.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 31 July, 2015, 10:36:18 AM
It works for me as well when I take the file by itself; it's when I point to the entire folder of *.m4a files and use --concat that it fails like that.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 July, 2015, 11:03:01 AM
Quote
It works for me as well when I take the file by itself...
I tried both a wildcard and --concat on that file, and it works. If the issue happens only when multiple files exist in the folder, please provide all of them.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 31 July, 2015, 02:36:42 PM
Well, it's an entire audiobook in Apple Lossless, so it's quite large. I've created a torrent of all the files within a 7-Zip archive; you can grab the torrent here:

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 July, 2015, 07:24:22 PM
Quote
I've created a torrent of all the files within a 7-Zip archive...
Couldn't download it, but I could reproduce your issue on a folder containing more than 512 files or so. Before encoding, qaac opens all the inputs, and you hit the maximum number of open files allowed by the MSVC runtime. I don't want to change this behavior: it catches errors on input files before encoding starts, and it avoids having to implement pre-fetching on track transitions, which would be required for --play.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 31 July, 2015, 09:13:00 PM
Released 2.52. Now qaac can handle up to 2048 files. (I hadn't thought the default limit of 512 could be not enough for qaac.)

Title: QAAC: discussion, questions, feature requests, etc.
Post by: RamGuy on 01 August, 2015, 04:38:01 AM
Perfect — it's all working with 2.52!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 01 August, 2015, 06:05:55 AM
"It's all fun and games until someone ..." — passes an unexpected threshold.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: darkbyte on 02 September, 2015, 01:27:23 PM
Hope you don't mind if I ask my question here; I don't want to open a new thread for a possibly 2-3 post long conversation. Do you know if recent QuickTime distributions come with the MP3 encoder of iTunes as well? If yes, is it possible to access it at the CLI level, like the AAC encoder is accessed with QAAC? I'd like to experiment with the encoder. There is iTunes Encode (http://www.hydrogenaud.io/forums/index.php?showtopic=29821), which seems to be capable of using this encoder, but it's really outdated and not working with my QT installation.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 02 September, 2015, 08:32:05 PM
Quote
Do you know if recent QuickTime distributions come with the MP3 encoder of iTunes as well?
IIRC, an MP3 encoder is not provided by CoreAudio/QuickTime (even on Mac); it is implemented in iTunes itself.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: exb on 19 September, 2015, 09:07:22 PM
The current makeportable.cmd (Feb 6, 2015, 2:17 AM) doesn't seem to extract the current iTunes6464Setup.exe (12.3.0.44) properly — I get an empty QTfiles folder, an AppleApplicationSupport.msi installer, and an AppleApplicationSupport64.msi installer.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 19 September, 2015, 10:25:17 PM
Quote
The current makeportable.cmd doesn't seem to extract the current iTunes6464Setup.exe (12.3.0.44) properly...
Not reproducing here. As far as I can see, the structure of the 12.3.0.44 installer is unchanged. Can you install iTunes normally? Also, you can manually extract iTunes6464Setup.exe using 7-Zip, then also extract AppleApplicationSupport.msi and AppleApplicationSupport64.msi. See what you get.
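The manual route suggested here might look like this from a command prompt — a sketch with arbitrary folder names; note that 7-Zip extracts MSI contents under internal stream names, so the DLLs may need renaming afterwards, which is what makeportable.cmd automates:

Code: [Select]
7z x iTunes6464Setup.exe -o"extracted"
7z x "extracted\AppleApplicationSupport64.msi" -o"QTfiles64"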
Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 20 September, 2015, 12:01:22 AM
makeportable works fine here too.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: exb on 20 September, 2015, 12:09:07 AM
I re-downloaded makeportable and iTunes and rebooted; it still didn't work. Then I re-downloaded QuickTime 7.7.6.80.95 and used my old makeportable — same result (but no 64-bit Apple Application Support, of course). I can manually extract the files from iTunes and then Apple Application Support, but foobar won't convert a .flac to an .m4a. When I restore the old QTfiles folder, foobar works. Also, the old QTfiles folder had a folder named Microsoft.VC80.CRT; this folder doesn't extract manually from Apple Application Support.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: exb on 20 September, 2015, 12:41:41 AM
I installed iTunes, removed the QTfiles folder from foobar, and it converted to .m4a. For what it's worth, here's what's in the old QTfiles folder: ASL.dll, CoreAudioToolbox.dll, CoreFoundation.dll, icudt46.dll, icuil40.dll, icuuc40.dll, libdispatch.dll, libicuin.dll, libicuuc.dll, objc.dll, pthreadVC2.dll, and the subfolder Microsoft.VC80.CRT, which contains: Microsoft.VC80.CRT.manifest, msvcp80.dll, msvcr80.dll. I can manually extract new versions of everything except those in the subfolder, and foobar will convert with these (plus the old contents of the subfolder). I'm using Windows 7 (64-bit) OEM.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: exb on 20 September, 2015, 12:54:25 AM
The manually extracted Apple Application Support also has these files: When I rename them to msvcp80.dll and msvcr80.dll and place them in the subfolder (after uninstalling iTunes and everything installed along with it), foobar will convert the files. Removing the old Microsoft.VC80.CRT.manifest file does NOT work, so that is the only un-upgraded file in my QTfiles folder now.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: exb on 20 September, 2015, 01:26:48 AM
AppleApplicationSupport64.msi doesn't seem to have enough of the .dlls. Trying to get qaac64 to work hasn't worked; activating it through the command line rather than foobar has failed so far.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 20 September, 2015, 01:34:41 AM
Just tested — redownloaded everything, and everything works, 32-bit and 64-bit. I don't have anything Apple installed on my system; everything was tested in portable mode, straight from the CLI/exe. You don't need the VC80 DLLs; the VC100 ones are the new ones, sometimes VC120, but not here. Please describe everything you do, step by step, including where you download the files and what you have installed on the system. Make a video — whatever makes it easier to explain.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 20 September, 2015, 02:45:05 AM
Name: itunes6464setup.exe
Size: 167601944 bytes (159 MB)
CRC32: 992F204B
CRC64: 645A86B91C1EBD8C
SHA256: AEE2BE960C962BDFF6911D17C3C8209A02C69EC29201F572C0DA2F3A33721229
SHA1: 50B0412E29BCC876B48DF87BE687E5557E11A692
BLAKE2sp: 72EDE257292A2BBC8707B0806496E241019C35BBA30EB24F335D072B27104653
(calculated via the Explorer context menu of 7-Zip 15.07 for Win64, with "CRC SHA" enabled in the settings)
Extraction worked correctly here, using the installed version 15.07 beta of 7-Zip. Maybe exb has an outdated version of 7-Zip? For several cases, 9.20 is the recommended minimum version; I don't know which version is required to handle this installer correctly, but I would strongly recommend at least the 9.3x betas (if you are afraid of the new 15.0x betas).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: exb on 20 September, 2015, 02:53:29 AM
Okay, I just ran makeportable again and it worked. What the hell. Not complaining. Got the 64-bit version running under foobar too. What is up with you, Windows.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 20 September, 2015, 03:59:42 AM
This new version uses ICU 5.5 (icudt55.dll). A minimal version:

Title: QAAC: discussion, questions, feature requests, etc.
Post by: LigH on 20 September, 2015, 05:34:33 AM
^ Much appreciated! Thanks!

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Rumbah on 20 September, 2015, 06:47:57 AM
I have a problem with qaac > 2.51. I use the following command line to encode my files:
Code: [Select]
Tools\qaac\qaac64.exe -V 45 -q 2 -N -d Temp Temp\*.flac
With 2.51 it works as expected, but with versions 2.52 and 2.53 I get the following error, although qaac64 --check shows the libFLAC.dll:
Code: [Select]
qaac 2.53, CoreAudioToolbox 7.9.9.6
ERROR: Not available input file format
Do you need any additional info?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 20 September, 2015, 08:34:39 AM
Quote
With 2.51 it works as expected, but with versions 2.52 and 2.53 I get the following error...
Unfortunately, not reproducing here. Also, no relevant change comes to mind... The difference between 2.51 and 2.52 is tiny.
Only 3 commits:
https://github.com/nu774/qaac/commit/458d21540c86e18131f761fc69187a690bc5e064
https://github.com/nu774/qaac/commit/699e07904884a150e7911dde56732d12b0d71b96
https://github.com/nu774/qaac/commit/68ef0cf79093e46d7915aebb6a77fbe65aeb4746
Can you isolate the condition of the failure?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Rumbah on 20 September, 2015, 09:08:47 AM
Sorry, I'm not very good at C++, but I see that the changes have to do with paths. I guess qaac cannot load/find the FLAC DLL in the newer versions, although it is located in the qaac64.exe folder. Using only WAV files works without problems with the same command line:
Code: [Select]
Tools\qaac\qaac64.exe -V 45 -q 2 -N -d Temp Temp\*.wav
EDIT: Or could there be errors because of a new build chain, as I see that errors for MSVC14 are fixed? I'm using the 64-bit FLAC ICC DLL from rarewares.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 20 September, 2015, 09:17:32 AM
Quote
I guess qaac cannot load/find the FLAC DLL in the newer versions, although it is located in the qaac64.exe folder.
How do you place qaac64.exe and libFLAC.dll? If you replace qaac64.exe (not working now) with an older one in the same path, does the older version work? Then it's unlikely that only the new version cannot locate / load libFLAC.dll.
Quote
EDIT: Or could there be errors because of a new build chain, as I see that errors for MSVC14 are fixed?
2.52 was built with the same compiler as 2.51 (MSVC12), so that's not relevant (2.53 is built with MSVC14, though).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 20 September, 2015, 10:06:11 AM
Everything works here with qaac 2.51 and 2.53 with the 64-bit libFLAC from rarewares.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Rumbah on 20 September, 2015, 10:38:11 AM
After some attempts at reproducing it, I noticed that v2.52 fails only for files bigger than 2 GB. If you want, I can give you my sample and an environment that reproduces it.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 20 September, 2015, 10:58:04 AM
Quote
...v2.52 fails only for files bigger than 2 GB.
Well, does 2.51 succeed on the same file?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Rumbah on 20 September, 2015, 11:18:24 AM
Yes, 2.51 succeeds and 2.52 does not.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 20 September, 2015, 11:26:50 AM
I checked the 2.52 binary and found that it was actually compiled with MSVC14, so my memory was wrong. It might have something to do with this.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Rumbah on 20 September, 2015, 11:30:12 AM
I sent you my environment to reproduce the error as a PM.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 20 September, 2015, 11:37:20 AM
I built qaac64 2.52 myself (with VS2013 u5, and also with VS2010) and it works with a 2.3 GB FLAC file.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 20 September, 2015, 12:28:00 PM
It seems that the fstat() of the new Universal CRT introduced with VC14 now fails when the file in question is larger than 2 GB, so we have to always use _fstat64() instead. Actually, qaac was using fstat() only to test whether the input is a seekable regular file, so the file size returned by fstat() was insignificant. However, the new fstat() implementation refuses to silently return an incorrect value when the size doesn't fit in a 32-bit integer.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Rumbah on 20 September, 2015, 03:23:06 PM
OK, then I'll use 2.51 till there is a fixed version. Thank you for your help.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 20 September, 2015, 03:27:24 PM
I thought that 2.54 is the fixed version.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 20 September, 2015, 04:17:26 PM
Quote
This new version uses ICU 5.5 (icudt55.dll). A minimal version:
Sorry if I asked before: how can a 3 KB empty/dummy DLL replace a 24 MB DLL? Does it still work? The encoder doesn't need it? Thanks.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 20 September, 2015, 04:31:40 PM
Quote
how can a 3 KB empty/dummy DLL replace a 24 MB DLL? Does it still work? The encoder doesn't need it?
icudt55.dll contains no code, only data for ICU ("International Components for Unicode"). "The default ICU data consists of the data needed for the converters, collators, locales, etc. that are provided with ICU." — http://userguide.icu-project.org/icudata
Apparently qaac/refalac doesn't use any function of CoreAudioToolbox that requires this data to be present.
Quote
Sorry if I asked before
Indeed (https://www.hydrogenaud.io/forums/index.php?s=&showtopic=85135&view=findpost&p=897016)...

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Rumbah on 20 September, 2015, 04:49:33 PM
Ah, I didn't notice that there is a newer version already. Thanks.
Title: QAAC: discussion, questions, feature requests, etc.
Post by: Brazil2 on 22 September, 2015, 11:50:08 AM
Quote
I have a problem with qaac > 2.51. ... ERROR: Not available input file format
I have the exact same problem with FLAC files and the x86 build: v2.51 works, but versions 2.52 and above all fail, using the same QTfiles portable folder and just swapping qaac.exe. I'm on XP, and the FLAC files I've used for testing are about 4 and 16 MB.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: eahm on 22 September, 2015, 12:23:58 PM
Are we talking only about Windows XP? Here on Windows 8.1, qaac 2.54 works perfectly with FLAC files — tested right now.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 22 September, 2015, 12:40:40 PM
Quote
I have the exact same problem with FLAC files and the x86 build
With the same error message?
Quote
I'm on XP, and the FLAC files I've used for testing are about 4 and 16 MB.
Do you use XP SP3?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Brazil2 on 22 September, 2015, 03:08:19 PM
Quote
With the same error message?
Yes:
Code: [Select]
ERROR: Not available input file format
BTW, same behaviour with the WavPack DLL.
Quote
Do you use XP SP3?
No, I'm on SP2. Because I had troubles with some (not so recent, but I like it) musical hardware and some of its drivers, I had to revert back to SP2 years ago. MSVC14 requires SP3? Well, encoding from WAV works fine, so I can live with it.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 22 September, 2015, 03:58:58 PM
Quote
MSVC14 requires SP3?
Apparently not, if qaac works (at least partially).

Title: QAAC: discussion, questions, feature requests, etc.
Post by: lvqcl on 22 September, 2015, 04:38:53 PM
I tested qaac on XP SP3. 2.51 works with FLAC files, 2.54 doesn't. But --check shows that qaac 2.54 is able to find and load libFLAC.dll. Another problem with _fstat64() or fileno()?

Title: QAAC: discussion, questions, feature requests, etc.
Post by: nu774 on 22 September, 2015, 08:06:08 PM
It turned out that the stat() family of the Universal CRT calls the GetFileInformationByHandleEx() API, which isn't present on XP.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: Brazil2 on 23 September, 2015, 01:15:47 AM
The issue has been fixed in v2.55; it's now working as expected. Thanks for the update.

Title: QAAC: discussion, questions, feature requests, etc.
Post by: bbrabant on 22 October, 2015, 12:29:41 PM
I use qaac to encode my music for use on my portable device. I do not install iTunes; I use makeportable to extract the QTfiles. When testing the new iTunes QTfiles, I discovered that one particular music track is not always encoded identically: when I encode this PCM WAV 44,100 Hz 16-bit stereo track multiple times, the resulting m4a's are not bit-identical to each other. I encoded test.wav as follows, with qaac 2.55 (x86), CoreAudioToolbox 7.10.5.0 (iTunes6464Setup.exe 12.3.1.23):
Code: [Select]
qaac test.wav --cvbr 96 -o "test 1.m4a"
qaac test.wav --cvbr 96 -o "test 2.m4a"
qaac test.wav --cvbr 96 -o "test 3.m4a"
qaac test.wav --cvbr 96 -o "test 4.m4a"
qaac test.wav --cvbr 96 -o "test 5.m4a"
qaac test.wav --cvbr 96 -o "test 6.m4a"
I used foobar2000 1.3.8 to bit-compare the m4a's. Results:
test 1.m4a = test 2.m4a = test 3.m4a = test 5.m4a
test 1.m4a vs test 4.m4a => Differences found: 4096 values, starting at 0:34.410522, peak: 0.0037362 at 0:34.430862, 2ch
test 1.m4a vs test 6.m4a => Differences found: 4096 values, starting at 0:34.410522, peak: 0.0037362 at 0:34.430862, 2ch
I also encoded this WAV multiple times with qaac64. The m4a's encoded with qaac64 do not show this difference; they are all identical to each other.
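Such a determinism test can also be scripted from a command prompt — a sketch; note that fc /b compares whole files byte by byte, so container-level differences could show up even where foobar2000's bit-compare, which looks at the decoded audio, reports identical streams:

Code: [Select]
for /L %i in (1,1,6) do qaac test.wav --cvbr 96 -o "test %i.m4a"
fc /b "test 1.m4a" "test 4.m4a"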
Post by: bbrabant on 22 October, 2015, 12:29:41 PM
I use qaac to encode my music for use on my portable device. I do not install iTunes but use makeportable to extract the QTfiles. When testing the new iTunes QTfiles I discovered that one particular music track is not always encoded correctly. When I encode this PCM WAV (44,100 Hz, 16-bit, stereo) track multiple times, the resulting m4a's are not bit-identical to each other. I encoded test.wav as follows, with qaac 2.55 (x86), CoreAudioToolbox 7.10.5.0 (iTunes6464Setup.exe 12.3.1.23):
Code: [Select]
qaac test.wav --cvbr 96 -o test 1.m4a
qaac test.wav --cvbr 96 -o test 2.m4a
qaac test.wav --cvbr 96 -o test 3.m4a
qaac test.wav --cvbr 96 -o test 4.m4a
qaac test.wav --cvbr 96 -o test 5.m4a
qaac test.wav --cvbr 96 -o test 6.m4a
I used foobar2000 1.3.8 to bit-compare the m4a's. Result of the bit compare:
test 1.m4a = test 2.m4a = test 3.m4a = test 5.m4a
test 1.m4a vs test 4.m4a => Differences found: 4096 values, starting at 0:34.410522, peak: 0.0037362 at 0:34.430862, 2ch
test 1.m4a vs test 6.m4a => Differences found: 4096 values, starting at 0:34.410522, peak: 0.0037362 at 0:34.430862, 2ch
I also encoded this WAV multiple times with qaac64. The m4a's encoded with qaac64 do not have this difference; all of them are identical to each other.

Post by: bbrabant on 22 October, 2015, 01:57:40 PM
Quote: I encoded test.wav as follows, with qaac 2.55 (x86), CoreAudioToolbox 7.10.5.0 (iTunes6464Setup.exe 12.3.1.23): (the six --cvbr 96 commands above)

Post by: nu774 on 23 October, 2015, 05:41:03 AM
Quote: File was downloaded 2015-10-23 3:26 and has been permanently removed from Sprend. Our terms for Sprend Free state that the file is removed 2 hours after a successful download. If you need the file you have to contact the person who sent you the file via Sprend. Sprend does not have any more information about the file or the sender.

Post by: bbrabant on 23 October, 2015, 10:55:56 AM
Quote: File was downloaded 2015-10-23 3:26 and has been permanently removed from Sprend.
Hello nu774, I have sent you a PM with a new link to download the test.wav.

Post by: nu774 on 23 October, 2015, 11:32:02 AM
Quote: Hello nu774, I have sent you a PM with a new link to download the test.wav
Thanks. I tried with the same settings 50 times, and got all identical files. However, they were different from yours.
Total size of the AAC bitstream of mine: 537889 bytes
Yours (1-3, 5): 537887 bytes
Yours (4, 6): 537895 bytes
The difference between yours and mine might come from the difference in the CPU in use. I don't know why it is non-deterministic in your case, since I don't know what Apple does in their encoder. Anyway, the difference looks so tiny that I don't think the result is incorrect.

Post by: bbrabant on 23 October, 2015, 11:58:24 AM
Quote: The difference between yours and mine might come from the difference in the CPU in use. I don't know why it is non-deterministic in your case since I don't know what Apple does in their encoder. Anyway, the difference looks so tiny that I don't think the result is incorrect.
Thank you for answering my question. I will use qaac 64-bit for the meantime. So far I have found no difference after encoding with qaac64.

Post by: IotaQouta on 03 November, 2015, 02:35:00 PM
Hello, first off, I apologize if this has been asked before. If it's been answered, I'd appreciate it if anybody could point me to it.
I am trying to convert/encode my audio files to iTunes Plus quality and format, so I did a test comparison between an .m4a file converted using the latest version of iTunes (12.3.1.23) and qaac 2.55, and I couldn't get them to match up. (I'm pretty much just re-encoding .m4a to .m4a, since iTunes can't convert .FLAC files.) My goal is to convert .FLAC files into iTunes Plus .m4a using Apple's encoder, which is why I'm using qaac. So my question is: what is causing this discrepancy between the file converted using iTunes and the one using qaac, and how can it be resolved? Here are some screenshots of their properties (file properties were viewed with dBpoweramp):

iTunes 12.3.1.23: it gives a file with a slightly lower bitrate, but dBpoweramp says the "audio quality" is very high. (Whatever iTunes outputs is the one I prefer.)
Encode settings: http://i.imgur.com/i4y2IzN.jpg
Output: http://i.imgur.com/ZlMHheg.jpg

qaac 2.55: it gives a file with a slightly higher bitrate, but dBpoweramp says the "audio quality" is low.
Encode settings:
Code: [Select]
qaac64 --no-smart-padding -v256 -q2 *.m4a
Output: http://i.imgur.com/8Kz7fnI.jpg

qaac outputs a similar file when I convert .FLAC to .m4a. I've also tried using dBpoweramp with qaac (through its command-line encoder option) but the results are the same.

Post by: sluggy on 03 November, 2015, 05:26:54 PM
Quote: So my question is, what is causing this discrepancy between the one converted using iTunes and one using qaac and how can it be resolved? (...)
I would say the audio quality is highlighted as low because dBpoweramp loses the iTunes ID tag info when encoding with qaac; as you can see, nothing shows up under bit rate in the audio properties. As for the file size differences, I'm not sure.

Post by: nu774 on 03 November, 2015, 07:40:42 PM
Quote: So my question is, what is causing this discrepancy between the one converted using iTunes and one using qaac and how can it be resolved?
The difference in file size is insignificant, since it can come from a difference in the container. Use the Binary Comparator of fb2k or something similar to compare the audio signal. As for the blank bit rate property, see the posts from https://www.hydrogenaud.io/forums/index.php?s=&showtopic=85135&view=findpost&p=902359 . I guess the "Audio Quality" portion is judged from the bit rate, so it derives from the same issue.

Post by: IotaQouta on 03 November, 2015, 10:29:03 PM
Quote: As for blank bit rate property, see posts from https://www.hydrogenaud.io/forums/index.php?s=&showtopic=85135&view=findpost&p=902359 . I guess "Audio Quality" portion is judged from bit rate, so is derived from the same issue.
Thanks for pointing me in the right direction; at least I've got an answer now. It seems that it won't be resolved because it's not broken, and it's not an issue with qaac. Anyway, since that "0 average bitrate" will bother me when looking at audio properties, I'll settle for encoding .FLAC into ALAC instead with dBpoweramp, and if necessary further re-encode it into lossy .m4a (with iTunes).

Post by: Dive on 20 November, 2015, 07:14:06 AM
I'm trying to encode 24/48 FLAC with its cue file as the input file on a plain qaac command line, but I always end up with this error: "ERROR: cuesheet: Invalid INDEX time format at line 33". foobar2000 with qaac doesn't have this problem. Here's the track content at line 33:
Code: [Select]
  TRACK 07 AUDIO
    TITLE "Loving Sea"
    PERFORMER "Steve Hackett"
    INDEX 01 36:18:80

Post by: nu774 on 20 November, 2015, 08:16:52 AM
Quote: INDEX 01 36:18:80
The format of a cuesheet index position is mm:ss:ff (minutes:seconds:frames), where a frame is 1/75 second. Apparently, 80 is out of the valid range (00-74).
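To make the arithmetic concrete, here is a small C sketch (hypothetical, not qaac's actual parser) that converts an INDEX position to a 44.1 kHz sample offset; with the tolerant flag set it normalizes an out-of-range frame count the way fb2k appears to (see the next post):

Code: [Select]
#include <stdio.h>

/* "mm:ss:ff" -> sample offset at 44100 Hz; one frame = 1/75 s,
   so 44100 / 75 = 588 samples per frame. With tolerant = 1,
   excess frames carry into seconds (36:18:80 -> 36:19:05). */
int index_to_samples(const char *pos, int tolerant, long *samples)
{
    int mm, ss, ff;
    if (sscanf(pos, "%d:%d:%d", &mm, &ss, &ff) != 3)
        return -1;
    if (ff > 74) {
        if (!tolerant)
            return -1;      /* strict: frames must be 00-74 */
        ss += ff / 75;      /* carry frames into seconds */
        ff %= 75;
        mm += ss / 60;      /* and seconds into minutes */
        ss %= 60;
    }
    if (mm < 0 || ss < 0 || ss > 59 || ff < 0)
        return -1;
    *samples = (((long)mm * 60 + ss) * 75 + ff) * 588L;
    return 0;
}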
Post by: Dive on 20 November, 2015, 08:40:36 AM
Quote: The format of a cuesheet index position is mm:ss:ff (minutes:seconds:frames), where a frame is 1/75 second. Apparently, 80 is out of the valid range (00-74).
Does that mean that, when using qaac, foobar2000 can tolerate that invalid frame range?

Post by: nu774 on 20 November, 2015, 08:57:08 AM
Quote: Does that mean that, when using qaac, foobar2000 can tolerate that invalid frame range?
Maybe fb2k treats 36:18:80 as 36:19:05. It has nothing to do with qaac or whatever encoder you run from fb2k.

Post by: duchski on 25 December, 2015, 03:17:36 PM
Does anybody know the switches for qaac to produce output in the following folder/name structure: artist/album/track - title.m4a? I tried everything but I am probably missing something...

Post by: nu774 on 25 December, 2015, 10:25:30 PM
No, qaac doesn't automatically create folders for you. If you try --fname-format="${artist}/${album}/${track} -${title}", this expression will be evaluated to something like "Coralie Clément_Bye Bye Beauté_03 - L'Enfer". That is, all characters invalid in a file name (including slashes/backslashes) are replaced with underscores. Slashes in a title are quite common, and you usually don't want them treated as path separators.

Post by: duchski on 26 December, 2015, 12:40:49 AM
Quote: No, qaac doesn't automatically create folders for you. (...)
So that's why it didn't work. Thanks...

Post by: KinG-KanG on 30 December, 2015, 12:06:50 AM
Code: [Select]
qaac64.exe --tag "gen:J-Pop" --tag "geID:27" -o output.m4a input.wav
With the above command, qaac saves the "gen" tag but not the "geID" tag. Why?

Post by: nu774 on 30 December, 2015, 05:12:08 AM
Quote: With the above command, qaac saves the "gen" tag but not the "geID" tag. Why?
Thanks for the report. It was ignored due to a typo, and is fixed in 2.56.

Post by: KinG-KanG on 30 December, 2015, 04:54:00 PM
Glad I could help, and fast too. One other thing: can you make qaac read the "comment" tag from a file (UTF-8 without BOM), so I can insert a non-English comment?

Post by: decollated on 30 December, 2015, 09:21:37 PM
Is it possible for "--gapless-mode 2" (qaac 2.55) to actually cause small gaps during playback on an iPod Classic? I was listening to a live album today that I encoded using that option, and noticed some tiny gaps that are not in the source.
Post by: nu774 on 30 December, 2015, 10:04:55 PM
Quote: Is it possible for "--gapless-mode 2" (qaac 2.55) to actually cause small gaps during playback on an iPod Classic? (...)
Well, the only thing an encoder can do is write some metadata describing the amount of the gaps. Basically, gapless playback is a player-side task. It's impossible for an encoder to make a player play gaplessly when the player doesn't properly support gapless playback. Having said that, why do you use --gapless-mode 2? Have you tried the default?

Post by: nu774 on 30 December, 2015, 10:08:36 PM
Quote: One other thing: can you make qaac read the "comment" tag from a file (UTF-8 without BOM), so I can insert a non-English comment?
It is possible, but why don't you just use some tag editor? Preparing a comment file for each encoding seems no easier.

Post by: KinG-KanG on 31 December, 2015, 09:22:40 PM
Quote: It is possible, but why don't you just use some tag editor? Preparing a comment file for each encoding seems no easier.
I used Kid3 and it messed up some tag entries (cmid, I think). I used Mp3tag, and that program never tells you whether it wrote an ID3 tag or an iTunes tag. Both programs mess up the tag sorting in the file. qaac sorts tags starting from the copyright sign (which looks neat to me), Kid3 sorts tags starting from com.itunes (I hate their Qt GUI), and Mp3tag sorts by tag description (artist, album, ..., year).

Post by: Makm on 03 January, 2016, 12:16:09 PM
@nu774 Is it possible to use "--native-resampler=bats,127" in qaac to convert a 24/96 file to a 16/44.1 ALAC?

Post by: Makm on 03 January, 2016, 12:30:01 PM
It's ok, I figured it out.

Post by: Sohl on 04 January, 2016, 03:04:38 PM
Hello, I am having a problem converting my songs using qaac 2.57 with foobar2000. I keep getting a code 2 error, but only when the songs I'm converting are mono. I have included a Process Monitor logfile with foobar2000 and its subtree. https://drive.google.com/open?id=0B3xjPcNPu0-4V2dLZUc0djN6M28

Post by: Brazil2 on 04 January, 2016, 05:15:42 PM
Quote: Hello, I am having a problem converting my songs using qaac 2.57 with foobar2000. I keep getting a code 2 error, but only when the songs I'm converting are mono.
Same here, and without encoding through foobar:
Code: [Select]
mono.m4a
ERROR: Not supported channel layout
But it's working fine with version 2.55:
Code: [Select]
mono.m4a
AAC-LC Encoder, TVBR q82, Quality 96
[100.0%] 0:24.681/0:24.681 (68.6x), ETA 0:00.000
1088433/1088433 samples processed in 0:00.360
Overall bitrate: 79.8316kbps
Optimizing...done

Post by: KinG-KanG on 04 January, 2016, 06:03:35 PM
Quote: One other thing: can you make qaac read the "comment" tag from a file (UTF-8 without BOM)? (...) I used Kid3 and it messed up some tag entries (...)
Version 2.57... WOW... Thank you, and thank you. Great update: tags and libsnd pre-6.

Post by: nu774 on 04 January, 2016, 10:23:56 PM
Quote: Hello, I am having a problem converting my songs using qaac 2.57 with foobar2000. I keep getting a code 2 error, but only when the songs I'm converting are mono. (...)
Fixed in 2.58, thanks.

Post by: francesco on 05 January, 2016, 03:26:20 AM
Quote: Fixed in 2.58, thanks.
Hi nu774, thanks for the update. May I ask which is the most up-to-date site to download qaac from? Cheers.

Post by: LigH on 05 January, 2016, 03:33:37 AM

Post by: francesco on 05 January, 2016, 03:37:57 AM
Thanks, so I'm sure I get the new one!

Post by: jacobacci on 23 January, 2016, 04:58:38 AM
I saw in the qaac version history that qaac uses libsoxrate (a modified version of libsox, dynamically linked). Does this mean that qaac uses SoX to resample? In some threads I saw reference being made to qaac using Speex. Which is correct? Thanks.

Post by: LigH on 23 January, 2016, 05:43:48 AM
SoX (Sound eXchange) is a "swiss army knife" piece of software for audio filtering; libsoxrate will probably contain (more or less) only its resampler. Speex is a low-bandwidth, voice-optimized audio codec, not primarily a resampler. I would be surprised if qaac used any features of the Speex codec (which is rather a competitor to an AAC codec). But go on, surprise me. ;)

Post by: Moni on 26 January, 2016, 12:48:48 PM
Quote: Does this mean that qaac uses SoX to resample? In some threads I saw reference being made to qaac using Speex. Which is correct?
If it does use SoX to resample, you can get an idea of its performance by picking SoX 14.4 VHQ and cycling through the charts. It's basically perfect. http://src.infinitewave.ca

Post by: ZakiSayed on 29 January, 2016, 12:00:39 AM
Hey, can anyone help me out?... https://hydrogenaud.io/index.php/topic,111102.0.html
Post by: LigH on 01 February, 2016, 08:11:17 AM
^ The link didn't work, but you got a reply already (https://hydrogenaud.io/index.php/topic,111102.msg915344.html#msg915344).

Post by: ZakiSayed on 08 February, 2016, 02:53:11 AM
Quote: ^ link didn't work; but you got a reply already
The link is working fine for me, though as you've already noticed, I've got a lot of helpful answers already. Thanks anyway.

Post by: Anakunda on 22 February, 2016, 07:45:43 AM
Hi! Does it make sense, quality-wise, to convert hi-res sources to a sample rate above 44.1 kHz, e.g. 48 kHz? I tried several sources, but the frequency cut-off is still placed too low, even for presets at bitrates near 320 kbps. For a better trade-off I'd welcome the possibility to set the threshold higher, possibly up to 24 kHz. Otherwise I don't see any advantage in using 48 kHz. It seems to me that all predefined lowpass filters are hard-set for 44.1 kHz targets.

Post by: lvqcl on 22 February, 2016, 08:00:12 AM
Quote: I tried several sources, but the frequency cut-off is still placed too low, even for presets at bitrates near 320 kbps.
Not confirmed: I cannot see any lowpass for a 48 kHz stereo file at 320 kbps. CoreAudioToolbox 7.10.5.0.

Post by: Anakunda on 23 February, 2016, 02:59:09 AM
lvqcl, I confirm. This is used for the second-highest preset (at 282 kbps): http://i.imgur.com/5QsTCaJ.png
At the highest preset the frequencies go up to 24 kHz, but at the cost of too high a bitrate (352 kbps).

Post by: lvqcl on 23 February, 2016, 04:32:14 AM
You wrote "for presets at bitrates near 320kbps", so I tested CBR 320 and TVBR 127 (its average bitrate is close to 330 kbps for me). TVBR 118 has a lowpass at 23 kHz. Anyway: if you think that the current encoding settings aren't optimal, you should ask Apple to change them.

Post by: Anakunda on 23 February, 2016, 05:05:42 AM
That sounds like the thresholds can't be changed (not even by hacking CoreAudioToolbox?). If that's the situation, what is the lowest quality preset at which keeping 48 kHz for the output gives an advantage over 44.1 kHz?

Post by: lvqcl on 23 February, 2016, 05:31:01 AM
If you cannot hear above 20 kHz then there's no advantage anyway.

Post by: John Silver on 08 March, 2016, 07:02:59 AM
Hello nu774. What exact encoding parameters do you recommend for the best sound:
1. CD, 16-bit 44 kHz, to M4A at 128, 160, 192, 224, 256 kbps?
2. Vinyl, 24-bit 96 kHz, to M4A at 128, 160, 192, 224, 256 kbps?

Post by: RamGuy on 10 April, 2016, 02:49:31 PM
Is there any way to make qaac split an M4A/M4B/AAC file in two?
I'm tagging some audiobooks using Audiobook Builder for Mac OS X, and there is an issue with QuickTime on Mac OS X not allowing files longer than 18 hours to be tagged. Most of my audiobooks are 20-40 hours, so I would want to split them in half, do the tagging, and then re-join them afterwards.

Post by: Ludo13B on 07 May, 2016, 11:23:23 AM
Hi! I discovered the Apple AAC encoder yesterday (I used Nero's a few years ago), and am really interested in using it for movie audio. I read a lot of this thread, but am lost on a few things; I would be glad if you could help me. :) Two categories: qaac options (to use with eac3to to convert my .dts or .ac3 to .wav) and foobar2000. I tried foobar2000, but I want to use eac3to if possible.

qaac options:
- Are these options required for movie audio: --rate keep, --adts, --no-delay, --no-optimize (I even saw a --limiter)?
- When is --ignorelength used? With piped .WAV input?
- Does the use of .m4a or .aac output change anything? It is to be remuxed with MKVToolNix (mkvmerge), video in an .mkv container.
- Are the options different for encoding stereo, 5.1, downmixed stereo, etc.? Or for different types of DTS (ES, HD MA, etc.)?
I am not sure how to handle the "iTunSMPB" tag in my case. With the --no-delay option? Or nothing?

foobar2000:
- I suppose I will get the same result with the same version of qaac? Or may the different DTS or AC3 decoders result in slightly different output files? I use foo_input_dts and foo_ac3 from here: http://kode54.foobar2000.org/
- I placed qaac.exe in C:\Program Files (x86)\foobar2000\encoders, but not libsoxr.dll and libsoxconvolver.dll. Are they required? Can I just replace qaac.exe to update it?
- Are the options different when using only eac3to, or exactly the same in foobar?
Thanks a lot! Ludo

Post by: nu774 on 07 May, 2016, 01:37:40 PM
Quote: - are these options required for movies audio? --rate keep, --adts, --no-delay, --no-optimize (I even saw a --limiter?)
Well, nothing is required. However, you might find --no-delay useful for this purpose. Every MDCT codec, including AAC, has encoder delay by design. qaac, when encoding to .m4a, writes delay information into container metadata called iTunSMPB. As long as you don't remux the resulting file and you are using a player such as fb2k that is capable of gapless playback, delay is not an issue (the player can trim it using the information from iTunSMPB). OTOH, in the movie case you remux the resulting .m4a into another container file, so the delay information can be lost in the remuxing process. Even when it is not lost, the player might not take care of the audio delay very well. Therefore --no-delay is the easiest way to avoid a tiny A/V sync issue due to the delay (it is 40 ms or so). The --no-delay option simply compensates for the delay. However, using --no-delay means that the necessary priming samples are not encoded in the result, so the first 20 ms or so cannot be correctly reconstructed. If you know that the very beginning of the sound track is silence (usually yes), it should do no harm. As for mkvmerge, it seems to be able to handle iTunSMPB in .m4a, so you might not need --no-delay. Try it yourself. Using --adts means no metadata (including iTunSMPB); in that case there's no chance of avoiding the A/V sync issue unless you specify --no-delay. Other than that, as long as the remuxer is capable of reading both .m4a and .aac, the choice of intermediate format is not important. Finally, if you are remuxing to another format, optimizing the .m4a is pointless, so you can safely skip the optimizing step with --no-optimize.

Quote: - when is --ignorelength used? With piped .WAV input?
Yes, but it's not always needed. If you are piping from ffmpeg, it is totally unnecessary. qaac can detect an "invalid" length in the WAV header in some cases, in which case it automatically switches to ignorelength mode anyway. You can easily tell when qaac is running in ignorelength mode (it doesn't print total time and ETA in the progress display, since they are unknown). And if something goes wrong, it should be obvious from the resulting duration. Therefore it's not something you should worry about too much.

Quote: - are the options different for encoding stereo, 5.1, downmixed stereo, etc.? Or for different types of DTS (ES, HD MA, etc.)?
No.
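Putting that advice together, a hypothetical pipeline might look like the following (file names and the -V value are placeholders; piping from ffmpeg makes --ignorelength unnecessary, per the answer above):

Code: [Select]
ffmpeg -i movie.mkv -vn -f wav - | qaac64 --no-delay --no-optimize -V 91 - -o audio.m4a
mkvmerge -o muxed.mkv -A movie.mkv audio.m4a

Here -vn drops the video from the ffmpeg output, qaac reads the WAV from stdin ("-"), and mkvmerge's -A excludes the original audio tracks so only the new AAC track is muxed alongside the video.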
Post by: Ludo13B on 07 May, 2016, 02:23:03 PM
nu774, thanks a lot for your clear explanations. :) I will run some tests with this information in mind. Thanks again, and for your great work too. :)

Post by: Brazil2 on 10 May, 2016, 02:51:50 PM
New version 2.59:
Quote: Fix: was failing when encoding mono AAC to AAC. Support for the WavPack v5 interface (with large file support). WavPack version 5 is still in alpha state, and the DLL is not officially provided. If you want to try it, you have to build it yourself from the source code. This version of qaac supports both the old and the new WavPack DLL (they are binary compatible). When the new functions provided by the version 5 library are detected, qaac will use them.

Post by: eahm on 17 May, 2016, 12:18:27 PM
makeportable gives a console error with the latest version of iTunes 64-bit (12.4.0.119). All good with the 32-bit version. Using 7-Zip 16.00 64-bit (installed and copied, same error). It's a 7-Zip issue: 15.14 can extract just fine. The 7-Zip developer has been notified.

Post by: eahm on 19 May, 2016, 11:13:38 PM
...aaand 7-Zip 16.01 fixes the issue.

Post by: sundance on 25 August, 2016, 08:10:56 AM
Just out of curiosity: why is qaac's output directed to stderr when you supply --check as a parameter? (Other messages, like the help screen, go to stdout...)

Post by: nu774 on 25 August, 2016, 09:23:02 AM
Quote: Just out of curiosity: why is qaac's output directed to stderr when you supply --check as a parameter? (Other messages, like the help screen, go to stdout...)
Well, actually, only the help (usage) message goes to stdout, since it's so long that you most likely want to pipe it to a pager or redirect it to a file to view it, in which case you'd need more typing if it were going to stderr, such as:
Code: [Select]
qaac 2>&1 | more

Post by: asbjbo on 03 September, 2016, 05:26:08 AM
Hi, I use qaac embedded in a Python script to create and maintain AAC copies of a well-tagged FLAC library of some ~30,000 songs. It works very well, thanks! However, some portable AAC players are not very smart in their handling of tags.
The player in my car generates an "artist" list containing everyone and everything mentioned as artist, album artist, ensemble, composer or conductor in the file. That becomes rather unwieldy. Even worse, the "album" list splits by artist, generating multiple entries for albums with multiple artists. An album like "100 Country Hits" generates 100 entries in the album list, each with a different artist and one track. Setting a common album artist or the compilation flag does not help. So I would like to generate AAC files with only the most basic tags, deleting all the others. By default, qaac copies the tags from source to target. I can partially override this by using the command-line option --artist to set the track artist to the value of the album artist (even when this is factually incorrect!). This still leaves tags like composer or conductor to clutter up the player's artist list. I have not found an easy way to strip extended tags like these during the conversion. First converting the FLAC file to a tag-less WAV, and then converting again to an AAC file with tags given on the command line, works, but is a slower, two-stage process. I imagine it would be easy to add an option to ignore any tags in the source file and only use those provided on the command line. Is there an easy way to do this in qaac?

Post by: Rollin on 03 September, 2016, 05:40:20 AM
asbjbo, you can do mass removal of unneeded tags after conversion, using foobar2000 or Mp3tag.

Post by: asbjbo on 03 September, 2016, 06:01:18 AM
Yes, that is possible, but since this runs as an automated script I would prefer not to need any manual post-processing. Also, opening 30,000 server files in Mp3tag over the network is extremely slow and crash-prone. An automated two-stage process via WAV is preferable, but still slow and not very elegant. Is there an even better way?

Post by: sundance on 05 September, 2016, 04:31:30 AM
@nu774: According to qaac's wiki it is safe to use --normalize with AAC/MP4, although that may result in samples above 0 dBFS. Did I understand that correctly? Or is an additional --limiter recommended? And how much is "near 0dBFS", where the limiter is activated?
edit: I just did a quick test (with 4 MP3 files being concatenated into one MP4 file) with --normalize alone and together with --limiter. Both MP4 files had peaks above 0 dBFS (1.044) when checked with foobar's ReplayGain scanner...

Post by: nu774 on 05 September, 2016, 06:22:57 AM
Quote: First converting the FLAC file to a tag-less WAV, and then converting again to an AAC file with tags given on the command line, works, but is a slower, two-stage process. I imagine it would be easy to add an option to ignore any tags in the source file and only use those provided on the command line. Is there an easy way to do this in qaac?
If you are scripting in Python, I think you can fetch the tags from the FLAC files first (with the metaflac command or Python's mutagen library), filter them as you like (in the script), then apply them. And you don't have to use an intermediate, temporary WAV file for that purpose. Just use a pipe instead.
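One way to realize this suggestion without a temporary WAV is a single pipe; a sketch, where the tag values are placeholders that the script would first read via metaflac or mutagen and filter down to the basics:

Code: [Select]
flac -dcs "input.flac" | qaac64 --ignorelength --artist "Some Artist" --album "Some Album" --title "Some Title" --track 1 - -o "output.m4a"

Here flac -dcs decodes to stdout silently, and qaac reads the WAV from stdin ("-") while taking only the tags given on the command line.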
Post by: nu774 on 05 September, 2016, 06:30:22 AM
Quote: According to qaac's wiki it is safe to use --normalize with AAC/MP4, although that may result in samples above 0 dBFS. Did I understand that correctly? Or is an additional --limiter recommended? (...)
Both --normalize and --limiter work on the PCM before encoding. Peaks of the resulting AAC can easily go above 0 dBFS (when decoded), and there's nothing I can do about it. (It is quite normal for lossy codecs to have peaks above 0 dBFS when decoded, and you won't hear that "clipping" anyway.)

Post by: audiophool on 22 September, 2016, 06:53:33 AM
When trying to extract the necessary QTfiles from itunes6464setup.exe (latest iTunes 12.5.1) using makeportable, I get a bunch of errors. makeportable doesn't create QTfiles64, either. It says it cannot find the commands "findstr" and "reg". I tried with the latest makeportable.cmd on Win 10 x64 and 7-Zip 16.02.

Post by: Brazil2 on 22 September, 2016, 07:51:21 AM
Quote: Changed 7.1ch rear channel layout in favor of Apple QuickTime. qaac was using 3 front + 2 side + 2 back for 7.1ch rear output. However, it turned out that Apple QuickTime cannot handle this, and only recognizes 3 front + 4 back.
I have encoded 8_Channel_ID.wav (http://www-mmsp.ece.mcgill.ca/documents/audioformats/wave/Samples/Microsoft/8_Channel_ID.wav) with qaac v2.58 and v2.60, and it looks like MediaInfo 0.7.88 doesn't make any difference between the two encodings; it always reports:
Code: [Select]
Channel(s)                               : 8 channels
Channel positions                        : Front: L C R, Side: L R, Back: L R, LFE
Am I missing anything? A MediaInfo or qaac problem?

Post by: nu774 on 22 September, 2016, 08:40:43 AM
On my PC, MediaInfo reports this for 2.60:
Code: [Select]
Channel(s)                               : 8 channels
Channel positions                        : Front: L C R, Side: 4, LFE
Actually, your result is better than this one, and that is how it should be. A 7.1ch file encoded by 2.60 should be decoded in exactly the same way as before, into L+R+C+LF+BL+BR+SL+SR. Recent ffmpeg just treats both of them as "7.1ch". What is different (in 2.60) is how the channel layout is described in the AAC program config element (PCE), where channels are described in a completely different manner from WAV's channel mask.

Post by: Brazil2 on 22 September, 2016, 10:54:41 AM
Quote: On my PC, MediaInfo reports this for 2.60: (...)
I can get this result with the 24-bit version of the file, but not with the attached 16-bit version.

Post by: nu774 on 22 September, 2016, 11:41:21 AM
Well, the channel mask of the attached WAV file is 0xFF, which is the 7.1ch front layout (L + R + C + LF + BL + BR + FLC + FRC). This is the default AAC 7.1ch layout, and it is completely different from the 7.1ch rear layout (L + R + C + LF + BL + BR + SL + SR). The channel layout printed by MediaInfo seems questionable for these files.
As is written in the wiki (https://github.com/nu774/qaac/wiki/Multichannel--handling), qaac treats these two channel layouts differently, and the resulting files actually have different AAC channel layouts. Also, note that ffmpeg needs the "-strict 1" switch to decode 7.1ch AAC with the default (front) layout correctly. It is written here: https://sites.google.com/site/qaacpage/news/qaacrelease235refalac135

Post by: tebasuna51 on 02 October, 2016, 06:08:31 AM
Quote: What is different (in 2.60) is how the channel layout is described in the AAC program config element, where channels are described in a completely different manner from WAV's channel mask.
Encoding a WAV with FL,FR,FC,LF,BL,BR,SL,SR to .aac with qaac 2.60 and decoding it with qaac itself, we obtain a WAV with FC,FL,FR,FC,SL,SR,BL,BR,LF. Other decoders also have problems with qaac 2.60: http://forum.doom9.org/showthread.php?p=1782018#post1782018

Post by: Anakunda on 02 October, 2016, 10:31:23 AM
Quote: [qaac] release 2.61, posted 16 minutes ago by nu774. Fix 7.1ch PCE: was incorrectly writing object_type(2) instead of profile(1). AFAIK this doesn't seem to actually affect decoders, because decoders are only interested in the channel layouts in the PCE; but still, it was incorrect and should be fixed.

Post by: nu774 on 02 October, 2016, 11:02:17 AM
Quote: Encoding a WAV with FL,FR,FC,LF,BL,BR,SL,SR to .aac with qaac 2.60 and decoding it with qaac itself, we obtain a WAV with FC,FL,FR,FC,SL,SR,BL,BR,LF. Other decoders also have problems with qaac 2.60.
Thanks. I noticed one issue in the 7.1ch PCE and released 2.61 just now. However, this release won't fix your problem. It may sound strange that qaac cannot correctly decode 7.1ch AAC created by itself. Actually, the Apple encoder only supports the 7.1ch front layout, not the 7.1ch rear layout (qaac supports encoding the 7.1ch rear layout by manually inserting a PCE that describes the 7.1ch rear channel configuration). Therefore, it is not too strange that Apple's AAC decoder doesn't correctly handle the 7.1ch rear layout. (Although it seems that the QuickTime player (not CoreAudio) recognizes the correct channel layout.) Because of the complexity and excessive flexibility of PCE-based channel configuration, poor handling on the decoder side is to be expected, and there's nothing I can do about it. These days at least libavcodec has good support for PCE-based AAC channel configuration, but it was far worse before. Instead of a PCE, I could use the newer standard ISO/IEC 14496-3:2009/Amd 4:2013, where a ChannelConfiguration for the 7.1ch rear layout is defined, but I suppose it's even worse in compatibility, since it's a rather new standard used by no encoder AFAIK. As a matter of fact, qaac 2.61 now writes exactly the same PCE as the FDK-AAC encoder, so those results should also apply to FDK-AAC.
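For reference, the decoding note above translates to something like the following (the file name is a placeholder; the strictness switch is given as an input option so that it applies to the AAC decoder, though the exact placement may vary by ffmpeg version):

Code: [Select]
ffmpeg -strict 1 -i "71ch_front.m4a" decoded.wav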
Post by: tebasuna51 on 03 October, 2016, 07:41:43 AM
Quote: Thanks. I noticed one issue in the 7.1ch PCE and released 2.61 just now. However, this release won't fix your problem.
Yep, 2.61 is the same.
Quote: Because of the complexity and excessive flexibility of PCE-based channel configuration, poor handling on the decoder side is to be expected, and there's nothing I can do about it.
OK, thanks for your interest. Well, we can survive with this. I don't much like 7.1 anyway, and 5.1 always works fine. And we have libavcodec decoders (like foobar2000) that work with 7.1. Thanks.

Post by: ghostman6842 on 11 October, 2016, 07:22:18 AM
I used qaac (via fb2k) to make some AAC files for my iPod, and they all had the same problem: after a track finished playing, the next file would not only stop playing, it would cause the iPod to shut down. Specifically, the beginning of the next file would shut down after a second (maybe a little less), and then the Apple symbol pops up on the iPod's screen. Now here's the weird part: I used the lossless source files to create ALAC files, imported them into iTunes, then used them to create AAC files... and those worked on my iPod! WTF?! Both times (the last time I checked) Apple Application Support was used to create both sets of AAC files. Yet the iTunes version worked and the fb2k/qaac files didn't. I thought the qaac defaults prevented this type of problem. Here's the unaltered default fb2k command line that qaac uses to create AAC files:
Code: [Select]
--ignorelength -s --no-optimize -V 127 -o %d -
Any ideas about why this is happening?

Post by: ghostman6842 on 11 October, 2016, 05:58:19 PM
I ended up creating another batch of AAC files with qaac via the command line (using -V 127), and they still wouldn't work correctly on my iPod at all. Both iTunes and qaac are using Apple Application Support to make the AAC files, but iTunes has the edge??? At this point I'm starting to wonder if there's a bug in qaac that's causing this, because when I can't even create working AAC files via the command line with qaac, something is clearly going on. I guess I have to use stupid iTunes to get what I want, because right now qaac is failing to get the job done.

Post by: lvqcl on 11 October, 2016, 07:22:58 PM
Try earlier versions? Or try CBR mode; maybe this will change the result.

Post by: ghostman6842 on 11 October, 2016, 09:29:05 PM
Quote: Try earlier versions? Or try CBR mode, maybe this will change the result.
I've tried TVBR, CVBR and CBR, and gotten to the same place: nowhere! But, again, anything I created through the latest iTunes is just totally fine. At this point I have to wonder whether qaac has a bug in it, or whether Apple has purposely modified iTunes so that you can't do anything with AAC or ALAC if it's not created in their program (I wouldn't put it past them). Who knows? In any case, I'm so angry right now that I want to throw my iPod 20 GB in the garbage can. It's nothing more than a useless piece of junk at this point. I don't like not having control of my music.

Post by: nu774 on 12 October, 2016, 12:50:51 AM
Maybe related, or maybe not: https://hydrogenaud.io/index.php/topic,17773.0.html
The m4a container written by qaac is different from the ones created by iTunes. fb2k also writes some metadata that is never written by iTunes. Therefore, there *can* be a compatibility issue regarding the container. MP4 is an unnecessarily versatile, complex format for storing a single audio track. Compatibility issues of some kind are quite common with poor parser implementations, especially in hardware players.
However, since your iPod is able to play at least one track, I guess it's not simply that there is something in the container which the iPod cannot handle properly. There might be an issue in the sync process when files come from outside iTunes... but I really have no idea.

Post by: ghostman6842 on 16 October, 2016, 02:59:06 AM
Quote: Maybe related, or maybe not: https://hydrogenaud.io/index.php/topic,17773.0.html (...)
For the hell of it, I installed Rockbox on my iPod, and I ended up having no problems at all with the qaac-based AAC files. That tells me the Apple software didn't like them. By accident or on purpose? Some might say it's the former. I still think it's the latter. But at least the damn thing works now.

Post by: tvks on 04 December, 2016, 11:45:30 AM
I get the same problem with my iPods (both 5G and 5.5G) as ghostman6842. I encoded the files using the command-line "-v 256" setting, and they can be played if I select a particular one of them. But when the iPod starts the next track (both when the previous one ends and when I press the NEXT button), the iPod restarts. I tried the problem files on different iPod models: the iPod 5G and 5.5G show this problem, while the iPod nano 1G and nano 5G do not. BUT I have found that the files I encoded years before, using qaac 2.33 with the same setting, play fine on all my iPods. AND I also tried encoding some files using the command-line "-v 128" setting, and they play fine. So I guess there may be something with the latest qaac and the "-v 256" setting?

Post by: Rigby on 30 January, 2017, 11:30:34 PM
Question: is it possible to somehow alter the chapter names qaac generates when using the --concat option? I'm trying to make audiobooks from MP3 files; unfortunately the track names in their metadata have ridiculously long prefixes, so I can't see the track numbers in the chapter list on my phone. Ideally I'd like to have simple sequentially numbered names like "Chapter 1", "Chapter 2" and so on. BTW, thanks to the dev! This is a really useful tool that has already saved me a ton of work. :)

Post by: Anakunda on 12 May, 2017, 08:20:21 AM
Good news: the new qaac brings a lot of new features.
Quote:
- Support decoding FLAC in MP4.
- AAC decoder: now recognizes the 6.1ch/7.1ch AAC channel config constants (11 and 12) defined in 14496-3:2009/Amd 4:2013.
- Concatenating files with no title tag with --concat now creates an empty chapter name (formerly the file name was used).
- --start now accepts a negative value.
- Using --start and --delay at the same time is not allowed now.
- These two options are the same functionality-wise, except for the reversed sign (for trimming, you use a positive value for --start and a negative value for --delay).
- --adts and --concat no longer allow concatenating files with varying sample formats.
- External DLLs are now loaded lazily, which means they are not loaded until needed.
- Increased buffering size for --play to avoid glitches on multi-channel files.
- --native-resampler now always uses a dedicated AudioConverter.
- CAF: enabled 7.1ch rear AAC output.
- AAC in CAF: when the chan chunk is not present, get the channel layout from the kuki chunk.
- Named pipe output is removed. I guess it has been rarely used anyway, but if you happen to want it, use https://github.com/nu774/namedpipe .

Post by: eahm on 12 May, 2017, 09:43:08 PM
Thanks for the news and for the update.

Post by: raymondjpg on 19 May, 2017, 05:47:27 AM
I am getting audio breakup when multiplexing HE-AAC encoded with qaac.exe v2.63, and HEVC, in MeGUI. I downloaded v2.63 from two sources with the same result. I have gone back to the qaac backup in MeGUI, which is I think v2.62, and there are no problems.

Post by: Anakunda on 19 May, 2017, 08:40:04 AM
qaac 2.64 is out. Fixed regression of 2.63: HE-AAC frames were incorrectly multiplexed.

Post by: dev on 29 June, 2017, 04:57:54 PM
Hi, is it normal for the encoder to put a lowpass on the LFE channel, or is this an error? Or is putting frequencies higher than 120 Hz on the LFE channel a mistake?
ChannelPlacement.wav http://www6.zippyshare.com/v/KVIBROyc/file.html
ChannelPlacement.m4a http
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4481643736362457, "perplexity": 10786.101153656218}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187823255.12/warc/CC-MAIN-20171019065335-20171019085335-00005.warc.gz"}
https://www.originlab.com/doc/LabTalk/ref/Wks-obj
# 3.7.5.98 Wks The WKS object has properties and methods related to an Origin sheet (Note: A sheet can be either Worksheet or Matrix Sheet). You can use range notation to define a worksheet object: range wksObject = [winName]sheetName! If no range is specified, Origin will work on the active sheet. Once a worksheet (matrix sheet) object is defined, the object properties and methods can be accessed using the following syntax: wksObject.property wksObject.method(argument) For example: range rWa = [Book2]Sheet2!; // Define a worksheet object range rWa.colSel(2,1); // Select the second column of that sheet rWa.vGrids = 0; // Turn off vertical grid lines range rWb = !; // Use the active worksheet as a range NumColumns = rWb.ncols; // Find out how many columns ## Properties When operating on the active worksheet or matrix sheet, you can use wks.property to access sheet properties; otherwise, range notation should be used. Property Access Description (8.0 SR0) integer Auto add rows when sheet is resized. Example: range aa=[book1]sheet2!; // Disable auto add rows to maintain fixed // number of rows and columns // Setup the wks with 3x2 aa.nCols = 2;aa.nRows = 3; wks.c1, c2, r1, r2 Read only integer Selection range. First and last columns and rows. wks.cNamen$Read only string The nth worksheet column short name. See wks.cnamemode to operate on specific column types. (See also: Wks.Col (Object)) wks.cNameMode Read/write integer Its value determines the columns that wks.cnamen$ operates on. Set wks.cnamemode to the following values: 0 = all columns, 1 = numeric columns, 2 = text columns, 4 = text and numeric (mixed) columns, and 64 = columns in the selection range. Set wks.cnamemode = 128 to return the full dataset name to wks.cnamen$. wks.col Read/write integer Current column. See also: The Wks.Col object properties. wks.colWidth Read/write integer Column width. example: wks.col2.width=10; Or use the wcolwidth X-Function to update column width. wks.DC (2019b SR0) Read/write integer Data Connector. See also: The Wks.DC object properties. wks.epd Read/write specify if exclude current worksheet when do plotting with Layer Contents and Plot Setup etc graphing related dialogs. 1= tag this sheet as Exclude from plotting dialog; 0 = relief the exclude tag. You use the system variable @TCE to indicate the tagged(excluded) sheet name with desired color, such as @TCE=Color(255, 60, 60);. wks.export Read/write Worksheet export settings; enter wks.export.= for sub-methods. wks.font Read/write integer Font (by index) of the Standard name style in the sheet. You can use the font(name) function to get a font's index, like wks.font = font(Courier New); wks.fSize Read/write float Font size of the Standard name style in the sheet, like wks.fsize = 12; wks.hasDC (2019b SR0) Read only bool Return whether the workbook/worksheet has a Data Connector: 0 = workbook doesn't have Data Connector, 1 = worksheet is used by a Data Connector as the output (destination sheet), 2 = worksheet is not used by any Data Connector as the output. Iin this case, the worksheet may contain a Data Connector but doesn't build a valid connection. To check if the worksheet is used by a Data Connector as output or not, use "wks.DC.Valid". wks.hGrids wks.vGrids Read/write integer Display horizontal and vertical grid: 1 = enable, 0 = disable. wks.hiddenRows (2019 SR0) Read Only integer Return the number of hidden rows in the worksheet. wks.hiddenRows == wks.maxRows - wks.visibleRows. 
wks.hierarchical (9.1 SR0) Read Only integer Read whether the worksheet is hierarchical (i.e. contains collapsible nodes and tables such as an analysis report sheet does) or not: 1=hierarchical, 0=flat sheet. wks.ignorehidden Read/write Treatment of hidden rows in plotting and analysis operations: 0 = Include data in hidden rows on plotting and analysis. 1 = (default) Ignore hidden rows on plotting and analysis. wks.import Read/write Worksheet import settings; enter wks.import.= for sub-methods. wks.index Read/write integer Worksheet index in workbook, i.e. 1,2,3, etc. Use this property to reorder worksheets. For example: newbook sheet:=4; // Create a 4 sheets workbook; wks.index = 3; // Move "Sheet1" to the 3nd worksheet; Note: This property is Read Only before 8.5.0 SR1. wks.joinMode Read/write integer Set/get the worksheet join mode. Values may be the following: 0 = enumerate when column names match. Append when matching rows are not found. 1 = drop when column names match. Append when matching rows are not found. 2 = enumerate when column names match. Drop when matching rows are not found. 3 = drop when column names match. Drop when matching rows are not found. See the wks.join() method. wks.khra (2018 SR0) Read/write bool When the X column of a column/bar graph contains text, this text is used to label major ticks, ordered by row index. Prior to Origin 2018, when applying a worksheet data filter, plots registered the vacant ticks and labels of filtered data, though the data points were not plotted. This was changed in Origin 2018 so that ticks associated with filtered data no longer display. This only applies to X columns that contain text and are NOT Set as Categorical. 0 (default) = hide filtered data labels, 1 = restore old behavior and show data labels even though data are filtered. wks.loadedgrid Read/write integer 0 if grid not loaded; 1 if grid loaded. wks.longname$ (9.1 SR0) string Long name of worksheet. integer Scan all columns and find the largest row index that has value. You can setup a worksheet with wks.nRows, but before filling it with values, wks.maxRows will still be zero. To reduce the size of a worksheet, use wks.nRows, as this property is only to get the longest column row size. integer Multiple X columns: 1 = Yes, 0 = No. wks.name$Read/write string Worksheet name. wks.nMats (8.5.0) Read/write integer Number of matrix objects in a matrix sheet. wks.nCols Read/write integer Number of columns in the worksheet. Before Origin 8, this property was Read-Only wks.nRows Read/write integer Number of rows in the worksheet. Before Origin 8, this property was Read-Only. See also: wks.maxRows. wks.rhw Read/write integer Row heading width in units of 1/10 of cell height. Example: // Set to about 5 char height range aa=2!; // 2nd sheet of active book aa.rhw=50; wks.sel Read only integer Selection flags. The hex return number indicates what is selected in the worksheet. Values may be the following, or a combination of these bits: 0 = none, 1 = editing cell, 2 = column, 4 = row, 8 = range, and 16 = 1 column. wks.useFont Read/write integer Font usage: 1 = use selected font, 0 = use system font. wks.userParamn (8.0 SR0) Read/write integer Show/hide specified User Parameter. For example: wks.UserParam1=1; // Show the first user parameter wks.userParamn$ (8.0 SR0) string Access the User Parameter's name. 
  For example:
  // Set parameter name as "Site Index"
  wks.UserParam1$="Site Index";
- wks.VisibleCols (9.0 SR0) (Read only, integer): Number of visible columns (not including hidden columns) in the worksheet.
- wks.VisibleRows (9.0 SR0) (Read only, integer): Number of visible rows (not including hidden rows) in the worksheet.

† There is no LabTalk property or command for merging selected worksheet cells, but you can accomplish this by capturing the menu id of the Merge cells toolbar button and using it with the menu -e command.

## Methods

- wks.addCol(name): Add a single named column to the end of the worksheet. If name is not specified, a generic name is chosen.
- wks.colSel(colNum, n): Column selection. If n = 1, select column colNum. If n = 0, deselect column colNum.
- wks.copy(strRegister, Col, Row): Copy(Z): copy the entire wks into string register %Z. (It is recommended that you use %Z, which can hold up to 6,290 characters. If the text is too large, it is not copied and no error occurs.) See also: wks.paste(). Copy(Z, n): copy all rows of column n. Copy(Z, 0, n): copy all columns of row n. See the colcopy, wcopy and wrcopy X-Functions for more options.
- wks.deleteRows(rowBegin[,numRows, colBegin, colEnd]) (2016 SR0): Delete a range of rows. Specifying only rowBegin deletes rowBegin in all columns in the worksheet. Adding the option numRows deletes numRows from rowBegin, in all columns. Use colBegin and colEnd to limit deletion of rows to specified columns, from (a) colBegin to the last column in the sheet (colEnd not specified), or (b) from colBegin to colEnd. See examples. Also see wks.insertRows, below.
- wks.findLabels(ind, K[,n]): Finds an apparent label in a column of data (Origin worksheet or Excel workbook; if an Excel worksheet is active, make sure that the internal data has been updated, as with layer -s, before use). ind = (required) index of the column in which to find the label; K = (required) global string variable letter to store the found label string; n = (optional) 0 to disregard selection, 1 to consider the selection inside the column if only a range of rows inside the column is selected (if nothing in the column is selected, or if the whole column is selected, treat as 0). By default (i.e. if n is omitted), it is treated as 0.
- wks.hasfilter() (9.0 SR0): Test whether there are filters applied in the worksheet. If yes, return 1, else return 0. For more details about filter property scripts, please see wks.col.filter.
- wks.insert(name list): Insert the list of columns at the current location. The current column position is specified by wks.col. The list consists of one or more desired column names separated by spaces. If a column name is already used, it is automatically enumerated.
- wks.insertRows(rowBegin[,numRows, colBegin, colEnd]) (2016 SR0): Insert a range of rows. Specifying only rowBegin inserts one row before rowBegin in all columns in the worksheet. Adding the option numRows inserts numRows from rowBegin, in all columns. Use colBegin and colEnd to limit insertion of rows to specified columns, from (a) colBegin to the last column in the sheet (colEnd not specified), or (b) from colBegin to colEnd. See examples. Also see wks.deleteRows, above.
- wks.isColHidden(colNum) (9.0 SR0): Test whether the column (specified by column number, colNum) is hidden. Return 1 if hidden, 0 otherwise.
- wks.isColSel([colNum]): If colNum is included as an argument, the method returns the selection state of colNum: 0 = the column isn't selected; 1 = the entire column is selected;
  2 = a range of the column is selected. If colNum is not included as an argument, this method returns the number of columns selected (partial and entire selections).
- wks.isRowHidden(rowNum) (9.0 SR0): Test whether the row (specified by row number, rowNum) is hidden. Return 1 if hidden, 0 otherwise.
- [ToWks!]wks.join(FromWks): Join the worksheet specified by FromWks to the worksheet specified by ToWks. This method adds the columns of FromWks to ToWks according to the method specified by wks.joinMode. If ToWks is not specified, then the currently active worksheet is used.
- wks.labels(str) (8.0 SR1): Control the display of worksheet column labels. No argument = do not show any labels; otherwise a string containing column label row characters. For example:
  // Show Long Name and Comments, if they are not empty
  wks.labels();
  // Do not show any label rows
  wks.labels(0);
  // Set to show long name, units and comments
  wks.labels(LUC);
  // Show Comments, User Parameter 1, and Long Name
  wks.labels(CD1L);
  The prefixes +, - and * were added in Origin 8 SR2. The prefixes < and > were added in Origin 2017.
  // To remove Units
  wks.labels(-U);
  // To insert Sample Rate and Sparklines at the top
  wks.labels(+ES);
  // To append Units to the bottom
  wks.labels(*U);
  // To move F(x)= to the bottom
  wks.labels(>O);
  // To move Comments to the top
  wks.labels(<C);
  Note that you can also use + and * to "move" (add) a label row to the top or bottom. The characters < and > will do nothing if the label row is not already shown.
- wks.paste(strRegister, Col, Row): Paste the contents of a string register (specified without the %) into the cell beginning at (Col, Row).
- wks.runfilter() (9.0 SR0): Run or re-apply a filter. For more details on filter property scripts, please see wks.col.filter.
- wks.setaslabel(type, rowNum, label, append) (8.5.1 SR0): Set or append one row as Long Name, Unit, Comment, etc. type: label type (L, C, U, P, etc.). rowNum: the number of the row to set as the label (this row will not be removed); -1 = remove the active row when you set it as the label (note: you need to select the row before running this script). label: 1 = select the label row, 0 = select the data row. append: 1 = append the content of the selected row to the label (only works for Long Name and Comments); 0 = use the content of the selected row to replace the original content.
  // Set the second label row as the Long Name.
  wks.SetAsLabel(L,2,1,0);
  // Append the fourth data row to the Comment.
  wks.SetAsLabel(C,4,0,1);
  // Set the first data row as the Unit.
  // (The first data row should be active,
  // and it will be removed after running the script.)
  worksheet -s 0 1 0 1;
  wks.SetAsLabel(U,-1,0,0);
- wks.template(FileName[,[WinName],NumRows]): Apply the template named FileName to <NumRows> rows of window WinName.
- wks.GetNextVisibleRow(n): Find the next visible row starting from the given row n. GetNextVisibleRow(n) checks the rows after row n one by one and outputs the row index when a visible row is found. So wks.GetNextVisibleRow(0) will give you the first visible row.

## Examples

### Work with Worksheet Columns and Rows

When a new worksheet is created, there are 2 columns and 32 rows by default. To read or set the number of worksheet columns and rows, you can use the wks.ncols and wks.nrows properties.

newsheet;        // Add a new worksheet
wks.ncols = 5;   // Set the number of columns to 5
wks.nrows = 100; // Set the number of rows to 100

Note that Origin will delete columns beyond (i.e., to the right of) the number you specify.
So, in general, it is safer to use the wks.addCol(name) method to add columns.

wks.addCol(Does); // Add a column with short name 'Does'

Regarding worksheet rows, two properties are similar: wks.maxRows and wks.nRows. The former finds the largest row index in the worksheet that has a value, while the latter sets or reads the number of rows in the worksheet. The following script illustrates how to use these two properties:

newbook;         // Create a new workbook
col(b) = {1:10}; // Fill 10 numbers into column B
wks.maxRows = ;  // Returns 10
wks.nRows = ;    // Returns 32

### Display Worksheet Column Labels

This script creates an empty table for the average temperature in different cities. In this example, we will create a user-defined parameter and show the worksheet long name, unit and the user-defined parameter.

range ww = !;                     // Define a range, on the active worksheet
ww.name$ = "Average Temperature"; // Rename the worksheet
ww.ncols = 13;                    // Set the total number of columns
ww.userParam1$ = Month;           // Define a new user parameter label
// Show the worksheet long name, unit and a user parameter
ww.labels(LUD1);
Col(1)[L]$ = City;                // Set column long name
stringarray month = {"Jan.", "Feb.", "Mar.", "Apr.", "May.", "Jun.", "July", "Aug.", "Sep.", "Oct.", "Nov.", "Dec."};
loop(ii, 2, 13) {
    Col($(ii))[L]$ = Temperature; // Set column long name
    Col($(ii))[U]$ = \+(o)F;      // Set column unit
    // Set column user parameter
    Col($(ii))[D1]$ = month.getAt(ii-1)$;
}

### Get the unhidden values from a column in the worksheet

1. Assuming some values in the worksheet are hidden by a filter, the script below will list the unhidden values in the column.

// with the worksheet active
for (int ii = 1; ii <= wks.nrows; ii++) {
    if (wks.isRowHidden(ii)) {          // continue if there is a hidden row
        ii = wks.GetNextVisibleRow(ii); // get the first non-hidden row number after this hidden row
        col(A)[$(ii)] = ;
    }
};

2. The second example shows how the function GetNextVisibleRow works. Assume we have a worksheet as shown below, and run the script with the worksheet active:

// with the worksheet active
type "The 1st visible row is $(wks.GetNextVisibleRow(0))";
loop(ii, 1, 3) {
    a$(ii) = wks.GetNextVisibleRow(ii);
    type "The 1st visible row after row $(ii) is $(a$(ii))";
}

The results output:

The 1st visible row is 1
The 1st visible row after row 1 is 3
The 1st visible row after row 2 is 3
The 1st visible row after row 3 is 5
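### Insert and Delete Rows

The wks.insertRows() and wks.deleteRows() methods above say "See examples," but none appear in this section. The following minimal sketch is based only on the documented signatures; the row and column numbers are arbitrary illustrations.

// Assume the active worksheet has at least 5 rows and 3 columns
wks.insertRows(2);          // Insert one row before row 2, in all columns
wks.insertRows(2, 3);       // Insert 3 rows starting at row 2, in all columns
wks.deleteRows(4, 2);       // Delete 2 rows starting at row 4, in all columns
wks.deleteRows(1, 1, 2, 3); // Delete row 1 in columns 2 through 3 only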
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2603752315044403, "perplexity": 9541.918410290482}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986675316.51/warc/CC-MAIN-20191017122657-20191017150157-00452.warc.gz"}
# texsis man page

TeXsis — TeX macros for Physicists

## Synopsis

`texsis [ filename ]`

## Description

TeXsis is a collection of TeX macros for typesetting physics documents such as papers and preprints, conference proceedings, books, theses, referee reports, letters, and memos. TeXsis macros provide automatic numbering of equations, automatic numbering and formatting of references, double column formatting, and macros for making tables and figures, with or without captions, including tables with horizontal and vertical rules. TeXsis supports a wide variety of type sizes and a number of specialized document formats, and it even includes macros for making form letters for job applications or letters of recommendation.

TeXsis is an extension of "plain" TeX, so anything you know how to do in plain TeX you can do in TeXsis. TeXsis macro instructions are simply abbreviations for often-used combinations of control sequences used to typeset physics documents. For more information about plain TeX see the man pages for tex(1), and/or The TeXbook, by D.E. Knuth.

TeXsis is stored as a pre-loaded format so that it loads quickly (see the man pages for initex(1), and/or "preloaded formats" in The TeXbook). To run TeXsis simply give the command texsis in place of the tex command, i.e.

texsis [ filename ]

where filename.tex is the name of a file containing TeX and/or TeXsis \controlsequences. TeXsis is initially in plain TeX mode, i.e. 10pt type and single spacing, but the control sequence \texsis selects 12pt type, double spacing, and enables other useful features. Alternatively, \paper turns on these features and sets things up to typeset a paper, \thesis does the same for typesetting a thesis, \letter is used to produce a letter using macros similar to those listed in the back of The TeXbook, \memo gives a setup for producing memoranda, and so on.

A manual which describes all of the TeXsis macro instructions is available. It is written in TeXsis, so it serves as its own example of how to write a document with TeXsis. The source code is also heavily commented, so it is possible to extract useful macros from the source code and modify them to suit your own purposes.

Provisions are made for local customization of TeXsis. In particular, the file TXSmods.tex, if it exists, is read from the current directory or from the path TEXINPUTS whenever TeXsis is started. You can therefore put your own custom macros for a given project in a directory and they will automatically be loaded when TeXsis is run from that directory (a small illustration appears at the end of this page).

## Installation

There is an appendix to the printed manual containing detailed installation instructions, but they are also provided in a form which can be processed by plain TeX, in the file Install.tex.

## Diagnostics

TeXsis informational messages are written to the terminal and the log file beginning with `% '. Warning and error messages begin with `> '.

## Files

The source files for TeXsis and the TeXsis manual are usually installed in the same place the rest of TeX is kept. Although this may vary from installation to installation, it will generally include a root directory named texmf. Common examples are /usr/share/texmf/, /usr/lib/teTeX/texmf, or /usr/local/lib/texmf. Filenames here are relative to this texmf root directory.

web2c/texsis.fmt
    The pre-loaded TeXsis format file.
tex/texsis/TXS*.tex
    TeXsis source code.
tex/texsis/*.txs
    "Style" files which can be read in at run time for special document formats.
doc/texsis/TXS*.doc
    Source for the printed TeXsis manual (written in TeXsis).
tex/texsis/TXSsite.tex
    Local site customization instructions (this is read only once, when the format file is created).
tex/texsis/TXSpatch.tex
    Run time patch file (like a system TeXsis.rc file, it is read every time TeXsis is run).
TXSmods.tex
    Run time init file (this is read every time TeXsis is run, from the current directory or from the search path in TEXINPUTS).

## Restrictions

Please note that TeXsis is designed to be completely compatible with plain TeX. As a result it cannot be compatible with LaTeX.

Having the full manual written in TeXsis can cause a problem if you don't have a version of TeXsis already running. To get around this you can run Manual.tex through plain TeX and it will load the TeXsis files before processing the manual. This takes longer, but not by much.

## Bugs

Please report bugs (or suggestions for improvements) to [email protected]. Patches to correct small problems or make small improvements are available in the file TXSpatch.tex. (If that file doesn't exist then there are no current patches.)

## See Also

initex(1), tex(1), virtex(1)

Donald E. Knuth, The TeXbook; Michael Doob, A Gentle Introduction to TeX.

## Authors

Eric Myers <[email protected]>
Department of Physics
University of Michigan
Ann Arbor, Michigan USA

and

Frank E. Paige <[email protected]>
Physics Department
Brookhaven National Laboratory
Upton, New York 11973 USA

## Version

Revision Number: 2.18/beta3
Release Date: 16 May 2000
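## Example

As an illustration of the TXSmods.tex mechanism described above, a local init file might look like the following plain TeX fragment. The macro and message here are hypothetical, invented for illustration; they are not part of TeXsis.

% TXSmods.tex -- read automatically when TeXsis starts in this directory
% (hypothetical example; \projectname is not a TeXsis macro)
\def\projectname{Neutrino Beam Notes}
\message{Local TXSmods.tex loaded for: Neutrino Beam Notes}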
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9670374393463135, "perplexity": 4796.015632242637}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988719033.33/warc/CC-MAIN-20161020183839-00564-ip-10-171-6-4.ec2.internal.warc.gz"}
# Dr. rer. nat. Siegfried Beckus

Contact
Room: 2.09.2.15
Phone: +49 331 977 2748

## Research interests

• spectral theory of Schrödinger operators on graphs with aperiodic ordered potentials
• dynamical systems
• Delone sets
• graph limits and graphings
• aperiodic tilings
• operator algebras
• fields of C*-algebras (especially C*-algebras induced by groupoids)

## PhD thesis

Spectral approximation of aperiodic Schrödinger operators
Friedrich-Schiller Universität Jena, October 2016

## Diploma thesis

Generalized Bloch Theory for Quasicrystals
Friedrich-Schiller Universität Jena, February 2012

## Publications

### Hölder Continuity of the Spectra for Aperiodic Hamiltonians

Siegfried Beckus, Jean Bellissard, Horia Cornean (2019)
Journal: Annales Henri Poincaré
Link to publication, link to preprint

We study the spectral location of strongly pattern equivariant Hamiltonians arising through configurations on a colored lattice. Roughly speaking, two configurations are "close to each other" if, up to a translation, they "almost coincide" on a large fixed ball. The larger this ball is, the more similar they are, and this induces a metric on the space of the corresponding dynamical systems. Our main result states that the map which sends a given configuration into the spectrum of its associated Hamiltonian is Hölder (even Lipschitz) continuous in the usual Hausdorff metric. Specifically, the spectral distance of two Hamiltonians is estimated by the distance of the corresponding dynamical systems.

### Corrigendum to "Spectral continuity for aperiodic quantum systems I. General theory"

Siegfried Beckus, Jean Bellissard, Giuseppe De Nittis (2019)
Journal: Journal of Functional Analysis, Vol. 277, pp. 3351-3353
Link to publication, link to preprint

A correct statement of Theorem 4 in [1] is provided. The change does not affect the main results.

### Spectral continuity for aperiodic quantum systems I. General theory

Siegfried Beckus, Jean Bellissard, Giuseppe De Nittis (2018)
Journal: Journal of Functional Analysis, Vol. 275, pp. 2917-2977
Link to publication, link to preprint

How does the spectrum of a Schrödinger operator vary if the corresponding geometry and dynamics change? Is it possible to define approximations of the spectrum of such operators by defining approximations of the underlying structures? In this work a positive answer is provided using the rather general setting of groupoid C*-algebras. A characterization of the convergence of the spectra by the convergence of the underlying structures is proved. In order to do so, the concept of continuous field of groupoids is slightly extended by adding continuous fields of cocycles. With this at hand, magnetic Schrödinger operators on dynamical systems or Delone systems fall into this unified setting.
Various approximations used in computational physics, like the periodic or the finite cluster approximations, are expressed through the tautological groupoid, which provides a universal model for fields of groupoids. The use of the Hausdorff topology turns out to be fundamental in understanding why and how these approximations work.

### Delone dynamical systems and spectral convergence

Siegfried Beckus, Felix Pogorzelski (2018)
Journal: Ergodic Theory and Dynamical Systems
Link to publication, link to preprint

In the realm of Delone sets in locally compact, second countable Hausdorff groups, we develop a dynamical systems approach in order to study the continuity behavior of measured quantities arising from point sets. A special focus is both on the autocorrelation, as well as on the density of states for random bounded operators. It is shown that for uniquely ergodic limit systems, the latter measures behave continuously with respect to the Chabauty–Fell convergence of hulls. In the special situation of Euclidean spaces, our results complement recent developments in describing spectra as topological limits: we show that the measured quantities under consideration can be approximated via periodic analogs.

### Shnol-type theorem for the Agmon ground state

Siegfried Beckus, Yehuda Pinchover (2018)
Journal: Journal of Spectral Theory
Link to preprint

Let H be a Schrödinger operator defined on a noncompact Riemannian manifold Ω, and let W ∈ L^∞(Ω;ℝ). Suppose that the operator H + W is critical in Ω, and let φ be the corresponding Agmon ground state. We prove that if u is a generalized eigenfunction of H satisfying |u| ≤ φ in Ω, then the corresponding eigenvalue is in the spectrum of H. The conclusion also holds true if for some K ⋐ Ω the operator H admits a positive solution in Ω' = Ω ∖ K, and |u| ≤ ψ in Ω', where ψ is a positive solution of minimal growth in a neighborhood of infinity in Ω. Under natural assumptions, this result holds true also in the context of infinite graphs, and Dirichlet forms.

### Note on spectra of non-selfadjoint operators over dynamical systems

Siegfried Beckus, Daniel Lenz, Marko Lindner, Christian Seifert (2018)
Journal: Proceedings of the Edinburgh Mathematical Society, Vol. 61, pp. 371-386
Link to publication, link to preprint

We consider equivariant continuous families of discrete one-dimensional operators over arbitrary dynamical systems. We introduce the concept of a pseudo-ergodic element of a dynamical system. We then show that all operators associated to pseudo-ergodic elements have the same spectrum and that this spectrum agrees with their essential spectrum. As a consequence we obtain that the spectrum is constant and agrees with the essential spectrum for all elements in the dynamical system if minimality holds.
### On the spectrum of operator families on discrete groups over minimal dynamical systems

Siegfried Beckus, Daniel Lenz, Marko Lindner, Christian Seifert (2017)
Journal: Mathematische Zeitschrift, Vol. 287, pp. 993-1007
Link to publication, link to preprint

It is well known that, given an equivariant and continuous (in a suitable sense) family of selfadjoint operators in a Hilbert space over a minimal dynamical system, the spectrum of all operators from that family coincides. As shown recently, similar results also hold for suitable families of non-selfadjoint operators in ℓ^p(ℤ). Here, we generalize this to a large class of bounded linear operator families on Banach-space valued ℓ^p-spaces over countable discrete groups. We also provide equality of the pseudospectra for operators in such a family. A main tool for our analysis are techniques from limit operator theory.

### Continuity of the Spectrum of a Field of Self-Adjoint Operators

Siegfried Beckus, Jean Bellissard (2016)
Journal: Annales Henri Poincaré, Vol. 17, pp. 3425-3442
Link to publication, link to preprint

Given a family of self-adjoint operators (A_t)_{t∈T} indexed by a parameter t in some topological space T, necessary and sufficient conditions are given for the spectrum σ(A_t) to be Vietoris continuous with respect to t. Equivalently, the boundaries and the gap edges are continuous in t. If (T,d) is a complete metric space with metric d, these conditions are extended to guarantee Hölder continuity of the spectral boundaries and of the spectral gap edges. As a corollary, an upper bound is provided for the size of closing gaps.

### Spectrum of Lebesgue Measure Zero for Jacobi Matrices of Quasicrystals

Siegfried Beckus, Felix Pogorzelski (2013)
Journal: Mathematical Physics, Analysis and Geometry, Vol. 16, pp. 289-308
Link to publication, link to preprint

We study one-dimensional random Jacobi operators corresponding to strictly ergodic dynamical systems. We characterize the spectrum of these operators via non-uniformity of the transfer matrices and vanishing of the Lyapunov exponent. For aperiodic, minimal subshifts satisfying the so-called Boshernitzan condition this gives that the spectrum is supported on a Cantor set with Lebesgue measure zero. This generalizes earlier results for Schrödinger operators.

## Positions

University of Potsdam, since 10/2018: Post-Doc with Prof. Dr. Matthias Keller
Israel Institute of Technology (Technion), Haifa, 10/2016-09/2018: Postdoctoral Fellowship with Prof. Dr. Yehuda Pinchover and Prof. Dr.
Ram Band
Georgia Institute of Technology, Atlanta, USA, 02-04/2014: Visiting Student/Research Collaborator
Friedrich Schiller University Jena, 03/2012-09/2016: PhD Student

## Education

PhD 10/2016, Friedrich Schiller University Jena
Diploma 02/2012, Friedrich Schiller University Jena

## Grants and Projects

DFG Project "Periodic approximations of Schrödinger operators associated with quasicrystals", since 12/2018 (PostDoc position for 2 years and travel money)
Scholarship of the DAAD Program to participate in congresses, International Workshop on Operator Theory and its Applications, Lisbon, Portugal
Scholarship for "Research in Pairs" at the Mathematisches Forschungsinstitut Oberwolfach, Germany, 01/2018
Postdoctoral Fellowship, Israel Institute of Technology (Technion), Haifa, Israel, 10/2016 - 09/2018
Scholarship of the DAAD Program to participate in congresses, XVIII International Congress on Mathematical Physics, Santiago de Chile, Chile
Funding for the PhD Seminar "Förderung von interdisziplinären Arbeitsgruppen und Nachwuchsnetzwerken" (support for interdisciplinary working groups and junior researcher networks), funded by the Graduierten-Akademie in Jena

## Selected Talks at international conferences

• 07/2019 International Workshop on Operator Theory and its Applications (IWOTA 2019), Instituto Superior Técnico, Lisbon (Portugal): Hunting the spectra via the underlying dynamics
• 05/2019 8th Miniworkshop on Operator Theoretic Aspects of Ergodic Theory, Leipzig (Germany): Hunting the spectra via the underlying dynamics
• 01/2018 Hardy-type inequalities and elliptic PDEs, Midreshet Sde Boker (Israel), Poster: Spectral Approximation of Schrödinger Operators
• 10/2017 Workshop "Spectral Structures and Topological Methods in Mathematical Quasicrystals", MFO Oberwolfach (Germany): Spectral stability of Schrödinger operators in the Hausdorff metric
• 07/2017 Analysis and Geometry on Graphs and Manifolds, Universität Potsdam (Germany): Shnol type Theorem for the Agmon ground state
• 05/2017 Israel Mathematical Union 2017, Acre (Israel): The space of Delone dynamical systems and related objects
• 01/2017 Workshop on Mathematical Physics, Weizmann Institute of Science, Rehovot (Israel), Poster: Spectral Approximation of Schrödinger Operators
• 06/2016 Thematic School "Transversal Aspects of Tilings", Oleron (France): Continuity of the spectra associated with Schrödinger operators
• 09/2015 CMO-BIRS, Workshop on "Spectral properties of quasicrystals via analysis, dynamics and geometric measure theory", Oaxaca (Mexico): Spectral approximation of Schrödinger operators: continuity of the spectrum
• 07/2015 Young researcher symposium, Pontificia Universidad Catolica de Chile, Santiago de Chile (Chile): Spectral study of Schrödinger operators with aperiodic ordered potential in one-dimensional systems
• 06/2015 Workshop on "Time-frequency analysis and aperiodic order", Norwegian University of Science and Technology, Trondheim (Norway): An approximation theorem for the spectrum of Schrödinger operators related to quasicrystals

## Selected Talks in seminars and colloquia

• 05/2019 Justus-Liebig-Universität Gießen (Germany): Hunting the spectra via the underlying dynamics
• 07/2018 Technische Universität München (Germany): When do the spectra of self-adjoint operators converge?
• 10/2017 Pontificia Universidad Catolica de Chile, Santiago (Chile): Shnol type Theorem for the Agmon ground state
• 10/2017 Hebrew University of Jerusalem (Israel): When do the spectra of self-adjoint operators converge?
• 08/2017 University of Oslo (Norway): Spectral approximation via an approach from C*-algebras
• 07/2017 Friedrich-Alexander Universität Erlangen-Nürnberg (Germany): The space of Delone dynamical systems and its application
• 07/2017 RWTH Aachen (Germany): Shnol type Theorem for the Agmon ground state
• 09/2016 Aalborg University (Denmark): Continuous variation of the spectra: A characterization and a tool
• 07/2016 Universität Bielefeld (Germany): Hölder-continuous behavior of the spectra associated with self-adjoint operators
• 05/2015 Technische Universität Chemnitz (Germany): Schrödinger operators on quasicrystals
• 04/2015 Israel Institute of Technology (Technion), Haifa (Israel): The role of Gähler-Anderson-Putnam graphs in the view of Schrödinger operators
• 04/2014 University of Alabama at Birmingham (USA): Gähler-Anderson-Putnam graphs of 1-dimensional Delone sets of finite local complexity
• 01/2014 Technische Universität Hamburg-Harburg (Germany): Wannier transformation for Schrödinger operators with aperiodic potential

## (Co-)Supervision

Franziska Sieron (Master thesis), The density of periodic configurations in strongly irreducible subshifts of finite type (joint with Prof. Dr. Daniel Lenz), 2016
Daniel Sell (Master thesis), Topological groupoids and Matui's spatial realization theorem (joint with Prof. Dr. Daniel Lenz), 2015
Franziska Sieron (Bachelor thesis), The balanced property of primitive substitutions (joint with Prof. Dr. Daniel Lenz), 2014

## (Co-)Organized Scientific meetings

Euler-Lecture, Universität Potsdam, 05/2019
PhD seminar, Friedrich-Schiller-Universität Jena, 03/2013 - 09/2016
Colloquium: "Job opportunities for mathematicians", Friedrich-Schiller-Universität Jena, 2013 - 2015
PhD symposium at the TU Chemnitz, within the Fall school Dirichlet forms, operator theory and mathematical physics, 02/2013
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8024067282676697, "perplexity": 3332.2777075973504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347425148.64/warc/CC-MAIN-20200602130925-20200602160925-00077.warc.gz"}
## Data Analysis for Genomics edX Course

Mike Love (@mikelove) and I have been working hard the past couple of months preparing a free online edX course on data analysis for genomics. Our target audience is the postdocs, graduate students and research scientists that are tasked with analyzing genomics data but don't have any formal training. The eight-week course will start with the very basics, but will ramp up rather quickly and end with real-life workflows for genome variation, RNA-seq, DNA methylation, and ChIP-seq.

Throughout the course students will learn skills and concepts that provide a foundation for analyzing genomics data. Specifically, we will cover exploratory data analysis, basic statistical inference, linear regression, modeling with parametric distributions, empirical Bayes, multiple comparison corrections and smoothing techniques.

In the class we will make heavy use of computer labs. Almost every lecture is accompanied by an R markdown document that students can use to recreate the plots shown in the lectures. The html documents resulting from these R markdown files will serve as a textbook for the class. Questions will be discussed on online forums led by Stephanie Hicks (@stephaniehicks) and Jim MacDonald.

## A non-comprehensive comparison of prominent data science programs on cost and frequency

We did a really brief comparison of a few notable data science programs for a grant submission we were working on. I thought it was pretty fascinating, so I'm posting it here. A couple of notes about the table:

1. Our program can be taken for free, which includes assessments. If you want the official certificate and to take the capstone you pay the above costs.
2. Udacity's program can also be taken for free, but if you want the official certificate, assessments, or tutoring you pay the above costs.
3. The asterisks denote programs where you get an official master's degree.
4. The MOOC programs (Udacity's and ours) offer the most flexibility in terms of student schedules. Ours is the most flexible, with courses running every month. The in-person programs have the least flexibility but obviously the most direct instructor time.
5. The programs are all quite different in terms of focus, design, student requirements, admissions, instruction, cost and value.
6. As far as we know, ours is the only one where every bit of lecture content has been open sourced (https://github.com/DataScienceSpecialization).

## The fact that data analysts base their conclusions on data does not mean they ignore experts

Paul Krugman recently joined the new FiveThirtyEight-hating bandwagon. I am not crazy about the new website either (although I'll wait more than one week before judging), but in a recent post Krugman creates a false dichotomy that is important to correct. Krugman states that "[w]hat [Nate Silver] seems to have concluded is that there are no experts anywhere, that a smart data analyst can and should ignore all that." I don't think that is what Nate Silver, or any other smart data scientist or applied statistician, has concluded. Note that to build his election prediction model, Nate had to understand how the electoral college works, how polls work, how different polls are different, the relationship between primaries and presidential elections, among many other details specific to polls and US presidential elections.
He learned all of this by reading and talking to experts. The same is true for PECOTA, where data analysts who know quite a bit about baseball collect data to create meaningful and predictive summary statistics. As Jeff said before, the key word in "Data Science" is not Data, it is Science.

The one example Krugman points to as ignoring experts appears to be written by someone who, according to the article that Krugman links to, was biased by his own opinions, not by data analysis that ignored experts. However, in Nate's analysis of polls and baseball data it is hard to argue that he let his bias affect his analysis. Furthermore, it is important to point out that he did not simply stick data into a black box prediction algorithm. Instead he did what most of us applied statisticians do: we build empirically inspired models, guided by expert knowledge.

ps - Krugman links to a Timothy Egan piece which has another false dichotomy as its title: "Creativity vs. Quants". He should try doing data analysis before assuming there is no creativity involved in extracting information from data.

## The 80/20 rule of statistical methods development

Developing statistical methods is hard and often frustrating work. One of the underappreciated rules in statistical methods development is what I call the 80/20 rule (it may even be the 90/10 rule). The basic idea is that the first reasonable thing you can do to a set of data often is 80% of the way to the optimal solution. Everything after that is working on getting the last 20%. (Edit: Rafa points out that once again I've reverse-scooped a bunch of people and this is already a thing that has been pointed out many times. See for example the Pareto principle and this post also called the 80:20 rule.)

Sometimes that extra 20% is really important and sometimes it isn't. In a clinical trial, where each additional patient may cost a large amount of money to recruit and enroll, it is definitely worth the effort. For more exploratory techniques like those often used when analyzing high-dimensional data it may not be. This is particularly true because the extra 20% usually comes at the cost of additional assumptions about the way the world works. If your assumptions are right, you get the 20%; if they are wrong, you may lose, and it isn't always clear how much.

Here is a very simple example of the 80/20 rule from frequentist statistics - in my experience similar ideas hold in machine learning and Bayesian inference as well. Suppose that I collect some observations $X_1,\ldots, X_n$ and want to test whether the mean of the observations is greater than 0. Suppose I know that the data are normal and that the variance is equal to 1. Then the absolute best statistical test (called the uniformly most powerful test) you could do rejects the hypothesis that the mean is zero if $\bar{X} > z_{\alpha}\left(\frac{1}{\sqrt{n}}\right)$.

There are a bunch of other tests you could do, though. If you assume the distribution is symmetric, you could also use the sign test to test the same hypothesis by creating the random variables $Y_i = 1(X_i > 0)$ and testing the hypothesis $H_0: Pr(Y_i = 1) = 0.5$ versus the alternative that the probability is greater than 0.5. Or you could use the one-sided t-test. Or you could use the Wilcoxon test. These are suboptimal if you know the data are Normal with variance one. I tried each of these tests with a sample of size $n=20$ at the $\alpha=0.05$ level.
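For readers who want to try this kind of power simulation without chasing the linked code, here is a minimal R sketch for two of the tests. The true mean of 0.5 and the simulation count are illustrative choices, not necessarily the settings behind the plot below.

set.seed(1)
n <- 20; alpha <- 0.05; mu <- 0.5; nsim <- 10000
reject <- matrix(FALSE, nsim, 2, dimnames = list(NULL, c("z-test", "sign test")))
for (i in 1:nsim) {
  x <- rnorm(n, mean = mu, sd = 1)
  # optimal z-test: reject when the sample mean exceeds z_alpha / sqrt(n)
  reject[i, 1] <- mean(x) > qnorm(1 - alpha) / sqrt(n)
  # sign test: exact binomial test on the number of positive observations
  reject[i, 2] <- binom.test(sum(x > 0), n, p = 0.5, alternative = "greater")$p.value < alpha
}
colMeans(reject)  # estimated power of each test at this true mean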
In the plot below I show the ratio of power between each non-optimal test and the optimal z-test (you could do this theoretically, but I'm lazy, so I did it with simulation; code here, colors by RSkittleBrewer). The tests get to 80% of the power of the z-test at different sizes of the true mean (0.6 for Wilcoxon, 0.5 for the t-test, and 0.85 for the sign test). Overall, these methods very quickly catch up to the optimal method.

In this case, the non-optimal methods aren't much easier to implement than the optimal solution. But in many cases, the optimal method requires significantly more computation, memory, assumptions, theory, or some combination of the four. The hard decision in creating a new method is whether the extra 20% is worth it. This is obviously application specific.

An important corollary of the 80/20 rule is that you can have a huge impact on new technologies if you are the first to suggest an already known 80% solution. For example, the first person to suggest hierarchical clustering or the singular value decomposition for a new high-dimensional data type will often get a large number of citations. But that is a hard way to make a living - you aren't the only person who knows about these methods, and the person who says it first soaks up a huge fraction of the credit. So the only way to take advantage of this corollary is to spend your time constantly trying to figure out what the next big technology will be. And you know what they say about prediction being hard, especially about the future.

## The time traveler's challenge.

Editor's note: This has nothing to do with statistics.

I do a lot of statistics for a living and would claim to know a relatively large amount about it. I also know a little bit about a bunch of other scientific disciplines, a tiny bit of engineering, a lot about pointless sports trivia, some current events, the geography of the world (vaguely) and the geography of places I've lived (pretty well). I have often wondered, if I was transported back in time to a point before the discovery of, say, how to make a fire, how much of human knowledge I could recreate. In other words, what would be the marginal effect on the world of a single person (me) being transported back in time? I could propose Newton's Laws, write down a bunch of the basis of calculus, and discover the central limit theorem. I probably couldn't build an internal combustion engine - I know the concept but don't know enough of the details.

So the challenge is this. If you were transported back 4,000 or 5,000 years, how much could you accelerate human knowledge?

When I told Leah J. about this idea she came up with an even more fascinating variant. Suppose that I told you that in 5 days you were going to be transported back 4,000 or 5,000 years, but you couldn't take anything with you. What would you read about on Wikipedia?

## ENAR is in Baltimore - Here's What To Do

This year's meeting of the Eastern North American Region of the International Biometric Society (ENAR) is in lovely Baltimore, Maryland. As local residents Jeff and I thought we'd put down a few suggestions for what to do during your stay here in case you're not familiar with the area.

**Venue**

The conference is being held at the Marriott in the Harbor East area of the city, which is relatively new and a great location.
There are a number of good restaurants right in the vicinity, including Wit & Wisdom in the Four Seasons hotel across the street and Pabu, an excellent Japanese restaurant that I personally believe is the best restaurant in Baltimore (a very close second is Woodberry Kitchen, which is a bit farther away near Hampden). If you go to Pabu, just don't get sushi; try something new for a change. Around Harbor East you'll also find Cinghiale (excellent northern Italian restaurant), Charleston (expensive southern food), Lebanese Taverna, and Ouzo Bay. If you're sick of restaurants, there's also a Whole Foods. If you want a great breakfast, you can walk just a few blocks down Aliceanna Street to the Blue Moon Cafe. Get the eggs Benedict. If you get the Cap'n Crunch French toast, you will need a nap afterwards.

Just east of Harbor East is an area called Fell's Point. This is commonly known as the "bar district" and it lives up to its reputation. Max's in Fell's Point (on the square) has an obscene number of beers on tap. The Heavy Seas Alehouse on Central Avenue has some excellent beers from the local Heavy Seas brewery and also has great food from chef Matt Seeber. Finally, the Daily Grind coffee shop is a local institution.

**Around the Inner Harbor**

Outside of the immediate Harbor East area, there are a number of things to do. For kids, there's Port Discovery, which my 3-year-old son seems to really enjoy. There's also the National Aquarium, where the Tuesday networking event will be held. This is also a great place for kids if you're bringing family. There's a neat little park on Pier 6 that is small, but has a number of kid-related things to do. It's a nice place to hang out when the weather is nice. Around the other side of the harbor is the Maryland Science Center, another kid-fun place, and just west of the Harbor down Pratt Street is the B&O Railroad Museum, which I think is good for both kids and adults (I like trains). Unfortunately, at this time there's no football or baseball to watch.

**Around Baltimore**

There are a lot of really interesting things to check out around Baltimore if you have the time. If you need to get around downtown and the surrounding areas there's the Charm City Circulator, which is a free bus that runs every 15 minutes or so. The Mt. Vernon district has a number of cultural things to do. For classical music fans there's the wonderful Baltimore Symphony Orchestra directed by Marin Alsop. The Peabody Institute often has some interesting concerts going on given by the students there. There's the Walters Art Museum, which is free and has a very interesting collection. There are also a number of good restaurants and coffee shops in Mt. Vernon, like Dooby's (excellent dinner) and Red Emma's (lots of Noam Chomsky).

That's all I can think of right now. If you have other questions about Baltimore while you're here for ENAR, tweet us up at @simplystats.

## How to use Bioconductor to find empirical evidence in support of π being a normal number

Happy π day everybody! I wanted to write some simple code (included below) to test the parallelization capabilities of my new cluster. So, in honor of π day, I decided to check for evidence that π is a normal number. A normal number is a real number whose infinite sequence of digits has the property that any given m-digit pattern occurs with limiting frequency $10^{-m}$.
For example, using the Poisson approximation, we can predict that the pattern "123456789" should show up between 0 and 3 times in the first billion digits of π (it actually shows up twice, starting at the 523,551,502-th and 773,349,079-th decimal places).

To test our hypothesis, let Y1, ..., Y100 be the number of occurrences of "00", "01", ..., "99" in the first billion digits of π. If π is in fact normal, then the Ys should be approximately IID binomials with N = 1 billion and p = 0.01. In the qq-plot below I show the Z-scores (Y - 10,000,000) / √9,900,000, which appear to follow a normal distribution as predicted by our hypothesis. Further evidence for π being normal is provided by repeating this experiment for 3, 4, 5, 6, and 7 digit patterns (for 5, 6 and 7 I sampled 10,000 patterns). Note that we can perform a chi-square test for the uniform distribution as well. For patterns of size 1, 2, 3 and 4 the p-values were 0.84, 0.89, 0.92, and 0.99.

Another test we can perform is to divide the 1 billion digits into 100,000 non-overlapping segments of length 10,000. The vector of counts for any given pattern should also be binomial. Below I also include these qq-plots. These observed counts should also be independent, and to explore this we can look at autocorrelation plots.

To do this in about an hour and with just a few lines of code (included below), I used the Bioconductor Biostrings package to match strings and the foreach function to parallelize.

library(Biostrings)
library(doParallel)
registerDoParallel(cores = 48)
x <- scan("pi-billion.txt", what = "c")
x <- substr(x, 3, nchar(x)) ## remove "3."
x <- BString(x)
n <- length(x)
par(mfrow = c(2, 3))
for (d in 2:4) {
  if (d < 5) {
    patterns <- sprintf(paste0("%0", d, "d"), seq(0, 10^d - 1))
  } else {
    patterns <- sprintf(paste0("%0", d, "d"), sample(10^d, 10^4) - 1)
  }
  p <- 1/(10^d)
  res <- foreach(pat = patterns, .combine = c) %dopar% countPattern(pat, x)
  z <- (res - n*p) / sqrt(n*p*(1 - p))
  qqnorm(z, xlab = "Theoretical quantiles", ylab = "Observed z-scores", main = paste(d, "digits"))
  abline(0, 1)
  if (d < 5) print(1 - pchisq(sum((res - n*p)^2/(n*p)), length(res) - 1))
}

### Now count in segments
d <- 1
m <- 10^5
patterns <- sprintf(paste0("%0", d, "d"), seq(0, 10^d - 1))
res <- foreach(pat = patterns, .combine = cbind) %dopar% {
  tmp <- start(matchPattern(pat, x))
  tmp2 <- floor((tmp - 1)/m)
  return(tabulate(tmp2 + 1, nbins = n/m))
}
## qq-plots
par(mfrow = c(2, 5))
p <- 1/(10^d)
for (i in 1:ncol(res)) {
  z <- (res[, i] - m*p) / sqrt(m*p*(1 - p))
  qqnorm(z, xlab = "Theoretical quantiles", ylab = "Observed z-scores", main = paste(i - 1))
  abline(0, 1)
}
## ACF plots
par(mfrow = c(2, 5))
for (i in 1:ncol(res)) acf(res[, i])

NB: A normal number has the above stated property in any base. The examples above are for base 10.

## Oh no, the Leekasso....

An astute reader (Niels Hansen, who is visiting our department today) caught a bug in my code on GitHub for the Leekasso. I had:

lm1 = lm(y ~ leekX)
predict.lm(lm1, as.data.frame(leekX2))

Unfortunately, this meant that I was getting predictions for the training set on the test set. Since I set up the test/training sets the same way, this meant that I was actually getting training set error rates for the Leekasso. Niels Hansen noticed the bug and reran the fixed code with this term instead:

lm1 = lm(y ~ ., data = as.data.frame(leekX))
predict.lm(lm1, as.data.frame(leekX2))

He created a heatmap subtracting the average accuracy of the Leekasso/Lasso and showed they are essentially equivalent. This is a bummer; the Leekasso isn't a world-crushing algorithm.
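To see why the buggy call silently returns training-set predictions, here is a small self-contained R illustration with toy data (the variable names and sizes are invented for illustration, not the Leekasso simulation itself):

set.seed(1)
X  <- matrix(rnorm(20), 10, 2)  # training predictors
X2 <- matrix(rnorm(20), 10, 2)  # test predictors
y  <- rnorm(10)

fit <- lm(y ~ X)
# The formula references the matrix X itself, so newdata is silently ignored
# and we get the training-set fitted values back:
p_bad <- predict(fit, as.data.frame(X2))
all.equal(unname(p_bad), unname(fitted(fit)))  # TRUE

# Fit from a data frame instead, so that newdata is honored:
fit2   <- lm(y ~ ., data = as.data.frame(X))
p_good <- predict(fit2, newdata = as.data.frame(X2))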
On the other hand, I'm happy that just choosing the top 10 is still competitive with the optimized lasso on average. More importantly, although I hate being wrong, I appreciate people taking the time to look through my code. Just out of curiosity I'm taking a survey. Do you think I should publish this top-10 predictor thing as a paper? Or do you think it is too trivial?

## Per capita GDP versus years since women received right to vote

Below is a plot of per capita GDP (on a log scale) against years since women received the right to vote, for 42 countries. Is this cause, effect, both or neither? We all know correlation does not imply causation, but I see many (non-statistical) arguments to support both cause and effect here. Happy International Women's Day! The data are from here and here. I removed countries where women have had the right to vote for less than 20 years.

ps - What's with Switzerland?

update - R^2 and p-value added to graph

## PLoS One, I have an idea for what to do with all your profits: buy hard drives

I've been closely following the fallout from PLoS One's new policy for data sharing. The policy says, basically, that if you publish a paper, all data and code to go with that paper should be made publicly available at the time of publishing, and you must include an explicit data sharing policy in the paper you submit.

I think the reproducibility debate is over. Data should be made available when papers are published. The Potti scandal and the Reinhart/Rogoff scandal have demonstrated the extreme consequences of lack of reproducibility, and the reproducibility advocates have taken this one home. The question with reproducibility isn't "if" anymore, it is "how".

The transition toward reproducibility is likely to be rough for two reasons. One is that many people who generate data lack training in handling and analyzing data, even in a data-saturated field like genomics. The story is even more grim in areas that haven't been traditionally considered "data rich" fields. The second problem is a cultural and economic problem. It involves the fundamental disconnect between (1) the incentives of our system for advancement, grant funding, and promotion and (2) the policies that will benefit science and improve reproducibility. Most of the debate on social media seems to conflate these two issues. I think it is worth breaking the debate down into three main constituencies: journals, data creators, and data analysts.

**Journals with requirements for data sharing**

Data sharing, especially for large data sets, isn't easy and it isn't cheap. Not knowing how to share data is not an excuse - to be a modern scientist this is one of the skills you have to have. But if you are a journal that makes huge profits and you want open sharing, you should put up or shut up. The best way to do that would be to pay for storage on something like AWS for all data sets submitted to comply with your new policy. In the era of cheap hosting and standardized templates, charging $1,000 or more for an open access paper is way too much. It costs essentially nothing to host that paper online and you are getting peer review for free. So you should spend some of your profits paying for the data sharing that will benefit your journal and the scientific community.

**Data creators**

It is really hard to create a serious, research-quality data set in almost any scientific discipline.
If you are studying humans, it requires careful adherence to rules and procedures for handling human data. If you are in ecology, it may involve extensive field work. If you are in behavioral research, it may involve careful review of thousands of hours of video tape. The value of one careful, rigorous, and interesting data set is hard to overstate. In my field, the data Leonid Kruglyak's group generated measuring gene expression and genetics in a careful yeast experiment spawned an entirely new discipline within both genomics and statistics.

The problem is that generating one really good data set can take months or even years. It is definitely possible to publish more than one paper on a really good data set. But after the data are generated, most of these papers will have to do with data analysis, not data generation. If there are ten papers that could be published on your data set and your group publishes the data with the first one, you may get to the second or third, but someone else might publish 4-10. This may be good for science, but it isn't good for the careers of data generators. Ask anyone in academics whether they'd rather have 6 citations from awesome papers or 6 awesome papers, and 100% of them will take the papers.

I'm completely sympathetic to data generators who spend a huge amount of time creating a data set and are worried they may be scooped on later papers. This is a place where the culture of credit hasn't caught up with the culture of science. If you write a grant and generate an amazing data set that 50 different people use, you should absolutely get major credit for that in your next grant. However, you probably shouldn't get authorship unless you intellectually contributed to the next phase of the analysis. The problem is we don't have an intermediate form of credit for data generators that is weighted more heavily than a citation. In the short term, this lack of a proper system of credit will likely lead data generators to make the following (completely sensible) decision: hold their data close and then publish multiple papers at once, like ENCODE did. This will drive everyone crazy and slow down science, but it is the appropriate career choice for data generators until our system of credit has caught up.

**Data analysts**

I think that data analysts who are pushing for reproducibility are genuine in their desire for reproducibility. I also think that the debate is over. I think we can contribute to the success of the reproducibility transition by figuring out ways to give stronger and more appropriate credit to data generators. I don't think authorship is the right approach. But I do think that it is the right approach to loudly and vocally give credit to people who generated the data you used in your purely data analytic paper. That includes making sure the people that are responsible for their promotion and grants know just how incredibly critical it is that they keep generating data so you can keep doing your analysis.

Finally, I think that we should be more sympathetic to the career concerns of folks who generate data. I have written methods and made the code available. I have then seen people write very similar papers using my methods and code, then getting credit/citations for producing a very similar method to my own. Being reverse-scooped like this is incredibly frustrating. If you've ever had that experience, imagine what it would feel like to spend a whole year creating a data set and then only getting one publication.
I also think that the primary use of reproducibility so far has been as a weapon. It has been used (correctly) to point out critical flaws in research. It has also been used as a way to embarrass authors who don't (and even some who do) have training in data analysis. The transition to fully reproducible science can either be a painful fight or a smoother transition. One thing that would go a long way would be to think of code review/reproducibility not like peer review, but more like pull requests and issues on GitHub. The goal isn't to show how the other person did it wrong, the goal is to help them do it right.
{"extraction_info": {"found_math": true, "script_math_tex": 6, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 6, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28558510541915894, "perplexity": 1236.3648477908425}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-23/segments/1405997888236.74/warc/CC-MAIN-20140722025808-00201-ip-10-33-131-23.ec2.internal.warc.gz"}
http://math.stackexchange.com/questions/117510/security-analysis-of-a-matrix-multiplication-protocol/117700
# Security analysis of a matrix multiplication protocol

Suppose Alice would like to obtain the product of two $m\times m$ matrices, i.e. $A$ and $B.$ Alice has $A,$ whereas Bob has $B.$ Since Alice does not want to reveal $A$ to Bob, she chooses an $m\times m$ random invertible matrix $R.$ She sends $RA$ to Bob over a secure channel. Bob obtains $RA,$ calculates $RAB,$ and sends it to Alice over a secure channel. Alice obtains $AB$ by inverting $R$, i.e. computing $R^{-1}(RAB)$. $R$ is only used once.

Any ideas on how to proceed with the security analysis of the above protocol? Specifically, is $H(A \mid RA) = H(A)$?

oops sorry about that. – user996522 Mar 7 '12 at 13:04
Relevant: Check out this paper cs.bgu.ac.il/~kobbi/papers/oge_tcc_camera2.pdf and the forward/backward citations. There are existing works on secure multiparty computation and secure function evaluation. – user2468 Mar 7 '12 at 13:31
What is this supposed to achieve compared to Bob simply sending $B$ to Alice over the secure channel? – Henning Makholm Mar 8 '12 at 0:13
It's a primitive that I need, as I have an idea for securely solving a linear equation which depends on the security of this. – user996522 Mar 8 '12 at 0:26
Crossposted to crypto.SE as crypto.stackexchange.com/questions/2023/… – Ilmari Karonen Mar 8 '12 at 9:05

If $A$ is invertible (over a fixed finite field), then this protocol is information-theoretically secure. To see this, first note that, for any $A$, the ciphertext $RA$ is uniformly distributed. Furthermore, the value of $RA$ is independent of $A$. Therefore, for any prior $P$ over messages, we have $\Pr[A|RA] = \Pr[A \wedge RA] / \Pr[RA] = \Pr[A]\Pr[RA] / \Pr[RA] = \Pr[A]$.

@user996522: The key is that $R$ is chosen uniformly at random. By analogy, consider picking a point in $[0,1)$. Let $A$ be an arbitrary point in the interval, and let $R$ be a uniform random value in $[0,1)$. Then $A+R$ is uniformly random over the interval $[A, A+1)$, and so the fractional part is uniform over $[0,1)$. – Jeremy Hurwitz Jul 3 '12 at 4:53
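To make the mechanics of the protocol concrete, here is a minimal sketch in Python/NumPy. It uses real-valued matrices and a Gaussian random mask for readability, so it only demonstrates the correctness of the unmasking step; the information-theoretic argument above needs $R$ drawn uniformly from the invertible matrices over a finite field. All identifiers are invented for this illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4

A = rng.standard_normal((m, m))  # Alice's private matrix
B = rng.standard_normal((m, m))  # Bob's private matrix

# Alice: draw a random mask R, re-drawing in the (measure-zero) singular
# case, and send RA to Bob.
R = rng.standard_normal((m, m))
while abs(np.linalg.det(R)) < 1e-9:
    R = rng.standard_normal((m, m))
RA = R @ A

# Bob: compute (RA)B and send it back.
RAB = RA @ B

# Alice: unmask with R^{-1} to recover AB.
AB = np.linalg.solve(R, RAB)  # same as inv(R) @ RAB, but numerically stabler

assert np.allclose(AB, A @ B)
```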
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9354709982872009, "perplexity": 301.05631334390836}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701160918.28/warc/CC-MAIN-20160205193920-00090-ip-10-236-182-209.ec2.internal.warc.gz"}
https://planetmath.org/juxtapositionofautomata
# juxtaposition of automata

Let $A=(S_{1},\Sigma_{1},\delta_{1},I_{1},F_{1})$ and $B=(S_{2},\Sigma_{2},\delta_{2},I_{2},F_{2})$ be two automata. We define the juxtaposition of $A$ and $B$, written $AB$, as the sextuple $(S,\Sigma,\delta,I,F,\epsilon)$, as follows:

1. $S := S_{1} \mathbin{\dot{\cup}} S_{2}$, where $\dot{\cup}$ denotes disjoint union,
2. $\Sigma := (\Sigma_{1}\cup\Sigma_{2}) \mathbin{\dot{\cup}} \{\epsilon\}$,
3. $\delta:S\times\Sigma\to P(S)$ given by
   • $\delta(s,\epsilon):=I_{2}$ if $s\in F_{1}$, and $\delta(s,\epsilon):=\{s\}$ otherwise,
   • $\delta|(S_{1}\times\Sigma_{1}):=\delta_{1}$,
   • $\delta|(S_{2}\times\Sigma_{2}):=\delta_{2}$, and
   • $\delta(s,\alpha):=\varnothing$ otherwise (where $\alpha\neq\epsilon$).
4. $I:=I_{1}$,
5. $F:=F_{2}$.

Because $S_{1}$ and $S_{2}$ are considered as disjoint subsets of $S$, $I\cap F=\varnothing$. Also, from the definition above, we see that $AB$ is an automaton with $\epsilon$-transitions (http://planetmath.org/AutomatonWithEpsilonTransitions).

The way $AB$ works is as follows: a word $c=ab$, where $a\in\Sigma_{1}^{*}$ and $b\in\Sigma_{2}^{*}$, is fed into $AB$. $AB$ first reads $a$ as if it were read by $A$, via the transition function $\delta_{1}$. If $a$ is accepted by $A$, then one of its accepting states will be used as the initial state for $B$ when it reads $b$. The word $c$ is accepted by $AB$ when $b$ is accepted by $B$.

Visually, the state diagram $G_{AB}$ of $AB$ combines the state diagram $G_{A}$ of $A$ with the state diagram $G_{B}$ of $B$ by adding an edge from each final node of $A$ to each of the start nodes of $B$ with label $\epsilon$ (the $\epsilon$-transition).

###### Proposition 1.

$L(AB)=L(A)L(B)$

###### Proof.

Suppose $c=ab$ is a word such that $a\in\Sigma_{1}^{*}$ and $b\in\Sigma_{2}^{*}$. If $c\in L(AB)$, then $\delta(q,a\epsilon b)\cap F\neq\varnothing$ for some $q\in I=I_{1}$. Since $\delta(q,a\epsilon b)\cap F_{2}=\delta(q,a\epsilon b)\cap F\neq\varnothing$ and $b\in\Sigma_{2}^{*}$, we have, by the definition of $\delta$, that $\delta(q,a\epsilon b)=\delta(\delta(q,a\epsilon),b)=\delta_{2}(\delta(q,a\epsilon),b)$, which shows that $b\in L(B)$ and $\delta(q,a\epsilon)\cap I_{2}\neq\varnothing$. But $\delta(q,a\epsilon)=\delta(\delta(q,a),\epsilon)$; by the definition of $\delta$ again, we also have $\delta(q,a)\cap F_{1}\neq\varnothing$, which implies that $\delta(q,a)=\delta_{1}(q,a)$. As a result, $a\in L(A)$.

Conversely, if $a\in L(A)$ and $b\in L(B)$, then for any $q\in I=I_{1}$, $\delta(q,a)=\delta_{1}(q,a)$, which has non-empty intersection with $F_{1}$. This means that $\delta(q,a\epsilon)=\delta(\delta(q,a),\epsilon)=I_{2}$, and finally $\delta(q,a\epsilon b)=\delta(\delta(q,a\epsilon),b)=\delta(I_{2},b)$, which has non-empty intersection with $F_{2}=F$ by assumption. This shows that $a\epsilon b\in L((AB)_{\epsilon})$, or $ab\in L(AB)$. ∎
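A minimal sketch of the construction in Python, assuming automata are encoded as dicts with disjoint state sets (all identifiers here are our own, invented for this illustration):

```python
EPS = "eps"  # the fresh epsilon symbol added to the alphabet

def juxtapose(A1, A2):
    """Build the sextuple (S, Sigma, delta, I, F, eps) from two automata.

    Each automaton is {"S": set, "Sigma": set, "delta": {(state, symbol): set},
    "I": set, "F": set}; states of A1 and A2 are assumed disjoint (tag them
    first if necessary, realizing the disjoint union)."""
    S = A1["S"] | A2["S"]
    Sigma = A1["Sigma"] | A2["Sigma"] | {EPS}
    delta = {}
    delta.update(A1["delta"])
    delta.update(A2["delta"])
    for s in S:
        # eps jumps from final states of A1 to the start states of A2;
        # everywhere else eps is a self-loop, exactly as in the definition.
        delta[(s, EPS)] = set(A2["I"]) if s in A1["F"] else {s}
    return {"S": S, "Sigma": Sigma, "delta": delta, "I": A1["I"], "F": A2["F"]}

# Example: A1 accepts "a", A2 accepts "b"; AB should accept a, eps, b.
A1 = {"S": {"p0", "p1"}, "Sigma": {"a"},
      "delta": {("p0", "a"): {"p1"}}, "I": {"p0"}, "F": {"p1"}}
A2 = {"S": {"q0", "q1"}, "Sigma": {"b"},
      "delta": {("q0", "b"): {"q1"}}, "I": {"q0"}, "F": {"q1"}}
AB = juxtapose(A1, A2)
assert AB["delta"][("p1", EPS)] == {"q0"}  # the new eps-transition
```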
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 80, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9901582598686218, "perplexity": 209.98621327038796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703524858.74/warc/CC-MAIN-20210121132407-20210121162407-00737.warc.gz"}
http://mathoverflow.net/questions/70397/turing-machines-that-read-the-entire-program-tape?sort=oldest
# Turing machines that read the entire program tape

Consider a two-tape universal Turing machine with a one-way-infinite, read-only program tape with a head that can only move right, as well as a work tape. The work tape is initialized to all zeros and the program tape is initialized randomly, with each cell being filled from a uniform distribution over the possible symbols. What are the possibilities for the probability that the head on the program tape will move infinitely far to the right in the limit?

Obviously, this will depend on the specifics of the Turing machine, but it must always be in the range $[0,1-\Omega)$, where $\Omega$ is Chaitin's constant for the TM. Since this TM is universal, $\Omega$ must be in the range $(0,1)$, so the probability must always be in $[0,1)$. Is this entire range, or at least a set dense in this range and including zero, possible?

Related question: mathoverflow.net/questions/64773/… – Joel David Hamkins Jul 15 '11 at 18:52

As far as I can see, if you consider a single TM, then you get only one specific probability, not a dense set, whereas if you let the TM vary then $\Omega$ will vary also, and the set of probabilities will contain all rational numbers in $[0,1]$ (and some other numbers too). If you fix the number of symbols but let the TM vary, it's not so clear that you'll get all the rationals, but you'll still get a dense set.

EDIT to take into account the revision of the question: Given a universal TM, you can make trivial modifications that maintain universality but change the probability $p$ of going infinitely far to the right. For example, modify your original machine $M$ to an $M'$ that works like this: If the first symbol $x$ on the program tape is 0, then halt immediately; otherwise, move one step to the right and work like $M$ on the program minus the initial symbol $x$ (and, just to guarantee universality, if the computation halts, go back to $x$, erase it, and move $M$'s answer one step to the left so that it's located where answers should be). That modification decreases the probability $p$. You can increase $p$ by having an initial 0 in the program trigger a race to the right by $M'$ --- it just keeps marching to the right regardless of what symbols it sees. You can achieve some control over the amount by which $p$ increases or decreases by having the modification $M'$ begin by checking more than just one symbol at the beginning of the program. As far as I can tell, such modifications, carried out with enough care (which I don't have time for just now), should give you a dense set of $p$'s.

EDIT to add some details: Given a universal TM $M$ with tape alphabet $A$, and given a subinterval of $[0,1]$, choose an integer $n$ so large that your given interval includes one of the form $[k/|A|^n,(k+1)/|A|^n]$. Let $S$ be a set of $k$ words of length $n$ over $A$, and let $w$ be another such word that is not in $S$. Modify $M$ to $M'$ that works as follows. If the first $n$ symbols on the tape are a word from $S$, then march to the right forever, ignoring everything else. If they are the word $w$, then simulate $M$ on the remainder of the tape (the part after $w$), moving any final answer into the right location, as in my previous edit. Finally, if the word consisting of the tape's first $n$ letters is neither $w$ nor in $S$, then halt immediately.
Then the probability that $M'$ moves infinitely to the right will be at least $k/|A|^n$ (the probability that the initial $n$-word on the tape is in $S$) and at most $(k+1)/|A|^n$ (the probability that this $n$-word is either $w$ or in $S$), and therefore within the originally given interval.

My question wasn't very well stated; I will revise it. – Declan Freeman Jul 15 '11 at 4:04

Andreas considered the interpretation of your question where we fix the program and then vary the input. Let me now consider the dual version of the question, where we fix the infinite random input and vary the program. Surprisingly, there is something interesting to say.

The concept of asymptotic density provides a natural way to measure the size or density of a collection of Turing machine programs. Given a set $P$ of Turing machine programs, one considers the proportion of all $n$-state programs that are in $P$, as $n$ goes to infinity. This limit, when it exists, is called the asymptotic density or probability of the set $P$, and a set with asymptotic density $1$ will contain more than 99% of all $n$-state programs when $n$ is large enough, as close to 100% as desired.

What I claim is that for your computational model, almost every program leads to a finite computation.

Theorem. For any fixed infinite input (on the read-only tape), the set of Turing machine programs that complete their computation in finitely many steps has asymptotic density $1$. In other words, for fixed input, almost every program stops in finite time.

The proof follows from the main result of my article: J. D. Hamkins and A. Miasnikov, The halting problem is decidable on a set of asymptotic probability one, Notre Dame J. Formal Logic 47, 2006. http://arxiv.org/abs/math/0504351. The argument depends on the convention in the one-way infinite tape context that computation stops should the head attempt to move off the end of the tape. The idea has also come up on a few other MO questions: What are the limits of non-halting? and Solving NP problems in (usually) polynomial time? in which it is explained that the theme of the result is the black-hole phenomenon in undecidability problems, the phenomenon by which the difficulty of an undecidable or infeasible problem is confined to a very small region, outside of which it is easy. The main result of our paper is to show that the classical halting problem admits a black hole. In other words, there is a computable procedure to correctly decide almost every instance of the classical halting problem, with asymptotic probability one.

The proof method is to observe that on fixed infinite input, a random Turing machine operates something like a random walk, up to the point where it begins to repeat states. And because of Polya's recurrence theorem, it follows that with probability as close as you like to one, the work tape head will return to the starting position and fall off the tape before repeating a state. My point now is that the same observation applies to your problem. For any particular fixed infinite input, the work tape head will fall off for almost all programs. Thus, almost every program sees only finitely much of the input before stopping.

Theorem 3 in the linked paper is exactly the claim that for any fixed input the asymptotic probability one behavior of a Turing machine (in this one-way infinite tape model) is that the head falls off the tape. – Joel David Hamkins Jul 15 '11 at 18:36
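A toy illustration of the random-walk heuristic behind this answer, sketched in Python: model the work-tape head of a "random" machine as a symmetric plus/minus-one walk starting at the leftmost cell, and say it "falls off" when it steps left of cell 0. This is a caricature of the argument, not the actual proof, and all identifiers are invented here.

```python
import random

def falls_off_within(steps, rng):
    """True if a symmetric random walk from cell 0 steps left of the tape."""
    pos = 0
    for _ in range(steps):
        pos += rng.choice((-1, 1))
        if pos < 0:
            return True
    return False

rng = random.Random(1)
trials = 10_000
for budget in (10, 100, 1000, 10_000):
    hits = sum(falls_off_within(budget, rng) for _ in range(trials))
    print(f"P(fall off within {budget:>6} steps) ~ {hits / trials:.3f}")

# The estimates creep toward 1 as the step budget grows, matching the
# Polya-recurrence intuition: with enough steps the head almost surely
# returns to the start and walks off the tape.
```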
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8728349804878235, "perplexity": 235.37465747085383}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-48/segments/1386163056120/warc/CC-MAIN-20131204131736-00010-ip-10-33-133-15.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/the-circle-has-returned-help-me-please.561726/
# THE CIRCLE HAS RETURNED (Help me please)

1. Dec 20, 2011

### Plutonium88

1. The problem statement, all variables and given/known data

PICTURE: http://imageshack.us/photo/my-images/21/circls.png/

A force of gravity acts upon a ball on top of a circle. The ball rolls down the curve of the circle until a CERTAIN POINT. At this CERTAIN POINT the ball detaches from the circle and travels until it hits the ground. What is the distance between the top of the CIRCLE and the point at which it detaches?

GIVEN INFORMATION: Mo = mass of ball, D = diameter of circle
- There is NO FRICTION
- Energy is conserved (only conservative forces acting -- gravity)

2. Relevant equations

Centripetal force = inward force
Conservation of energy

This problem is similar to a BANKED CURVE! (This is the only way that I know of that an object can be 'forced' towards the center, which is due to the horizontal component of the normal force.)

3. The attempt at a solution

First I started by solving for forces on a banked curve...

Fnet,y = FN sinθ - mg = 0, therefore FN = mg/sinθ
Fnet,x = FN cosθ
Fnet,x = mg tanθ

Now I used the horizontal component (Fnet,x) and made it equal to Fc, because Fc is the horizontal inward force.

Fc = Fnet,x (simplify)
Vo = √[Dg tanθ / 2]

Now I'm using conservation of energy, comparing the initial moment at the top, where the velocity is 0, to the "CERTAIN POINT" at which the ball detaches.

Et1 = mgD
Et2 = mg(D - x) + (1/2)mVo^2
Et1 = Et2 (SIMPLIFY)
X = D tanθ / 4

Now I'm left with a problem... the angle... I tried to solve for it by creating a right-angled triangle with a radius line midway through the circle and a radius line going toward the object... but yeah, no luck. Any ideas, anyone? My teacher for some reason seems to tell me I don't need an angle... which leads me to think that it's not a banked curve after all, even though I thought I was supposed to have an angle... SOMEONE HELP ME!!! I've had this problem for WEEKS but I'll never give it up!!!

I believe my solution is correct, due to the fact that the units match up. HOWEVER I don't know how to solve for theta... THE ANGLE the projectile detaches at!!! :(
- Can someone give me some hints on how I can get this angle?

2. Dec 20, 2011

### Staff: Mentor

My advice: Scrap the idea of using the banked curve solution to solve this problem. Instead, apply Newton's 2nd law to the ball. What's the condition that tells you when the ball just starts to lose contact?

3. Dec 20, 2011

### Plutonium88

I would have to say... when the ball's horizontal velocity is more than the ball's vertical velocity... The normal force on the object would also have to be zero at the point it detaches. And is it impossible to find the angle because I don't have enough info? :(

Last edited: Dec 20, 2011

4. Dec 20, 2011

### Staff: Mentor

I don't think so. That's the one. Express that mathematically. You have all the info needed to find the angle.

5. Dec 20, 2011

### Plutonium88

Omg, I think I love you if the way I'm thinking is correct... Okay, so when I did the y components on the banked curve for the force, I know FN = 0, as you confirmed for me. So, Fnet,y = FN sinθ - FG = 0, therefore FN = mg/sinθ. So since FN is 0: 0 = mg/sinθ, θ = mg/sin? :)

6. Dec 20, 2011

### Plutonium88

Pooh... when I plug this in, the units don't match up. I get a newton instead of meters... So if I do FN as components... the only force acting on the ball is the conservative force of gravity... But how can this help me solve for the angle?

7. Dec 20, 2011

### Staff: Mentor

OK. Nah.
Consider forces in the radial direction. Apply Newton's 2nd law. (What's the acceleration in the radial direction?)

8. Dec 20, 2011

### Plutonium88

Okay, so here's my next attempt then... And I'll let the banked curve idea go, as long as you promise to tell me why? Or, after I've solved this, explain how to do it using the banked curve?

So I have my forces set as FN in the x direction and FG in the y direction... I solved for the velocity at that point by Fc = FN: mv^2/r = 0, V = sqrt[D/2m]. But if I do this, do I also consider the speed at the top of the circle, which is V = sqrt[Dg/2]?

Last edited: Dec 20, 2011

9. Dec 20, 2011

### Plutonium88

Wait, I see, I just use this velocity in the energy equation, haha. One sec... let me write this up.

10. Dec 20, 2011

### Staff: Mentor

The only connection between the two is that they both involve centripetal acceleration. I don't quite understand your x and y axes. Two forces act on the ball: the normal force and the weight. Which way do they act? That's not true. Consider force components in the radial direction. (There are only two forces. What are their radial components?)

11. Dec 22, 2011

### Plutonium88

Okay, my bad, I had to study for a math test... So can you please correct me if I'm wrong: if I consider the force components in the radial direction, I have the force of gravity FG in the y component and a normal force in the x component. Now, is this normal force on an angle in the radial direction? And if it is... this is the same problem as the banked curve; how am I to solve for this dang angle?

http://imageshack.us/photo/my-images/14/circlesk.png/

I've got to pass out, but I'll get your feedback tomorrow hopefully. Oh, and also... aren't there two normal forces? The normal force from the circle and the normal force from the ball itself, in opposite directions?

12. Dec 22, 2011

### Plutonium88

Also my force diagram... http://postimage.org/image/7g18q7iip/ - correct me if I'm wrong here too.

Last edited by a moderator: May 5, 2017

13. Dec 22, 2011

### Staff: Mentor

The force of gravity is vertical, but the normal force is not generally horizontal. The normal force is in the radial direction. It's not the same problem. Once again: Apply Newton's 2nd law in the radial direction. All you care about are the forces acting on the ball. Only one normal force acts on the ball; it pushes the ball radially outward.

14. Dec 22, 2011

### Staff: Mentor

Only two forces act on the ball. Get rid of everything else. Also: Indicate the angle of the ball's position, measured from the vertical. You'll need that angle when finding the radial components of the forces. (And when you set up the equations as I suggest, you'll end up solving for that angle.)

Last edited by a moderator: May 5, 2017

15. Dec 23, 2011

### Plutonium88

OKAY, FINALLY CHRISTMAS BREAK. Now I can invest all my time into solving this damned problem...

http://imageshack.us/photo/my-images/835/newbitmapimage3o.png/

Okay, so here are my ideas... just let me know if I can't use them! (Also, your help is much appreciated through all of this. I'm sorry it's taking me so long... I just can't seem to grasp this for some reason. :()

Okay, so:
Fnet,y = FN cos a - mg = 0, therefore FN = mg/cos a
Fnet,x = FN sin a = mg tan a (sin a / cos a = tan a)
Fnet,x = mAx, so mAx = mg tan a, Ax = g tan a

Now since I know my Ay = g, I thought I could make a triangle using the Ax and Ay vectors... but that didn't seem to work. I'm wondering why I can't do that?
Anyway, with the diagram, I noticed there is some type of relation between the x and the forces, but I still don't quite see how to relate it. A force with a length? :(

16. Dec 24, 2011

### Staff: Mentor

The diagram is OK except that you drew the point of contact at 90° from the vertical. Instead, imagine the point of contact at angle 'a' from the vertical. The acceleration isn't zero in the y-direction. It's sliding down the sphere, accelerating as it goes. Again, you're getting hung up by comparing it to the 'banked road' problem. One more time: Consider forces in the radial direction.
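For reference, here is a compact version of the radial-direction argument the mentor is pointing toward - the standard textbook result, sketched here rather than taken from the thread, assuming a frictionless sphere of radius $r = D/2$ with the angle $a$ measured from the vertical. The ball leaves the surface when the normal force drops to zero, so Newton's 2nd law in the radial direction and energy conservation give

$$mg\cos a = \frac{mv^2}{r}, \qquad \frac{1}{2}mv^2 = mgr(1-\cos a),$$

which combine to $\cos a = 2/3$. The vertical drop from the top at detachment is therefore $r(1-\cos a) = r/3 = D/6$, and no angle appears in the final answer - which is presumably why the teacher said none was needed.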
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9276984930038452, "perplexity": 1221.7638912097518}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084887253.36/warc/CC-MAIN-20180118111417-20180118131417-00259.warc.gz"}
https://www.bodhiai.in/blogs/angles-their-measumentsin-degree-radians-area-of-sector-156/
BodhiAI Nov. 11, 2019

#### Angles & Their Measurements (in Degrees, Radians), Area of Sector

Angle and Its Measurement

An angle is generated by rotating a line segment about one of its end points from some initial position to some terminal position. The measure of an angle is the amount of rotation.

Important: If the rotation is in the anticlockwise sense, the angle measured is positive, and if the rotation is in the clockwise sense, the angle measured is negative.

Types of Systems of Measuring Angles

A. Sexagesimal System / English Measure or British System

In the sexagesimal system of measurement, the units of measurement are degrees, minutes and seconds.

Degree - A right angle is divided into 90 equal parts and each part is called a degree. One degree is denoted by 1°.
Minute - A degree is divided into 60 equal parts and each part is called a minute. One minute is denoted by 1'.
Second - A minute is divided into 60 equal parts and each part is called a second. One second is denoted by 1".

Important: Sexagesimal system of angles
1 right angle = 90 degrees (90°)
1 degree = 60 minutes (60')
1 minute = 60 seconds (60")

B. Centesimal System or French System

In the centesimal system of measurement, the units of measurement are grades, minutes and seconds.

Grade - A right angle is divided into 100 equal parts and each part is called a grade. One grade is denoted by 1g.
Minute - A grade is divided into 100 equal parts and each part is called a minute. One minute is denoted by 1'.
Second - A minute is divided into 100 equal parts and each part is called a second. One second is denoted by 1".

Important: Centesimal system of angles
1 right angle = 100 grades (100g)
1 grade = 100 minutes (100')
1 minute = 100 seconds (100")

C. Circular Measure

In the circular system of measurement, the unit of measurement is the radian. A radian is the angle subtended at the centre of a circle by an arc equal in length to the radius of the circle.

Arc Length

The length l of an arc AB of a circle of radius r subtending an angle θ (in radians) at the centre of the circle is l = rθ. The length of one full rotation around the circle is the circumference 2πr, so a full rotation subtends an angle of 360°, such that 360° = 2π radians.

Important: The radian is a constant angle, i.e. it does not depend upon the radius of the circle from which it is derived.

Problem Solving Trick

If D, G and C are respectively the measures of an angle in degrees, grades and radians, then D/90 = G/100 = 2C/π.

The Constant Number π

The length of the circumference of a circle always bears a constant ratio to its diameter. Thus the ratio (circumference) : (diameter) is the same for all circles. This constant ratio is always denoted by the Greek letter π, so that π is a real number.
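A quick numerical check of these relations in Python (the function names are ours; the sector-area formula A = (1/2)r²θ is the standard one, included here since the heading promises it):

```python
import math

def deg_to_rad(d):
    return d * math.pi / 180.0

def deg_to_grad(d):
    return d * 100.0 / 90.0  # 90 degrees = 100 grades

def arc_length(r, theta_rad):
    return r * theta_rad  # l = r * theta, theta in radians

def sector_area(r, theta_rad):
    return 0.5 * r * r * theta_rad  # A = (1/2) r^2 theta, theta in radians

D = 45.0
G, C = deg_to_grad(D), deg_to_rad(D)
print(D / 90, G / 100, 2 * C / math.pi)  # all print 0.5: D/90 = G/100 = 2C/pi

print(arc_length(2.0, C), sector_area(2.0, C))  # arc and sector for r=2, 45 deg
```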
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9422662854194641, "perplexity": 913.1664074248893}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579250594391.21/warc/CC-MAIN-20200119093733-20200119121733-00387.warc.gz"}
https://chemistry.stackexchange.com/questions/127933/remove-carbon-dioxide-from-solution
# Remove carbon dioxide from solution [closed]

What solution could I add to a solution of water with dissolved carbon dioxide in order to get the carbon to precipitate out? I believe the ions are H+ and HCO3-.

Thanks, Jerid

• There is no solution that is a solution, but, for example, magnesium metal will burn in carbon dioxide gas and produce carbon particles and magnesium oxide. Totally impractical energetically as a way to get the carbon back from carbon dioxide. – Ed V Feb 16 '20 at 2:12
• Do you want to precipitate out the carbonate ions, or do you actually want to obtain elemental carbon? Feb 16 '20 at 2:16
• Thank you for the replies. I am not trying to obtain the elemental carbon. I have a process where I have a high concentration of CO2 in a gas sample. The sample also contains 95% nitrogen and a small amount of oxygen. I am able to extract the CO2 by dissolving the gas through water. This leaves the original gas with a much lower CO2 concentration, which is the goal. I am trying to figure out a way to get the CO2 out of the water without it returning to a gas state, which I am doing now with electrolysis, but it is not the ideal setup for this application. Feb 16 '20 at 3:16
• I think you must explain your needs in more detail for a reasonable answer. Is this a batch process in a lab, or is it supposed to be a continuous process on an industrial scale? What sort of flow rate of the gas? What is the starting concentration of CO2 and what would be an acceptable final concentration of CO2 in the gas? Is this supposed to be economical somehow? Is humidity change of the gas a problem? Do you really need to precipitate the CO2 somehow, or just trap the CO2 in the solution? – MaxW Feb 16 '20 at 10:52
• You might also consider scrubbing the CO2 straight from the gas. There are many commercial CO2 scrubbers that work by flowing the gas over solid NaOH. That seems more efficient than going into solution and out again. Feb 16 '20 at 13:10

Adding aqueous Ca(OCl)2 to CO2/H2O will create a precipitate of CaCO3, leaving hypochlorous acid (HOCl) in solution:

$$\ce{Ca(OCl)2 + CO2 + H2O -> CaCO3 (s) + 2 HOCl}$$

Per a source, 'A STUDY OF CALCIUM HYPOCHLORITE AS A DISINFECTANT OF WATER', to quote: "The dissolved calcium hypochlorite reacts with the free carbonic acid and half bound carbonic acid in the water and there is formed calcium carbonate, and at the same time free oxychlorid, which is known technically as hypochlorous acid."

• Adding Ca(OCl)2 may not be the best idea, because this compound is not pure. It also contains a lot of CaCl2. Feb 16 '20 at 10:34
• The question is 'what solution' and not what pure chemical compound. Here the presence of some dissolved CaCl2 as well is not relevant. Feb 16 '20 at 23:42

I would rather add limewater, which is a solution of $\ce{Ca(OH)2}$. At least it will not create a new substance dissolved in the water, because the reaction will be:

$$\ce{Ca(OH)2 + CO2 -> CaCO3 + H2O}$$

and $\ce{CaCO3}$ is an insoluble substance.

• Of course it depends on how much $\ce{CO2}$ must be precipitated per liter of solution. $\ce{Ca(OH)2}$ isn't greatly soluble. – MaxW Feb 16 '20 at 10:43
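To make MaxW's solubility caveat concrete, a back-of-the-envelope stoichiometry sketch in Python (molar masses are standard values; the dissolved-CO2 load is a made-up figure, and the solubility number is an approximate room-temperature value):

```python
M_CO2 = 44.01    # g/mol, molar mass of CO2
M_CaOH2 = 74.09  # g/mol, molar mass of Ca(OH)2

co2_load = 0.8   # hypothetical dissolved CO2, g per litre of water
mol_co2 = co2_load / M_CO2

# 1:1 stoichiometry: Ca(OH)2 + CO2 -> CaCO3 + H2O
caoh2_needed = mol_co2 * M_CaOH2
print(f"Ca(OH)2 needed: {caoh2_needed:.2f} g per litre")  # ~1.35 g

# Ca(OH)2 solubility is only about 1.5 g/L near room temperature, so a CO2
# load much above ~0.9 g/L would exceed what one litre of saturated
# limewater can supply -- the reason "isn't greatly soluble" matters here.
```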
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 4, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6086477041244507, "perplexity": 854.5228098855988}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588113.25/warc/CC-MAIN-20211027084718-20211027114718-00005.warc.gz"}
https://www.zora.uzh.ch/id/eprint/160544/
# Weighted dependency graphs

Féray, Valentin (2018). Weighted dependency graphs. Electronic Journal of Probability, 23(93):1-65.

## Abstract

The theory of dependency graphs is a powerful toolbox to prove asymptotic normality of sums of random variables. In this article, we introduce a more general notion of weighted dependency graphs and give normality criteria in this context. We also provide generic tools to prove that some weighted graph is a weighted dependency graph for a given family of random variables. To illustrate the power of the theory, we give applications to the following objects: uniform random pair partitions, the random graph model $G(n,M)$, uniform random permutations, the symmetric simple exclusion process and multilinear statistics on Markov chains. The application to random permutations gives a bivariate extension of a functional central limit theorem of Janson and Barbour. On Markov chains, we answer positively an open question of Bourdon and Vallée on the asymptotic normality of subword counts in random texts generated by a Markovian source.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5593763589859009, "perplexity": 424.4674500348564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154089.68/warc/CC-MAIN-20210731141123-20210731171123-00486.warc.gz"}
http://quant.stackexchange.com/questions/8575/do-some-option-pricing-models-allow-for-misspecification-and-what-does-it-mean
Do some option pricing models allow for misspecification and what does it mean?

This is to some extent a theoretical question, and maybe we can work together to produce some input and output. Diverse option pricing models are reported to be misspecified in various studies. One example is the paper of Bakshi et al. (1997) called "Empirical performance of alternative option pricing models". The authors come to this conclusion by estimating the implied volatilities for a full dataset, and then re-estimating these implied volatilities for six subsets based on the moneyness-maturity categories. They find differences between the values of the implied volatilities. I quote: "if each candidate option pricing model were correctly specified, the six sets of option prices, formed across either moneyness or maturity, should not have resulted in different implied parameter volatility values nor should the "implied-parameter matrix" treatment have led to any performance improvement."

My first question is: what does misspecified actually mean? Isn't this difference between the implied parameters due to the presence of the volatility smile? In that case, one should say that the models are not misspecified, but that this is a result due to the data.

Secondly, how do some models allow for this misspecification? If, for instance, a specific model is misspecified during a particular period, it is imaginable that it produces a smaller pricing error during a different sub-period. One example I heard is the GARCH option pricing model; a constant-volatility model is nested within the GARCH framework, so that the framework allows for misspecification of the simpler model. I don't entirely understand this concept, so maybe someone can help me out. Thank you.

- You are saying that "Diverse option pricing models are reported to be misspecified in various studies." Then you ask what it means... I am confused. And I feel you need to supply a lot more information to have readers understand what is under discussion: how do the authors "estimate" the implied volatilities (from what data/model), re-estimate by changing what, and what are those "6 sets of option prices"? I am afraid the reader cannot extrapolate from the little information in the quote provided. – Matt Wolf Jul 26 '13 at 14:28
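One way to make the nesting point concrete (our sketch, not from the question's sources): in a GARCH(1,1) model the conditional variance evolves as

$$\sigma_t^2 = \omega + \alpha\,\varepsilon_{t-1}^2 + \beta\,\sigma_{t-1}^2,$$

so the restriction $\alpha = \beta = 0$ recovers a constant variance $\sigma^2 = \omega$, i.e. a Black-Scholes-style constant-volatility model. Because the simpler model is a special case, the richer framework can "allow for" its misspecification: if the constant-volatility restriction is wrong in some sub-period, the extra parameters can absorb part of the resulting pricing error.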
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9161821007728577, "perplexity": 917.11963745157}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-27/segments/1435375098468.93/warc/CC-MAIN-20150627031818-00219-ip-10-179-60-89.ec2.internal.warc.gz"}
https://www.physicsforums.com/threads/question-on-the-particles-that-formed-the-earth.919113/
# I Question on the particles that formed the Earth.

1. Jul 1, 2017

### Damian79

Full disclosure, I am a creationist, but I want to know the finer points about the big bang and the creation of the universe. We know that the formation of new rock from lava doesn't make it "day zero" rock, i.e. it would still be considered aged when we do radiometric dating. So we know these changes don't reset the "clocks" on how old the rocks are; I think this is accepted among creationists and non-creationists alike. So how do we know that the particles that formed the Earth hadn't already aged on the way from the big bang to the creation of the Earth, assuming the particles from the big bang are "day zero" particles? Could being in the proximity of antimatter age or reverse-age matter? So many questions regarding this, but I'll start here.

2. Jul 1, 2017

### Orodruin

Staff Emeritus

This is false. For example, potassium-argon dating is performed by comparing the potassium and argon abundances in the rock. The argon created by potassium decay escapes while the rock is molten, but once the rock solidifies it traps the argon. The rock is therefore "day zero" due to not having any argon in it when it is formed, and you can perform the dating by comparing the amounts of potassium and argon. For basic information on K-Ar dating, see the wikipedia page.

3. Jul 1, 2017

### Staff: Mentor

All dating methods where one element can form a crystal but its decay product cannot form the same crystal start at zero age when the rock solidifies. All dating methods using radiation damage in solids start at zero age. Basically all dating methods for inorganic material rely on one of these two ideas. Not a coincidence: you need a well-known initial state. It was not. The big bang only produced hydrogen, helium and tiny amounts of lithium. Most of Earth is made out of heavier elements that formed in stars later. For things like the overall uranium isotope ratio (238 to 235; 234 is produced from 238 decay, so that one is special), what we see is indeed not the age of the Earth; it is the age of the uranium, and it is a bit older than Earth. This ratio on its own is not used for dating. No. And there are no relevant amounts of antimatter around anyway.

4. Jul 1, 2017

### Staff: Mentor

Hi Damian79. Welcome to PF!

Before we begin this discussion (which appears to have already started while I was typing this), I'd like to make it clear that ALL discussion should take place in the context of known science. This means that if someone tells you that X is true or Y is the way that something works, we are talking about those things as currently understood by the mainstream scientific community. There is no discussion of "absolute truth" here. I say this because I want to avoid many of the issues that often plague these conversations, where criticism is given of the scientific view for not "truly" knowing what happened in the past or at large distances. We fully know and admit that we can't know any absolute truth, and any statements or facts given here should always be understood as being part of a theory or model that is always being tested and verified to the best of our abilities. And rather than being a weakness of science, it's actually a strength in that it allows us to constantly ensure that our body of knowledge is as accurate as possible.

For starters, this is not how cosmologists and other scientists model and understand the formation of the Earth or anything within the universe.
It would be beyond the scope of this post and probably this thread to give you the entire history of the universe as given in the standard model of cosmology (you can find a decent explanation on wikipedia), but we can talk about a few key points. Note that this is a very brief and general overview and is not intended to be an extremely accurate description. 1. The big bang and subsequent evolution of the universe resulted in the formation of mostly hydrogen and helium, with a tiny smattering of lithium and a few other light elements (we're going to mostly ignore dark matter here, as it's not well understood yet and doesn't do much except provide extra gravity help form galaxies and galaxy clusters). 2. These atoms eventually coalesced under gravity to form the galaxies and then the first stars. 3. The fusion of light elements inside these stars created heavier elements like carbon, oxygen, nitrogen, etc. These first stars were very, very massive and eventually underwent supernova, spreading their heavier elements out into the universe to mix with the hydrogen and helium gas still out there. Galaxy and star formation continued, pumping out larger quantities of heavier elements over time. 4. During subsequent star formation, the heavier elements formed what we call "dust". Now, dust is a very different thing that hydrogen and helium gas and has a profound impact on the events of star formation. With only hydrogen and helium (and perhaps trace quantities of lithium), the collapsing gas cloud tends to just get blown away once the proto-star becomes hot enough to emit lots of radiation and solar wind. There is no formation of rocky planets at this time because there are no heavier elements. However, once you add carbon, oxygen, nitrogen, iron, and the dozens of other heavier elements (including uranium) to the collapsing cloud of dust and gas, things change. Heavy elements are much denser than either hydrogen or helium and when the collapsing cloud of dust and gas forms a large, rotating disk surrounding the proto-star they tend to "stick together" to form molecules, dust grains, and small rocks that aren't simply blown away when the proto-star heats up. Over time, these rocks collide and merge with other rocks to form larger bodies, which then collide with more material, building up what are called "planetesimals". Further merging of these planetesimals results in the formation of proto-planets which eventually become full-fledged planets as they finally merge with the remaining material. 5. Now, this is where a crucial part of dating the ages of rocks comes into play. At first, the proto-planets and newborn planets are very, very hot. So hot that they are essentially completely molten. Over time they cool down and the different elements are able to form solid rock. The particular composition of this rock is extremely important. We know that certain elements only bond in certain ways with other elements. For example, a particular type of rock is formed by silicon, oxygen, and zirconium and is known as Zircon. Zircon has the property that it readily incorporates uranium into itself, but it strongly rejects lead during its formation. So as the Earth cooled, zircon formed wherever there was sufficient quantities of oxygen, silicon, zirconium, and uranium. However, uranium is radioactive and has a half-life of about 4-billion years (experiments have verified this to a very high precision). 
Over time, part of the uranium that was taken up into zircon decays into various other elements, which themselves also decay into lighter elements. This chain of decay eventually stops at lead. As I said above, lead is strongly rejected by zircon when zircon is initially forming. So we can say with good confidence that any lead present inside zircon is the result of the decay of uranium. By looking at the ratio of lead to uranium, and knowing the decay rate of uranium and its decay products, we can reliably date the age of a sample of rock. Obviously things are more complicated than I've described them, but that's the general idea behind radiometric dating. Now, the reason I explained all of this was to give a very basic overview of how we date rocks and to show that much of the atoms making up the Earth were not formed directly via the big bang, but inside of massive stars and supernovae. When it comes to dating the age of the universe things get a bit more complicated and we have to use multiple methods that are very difficult to explain if you know very little about astrophysics. For example, I could tell you that we can date the age of a star cluster by looking at the type of stars remaining in the cluster (the ones that haven't undergone supernova yet), but you'd need to know about the details of how stars work to understand why that particular type of dating method works. And things only get more complicated from there. No. Antimatter is understood pretty well. It does not have any "mystical" properties that normal matter lacks. Antimatter works just like matter in all respects except that the sign of certain properties change (charge goes from positive to negative or vice versa as an example). 5. Jul 1, 2017 ### Damian79 I am a little confused by what you are saying. Do fresh lava rocks return a result of possibly zero days old when radiometric dating is done on them? Do you have a link that shows this? 6. Jul 1, 2017 ### Orodruin Staff Emeritus In molten rock, the argon escapes. When it solidifies there will therefore be no argon. If you make a measurement right after the rock has solidified, you will get an age of zero. Due to the long half-life of potassium-40, "zero" essentially means that you know that the rock is "less than 100000 years" as it takes some time for a measurable amount of argon to accumulate. I also suggest you read @Drakkith 's post regarding uranium-lead dating, which is based on a similar principle. 7. Jul 1, 2017 ### Damian79 Thank you for that primer Drakkith. So we get the dates from calculating the amount of material created by the original material? Or am I wrong here? 8. Jul 1, 2017 ### Staff: Mentor A good source on the general methods of radiometric dating is the Isochron Dating article at Talk.Origins: http://www.talkorigins.org/faqs/isochron-dating.html Potassium-Argon is one of the methods to which the general principles given in this article apply. 9. Jul 1, 2017 ### Damian79 I dont see any examples of fresh rocks coming up in the links of "potassium argon dating fresh lava rocks" that have low dates listed in the links. Perhaps my google search is borked because of my search history, so I can only see those dates from creationists which I know are contested. 10. Jul 1, 2017 ### Orodruin Staff Emeritus Yes, but you also need to know how much of the original material is left. Otherwise you cannot know the fraction of the original material that has decayed and, by extension, the age of the sample. Let us take a hands-on example with made up numbers. 
Let us say that your friend has a bunch of peaches and you know that every day your friend will eat half of the peaches that are left, leaving only the seed. If you only count the seeds, you have no way of knowing when the peaches were picked. However, if you see that there are 4 peaches and 28 seeds, then you know that
• there were 8 peaches and 24 seeds 1 day ago
• there were 16 peaches and 16 seeds 2 days ago
• there were 32 peaches and 0 seeds 3 days ago
and consequently the peaches were picked 3 days ago. Without the information on how many peaches there were, or without the information on how many seeds there were, you would not have been able to determine when there were no seeds.

Because of low accuracy for young rock, it is very impractical to use K-Ar dating on young rock (all it will tell you is that the rock is less than 100000 years old). For young rock, it is much more interesting to use dating methods that employ nuclei that decay faster, since they will give more accurate results. Of course, you can try to do K-Ar dating on fresh rock, but it will just come out with zero argon abundance, and this is not a very exciting result.

11. Jul 1, 2017

### Orodruin

Staff Emeritus

To put this in a formula: the basic idea is based on having a number of nuclei $N_0$ of the parent nucleus and none of the daughter at time zero. A priori, you do not know $N_0$. The number of parent nuclei after a time $t$ has passed will be given by $N_P = N_0 2^{-t/t_0}$, where $t_0$ is the half-life of the parent. This also means that the number of daughter nuclei that have been produced is $N_D = N_0 (1 - 2^{-t/t_0})$, and consequently the ratio $R = N_D/N_P$ at time $t$, which is what you can measure, is given by
$$R = \frac{1-2^{-t/t_0}}{2^{-t/t_0}} = 2^{t/t_0} - 1 = e^{\ln(2) t/t_0} - 1$$
and we can solve for $t$ as
$$t = \frac{t_0}{\ln(2)} \ln(R+1).$$
If you only knew $N_D$ or $N_P$, you would not know what $R$ was. Note that there is no need to know the original number $N_0$; you can make do with just things that you can measure today.

12. Jul 1, 2017

### Damian79

I see. That is the issue I am currently having with accepting all of this. I want to see a result that comes to 0.1 million years old or less. Have there been any tests done to prove the assumption that all the argon would leak out and give an almost-zero-age result? Has there been a study of the rate of argon leaving the rock? So that at least I can be led to believe that, at the start, the age of the rocks would be zero?

13. Jul 1, 2017

### Orodruin

Staff Emeritus

This will be difficult to find. Not because it is not possible, but because it is very basic and rather uninteresting to do such a study, although it would in principle be very easy to do. Just take some freshly formed rock and try to measure its argon content; you will get zero. I am not a geologist, so I do not know the early publication history regarding radiogenic dating. It would however have made sense for early scientists to do such tests with known young samples.

14. Jul 1, 2017

### Staff: Mentor

Pierre-Yves Gillot, Yves Cornette: The Cassignol technique for potassium-argon dating, precision and accuracy: Examples from the Late Pleistocene to Recent volcanics from southern Italy

2000 years is short enough to use well-documented volcanic eruptions. Table IV compares the measured ages with the actual eruption dates. Eolian islands: Eruptions 1400-1500 years ago; K-Ar measurements range from "0 to 4000 years ago" to "1200-2000 years ago" depending on the sample.
Isle of Ischia: Eruption 715 years ago; K-Ar measurements go from "0 to 2000 years ago" to "300 to 1500 years ago". Random example, not the only such study.

15. Jul 1, 2017

### Staff: Mentor

In addition to the above examples, note that it is a very, very well understood fact that gases in a liquid will diffuse from areas of higher concentration to areas of lower concentration if possible (perhaps "concentration" is not the right word; partial pressures perhaps?).

16. Jul 1, 2017

### Orodruin

Staff Emeritus

I stand corrected.

17. Jul 1, 2017

### Damian79

That about wraps it up for the questions from me. Thank you for such quick responses. Sorry for the late reply, I had to do something.
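A numerical version of Orodruin's formula $t = \frac{t_0}{\ln 2}\ln(R+1)$ with $R = N_D/N_P$, sketched in Python. The K-40 half-life is the standard value; note that a real K-Ar computation also needs the branching ratio (only about 10.7% of K-40 decays yield Ar-40), which this toy version ignores:

```python
import math

T_HALF_K40 = 1.248e9  # years, half-life of K-40

def age_from_ratio(R, t_half=T_HALF_K40):
    """Age of a sample from the daughter/parent ratio R = N_D / N_P."""
    return t_half * math.log(R + 1.0) / math.log(2.0)

for R in (0.0, 1e-4, 0.01, 1.0):
    print(f"D/P = {R:<8} ->  t = {age_from_ratio(R):.3e} years")
# R = 0 gives t = 0 (fresh rock); R = 1 gives exactly one half-life.
```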
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6207339763641357, "perplexity": 827.2070324272555}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-34/segments/1502886108264.79/warc/CC-MAIN-20170821095257-20170821115257-00342.warc.gz"}
https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.112.161303
# Synopsis: Sterile Neutrino as Dark Matter Candidate

A dark matter particle in the form of a noninteracting neutrino could explain the recent detection of an x-ray emission line from galaxy clusters.

A hypothetical neutrino that does not interact through the weak force could be the source of a recently detected x-ray emission line coming from galaxy clusters. However, previous models using this so-called "sterile" neutrino as a form of dark matter were not able to satisfy constraints from cosmological observations. Now, writing in Physical Review Letters, Kevork Abazajian of the University of California, Irvine, shows that a sterile neutrino with a mass of $7$ kilo-electron-volts (keV) could be a viable dark matter candidate that both explains the new x-ray data and solves some long-standing problems in galaxy structure formation.

Cosmologists have long considered neutrinos as possible dark matter particles. However, because of their small mass (less than about $1$ eV), conventional neutrinos are too fast, or "hot," to form the dense dark matter structures needed to hold galaxies and galaxy clusters together. By contrast, sterile neutrinos, which result from certain neutrino theories, can have larger masses and could have been naturally produced in the big bang by neutrino flavor mixing. The problem has been that sterile neutrinos should decay, producing an x-ray signal that no one has observed—until maybe now. Earlier in 2014, an analysis of galaxy cluster data revealed an x-ray emission line, which is consistent with the decay of a $7$-keV sterile neutrino.

Normally, dark matter with this mass would be too "warm" to match galaxy data. However, Abazajian showed that the sterile neutrinos could have a "cooler" momentum distribution if they were produced through resonantly enhanced neutrino flavor mixing (the MSW effect). When Abazajian plugged this neutrino into a cosmological model, he found it could explain both the small number of Milky Way satellite galaxies and their central densities, which have eluded the currently favored cold dark matter model.

– Michael Schirber
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 3, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8959102630615234, "perplexity": 1451.5617077112222}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-04/segments/1547583680452.20/warc/CC-MAIN-20190119180834-20190119202834-00570.warc.gz"}
http://qatestingblog.com/types-of-xpath/
# Types of XPath

1. Absolute XPath

An absolute XPath starts from the root of the HTML page. Absolute XPaths are not advisable most of the time, for the following reasons:

1. Absolute XPaths are lengthy and hence hard to read
2. Absolute XPaths are brittle: they break when minor structural changes are made to the web page

An absolute XPath should be used only when a relative XPath cannot be constructed (highly unlikely). Absolute XPaths tend to break as web pages and content change, hence it is not recommended to use absolute XPaths in Selenium.

Syntax: Absolute XPaths start with /html

Example: /html/body/div[1]/div/div[2]/form/div[2]/input

2. Relative XPath

A relative XPath locates an element with respect to a known element; the element of your choice is referenced relative to that known element.

Syntax: Relative XPaths start with two forward slashes '//'.

Example: //div[@id='divUsername']/input

Note: Absolute XPaths are faster to evaluate than relative XPaths.

3. Exact XPath

Locates elements using their attributes, attribute values, and the inner text of the elements.
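A minimal sketch of how the first two locator styles look in practice, assuming Selenium's Python bindings (the `By.XPATH` API of Selenium 4); the URL and page structure are illustrative, not from a real site:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.com/login")  # illustrative URL

# Absolute XPath: anchored at /html, so it breaks if any ancestor changes.
user_abs = driver.find_element(
    By.XPATH, "/html/body/div[1]/div/div[2]/form/div[2]/input")

# Relative XPath: anchored at a known element (the div with a stable id),
# so it survives layout changes elsewhere on the page.
user_rel = driver.find_element(By.XPATH, "//div[@id='divUsername']/input")

user_rel.send_keys("alice")  # both locators point at the same input here
driver.quit()
```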
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.921970009803772, "perplexity": 3419.575900724945}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578534596.13/warc/CC-MAIN-20190422035654-20190422061654-00169.warc.gz"}
https://www.oercommons.org/authoring/27052-10x-bigg/view
# 10X Bigger!

By Amelia Terrapin, for Big Ideas in Beta

### LEARNING OUTCOMES:

• Students will recognize that a digit in one place represents 10x the value of the digit to its right.
• Students will be able to compare multi-digit numbers using <, =, > by looking at the value of the digits in each place.
• Students will add and subtract multi-digit numbers.
• Students will work cooperatively in groups to arrive at solutions.

Generalize place value understanding for multi-digit whole numbers.

• 4.NBT.1 Recognize that in a multi-digit whole number, a digit in one place represents ten times what it represents in the place to its right. For example, recognize that 700 ÷ 70 = 10 by applying concepts of place value and division.
• 4.NBT.2 Read and write multi-digit whole numbers using base-ten numerals, number names, and expanded form. Compare two multi-digit numbers based on meanings of the digits in each place, using >, =, and < symbols to record the results of comparisons.
• 4.NBT.3 Use place value understanding to round multi-digit whole numbers to any place.

Use place value understanding and properties of operations to perform multi-digit arithmetic.

• 4.NBT.4 Fluently add and subtract multi-digit whole numbers using the standard algorithm.

### TIME REQUIRED FOR LESSON:

45 minutes to one hour

### TIME REQUIRED FOR TEACHER PREPARATION:

Less than ten minutes

### MATERIALS FOR LESSON:

• Index cards
• Pencils/pens
• 3 poly spots, cones, or other kinds of markers for the ones, tens, and hundreds places
• Open space: push the desks aside; going to the gym or outside is ideal
• Teacher-created worksheets for recording unifix work; see step 5 of the lesson overview

### OVERVIEW OF LESSON:

1. Write a 3-digit whole number on the board, for example, 400. Ask the students, "Can anyone figure out how many tens are in this number?" Write a single-digit number on the board, for example, 6. Ask the students, "Can anyone figure out what 10 times this number is?" As a pre-assessment, spend some time exploring what the class knows about place value.

2. Each student should have paper and a writing utensil to record numbers. Ask everyone to stand up. Tell the students, "We are going to record how many times we can jump (feel free to substitute your own movements: cartwheels, karate kicks, etc.) in 10 seconds. Write that number down on your index card. Now let's record how many push-ups you can do in 10 seconds. Write that number down."

3. Divide students into small groups. Ask the students, "What if we wanted to know how many you could do in 60 seconds? How could we solve that using our 10-second numbers?" (Multiply each count by 6; for example, 7 jumps in 10 seconds becomes 7 × 6 = 42 jumps in 60 seconds.) Each group should work together to make sure that, as a group, they have worked on each person's answer. As you monitor the groups, reinforce the idea of place value.

4. Now split the entire class in half. Make 3 circles or boxes large enough to hold up to 9 students, using cones, tape, etc. Each circle or box will represent one digit of a 3-digit number. Take one of the students' index cards that has a 3-digit number on it, for example 160. Ask one person to represent the hundreds place by standing in the circle furthest to the left. Ask the students, "How many people do we need to represent the tens place?" Ask six people to stand in the box in the middle. Ask the students, "How many people do we need to represent the ones place?" Zero. Have the students assess whether the number has been represented correctly by giving a thumbs up or thumbs down sign.
Pick another card with a different 3-digit number on it and repeat the same process.

5. Say to the students, "Now let's start small. Can you show me the number 3?" Three students will stand in the ones place. Ask the students, "What if we take 3 and multiply it by 10?" Allow students to respond with guesses about how the 3 ones should move. The 3 students slide over into the tens place. Ask the students, "What if we take 30 and multiply it by 10?" Again, allow students to provide the answer and discuss where the bodies should move. The 3 students slide over again, into the hundreds place.

6. Tell the students, "Now let's add and subtract two numbers." Make another set of circles or boxes parallel to the first set. Draw two more cards from students, for example, 120 and 150. Ask students to represent each number in the boxes. Ask the rest of the class to do the same procedure on paper as they watch it happen with bodies. Tell the class, "Let's add 120 and 150." All the students representing tens should move into one box, and all the students representing hundreds should move into another box, so that the total equals 270. The first time through, you may need to count out the tens and hundreds together as a class. As you repeat the process a few times, make sure each student does some of each kind of addition (with bodies/writing). Continue to ask what is happening with place value. Notice what happens when a number is carried over, for example, with 140 and 170. Spend some time reinforcing this concept of carrying numbers to make sure the students understand. Provide guidance when needed.

7. Say to the students, "Now let's take two of our numbers and compare them using <, =, >." Ask students to represent two numbers in the boxes, for example 150 and 220. Ask the other students to write these numbers down. Ask the students, "What symbol should go in between the two numbers?" Ask one student to represent the symbol, using two arms as an alligator mouth that always wants to eat the bigger number. Ask the students who are writing to write the symbol with the alligator mouth facing the bigger number. Ask the students, "Which place value should we look at first? Why?" Repeat this exercise a few times with different numbers.

8. To assess, write a multi-digit whole number on the board, for example, 800. Ask a series of questions to which students can respond verbally. "How many tens are in this number?" (800 ÷ 10 = 80) Allow time for discussion. "Which number is bigger, 130 or 410? What is 3 times 10? What is 30 times 10? What is 300 times 10?" As students respond to the questions, make sure to reinforce the core standards addressed in this lesson.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 1, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6790747046470642, "perplexity": 1800.2233561274788}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038059348.9/warc/CC-MAIN-20210410210053-20210411000053-00118.warc.gz"}
https://discuss.codechef.com/questions/64224/spshort-editorial
# SPSHORT - Editorial

Author: Devendra Agarwal
Tester: Anudeep Nekkanti
Editorialist: Amit Pandey

# DIFFICULTY:

Medium-Hard.

# PREREQUISITES:

Dijkstra's algorithm, STL usage.

# PROBLEM:

You are given a graph. You need to find the shortest walk in the graph from src to snk which satisfies the following property: if the shortest walk from src to snk uses the edges $E_{1} \to E_{2} \to \cdots \to E_{k}$, then $Weight(E_{1}) > Weight(E_{2}) < Weight(E_{3}) > Weight(E_{4}) \cdots$ and so on.

# QUICK EXPLANATION:

The given problem can be solved with a few modifications to Dijkstra's algorithm.

# EXPLANATION:

First subproblem: given a graph, find the shortest path from src to snk, without any extra condition.

This problem can be solved using Dijkstra's algorithm. We initialize the distance of src with $0$ and the distance of every other vertex with $\infty$, add src to a priority queue, and proceed in the following manner:

while Q is not empty:
    u ← vertex in Q with min dist[u]   // source node in first case
    remove u from Q
    for each neighbor v of u:          // where v has not yet been removed from Q
        alt ← dist[u] + length(u, v)
        if alt < dist[v]:              // a shorter path to v has been found
            dist[v] ← alt
            prev[v] ← u
        end if
    end for
end while

The given pseudocode calculates the shortest path to each vertex; the shortest path to snk can then be reported easily. An implementation of Dijkstra's algorithm can be looked up here.

Second subproblem: given a graph, find the shortest walk from src to snk which satisfies the following property: if the walk uses the edges $E_{1} \to E_{2} \to \cdots \to E_{k}$, then $Weight(E_{1}) < Weight(E_{2}) < Weight(E_{3}) < Weight(E_{4}) \cdots$ and so on, i.e. the edge weights are strictly increasing.

In subproblem 1, we kept a visited array: once a node has been visited, we have the shortest path for that particular node and there is no need to visit the same vertex again. We can achieve the same effect differently: at the moment we are exploring vertex $v$ and pushing all neighbours of $v$ into the priority queue, we can delete all edges incident to $v$ just after pushing them.

So, to solve subproblem 2: suppose we have arrived at vertex $V$ by an edge of weight $W$. When removing $V$ from the priority queue, push every neighbour $X$ of $V$ with $W_{VX} > W$, and delete those edges from the edge list. This guarantees that each edge is processed at most once; hence the algorithm will terminate. Note that we may visit a vertex more than once.

The correctness of the algorithm can be understood as follows: if we first arrive at vertex $V$ with cost $C_{1}$, and later arrive at the same vertex with cost $C_{2}$ via the same last-edge weight, then $C_{1} \le C_{2}$ (the priority queue pops states in order of cost), so deleting an edge after its first use never discards a better walk.

Implementation details: preprocess the graph by sorting the outgoing edges of each node by weight. Keep a pointer at each vertex which marks the edges of higher weight that should no longer be considered.

Original problem: to solve the original problem, where the edge lengths must alternately increase and decrease, we need to make a few changes to subproblem 2. We will keep two copies of the graph as edge lists; call them $GL$ and $GS$.
A $\text{counter}$ should be kept: an even counter denotes that we are looking for a smaller edge, and an odd counter that we are looking for a larger edge than the previous one. While we are looking for larger edges, we delete those larger edges after pushing them into the priority queue. This deletion happens only in the larger copy of the graph, i.e. the edge list $GL$, and similarly in the other case. The reason for keeping two copies of the graph is that if an edge $E$ has been taken as larger than the last one, it can later be considered again (as smaller than the last one).

To see why two copies are needed, consider the example graph from the original editorial (figure not reproduced here), in which an edge of length $10$ is considered twice. During execution of the algorithm, it is first deleted from the smaller copy of the graph, i.e. $GS$ (while traversing from the length-$200$ edge to the length-$10$ edge), and later deleted from the larger copy, i.e. $GL$ (while traversing from the length-$8$ edge to the length-$10$ edge).

As deletion can be costly (it can have linear complexity), we can instead keep counters which mark the range of the edge list that still needs to be considered.

The time complexity of this approach is $O(E\log E + E\log V)$: the $O(E\log E)$ term comes from sorting the edge lists, and the $O(E\log V)$ term from the Dijkstra-style search.

# Solutions:

Setter's solution can be found here.
Tester's solution can be found here.

asked 11 Feb '15, 00:50

1

Nice problem, too bad $O(E^2 \log E)$ solution passes tests.

answered 16 Feb '15, 00:23 mmaxio

1

Thanks @mmaxio for the appreciation of the problem. I should have coded the $E^2 \log E$ solution :( . Will take care in the near future for sure :)

(16 Feb '15, 01:26)

0

The 1st test case is wrong: there is no path from 1 to 4. It should be "4 to 1 6"; only then is the answer possible.

answered 16 Feb '15, 18:40
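To make the subproblem-2 idea concrete, here is a minimal Python sketch (my own, not the setter's or tester's code). It assumes an undirected graph with positive weights and 0-based vertices, sorts each adjacency list once, and keeps a per-vertex cutoff index so that every edge is pushed onto the heap at most once, matching the editorial's argument:

```python
import heapq
import bisect

def shortest_increasing_walk(n, edges, src, snk):
    """Shortest walk from src to snk whose edge weights strictly increase
    (subproblem 2 of the editorial). edges is a list of (u, v, w)."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    for a in adj:
        a.sort()                       # ascending by weight
    hi = [len(a) for a in adj]         # adj[v][hi[v]:] already handed out
    pq = [(0, src, -1)]                # (cost so far, vertex, last edge weight)
    while pq:
        cost, u, last_w = heapq.heappop(pq)
        if u == snk:
            return cost                # first pop of snk is optimal
        # Admissible continuations: edges strictly heavier than last_w
        # that have not yet been handed out. Each edge is pushed at most
        # once overall, which bounds the total work by O(E log E).
        i = bisect.bisect_right(adj[u], (last_w, float('inf')))
        for w, v in adj[u][i:hi[u]]:
            heapq.heappush(pq, (cost + w, v, w))
        hi[u] = min(hi[u], i)          # "delete" the edges just pushed
    return -1                          # no valid walk exists
```

The full alternating problem follows the same pattern with two cutoff arrays, one per copy of the graph, plus a flag recording whether the next edge must be larger or smaller than the last.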
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.757674515247345, "perplexity": 1212.9855233152932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257650685.77/warc/CC-MAIN-20180324132337-20180324152337-00595.warc.gz"}
https://www.arxiv-vanity.com/papers/1904.12115/
# Direct capture cross section of 9Be(n,γ)10Be Peter Mohr Diakonie-Klinikum, Schwäbisch Hall D-74523, Germany Institute for Nuclear Research (Atomki), Debrecen H-4001, Hungary April 12, 2022 ###### Abstract The cross section of the Be(n,)Be reaction was calculated in the direct capture model. All parameters of the calculations were adjusted to properties of the Be + n system at thermal energies. The calculated cross section at thermonuclear energies shows the expected behavior of -wave capture at low energies, but increases towards higher energies as typical -wave capture. Excellent agreement between new experimental data in the astrophysically relevant energy region and the present calculation is found. ## I Introduction In a recent study the Be(n,)Be reaction was investigated at thermal and stellar energies Wallner et al. (2019). The main aim of that study was the measurement of the cross section at energies in the keV region which is essential to determine the astrophysical reaction rate at the high temperatures which can be found during core-collapse supernova explosions. Here the Be(n,)Be reaction may play an important role in the so-called -process under neutron-rich conditions Wallner et al. (2019). In general, the formation of C from nucleons and  particles is hindered by the gaps of stable nuclei at masses and which has to be bypassed by three-particle reactions. Depending on the  and neutron densities in the astrophysical environment, the triple-alpha () process may be supplemented by the (n) or (nn) reactions which both proceed via Be, either directly produced in (n) or indirectly in (nn) and subsequent He(,n)Be. Then C can be formed from the Be(,n)C reaction; however, Be can also be detracted from the C formation by either the Be(n,)Be or Be(,n)Be reactions (the latter becoming only relevant at high temperatures). The neutron-rich bypasses to the triple-alpha process occur in the -process in core-collapse supernovae. The onset of the -process is discussed in detail in Woosley and Hoffman (1992), and further information on the relevance of the different three-body processes is given in Görres et al. (1995); Bartlett et al. (2006). Experimental data for the Be(n,)Be reaction in the keV region are very sparse. The resonance properties of the lowest resonance in Be(n,)Be have been studied by Kitazawa et al. Kitazawa et al. (1994), and three data points with relatively large error bars are provided by Shibata in an unpublished thesis made in the same group Shibata (1992). This gap is filled now by the new experimental data of Wallner et al. Wallner et al. (2019). A very brief theoretical analysis of the new experimental data in the direct capture model is also given in Wallner et al. (2019), and it is concluded that the -wave contribution had to be scaled down by about 30% to fit the new experimental data. It is the scope of the present study to provide a more detailed analysis of the direct capture process in the Be(n,)Be reaction. It will be shown that the new data in the keV region can be well described if the parameters of the calculation are carefully chosen to reproduce the well-known properties of Be + n at thermal energies (i.e., without any additional adjustment of parameters to the new data in the keV region). Furthermore, the contribution of low-lying resonances is re-analyzed, leading to a slightly different reaction rate at very high temperatures. 
Obviously, there is no major change in the astrophysical reaction rate at lower temperatures because finally the calculated -wave contributions in Wallner et al. (2019) (adjusted to fit the new data in the keV region) and in this study (which fit the keV data without adjustment) are practically identical. ## Ii The direct capture model ### ii.1 Basic considerations As long as the level density in the compound nucleus (Be in the present case) is low, resonances play only a minor role, and the capture cross section is dominated by the direct capture (DC) process. Often this is the case for light nuclei, but DC may also be dominant for neutron-rich nuclei, in particular with closed neutron shells, where the low -value of neutron capture corresponds to relatively small excitation energies and thus low level densities in the compound nucleus. As a nice example, DC was experimentally confirmed for the Ca(n,)Ca reaction Beer et al. (1996), and it was possible to describe the cross section in the keV region after adjustment of the parameters to thermal properties of the Ca + n system. The full DC formalism is given by Kim et al. Kim et al. (1987) and also listed in Beer et al. (1996); Mohr et al. (1998). Basic considerations on DC have already been provided by Lane and Lynn more than 50 years ago Lane and Lynn (1960a, b). The chosen model in Wallner et al. (2019) is based on Mengoni et al. (1995) which contains the same underlying physics with a focus on direct -wave capture. Here I briefly repeat only the essential features of the DC model; for details, see Beer et al. (1996); Mohr et al. (1998); Mengoni et al. (1995); Kim et al. (1987). The DC cross sections  scale with the square of the overlap integrals I=∫dru(r)OE1/M1χ(r) (1) where is the electric or magnetic dipole operator; E2 transitions are much weaker than E1 transitions for the light nucleus Be and can be neglected for the DC calculations. The and are the bound state wave function and scattering state wave function. These wave functions are calculated from the two-body Schrödinger equation using a nuclear potential without imaginary part because the damping of the wave function in the entrance channel by the small DC cross sections is typically very small Krausmann et al. (1996). Finally, the DC cross section has to be normalized with the spectroscopic factor to obtain the capture cross section to a final state : σγ,f=(C2S)fσDCf. (2) The total capture cross section is obtained by the sum over all final states : σγ=∑fσγ,f. (3) An essential ingredient for the DC model is the nuclear potential for the calculation of the wave functions and . In the present work, a folding potential was used: V(r)=λVF(r) (4) with the strength parameter of the order of unity. For details of the folding potential, see Beer et al. (1996); Mohr et al. (1998). The advantage of the folding potential is that only one parameter, namely the strength , has to be adjusted which reduces the available parameter space significantly (compared to the widely used Woods-Saxon potentials with three parameters). ### ii.2 Adjustment of the potential For the calculation of bound state wave functions , the potential strength is adjusted to the binding energy of the respective state to ensure the correct asymptotic shape of . Thus, the only parameter of the potential is fixed for each final state , and all wave functions can be calculated without further adjustment of parameters (see Table 1). 
The scattering wave function for the -wave with angular momentum has to reproduce the thermal scattering length. From the bound coherent and incoherent scattering lengths fm and fm Sears (1992) it turns out that the free scattering lengths and for and are almost identical, and thus for simplicity a weighted average was used for all scattering -waves instead of and . Note that the above and result from the coupling of the neutron spin , the spin of the Be ground state , and angular momentum . The very minor variations of within about 1% do practically not affect the calculated DC cross sections. The adjustment of the potential strength for the scattering -wave is more complicated because the thermal scattering lengths are related to -wave scattering only. As an alternative, the same procedure as for the bound states was applied. Parameters were determined by adjustment to all bound () and quasi-bound () states in Be where transfer was clearly assigned in the Be(d,p)Be or Be(,He)Be reactions Tilley et al. (2004). From the average of all states one finds a significantly lower for the -wave, compared to for the -wave. Similar to the -wave, the same value for was used for both channel spins and . ### ii.3 Adjustment of spectroscopic factors Spectroscopic factors are required for neutron transfer to the , , and shells. As the potential is well-constrained for the incoming -wave at thermal energies, spectroscopic factors can be derived from the thermal neutron capture cross section of Be using Eq. (2). The thermal neutron capture cross section has been determined in several experiments, and the results are in excellent agreement. I adopt mb which results from the weighted average of mb Firestone and Revay (2016), mb Conneely et al. (1986), and mb from the new experiment Wallner et al. (2019). The branching ratios to the individual final states in Be are also taken from the recent experiment by Firestone and Revay Firestone and Revay (2016). For the bound states with , contributions of the transfers to the and shells have to be added. However, this can be simplified because the -wave capture scales approximately with for any combination of and transfer. As long as a proper adjustment to the capture cross section is made at thermal energies, the -wave capture in the keV region must also be reproduced. Therefore, an effective spectroscopic factor is listed in Table 1 which takes into account only the transfer to the shell; contributions of the transfer are neglected. The adjustment of the effective spectroscopic factors to the thermal capture cross section is fortunately possible also for the state at MeV because a weak M1 transition to this state was detected in Firestone and Revay (2016). Only for the state at MeV an adjustment of from the thermal capture cross section is not possible because no primary -ray could be detected. Consequently, for this state had to be fixed in a different way. For that purpose a procedure was used which relates the thermal scattering lengths to the spectroscopic factors of subthreshold -wave states Mohr et al. (1997). As the adjusted for the neighboring state from Eq. (2) is about 35% lower than from the procedure of Mohr et al. (1997), the same reduction factor was applied for the unknown for the state, leading to (see Table 1). This value is roughly consistent with which can be derived with huge uncertainties from a weak secondary -ray in thermal neutron capture after correction for feeding Firestone and Revay (2016). 
A comparison of the effective spectroscopic factors in Table 1 to spectroscopic factors from transfer reactions like Be(d,p)Be is not straightforward. First, the effective spectroscopic factors of this study are calculated for the transfer to the shell only which simplifies the present calculations (see discussion above), but complicates the comparison to data from transfer reactions. Second, spectroscopic factors from transfer depend on the chosen parameters of the underlying calculations of the reaction cross sections Mukhamedzhanov et al. (2008), which are typically based on the distorted wave Born approximation (DWBA). This is reflected by wide variations of from (d,p), (,He), and (Li,Li). In some cases there is even disagreement on the transferred angular momentum . The generally poor agreement of the from different transfer reactions is explicitly stated in the compilation of Tilley et al. Tilley et al. (2004). Third, the two levels around MeV in Be cannot be resolved easily in transfer experiments. Therefore, I restrict myself here to list the adopted spectroscopic factors from different transfer reactions in Table 1 (as compiled in the ENSDF database ENS or given in Tilley et al. Tilley et al. (2004)). The only noticeable peculiarity is the deviation for the first excited state in Be between the huge from the thermal (n,) cross section and from different transfer reactions. The thermal branching to the state at MeV is moderate with about 11%, but well-defined Firestone and Revay (2016), and thus is well-constrained in the present approach. A more detailed discussion of spectroscopic factors is omitted because of the significant uncertainties of the from the different transfer reactions. ## Iii Results and discussion After the adjustment of the potential in Sec. II.2 and of the spectroscopic factors in Sec. II.3, all parameters for the DC calculations are now completely fixed. The DC cross sections for -wave and -wave capture can now be calculated without any further adjustment of parameters. The results are shown in Fig. 1. As usual, -wave capture decreases with energy by roughly , whereas -wave capture increases with . A transition from the to the behavior is found at several tens of keV. This is a typical result for light nuclei at the upper end of the -shell like C Ohsaki et al. (1994) and in the -shell (e.g., O Igashira et al. (1995); Mohr et al. (2016) and Mg Mohr et al. (1998)). Important ingredients of the DC calculations like wave functions and overlaps are further illustrated in Fig. 2. Both bound state wave functions (shown in the upper part a as in logarithmic scale and in the middle part b as in linear scale) of the ground state and the excited state at 5.96 MeV are characterized by and thus mainly differ in the exterior which is determined by the binding energies of both states. Contrary, the state at 5.96 MeV has and one node in the interior. In the exterior, the and wave functions show the same slope because of the almost identical binding energies. The resulting integrand of the overlap integral in Eq. (1) is shown in the lower part c of Fig. 2 for a chosen energy keV. Obviously, the main contributions for the capture to the ground state come from the nuclear interior and surface at relatively small radii ( fm). Because of the smaller binding energies of the and final states, the main contributions for the transitions to these states appear in the nuclear exterior for radii 10 fm fm. 
Nevertheless, for all transitions noticeable cancellation effects are found between the positive and the negative areas of the integrands in Fig. 2 (part c). A similar observation has already been made in an earlier study of direct neutron capture at thermal energies Lynn et al. (1987) which is based on the model described in Raman et al. (1985). The DC calculation of -wave and -wave capture is complemented by the contributions of the four lowest known resonances which correspond to the states in Be at MeV (), 7.542 MeV (), 9.27 MeV (), and 9.56 MeV (). The properties of the resonances are listed in Table 2. For the calculation of the resonance cross sections the approximation was used because it is known that for these states Tilley et al. (2004). The radiation width of the lowest resonance was determined experimentally as eV Kitazawa et al. (1994). This resonance decays by E1 transitions to the first excited state in Be ( eV which corresponds to a noticeable strength of 31 mW.u.) and to the second excited state ( eV, corresponding to 124 mW.u.). If one assumes the same average Weisskopf units for the E1 transitions in the decay of the next resonance with at MeV, one ends up with a smaller radiation width of eV because E1 transitions can only lead to odd-parity states around MeV and thus correspond to relatively low transition energies. Because of the high transition energy of the E2 transition to the ground state, almost the same radiation width for the E2 transition can be estimated using a typical strength of about 5 W.u. for E2 transitions in this mass region Endt (1993). This leads to an overall radiation width of eV which is significantly lower than assumed by Wallner et al. who use the same eV as for the resonance. Assuming the same strengths of 75 mW.u. for E1 and 5 W.u. for E2 transitions, the resonance has only a tiny radiation width of meV which results from the E2 transition to the state at MeV. Additional -transitions may occur to the levels in Be above the neutron threshold with larger strength (e.g., for the M1 transition to the state at 7.371 MeV); however, the final state of this transition decays preferentially by neutron emission and thus does not contribute to Be production. A large radiation width is found for the state at 9.56 MeV because of strong E2 transitions to low-lying and states in Be: eV. However, this resonance is located at almost 3 MeV and thus contributes to the astrophysical reaction rate only at very high temperatures. Interference effects between the resonances are not taken into account in the present study because no experimental information is available. However, it can be estimated that interference effects will be minor because the dominating -wave resonance does not interfere with the dominating -wave DC contributions. For completeness it has to be noted that the two resonances contain a significant amount of the total strength. As these resonances are taken into account explicitly, an additional calculation of the -wave contribution of the DC cross section would double-count the strength, and thus the -wave contribution of the DC cross section is intentionally omitted. The folding potential for the -waves contains two bound states close below the neutron threshold (see Table 1). Assuming the same potential for the -wave automatically leads to the appearance of -wave resonances at low energies which are the theoretical counterparts of the experimentally observed -wave resonances (see Table 2). 
Overall, the agreement between the calculated total cross section and the new experimental data Wallner et al. (2019) is very good with a small per point. The dominating contribution to comes from the upper data point at keV where an average cross section of b is reported in Wallner et al. (2019). The calculated cross section at exactly 473 keV is b. Averaging the calculated cross section over the experimental energy distribution of the neutrons (see Fig. 3 of Wallner et al. (2019)) leads to b which deviates only by 1.2 from the experimental . The increase from 6.97 b to 7.18 b results from the higher calculated cross sections at the upper end of the experimental neutron energy interval. As a consequence, per point approaches 1.0 in this case. Including the Shibata points Shibata (1992) reduces the deviations further to per point. It has to be repeated that the present calculation has been made completely independent, without any adjustment to the new experimental data points in the keV region. ## Iv Astrophysical reaction rate The astrophysical reaction rate  was calculated by numerical integration of the cross sections in Sec. III. A narrow energy grid from 1 to 4000 keV was used to cover the the full temperature range up to . Because of the relatively high first excited state in Be, no stellar enhancement factor was used (as also suggested in the KADoNiS database Dillmann et al. (2006)). The result is shown in Fig. 3. At low temperatures below a few keV this energy grid is not sufficient. Therefore, the calculation of the cross section was repeated in 10 eV steps from 10 eV to 50 keV. With these settings a constant rate for the -wave capture was found down to the lowest temperatures in Fig. 3 which confirms that the numerical treatment is stable. The -wave capture dominates the low-temperature region below whereas at higher temperatures around -wave capture becomes the major contributor. At even higher temperatures the resonance contributions become comparable to -wave capture which result mainly from the lowest resonance at 559 keV. As expected, the present rate is in very good agreement with the rate by Wallner et al. Wallner et al. (2019) because their DC calculation was adjusted to their new experimental data (whereas the present calculation reproduces the new experimental data without adjustment). The only significant difference appears at relatively high temperatures around and results from the lower resonance strength of the lowest resonance in the present study (see Table 2 and discussion in Sec. III). At the highest temperature in Fig. 3 the present rate becomes similar to the Wallner rate again because the lower strength of the resonance is compensated by the additional resonances at higher energies which were not taken into account in Wallner et al. (2019). Fig. 3 also includes the recommended rate of the KADoNiS database Dillmann et al. (2006) (version 1.0) which was derived from preliminary data of Wallner et al. and thus can be recommended for astrophysical calculations. The REACLIB database Cyburt et al. (2010) also recommends to use the KADoNiS rate. However, STARLIB Sallaska et al. (2013) contains a theoretical rate which is based on the statistical model. This theoretical rate exceeds the recommended rate by far at low temperatures and shows a completely different temperature dependence (see Fig. 3). Such a discrepancy is not very surprising because the statistical model is inappropriate for such light nuclei. 
A comparison of the new capture data to different libraries for neutron cross sections was already given in Wallner et al. (2019) and is omitted here. The astrophysical reaction rate $N_A\langle\sigma v\rangle$ was fitted using the same parametrization as in Eq. (7) of Wallner et al. (2019):

$$\frac{N_A\langle\sigma v\rangle}{\mathrm{cm}^3\,\mathrm{s}^{-1}\,\mathrm{mol}^{-1}} = a_0\left(1.0 + a_1 T_9^{1/2} + a_2 T_9 + a_3 T_9^{3/2} + a_4 T_9^{2} + a_5 T_9^{5/2}\right) + a_6\, T_9^{-3/2}\exp(-b_0/T_9) \qquad (5)$$

The $a_i$ and $b_0$ parameters are listed in Table 3. The deviation of the fitted rate is below 1% over the full temperature range.

## V Conclusions

The cross section of the ⁹Be(n,γ)¹⁰Be reaction was calculated in the direct capture model. All parameters of the calculations could be adjusted to thermal properties of the ⁹Be + n system, and therefore the calculation of the capture cross sections in the astrophysically relevant keV region is completely free of any adjustments. The calculated cross sections agree very well with the recently published experimental results by Wallner et al. (2019) and also with earlier unpublished data by Shibata (1992). The astrophysical reaction rate of the KADoNiS database is essentially confirmed; it is based on a preliminary analysis of the Wallner et al. data. REACLIB also suggests using the KADoNiS rate. However, the reaction rate of STARLIB should not be used because it is based on a statistical model calculation which overestimates the experimental data significantly.

###### Acknowledgements.

I thank A. Wallner for encouraging discussions. This work was supported by NKFIH (K108459 and K120666).
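As a small illustration, the fitted parametrization in Eq. (5) is straightforward to evaluate numerically. The sketch below is mine, not the author's code, and the coefficient values from Table 3 are not reproduced in this extract, so they must be supplied by the reader:

```python
import math

def rate_fit(T9, a, b0):
    """Evaluate Eq. (5): N_A<sigma v> in cm^3 s^-1 mol^-1 at temperature
    T9 (in GK), given the seven fit coefficients a = (a0, ..., a6) and
    the exponential parameter b0 from Table 3 (not listed here)."""
    a0, a1, a2, a3, a4, a5, a6 = a
    poly = a0 * (1.0 + a1 * T9**0.5 + a2 * T9 + a3 * T9**1.5
                 + a4 * T9**2 + a5 * T9**2.5)
    return poly + a6 * T9**-1.5 * math.exp(-b0 / T9)
```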
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9480420351028442, "perplexity": 846.686144325915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00454.warc.gz"}
https://www.goldbook.iupac.org/terms/view/V06614
## vibrational redistribution

https://doi.org/10.1351/goldbook.V06614

Intramolecular redistribution of energy among the vibrational modes, usually giving a statistical distribution of their populations, characterized by the 'vibrational temperature'. For large molecules, this process does not require collisions.

Source: PAC, 1996, 68, 2223 (Glossary of terms used in photochemistry (IUPAC Recommendations 1996)) on page 2283
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8283817768096924, "perplexity": 7274.257848583922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496669795.59/warc/CC-MAIN-20191118131311-20191118155311-00542.warc.gz"}
https://www.physicsforums.com/threads/abstract-algebra.183314/
# Abstract algebra

1. Sep 6, 2007

### quasar987

1. The problem statement, all variables and given/known data

I am asked to show that if E is a semi-group and if

(i) there is a left identity in E
(ii) there is a left inverse to every element of E

then E is a group.

3. The attempt at a solution

Well, I can't seem to find the solution, but it's very easy if one of the two "left"s above is replaced by a "right". For instance, if we replace the existence of a left inverse by the existence of a right inverse, then we find that the left identity is also a right identity, like so: let a, b be in E. Then ab=a(eb)=(ae)b ==> a=ae (by multiplying by bˉ¹ from the right). So e is a right identity also. Then it follows that every right inverse is also a left inverse: aaˉ¹=e ==> (aaˉ¹)a=ea ==> a(aˉ¹a)=a ==> (aˉ¹a)=e.

So, does anyone know for a fact whether this question contains a typo?

2. Sep 6, 2007

### quasar987

No, this only means that ae=ae.

3. Sep 6, 2007

### Hurkyl

Staff Emeritus
Well, I can prove that all of the left inverses of e are, in fact, equal to e. So I'm making progress.

4. Sep 6, 2007

### Hurkyl

Staff Emeritus
Also, x * (left inverse of x) = (left identity)

I think that the group structure follows from these facts. So I have at least as much confidence in the original problem as I do that I didn't make a mistake. (I'm not saying how much confidence that is. )

5. Sep 6, 2007

### Dick

I'm having problems with this as well, but then I'm tired. I would suggest, though, that if you really think it's wrong, you start trying to construct a counterexample. If you can't construct a counterexample, then the effort may teach you what you need to do.

Last edited: Sep 6, 2007

6. Sep 6, 2007

### d_leet

I know that I've done this proof before, and as I recall you need to somehow use the two facts in conjunction, and remember that if y is the left inverse of x, then y also has a left inverse, say z... I remember having to use this somehow, but my efforts on this tonight are not going well.

Edit: With a fair amount of work I managed to prove that every left inverse is also a right inverse, and from that I think it follows a little more easily that the left identity is also a right identity.

Edit 2: It actually follows almost trivially from the fact that every left inverse is also a right inverse that the left identity is also the right identity.

Last edited: Sep 6, 2007

7. Sep 6, 2007

### Hurkyl

Staff Emeritus
Basically, I tried writing lots of expressions that could be simplified in multiple ways to derive new properties. For example, I first wondered how left inverses of the identity (and their left inverses, etc.) behaved, then I started worrying about how inverses of general elements behaved.

Incidentally, I did get started by searching for a counterexample; I decided to let 0 be the identity, 1 a left inverse of 0, 2 a left inverse of 1, and so forth, then I tried to compute how multiplication had to behave.

8. Sep 7, 2007

### Timo

I think I got it. According to my calculation, the statement is true as stated. But since I'm not a mathematician and did in fact not even know the terms left-inverse and left-identity before tackling the problem (I found left-inverse in an algebra book, didn't find left-identity), I need a sanity check. I assumed the two conditions mean that:

$$\exists \, 1_L \in E : \forall a \in E : 1_L \, a = a$$ (i)

and

$$\forall \, a \in E: \, \exists \, a_L \in E: a_L \, a = 1_L$$ (ii).

Is this translation of the two conditions correct?
Last edited: Sep 7, 2007

9. Sep 7, 2007

### matt grime

It is, though it is always preferable not to write things in logical forms like that, since they are unnecessarily opaque.

10. Sep 7, 2007

### Mr.Brown

i guess this is pretty easy: i would go like this:

From knowing E is a semigroup you have associativity! And i know:
e*a = a, for e being a left identity!
and a^-1 * a = e, a^-1 being a left inverse

now i multiply a from the left and get: a*a^(-1)*a = a

from using associativity i get: (a*a^(-1))*a = a*(a^(-1)*a) = a (1)

hence by using both assumptions, the first part of (1) implies that a*a^(-1) = e -> every left inverse is a right inverse if associativity holds! the second part implies, while we assumed a^(-1)*a = e, that every left identity is a right identity. QED

11. Sep 7, 2007

### Timo

I don't completely understand what you said, Mr. Brown. Most notably, I don't get the step which seems to imply that a*e = a.

12. Sep 7, 2007

### quasar987

You went a little too fast. Multiplying from the left by a gives a*a^(-1)*a = ae, but ae is not known to be a, for e is only a left identity.

13. Sep 7, 2007

### quasar987

Let me try something here. (Assuming the problem is stated correctly.) If I show that (aˉ¹)ˉ¹=a, then this will mean that e=(aˉ¹)ˉ¹(aˉ¹)=aaˉ¹, meaning left inverses are also right inverses. Let's begin the random manipulations :)

(aˉ¹)ˉ¹(aˉ¹)=e ==> (aˉ¹)ˉ¹(aˉ¹)a=ea ==> (aˉ¹)ˉ¹e=a ==> aˉ¹(aˉ¹)ˉ¹e=aˉ¹a ==> aˉ¹(aˉ¹)ˉ¹e=e ==> aˉ¹(aˉ¹)ˉ¹=e ==> aˉ¹=((aˉ¹)ˉ¹)ˉ¹.

If every element can be seen as the left inverse of another, then I have succeeded. But is this implied? Gotta go.

14. Sep 7, 2007

### Timo

ae=a strikes back. I don't think it's a good idea to label the left inverse and left identity $$a^{-1}$$ and e/1. That nomenclature imho cries out for silly mistakes, because you usually label genuine inverses and identities with these symbols. It might differ from person to person, but I chose different names precisely because I screwed up too many steps otherwise.

Last edited: Sep 7, 2007

15. Sep 7, 2007

### Hurkyl

Staff Emeritus
How'd you manage that? You have neither proven that e is a right identity, that e has a unique left inverse, nor anything else I've noticed that would allow you to conclude that.

16. Sep 7, 2007

### quasar987

I accepted w/o proof that

17. Sep 7, 2007

### learningphysics

I was stuck trying to figure out this same problem recently. I looked it up online... I can't find the link right now. But the trick is to first prove: if x*x = x, then x = e, for any element x. This is simple:

x*x = x
x^-1 * x * x = x^-1 * x (left multiply both sides by x^-1)
e * x = e
x = e

Then you can show that every left inverse is a right inverse:

x*x^-1 = x * (e * x^-1)
= x * (x^-1 * x) * x^-1 (write out e as x^-1*x)
= (x * x^-1) * (x * x^-1)

So using the previous result we know that x * x^-1 = e, and the right-inverse part is proven.

So to prove e is a right identity:

x * e = x * (x^-1 * x)
= (x * x^-1) * x
= e * x
= x

Then you can also prove that e is the unique left identity and unique right identity.

18. Sep 7, 2007

### quasar987

Cheers!
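The theorem is also easy to sanity-check by brute force on small operation tables. The following sketch is mine, not from the thread: it takes a finite Cayley table, verifies associativity plus the two left axioms, and then confirms the conclusions argued above (a two-sided identity and two-sided inverses):

```python
from itertools import product

def check_left_axioms_give_group(table):
    """Given a finite operation table (table[a][b] encodes a*b on the
    elements 0..n-1), verify associativity, a left identity, and left
    inverses, then confirm the identity is two-sided and every left
    inverse is also a right inverse."""
    n = len(table)
    elems = range(n)
    # a semigroup is assumed in the problem statement
    assert all(table[table[a][b]][c] == table[a][table[b][c]]
               for a, b, c in product(elems, repeat=3))
    e = next(x for x in elems if all(table[x][a] == a for a in elems))
    inv = {a: next(b for b in elems if table[b][a] == e) for a in elems}
    # the conclusions of the thread's proof:
    assert all(table[a][e] == a for a in elems)       # right identity
    assert all(table[a][inv[a]] == e for a in elems)  # right inverses
    return True

# Z_3 under addition mod 3 satisfies the hypotheses, so the checks pass:
z3 = [[(i + j) % 3 for j in range(3)] for i in range(3)]
print(check_left_axioms_give_group(z3))  # -> True
```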
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9578505158424377, "perplexity": 1234.4240856204976}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988718840.18/warc/CC-MAIN-20161020183838-00353-ip-10-171-6-4.ec2.internal.warc.gz"}
https://chemistry.stackexchange.com/questions/123833/can-stimulated-raman-processes-be-strong-enough-to-drive-out-of-equilibrium
# Can stimulated Raman processes be strong enough to drive out of equilibrium?

In the (spontaneous) Raman process, incident light $$\hbar \omega_1$$ scatters and transfers some energy $$\hbar \omega$$ to a vibrational excitation of a molecule or solid. Typically this is a very rare process, happening to only about one in every $$10^9$$ photons. So even if a sample is irradiated with an ultrafast pump laser of high intensity, Raman processes tend not to be significant enough to move the system out of equilibrium. This is in contrast to something like direct absorption processes, which easily bring the system out of equilibrium and often dominate the response.

But the discussion above is for spontaneous Raman processes, which makes me wonder: what about stimulated Raman processes? In the case of stimulated Raman, two photons come in with $$\omega_1-\omega_2=\omega$$, causing vibrational excitations to be created much more efficiently. So my questions are:

1. In practical cases, what is the efficiency of stimulated Raman processes? In other words, for a given number of pump photons $$n_1$$ and Stokes photons $$n_2$$, what are some ballpark numbers for the number of vibrational excitations created?

2. Can stimulated Raman processes be so strong as to move a system out of equilibrium in a pump-probe setup?

• Vibrational relaxation is, except in a rather dilute gas, extremely fast. I'll say no chance. – Karl Nov 16 '19 at 7:52
• @Karl, what about in a pump-probe setup like in Q2? Usually they look at femtoseconds there, which is also pretty fast. – user157879 Nov 16 '19 at 7:55
• Even the faintest absorption drives a system out of equilibrium. Do you mean you want to get a significant saturation effect? – Karl Nov 16 '19 at 7:58
• @Karl, yes, that's more along the lines of what I was thinking: some significant saturation on the ultrafast time scale. – user157879 Nov 16 '19 at 13:19
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 6, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6163569092750549, "perplexity": 1036.2515287543752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655902496.52/warc/CC-MAIN-20200710015901-20200710045901-00065.warc.gz"}
https://www.groundai.com/project/robin-problems-with-indefinite-linear-part-and-competition-phenomena/
[ # [ ###### Abstract We consider a parametric semilinear Robin problem driven by the Laplacian plus an indefinite potential. The reaction term involves competing nonlinearities. More precisely, it is the sum of a parametric sublinear (concave) term and a superlinear (convex) term. The superlinearity is not expressed via the Ambrosetti-Rabinowitz condition. Instead, a more general hypothesis is used. We prove a bifurcation-type theorem describing the set of positive solutions as the parameter varies. We also show the existence of a minimal positive solution and determine the monotonicity and continuity properties of the map . Indefinite potential, Robin boundary condition, strong maximum principle, truncation, competing nonlinear, positive solutions, regularity theory, minimal positive solution. Nonlinear Robin problems] Robin problems with indefinite linear part and competition phenomena N.S. Papageorgiou, V.D. Rădulescu and D.D. Repovš] \subjclassPrimary: 35J20, 35J60; Secondary: 35J92. thanks: Corresponding author: Vicenţiu D. Rădulescu Nikolaos S. Papageorgiou Department of Mathematics, National Technical University Zografou Campus, Athens 15780, Greece Vicenţiu D. Rădulescu Department of Mathematics, Faculty of Sciences, King Abdulaziz University, P.O. Box 80203, Jeddah 21589, Saudi Arabia Department of Mathematics, University of Craiova, Street A.I. Cuza No. 13, 200585 Craiova, Romania Dušan D. Repovš Faculty of Education and Faculty of Mathematics and Physics, University of Ljubljana, Kardeljeva ploščad 16, SI-1000 Ljubljana, Slovenia (Communicated by Xuefeng Wang) ## 1 Introduction Let () be a bounded domain with a -boundary . In this paper we study the following parametric Robin problem ⎧⎪⎨⎪⎩−Δu(z)+ξ(z)u(z)=λg(z,u(z))+f(z,u(z)) in Ω∂u∂n+β(z)u=0 on ∂Ω.⎫⎪⎬⎪⎭ (Pλ) In this problem, is a parameter, () is a potential function which is indefinite (that is, sign changing) and in the reaction, and are Carathéodory functions (that is, for all , are measurable and for almost all , are continuous). We assume that for almost all , is strictly sublinear near (concave nonlinearity), while for almost all , is strictly superlinear near (convex nonlinearity). Therefore the reaction in problem () exhibits the combined effects of competing nonlinearities (“concave-convex problem”). The study of such problems was initiated with the well-known work of Ambrosetti, Brezis and Cerami [2], who dealt with a Dirichlet problem with zero potential (that is, ) and the reaction had the form λxq−1+xr−1 for all x≥0 with 1 They proved a bifurcation-type result for small values of the parameter . The work of Ambrosetti, Brezis and Cerami [2] was extended to more general classes of Dirichlet problems with zero potential by Bartsch and Willem [4], Li, Wu and Zhou [9], and Rădulescu and Repovš [19]. Our aim in this paper is to extend all the aforementioned results to the more general problem (). Note that when , we recover the Neumann problem with an indefinite potential. Robin and Neumann problems are in principle more difficult to deal with, due to the failure of the Poincaré inequality. Therefore in our problem, the differential operator (left-hand side of the equation) is not coercive (unless , ). Recently we have examined Robin and Neumann problems with indefinite linear part. We mention the works of Papageorgiou and Rădulescu [13, 14, 16]. In [13] the problem is parametric with competing nonlinearities. 
The concave term is $-\lambda x^{q-1}$, $1 < q < 2$, $\lambda > 0$ (so it enters into the equation with a negative sign), while the perturbation is Carathéodory, asymptotically linear near $+\infty$ and resonant with respect to the principal eigenvalue. We proved a multiplicity result for all small values of the parameter $\lambda > 0$, producing five nontrivial smooth solutions, four of which have constant sign (two positive and two negative).

In this paper, using variational tools together with truncation, perturbation and comparison techniques, we prove a bifurcation-type theorem, describing the existence and multiplicity of positive solutions as the parameter $\lambda > 0$ varies. We also establish the existence of a minimal positive solution $\bar{u}_\lambda$ and determine the monotonicity and continuity properties of the map $\lambda \mapsto \bar{u}_\lambda$.

## 2 Preliminaries

Let $X$ be a Banach space and $X^*$ its topological dual. By $\langle \cdot, \cdot \rangle$ we denote the duality brackets for the dual pair $(X^*, X)$. Given $\varphi \in C^1(X, \mathbb{R})$, we say that $\varphi$ satisfies the "Cerami condition" (the "C-condition" for short), if the following property is satisfied:

"Every sequence $\{u_n\}_{n \geq 1} \subseteq X$ such that $\{\varphi(u_n)\}_{n \geq 1} \subseteq \mathbb{R}$ is bounded and

$$(1 + \|u_n\|)\varphi'(u_n) \to 0 \ \text{in } X^* \ \text{as } n \to \infty,$$

admits a strongly convergent subsequence."

This is a compactness-type condition on the functional $\varphi$. It leads to a deformation theorem from which one can derive the minimax theory for the critical values of $\varphi$ (see, for example, Gasinski and Papageorgiou [6]). The following notion is central to this theory.

###### Definition 2.1.

Let $Y$ be a Hausdorff topological space and $E_0, E, D$ nonempty, closed sets such that $E_0 \subseteq E$. We say that the pair $\{E_0, E\}$ is linking with $D$ in $Y$ if:

• $E_0 \cap D = \emptyset$;

• For any $\gamma \in C(E, Y)$ such that $\gamma|_{E_0} = \mathrm{id}|_{E_0}$, we have $\gamma(E) \cap D \neq \emptyset$.

Using this topological notion, one can prove the following general minimax principle, known in the literature as the "linking theorem" (see, for example, Gasinski and Papageorgiou [6, p. 644]).

###### Theorem 2.2.

Assume that $X$ is a Banach space, $E_0, E, D$ are nonempty, closed subsets such that $\{E_0, E\}$ is linking with $D$ in $X$, $\varphi \in C^1(X, \mathbb{R})$ satisfies the C-condition,

$$\sup_{E_0} \varphi < \inf_{D} \varphi$$

and $c = \inf_{\gamma \in \Gamma} \sup_{u \in E} \varphi(\gamma(u))$, where $\Gamma = \{\gamma \in C(E, X) : \gamma|_{E_0} = \mathrm{id}|_{E_0}\}$. Then $c \geq \inf_D \varphi$ and $c$ is a critical value of $\varphi$ (that is, there exists $u_0 \in X$ such that $\varphi'(u_0) = 0$ and $\varphi(u_0) = c$).

With a suitable choice of the linking sets, we can produce as corollaries of Theorem 2.2 the main minimax theorems of critical point theory. For future use, we recall the so-called "mountain pass theorem".

###### Theorem 2.3.

Assume that $X$ is a Banach space, $\varphi \in C^1(X, \mathbb{R})$ satisfies the C-condition, $u_0, u_1 \in X$ with $\|u_1 - u_0\| > \rho > 0$,

$$\max\{\varphi(u_0), \varphi(u_1)\} < \inf[\varphi(u) : \|u - u_0\| = \rho] = m_\rho$$

and $c = \inf_{\gamma \in \Gamma} \max_{0 \leq t \leq 1} \varphi(\gamma(t))$ with $\Gamma = \{\gamma \in C([0,1], X) : \gamma(0) = u_0, \gamma(1) = u_1\}$. Then $c \geq m_\rho$ and $c$ is a critical value of $\varphi$.

###### Remark 1.

Theorem 2.3 can be deduced from Theorem 2.2 if we take $E_0 = \{u_0, u_1\}$, $E = \{tu_1 + (1-t)u_0 : 0 \leq t \leq 1\}$ and $D = \partial B_\rho(u_0) = \{u \in X : \|u - u_0\| = \rho\}$.

In the analysis of problem ($P_\lambda$), we will use the following spaces: the Sobolev space $H^1(\Omega)$, the Banach space $C^1(\overline{\Omega})$ and the boundary Lebesgue spaces $L^p(\partial\Omega)$, $1 \leq p \leq \infty$. By $\|\cdot\|$ we denote the norm of the Sobolev space $H^1(\Omega)$. So

$$\|u\| = \left[\|u\|_2^2 + \|Du\|_2^2\right]^{1/2} \ \text{for all } u \in H^1(\Omega).$$

The space $C^1(\overline{\Omega})$ is an ordered Banach space with positive cone

$$C_+ = \{u \in C^1(\overline{\Omega}) : u(z) \geq 0 \ \text{for all } z \in \overline{\Omega}\}.$$

We will use the open set defined by

$$D_+ = \{u \in C_+ : u(z) > 0 \ \text{for all } z \in \overline{\Omega}\}.$$

On $\partial\Omega$ we consider the $(N-1)$-dimensional Hausdorff (surface) measure $\sigma(\cdot)$. Using this measure, we can define the Lebesgue spaces $L^p(\partial\Omega)$ ($1 \leq p \leq \infty$) in the usual way. Recall that the theory of Sobolev spaces says that there exists a unique continuous linear map $\gamma_0 : H^1(\Omega) \to L^2(\partial\Omega)$, known as the "trace map", such that

$$\gamma_0(u) = u|_{\partial\Omega} \ \text{for all } u \in H^1(\Omega) \cap C(\overline{\Omega}).$$

This map is not surjective and it is compact into $L^p(\partial\Omega)$ if $p < \frac{2(N-1)}{N-2}$, $N \geq 3$, and into $L^p(\partial\Omega)$ for all $p \geq 1$ if $N = 2$. In what follows, for the sake of notational simplicity, we drop the use of the map $\gamma_0$. All restrictions of Sobolev functions on $\partial\Omega$ are understood in the sense of traces.

Let $f_0 : \Omega \times \mathbb{R} \to \mathbb{R}$ be a Carathéodory function such that

$$|f_0(z,x)| \leq a_0(z)(1 + |x|^{r-1}) \ \text{for almost all } z \in \Omega \ \text{and all } x \in \mathbb{R},$$

with $a_0 \in L^\infty(\Omega)_+$ and $1 < r \leq 2^* = \frac{2N}{N-2}$. We set $F_0(z,x) = \int_0^x f_0(z,s)\,ds$. Also, let $\xi \in L^s(\Omega)$ with $s > N$ and $\beta \in W^{1,\infty}(\partial\Omega)$ with $\beta \geq 0$ on $\partial\Omega$.
We consider the $C^1$-functional $\varphi_0 : H^1(\Omega) \to \mathbb{R}$ defined by

$$\varphi_0(u) = \frac{1}{2}\vartheta(u) - \int_\Omega F_0(z,u)\,dz,$$

where

$$\vartheta(u) = \|Du\|_2^2 + \int_\Omega \xi(z)u^2\,dz + \int_{\partial\Omega} \beta(z)u^2\,d\sigma \ \text{for all } u \in H^1(\Omega).$$

The next result follows from Papageorgiou and Rădulescu [12, Proposition 3] using the regularity theory of Wang [20].

###### Proposition 1.

Let $u_0 \in H^1(\Omega)$ be a local $C^1(\overline{\Omega})$-minimizer of $\varphi_0$, that is, there exists $\rho_0 > 0$ such that

$$\varphi_0(u_0) \leq \varphi_0(u_0 + h) \ \text{for all } h \in C^1(\overline{\Omega}) \ \text{with } \|h\|_{C^1(\overline{\Omega})} \leq \rho_0.$$

Then $u_0 \in C^{1,\alpha}(\overline{\Omega})$ with $\alpha \in (0,1)$ and $u_0$ is also a local $H^1(\Omega)$-minimizer of $\varphi_0$, that is, there exists $\rho_1 > 0$ such that

$$\varphi_0(u_0) \leq \varphi_0(u_0 + h) \ \text{for all } h \in H^1(\Omega) \ \text{with } \|h\| \leq \rho_1.$$

We will need some facts concerning the spectrum of the negative Laplacian plus the potential with Robin boundary condition. Details can be found in Papageorgiou and Rădulescu [12, 16]. So, we consider the following linear eigenvalue problem:

$$-\Delta u(z) + \xi(z)u(z) = \hat{\lambda} u(z) \ \text{in } \Omega, \qquad \frac{\partial u}{\partial n} + \beta(z)u = 0 \ \text{on } \partial\Omega. \tag{1}$$

We know that there exists $\mu > 0$ such that

$$\vartheta(u) + \mu\|u\|_2^2 \geq c_0\|u\|^2 \ \text{for all } u \in H^1(\Omega) \ \text{and for some } c_0 > 0. \tag{2}$$

Using (2) and the spectral theorem for compact self-adjoint operators, we generate the spectrum of (1), which consists of a strictly increasing sequence $\{\hat{\lambda}_k\}_{k \geq 1}$ such that $\hat{\lambda}_k \to +\infty$. Also, there is a corresponding sequence $\{\hat{u}_k\}_{k \geq 1}$ of eigenfunctions which form an orthonormal basis of $L^2(\Omega)$ and an orthogonal basis of $H^1(\Omega)$. In fact, the regularity theory of Wang [20] implies that $\hat{u}_k \in C^1(\overline{\Omega})$. By $E(\hat{\lambda}_k)$ (for every $k \in \mathbb{N}$) we denote the eigenspace corresponding to the eigenvalue $\hat{\lambda}_k$. We have the following orthogonal direct sum decomposition:

$$H^1(\Omega) = \overline{\bigoplus_{k \geq 1} E(\hat{\lambda}_k)}.$$

Each eigenspace has the so-called "unique continuation property" (UCP for short) which says that if $u \in E(\hat{\lambda}_k)$ vanishes on a set of positive Lebesgue measure, then $u \equiv 0$. The eigenvalues have the following properties:

• $\hat{\lambda}_1$ is simple (that is, $\dim E(\hat{\lambda}_1) = 1$);

• $\hat{\lambda}_1 = \inf\left[\dfrac{\vartheta(u)}{\|u\|_2^2} : u \in H^1(\Omega),\ u \neq 0\right];$ (3)

• for $m \geq 2$ we have

$$\hat{\lambda}_m = \sup\left[\frac{\vartheta(u)}{\|u\|_2^2} : u \in \bigoplus_{k=1}^m E(\hat{\lambda}_k),\ u \neq 0\right] = \inf\left[\frac{\vartheta(u)}{\|u\|_2^2} : u \in \overline{\bigoplus_{k \geq m} E(\hat{\lambda}_k)},\ u \neq 0\right]. \tag{4}$$

In (3) the infimum is realized on $E(\hat{\lambda}_1)$. In (4) both the supremum and the infimum are realized on $E(\hat{\lambda}_m)$. From these properties, it is clear that the elements of $E(\hat{\lambda}_1)$ have constant sign while for $m \geq 2$ the elements of $E(\hat{\lambda}_m)$ are nodal (that is, sign-changing). Let $\hat{u}_1$ denote the $L^2$-normalized (that is, $\|\hat{u}_1\|_2 = 1$) positive eigenfunction corresponding to $\hat{\lambda}_1$. As we have already mentioned, $\hat{u}_1 \in C_+$. Using Harnack's inequality (see, for example, Motreanu, Motreanu and Papageorgiou [11, p. 212]), we have that $\hat{u}_1(z) > 0$ for all $z \in \Omega$. Moreover, if $\xi^+ \in L^\infty(\Omega)$, then using the strong maximum principle, we have $\hat{u}_1 \in D_+$.

The following useful inequalities are also easy consequences of the above properties.

###### Proposition 2.

• If … , then … .

• If … , then … .

Finally, let us fix some basic notation and terminology. So, by $A \in \mathcal{L}(H^1(\Omega), H^1(\Omega)^*)$ we denote the linear operator defined by

$$\langle A(u), h \rangle = \int_\Omega (Du, Dh)_{\mathbb{R}^N}\,dz \ \text{for all } u, h \in H^1(\Omega).$$

A Banach space $X$ is said to have the "Kadec-Klee property" if the following holds:

$$u_n \xrightarrow{w} u \ \text{in } X \ \text{and} \ \|u_n\| \to \|u\| \ \Rightarrow \ u_n \to u \ \text{in } X.$$

Locally uniformly convex Banach spaces, in particular Hilbert spaces, have the Kadec-Klee property.

Let $x \in \mathbb{R}$. We set $x^\pm = \max\{\pm x, 0\}$ and for $u \in H^1(\Omega)$ we define

$$u^\pm(\cdot) = u(\cdot)^\pm.$$

We know that

$$u^\pm \in H^1(\Omega), \quad |u| = u^+ + u^-, \quad u = u^+ - u^-.$$

By $|\cdot|_N$ we denote the Lebesgue measure on $\mathbb{R}^N$. Also, if $\varphi \in C^1(X, \mathbb{R})$ then

$$K_\varphi = \{u \in X : \varphi'(u) = 0\} \ \text{(the critical set of } \varphi\text{)}.$$

If … , then … and … . Finally, we set $n_0 = \max\{k \in \mathbb{N} : \hat{\lambda}_k \leq 0\}$. If $\hat{\lambda}_k > 0$ for all $k \in \mathbb{N}$ (this is the case if $\xi \geq 0$ and $\xi \neq 0$ or $\beta \neq 0$), then we set $n_0 = 0$.

## 3 Positive solutions

The hypotheses on the data of problem ($P_\lambda$) are the following:

$H(\xi)$: $\xi \in L^s(\Omega)$ with $s > N$.

$H(\beta)$: $\beta \in W^{1,\infty}(\partial\Omega)$ with $\beta(z) \geq 0$ for all $z \in \partial\Omega$.
$H(g)$: $g : \Omega \times \mathbb{R} \to \mathbb{R}$ is a Carathéodory function such that:

• for every $\rho > 0$, there exists $a_\rho \in L^\infty(\Omega)_+$ such that

$$g(z,x) \leq a_\rho(z) \ \text{for almost all } z \in \Omega \ \text{and all } x \in [0, \rho];$$

• $\lim\limits_{x \to +\infty} \dfrac{g(z,x)}{x} = 0$ uniformly for almost all $z \in \Omega$;

• there exist constants $c_3, c_4 > 0$ and $q \in (1,2)$ such that

$$c_3 x^{q-1} \leq g(z,x) \ \text{for almost all } z \in \Omega \ \text{and all } x \geq 0,$$

$$\limsup_{x \to 0^+} \frac{g(z,x)}{x^{q-1}} \leq c_4 \ \text{uniformly for almost all } z \in \Omega;$$

• if … , then … for almost all $z \in \Omega$ and all … ;

• for every $\rho > 0$, there exists $\hat{\xi}_\rho > 0$ such that for almost all $z \in \Omega$ the function $x \mapsto g(z,x) + \hat{\xi}_\rho x$ is nondecreasing on $[0, \rho]$.

$H(f)$: $f : \Omega \times \mathbb{R} \to \mathbb{R}$ is a Carathéodory function such that:

• $f(z,x) \leq a(z)(1 + x^{r-1})$ for almost all $z \in \Omega$ and all $x \geq 0$, with $a \in L^\infty(\Omega)_+$ and $r \in (2, 2^*)$;

• $\lim\limits_{x \to +\infty} \dfrac{F(z,x)}{x^2} = +\infty$ uniformly for almost all $z \in \Omega$;

• $\lim\limits_{x \to 0^+} \dfrac{f(z,x)}{x} = 0$ uniformly for almost all $z \in \Omega$ and there exists $\delta_0 > 0$ such that

$$f(z,x) \geq 0 \ \text{for almost all } z \in \Omega \ \text{and all } x \in [0, \delta_0];$$

• for every $\rho > 0$, there exists $\tilde{\xi}_\rho > 0$ such that for almost all $z \in \Omega$ the function $x \mapsto f(z,x) + \tilde{\xi}_\rho x$ is nondecreasing on $[0, \rho]$.

We set $G(z,x) = \int_0^x g(z,s)\,ds$, $F(z,x) = \int_0^x f(z,s)\,ds$ and define

$$\gamma_\lambda(z,x) = [\lambda g(z,x) + f(z,x)]x - 2[\lambda G(z,x) + F(z,x)] \ \text{for all } (z,x) \in \Omega \times \mathbb{R}_+.$$

$H_0$: For every $\lambda > 0$, there exists $e_\lambda \in L^1(\Omega)_+$ such that

$$\gamma_\lambda(z,x) \leq \gamma_\lambda(z,y) + e_\lambda(z) \ \text{for almost all } z \in \Omega \ \text{and all } 0 \leq x \leq y.$$

###### Remark 2.

Since we are looking for positive solutions and all of the above hypotheses concern the positive semi-axis $\mathbb{R}_+ = [0, +\infty)$, we may assume without any loss of generality that

$$g(z,x) = f(z,x) = 0 \ \text{for almost all } z \in \Omega \ \text{and all } x \leq 0$$

(note that the hypotheses imply that $g(z,0) = f(z,0) = 0$ for almost all $z \in \Omega$). Hypothesis $H(g)$(ii) implies that for almost all $z \in \Omega$, $g(z,\cdot)$ is strictly sublinear near $+\infty$. This, together with hypothesis $H(g)$(iii), implies that $g(z,\cdot)$ is globally the "concave" contribution to the reaction of problem ($P_\lambda$). On the other hand, hypothesis $H(f)$(ii) implies that for almost all $z \in \Omega$, $f(z,\cdot)$ is strictly superlinear near $+\infty$. Hence $f(z,\cdot)$ is globally the "convex" contribution to the reaction of ($P_\lambda$). Therefore on the right-hand side (reaction) of problem ($P_\lambda$), we have the competition of concave and convex nonlinearities ("concave-convex problem"). We stress that the superlinearity of $f(z,\cdot)$ is not expressed using the well-known Ambrosetti-Rabinowitz condition (see Ambrosetti and Rabinowitz [3]). Instead, we use hypothesis $H_0$, which is a slightly more general version of a condition used by Li and Yang [10]. Hypothesis $H_0$ is less restrictive than the Ambrosetti-Rabinowitz superlinearity condition and permits the consideration of superlinear terms with "slower" growth near $+\infty$, which fail to satisfy the AR-condition (see the examples below). Hypothesis $H_0$ is a quasimonotonicity condition and it is satisfied if there exists $M > 0$ such that for almost all $z \in \Omega$,

$$x \mapsto \frac{\lambda g(z,x) + f(z,x)}{x}$$

is nondecreasing on $[M, +\infty)$ (see [10]).

Examples. The following pair satisfies hypotheses $H(g)$ and $H(f)$:

$$g(z,x) = a(z)x^{q-1}, \quad f(z,x) = b(z)x^{r-1} \ \text{for all } x \geq 0,$$

with $a, b \in L^\infty(\Omega)$, $a(z), b(z) > 0$ for almost all $z \in \Omega$, and $1 < q < 2 < r < 2^*$. If $a \equiv b \equiv 1$, this is the reaction pair used by Ambrosetti, Brezis and Cerami [2] in the context of Dirichlet problems with zero potential (that is, $\xi \equiv 0$). The above reaction pair was used by Rădulescu and Repovš [19], again for Dirichlet problems with $\xi \equiv 0$. Another possibility of a reaction pair which satisfies hypotheses $H(g)$ and $H(f)$ are the following functions (for the sake of simplicity, we drop the $z$-dependence):

$$g(x) = \begin{cases} 2x^{q-1} - x^{\tau-1} & \text{if } 0 \leq x \leq 1 \\ x^{\eta-1} & \text{if } 1 < x \end{cases}$$

together with a suitable superlinear $f$. In this pair, the superlinear term fails to satisfy the Ambrosetti-Rabinowitz condition.

Let $\mu > 0$ be as in (2) and let $\lambda > 0$. Let $k_\lambda : \Omega \times \mathbb{R} \to \mathbb{R}$ be the Carathéodory function defined by

$$k_\lambda(z,x) = \lambda g(z,x) + f(z,x) + \mu x^+. \tag{5}$$

We set $K_\lambda(z,x) = \int_0^x k_\lambda(z,s)\,ds$ and consider the $C^1$-functional $\hat{\varphi}_\lambda : H^1(\Omega) \to \mathbb{R}$ defined by

$$\hat{\varphi}_\lambda(u) = \frac{1}{2}\vartheta(u) + \frac{\mu}{2}\|u\|_2^2 - \int_\Omega K_\lambda(z,u)\,dz \ \text{for all } u \in H^1(\Omega).$$

###### Proposition 3.

If hypotheses $H(\xi)$, $H(\beta)$, $H(g)$, $H(f)$ and $H_0$ hold, then for every $\lambda > 0$ the functional $\hat{\varphi}_\lambda$ satisfies the C-condition.

###### Proof.

Let $\{u_n\}_{n \geq 1} \subseteq H^1(\Omega)$ be a sequence such that

$$|\hat{\varphi}_\lambda(u_n)| \leq M_1 \ \text{for some } M_1 > 0 \ \text{and all } n \in \mathbb{N}, \tag{6}$$

$$(1 + \|u_n\|)\hat{\varphi}'_\lambda(u_n) \to 0 \ \text{in } H^1(\Omega)^* \ \text{as } n \to \infty. \tag{7}$$

By (7) we have

$$\left|\langle A(u_n), h \rangle + \int_\Omega (\xi(z) + \mu)u_n h\,dz + \int_{\partial\Omega} \beta(z)u_n h\,d\sigma - \int_\Omega k_\lambda(z,u_n)h\,dz\right| \leq \frac{\epsilon_n \|h\|}{1 + \|u_n\|} \tag{8}$$

for all $h \in H^1(\Omega)$, with $\epsilon_n \to 0^+$. In (8) we choose $h = -u_n^- \in H^1(\Omega)$.
Then

$$\vartheta(u_n^-) + \mu\|u_n^-\|_2^2 \leq \epsilon_n \ \text{for all } n \in \mathbb{N} \ \text{(see (5))},$$

$$\Rightarrow \ c_0\|u_n^-\|^2 \leq \epsilon_n \ \text{for all } n \in \mathbb{N} \ \text{(see (2))},$$

$$\Rightarrow \ u_n^- \to 0 \ \text{in } H^1(\Omega). \tag{9}$$

It follows from (6) and (9) that

$$\vartheta(u_n^+) - \int_\Omega 2[\lambda G(z,u_n^+) + F(z,u_n^+)]\,dz \leq M_2 \ \text{for some } M_2 > 0 \ \text{and all } n \in \mathbb{N}. \tag{10}$$

If in (8) we choose $h = u_n^+ \in H^1(\Omega)$, then

$$-\vartheta(u_n^+) + \int_\Omega [\lambda g(z,u_n^+) + f(z,u_n^+)]u_n^+\,dz \leq \epsilon_n \ \text{for all } n \in \mathbb{N}. \tag{11}$$

Adding (10) and (11), we obtain

$$\int_\Omega \gamma_\lambda(z,u_n^+)\,dz \leq M_3 \ \text{for some } M_3 > 0 \ \text{and all } n \in \mathbb{N}. \tag{12}$$

Claim. $\{u_n^+\}_{n \geq 1} \subseteq H^1(\Omega)$ is bounded.

We argue by contradiction. So, suppose that the claim is not true. By passing to a subsequence if necessary, we may assume that $\|u_n^+\| \to \infty$. Let $y_n = \dfrac{u_n^+}{\|u_n^+\|}$, $n \in \mathbb{N}$. Then $\|y_n\| = 1$, $y_n \geq 0$ for all $n \in \mathbb{N}$ and so we may assume that

$$y_n \xrightarrow{w} y \ \text{in } H^1(\Omega) \ \text{and} \ y_n \to y \ \text{in } L^{2s'}(\Omega) \ \text{and in } L^2(\partial\Omega),\ y \geq 0. \tag{13}$$

Suppose that $y \neq 0$ and let $\Omega^* = \{z \in \Omega : y(z) > 0\}$. Then $|\Omega^*|_N > 0$ and

$$u_n^+(z) \to +\infty \ \text{for almost all } z \in \Omega^*.$$

We have

$$\frac{G(z,u_n^+)}{\|u_n^+\|^2} = \frac{G(z,u_n^+)}{(u_n^+)^2}\,y_n^2 \to 0 \ \text{for a.a. } z \in \Omega^* \ \text{(see hypothesis } H(g)\text{(ii))}, \tag{14}$$

$$\frac{F(z,u_n^+)}{\|u_n^+\|^2} = \frac{F(z,u_n^+)}{(u_n^+)^2}\,y_n^2 \to +\infty \ \text{for a.a. } z \in \Omega^* \ \text{(see hypothesis } H(f)\text{(ii))}. \tag{15}$$

It follows from (14), (15) and Fatou's lemma that

$$\lim_{n \to \infty}\left[\lambda\int_\Omega \frac{G(z,u_n^+)}{\|u_n^+\|^2}\,dz + \int_\Omega \frac{F(z,u_n^+)}{\|u_n^+\|^2}\,dz\right] = +\infty. \tag{16}$$

On the other hand, (6) and (9) imply that

$$\lambda\int_\Omega \frac{G(z,u_n^+)}{\|u_n^+\|^2}\,dz + \int_\Omega \frac{F(z,u_n^+)}{\|u_n^+\|^2}\,dz \leq \frac{M_1}{\|u_n^+\|^2} + \frac{1}{2}\vartheta(y_n) + \frac{\mu}{2}\|y_n\|_2^2 \leq M_4 \tag{17}$$

for some $M_4 > 0$ and all $n \in \mathbb{N}$. Comparing (16) and (17) we obtain a contradiction.

Next, suppose that $y = 0$. For $\eta > 0$ we set $\hat{y}_n = (2\eta)^{1/2} y_n$. Then $\hat{y}_n \xrightarrow{w} 0$ in $H^1(\Omega)$ and so we have

$$\int_\Omega G(z,\hat{y}_n)\,dz \to 0 \ \text{and} \ \int_\Omega F(z,\hat{y}_n)\,dz \to 0. \tag{18}$$

Since $\|u_n^+\| \to \infty$, we can find $n_1 \in \mathbb{N}$ such that

$$(2\eta)^{1/2}\frac{1}{\|u_n^+\|} \in (0,1] \ \text{for all } n \geq n_1. \tag{19}$$

We choose $t_n \in [0,1]$ such that

$$\hat{\varphi}_\lambda(t_n u_n^+) = \max[\hat{\varphi}_\lambda(t u_n^+) : 0 \leq t \leq 1]. \tag{20}$$

From (19), (20) we have

$$\hat{\varphi}_\lambda(t_n u_n^+) \geq \hat{\varphi}_\lambda(\hat{y}_n) = \frac{1}{2}\vartheta(\hat{y}_n) - \lambda\int_\Omega G(z,\hat{y}_n)\,dz - \int_\Omega F(z,\hat{y}_n)\,dz \geq \eta - \lambda\int_\Omega G(z,\hat{y}_n)\,dz - \int_\Omega F(z,\hat{y}_n)\,dz \geq \frac{1}{2}\eta \ \text{for all } n \geq n_2 \geq n_1 \ \text{(see (18))}. \tag{21}$$

Since $\eta > 0$ is arbitrary, we infer from (21) that

$$\hat{\varphi}_\lambda(t_n u_n^+) \to +\infty \ \text{as } n \to \infty. \tag{22}$$

We know that

$$\hat{\varphi}_\lambda(0) = 0 \ \text{and} \ \hat{\varphi}_\lambda(u_n^+) \leq M_5 \ \text{for some } M_5 > 0 \ \text{and all } n \in \mathbb{N} \ \text{(see (6) and (9))},$$

$$\Rightarrow \ t_n \in (0,1) \ \text{for all } n \geq n_3 \ \text{(see (22))}.$$

So, (20) implies that

$$t_n\frac{d}{dt}\hat{\varphi}_\lambda(t u_n^+)\Big|_{t = t_n} = 0, \tag{23}$$

$$\Rightarrow \ \langle \hat{\varphi}'_\lambda(t_n u_n^+), t_n u_n^+ \rangle = 0 \ \text{(by the chain rule)},$$

$$\Rightarrow \ \vartheta(t_n u_n^+) = \int_\Omega [\lambda g(z, t_n u_n^+) + f(z, t_n u_n^+)](t_n u_n^+)\,dz \ \text{for all } n \geq n_3.$$

We have … Then hypothesis …
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9946456551551819, "perplexity": 788.9243868343932}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655911092.63/warc/CC-MAIN-20200710144305-20200710174305-00591.warc.gz"}
https://www.generalacoustics.com/hydro/
EFFICIENT MEASUREMENTS

Log_aFlow: Hydrodynamic ADCP data-evaluation software for rapid flow-chart generation, based on real measurements. The resulting flow charts show velocity, vorticity (turbulence) and divergence (quality).

Surface Flow Sensor: An ideal sensor for water-flow monitoring. It is particularly suitable for flow measurement in open flumes, rivers and lakes, as well as in coastal areas.

TidePredictor: Tide and current measurements, calculation of harmonic content, and incorporation of wind-speed and barometric-pressure data.

Log_aLevel: Our standard tide gauge for water-level and wave measurement. Extendable to include hydrological and meteorological sensors.

Log_aLevel Mobile: Mobile tide gauge for water-level and wave measurement. Extendable to include hydrological and meteorological sensors.

LOG_aLevel (Long Range): Tide gauge for long measurement ranges (>10 meters) of water level and waves. Extendable to include hydrological and meteorological sensors.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8301880359649658, "perplexity": 14005.537112168082}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710691.77/warc/CC-MAIN-20221129100233-20221129130233-00108.warc.gz"}
http://tex.stackexchange.com/questions/12930/bibtex-with-plain-tex?answertab=votes
BibTeX with Plain TeX The manpage of bibtex says that it can be used with both LaTeX and TeX. However, I did not find any resource how to do it and also no TeX book of mine explains it. Can someone provide a minimalistic example? - You can look at the btxmac.tex from Eplain (usually found in texmf-dist/tex/eplain). It includes an example of use with plain TeX. - Here is the example given in http://www.tug.org/TUGboat/tb24-1/patashnik.pdf: \input btxmac The \TeX{}book~\cite{knuth:tex} is good. \medskip \leftline{\bf References} \bibliography{mybib} \bibliographystyle{plain} \bye There is nothing really special. - The eplain manual describes how to use bibtex with some of the eplain commands. It looks really similar to the way it's done in LaTeX. - Could you provide a working example? – qubyte Feb 1 '12 at 5:47
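To answer the thread's last request for a working example: the Patashnik snippet above compiles as-is once a bibliography database exists next to it. Here is a minimal companion `mybib.bib` (a sketch; the key `knuth:tex` matches the \cite in the example, but the field values are just illustrative) together with the build sequence:

```bibtex
% mybib.bib -- one-entry BibTeX database for the plain TeX example above
@book{knuth:tex,
  author    = {Donald E. Knuth},
  title     = {The {\TeX}book},
  publisher = {Addison-Wesley},
  year      = {1984},
}
```

Save the plain TeX example as, say, `doc.tex`, then run `tex doc`, then `bibtex doc`, then `tex doc` twice more so the citation labels resolve.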
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9766854643821716, "perplexity": 1727.602172487724}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461860116886.38/warc/CC-MAIN-20160428161516-00125-ip-10-239-7-51.ec2.internal.warc.gz"}
https://itectec.com/matlab/matlab-matrix-operation-with-equation/
# MATLAB: Matrix operation with equation

Tags: matrices, matrix, matrix manipulation

I have a 5×1 matrix: I = [-6;-5;9;-7;-3]; How do I perform, in the MATLAB command window, the calculation A(I) = 1/(1+e^-I) for each value in this matrix (I = -6, -5, 9, -7 and -3), and return the answer as a 5×1 matrix? Thanks.

• A = 1./(1+exp(-I));
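For comparison, the same elementwise computation in Python with NumPy (a sketch for readers without MATLAB; the variable names simply mirror the question):

```python
import numpy as np

# 5x1 column vector of inputs, mirroring the MATLAB matrix in the question.
I = np.array([-6.0, -5.0, 9.0, -7.0, -3.0]).reshape(5, 1)

# Elementwise logistic function A = 1 / (1 + e^(-I));
# NumPy broadcasts the scalar operations over every entry.
A = 1.0 / (1.0 + np.exp(-I))

print(A)  # 5x1 array of values in (0, 1)
```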
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8671274781227112, "perplexity": 1883.8921712984277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243989856.11/warc/CC-MAIN-20210511184216-20210511214216-00429.warc.gz"}
http://forums.randi.org/showthread.php?t=150007&page=2
JREF Forum thread: Merged: Studying Sharma's equation on Linear Field Equations

Tags: gravity, hydrogen, kinetic, linear, Nordstrom, nucleosynthesis

6th August 2009, 01:17 PM #41 ben m Illuminator Join Date: Jul 2006 Posts: 4,651 Dedicate was a fine word, I missed the intended sense (perfectly clear in retrospect). In any case, the word "Sharma" does not appear in the full-text search of the book---must be some other crackpot, there's no shortage of them.

6th August 2009, 04:56 PM #42 Singularitarian Banned Join Date: Jul 2009 Posts: 1,008 Originally Posted by Ziggurat I said absolutely nothing about the charge on any antiparticle, the sign of which is quite obviously irrelevant. So not only are you wrong about antiparticles having negative energy, you fundamentally misunderstand what I have said. But let's suppose that an antiparticle has negative energy. What should happen when a particle and an antiparticle annihilate each other? Why, nothing: energy is conserved, a + and a - energy add to zero, so that's the end of the story. And what should be required to make such a pair? Again, nothing: I can create a positive and negative energy pair from zero starting energy, so real pairs (not just virtual pairs) can pop out of nowhere. But that's not what happens. When a positron and an electron annihilate each other, it creates TWO photons, each with the same energy as the electron's rest mass. Which means the positron has the same energy as the electron. And what if I want to make a positron-electron pair? I cannot do so with zero energy. In fact, if I want to do single-photon production (whack a heavy nucleus with it), I need that photon to have TWICE the energy of the electron, because I need to create an electron and a positron which BOTH have positive energy. So you are wrong. Completely and utterly wrong. Where on earth did you get such a foolish idea? Look, your fooling no one. Any scientist knows that in a Hamiltonian the energy-equivalance is best described with a negative matter solution, and this has been worked on by nearly any university at some time. You said i was wrong, and i was not. I even linked you to varification, and you are still sitting there telling me i was wrong. Sigh* You obviously have no conceptual knowledge of the Dirac Sea, and how its predictions of the antiparticle come from a negative sea of spinning quantum virtual particles. It's been varified time and time again, with the added problem its entire energy is about $10^{122}$ magnitudes of energy more than what should be expected.

6th August 2009, 05:13 PM #43 Ziggurat Penultimate Amazing Join Date: Jun 2003 Posts: 26,199 Originally Posted by Singularitarian Look, your fooling no one. And yet, multiple other posters have also said that antiparticles have positive energy. So if I'm wrong, and I'm not fooling them, who is? The rest mass energy of an electron is +511 keV. What is the rest mass energy of a positron? If an electron and a positron annihilate each other, how much energy should be released?
(And as an aside, the proper English in this case is "you're", which is a contraction of "you are", not "your", which is the possessive form of "you") (Oh, and simple superscripts are much easier to use and read than latex in many cases. For example, 10122. If you quote this message, you can see how to use superscripts.) __________________ "As long as it is admitted that the law may be diverted from its true purpose -- that it may violate property instead of protecting it -- then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and all-absorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious." - Bastiat, The Law Last edited by Ziggurat; 6th August 2009 at 05:15 PM. 6th August 2009, 05:17 PM #44 ben m Illuminator   Join Date: Jul 2006 Posts: 4,651 Originally Posted by Singularitarian Look, your fooling no one. Any scientist knows that in a Hamiltonian the energy-equivalance is best described with a negative matter solution, and this has been worked on by nearly any university at some time. You said i was wrong, and i was not. I even linked you to varification, and you are still sitting there telling me i was wrong. Sigh* You obviously have no conceptual knowledge of the Dirac Sea, and how its predictions of the antiparticle come from a negative sea of spinning quantum virtual particles. It's been varified time and time again, with the added problem its entire energy is about $10^{122}$ magnitudes of energy more than what should be expected. "Any scientist"? Funny, you're on a board with multiple professional physicists and nobody seems to agree with you. Sol explained it very clearly: the Dirac Sea was the first method used to predict/describe antimatter. We still teach it in intro-particle-physics courses because it's kind of neat how the math works. Viewed in more detail, it is not a good description of the real world, and the correct version is taught later in those same courses. You haven't taken those courses yet, Sing, so you've missed half the picture. (Never in this process does E = -mc^2 enter a kinematic equation; even in the Dirac Sea picture you can only do kinematics with "holes", or antiparticles, for which E = mc^2. Also, the Dirac sea would be made of real, not virtual particles---you are mistaken in associating it with the famous factor of 10^122 which is all virtual.) 6th August 2009, 05:17 PM #45 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 No that is right. Real antiparticles have the positive energy you refer to. Where you are wrong is when you told me the equation $E=\pm Mc^2$ was wrong. That is why you where wrong, not for what you think. At least now, you do know that virtual antiparticles are described that way from a Hamiltonian viewpoint. 6th August 2009, 05:18 PM #46 nescafe Caffeinated Beverage     Join Date: Apr 2006 Location: Just above the coffeemaker Posts: 864 Originally Posted by Singularitarian You obviously have no conceptual knowledge of the Dirac Sea, and how its predictions of the antiparticle come from a negative sea of spinning quantum virtual particles. It's been varified time and time again, with the added problem its entire energy is about magnitudes of energy more than what should be expected. More like the concept of the Dirac Sea was rendered obsolete in the 30s when quantum field theory was formulated. 
As a bonus, QFT does not demand that there be an infinite sea of negative energy that is balanced by the vacuum having infinite positive energy. 6th August 2009, 05:18 PM #47 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 Anyway, off your high horse. I prove you where wrong in the link i gave you, if you even bothered educating yourself. 6th August 2009, 05:20 PM #48 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 Originally Posted by nescafe More like the concept of the Dirac Sea was rendered obsolete in the 30s when quantum field theory was formulated. As a bonus, QFT does not demand that there be an infinite sea of negative energy that is balanced by the vacuum having infinite positive energy. I think you will find that the Dirac Sea did correctly predict the antiparticle. The good thing here is that the sea is replaced by a more logical and also varified existence, taking the form of the ZPF. 6th August 2009, 05:23 PM #49 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 In this case, if we do not require a normalization, then it's still strange how there is too much energy, more than what is in the observable universe. In QFT, i can assure you its still a problem, because the ZPF is an infinite energy-filling resviour of negative potential particles. The renormalization might be as simple as a quantum ''cut-off'' in the region of particles in the vacuum, or at least, this is what has been suggested 6th August 2009, 05:23 PM #50 Ziggurat Penultimate Amazing     Join Date: Jun 2003 Posts: 26,199 The rest mass energy of an electron is +511 keV. What is the rest mass energy of a positron? If an electron and a positron annihilate each other, how much energy should be released? __________________ "As long as it is admitted that the law may be diverted from its true purpose -- that it may violate property instead of protecting it -- then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and all-absorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious." - Bastiat, The Law 6th August 2009, 05:26 PM #51 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 It releases due to conservation 1022KeV of energy, in the form of two photons. It can also be seen as a form of decay, but this has absolutly nothing to do with what is being said. You are completely off-topic. You're arguing for a real antiparticle, the Hamiltonian of E=Mc^2 leads to a negative solution for virtual particles. Do you know the difference? 6th August 2009, 05:45 PM #52 Ziggurat Penultimate Amazing     Join Date: Jun 2003 Posts: 26,199 Originally Posted by Singularitarian It releases due to conservation 1022KeV of energy, in the form of two photons. It can also be seen as a form of decay, but this has absolutly nothing to do with what is being said. You are completely off-topic. You're arguing for a real antiparticle, the Hamiltonian of E=Mc^2 leads to a negative solution for virtual particles. Do you know the difference? Yes, I do know the difference. But first off, you never specified virtual particles only, and second, you're still wrong. 
__________________ "As long as it is admitted that the law may be diverted from its true purpose -- that it may violate property instead of protecting it -- then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and all-absorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious." - Bastiat, The Law 6th August 2009, 05:53 PM #53 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 lol!! I never specified that? I certainly did when i linked you to the Dirac Sea yonks ago. And i am not wrong, just because ''you say so-method.'' lol Just admit, you did not know that the mass in the Hamiltonian of E=Mc^2 is actually E=\pm Mc^2, and has nothing to do with real particles. If you read the link, you might have saved yourself all this trouble. 6th August 2009, 05:54 PM #54 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 I also clarified it in post 42 as well. Try another tactic. 6th August 2009, 05:59 PM #55 Ziggurat Penultimate Amazing     Join Date: Jun 2003 Posts: 26,199 Originally Posted by Singularitarian lol!! I never specified that? I certainly did when i linked you to the Dirac Sea yonks ago. No. The Dirac sea is supposed to be real particles, not virtual particles. Quote: And i am not wrong, just because ''you say so-method.'' No, you're wrong because what you say has no relationship to reality. __________________ "As long as it is admitted that the law may be diverted from its true purpose -- that it may violate property instead of protecting it -- then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and all-absorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious." - Bastiat, The Law 6th August 2009, 06:08 PM #56 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 No, the Dirac Sea is composed of virtual energy. But it is actually the ZPF which now takes its identity. This also a sea of virtual potential energy. Please go learn this stuff. Only when there sufficient energy in a given slice $\sum$ can a virtual particle appear commondating the exitence of its negative or opposite solution. So the Dirac Sea cannot consist of real matter, or all-space would be consumed by its energy. Real matter is the stuff you and me is made of. Potential matter exists beyond the threshold of observation, but still have real effects in the world, dispite them having the unusual properties virtual particles have. 6th August 2009, 06:28 PM #57 Floyt Chordate     Join Date: Apr 2003 Location: Cape Town! Not mugged yet. Looking for chameleons. Posts: 1,426 (Continually insulting those who continually demonstrate a better grounded knowledge than you makes one heck of an impression on observers. Just sayin'.) __________________ They had no god; they had no gods; they had no faith. What they appear to have had is a working metaphor. - Ursula K. Le Guin, "Always Coming Home" 6th August 2009, 06:40 PM #58 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 Better grasp? Is everyone on magic mushrooms on this site or something. Who's side are you watching? He argued the Hamiltonian equation $E=\pm Mc^2$ was false. I showed him he was wrong, actually countless times i've showed he was wrong. 
Amazingly, he's been able to even fool you. He never understood the math, and he can't even apologize. He also accused me of not warning him it was about virtual particles, which was absolutely not the case as well. 6th August 2009, 07:32 PM #59 ben m Illuminator   Join Date: Jul 2006 Posts: 4,651 Originally Posted by Singularitarian lol!! I never specified that? I certainly did when i linked you to the Dirac Sea yonks ago. Your first post in this thread seems to have something to do with a mildly-relativistic particle falling under uniform gravity---not virtual particle, nor an antiparticle, nor a component of the Dirac sea. Exactly the sort of thing for which only the + solution is meaningful. Nonetheless, your negative sign is there. 6th August 2009, 10:07 PM #60 Ziggurat Penultimate Amazing     Join Date: Jun 2003 Posts: 26,199 Originally Posted by Singularitarian Better grasp? Is everyone on magic mushrooms on this site or something. If everyone around you seems crazy, maybe it's you. __________________ "As long as it is admitted that the law may be diverted from its true purpose -- that it may violate property instead of protecting it -- then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and all-absorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious." - Bastiat, The Law 7th August 2009, 12:39 AM #61 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 Originally Posted by ben m Your first post in this thread seems to have something to do with a mildly-relativistic particle falling under uniform gravity---not virtual particle, nor an antiparticle, nor a component of the Dirac sea. Exactly the sort of thing for which only the + solution is meaningful. Nonetheless, your negative sign is there. I think you'll find its use was obsolete anyway due to an error i had made, so it doesn't matter. What does matter, is when i am told i am wrong when i say ''the hamiltonian expresses E=Mc^2 with a negative solution in respect with the vacuum energy, and takes the form of the positron, and hence, other antiparticles.'' This is not wrong, but i was told i was. How long are we going to keep this up? 7th August 2009, 12:43 AM #62 lionking In the Peanut Gallery     Join Date: Jan 2007 Location: Melbourne Posts: 29,653 Originally Posted by Singularitarian I think you'll find its use was obsolete anyway due to an error i had made I thought everyone else made errors but you. Just out of curiosity, how old are you? __________________ A fanatic is one who can't change his mind and won't change the subject. Sir Winston Churchill 7th August 2009, 12:46 AM #63 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 No, that's you just being patronizing. >Age; old enough. 7th August 2009, 12:57 AM #64 lionking In the Peanut Gallery     Join Date: Jan 2007 Location: Melbourne Posts: 29,653 Originally Posted by Singularitarian patronizing. The irony is breathtaking. __________________ A fanatic is one who can't change his mind and won't change the subject. Sir Winston Churchill 7th August 2009, 01:01 AM #65 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 Not as breathtaking as your ego, might I add. 7th August 2009, 02:30 AM #66 edd Graduate Poster     Join Date: Nov 2007 Posts: 1,556 Originally Posted by Singularitarian It releases due to conservation 1022KeV of energy, in the form of two photons. 
It can also be seen as a form of decay, but this has absolutly nothing to do with what is being said. You are completely off-topic. You're arguing for a real antiparticle, the Hamiltonian of E=Mc^2 leads to a negative solution for virtual particles. Do you know the difference? It has everything to do with it, but you missed what I would consider the best response to RC - namely that the positron is supposed to be an absence of a negative energy particle in the Dirac Sea model, and two negatives (the absence of a negative energy) make a positive. Also the Dirac Sea particles are not virtual - at least not according to any useful definition I can think of. Anyway, the Dirac Sea as an idea has some unpleasant properties, and furthermore it seems to me ben m is quite right in noting the error in the original placement of the - sign in your original post. __________________ When I look up at the night sky and think about the billions of stars out there, I think to myself: I'm amazing. - Peter Serafinowicz 7th August 2009, 05:01 AM #67 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 Originally Posted by edd It has everything to do with it, but you missed what I would consider the best response to RC - namely that the positron is supposed to be an absence of a negative energy particle in the Dirac Sea model, and two negatives (the absence of a negative energy) make a positive. Also the Dirac Sea particles are not virtual - at least not according to any useful definition I can think of. Anyway, the Dirac Sea as an idea has some unpleasant properties, and furthermore it seems to me ben m is quite right in noting the error in the original placement of the - sign in your original post. First off, it wasn't Ben. Secondly, tell me then how you reconcile the obviously contrary work: http://en.wikipedia.org/wiki/Dirac_sea So if i was wrong, how come the page here explains that the equation is not wrong, as thus expressed in a Hamitonian? Now, stop defending someone, when you don't even know the facts. 7th August 2009, 05:36 AM #68 sol invictus Philosopher     Join Date: Oct 2007 Location: Nova Roma Posts: 8,419 Originally Posted by Singularitarian Secondly, tell me then how you reconcile the obviously contrary work: http://en.wikipedia.org/wiki/Dirac_sea So if i was wrong, how come the page here explains that the equation is not wrong, as thus expressed in a Hamitonian? That article needs to be re-written. 7th August 2009, 05:41 AM #69 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 But it couldn't get any simpler in that article. This is how i have learned it independantly as well. I know its right. 7th August 2009, 06:47 AM #70 DazzaD Critical Thinker   Join Date: Jul 2006 Location: Romford Posts: 303 Wikipedia is an excellent tool and one I often recommend to my students. What I also make clear is they should never really completely trust ANY single source of information, and that includes myself, and that they should be especially careful when quoting from websites. The fact that the article doesn't have a single source or reference should make any student go "hmmmm" and means they may have to dig a little deeper or ask a few more people before they take every single word and symbol as "gospel". 7th August 2009, 07:19 AM #71 ben m Illuminator   Join Date: Jul 2006 Posts: 4,651 Originally Posted by Singularitarian First off, it wasn't Ben. 
Secondly, tell me then how you reconcile the obviously contrary work: http://en.wikipedia.org/wiki/Dirac_sea So if i was wrong, how come the page here explains that the equation is not wrong, as thus expressed in a Hamitonian? Now, stop defending someone, when you don't even know the facts. Sing: I know what the Wiki article says. In fact, I specifically said to you "You seem to be quoting the Wiki article". What made me think so? You keep tossing in the word "Hamiltonian" without knowing what it means, as though your main exposure to the word Hamiltonian were in the intro to the equation you want to force on everyone. You don't dare leave out the phrase "as expressed in a hamiltonian" because you have no way of telling, based on that one Wiki sentence, what it would mean to leave it out. The physicists here have all agreed that: 1) Yes, if you allow E= -mc^2 (and some other assumptions) you can predict the Dirac sea 2) The Dirac sea would be a sea of *real* particles whose "holes" are *real* (not virtual), antiparticles 3) That's the first, last, and only use for E=-mc^2; since the Dirac sea has all sorts of other horrible properties, physicists think it does not really exist Sol is right, that article needs to be rewritten. 7th August 2009, 04:47 PM #72 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 Originally Posted by ben m Sing: I know what the Wiki article says. In fact, I specifically said to you "You seem to be quoting the Wiki article". What made me think so? You keep tossing in the word "Hamiltonian" without knowing what it means, as though your main exposure to the word Hamiltonian were in the intro to the equation you want to force on everyone. You don't dare leave out the phrase "as expressed in a hamiltonian" because you have no way of telling, based on that one Wiki sentence, what it would mean to leave it out. The physicists here have all agreed that: 1) Yes, if you allow E= -mc^2 (and some other assumptions) you can predict the Dirac sea 2) The Dirac sea would be a sea of *real* particles whose "holes" are *real* (not virtual), antiparticles 3) That's the first, last, and only use for E=-mc^2; since the Dirac sea has all sorts of other horrible properties, physicists think it does not really exist Sol is right, that article needs to be rewritten. I've solved plenty Hamiltonians. What surprises me is the continuous dogmnatism between some people here, despite the evidence flying in their faces. At least, this way, i differ somewhat. By the way, no negative solutions equals a true positive real matter particle. Only in the appearance with a real electron, unless disturbed by the CP-Violation, then its appearance is simultaneous with a *real* particle which is its antithesis. I can assure you, before such an appearance of a real electron does a real positron appear. Until then, it does not abide by natural energy-momentum laws, nor does it apply generally with the matter we observe frequently. Deny this, and you are a fool. 7th August 2009, 04:51 PM #73 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 I was reading today, funnily enough, Doctor Wolfs 'Spritual Universe...' Don't let the name threat you - it's quite a good read, and he actually talked about the Dirac Sea, in the third chapter if i remember rightly. He descrived it as an ''energy-filling vacuum of potential particles, with a negative energy.'' And ''The motion of the electron is buffetted by these virtual particles.'' (hence, the periodic time, internal and fundamental to the electron). 
And ''When an electron appears in spacetime, a positron appears also.'' This is when the particles become ''real.'' 7th August 2009, 05:03 PM #74 Ziggurat Penultimate Amazing     Join Date: Jun 2003 Posts: 26,199 Originally Posted by Singularitarian I was reading today, funnily enough, Doctor Wolfs 'Spritual Universe...' Don't let the name threat you - it's quite a good read, and he actually talked about the Dirac Sea, in the third chapter if i remember rightly. So rather than cite texts which are meant to teach physics, you're referencing a text which is, at its heart, about religion. That is unpersuasive, to put it charitably. __________________ "As long as it is admitted that the law may be diverted from its true purpose -- that it may violate property instead of protecting it -- then everyone will want to participate in making the law, either to protect himself against plunder or to use it for plunder. Political questions will always be prejudicial, dominant, and all-absorbing. There will be fighting at the door of the Legislative Palace, and the struggle within will be no less furious." - Bastiat, The Law 7th August 2009, 05:14 PM #75 ben m Illuminator   Join Date: Jul 2006 Posts: 4,651 Originally Posted by Singularitarian I've solved plenty Hamiltonians. Are you sure? I've never assigned or been assigned the task of "solving a Hamiltonian". Quote: By the way, no negative solutions equals a true positive real matter particle. Only in the appearance with a real electron, unless disturbed by the CP-Violation, then its appearance is simultaneous with a *real* particle which is its antithesis. I can assure you, before such an appearance of a real electron does a real positron appear. Until then, it does not abide by natural energy-momentum laws, nor does it apply generally with the matter we observe frequently. Deny this, and you are a fool. I can dodge the "fool" bullet (whew!) in this case---I can't deny "this" because it does not make any sense. I can discern, in the second sentence, something like "electrons and positrons are created in pairs"---is that your point? That's true (and it's true of virtual as well as real electrons). The first and third sentences are incomprehensible. The fourth may contain something like "Virtual particles are allowed nonzero energies only thanks to Heisenberg's Uncertainty Principle"---is that what you meant?---which is true, and perhaps also some other statement (the second clause, basically) which is incomprehensible. 7th August 2009, 05:20 PM #76 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 Yeh, sure, when we have to solve to find a certain condition of the Hamiltonian. Don't be circular in your specificies. Either way, anyone who has come here, will see that the initial poster you defended was wholey wrong. You've made yourself out to be a fool, so i cannot even continue with this. I've explained, linked, and this still is not enough, so why should i continue an endless battle, which is pretty much boring to read. 7th August 2009, 05:22 PM #77 Singularitarian Banned   Join Date: Jul 2009 Posts: 1,008 Originally Posted by Ziggurat So rather than cite texts which are meant to teach physics, you're referencing a text which is, at its heart, about religion. That is unpersuasive, to put it charitably. No it's not about religion actually, but brief mentions, maybe at least three times throughout the whole book. Most of it is to do with scientists he met, and how he came to understand quantum physics can model a soul for the observer. 
7th August 2009, 05:22 PM #78 ben m Illuminator Join Date: Jul 2006 Posts: 4,651 Originally Posted by Ziggurat So rather than cite texts which are meant to teach physics, you're referencing a text which is, at its heart, about religion. That is unpersuasive, to put it charitably. Keep in mind, his other citations have been mainly (a) unread Google search summaries and (b) non-refereed crackpot journals (Journal of Theoretics, Concepts of Physics) and (c) crackpot web pages (Calphysics). This makes Fred Wolfs look like Halliday and Resnick by comparison.

7th August 2009, 05:26 PM #79 Singularitarian Banned Join Date: Jul 2009 Posts: 1,008 Though he has still been a ''professor of physics,'' at at least four universities and colleges, he has been an award winning author of ''taking the quantum leap,'' and he was the best seller for a year... so, yeh, he must be totally cranked.

7th August 2009, 05:30 PM #80 ben m Illuminator Join Date: Jul 2006 Posts: 4,651 Originally Posted by Singularitarian I've explained, linked, and this still is not enough I am willing to drop the (basically academic) point about whether or not the negative sign which Dirac used in his "Dirac Sea" model actually has anything to do with reality. Sing says "yes", Wikipedia's article contains sentences which say "yes" and paragraphs which say "no", and several qualified physicists here say "no, not at all". This is all rather divorced from your first-post long essay, which (a) presumably you thought would be an interesting thing to discuss, (b) has absolutely nothing to do with the Dirac Sea, and (c) tosses a negative sign into things which look like kinematic equations. What was that all about?
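For reference, the sign the thread keeps circling around comes from the standard relativistic energy-momentum relation (the textbook statement, not any single poster's formulation):

$$E^2 = (pc)^2 + (mc^2)^2 \quad\Longrightarrow\quad E = \pm\sqrt{(pc)^2 + (mc^2)^2},$$

so a particle at rest ($p = 0$) formally admits the two roots $E = \pm mc^2$. Observed electrons and positrons both carry the positive root; Dirac's sea was one historical reinterpretation of the negative root, later superseded by quantum field theory.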
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 6, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6983689665794373, "perplexity": 1730.3901434284767}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2013-20/segments/1368702447607/warc/CC-MAIN-20130516110727-00068-ip-10-60-113-184.ec2.internal.warc.gz"}
https://kb.osu.edu/dspace/handle/1811/11943
# EXPERIMENTAL DETERMINATION TO LARGE INTERNUCLEAR SEPARATION OF THE $^1\Sigma^{+}$ STATE ELECTRIC DIPOLE MOMENT FUNCTION OF CO

Creators: Chackerian, C., Jr.; Farrenq, R.; Guelachvili, G.; Rossetti, C.; Urban, W.

Issue Date: 1983

Publisher: Ohio State University

Abstract: We have experimentally determined the EDMF of CO's ground electronic state out to about the classical turning points of the $v = 40$ level. The Padé approximant representation of the EDMF is determined via a non-linear least-squares fit which combines numerically obtained vibrational wavefunctions with experimentally determined vibrational band intensities, for which $\Delta v = 1, 2, 3$ and 40, for both emission spectra, by considering pairs of vibrational transitions from common upper vibrational states and assuming a well-defined rotational temperature. These results should be useful in the interpretation of solar infrared spectra.

Description: Author Institution: NASA Ames Research Center, Astrophysical Experiments Branch; Laboratoire d'Infrarouge, Université de Paris-Sud, Campus d'Orsay; Institut für Angewandte Physik der Universität Bonn, Wegelerstr. 8, D-5300 Bonn

URI: http://hdl.handle.net/1811/11943

Other Identifiers: 1983-TA-04
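To make the "Padé approximant representation" concrete, here is a minimal Python sketch of evaluating such a rational form $M(r) = P(r)/Q(r)$; the function and the coefficients below are illustrative placeholders, not the fitted CO values from the paper:

```python
import numpy as np

def pade(r, a, b):
    """Evaluate a Pade approximant P(r)/Q(r) with P(r) = a0 + a1*r + ...
    and Q(r) = 1 + b1*r + ... (coefficient arrays given lowest order first)."""
    num = np.polyval(a[::-1], r)             # numerator polynomial P(r)
    den = 1.0 + r * np.polyval(b[::-1], r)   # denominator Q(r), constant term fixed at 1
    return num / den

# Hypothetical toy coefficients, only to show the call shape:
r = np.linspace(0.9, 2.5, 5)                 # internuclear separations (arbitrary units)
M = pade(r, a=np.array([-0.1, 0.6, -0.2]), b=np.array([0.05]))
print(M)
```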
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7895544767379761, "perplexity": 10658.044174504303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042989790.89/warc/CC-MAIN-20150728002309-00227-ip-10-236-191-2.ec2.internal.warc.gz"}
https://collegemathteaching.wordpress.com/2014/09/02/using-convolutions-and-fourier-transforms-to-prove-the-central-limit-theorem/
# College Math Teaching

## September 2, 2014

### Using convolutions and Fourier Transforms to prove the Central Limit Theorem

Filed under: probability, by collegemathteaching @ 5:40 pm

I've used the presentation in our Probability and Statistics text; it is appropriate given that many of our students haven't seen the Fourier Transform. But this presentation is excellent.

Upshot: use the convolution to derive the density function for $S_n = X_1 + X_2 + \dots + X_n$ (a sum of independent, identically distributed random variables of finite variance), assume the mean is zero and the variance is 1, and divide $S_n$ by $\sqrt{n}$ so that the normalized sum has variance 1. Then apply the Fourier transform to the whole thing (the normalized version) to turn convolution into products, use the definition of the Fourier transform and the Taylor series for the $e^{i 2 \pi x \frac{s}{\sqrt{n}}}$ terms, discard the high-order terms, take the limit as $n$ goes to infinity and obtain a Gaussian, which, of course, inverse Fourier transforms to another Gaussian.
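For readers who want the skeleton of that argument, here is a compressed version (a sketch, using one common convention $\hat{f}(s) = \int_{\mathbb{R}} e^{2\pi i s x} f(x)\,dx$; the linked presentation carries the details). If $f$ is the common density with mean 0 and variance 1, the density of $S_n/\sqrt{n}$ is an $n$-fold convolution, so its transform is a product:

$$\widehat{f_{S_n/\sqrt{n}}}(s) = \hat{f}\!\left(\frac{s}{\sqrt{n}}\right)^{\!n}, \qquad \hat{f}\!\left(\frac{s}{\sqrt{n}}\right) = 1 - \frac{2\pi^2 s^2}{n} + o\!\left(\frac{1}{n}\right),$$

where the expansion comes from Taylor-expanding the exponential and using $\mathbb{E}X = 0$, $\mathbb{E}X^2 = 1$. Hence

$$\hat{f}\!\left(\frac{s}{\sqrt{n}}\right)^{\!n} \longrightarrow e^{-2\pi^2 s^2} \quad (n \to \infty),$$

which is the Fourier transform of the standard Gaussian density $\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$.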
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 5, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9931216239929199, "perplexity": 382.19635535090083}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218188550.58/warc/CC-MAIN-20170322212948-00660-ip-10-233-31-227.ec2.internal.warc.gz"}
https://platops.tech/delete-lines-usig-sed-command/
# Linux - sed command

June 27, 2019, by Jinna

## Delete lines using sed command

In the following examples, the sed command removes lines that sit at a particular position in a file.

#### Delete first line or header line

The d option in the sed command is used to delete a line. The syntax for deleting a line is:

```
sed 'Nd' file
```

Here N indicates the Nth line in a file. In the following example, the sed command removes the first line in a file.

```
sed '1d' file
unix
fedora
debian
ubuntu
```

The following sed command is used to remove the footer line in a file. The $ indicates the last line of a file.

```
sed '$d' file
linux
unix
fedora
debian
```

#### Delete particular line

This is similar to the first example. The below sed command removes the second line in a file.

```
sed '2d' file
linux
fedora
debian
ubuntu
```

#### Delete range of lines

The sed command can be used to delete a range of lines. The syntax is shown below:

```
sed 'm,nd' file
```

Here m and n are the min and max line numbers. The sed command removes the lines from m to n in the file. The following sed command deletes the lines ranging from 2 to 4:

```
sed '2,4d' file
linux
ubuntu
```

#### Delete lines other than the first line or header line

Use the negation (!) operator with the d option in the sed command. The following sed command removes all the lines except the header line.

```
sed '1!d' file
linux
```

#### Delete lines other than the last line

```
sed '$!d' file
ubuntu
```

#### Delete lines other than the specified range

```
sed '2,4!d' file
unix
fedora
debian
```

Here the sed command removes lines other than the 2nd, 3rd and 4th.

#### Delete first and last line

You can specify the list of lines you want to remove in the sed command with a semicolon as a delimiter.

```
sed '1d;$d' file
unix
fedora
debian
```

#### Delete empty lines or blank lines

```
sed '/^$/d' file
```

The ^$ tells the sed command to delete empty lines. However, this does not remove lines that contain only whitespace.

## Sed command to delete lines based on pattern match

In the following examples, the sed command deletes the lines in the file which match the given pattern.

#### Delete lines that begin with specified character

```
sed '/^u/d' file
linux
fedora
debian
```

^ specifies the start of the line. The above sed command removes all the lines that start with the character 'u'.

#### Delete lines that end with specified character

```
sed '/x$/d' file
fedora
debian
ubuntu
```

$ indicates the end of the line. The above command deletes all the lines that end with the character 'x'.

#### Delete lines which are in upper case or capital letters

```
sed '/^[A-Z]*$/d' file
```

#### Delete lines that contain a pattern

```
sed '/debian/d' file
linux
unix
fedora
ubuntu
```

#### Delete lines starting from a pattern till the last line

```
sed '/fedora/,$d' file
linux
unix
```

Here the sed command removes the line that matches the pattern fedora and also deletes all the lines to the end of the file which appear after this matching line.

#### Delete last line only if it contains the pattern

```
sed '${/ubuntu/d;}' file
linux
unix
fedora
debian
```

Here $ indicates the last line. If you want to delete the Nth line only if it contains a pattern, then place the line number in place of $.

Note: In all the above examples, the sed command prints the contents of the file on the unix or linux terminal with the lines removed. However, the sed command does not remove the lines from the source file.
To remove the lines from the source file itself, use the -i option with the sed command:

```
sed -i '1d' file
```

If you don't wish to delete the lines from the original source file, you can redirect the output of the sed command to another file:

```
sed '1d' file > newfile
```
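One portability caveat worth knowing (my addition, not from the original post): the -i option behaves differently across sed implementations. GNU sed takes an optional backup suffix attached to the flag, while BSD/macOS sed requires the suffix as a separate argument (an empty string means no backup):

```bash
# GNU sed: edit in place, saving the original as file.bak
sed -i.bak '1d' file

# BSD/macOS sed: the suffix is a separate argument; '' means no backup copy
sed -i '' '1d' file
```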
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.876630425453186, "perplexity": 5209.835902353911}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439736057.87/warc/CC-MAIN-20200810145103-20200810175103-00590.warc.gz"}
http://www.newtonproject.ox.ac.uk/view/texts/diplomatic/THEM00239
<1r> ## of Iewish Synagogues & Xtian Churches Loose papers {illeg} upon Daniels 4 Monarchies. the Revelations &c <2r> The {illeg} \ancient/ eastern & Egyptian nations were apt \anciently very much addicted/ to speake by figures & in their language to introduce the qualities of things & the substances of things inanimate \& inanimate substances of things/ under the character of intelligent beings or persons So they \often/ presented death & the grave & time & fortune & health & wealth & love \& ffame/ & the Elements & Planets \by persons/ & the Iews gave the names of evil spirits to diseases & to vices & of Sephiroths to the attributes of the Deity \& erroneous opinions/ & so Solomon represents speaks|ake| of Wisdom as ap Person & Orpheus Plato & Philo \& some of the Gnosticks/ gave ye name of λόγος to the wisdom of God considered as a Person & the Ca [And the Sephiroths of the Cabbalists & Aeons of the Gnosticks are nothing else \then the notional thoughts/ attributes \powers actions or/ or {sic} supposed qualities \titles {power} qualities/ of the Deity turned into persons & sometimes into the souls of dead men.] And the Ideas of the Platonists Sephiroths of the Cabbalists & Æons of the Gnosticks are nothing else then the thoughts notions actions powers \names or/ attributes \or parts/ of the Deity turned into persons & sometimes into the souls of men. For the ancient Heresies consisted chiefly <3r> Moses commanded the people of Israel that they should make Iudges & Officers in all their Gates (that is, in the Gates of all their cities) to judge the people with just judgment Deut. 16.18. These judges sat in the Gates of the City & were called the Elders of the city, & judged of capital causes & much more of smaller matters {illeg} according to \for putting/ the law of Moses in execution Deut 21.19, 20, 21: & 22.15, 16, 17, 18, 21 & 25.7, 8. & Ruth 4.1, 2, 9, 11 & Amos 5.14. And this sort of government by Elders \elected by the peo\ple// for putting the laws of God in execution continued in Israel till the Captivity & then was abolished by the Chaldeans (Lament 5.14) & at length restored by the Commission of Artaxerxes given to Ezra (Ezra 7.25, 26 & 10.14.) And in the Gate of ye city there was also a place of worship set apart for the Elders & their Officers & such of the people as came together. There publick prayers were put up & Moses was read & expounded. &|A|nd if the Elders sacrificed they did it on the next hill, erecting there an Altar & a place for eating the sacrifices, wch they called the High Place. 1 Sam. 9.19, 25. 1 King. 7.11 & 13.3 2 King. 17.29, 32. But these High places were not according by the Law of Moses All Israel was to sacrifice in the Tabernacle & Temple. And as this was the place of worship for all Israel so a court of seventy Elders sitting in the eastern Gate of the Temple was over all Israel. And In the reign of the Greeks this Court was called the Sanhedrim, & the lesser Courts wth their places of worship in every city were called Synagogues & the Elders were by the Greeks called \or Iudges were called the/ Presbyters|y| And sometimes Synagogues \1 Tim. 4.14. And the Elders & people together/ were called Churches Matt. 18.17. The Iews also who were dispersed among the Gentiles erected Synagogues in every city where they were sufficiently numerous to incorporate themselves into a religious body for worshipping God & putting his laws in execution amongst themselves. And the Christians of the circumcision did the like both in Iudea & in other places. 
ffor <4r> In ye 2d page of ye Synopsis lin. 12. instead of the uncircumcised read tho uncircumcised Ch. 1. v 4 in the Notes. r. ὁρισθέντος ib. v 3. Paraphrase. May not ye word accounted be better omitted sense be clearer by writing instead of ye words accounted as these words as he was accounted ib v. 3, 4 Is not this the sense. Who was made of ye seed of David \in being born of a woman/ according to the flesh in being born of a woman \by being \his/ born|irth| of a woman/ & expressed \shewed declared/ to be the son of God with power \in being/ the first begotten from ye dead according to ye spirit of holyness & \by his resurrection he/ being the first begotten \born/ from ye dead & appearing to many after witnesses after his resurrection of whose resurrection from ye dead there are many witnesses |(Coloss. I.18. The promise wch was made unto our fathers God hath fulfilled unto us in that he hath raised up Iesus from again as it is also written in the second Psalm, Thou art my Son, this day have I begotten thee Act XIII.33.| Ch I{illeg} 25 Paraphr. ffor creature thing read creature, a thing ib p 26 use even into. Blot out even ib p 27 leaving also the. Blot out also Ch. II v 1. Notes. In the words more free but less offensive then some word seems to be wanting. Ch III Contents pag 2 of ye Contents. |In| The words who to ye circumcision of ye flesh {illeg} \&/ the other observances of ye Law for the word r read add for and Ch III. v. 5 Paraphr. and cast we off. read {illeg} and cast us off. Ib. \Notes on/ v. 6 ffor appositively read appositely. Ib. Notes on v. 24. for metaphor, must be read metaphor, it must be Paraphrase Ch. IV. 12. The words [but to them of it. i.e. to to such of the Iews as did also walk in the steps of ye faith &c] seem not so clear as ye text. The sense is [but to \such of/ them who were \being/ not only of ye circumcision but \who/ did also walk in the steps of ye faith &c] or [but to such of them (ie of ye Iews) as did also walk &c] Paraph Notes on Ch VII. 6 * \Th/ read [end of the Law for &c. Notes on Ch. VIII. 3 . appensities, so to. should it not be appetites & so to Ib. v. 4. if we make it choise. should it not be if we make it or choise OO. p 2. l. 9. I have struck out ye words by his spirit, I think rightly. Par. Ch IX 4. particular manner the sons of God. QQ. p. 2. l. 24|5|. particular manner the sons of God. YY p. 2. l 11 be come for become ZZ p 4 l ult. givenemies. perhaps it should be called enemies. CCC p 3 l 21. either the interpretation explaining. read and explaining CCC p 4 l 11. The words sentence, That no one should go beyond that wch was given him then he really had, seems imperfect. GGG p 1. l 4 & balanceth. Ib p 3 l 4 & friendly [manner] <4Av> Th Except a man be born again he cannot see ye kingdom of God. \Except a man be born of water & of the spirit he cannot enter into the kingdom of God vizt of water in baptism the symbol of ye Resurrection & of the spirit at the Resurrection/ That wch is born of flesh is flesh & that wch is born of ye spirit is spirit. As ye wind bloweth where it listeth so is every one that is born of ye spirit. Iohn III.3, 6, 8. It is sown in corruption it is raised in incorruption, it is sown in weakness it is raised in power, it is sown a natural body it is raised a spirituall body. There is a natural body & there is a spirituall body. – Flesh & blood cannot inherit the kingdom of God neither doth corruption inherit incorruption. \This corruptible must put on incorruption/ Iohn 15 XV.42, 43, 44, 50, 53. 
The children of this world marry & are given in marriage, but they who shall be accounted worthy of that world to obtain that world & the resurrection of \from/ the dead, neither marry nor are given in marriage, neither can they dye any more: for they are equal unto the Angels & are the children of God, being the children of the Resurrection. Luk. 20.34, 35, 36. They desire a better country that is an heavenly. Wherefore God is not ashamed to be called their God: for he hath prepared for them a city I will be his God & he shall be my son Apoc 21.3, 7. They shall be his people & God shall be their God ib v. 3. <5r> 2 Chron 35.1 2 |1| Esdr. 1.15 || || 2 Chron 36.21 2 |1| Esdr. 1.5 8 or 2.1 2 Chron. 36.22 2 Ezra. 1.1 1 Esdr. 2.1 2 Chron 3 11 || || 2 Chron 36.23 Ezra. 1.3 1 Esdr 2.5 Ezra. 1.3. to 1.11 1 Esdr. 2.5 to 2.15 Ezra 1.3. to 1.11 Ezra 2.1 to 2.70 Nehem 7.6 to 7.75. 1 Ezdr 5 7 to 5.46 Ezra 3.1 to 4.5 1 Ezdr 5.47 to 5.73 or 6.1 Ezra 4.24 to 6.22 1 Ezdr. 6.1 to 7.15 or 8.1 Ezra 4.6 to 23 1 Ezdr. 2.16 to 2.30 Ezra 7.1 to 10.44 1 Ezdr. 8.1 to 9.36, 37 Nehem 7.73 to 8.12 1 Esdr. 9.37 to to {sic} 9 55 Nehem. 8.12 to 11.36 Nehem 1.1 to 7.5 2 Chron 35.1 –– Ezra 2:1 –– 4.5/4.24 –– 10.44 1 Esdr. 1.1 –– 2.15/1 Esdr \[3.1:]/ 5.7 –– 6.1/{illeg} –– 9.36, 55 2 Chron. 35.1 –– Ezra 2.1 –– 4.5 [––] 4.24 –– 10.44 1 Esdr. 1.1 –– 2.15 [––] 5.7 –– 6.1 –– 9.36, 55. 2. Chron 35.1 –– Ezra 2.1 –– 2:1 –– 4.5 –X– 4:24 –– 10.44 –– 1 Esdr. 1.1 –– 2.15 –X– 5.7 –– 6.1 –– 6.1 –– 9.36, 55. The sacred History from 2 Chron 35.1 to ye end of Esdras Ezra agrees with ye first book of Esdras if you omitt the story of Ahasuerus & Artaxerxes in Ezra 4 & the same story in 1 Ezdras 2. & also ye story of ye 3 wise men in 1 Esdras 3, {illeg} 4 & 5 2 Chron. 35.1 A Ezra 2.1 O 2.1 B 4.5 G 4.24 C 10.44 \Nehem 8.1/ D 8.12 E 11.36 1 Esdr. 1.1 A 2.15 G+ 5.7 B 6.1 O 6.1 C 9.36 D 9.55 O <5v> Ezra 1.1 –– 3.7| Nehem 8.1 –– 11.36, or 12.9| Ezra 3.8 –– 4.5.| 4.24 –– 6.22| 4.6 |7.1 –– Ezr. 10.44 {illeg}| 4.7 –– 4.23| Nehem 1.1 –– 7.69.| Nehem 12.1 –– 13.31. If you would have ye history of ye Iews under ye Persian Monarchy in due order of time you must read first from ye beginning of Ezra to ye end of the seventh verse of the third chapter, then the 8th 9th 10th & 11th chapters of Nehemiah, then from the beginning of ye 8th verse of ye 3d chapter of Ezra to ye end of ye 5t verse of the 4th chapter, then from the beginning of the last verse of the 4th chapter to ye end of the sixt chapter. Then the sixt verse of the 4th chapter Then the 7th 8th 9th & 10th chapters. Then from ye beginning of ye seventh verse of the 4th chapter to ye end of ye 23th verse of that chapter. Then from ye beginning of Nehemiah to ye end of the 69th verse of ye 7th chapter. Then And lastly the 12th & 13th 12th & 13th chapters of Nehemiah. But if you would understand ye History of those times as it lies in the books of Ezra & Nehemiah wthout altering the order of the books: [by [Cyrus Ahasuerus & Artaxerxes in ye 4th chapter of Ezra you must understand Cyrus, Xerxes & Artaxerxes \Long./ ye successor of Xerxes &] \then/ by the Iews who came up from ye [this] Artaxerxes to Ierusalem & were \began to/ building that city \& set up the walls thereof & were joyned the foundations Ezra 4.12/ you must understand Ezra & his companions who after Zerubbabel had finished the temple came to Ierusalē in ye {illeg} 7th year of this King as if is afterwards declared Ezra 7.7, 8 & restored the Iewish polity. 
ffor the Temple was finished before the Iews began to build the City & its walls set up its walls. And when Ezra came from Artaxerxes wth authority to restore the Iewish worship & polity set up Iudges Magistrates & Iudges over the People wth power of life & death & by consequence had power sufficient to set on foot \attempt/ the rebuilding of the city, yet he was hinded {sic} & the people \notwithstanding his Com\mission// continued in great affliction & reproach & the ye wall was broken down & ye gates burnt with fire untill the 20th year Nehemiah obteined a \new/ decree to rebuild ye City Nehem 1.3 & 2.3, 8. When therefore in ye last verse of the 4th Chapter you read that ye work of ye house of God ceased und|t|ill ye 2d year of Darius understand not {illeg} \another/ Darius wch succeeded Artaxerxes but the same Darius wch had been mentioned before, as if the words had run \been wrote/ thus. Now the re |Ezra had said| with respect to ye reign \time/ of Cyrus: Then ceased the work of ye ho\u/se of God \(as was said before) above)/, {&} \so/ it ceased untill ye second year of the reign of Darius King of Persia. Afterwards in reading the book of Nehemiah understand all from Nehemiah {illeg} 7.6 to Nehem 12 9 inclusively not to be wrote by Nehemiah originally but by him extracted \copied/ out of the book of Chronicles as that book was extant before the warrs of the Maccabees |& to respect the history of the Iews at their first return from captivity under Zerubbabel in the days of Cyrus|. ffor that book |of| \Chron/ was \originally/ continued down to the Priesthood of reign of \days of the High/ Priesthood of Iohanan the son of Eliasib or rather \perhaps/ to yt of Iaddua & the regn {sic} of Darius Codomannus Nehem 12.22, 23. But in the warrs Persecution \& war/ of Antiochus ye sacred books were rent \in pieces/ & bu\r/nt & it was death to have any book of the Testament (1 Macc. 1, 56, 57 untill Iudas became victorious & recollected the sacred writings. 2 Mac. 2.14. <6r> After ye first discourse on ye whole Apocalyps add transcribe what is material of ye vision of Gog Ezek 38 & 39. And because Ezek 38.17 Gog is said to be prophesied of in old time by ye prophets, add further what is of the same kind out of Ioel 2. & 3. Mica 4 & 5. Isa {illeg}6 6, & 34 |& 2 & 11 & 14 & 24 & 25 & 26 & 30. & 41 & 42 & 49 & 51 & 52|. Ier 25.29, 30, 31, 32, 33. & 30.{illeg} 3, 5, 6, 7, 8, 9, 10, 11, 16. \Ezek. 28.25, 26. & 36./ Obadiah vers 15, 16, 17, 21 Zephaniah 1.7. & 3.6, 8, 9, 13, 19, 20 Haggai 2.6, 7, 9, 22|1|, 22, 23. Zach 12, & 13, & 14. Then shew that these speak all of the same thing in that they agree wth one another vizt Contingit hæc gentium congregatio et perditio proxime post conversionem & reductionem filiorum Israel de captivitate Ezek 38.8, 11, 12 & 39.23. 24 Ioel 3.1, 7. Mica 3.12 & 4.1, 3, 7, 10. & 5.3, 8, 9. Isa 66.8, 16, |20| & 34.2, 16, 17 \& 35.10/ & 2.4 & 11.11, 12, 13. & 14.1 & 24.23. & 25.8, 9 & 26.20, 21. & 43.5 & 51.11, 22, 23. Ier 30.{illeg} 3, 7, 8, 10, 11, 16. Ezek 28.25, 26. Zeph 3.8, 9, 10, 11. |Imò antequam omnes de captivitate redeunt. Isa 66.20 & 35.10 /& 14.2\| tentis collectis \et cæsis/ copijs quantis nullo alio tempore |Ezek 38.4, 5, 6, 9, 15, 16 & 39.9, 10, 12 Ioel 2.2, 3 & 3.2, 11, 14. Mica 4.11, 13 & 5.8, 9, 10. Isa 66.16, 18, 19 Isa 34.2.| quæ perduntur partim civilibus discordijs Ezek 38.21. maxime verò manu cælesti Ezek 38.18, 22 & 39.21 Ioel. 2.11, 17, 18, 21 & 3.2, 12, 13, 16. Mica 5.15 Isa 66.15, 16 Isa 34 2 & 35.4. & 30.27, 28, 30. 
Nam hic est ille dies domini magnus et terribilis Ezek 38.17, 19 & 39.8 Ioel 2.1, 10, 11, 31 & 3.14 Isa 34.4, 8 Isa 2.12, 19 |Ex| Quo tempore spiritus Dei effundetur in omnem carnem Ezek 39.29 Ioel 2.28, 29 Deus regnabit in Sion Ioel 3.17, 21. Ezek 39.7, 22, 29. Mica 4.7 Isa 66.20 Isa 66.20 & 24.23 & 33.20, 21, 22 Et Deus erit Dominus terræ totius Ezek 38.16, 23 & 39.6, 21, 27. Mica 4.3, 7, 8. Isa 66.18, 19 20, 23 Isa 2.3, 4, 12|1|. & 11.9, 10 & 12.4, 5, 6. Et Ierusalem non amplius sentiet vim hostium sed \in posterū/ luto incoletur Ezek 39.7, 22, 29. Ioel 2.19, 20,26, 27 & 3.17, 20 Mica 4.7. Isa 66.22. Isa 34.17 & 35.10 Et pace et omni rerum copia abundabit Ioel 2.22, 23, 24, 25, 26 et 3.18. Mica 4.4. Isa 66.12|1|, 12 Isa 35. {illeg} 2, 7, 10. Isa 11.6, 7, 8, 9 & 30.23 neq bellum amplius discent Et aquæ vivæ exibunt de Ierusalem Ioel 3.18. Isa 35.7 & Sancti etiam resurgent Isa 66.14. Isa 26.19 Lex in corde scribetur Nondum tamen ultima conflagratio sed gentes perseverant. Ezek 38.16, 23 & 39.7, 9, 21, 26, 27, 28. Ioel 2.2, 17, 20 & 3.18, 19 Micah 4.2, 3, 4 Isa 66.19, 20, 23. & 35 Bellum \Bellum/ tamen non discent amplius Mica 4.3. Isa 2.4 |nisi| quod \semel/ post multa sæcula rursum congregantur ad bellum Ioel 2.2 Improbe \jam/ in barathrum injiciuntur Isa 66.24 & 34.9, 10 & 11.4 & 30.33. |Cædes| Sacrificium & cœna Dei Ezek. 39.17 Mica 4.13 Isa 34.6. Scribunt omnes Gente Dei \gentes/ populo Dei Mica 4.1, 2, 3, 8. & 5.7, 8. Isa 66.19, 20, 23. Isa 2.2, 3, 4 & 14.2. Tribulatio gravissima. Mica 4.9, 10 & 5.3 Isa 66.7, 8 Idolorum abolitio Mica 2.13. Isa 2.18. <6v> Gentium divitiæ magna congregabuntur Mica 4.13. Zach 14.14 These then belonging all to ye same subject let us see now how they agree wth the Apocalyps. To Conversio Iudæorum non nisi in tempora turbæ palmifera incidere potest. In ejus finem incidit eorum tribulatio et collectio gentium omnium ad prœlium magnum Dei omnipotentis. Locus in quem congregantur dicitur Ar-magadon id est exercitum omnis perditio tum perditio \destructio/ turmarum eorum, voce hebraica ut ut ex lingua locus innotescat. Gentiùm collectio maxima est perduntur partim in lacu ignis Isa 66.24 & 30.33. 34.3, 5, 6 & 11.4. Ezek. 38.\21/, & 39 &c De gentibus multi vero superstites manent, Deus et Christus per totam terram regnans sancti resurgunt Hic est ille dies magnus Dei omnipotentis. Apoc. 10.6, 7 & 16.14, 17 allude to Ezek 38.17 & 39.8 Hos 2.11 & the like. The supper of ye gt God &c Apoc 19.9, 17, 18, 21 to Ezek 39.17, 20. Isa 34.6 They that dwell in ye Isles Ezek 39.6                      to ye beast & fals prophet The harvest & vintage Ioel 3.13. The reign & kingdom of Christ Mica 4.7, 8. The images & Idolaters then cease. After many generations ye nations shall be gathered again Ioel 2.2. The tribes mourn look on him whom they have pierced & mourn Zach. 12.10 Apoc. 1.7. Tormenting in ye presence of ye lamb & his holy angels to Isa 66.24. He trode ye winepress alone Isa 63.20 & 30.33. <7r> After these prophesies of ye old Testament, add those of the marriage of the Lamb. Matt 25.10, 13. Mark 22 & Luc 14.16. |In| The two last places \Matt 22.3/ The first servants sent out were the first Christians till ye Apostacy. The last the palmbearing multitude. The arain slaying of his servants ye great tribulation. The armies of ye King wch destroyed those murderers those Apoc 19. Those gathered in the high ways all good & bad, the people in ye millennium. In Luc 14 16 those gathered in the streets of ye City the multitudes converted by the palmbearing multitude. 
Those gathered & compelled to come out of the high ways to fill up the vacant room the multitudes in the millennium to make up ye number of ye future kingdom of heaven. All this is done on ye marriage day, the millennium. The figtree frutiless three years Luc 13.6 is the Apostate Church during Daniel's three times. The digging about it & dunging it in ye 4th year till it & be cutten down then cutting it down, the recruit of the gospel in the 4th time After the first discourse on ye Apocalyps is ended, expound & apply say how this is a key to all ye prophetick scriptures &c Then by the help thereof expound first ye 24th chap of Matthew vizt vers 9 of ye fift seale. vers 10, 11, 12 of ye sixt & beginning of ye 7th. vers 14 of ye palm bearing multitude. vers 15 of ye armies gathered in Armagedon. The abomination of desolation in ye holy place, Idolatrous armies in Iudæa Luc 21.20. Dan 9 & 12. |vers 21| Great tribulation (not of incredelous {sic} Iews but of ye faithful vers 22) that of ye palm bearing multitude see Dan 12. Fals Christs & fals prophets \vers 23, 24/ of ye three unclean spirits out of ye mouth of ye Dragon Beast & fals prophet. The covering \coming/ of ye son {sic} of man \moon & stars/ at ye end of ye tribulation vers 29 the overthrow of the heathen kingdoms at ye seventh trumpet. The {son} of man coming at ye same time all ye earth mourns, Angels gater {sic} ye elect vers 30, 31 compare with Apoc 16.15. & 14.15, 16 & 19.7, 15 & 1.7. This generation (γενεα the nation of ye Iews shall not pass till all these things are fulfilled vers 34 because their fulfilling depens {sic} on ye nation of ye Iews. There be some standing here wch shall not tast of death till they a [eternal] till they see ye son of man coming in his kingdom Matt 16.28 \compare wth Iohn 8.51/. The casting of ye Beast & fals prophet into ye Lake of fire Matt 13.41, 42, 50. & 24.51 \vizt in this earth burning 2 Pet. 3.7/. The marriage of the Lamb Matt 25.10, 13. The millennium ye marriage day Matt 22.10 wherein great multitudes even all that are met with are good or bad are compelled to come in to fill up ye number of ye kingdom of heaven Matt 22.10 Luc. 14.23. Of the kingdom of Christ in this earth Matt 13.41, 42. 19.28 & 22.32. & 22.10 & 25.31, 32, 40. & 26.29. Luc. 22.16, 18, 29, 30 compared with Luc 24.30, 43 & wth Iohn 21.12, 13, 15. Colos. 2.16, 17. Of the reign of Antichrist Luck. 13.6. 2 Thess. 2. 1 Tim 4. 2 Tim 3. & 4. & 2 Pet. 3.3, 4, 8 Iude {illeg} 4, 5, 14, 15, 18. After the new testament pass to ye old & lastly to Daniel <8r> Dacia was a large country bounded on the south by the Danube on the east by the Euxine sea & \Alania/ or the country of the Alans, on the north by the river Neister & mountain Crapac & on the west by the river Tibesis or Teis which ran into ye Danube a little below Belgrade. &separated Dacia from ancient Germany |the country of the the {Iazyges} Metarusia now called Hungary on the north side of ye Danube.| It comprehended the countries now called Transylvania Moldavia & Valacchia. Its \ancient/ inhabitants were anciently called Daci by the Latines & Getæ by the Greeks, & from Getæ came the name Goths. |called Getæ by the Greks & Daci by the Latines. And fom the name Getæ the Latines have formed the name of Goths.| Trajan conquered them & reduced their country into a Province of the Roman Empire: whereby the propagation of the Christian religion amongst them was much promoted. 
Some time after they revolted & lived under their own kings & by successive conquests grew into a large & potent Empire composed of many nations. |The Church of Dacia was governed by a bishop or Patriarch & the continued united to the Church of the Roman Empire as well The Church of Dacia after the revolt of the Province of Dacia from the Romans as before. ffor Theophilus Bishop of the Goths was at the Council of Nice A.C. 325 & his successor Vlphilas was at the Councill of Constantinople A.C. 360.| Ostrogotha one of their kings conquered the Gepides, Geberich another the Vandals & Ermaneric another the Heruli, Veneti, Antes, Sclavi & many other warlike northern nations of Germany Scythia & Germany, as Iornandes informs us, & particularly the Æstri \or Estij/ seated upon a very long tract of the German ocean \or Baltic Sea, that is people of ‡/ < insertion from the left margin of f 8r > Coverland on ye north of Livonia as far as Riga. Also the nations < text from f 8r resumes > & the nations \wch Iornands/ called|s| Thuidi, Vasinambrocæ, Mærens, Mordensimni, Cari, Rocæ, Tadzans, Athual, Navigo, Bubegentæ & Coldæ. From which conquests some have compared him to Alexander the great. \/ He reigned long & died \very old/ about ye year of Christ 376 being 110 years \very/ old. |And as the Greek & Latine Empires were two parts of the Roman Empire so the Gothic Empire may \of Dacia may/ be recconed a third.// Ermaneric reigned long & died 110 years old| Three or four years \A little before/ before {sic} his death eighty thousand Burgundians \the Burgundians (a Gothic nation) to the number of 8000 eighty thousand/ fled from ye Goths & seated themselves upon the Rhene in the lower Palatinat. Vpon his death his kingdom became divided into the amongst several successors, & was at the same time time {sic} invaded & reconquered \conquered/ by the Goths Huns, & many of the people for to seek new seats: which commotion gave occasion to the division of the western Empire of the Romans into ten kingdoms. The eastern part of the Goths called Ostrogoths under several kings \(Athanaric Sigismund & Winitharius/ staid in Dacia in subjection to yeHuns & so did the Gepides. The western part of the Goths called Visigoths under the conduct of Fridigernus, Alatheus & Safrac fled to ye side of the Danube wth several other nations & sent an embassy to the Emperor Valens desiring seats in the Roman Empire. The |He died while the Hunns were conquering the nations wch lay between them & him amongst \Dacia/ wch were the Alans & Gruthungi And soon after his death the Hunns conquered {illeg} his son Hunnimund with the eastern part of the Goths henceforward called Ostrogoths & the rest of the {illeg} Goths either now or a little before this conquest set up other kings over them vizt Winithar \{Windwater}/ or Vithimar the son of Valeravanus, Fridigern & Athanaric. Winithar made some resistance but the Hunns being assisted by an army of Ostrogoths commanded by Sigismund the son of Hunnimund routed the western his army & slew him in battel & gave his kingdom to Hunnimund. Yet a great part of his people fled to the Danube under the conduct of Alatheus & Saphrax the Guardians of Videric the young son of Winithar. And so did Fridigern with his people. And Athanaric being pursued by the Huns, a great part of his people deserted him & under the conduct of Ahaverus fled also to the Danube| The head of the|i|s Embassy was Vlphilas B the Bishop or Patriarch of the Goths. 
He \was/ Bishop of all the Goths both Ostrogoths & Visigoths wch shews that they had hitherto been but one nation He was at the Council of Nic Constantinople A.C. 360 & his predecessor Theophilus was at the Council of Nice A.C. 36|2|5. And this communion shews that the communion between ye Goths & Romans wch in ma wch began in matters of religion while Dacia was a Province of the Roman Empire, was not interrupted by the revolt of the heathen Goths from ye Roman Empire but continued still entire, the G Romans & Goths {illeg} \being/ being still \still/ united in religion & \still/ looking upon one another as members of one & the same Church catholick of Christ \as if they had been still been but one Empire/. And this made Vlphilas a very proper person to be sent upon this Embassy. He had invented the Gothic letters & translated the scriptures into the language of the scripture Goths & very promoted the Christian religion much among the Goths of both nations so that Fridigern {was} the king of the Visigoths was become a Christian. & Athanaric And this was another argument to incline the Emperor in favour of the Goths. They had therefore seats granted to them in Thrace. But upon their coming thither \they/ wanted food & the Roman commander Lupicinus exacting upon them & deceitfully inviting Fritigern their king to a feast with a designe to assassinate him & his retinue & killing some of them: Fritigern took up arms against the Romans <8v> \under the conduct of Ostrogotha/ beat them in battel & slew their Emperor Valens. Ostrogotha conquered the Gepides, Geberic the Vandals & Hermaneric the Heruli Veneti, Antes, Sclavi & many other warlike nations of Scythia & Germany as Iornandes informs us & particularly the nations wch Iornandes calls the Thuidi, Vasinambrocæ, Mœrens, Mordensinni, Cari, Rocæ, Tadzans, Athual, Navigo, Bubegentæ & Coldæ, & the Æstri or Estij seated upon a long tract of the German or Baltick Sea in Livonia. So that the kingdom sems {sic} at this this time to have comprehended the \Volinia, red Russia, \Lithuani & other/ Scythian/ nations between the Vistula & the Niper \or Boristhenes/ as far northwards as Revet & Narva \besides some nations of Germany/. And from these conquests saith Iornandes some have compared this king to Alexander the great. He reigned long & died 140 years old about the same time wth the Emperor Valentinian |in the year 367 or 368 in the fift year of the reign of Valentinian & Valens A.C. 368|, being 110 years old. At wch time [About the same time 80000 Goths fle Burgundians (a {illeg} Gothic nation) fled to the side of the Danub Rhene above Ments, & the Huns or Massagetæ a fierce & brutish nation seated upon the eastern side of the lake Mæotis & & at his death or soon after his kingdom became divided amongst many successors, Hunnimund, Winithar or Vithimar, Athanaric, Fridigern, Box & perhaps some others. Hunnimund was his son, \&/ reigned over the eastern Goths called Ostrogoths. W Vithimar \was the son of Valeravan & grandson of Athaulus #/ |&| reigned over a \great/ part of the Goths called Gruthungi by A. Marcelline & Gothunni by Claudian \& Sarmatæ & Scythians by others/. 
Athanaric reigned over t|a|nother \great/ part of the Goths called Thervingi, & & Fridigern over another called {illeg} Visigoths from their western situation, & Box was king of the Antes, \& the Gepides had also their king/// In those days eighty thousand Burgundians (a Gothic nation) fled fl to the side of the Danube & the {illeg} Huns a firce {sic} & bruitish nation seated upon the eastern side of the lake Mæotis rose from their seats & invaded \under the conduct of their king Balamber or Balamir/ invaded the nations wch lay westward between them & the Dacia & soon after the death of Hermaneric made the Ostrogoths submit. Winithar was warlike, conquered the Antes & slew their king Box, & resisted resisted Huns \Theudicar resisted the Huns &/ beat ym Huns in one or two battles but was slain by them in the third battel \A.C. 376/ & his kingdom given to Hunnimund. ffor Sigismund the son of Hunnimund had assisted the Hunns in this war with an army of Ostrogoths. Then the Hunns purused Athanaric & the greatest part of his people wth some other |scattered| Goths under the conduct of Alavivus fled to the side of the Danube & so did Fridigern with his people the Visigoths: & these nations sent \by sending Vlfilas & others in/ an Embassy sent to the Emperor Valens obteined leave to pass the Danube & seat them selves in Mœsia in the northern & Thrace. \Their Patriarch Vlphilas was at the head of this Embassy./ Also also {sic} a great part of the Gruthingi under the conduct of Alatheus & Saphrax the guardians of Videric the young son of Winithar (now their king,) fled \fled {sic} from the Huns & Ostrogoths/ to the side of the Danube & made the same petition but were rejected, & not long after passed the Danube without leave while the Roman army was detained in Rhætia in a war against the Alemans & Sweves. All this rout was in the year 376. The Goths were no sooner seated in the Empire but being prest with famin & grosly abused by the Roman governours they took up arms invaded Thrace, called to their assistance Athanaric with his forces & some Huns & Alans from beyond the Danube routed the Roman army slew the Emperor Valens & spread them selves into Greece & Pannonia as far as the Alps, Alatheus & Saphrax going westward. But in the years 379 & 380 they were checkt by the arms of the Emperors Gratian & Theodosius & made made {illeg} peace & the Visigoths \& Thervingij/ returned to their seats in Mœsia as subjects of the Empire, the Huns retired over the Danube & the Alans retired over th \& Grutungi/ obteined seats in Pannonia, & Athanaric this peace was much promoted by the honourable reception & {illeg} of Athanaric at Constantinople, ffor he died there in Ianuary A.C. 381 after a reign of 13 years, & the Thervingi remained wthout a king |And upon this peace Athanaric went to Constantinople, was honoura{bly} received their & dying a few days after was honourably interred, his funeral Whereupon his people submitted to live under the Romans without any other king {than} the Emperor.| But {illeg} \Fridigern/ king of the Visigoths was succeed \the next year/ by Alaric & Videric king of the Gruthungi \{they so} following/ by Radagaisus. died there in Ianuary 1681 after a reign of 13 years & was very honourably interred by the Roman \Emperor/ & thereupon his people submitted to live under the Romans without a king. 
But Fridigern king of the Visigoths was succeeded by Alaric & Videric king of the Gruthungi by Radagaisus [Editorial Note 1] ## Chap     Of the rise of the Roman Catholick Church <9r> The author who continued the history of Annals of Eutropius, a Greek by nation, tells us that the {illeg} \in those days/ there were four principl nations beyond the Danube, the Goths or Ostrogoths, the Hypogoths or Visigoths, the Gepides & the Vandals, differ {sic} in name & in nothing els, using all the same language & being all of the Arian faith. And that in the reign of Arcadius & Honorius they passed the Danube & were seated in the Roman territory of ye Romans. And that ye Gepides, from whom the Lombards & Avares were afterwards divided, inhabited the towns about Singidonum & Sirmium. The Visigots {sic} depopulated Rome & went thence into Gallia, the Ostrogoths inhabited Pannonia & in ye 18th yeare of Theodosius junior went thence into Thrace, & after 58 years more obteined ye western Empire, & the Vandals in conjunction with the Alans & Germans passed the Rhene under the conduct of Godedisalus Mogodisclus. Procopius tells us that \The visigoths &/ ye Ostrogoths past the Danube about ye same time that ye Visigoths invaded Italy & the Vandals passed the Rhene into France, But seing \& by consequence/ they \& the Gepides/ passed the Danube in the reign of Arcadius & Honorius. & by consequence between the years 395 & 407, it seems to me that this passage was about the time that \the main body of them passed over when/ Radagaisus called {illeg} the Gothic nations from beyond the Danube to his assistance, that is about ye year 405 or 406. The Visigoths & {illeg} Gouthungi came over before & th \But the Visigoths as has been said above passed ye Danube in the reign of Valens./ Some reccon the Vandals to be a branch of the Gepides, but when they separated is uncertain. – – – – – in Pannonia by that Emperor. Perhaps But its probable yt by this conquest they grew into one body \united/ & afterwards separated again Iornandes tells us that ye Vandals lived quietly in Pannonia 40 years Procopius in his first book of the Vandalick history \war/ tells us that amongst the Gothic nations wch were many the greatest & most noble were the Goths [or Ostrogoths] the Vandals the Visigoths & the Gepides [called anciently Sauromatæ & Melanclœni & Getæ,] that these four nations differed only in name \being white, tall & beautiful & {illeg} handsome, using the same language/ using the same language {sic} called the Gothic Gothic, & the same laws, & being of ye same religion called \by the Romans/ Arian: \&/ that they all lived at first beyond the Danube. And no doubt they had their common language & laws & religion from being subjects of one & the same kingdom till the death of Hermaneric. But the Hunns might enter Pannonia in ye year 406. when \For then/ Radagaisus called invited the other nations from beyond the Danube & the Vandals & Alans left their seats in Pannonia to new For I reccon & then the Vandals & Alans left Pannonia & went \{illeg} westward/ to seek new seats. The Ostrogoths & Gepides first in Dacia & then in Pannonia continued subject to the Hunns till the death of Attila A.C. 454 & the They warred under him against the Romans & after his death the Gepides returned to their seats in The Gepides \The {illeg} the Ostrogoths/ Dacia beyond the Danube, & ye Ostrogoths to theirs in Pannonia & shook of the dominion of the Hunns. 
{illeg} But the time when the main body of the Ostrogoths came over is uncertain, {the Danube} is uncertain Some reccon that the At the main body of the Ostrogoths was brought over the Danube by Attila when he made war upon the Romans A.C. 444. The Hunns came over the Danube into Pannonia the Gouthungi under Radagaisus invaded Italy rising from their seats in Pannonia wer & being strengthened wth great numbers of barbarians from beyond the Danube invaded Italy with a numerous army; the Visigoths were marched from Pannonia against the Greek Emperor; the Vandals & other ba Alans quitting their seats in Pannonia to the Huns, marched westward & taking \took/ along with them the \a body of/ Su{evi}ans & \another of/ Burgiansundians who rose from their seats in Suebia & the lower Palatinate. And these nations under their several kings, the Vandals under Resplendial, the Alans in two bodies one under Goar &c – – – advanced \through Germany/ towards {illeg} Gallia, ruffled the ffranks beyond the Rhene passed & the Rhene at Mentz on the last day of December A.C. 406 passed the Rhene at Mentz|s| & diffused themselves into Germania prima. & <9v> I date this kingdom in Pannonia from the time that ye Vandals & Alans left their seats to them \Quades & Marcomans/ relinquished Pannonia to them, {illeg} A.C. 406, Sigonius from the time that the Visigoths relinquished Pannonia A.C. 408 Constat, saith he, quod Gothis ex Illyrico profectis Hunni successerunt, atq imprimis Pannoniam tenuerunt. And when {gentem deletam} \Dalmatia & all Pannonia./ And when Alaric \also/ invaded Pannonia the Romans were defending Rhetia against the Suevians, wch gave Alaric an opportunity of invading Italy as Claudian thus mentions. Non nisi perfidia nacti penetrabile tempus Impere Getiæ, nostras dum Rhætia vires Occupat atq alio desudant marte cohortes And when Alaric went into Italy some other \of the/ barbarians \wch were come over the Danube/ invaded Noricum & Vindelicia as the same Claudian thus mentions. Iam fœdera gentes Vindelicos saltus & Norica rura tenebant. \For understanding/ What Claudian means by the insurrection of the nations, I should tell you that in the winter next after the death of Theodosius Among these nations I reccon the Suevians Quades & Marcomans. For they were all in arms at this time & the Quades & Marcomans were Suevian nations & now united under one common king who soon after led them into Gallia The Vandals & Alans might also about his time extend themselves into Noricum. Also Vldin with another great body of Hunns – – – – invaded the eastern. <10r> Gundicar {illeg} advancing \with them/ through Germany towards Gallia ruffled the Francs beyond the Rhene & on the first day of last day of December A.C. 406 passed the Rhene at Ments, & diffused themselves into Germania prima & the adjacent regions & amongst other actions the Vandals take Trevirs . . . . . . to their assistance. – gentem deletam. And when Alaric went into Italy, the {S\u/avi}, \some other Barbarians, I think \& amongst them// invaded Vindelicia & Noricum as Claudian thus mentions – – – – Iam fœdera gentes Vindelicos saltus & Norica rura tenebant. Among these nations I reccon the Suevi for they were very \most/ apt to invade those regions ] |& when Alaric invaded Italy the Romans were defending Rhætia against them as Claudian thus mentions| Af And Radagaisus king of the Gruthungi calling inviting over more barbarians from beyond the Danube invaded Italy with an army of above 200000 Goths, {illeg} A.C. 
405 & the next year perished wth his army was overcome by Stilico & perished with his army. And now Sti\li/co – – – – – invaded the eastern. <10v> <11r> <11v> See the mystery. 1 + $\frac{1}{2}$ + $\frac{1}{3}$ + $\frac{1}{4}$ + $\frac{1}{5}$ &c − $\frac{1}{2}$$\frac{1}{3}$$\frac{1}{4}$$\frac{1}{5}$ &c = 1 = 1 − $\frac{1}{2}$ + $\frac{1}{2}$$\frac{1}{3}$ + $\frac{1}{3}$ $\frac{1}{4}$ + $\frac{1}{4}$$\frac{1}{5}$ &c = $\frac{1}{1 × 2}$ + $\frac{1}{2 × 3}$ + $\frac{1}{3 × 4}$ + $\frac{1}{4 × 5}$ + $\frac{1}{5 × 6}$ &c.. In Obedience to yor Lordps Order of Reference from the Treasury signified to us by Mr Lowndes the      day of December last on the Bill & Petition of Mr Richard Barrow Clerk of the Warden of the Mint for prosecuting Clippers & Coyners. We humbly represent to yor Lordp that his Bill \of Charges/ for this service is for a year & three Quarters from Christmas to {sic} 1711 to Christmas {illeg} Michaelmas last & does amount unto the summ of 470li. 6s. 1d: out of wch \the summ of/ 19li 6s 6d is to be wch was for Mrs Weddels charges of receiving her money from the Excheqr \is to be deducted/ the same having been considered & abated in a allowed in a former report. And also \the summ of/ 53.li 5.s 0d for attending at the old Bayly & taking journeys into ye Country to prosecute coyners, is to be \further/ deducted. And the remainder of the Bill is 397li. 14s. 7d, whereof 105li is an allowance for the said time as usual. And the residue being 292\li/. 14s. 7d, is for the other \remainder of the/ /for other\ charges of the Prosecutions. There have been 23 persons prosecuted by him in town & country, as by the proper Officers certificates of the proper Officers of the several Courts doth appear \to us/, but there being no vouchers for the \said/ charges thereof wee are humbly of opinion that for the said allowance & in part of the said 292l 14.s 7d the summ of 200li be allowed at present over & above the said allowance of 105li \in all 305li/ untill it shall appear to us what further services \allowance/ the said services may deserve But there being no vouchers for the said charges thereof we are humbly of opinion that the said 105li & in discharge of ye said 105 & in part of the said 292l 14s 7d the summ of 300 320l be allowed untill it shall appear to us what further allowance the said services may deserve. Also the summ of 24li for attendance at Hicks is to be deduc the old Bayly & Hicks Hall, 29li 5s for a journey into yorkshire & 10li for a treat there is to be deducted further deducte \at York/ in all 63li 5s, is to be \further abated &/ deducted. And the reminder {sic} of the Bill is 387li 14s 7d, whereof 105li is an allowance for the said time as usual & 10li. 8s. 2d is a bill of {illeg} Henry Smitsons. paid off These And the Residue being 271li. 6.s 5d, is for other charges of the Prosecutions. There have been 23 persons prosecuted by him \the Petitioner/ in town & country as by the certificates of the proper Officers of the several Courts doth appear, but there being no vouchers for the said charges of 271li. 6.s 5d we are humbly of opinion that that in part thereof [there be \may be/ allowed the summ of 200li until it shall appear] & in discharge of the said 105li & 10li. 8.s 2d there may be allowed \at present/ the summ of 320li, All wch \at present/ untill it shall appear what further allowances the said services may deserve. All wch we is most humbly submitted to yor Lordps great wisdomes. 
[Editorial Note 1] The following header is written upside down at the bottom of the page and is overwritten by the preceding paragraph.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 20, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5324399471282959, "perplexity": 22143.10949916437}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267158633.40/warc/CC-MAIN-20180922182020-20180922202420-00234.warc.gz"}
http://clay6.com/qa/1241/integrate-the-rational-functions
# Integrate the rational function $\frac{3x-1}{(x-1)(x-2)(x-3)}$

Toolbox:

• $(i)\;$ Form of the rational function: $\frac{px+q}{(x-a)(x-b)(x-c)},\;a\neq b\neq c$
• $(ii)\;$ Form of the partial fractions: $\frac{A}{x-a}+\frac{B}{x-b}+\frac{C}{x-c}$

Given $I=\int\frac{3x-1}{(x-1)(x-2)(x-3)}\,dx.$

This can be written as:

$\frac{3x-1}{(x-1)(x-2)(x-3)}=\frac{A}{(x-1)}+\frac{B}{(x-2)}+\frac{C}{(x-3)}$

$\Rightarrow 3x-1=A(x-2)(x-3)+B(x-1)(x-3)+C(x-1)(x-2)$

Equating the coefficients of $x^2$: A+B+C=0 -----(1)

Equating the coefficients of x: -5A-4B-3C=3 -----(2)

Equating the constant terms: 6A+3B+2C=-1 -----(3)

Let us solve for A, B and C.

Multiply equ(1) by 5 and add it to equ(2):

5A+5B+5C=0
-5A-4B-3C=3
which gives B+2C=3 -----(4)

Multiply equ(1) by 6 and subtract it from equ(3):

6A+3B+2C=-1
6A+6B+6C=0
which gives -3B-4C=-1 -----(5)

Multiply equ(4) by 3 and add equ(5):

3B+6C=9
-3B-4C=-1
which gives 2C=8 $\Rightarrow C=4.$

Substituting for C in equ(4): B+8=3 $\Rightarrow B=-5.$

Substituting for B and C in equ(1): A-5+4=0 $\Rightarrow A=1.$

Hence A=1, B=-5 and C=4. Now substituting for A, B and C in I we get:

$I=\int\Big(\frac{1}{x-1}-\frac{5}{x-2}+\frac{4}{x-3}\Big)dx.$

On separating the terms we get:

$I=\int\frac{1}{x-1}dx-5\int\frac{1}{x-2}dx+4\int\frac{1}{x-3}dx.$

On integrating we get:

$I=\log|x-1|-5\log|x-2|+4\log|x-3|+c.$
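As a quick cross-check (my addition, not part of the original solution): because the denominator consists of distinct linear factors, the Heaviside cover-up method gives the same constants directly, by evaluating the numerator over the remaining factors at each root:

$A=\frac{3(1)-1}{(1-2)(1-3)}=\frac{2}{2}=1,\qquad B=\frac{3(2)-1}{(2-1)(2-3)}=\frac{5}{-1}=-5,\qquad C=\frac{3(3)-1}{(3-1)(3-2)}=\frac{8}{2}=4.$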
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8550795912742615, "perplexity": 7159.1547817749815}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218189403.13/warc/CC-MAIN-20170322212949-00066-ip-10-233-31-227.ec2.internal.warc.gz"}
https://tex.stackexchange.com/questions/266549/show-equation-number-in-colored-box-without-brackets?noredirect=1
# Show Equation number in colored box without brackets

In the attached code, I would like to get the equation number to show up boxed (see image) instead of with the normal brackets "()". Here is my MWE:

```
\documentclass[11pt,fleqn]{book}
\usepackage[english]{babel}
\usepackage{xcolor}
\usepackage{amsmath,amsfonts,amssymb,amsthm}
\usepackage[most]{tcolorbox}
\definecolor{ocre}{RGB}{243,102,25}
\definecolor{mygray}{RGB}{243,243,244}
\tcbset{myformula/.style={
    arc=0pt,
    outer arc=0pt,
    colback=mygray,
    colframe=ocre,
    boxrule=0.8pt,
    left=2pt,
    right=2pt,
    highlight math style={
      arc=0pt,
      outer arc=0pt,
      colback=mygray,
      colframe=red
    }
  }
}
}{%
\ignorespacesafterend
}
\begin{document}
\chapter{This is how it all began}
\section{Introduction}
\begin{tcolorbox}[ams align,myformula]
LT~&\approx~\frac{400}{F_{c}}(1-log_{10}|\Delta F|)\\
\Delta F~&=~\frac{Frequency~Tolerance}{Frequency~Jump}\nonumber
\end{tcolorbox}
\end{document}
```

You can redefine the internal \tagform@; the original definition is

```
\def\tagform@#1{\maketag@@@{(\ignorespaces#1\unskip\@@italiccorr)}}
```

and I used a \fcolorbox:

```
\makeatletter
\def\tagform@#1{\fcolorbox{ocre}{mygray}{\maketag@@@{\ignorespaces\textcolor{ocre}{#1}\unskip\@@italiccorr}}}
\makeatother
```

The complete code:

```
\documentclass[11pt,fleqn]{book}
\usepackage[english]{babel}
\usepackage{xcolor}
\usepackage{amsmath,amsfonts,amssymb,amsthm}
\usepackage[most]{tcolorbox}
\definecolor{ocre}{RGB}{243,102,25}
\definecolor{mygray}{RGB}{243,243,244}
\tcbset{myformula/.style={
    arc=0pt,
    outer arc=0pt,
    colback=mygray,
    colframe=ocre,
    boxrule=0.8pt,
    left=2pt,
    right=2pt,
    highlight math style={
      arc=0pt,
      outer arc=0pt,
      colback=mygray,
      colframe=red
    }
  }
}
}{%
\ignorespacesafterend
}
\makeatletter
\def\tagform@#1{\fcolorbox{ocre}{mygray}{\maketag@@@{\ignorespaces\textcolor{ocre}{#1}\unskip\@@italiccorr}}}
\makeatother
\begin{document}
\chapter{This is how it all began}
\section{Introduction}
A cross-reference using \verb!\ref!: \ref{equ:test}\par\noindent
A cross-reference using \verb!\eqref!: \eqref{equ:test}
\begin{tcolorbox}[ams align,myformula]
LT~&\approx~\frac{400}{F_{c}}(1-log_{10}|\Delta F|) \label{equ:test}\\
\Delta F~&=~\frac{Frequency~Tolerance}{Frequency~Jump}\nonumber
\end{tcolorbox}
\end{document}
```

• I didn't realize that you posted, so I have deleted my (duplicate) answer. I apparently used your nice solution at tex.stackexchange.com/questions/122177/…. +1 – Steven B. Segletes Sep 8 '15 at 18:12
• how do you give a cross-reference to such an equation? (neither \ref nor \eqref seems to work, but that could be because i don't have easy access to the latest versions of all the packages.) would be helpful if you added an example. – barbara beeton Sep 8 '15 at 18:26
• @barbarabeeton both \ref and \eqref work. The latter will produce the reference in a frame like the one used for the tags. I'll add an example. – Gonzalo Medina Sep 8 '15 at 18:28
• @barbarabeeton I spoke too soon! \eqref produced the proper reference inside the framed box but on a new line, so now I changed to a simple \fcolorbox which gives better results for \eqref. I updated the code with an example of cross-references. – Gonzalo Medina Sep 8 '15 at 18:38
• @Joe You're welcome. Due to a little problem with \eqref using the \tcbox approach I changed the code to use now a \fcolorbox which behaves as expected. Please see the updated answer.
– Gonzalo Medina Sep 8 '15 at 18:39

Here is a simple solution with mathtools and its \newtagform command:

```
\documentclass[11pt,fleqn]{book}
\usepackage[english]{babel}
\usepackage{xcolor}
\usepackage{mathtools,amsfonts,amssymb,amsthm}
\newtagform{boxed}[\fboxrule=0.6pt\fcolorbox{ocre}{ocre!15!}]{\color{ocre}}{}
\usepackage[most]{tcolorbox}
\definecolor{ocre}{RGB}{243,102,25}
\definecolor{mygray}{RGB}{243,243,244}
\tcbset{myformula/.style={
    arc=0pt,
    outer arc=0pt,
    colback=mygray,
    colframe=ocre,
    boxrule=0.8pt,
    left=2pt,
    right=2pt,
    highlight math style={
      arc=0pt,
      outer arc=0pt,
      colback=mygray,
      colframe=red
    }
  }
}
}{%
\ignorespacesafterend
}
\begin{document}
\chapter{This is how it all began}
\section{Introduction}
\usetagform{boxed}
\begin{tcolorbox}[ams align,myformula]
LT~&\approx~\frac{400}{F_{c}}(1-log_{10}|\Delta F|)\label{coloureq}\\
\Delta F~&=~\frac{Frequency~Tolerance}{Frequency~Jump}\nonumber
\end{tcolorbox}
\ref{coloureq} \eqref{coloureq}
\end{document}
```

• can you extend the example to show how \ref and \eqref would be used? – barbara beeton Sep 8 '15 at 19:30
• @barbara beeton: Done, milady! – Bernard Sep 8 '15 at 19:47
• With this approach, how would you change the color of the equation number text to ocre in the equation environment? – Joe Sep 8 '15 at 20:03
• In the first pair of empty braces (where usually one puts an opening bracket or parenthesis). See my updated answer. – Bernard Sep 8 '15 at 20:10
• You're welcome. 'Twas a pleasure to help. – Bernard Sep 8 '15 at 20:32
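A small usage note (my addition, based on mathtools' documented tag forms rather than either answer): \usetagform takes effect from the point where it is issued, so the boxed style can be scoped and then reverted with the predefined default tag form:

```
\usetagform{boxed}   % boxed, coloured tags from this point on
% ... equations typeset here receive the boxed tag ...
\usetagform{default} % revert to the standard parenthesised tags
```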
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7814209461212158, "perplexity": 5204.759496288459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178368431.60/warc/CC-MAIN-20210304021339-20210304051339-00257.warc.gz"}
https://hackage-origin.haskell.org/package/ghc-prim-0.5.0.0/docs/GHC-Types.html
ghc-prim-0.5.0.0: GHC primitives

GHC.Types

Description

GHC type definitions. Use GHC.Exts from the base package instead of importing this module directly.

# Documentation

data Bool Source #

Constructors
  False
  True

data Char Source #

The character type Char is an enumeration whose values represent Unicode (or equivalently ISO/IEC 10646) characters (see http://www.unicode.org/ for details). This set extends the ISO 8859-1 (Latin-1) character set (the first 256 characters), which is itself an extension of the ASCII character set (the first 128 characters). A character literal in Haskell has type Char.

To convert a Char to or from the corresponding Int value defined by Unicode, use toEnum and fromEnum from the Enum class respectively (or equivalently ord and chr).

Constructors
  C# Char#

data Int Source #

A fixed-precision integer type with at least the range [-2^29 .. 2^29-1]. The exact range for a given implementation can be determined by using minBound and maxBound from the Bounded class.

Constructors
  I# Int#

data Word Source #

A Word is an unsigned integral type, with the same size as Int.

Constructors
  W# Word#

data Float Source #

Single-precision floating point numbers. It is desirable that this type be at least equal in range and precision to the IEEE single-precision type.

Constructors
  F# Float#

data Double Source #

Double-precision floating point numbers. It is desirable that this type be at least equal in range and precision to the IEEE double-precision type.

Constructors
  D# Double#

data Ordering Source #

Constructors
  LT
  EQ
  GT

newtype IO a Source #

A value of type IO a is a computation which, when performed, does some I/O before returning a value of type a.

There is really only one way to "perform" an I/O action: bind it to Main.main in your program. When your program is run, the I/O will be performed. It isn't possible to perform I/O from an arbitrary function, unless that function is itself in the IO monad and called at some point, directly or indirectly, from Main.main.

IO is a monad, so IO actions can be combined using either the do-notation or the >> and >>= operations from the Monad class.

Constructors
  IO (State# RealWorld -> (# State# RealWorld, a #))

isTrue# :: Int# -> Bool Source #

Alias for tagToEnum#. Returns True if its parameter is 1# and False if it is 0#.

data SPEC Source #

SPEC is used by GHC in the SpecConstr pass in order to inform the compiler when to be particularly aggressive. In particular, it tells GHC to specialize regardless of size or the number of specializations. However, not all loops fall into this category. Libraries can specify this by using the SPEC data type to inform which loops should be aggressively specialized.

Constructors
  SPEC
  SPEC2

data Nat Source #

(Kind) This is the kind of type-level natural numbers.

data Symbol Source #

(Kind) This is the kind of type-level symbols. Declared here because class IP needs it.

class a ~~ b Source #

Lifted, heterogeneous equality. By lifted, we mean that it can be bogus (deferred type error). By heterogeneous, the two types a and b might have different kinds. Because ~~ can appear unexpectedly in error messages to users who do not care about the difference between heterogeneous equality ~~ and homogeneous equality ~, this is printed as ~ unless -fprint-equality-relations is set.

class Coercible a b Source #

Coercible is a two-parameter class that has instances for types a and b if the compiler can infer that they have the same representation.
This class does not have regular instances; instead they are created on-the-fly during type-checking. Trying to manually declare an instance of Coercible is an error.

Nevertheless one can pretend that the following three kinds of instances exist. First, as a trivial base-case:

instance Coercible a a

Furthermore, for every type constructor there is an instance that allows one to coerce under the type constructor. For example, let D be a prototypical type constructor (data or newtype) with three type arguments, which have roles nominal, representational resp. phantom. Then there is an instance of the form

instance Coercible b b' => Coercible (D a b c) (D a b' c')

Note that the nominal type arguments are equal, the representational type arguments can differ, but need to have a Coercible instance themselves, and the phantom type arguments can be changed arbitrarily.

The third kind of instance exists for every newtype NT = MkNT T and comes in two variants, namely

instance Coercible a T => Coercible a NT
instance Coercible T b => Coercible NT b

This instance is only usable if the constructor MkNT is in scope.

If, as a library author of a type constructor like Set a, you want to prevent a user of your module from writing coerce :: Set T -> Set NT, you need to set the role of Set's type parameter to nominal, by writing

type role Set nominal

For more details about this feature, please refer to Safe Coercions by Joachim Breitner, Richard A. Eisenberg, Simon Peyton Jones and Stephanie Weirich.

Since: 4.7.0.0

data TYPE a :: RuntimeRep -> * Source #

data RuntimeRep Source #

GHC maintains a property that the kind of all inhabited types (as distinct from type constructors or type-level data) tells us the runtime representation of values of that type. This datatype encodes the choice of runtime value. Note that TYPE is parameterised by RuntimeRep; this is precisely what we mean by the fact that a type's kind encodes the runtime representation.

For boxed values (that is, values that are represented by a pointer), a further distinction is made, between lifted types (that contain ⊥), and unlifted ones (that don't).

Constructors
  VecRep VecCount VecElem
    a SIMD vector type
  PtrRepLifted
    lifted; represented by a pointer
  PtrRepUnlifted
    unlifted; represented by a pointer
  VoidRep
    erased entirely
  IntRep
    signed, word-sized value
  WordRep
    unsigned, word-sized value
  Int64Rep
    signed, 64-bit value (on 32-bit only)
  Word64Rep
    unsigned, 64-bit value (on 32-bit only)
  AddrRep
    A pointer, but not to a Haskell value
  FloatRep
    a 32-bit floating point number
  DoubleRep
    a 64-bit floating point number
  UnboxedTupleRep
    An unboxed tuple; this doesn't specify a concrete rep

type Type Source #

The kind of types with values. For example Int :: Type.

type * Source #

A backward-compatible (pre-GHC 8.0) synonym for Type.

type ★ Source #

A unicode backward-compatible (pre-GHC 8.0) synonym for Type.

data Constraint Source #

The kind of constraints, like Show a.

data VecCount Source #

Length of a SIMD vector type

Constructors
  Vec2
  Vec4
  Vec8
  Vec16
  Vec32
  Vec64

data VecElem Source #

Element of a SIMD vector type

# Runtime type representation

data Module Source #

Constructors
  Module TrName TrName

data TrName Source #

Constructors
https://questions.examside.com/past-years/medical/question/in-a-first-order-reaction-a-to-b-if-k-is-rate-consta-neet-chemistry-chemical-kinetics-bzvt1d5a93kgy2pg
1

### AIPMT 2007

In a first-order reaction A $\to$ B, if k is the rate constant and the initial concentration of the reactant A is 0.5 M, then the half-life is

A ${{\log 2} \over k}$
B ${{\log 2} \over {k\sqrt {0.5} }}$
C ${{\ln 2} \over k}$
D ${{0.693} \over {0.5k}}$

## Explanation

For a first-order reaction, $k = {{2.303} \over t}\log {a \over {a - x}}$.

At ${t_{1/2}}$, $x = {a \over 2}$, so

${t_{1/2}} = {{2.303} \over k}\log {a \over {a - {a \over 2}}} = {{\ln 2} \over k}$

Note that the half-life of a first-order reaction is independent of the initial concentration, so the 0.5 M value plays no role.

2

### AIPMT 2007

If 60% of a first order reaction was completed in 60 minutes, 50% of the same reaction would be completed in approximately (log 4 = 0.60, log 5 = 0.69)

A 45 minutes
B 60 minutes
C 40 minutes
D 50 minutes

## Explanation

For a first order reaction,

$k = {{2.303} \over t}\log {a \over {a - x}} = {{2.303} \over {60}}\log {{100} \over {40}} = {{2.303} \over {60}}\log 2.5 = 0.0153\ \text{min}^{-1}$

Also, ${t_{1/2}} = {{2.303} \over k}\log {{100} \over {50}} = {{2.303} \over {0.0153}}\log 2 \approx 45$ min. (A short numerical check of these two results appears after question 4.)

3

### AIPMT 2006

For the reaction 2A + B $\to$ 3C + D, which of the following does not express the reaction rate?

A $- {{d\left[ A \right]} \over {2dt}}$
B $- {{d\left[ C \right]} \over {3dt}}$
C $- {{d\left[ B \right]} \over {dt}}$
D ${{d\left[ D \right]} \over {dt}}$

## Explanation

Given 2A + B $\to$ 3C + D,

Rate of reaction = $- {1 \over 2}{{d\left[ A \right]} \over {dt}} = - {{d\left[ B \right]} \over {dt}} = {1 \over 3}{{d\left[ C \right]} \over {dt}} = {{d\left[ D \right]} \over {dt}}$

Since C is a product, its concentration increases with time, so the rate must carry a positive sign with $d[C]/dt$; the expression $- {{d\left[ C \right]} \over {3dt}}$ therefore does not express the reaction rate.

4

### AIPMT 2006

Consider the reaction: N2(g) + 3H2(g) $\to$ 2NH3(g). The equality relationship between ${{d\left[ {N{H_3}} \right]} \over {dt}}$ and $- {{d\left[ {{H_2}} \right]} \over {dt}}$ is

A ${{d\left[ {N{H_3}} \right]} \over {dt}} = - {{d\left[ {{H_2}} \right]} \over {dt}}$
B ${{d\left[ {N{H_3}} \right]} \over {dt}} = - {1 \over 3}{{d\left[ {{H_2}} \right]} \over {dt}}$
C $+ {{d\left[ {N{H_3}} \right]} \over {dt}} = - {2 \over 3}{{d\left[ {{H_2}} \right]} \over {dt}}$
D $+ {{d\left[ {N{H_3}} \right]} \over {dt}} = - {3 \over 2}{{d\left[ {{H_2}} \right]} \over {dt}}$

## Explanation

For N2(g) + 3H2(g) $\to$ 2NH3(g),

Rate = ${{ - d\left[ {{N_2}} \right]} \over {dt}} = - {1 \over 3}{{d\left[ {{H_2}} \right]} \over {dt}} = {1 \over 2}{{d\left[ {N{H_3}} \right]} \over {dt}}$

$\Rightarrow$ ${{d\left[ {N{H_3}} \right]} \over {dt}} = - {2 \over 3}{{d\left[ {{H_2}} \right]} \over {dt}}$
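As a numerical check of the two first-order kinetics results above, here is a small Haskell sketch (our own addition; the function names are hypothetical, not from the source):

-- Rate constant from "fraction f completed in time t" for a first-order
-- reaction: k = (2.303 / t) * log10 (a / (a - x)) = (2.303 / t) * log10 (1 / (1 - f))
kFromCompletion :: Double -> Double -> Double
kFromCompletion t f = (2.303 / t) * logBase 10 (1 / (1 - f))

-- t_1/2 = ln 2 / k, independent of the initial concentration
halfLife :: Double -> Double
halfLife k = log 2 / k

main :: IO ()
main = do
  let k = kFromCompletion 60 0.60  -- 60% complete in 60 minutes
  print k                          -- ~0.0153 per minute
  print (halfLife k)               -- ~45 minutes, matching option A of question 2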
https://iaifi.org/papers.html
IAIFI Papers View high energy physics IAIFI papers on INSPIRE Poisson Flow Generative Models Yilun Xu, Ziming Liu, Max Tegmark, Tommi Jaakkola [ arXiv:2209.11178 | code ] Abstract We propose a new "Poisson flow" generative model (PFGM) that maps a uniform distribution on a high-dimensional hemisphere into any data distribution. We interpret the data points as electrical charges on the z=0 hyperplane in a space augmented with an additional dimension z, generating a high-dimensional electric field (the gradient of the solution to Poisson equation). We prove that if these charges flow upward along electric field lines, their initial distribution in the z=0 plane transforms into a distribution on the hemisphere of radius r that becomes uniform in the r→∞ limit. To learn the bijective transformation, we estimate the normalized field in the augmented space. For sampling, we devise a backward ODE that is anchored by the physically meaningful additional dimension: the samples hit the unaugmented data manifold when the z reaches zero. Experimentally, PFGM achieves current state-of-the-art performance among the normalizing flow models on CIFAR-10, with an Inception score of 9.68 and a FID score of 2.48. It also performs on par with the state-of-the-art SDE approaches while offering 10× to 20× acceleration on image generation tasks. Additionally, PFGM appears more tolerant of estimation errors on a weaker network architecture and robust to the step size in the Euler method. Inferring subhalo effective density slopes from strong lensing observations with neural likelihood-ratio estimation Gemma Zhang, Siddharth Mishra-Sharma, Cora Dvorkin [ arXiv:2208.13796 ] Abstract Strong gravitational lensing has emerged as a promising approach for probing dark matter models on sub-galactic scales. Recent work has proposed the subhalo effective density slope as a more reliable observable than the commonly used subhalo mass function. The subhalo effective density slope is a measurement independent of assumptions about the underlying density profile and can be inferred for individual subhalos through traditional sampling methods. To go beyond individual subhalo measurements, we leverage recent advances in machine learning and introduce a neural likelihood-ratio estimator to infer an effective density slope for populations of subhalos. We demonstrate that our method is capable of harnessing the statistical power of multiple subhalos (within and across multiple images) to distinguish between characteristics of different subhalo populations. The computational efficiency warranted by the neural likelihood-ratio estimator over traditional sampling enables statistical studies of dark matter perturbers and is particularly useful as we expect an influx of strong lensing systems from upcoming surveys. Neural Embedding: Learning the Embedding of the Manifold of Physics Data Sang Eon Park, Philip Harris, Bryan Ostdiek [ arXiv:2208.05484 ] Abstract In this paper, we present a method of embedding physics data manifolds with metric structure into lower dimensional spaces with simpler metrics, such as Euclidean and Hyperbolic spaces. We then demonstrate that it can be a powerful step in the data analysis pipeline for many applications. Using progressively more realistic simulated collisions at the Large Hadron Collider, we show that this embedding approach learns the underlying latent structure. 
With the notion of volume in Euclidean spaces, we provide for the first time a viable solution to quantifying the true search capability of model agnostic search algorithms in collider physics (i.e. anomaly detection). Finally, we discuss how the ideas presented in this paper can be employed to solve many practical challenges that require the extraction of physically meaningful representations from information in complex high dimensional datasets. Modeling early-universe energy injection with Dense Neural Networks Yitian Sun, Tracy R. Slatyer [ arXiv:2207.06425 ] Abstract We show that Dense Neural Networks can be used to accurately model the cooling of high-energy particles in the early universe, in the context of the public code package DarkHistory. DarkHistory self-consistently computes the temperature and ionization history of the early universe in the presence of exotic energy injections, such as might arise from the annihilation or decay of dark matter. The original version of DarkHistory uses large pre-computed transfer function tables to evolve photon and electron spectra in redshift steps, which require a significant amount of memory and storage space. We present a light version of DarkHistory that makes use of simple Dense Neural Networks to store and interpolate the transfer functions, which performs well on small computers without heavy memory or storage usage. This method anticipates future expansion with additional parametric dependence in the transfer functions without requiring exponentially larger data tables. Strong Lensing Source Reconstruction Using Continuous Neural Fields Siddharth Mishra-Sharma, Ge Yang [ arXiv:2206.14820 ] Abstract From the nature of dark matter to the rate of expansion of our Universe, observations of distant galaxies distorted through strong gravitational lensing have the potential to answer some of the major open questions in astrophysics. Modeling galaxy-galaxy strong lensing observations presents a number of challenges as the exact configuration of both the background source and foreground lens galaxy is unknown. A timely call, prompted by a number of upcoming surveys anticipating high-resolution lensing images, demands methods that can efficiently model lenses at their full complexity. In this work, we introduce a method that uses continuous neural fields to non-parametrically reconstruct the complex morphology of a source galaxy while simultaneously inferring a distribution over foreground lens galaxy configurations. We demonstrate the efficacy of our method through experiments on simulated data targeting high-resolution lensing images similar to those anticipated in near-future astrophysical surveys. The Dark Energy Camera Plane Survey 2 (DECaPS2): More Sky, Less Bias, and Better Uncertainties A. K. Saydjari, E. F. Schlafly, D. Lang, A. M. Meisner, G. M. Green, C. Zucker, I. Zelko, J. S. Speagle, T. Daylan, A. Lee, F. Valdes, D. Schlegel, D. P. Finkbeiner [ arXiv:2206.11909 ] Abstract Deep optical and near-infrared imaging of the entire Galactic plane is essential for understanding our Galaxy's stars, gas, and dust. The second data release of the DECam Plane Survey (DECaPS2) extends the five-band optical and near-infrared survey of the southern Galactic plane to cover 6.5% of the sky, |b| < 10° and 6° > l > -124°, complementary to coverage by Pan-STARRS1.
Typical single-exposure effective depths, including crowding effects and other complications, are 23.5, 22.6, 22.1, 21.6, and 20.8 mag in g, r, i, z, and Y bands, respectively, with around 1 arcsecond seeing. The survey comprises 3.32 billion objects built from 34 billion detections in 21.4 thousand exposures, totaling 260 hours open shutter time on the Dark Energy Camera (DECam) at Cerro Tololo. The data reduction pipeline features several improvements, including the addition of synthetic source injection tests to validate photometric solutions across the entire survey footprint. A convenient functional form for the detection bias in the faint limit was derived and leveraged to characterize the photometric pipeline performance. A new post-processing technique was applied to every detection to de-bias and improve uncertainty estimates of the flux in the presence of structured backgrounds, specifically targeting nebulosity. The images and source catalogs are publicly available at http://decaps.skymaps.info/ Simplifying Polylogarithms with Machine Learning Aurélien Dersy, Matthew D. Schwartz, Xiaoyuan Zhang [ arXiv:2206.04115 ] Abstract Polylogarithmic functions, such as the logarithm or dilogarithm, satisfy a number of algebraic identities. For the logarithm, all the identities follow from the product rule. For the dilogarithm and higher-weight classical polylogarithms, the identities can involve five functions or more. In many calculations relevant to particle physics, complicated combinations of polylogarithms often arise from Feynman integrals. Although the initial expressions resulting from the integration usually simplify, it is often difficult to know which identities to apply and in what order. To address this bottleneck, we explore to what extent machine learning methods can help. We consider both a reinforcement learning approach, where the identities are analogous to moves in a game, and a transformer network approach, where the problem is viewed analogously to a language-translation task. While both methods are effective, the transformer network appears more powerful and holds promise for practical use in symbolic manipulation tasks in mathematical physics.
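To give a concrete sense of the identities the abstract above refers to, a standard weight-two example (our illustration, not drawn from the paper) is Euler's reflection identity for the dilogarithm:

$$\mathrm{Li}_2(x) + \mathrm{Li}_2(1-x) = \frac{\pi^2}{6} - \ln(x)\ln(1-x)$$

Recognizing when a combination of dilogarithms can be collapsed using relations of this kind, and their five-term generalizations, is exactly the search problem that the reinforcement learning and transformer approaches are asked to solve.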
Stable Object Reorientation using Contact Plane Registration Richard Li, Carlos Esteves, Ameesh Makadia, Pulkit Agrawal International Conference on Robotics and Automation 2022 [ ] Abstract We present a system for accurately predicting stable orientations for diverse rigid objects. We propose to overcome the critical issue of modelling multimodality in the space of rotations by using a conditional generative model to accurately classify contact surfaces. Our system is capable of operating from noisy and partially-observed pointcloud observations captured by real world depth cameras. Our method substantially outperforms the current state-of-the-art systems on a simulated stacking task requiring highly accurate rotations, and demonstrates strong sim2real zero-shot transfer results across a variety of unseen objects on a real world reorientation task. Revealing the Milky Way’s Most Recent Major Merger with a Gaia EDR3 Catalog of Machine-Learned Line-of-Sight Velocities Adriana Dropulic, Hongwan Liu, Bryan Ostdiek, Mariangela Lisanti [ arXiv:2205.12278 ] Abstract Machine learning can play a powerful role in inferring missing line-of-sight velocities from astrometry in surveys such as Gaia. In this paper, we apply a neural network to Gaia Early Data Release 3 (EDR3) and obtain line-of-sight velocities and associated uncertainties for ~92 million stars. The network, which takes as input a star's parallax, angular coordinates, and proper motions, is trained and validated on ~6.4 million stars in Gaia with complete phase-space information. The network's uncertainty on its velocity prediction is a key aspect of its design; by properly convolving these uncertainties with the inferred velocities, we obtain accurate stellar kinematic distributions. As a first science application, we use the new network-completed catalog to identify candidate stars that belong to the Milky Way's most recent major merger, Gaia-Sausage-Enceladus (GSE). We present the kinematic, energy, angular momentum, and spatial distributions of the ~450,000 GSE candidates in this sample, and also study the chemical abundances of those with cross matches to GALAH and APOGEE. The network's predictive power will only continue to improve with future Gaia data releases as the training set of stars with complete phase-space information grows. This work provides a first demonstration of how to use machine learning to exploit high-dimensional correlations on data to infer line-of-sight velocities, and offers a template for how to train, validate and apply such a neural network when complete observational data is not available. Towards Understanding Grokking: An Effective Theory of Representation Learning Ziming Liu, Ouail Kitouni, Niklas Nolte, Eric J. Michaud, Max Tegmark, Mike Williams [ arXiv:2205.10343 ] Abstract We aim to understand grokking, a phenomenon where models generalize long after overfitting their training set. We present both a microscopic analysis anchored by an effective theory and a macroscopic analysis of phase diagrams describing learning performance across hyperparameters. We find that generalization originates from structured representations whose training dynamics and dependence on training set size can be predicted by our effective theory in a toy setting. We observe empirically the presence of four learning phases: comprehension, grokking, memorization, and confusion. We find representation learning to occur only in a 'Goldilocks zone' (including comprehension and grokking) between memorization and confusion. Compared to the comprehension phase, the grokking phase stays closer to the memorization phase, leading to delayed generalization. The Goldilocks phase is reminiscent of 'intelligence from starvation' in Darwinian evolution, where resource limitations drive discovery of more efficient solutions. This study not only provides intuitive explanations of the origin of grokking, but also highlights the usefulness of physics-inspired tools, e.g., effective theories and phase diagrams, for understanding deep learning. Power Counting Energy Flow Polynomials Pedro Cal, Jesse Thaler, Wouter J. Waalewijn [ arXiv:2205.06818 ] Abstract Power counting is a systematic strategy for organizing collider observables and their associated theoretical calculations. In this paper, we use power counting to characterize a class of jet substructure observables called energy flow polynomials (EFPs). EFPs provide an overcomplete linear basis for infrared-and-collinear safe jet observables, but it is known that in practice, a small subset of EFPs is often sufficient for specific jet analysis tasks.
By applying power counting arguments, we obtain linear relationships between EFPs that hold for quark and gluon jets to a specific order in the power counting. We test these relations in the parton shower generator Pythia, finding excellent agreement. Power counting allows us to truncate the basis of EFPs without affecting performance, which we corroborate through a study of quark-gluon tagging and regression. Bias and Priors in Machine Learning Calibrations for High Energy Physics Rikab Gambhir, Benjamin Nachman, Jesse Thaler [ arXiv:2205.05084 ] Abstract Machine learning offers an exciting opportunity to improve the calibration of nearly all reconstructed objects in high-energy physics detectors. However, machine learning approaches often depend on the spectra of examples used during training, an issue known as prior dependence. This is an undesirable property of a calibration, which needs to be applicable in a variety of environments. The purpose of this paper is to explicitly highlight the prior dependence of some machine learning-based calibration strategies. We demonstrate how some recent proposals for both simulation-based and data-based calibrations inherit properties of the sample used for training, which can result in biases for downstream analyses. In the case of simulation-based calibration, we argue that our recently proposed Gaussian Ansatz approach can avoid some of the pitfalls of prior dependence, whereas prior-independent data-based calibration remains an open problem. Disentangling Quarks and Gluons with CMS Open Data Patrick T. Komiske, Serhii Kryhin, Jesse Thaler [ arXiv:2205.04459 ] Abstract We study quark and gluon jets separately using public collider data from the CMS experiment. Our analysis is based on 2.3/fb of proton-proton collisions at 7 TeV, collected at the Large Hadron Collider in 2011. We define two non-overlapping samples via a pseudorapidity cut -- central jets with |eta| < 0.65 and forward jets with |eta| > 0.65 -- and employ jet topic modeling to extract individual distributions for the maximally separable categories. Under certain assumptions, such as sample independence and mutual irreducibility, these categories correspond to "quark" and "gluon" jets, as given by a recently proposed operational definition. We consider a number of different methods for extracting reducibility factors from the central and forward datasets, from which the fractions of quark jets in each sample can be determined. The greatest stability and robustness to statistical uncertainties is achieved by a novel method based on parametrizing the endpoints of a receiver operating characteristic (ROC) curve. To mitigate detector effects, which would otherwise induce unphysical differences between central and forward jets, we use the OmniFold method to perform central value unfolding. As a demonstration of the power of this method, we extract the intrinsic dimensionality of the quark and gluon jet samples, which exhibit Casimir scaling, as expected from the strongly-ordered limit. To our knowledge, this work is the first application of full phase space unfolding to real collider data, and one of the first applications of topic modeling to extract separate quark and gluon distributions at the LHC. 
Learning Uncertainties the Frequentist Way: Calibration and Correlation in High Energy Physics Rikab Gambhir, Benjamin Nachman, Jesse Thaler [ arXiv:2205.03413 ] Abstract Calibration is a common experimental physics problem, whose goal is to infer the value and uncertainty of an unobservable quantity Z given a measured quantity X. Additionally, one would like to quantify the extent to which X and Z are correlated. In this paper, we present a machine learning framework for performing frequentist maximum likelihood inference with Gaussian uncertainty estimation, which also quantifies the mutual information between the unobservable and measured quantities. This framework uses the Donsker-Varadhan representation of the Kullback-Leibler divergence -- parametrized with a novel Gaussian Ansatz -- to enable a simultaneous extraction of the maximum likelihood values, uncertainties, and mutual information in a single training. We demonstrate our framework by extracting jet energy corrections and resolution factors from a simulation of the CMS detector at the Large Hadron Collider. By leveraging the high-dimensional feature space inside jets, we improve upon the nominal CMS jet resolution by upwards of 15%. Rapid Locomotion via Reinforcement Learning Gabriel B. Margolis, Ge Yang, Kartik Paigwar, Tao Chen, Pulkit Agrawal [ arXiv:2205.02824 ] Abstract Agile maneuvers such as sprinting and high-speed turning in the wild are challenging for legged robots. We present an end-to-end learned controller that achieves record agility for the MIT Mini Cheetah, sustaining speeds up to 3.9m/s. This system runs and turns fast on natural terrains like grass, ice, and gravel and responds robustly to disturbances. Our controller is a neural network trained in simulation via reinforcement learning and transferred to the real world. The two key components are (i) an adaptive curriculum on velocity commands and (ii) an online system identification strategy for sim-to-real transfer leveraged from prior work. Videos of the robot’s behaviors are available at https://agility.csail.mit.edu/. Going Beyond the Galaxy Power Spectrum: an Analysis of BOSS Data with Wavelet Scattering Transforms Georgios Valogiannis, Cora Dvorkin [ arXiv:2204.13717 ] Abstract We perform the first application of the wavelet scattering transform (WST) on actual galaxy observations, through a WST analysis of the BOSS DR12 CMASS dataset. We lay out the detailed procedure on how to capture all necessary layers of realism for an application on data obtained from a spectroscopic survey, including the effects of redshift-space anisotropy, non-trivial survey geometry, the shortcomings of the dataset through a set of systematic weights and the Alcock-Paczynski distortion effect. In order to capture the cosmological dependence of the WST, we use galaxy mocks obtained from the state-of-the-art ABACUSSUMMIT simulations, tuned to match the anisotropic correlation function of the BOSS CMASS sample in the redshift range 0.46<z<0.60. 
Using our theory model for the WST coefficients, as well as for the first 2 multipoles of the galaxy power spectrum, that we use as reference, we perform a likelihood analysis of the CMASS data and obtain the posterior probability distributions of 4 cosmological parameters, {ωb, ωc, ns, σ8}, as well as the Hubble constant, derived from a fixed value of the angular size of the sound horizon at last scattering measured by the Planck satellite, all of which are marginalized over the 7 nuisance parameters of the Halo Occupation Distribution model. The WST is found to deliver a substantial improvement in the values of the predicted 1σ errors compared to the regular power spectrum, which are tighter by a factor in the range 3−6 in the case of flat and uninformative priors and by a factor of 4−28, when a Big Bang Nucleosynthesis prior is applied on the value of ωb. Furthermore, in the latter case, we obtain a 0.6% measurement of the Hubble constant. Our results are investigative and subject to certain approximations in our analysis, that we discuss in the text. DiffCSE: Difference-based Contrastive Learning for Sentence Embeddings Yung-Sung Chuang, Rumen Dangovski, Hongyin Luo, Yang Zhang, Shiyu Chang, Marin Soljačić, Shang-Wen Li, Wen-tau Yih, Yoon Kim, James Glass [ arXiv:2204.10298 ] Abstract We propose DiffCSE, an unsupervised contrastive learning framework for learning sentence embeddings. DiffCSE learns sentence embeddings that are sensitive to the difference between the original sentence and an edited sentence, where the edited sentence is obtained by stochastically masking out the original sentence and then sampling from a masked language model. We show that DiffCSE is an instance of equivariant contrastive learning (Dangovski et al., 2021), which generalizes contrastive learning and learns representations that are insensitive to certain types of augmentations and sensitive to other "harmful" types of augmentations. Our experiments show that DiffCSE achieves state-of-the-art results among unsupervised sentence representation learning methods, outperforming unsupervised SimCSE by 2.3 absolute points on semantic textual similarity tasks. Photometrically-Classified Superluminous Supernovae from the Pan-STARRS1 Medium Deep Survey: A Case Study for Science with Machine Learning-Based Classification Brian Hsu, Griffin Hosseinzadeh, V. Ashley Villar, Edo Berger [ arXiv:2204.09809 ] Abstract With the upcoming Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST), it is expected that only ∼0.1% of all transients will be classified spectroscopically. To conduct studies of rare transients, such as Type I superluminous supernovae (SLSNe), we must instead rely on photometric classification. In this vein, here we carry out a pilot study of SLSNe from the Pan-STARRS1 Medium-Deep Survey (PS1-MDS) classified photometrically with our SuperRAENN and Superphot algorithms. We first construct a sub-sample of the photometric sample using a list of simple selection metrics designed to minimize contamination and ensure sufficient data quality for modeling. We then fit the multi-band light curves with a magnetar spin-down model using the Modular Open-Source Fitter for Transients (MOSFiT).
Comparing the magnetar engine and ejecta parameter distributions of the photometric sample to those of the PS1-MDS spectroscopic sample and a larger literature spectroscopic sample, we find that these samples are overall consistent, but that the photometric sample extends to slower spins and lower ejecta masses, which correspond to lower luminosity events, as expected for photometric selection. While our PS1-MDS photometric sample is still smaller than the overall SLSN spectroscopic sample, our methodology paves the way to an orders-of-magnitude increase in the SLSN sample in the LSST era through photometric selection and study. Luminous Supernovae: Unveiling a Population Between Superluminous and Normal Core-collapse Supernovae Sebastian Gomez, Edo Berger, Matt Nicholl, Peter K. Blanchard, Griffin Hosseinzadeh [ arXiv:2204.08486 ] Pareto-optimal clustering with the primal deterministic information bottleneck Andrew K. Tan, Max Tegmark, Isaac L. Chuang Entropy, 2022, 24(6) [ arXiv:2204.02489 ] Abstract At the heart of both lossy compression and clustering is a trade-off between the fidelity and size of the learned representation. Our goal is to map out and study the Pareto frontier that quantifies this trade-off. We focus on the Deterministic Information Bottleneck (DIB) formulation of lossy compression, which can be interpreted as a clustering problem. To this end, we introduce the primal DIB problem, which we show results in a much richer frontier than its previously studied dual counterpart. We present an algorithm for mapping out the Pareto frontier of the primal DIB trade-off that is also applicable to most other two-objective clustering problems. We study general properties of the Pareto frontier, and give both analytic and numerical evidence for logarithmic sparsity of the frontier in general. We provide evidence that our algorithm has polynomial scaling despite the super-exponential search space; and additionally propose a modification to the algorithm that can be used where sampling noise is expected to be significant. Finally, we use our algorithm to map the DIB frontier of three different tasks: compressing the English alphabet, extracting informative color classes from natural images, and compressing a group theory inspired dataset, revealing interesting features of the frontier, and demonstrating how the structure of the frontier can be used for model selection with a focus on points previously hidden by the cloak of the convex hull. AI Poincaré 2.0: Machine Learning Conservation Laws from Differential Equations Ziming Liu, Varun Madhavan, Max Tegmark [ arXiv:2203.12610 ] Abstract We present a machine learning algorithm that discovers conservation laws from differential equations, both numerically (parametrized as neural networks) and symbolically, ensuring their functional independence (a non-linear generalization of linear independence). Our independence module can be viewed as a nonlinear generalization of singular value decomposition. Our method can readily handle inductive biases for conservation laws. We validate it with examples including the 3-body problem, the KdV equation and nonlinear Schrödinger equation. Unsupervised Semantic Segmentation by Distilling Feature Correspondences Mark Hamilton, Zhoutong Zhang, Bharath Hariharan, Noah Snavely, William T. Freeman [ arXiv:2203.08414 ] Abstract Unsupervised semantic segmentation aims to discover and localize semantically meaningful categories within image corpora without any form of annotation.
To solve this task, algorithms must produce features for every pixel that are both semantically meaningful and compact enough to form distinct clusters. Unlike previous works which achieve this with a single end-to-end framework, we propose to separate feature learning from cluster compactification. Empirically, we show that current unsupervised feature learning frameworks already generate dense features whose correlations are semantically consistent. This observation motivates us to design STEGO (Self-supervised Transformer with Energy-based Graph Optimization), a novel framework that distills unsupervised features into high-quality discrete semantic labels. At the core of STEGO is a novel contrastive loss function that encourages features to form compact clusters while preserving their relationships across the corpora. STEGO yields a significant improvement over the prior state of the art, on both the CocoStuff (+14 mIoU) and Cityscapes (+9 mIoU) semantic segmentation challenges. Categorical Representation Learning and RG flow operators for algorithmic classifiers Artan Sheshmani, Yizhuang You, Wenbo Fu, Ahmadreza Azizi [ arXiv:2203.07975 ] Abstract Following the earlier formalism of the categorical representation learning (arXiv:2103.14770) by the first two authors, we discuss the construction of the RG-flow based categorifier. Borrowing ideas from the theory of renormalization group flows (RG) in quantum field theory, holographic duality, and hyperbolic geometry, and mixing them with neural ODEs, we construct a new algorithmic natural language processing (NLP) architecture, called the RG-flow categorifier or for short the RG categorifier, which is capable of data classification and generation in all layers. We apply our algorithmic platform to biomedical data sets and show its performance in the field of sequence-to-function mapping. In particular we apply the RG categorifier to particular genomic sequences of flu viruses and show how our technology is capable of extracting the information from given genomic sequences, find their hidden symmetries and dominant features, classify them and use the trained data to make stochastic prediction of new plausible generated sequences associated with new set of viruses which could avoid the human immune system. The content of the current article is part of the recent US patent application submitted by first two authors (U.S. Patent Application No.: 63/313.504). Creating Simple, Interpretable Anomaly Detectors for New Physics in Jet Substructure Layne Bradshaw, Spencer Chang, Bryan Ostdiek [ arXiv:2203.01343 ] Abstract Anomaly detection with convolutional autoencoders is a popular method to search for new physics in a model-agnostic manner. These techniques are powerful, but they are still a "black box," since we do not know what high-level physical observables determine how anomalous an event is. To address this, we adapt a recently proposed technique by Faucett et al., which maps out the physical observables learned by a neural network classifier, to the case of anomaly detection. We propose two different strategies that use a small number of high-level observables to mimic the decisions made by the autoencoder on background events. Despite the underlying differences in their approach, we find that both strategies have similar ordering performance as the autoencoder and independently use the same five high-level observables. From there, we compare the performance of these networks as anomaly detectors.
We find that both strategies perform similarly to the autoencoder across a variety of signals, giving a nontrivial demonstration that learning to order background events transfers to ordering a variety of signal events. Biological error correction codes generate fault-tolerant neural networks Alexander Zlokapa, Andrew K. Tan, John M. Martyn, Max Tegmark, Isaac L. Chuang [ arXiv:2202.12887 ] Abstract It has been an open question in deep learning if fault-tolerant computation is possible: can arbitrarily reliable computation be achieved using only unreliable neurons? In the mammalian cortex, analog error correction codes known as grid codes have been observed to protect states against neural spiking noise, but their role in information processing is unclear. Here, we use these biological codes to show that a universal fault-tolerant neural network can be achieved if the faultiness of each neuron lies below a sharp threshold, which we find coincides in order of magnitude with noise observed in biological neurons. The discovery of a sharp phase transition from faulty to fault-tolerant neural computation opens a path towards understanding noisy analog systems in artificial intelligence and neuroscience. Flow-based sampling in the lattice Schwinger model at criticality Michael S. Albergo, Denis Boyda, Kyle Cranmer, Daniel C. Hackett, Gurtej Kanwar, Sébastien Racanière, Danilo J. Rezende, Fernando Romero-López, Phiala E. Shanahan, Julian M. Urban [ arXiv:2202.11712 ] Abstract Recent results suggest that flow-based algorithms may provide efficient sampling of field distributions for lattice field theory applications, such as studies of quantum chromodynamics and the Schwinger model. In this work, we provide a numerical demonstration of robust flow-based sampling in the Schwinger model at the critical value of the fermion mass. In contrast, at the same parameters, conventional methods fail to sample all parts of configuration space, leading to severely underestimated uncertainties. Topogivity: A Machine-Learned Chemical Rule for Discovering Topological Materials Andrew Ma, Yang Zhang, Thomas Christensen, Hoi Chun Po, Li Jing, Liang Fu, Marin Soljačić [ arXiv:2202.05255 ] Abstract Topological materials present unconventional electronic properties that make them attractive for both basic science and next-generation technological applications. The majority of currently-known topological materials have been discovered using methods that involve symmetry-based analysis of the quantum wavefunction. Here we use machine learning to develop a simple-to-use heuristic chemical rule that diagnoses with a high accuracy whether a material is topological using only its chemical formula. This heuristic rule is based on a notion that we term topogivity, a machine-learned numerical value for each element that loosely captures its tendency to form topological materials. We next implement a high-throughput strategy for discovering topological materials based on the heuristic topogivity-rule prediction followed by ab initio validation. This way, we discover new topological materials that are not diagnosable using symmetry indicators, including several that may be promising for experimental observation. Finite-Volume Pionless Effective Field Theory for Few-Nucleon Systems with Differentiable Programming Xiangkai Sun, William Detmold, Di Luo, Phiala E. 
Shanahan [ arXiv:2202.03530 ] Abstract Finite-volume pionless effective field theory provides an efficient framework for the extrapolation of nuclear spectra and matrix elements calculated at finite volume in lattice QCD to infinite volume, and to nuclei with larger atomic number. In this work, it is demonstrated how this framework may be implemented via a set of correlated Gaussian wavefunctions optimised using differentiable programming and via solution of a generalised eigenvalue problem. This approach is shown to be significantly more efficient than a stochastic implementation of the variational method based on the same form of correlated Gaussian wavefunctions, yielding comparably accurate representations of the ground-state wavefunctions with an order of magnitude fewer terms. The efficiency of representation allows such calculations to be extended to larger systems than in previous work. The method is demonstrated through calculations of the binding energies of nuclei with atomic number A∈{2,3,4} in finite volume, matched to lattice QCD calculations at quark masses corresponding to mπ=806 MeV, and infinite-volume effective field theory calculations of A∈{2,3,4,5,6} systems based on this matching. Constraining the Time of Gravitational Wave Emission from Core-Collapse Supernovae Kiranjyot Gill, Griffin Hosseinzadeh, Edo Berger, Michele Zanolin, Marek Szczepanczyk The Astrophysical Journal, 2022, Volume 931, Number 2 [ arXiv:2201.03609 ] Abstract The advent of sensitive gravitational wave (GW) detectors, coupled with wide-field, high cadence optical time-domain surveys, raises the possibility of the first joint GW-electromagnetic (EM) detections of core-collapse supernovae (CCSNe). For targeted searches of GWs from CCSNe, optical observations can be used to increase the sensitivity of the search by restricting the relevant time interval, defined here as the GW search window (GSW). The extent of the GSW is a critical factor in determining the achievable false alarm probability (FAP) for a triggered CCSN search. The ability to constrain the GSW from optical observations depends on how early a CCSN is detected, as well as the ability to model the early optical emission. Here we present several approaches to constrain the GSW, ranging in complexity from model-independent analytical fits of the early light curve, model-dependent fits of the rising or entire light curve, and a new data-driven approach using existing well-sampled CCSN light curves from Kepler and the Transiting Exoplanet Survey Satellite (TESS). We use these approaches to determine the time of core-collapse and its associated uncertainty (i.e., the GSW). We apply our methods to two Type II SNe that occurred during LIGO/Virgo Observing Run 3: SN 2019fcn and SN 2019ejj (both in the same galaxy at d = 15.7 Mpc). Our approach shortens the duration of the GSW and improves the robustness of the GSW compared to techniques used in past GW CCSN searches. Analyzing N-Point Energy Correlators Inside Jets with CMS Open Data Patrick T. Komiske, Ian Moult, Jesse Thaler, Hua Xing Zhu [ arXiv:2201.07800 | code ] Abstract Jets of hadrons produced at high-energy colliders provide experimental access to the dynamics of asymptotically free quarks and gluons and their confinement into hadrons.
In this paper, we show that the high energies of the Large Hadron Collider (LHC), together with the exceptional resolution of its detectors, allow multipoint correlation functions of energy flow operators to be directly measured within jets for the first time. Using Open Data from the CMS experiment, we show that reformulating jet substructure in terms of these correlators provides new ways of probing the dynamics of QCD jets, which enables direct imaging of the confining transition to free hadrons as well as precision measurements of the scaling properties and interactions of quarks and gluons. This opens a new era in our understanding of jet substructure and illustrates the immense unexploited potential of high-quality LHC data sets for elucidating the dynamics of QCD. Photometry on Structured Backgrounds: Local Pixelwise Infilling by Regression Andrew K. Saydjari, Douglas P. Finkbeiner [ arXiv:2201.07246 ] Abstract Photometric pipelines struggle to estimate both the flux and flux uncertainty for stars in the presence of structured backgrounds such as filaments or clouds. However, it is exactly stars in these complex regions that are critical to understanding star formation and the structure of the interstellar medium. We develop a method, similar to Gaussian process regression, which we term local pixelwise infilling (LPI). Using a local covariance estimate, we predict the background behind each star and the uncertainty on that prediction in order to improve estimates of flux and flux uncertainty. We show the validity of our model on synthetic data and real dust fields. We further demonstrate that the method is stable even in the crowded field limit. While we focus on optical-IR photometry, this method is not restricted to those wavelengths. We apply this technique to the 34 billion detections in the second data release of the Dark Energy Camera Plane Survey (DECaPS2). In addition to removing many >3σ outliers and improving uncertainty estimates by a factor of ∼2−3 on nebulous fields, we also show that our method is well-behaved on uncrowded fields. The entirely post-processing nature of our implementation of LPI photometry allows it to easily improve the flux and flux uncertainty estimates of past as well as future surveys. Cracking the Quantum Scaling Limit with Machine Learned Electron Densities Joshua A. Rackers, Lucas Tecot, Mario Geiger, Tess E. Smidt [ arXiv:2201.03726 ] Abstract A long-standing goal of science is to accurately solve the Schrödinger equation for large molecular systems. The poor scaling of current quantum chemistry algorithms on classical computers imposes an effective limit of about a few dozen atoms for which we can calculate molecular electronic structure. We present a machine learning (ML) method to break through this scaling limit and make quantum chemistry calculations of very large systems possible. We show that Euclidean Neural Networks can be trained to predict the electron density with high fidelity from limited data. Learning the electron density allows us to train a machine learning model on small systems and make accurate predictions on large ones. We show that this ML electron density model can break through the quantum scaling limit and calculate the electron density of systems of thousands of atoms with quantum accuracy. Impact of Massive Binary Star and Cosmic Evolution on Gravitational Wave Observations II: Double Compact Object Rates and Properties Floor S. 
Broekgaarden, Edo Berger, Simon Stevenson, Stephen Justham, Ilya Mandel, Martyna Chruślińska, Lieke A. C. van Son, Tom Wagg, Alejandro Vigna-Gómez, Selma E. de Mink, Debatri Chattopadhyay, Coenraad J. Neijssel [ arXiv:2112.05763 ] Abstract Making the most of the rapidly increasing population of gravitational-wave detections of black hole (BH) and neutron star (NS) mergers requires comparing observations with population synthesis predictions. In this work we investigate the combined impact from the key uncertainties in population synthesis modelling of the isolated binary evolution channel: the physical processes in massive binary-star evolution and the star formation history as a function of metallicity Z and redshift z, S(Z,z). Considering these uncertainties we create 560 different publicly available model realizations and calculate the rate and distribution characteristics of detectable BHBH, BHNS, and NSNS mergers. We find that our stellar evolution and S(Z,z) variations can impact the predicted intrinsic and detectable merger rates by factors of 10^2-10^4. We find that BHBH rates are dominantly impacted by S(Z,z) variations, NSNS rates by stellar evolution variations and BHNS rates by both. We then consider the combined impact from all uncertainties considered in this work on the detectable mass distribution shapes (chirp mass, individual masses and mass ratio). We find that the BHNS mass distributions are predominantly impacted by massive binary-star evolution changes. For BHBH and NSNS we find that both uncertainties are important. We also find that the shape of the delay time and birth metallicity distributions are typically dominated by the choice of S(Z,z) for BHBH, BHNS and NSNS. We identify several examples of robust features in the mass distributions predicted by all 560 models, such that we expect more than 95% of BHBH detections to contain a BH ≳ 8 M⊙ and have mass ratios ≲ 4. Our work demonstrates that it is essential to consider a wide range of allowed models to study double compact object merger rates and properties. SymmetryGAN: Symmetry Discovery with Deep Learning Krish Desai, Benjamin Nachman, Jesse Thaler Physical Review D, 2022, 105:096031 [ arXiv:2112.05722 ] Abstract What are the symmetries of a dataset? Whereas the symmetries of an individual data element can be characterized by its invariance under various transformations, the symmetries of an ensemble of data elements are ambiguous due to Jacobian factors introduced while changing coordinates. In this paper, we provide a rigorous statistical definition of the symmetries of a dataset, which involves inertial reference densities, in analogy to inertial frames in classical mechanics. We then propose SymmetryGAN as a novel and powerful approach to automatically discover symmetries using a deep learning method based on generative adversarial networks (GANs). When applied to Gaussian examples, SymmetryGAN shows excellent empirical performance, in agreement with expectations from the analytic loss landscape. SymmetryGAN is then applied to simulated dijet events from the Large Hadron Collider (LHC) to demonstrate the potential utility of this method in high energy collider physics applications. Going beyond symmetry discovery, we consider procedures to infer the underlying symmetry group from empirical data. Neural Descriptor Fields: SE(3) Equivariant Object Representations for Manipulation Anthony Simeonov, Yilun Du, Andrea Tagliasacchi, Joshua B.
Tenenbaum, Alberto Rodriguez, Pulkit Agrawal, Vincent Sitzmann [ arXiv:2112.05124 | code ] Abstract We present Neural Descriptor Fields (NDFs), an object representation that encodes both points and relative poses between an object and a target (such as a robot gripper or a rack used for hanging) via category-level descriptors. We employ this representation for object manipulation, where given a task demonstration, we want to repeat the same task on a new object instance from the same category. We propose to achieve this objective by searching (via optimization) for the pose whose descriptor matches that observed in the demonstration. NDFs are conveniently trained in a self-supervised fashion via a 3D auto-encoding task that does not rely on expert-labeled keypoints. Further, NDFs are SE(3)-equivariant, guaranteeing performance that generalizes across all possible 3D object translations and rotations. We demonstrate learning of manipulation tasks from few (5-10) demonstrations both in simulation and on a real robot. Our performance generalizes across both object instances and 6-DoF object poses, and significantly outperforms a recent baseline that relies on 2D descriptors. Building Quantum Field Theories Out of Neurons James Halverson [ arXiv:2112.04527 ] Abstract An approach to field theory is studied in which fields are comprised of N constituent random neurons. Gaussian theories arise in the infinite-N limit when neurons are independently distributed, via the Central Limit Theorem, while interactions arise due to finite-N effects or non-independently distributed neurons. Euclidean-invariant ensembles of neurons are engineered, with tunable two-point function, yielding families of Euclidean-invariant field theories. Some Gaussian, Euclidean invariant theories are reflection positive, which allows for analytic continuation to a Lorentz-invariant quantum field theory. Examples are presented that yield dual theories at infinite-N, but have different symmetries at finite-N. Landscapes of classical field configurations are determined by local maxima of parameter distributions. Predictions arise from mixed field-neuron correlators. Near-Gaussianity is exhibited at large-N, potentially explaining a feature of field theories in Nature. Ref-NeRF: Structured View-Dependent Appearance for Neural Radiance Fields Dor Verbin, Peter Hedman, Ben Mildenhall, Todd Zickler, Jonathan T. Barron, Pratul P. Srinivasan [ arXiv:2112.03907 ] Abstract Neural Radiance Fields (NeRF) is a popular view synthesis technique that represents a scene as a continuous volumetric function, parameterized by multilayer perceptrons that provide the volume density and view-dependent emitted radiance at each location. While NeRF-based techniques excel at representing fine geometric structures with smoothly varying view-dependent appearance, they often fail to accurately capture and reproduce the appearance of glossy surfaces. We address this limitation by introducing Ref-NeRF, which replaces NeRF's parameterization of view-dependent outgoing radiance with a representation of reflected radiance and structures this function using a collection of spatially-varying scene properties. We show that together with a regularizer on normal vectors, our model significantly improves the realism and accuracy of specular reflections. Furthermore, we show that our model's internal representation of outgoing radiance is interpretable and useful for scene editing. 
Artificial Intelligence and Machine Learning in Nuclear Physics Amber Boehnlein, Markus Diefenthaler, Cristiano Fanelli, Morten Hjorth-Jensen, Tanja Horn, Michelle P. Kuchera, Dean Lee, Witold Nazarewicz, Kostas Orginos, Peter Ostroumov, Long-Gang Pang, Alan Poon, Nobuo Sato, Malachi Schram, Alexander Scheinker, Michael S. Smith, Xin-Nian Wang, Veronique Ziegler [ arXiv:2112.02309 ] Abstract Advances in artificial intelligence/machine learning methods provide tools that have broad applicability in scientific research. These techniques are being applied across the diversity of nuclear physics research topics, leading to advances that will facilitate scientific discoveries and societal applications. This Review gives a snapshot of nuclear physics research which has been transformed by artificial intelligence and machine learning techniques. Infinite Neural Network Quantum States Di Luo, James Halverson [ arXiv:2112.00723 ] Abstract We study infinite limits of neural network quantum states (∞-NNQS), which exhibit representation power through ensemble statistics, and also tractable gradient descent dynamics. Ensemble averages of Renyi entropies are expressed in terms of neural network correlators, and architectures that exhibit volume-law entanglement are presented. A general framework is developed for studying the gradient descent dynamics of neural network quantum states (NNQS), using a quantum state neural tangent kernel (QS-NTK). For ∞-NNQS the training dynamics is simplified, since the QS-NTK becomes deterministic and constant. An analytic solution is derived for quantum state supervised learning, which allows an ∞-NNQS to recover any target wavefunction. Numerical experiments on finite and infinite NNQS in the transverse field Ising model and Fermi Hubbard model demonstrate excellent agreement with theory. ∞-NNQS opens up new opportunities for studying entanglement and training dynamics in other physics applications, such as in finding ground states. Substructure Detection Reanalyzed: Dark Perturber shown to be a Line-of-Sight Halo Atınç Çağan Şengül, Cora Dvorkin, Bryan Ostdiek, Arthur Tsang [ arXiv:2112.00749 ] Abstract Observations of structure at sub-galactic scales are crucial for probing the properties of dark matter, which is the dominant source of gravity in the universe. It will become increasingly important for future surveys to distinguish between line-of-sight halos and subhalos to avoid wrong inferences on the nature of dark matter. We reanalyze a sub-galactic structure (in lens JVAS B1938+666) that has been previously found using the gravitational imaging technique in galaxy-galaxy lensing systems. This structure has been assumed to be a satellite in the halo of the main lens galaxy. We fit the redshift of the perturber of the system as a free parameter, using the multi-plane thin-lens approximation, and find that the redshift of the perturber is z_int = 1.22^{+0.11}_{-0.11} (with a main lens redshift of z=0.881). Our analysis indicates that this structure is more massive than the previous result by more than an order of magnitude. This constitutes the first dark perturber shown to be a line-of-sight halo with a gravitational lensing method. Robust and Provably Monotonic Networks Ouail Kitouni, Niklas Nolte, Mike Williams [ arXiv:2112.00038 ] Abstract The Lipschitz constant of the map between the input and output space represented by a neural network is a natural metric for assessing the robustness of the model.
We present a new method to constrain the Lipschitz constant of dense deep learning models that can also be generalized to other architectures. The method relies on a simple weight normalization scheme during training that ensures the Lipschitz constant of every layer is below an upper limit specified by the analyst. A simple residual connection can then be used to make the model monotonic in any subset of its inputs, which is useful in scenarios where domain knowledge dictates such dependence. Examples can be found in algorithmic fairness requirements or, as presented here, in the classification of the decays of subatomic particles produced at the CERN Large Hadron Collider. Our normalization is minimally constraining and allows the underlying architecture to maintain higher expressiveness compared to other techniques which aim to either control the Lipschitz constant of the model or ensure its monotonicity. We show how the algorithm was used to train a powerful, robust, and interpretable discriminator for heavy-flavor decays in the LHCb realtime data-processing system. Quantum reservoir computing using arrays of Rydberg atoms Rodrigo Araiza Bravo, Khadijeh Najafi, Xun Gao, Susanne F. Yelin [ arXiv:2111.10956 ] Abstract Quantum computing promises to provide machine learning with computational advantages. However, noisy intermediate-scale quantum (NISQ) devices pose engineering challenges to realizing quantum machine learning (QML) advantages. Recently, a series of QML computational models inspired by the noise-tolerant dynamics on the brain have emerged as a means to circumvent the hardware limitations of NISQ devices. In this article, we introduce a quantum version of a recurrent neural network (RNN), a well-known model for neural circuits in the brain. Our quantum RNN (qRNN) makes use of the natural Hamiltonian dynamics of an ensemble of interacting spin-1/2 particles as a means for computation. In the limit where the Hamiltonian is diagonal, the qRNN recovers the dynamics of the classical version. Beyond this limit, we observe that the quantum dynamics of the qRNN provide it quantum computational features that can aid it in computation. To this end, we study a qRNN based on arrays of Rydberg atoms, and show that the qRNN is indeed capable of replicating the learning of several cognitive tasks such as multitasking, decision making, and long-term memory by taking advantage of several key features of this platform such as interatomic species interactions, and quantum many-body scars. New limits on light dark matter: proton cross section from the cosmic large-scale structure Keir K. Rogers, Cora Dvorkin, Hiranya V. Peiris [ arXiv:2111.10386 ] Abstract We set the strongest limits to date on the velocity-independent dark matter (DM)-proton cross section σ for DM masses m = 10 keV to 100 GeV, using large-scale structure traced by the Lyman-alpha forest: e.g., a 95% confidence upper limit σ < 6×10^-30 cm^2, for m = 100 keV. Our results complement direct detection, which has limited sensitivity to sub-GeV DM. We use an emulator of cosmological simulations, combined with data from the smallest cosmological scales used to date, to model and search for the imprint of primordial DM-proton collisions. Cosmological bounds are improved by up to a factor of 25.
Equivariant Contrastive Learning
Rumen Dangovski, Li Jing, Charlotte Loh, Seungwook Han, Akash Srivastava, Brian Cheung, Pulkit Agrawal, Marin Soljačić
[ arXiv:2111.00899 ]
Abstract: In state-of-the-art self-supervised learning (SSL), pre-training produces semantically good representations by encouraging them to be invariant under meaningful transformations prescribed from human knowledge. In fact, the property of invariance is a trivial instance of a broader class called equivariance, which can be intuitively understood as the property that representations transform according to the way the inputs transform. Here, we show that rather than using only invariance, pre-training that encourages non-trivial equivariance to some transformations, while maintaining invariance to other transformations, can be used to improve the semantic quality of representations. Specifically, we extend popular SSL methods to a more general framework which we name Equivariant Self-Supervised Learning (E-SSL). In E-SSL, a simple additional pre-training objective encourages equivariance by predicting the transformations applied to the input. We demonstrate E-SSL's effectiveness empirically on several popular computer vision benchmarks. Furthermore, we demonstrate the usefulness of E-SSL for applications beyond computer vision; in particular, we show its utility on regression problems in photonics science. We will release our code.

Surrogate- and invariance-boosted contrastive learning for data-scarce applications in science
Charlotte Loh, Thomas Christensen, Rumen Dangovski, Samuel Kim, Marin Soljačić
[ arXiv:2110.08406 ]
Abstract: Deep learning techniques have been increasingly applied to the natural sciences, e.g., for property prediction and optimization or material discovery. A fundamental ingredient of such approaches is the vast quantity of labelled data needed to train the model; this poses severe challenges in data-scarce settings where obtaining labels requires substantial computational or labor resources. Here, we introduce surrogate- and invariance-boosted contrastive learning (SIB-CL), a deep learning framework which incorporates three "inexpensive" and easily obtainable auxiliary information sources to overcome data scarcity. Specifically, these are: 1) abundant unlabeled data, 2) prior knowledge of symmetries or invariances, and 3) surrogate data obtained at near-zero cost. We demonstrate SIB-CL's effectiveness and generality on various scientific problems, e.g., predicting the density-of-states of 2D photonic crystals and solving the 3D time-independent Schrödinger equation. SIB-CL consistently results in orders of magnitude reduction in the number of labels needed to achieve the same network accuracies.

A neural simulation-based inference approach for characterizing the Galactic Center γ-ray excess
Siddharth Mishra-Sharma, Kyle Cranmer
Physical Review D, 2022, Volume 105, Article 063017
[ arXiv:2110.06931 ]
Abstract: The nature of the Fermi gamma-ray Galactic Center Excess (GCE) has remained a persistent mystery for over a decade. Although the excess is broadly compatible with emission expected due to dark matter annihilation, an explanation in terms of a population of unresolved astrophysical point sources, e.g., millisecond pulsars, remains viable. The effort to uncover the origin of the GCE is hampered in particular by an incomplete understanding of diffuse emission of Galactic origin. This can lead to spurious features that make it difficult to robustly differentiate smooth emission, as expected for a dark matter origin, from more "clumpy" emission expected for a population of relatively bright, unresolved point sources. We use recent advancements in the field of simulation-based inference, in particular density estimation techniques using normalizing flows, in order to characterize the contribution of modeled components, including unresolved point source populations, to the GCE. Compared to traditional techniques based on the statistical distribution of photon counts, our machine learning-based method is able to utilize more of the information contained in a given model of the Galactic Center emission, and in particular can perform posterior parameter estimation while accounting for pixel-to-pixel spatial correlations in the gamma-ray map. This makes the method demonstrably more resilient to certain forms of model misspecification. On application to Fermi data, the method generically attributes a smaller fraction of the GCE flux to unresolved point sources when compared to traditional approaches. We nevertheless infer such a contribution to make up a non-negligible fraction of the GCE across all analysis variations considered, with at least 38 (+9/−19)% of the excess attributed to unresolved point sources in our baseline analysis.

Challenges for Unsupervised Anomaly Detection in Particle Physics
Katherine Fraser, Samuel Homiller, Rashmish K. Mishra, Bryan Ostdiek, Matthew D. Schwartz
Journal of High Energy Physics, 2022, Article 66
[ arXiv:2110.06948 ]
Abstract: Anomaly detection relies on designing a score to determine whether a particular event is uncharacteristic of a given background distribution. One way to define a score is to use autoencoders, which rely on the ability to reconstruct certain types of data (background) but not others (signals). In this paper, we study some challenges associated with variational autoencoders, such as the dependence on hyperparameters and the metric used, in the context of anomalous signal (top and W) jets in a QCD background. We find that the hyperparameter choices strongly affect the network performance and that the optimal parameters for one signal are non-optimal for another. In exploring the networks, we uncover a connection between the latent space of a variational autoencoder trained using mean-squared-error and the optimal transport distances within the dataset. We then show that optimal transport distances to representative events in the background dataset can be used directly for anomaly detection, with performance comparable to the autoencoders. Whether using autoencoders or optimal transport distances for anomaly detection, we find that the choices that best represent the background are not necessarily best for signal identification. These challenges with unsupervised anomaly detection bolster the case for additional exploration of semi-supervised or alternative approaches.

Mixture Model Auto-Encoders: Deep Clustering through Dictionary Learning
Alexander Lin, Andrew H. Song, Demba Ba
ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022, pp. 3368-3372
[ arXiv:2110.04683 ]
Abstract: State-of-the-art approaches for clustering high-dimensional data utilize deep auto-encoder architectures. Many of these networks require a large number of parameters and suffer from a lack of interpretability, due to the black-box nature of the auto-encoders.
We introduce Mixture Model Auto-Encoders (MixMate), a novel architecture that clusters data by performing inference on a generative model. Derived from the perspective of sparse dictionary learning and mixture models, MixMate comprises several auto-encoders, each tasked with reconstructing data in a distinct cluster, while enforcing sparsity in the latent space. Through experiments on various image datasets, we show that MixMate achieves competitive performance compared to state-of-the-art deep clustering algorithms, while using orders of magnitude fewer parameters.

Pruning a restricted Boltzmann machine for quantum state reconstruction
Anna Golubeva, Roger G. Melko
Physical Review B, 2022, Volume 105, Article 125124
[ arXiv:2110.03676 ]
Abstract: Restricted Boltzmann machines (RBMs) have proven to be a powerful tool for learning quantum wavefunction representations from qubit projective measurement data. Since the number of classical parameters needed to encode a quantum wavefunction scales rapidly with the number of qubits, the ability to learn efficient representations is of critical importance. In this paper we study magnitude-based pruning as a way to compress the wavefunction representation in an RBM, focusing on RBMs trained on data from the transverse-field Ising model in one dimension. We find that pruning can reduce the total number of RBM weights, but the threshold at which the reconstruction accuracy starts to degrade varies significantly depending on the phase of the model. In a gapped region of the phase diagram, the RBM admits pruning over half of the weights while still accurately reproducing relevant physical observables. At the quantum critical point, however, even a small amount of pruning can lead to significant loss of accuracy in the physical properties of the reconstructed quantum state. Our results highlight the importance of tracking all relevant observables, as their sensitivity varies strongly with pruning. Finally, we find that sparse RBMs are trainable and discuss how a successful sparsity pattern can be created without pruning.

Inferring dark matter substructure with astrometric lensing beyond the power spectrum
Siddharth Mishra-Sharma
[ arXiv:2110.01620 ]
Abstract: Astrometry -- the precise measurement of positions and motions of celestial objects -- has emerged as a promising avenue for characterizing the dark matter population in our Galaxy. By leveraging recent advances in simulation-based inference and neural network architectures, we introduce a novel method to search for global dark matter-induced gravitational lensing signatures in astrometric datasets. Our method, based on neural likelihood-ratio estimation, shows significantly enhanced sensitivity to a cold dark matter population and more favorable scaling with measurement noise compared to existing approaches based on two-point correlation statistics, establishing machine learning as a powerful tool for characterizing dark matter using astrometric data.

Physics-Augmented Learning: A New Paradigm Beyond Physics-Informed Learning
Ziming Liu, Yunyue Chen, Yuanqi Du, Max Tegmark
[ arXiv:2109.13901 ]
Abstract: Integrating physical inductive biases into machine learning can improve model generalizability. We generalize the successful paradigm of physics-informed learning (PIL) into a more general framework that also includes what we term physics-augmented learning (PAL). PIL and PAL complement each other by handling discriminative and generative properties, respectively. In numerical experiments, we show that PAL performs well on examples where PIL is inapplicable or inefficient.

Overcoming the Spectral Bias of Neural Value Approximation
Ge Yang, Anurag Ajay, Pulkit Agrawal
ICLR 2022 Conference Proceedings
[ arXiv:2206.04672 ]
Abstract: Value approximation using deep neural networks is at the heart of off-policy deep reinforcement learning, and is often the primary module that provides learning signals to the rest of the algorithm. While multi-layer perceptron networks are universal function approximators, recent works in neural kernel regression suggest the presence of a spectral bias, where fitting high-frequency components of the value function requires exponentially more gradient update steps than the low-frequency ones. In this work, we re-examine off-policy reinforcement learning through the lens of kernel regression and propose to overcome such bias via a composite neural tangent kernel. With just a single-line change, our approach, the Fourier feature network (FFN), produces state-of-the-art performance on challenging continuous control domains with only a fraction of the compute. Faster convergence and better off-policy stability also make it possible to remove the target network without suffering catastrophic divergences, which further reduces TD(0)'s estimation bias on a few tasks. Code and analysis available at https://geyang.github.io/ffn.

Machine-learning hidden symmetries
Ziming Liu, Max Tegmark
Physical Review Letters, 2022, 128, 180201
[ arXiv:2109.09721 ]
Abstract: We present an automated method for finding hidden symmetries, defined as symmetries that become manifest only in a new coordinate system that must be discovered. Its core idea is to quantify asymmetry as violation of certain partial differential equations, and to numerically minimize such violation over the space of all invertible transformations, parametrized as invertible neural networks. For example, our method rediscovers the famous Gullstrand-Painlevé metric that manifests hidden translational symmetry in the Schwarzschild metric of non-rotating black holes, as well as Hamiltonicity, modularity and other simplifying traits not traditionally viewed as symmetries.

Deep Set Auto Encoders for Anomaly Detection in Particle Physics
Bryan Ostdiek
SciPost Physics, 2022, Vol. 12, Issue 1
[ arXiv:2109.01695 ]
Abstract: There is an increased interest in model-agnostic search strategies for physics beyond the standard model at the Large Hadron Collider. We introduce a Deep Set Variational Autoencoder and present results on the Dark Machines Anomaly Score Challenge. We find that the method attains the best anomaly detection ability when there is no decoding step for the network, and the anomaly score is based solely on the representation within the encoded latent space. This method was one of the top-performing models in the Dark Machines Challenge, both for the open data sets as well as the blinded data sets.

Machine-Learning media bias
Samantha D’Alonzo, Max Tegmark
[ arXiv:2109.00024 ]
Abstract: We present an automated method for measuring media bias. Inferring which newspaper published a given article, based only on the frequencies with which it uses different phrases, leads to a conditional probability distribution whose analysis lets us automatically map newspapers and phrases into a bias space. By analyzing roughly a million articles from roughly a hundred newspapers for bias in dozens of news topics, our method maps newspapers into a two-dimensional bias landscape that agrees well with previous bias classifications based on human judgement. One dimension can be interpreted as traditional left-right bias, the other as establishment bias. This means that although news bias is inherently political, its measurement need not be.

Hardware-accelerated Inference for Real-Time Gravitational-Wave Astronomy
Alec Gunny, Dylan Rankin, Jeffrey Krupa, Muhammed Saleem, Tri Nguyen, Michael Coughlin, Philip Harris, Erik Katsavounidis, Steven Timm, Burt Holzman
[ arXiv:2108.12430 ]
Abstract: The field of transient astronomy has seen a revolution with the first gravitational-wave detections and the arrival of multi-messenger observations they enabled. Transformed by the first detection of binary black hole and binary neutron star mergers, computational demands in gravitational-wave astronomy are expected to grow by at least a factor of two over the next five years as the global network of kilometer-scale interferometers is brought to design sensitivity. With the increase in detector sensitivity, real-time delivery of gravitational-wave alerts will become increasingly important as an enabler of multi-messenger follow-up. In this work, we report a novel implementation and deployment of deep learning inference for real-time gravitational-wave data denoising and astrophysical source identification. This is accomplished using a generic Inference-as-a-Service model that is capable of adapting to the future needs of gravitational-wave data analysis. Our implementation allows seamless incorporation of hardware accelerators and also enables the use of commercial or private (dedicated) as-a-service computing. Based on our results, we propose a paradigm shift in low-latency and offline computing in gravitational-wave astronomy. Such a shift can address key challenges in peak usage, scalability, and reliability, and provide a data analysis platform particularly optimized for deep learning applications. The achieved sub-millisecond scale latency will also be relevant for any machine learning-based real-time control systems that may be invoked in the operation of near-future and next generation ground-based laser interferometers, as well as the front-end collection, distribution and processing of data from such instruments.

Towards an Optimal Estimation of Cosmological Parameters with the Wavelet Scattering Transform
Georgios Valogiannis, Cora Dvorkin
Physical Review D, 2022, 105, 103534
[ arXiv:2108.07821 ]
Abstract: Optimal extraction of the non-Gaussian information encoded in the Large-Scale Structure (LSS) of the universe lies at the forefront of modern precision cosmology. We propose achieving this task through the use of the Wavelet Scattering Transform (WST), which subjects an input field to a layer of non-linear transformations that are sensitive to non-Gaussianity in spatial density distributions through a generated set of WST coefficients. In order to assess its applicability in the context of LSS surveys, we apply the WST to the 3D overdensity field obtained by the Quijote simulations, out of which we extract the Fisher information in 6 cosmological parameters.
It is subsequently found to deliver a large improvement in the marginalized errors on all parameters, ranging from 1.2 to 4 times tighter than the corresponding ones obtained from the regular 3D cold dark matter + baryon power spectrum, as well as a 50% improvement over the neutrino mass constraint given by the marked power spectrum. Through this first application to 3D cosmological fields, we demonstrate the great promise held by this novel statistic and set the stage for its future application to actual galaxy observations.

Harold Erbin, Riccardo Finotello, Robin Schneider, Mohamed Tamaazousti
Machine Learning: Science and Technology, 2021, Volume 3, Number 1
[ arXiv:2108.02221 ]
Abstract: We continue earlier efforts in computing the dimensions of tangent space cohomologies of Calabi-Yau manifolds using deep learning. In this paper, we consider the dataset of all Calabi-Yau four-folds constructed as complete intersections in products of projective spaces. Employing neural networks inspired by state-of-the-art computer vision architectures, we improve earlier benchmarks and demonstrate that all four non-trivial Hodge numbers can be learned at the same time using a multi-task architecture. With a 30% (80%) training ratio, we reach an accuracy of 100% for h^(1,1) and 97% for h^(2,1) (100% for both), 81% (96%) for h^(3,1), and 49% (83%) for h^(2,2). Assuming that the Euler number is known, as it is easy to compute, and taking into account the linear constraint arising from index computations, we get 100% total accuracy.

Nonperturbative renormalization for the neural network–QFT correspondence
Harold Erbin, Vincent Lahoche, Dine Ousmane Samary
Machine Learning: Science and Technology, 2022, Volume 3, Number 1, Article 015027
[ arXiv:2108.01403 ]
Abstract: In a recent work [1], Halverson, Maiti and Stoner proposed a description of neural networks in terms of a Wilsonian effective field theory. The infinite-width limit is mapped to a free field theory, while finite-N corrections are taken into account by interactions (non-Gaussian terms in the action). In this paper, we study two related aspects of this correspondence. First, we comment on the concepts of locality and power-counting in this context. Indeed, these usual space-time notions may not hold for neural networks (since inputs can be arbitrary); however, the renormalization group provides natural notions of locality and scaling. Moreover, we comment on several subtleties, for example, that data components may not have a permutation symmetry: in that case, we argue that random tensor field theories could provide a natural generalization. Second, we improve the perturbative Wilsonian renormalization from [1] by providing an analysis in terms of the nonperturbative renormalization group using the Wetterich-Morris equation. An important difference with usual nonperturbative RG analyses is that only the effective (IR) 2-point function is known, which requires setting up the problem with care. Our aim is to provide a useful formalism to investigate the behavior of neural networks beyond the large-width limit (i.e., far from the Gaussian limit) in a nonperturbative fashion. A major result of our analysis is that changing the standard deviation of the neural network weight distribution can be interpreted as a renormalization flow in the space of networks. We focus on translation-invariant kernels and provide preliminary numerical results.

Discovering Sparse Interpretable Dynamics from Partial Observations
Peter Y. Lu, Joan Ariño, Marin Soljačić
[ arXiv:2107.10879 ]
Abstract: Identifying the governing equations of a nonlinear dynamical system is key to both understanding the physical features of the system and constructing an accurate model of the dynamics that generalizes well beyond the available data. We propose a machine learning framework for discovering these governing equations using only partial observations, combining an encoder for state reconstruction with a sparse symbolic model. Our tests show that this method can successfully reconstruct the full system state and identify the underlying dynamics for a variety of ODE and PDE systems.

Flow-based sampling for multimodal distributions in lattice field theory
Daniel C. Hackett, Chung-Chun Hsieh, Michael S. Albergo, Denis Boyda, Jiunn-Wei Chen, Kai-Feng Chen, Kyle Cranmer, Gurtej Kanwar, Phiala E. Shanahan
[ arXiv:2107.00734 ]
Abstract: Recent results have demonstrated that samplers constructed with flow-based generative models are a promising new approach for configuration generation in lattice field theory. In this paper, we present a set of methods to construct flow models for targets with multiple separated modes (i.e. theories with multiple vacua). We demonstrate the application of these methods to modeling two-dimensional real scalar field theory in its symmetry-broken phase. In this context we investigate the performance of different flow-based sampling algorithms, including a composite sampling algorithm where flow-based proposals are occasionally augmented by applying updates using traditional algorithms like HMC.

Xiang Fu, Ge Yang, Pulkit Agrawal, Tommi Jaakkola
[ arXiv:2106.15612 | code ]
Abstract: Current model-based reinforcement learning methods struggle when operating from complex visual scenes due to their inability to prioritize task-relevant features. To mitigate this problem, we propose learning Task Informed Abstractions (TIA) that explicitly separates reward-correlated visual features from distractors. For learning TIA, we introduce the formalism of Task Informed MDP (TiMDP) that is realized by training two models that learn visual features via cooperative reconstruction, but one model is adversarially dissociated from the reward signal. Empirical evaluation shows that TIA leads to significant performance gains over state-of-the-art methods on many visual control tasks where natural and unconstrained visual distractions pose a formidable challenge.

The Principles of Deep Learning Theory
Daniel A. Roberts, Sho Yaida, Boris Hanin
Cambridge University Press (Book), 2022
[ arXiv:2106.10165 ]
Abstract: This book develops an effective theory approach to understanding deep neural networks of practical relevance. Beginning from a first-principles component-level picture of networks, we explain how to determine an accurate description of the output of trained networks by solving layer-to-layer iteration equations and nonlinear learning dynamics. A main result is that the predictions of networks are described by nearly-Gaussian distributions, with the depth-to-width aspect ratio of the network controlling the deviations from the infinite-width Gaussian description. We explain how these effectively-deep networks learn nontrivial representations from training and more broadly analyze the mechanism of representation learning for nonlinear models. From a nearly-kernel-methods perspective, we find that the dependence of such models' predictions on the underlying learning algorithm can be expressed in a simple and universal way.
To obtain these results, we develop the notion of representation group flow (RG flow) to characterize the propagation of signals through the network. By tuning networks to criticality, we give a practical solution to the exploding and vanishing gradient problem. We further explain how RG flow leads to near-universal behavior and lets us categorize networks built from different activation functions into universality classes. Altogether, we show that the depth-to-width ratio governs the effective model complexity of the ensemble of trained networks. By using information-theoretic techniques, we estimate the optimal aspect ratio at which we expect the network to be practically most useful and show how residual connections can be used to push this scale to arbitrary depths. With these tools, we can learn in detail about the inductive bias of architectures, hyperparameters, and optimizers.

Single electrons on solid neon as a solid-state qubit platform
Xianjing Zhou, Gerwin Koolstra, Xufeng Zhang, Ge Yang, Xu Han, Brennan Dizdar, Ralu Divan, Wei Guo, Kater W. Murch, David I. Schuster, Dafei Jin
Nature, 2022, 605, 46-50
[ arXiv:2106.10326 ]
Abstract: Progress toward the realization of quantum computers requires persistent advances in their constituent building blocks - qubits. Novel qubit platforms that simultaneously embody long coherence, fast operation, and large scalability offer compelling advantages in the construction of quantum computers and many other quantum information systems. Electrons, ubiquitous elementary particles of nonzero charge, spin, and mass, have commonly been perceived as paradigmatic local quantum information carriers. Despite superior controllability and configurability, their practical performance as qubits via either motional or spin states depends critically on their material environment. Here we report our experimental realization of a new qubit platform based upon isolated single electrons trapped on an ultraclean solid neon surface in vacuum. By integrating an electron trap in a circuit quantum electrodynamics architecture, we achieve strong coupling between the motional states of a single electron and a single microwave photon in an on-chip superconducting resonator. Qubit gate operations and dispersive readout are implemented to measure the energy relaxation time T1 of 15 μs and phase coherence time T2 over 200 ns. These results indicate that the electron-on-solid-neon qubit already performs near the state of the art as a charge qubit.

Flow-based sampling for fermionic lattice field theories
Michael S. Albergo, Gurtej Kanwar, Sébastien Racanière, Danilo J. Rezende, Julian M. Urban, Denis Boyda, Kyle Cranmer, Daniel C. Hackett, Phiala E. Shanahan
Physical Review D, 2021, Vol. 104, Iss. 11
[ arXiv:2106.05934 ]
Abstract: Algorithms based on normalizing flows are emerging as promising machine learning approaches to sampling complicated probability distributions in a way that can be made asymptotically exact. In the context of lattice field theory, proof-of-principle studies have demonstrated the effectiveness of this approach for scalar theories, gauge theories, and statistical systems. This work develops approaches that enable flow-based sampling of theories with dynamical fermions, which is necessary for the technique to be applied to lattice field theory studies of the Standard Model of particle physics and many condensed matter systems. As a practical demonstration, these methods are applied to the sampling of field configurations for a two-dimensional theory of massless staggered fermions coupled to a scalar field via a Yukawa interaction.

Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering
Vincent Sitzmann, Semon Rezchikov, William T. Freeman, Joshua B. Tenenbaum, Fredo Durand
[ arXiv:2106.02634 ]
Abstract: Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence. Emerging 3D-structured neural scene representations are a promising approach to 3D scene understanding. In this work, we propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field parameterized via a neural implicit representation. Rendering a ray from an LFN requires only a single network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric renderers in 3D-structured neural scene representations. In the setting of simple scenes, we leverage meta-learning to learn a prior over LFNs that enables multi-view consistent light field reconstruction from as little as a single image observation. This results in dramatic reductions in time and memory complexity, and enables real-time rendering. The cost of storing a 360-degree light field via an LFN is two orders of magnitude lower than that of conventional methods such as the Lumigraph. Utilizing the analytical differentiability of neural implicit representations and a novel parameterization of light space, we further demonstrate the extraction of sparse depth maps from LFNs.

Symmetry-via-Duality: Invariant Neural Network Densities from Parameter-Space Correlators
Anindita Maiti, Keegan Stoner, James Halverson
[ arXiv:2106.00694 ]
Abstract: Parameter-space and function-space provide two different duality frames in which to study neural networks. We demonstrate that symmetries of network densities may be determined via dual computations of network correlation functions, even when the density is unknown and the network is not equivariant. Symmetry-via-duality relies on invariance properties of the correlation functions, which stem from the choice of network parameter distributions. Input and output symmetries of neural network densities are determined, which recover known Gaussian process results in the infinite width limit. The mechanism may also be utilized to determine symmetries during training, when parameters are correlated, as well as symmetries of the Neural Tangent Kernel. We demonstrate that the amount of symmetry in the initialization density affects the accuracy of networks trained on Fashion-MNIST, and that symmetry breaking helps only when it is in the direction of ground truth.

Machine-Learning Non-Conservative Dynamics for New-Physics Detection
Ziming Liu, Bohan Wang, Qi Meng, Wei Chen, Max Tegmark, Tie-Yan Liu
Physical Review E, 2021, Vol. 104, Article 055302
[ arXiv:2106.00026 ]
Abstract: Energy conservation is a basic physics principle, the breakdown of which often implies new physics. This paper presents a method for data-driven "new physics" discovery. Specifically, given a trajectory governed by unknown forces, our Neural New-Physics Detector (NNPhD) aims to detect new physics by decomposing the force field into conservative and non-conservative components, which are represented by a Lagrangian Neural Network (LNN) and a universal approximator network (UAN), respectively, trained to minimize the force recovery error plus a constant λ times the magnitude of the predicted non-conservative force. We show that a phase transition occurs at λ = 1, universally for arbitrary forces. We demonstrate that NNPhD successfully discovers new physics in toy numerical experiments, rediscovering friction (1493) from a damped double pendulum, Neptune from Uranus' orbit (1846), and gravitational waves (2017) from an inspiraling orbit. We also show how NNPhD coupled with an integrator outperforms previous methods for predicting the future of a damped double pendulum.

The Dark Machines Anomaly Score Challenge: Benchmark Data and Model Independent Event Classification for the Large Hadron Collider
T. Aarrestad, M. Van Beekveld, M. Bona, A. Bovenin, S. Caron, J. Davies, A. De Simone, C. Doglioni, J.M. Duarte, A. Farbin, H. Gupta, L. Hendriks, L. Heinrich, J. Howarth, P. Jawahar, A. Jueid, J. Lastow, A. Leinweber, J. Mamuzic, E. Merényi, A. Morandini, P. Moskvitina, C. Nellist, J. Ngadiuba, B. Ostdiek, M. Pierini, B. Ravina, R. Ruiz de Austri, S. Sekmen, M. Touranakou, M. Vaškevičiūte, R. Vilalta, J.-R. Vlimant, R. Verheyen, M. White, E. Wulff, E. Wallin, K.A. Wozniak, Z. Zhang
SciPost Physics, 2022, Volume 12, Issue 1, Page 43
[ arXiv:2105.14027 | code ]
Abstract: We describe the outcome of a data challenge conducted as part of the Dark Machines initiative and the Les Houches 2019 workshop on Physics at TeV colliders. The challenge aims at detecting signals of new physics at the LHC using unsupervised machine learning algorithms. First, we propose how an anomaly score could be implemented to define model-independent signal regions in LHC searches. We define and describe a large benchmark dataset, consisting of more than 1 billion simulated LHC events corresponding to 10 fb⁻¹ of proton-proton collisions at a center-of-mass energy of 13 TeV. We then review a wide range of anomaly detection and density estimation algorithms, developed in the context of the data challenge, and we measure their performance in a set of realistic analysis environments. We draw a number of useful conclusions that will aid the development of unsupervised new physics searches during the third run of the LHC, and provide our benchmark dataset for future studies at https://www.phenoMLdata.org. Code to reproduce the analysis is provided at https://github.com/bostdiek/DarkMachines-UnsupervisedChallenge.

Scaffolding Simulations with Deep Learning for High-dimensional Deconvolution
Anders Andreassen, Patrick T. Komiske, Eric M. Metodiev, Benjamin Nachman, Adi Suresh, and Jesse Thaler
Workshop paper at ICLR 2021 SimDL Workshop
[ arXiv:2105.04448 ]
Abstract: A common setting for scientific inference is the ability to sample from a high-fidelity forward model (simulation) without having an explicit probability density of the data. We propose a simulation-based maximum likelihood deconvolution approach in this setting called OmniFold. Deep learning enables this approach to be naturally unbinned and (variable- and) high-dimensional. In contrast to model parameter estimation, the goal of deconvolution is to remove detector distortions in order to enable a variety of downstream inference tasks.
Our approach is the deep learning generalization of the common Richardson-Lucy approach that is also called Iterative Bayesian Unfolding in particle physics. We show how OmniFold can not only remove detector distortions but also account for noise processes and acceptance effects.

A reconfigurable neural network ASIC for detector front-end data compression at the HL-LHC
Giuseppe Di Guglielmo, Farah Fahim, Christian Herwig, Manuel Blanco Valentin, Javier Duarte, Cristian Gingu, Philip Harris, James Hirschauer, Martin Kwok, Vladimir Loncar, Yingyi Luo, Llovizna Miranda, Jennifer Ngadiuba, Daniel Noonan, Seda Ogrenci-Memik, Maurizio Pierini, Sioni Summers, Nhan Tran
IEEE Transactions on Nuclear Science, 2021, Vol. 68, Issue 8
[ arXiv:2105.01683 ]
Abstract: Despite advances in the programmable logic capabilities of modern trigger systems, a significant bottleneck remains in the amount of data to be transported from the detector to off-detector logic where trigger decisions are made. We demonstrate that a neural network autoencoder model can be implemented in a radiation-tolerant ASIC to perform lossy data compression, alleviating the data transmission problem while preserving critical information of the detector energy profile. For our application, we consider the high-granularity calorimeter from the CMS experiment at the CERN Large Hadron Collider. The advantage of the machine learning approach is in the flexibility and configurability of the algorithm. By changing the neural network weights, a unique data compression algorithm can be deployed for each sensor in different detector regions, and for changing detector or collider conditions. To meet area, performance, and power constraints, we perform quantization-aware training to create an optimized neural network hardware implementation. The design is achieved through the use of high-level synthesis tools and the hls4ml framework, and was processed through synthesis and physical layout flows based on an LP CMOS 65 nm technology node. The flow anticipates 200 Mrad of ionizing radiation to select gates, and reports a total area of 3.6 mm² and consumes 95 mW of power. The simulated energy consumption per inference is 2.4 nJ. This is the first radiation-tolerant on-detector ASIC implementation of a neural network that has been designed for particle physics applications.

Towards Designing and Exploiting Generative Networks for Neutrino Physics Experiments using Liquid Argon Time Projection Chambers
Paul Lutkus, Taritree Wongjirad, Shuchin Aeron
Conference paper at ICLR 2021
[ code ]
Abstract: In this paper, we show that a hybrid approach to generative modeling via combining the decoder from an autoencoder together with an explicit generative model for the latent space is a promising method for producing images of particle trajectories in a liquid argon time projection chamber (LArTPC). LArTPCs are a type of particle physics detector used by several current and future experiments focused on studies of the neutrino. We implement a Vector-Quantized Variational Autoencoder (VQ-VAE) and PixelCNN which produces images with LArTPC-like features and introduce a method to evaluate the quality of the images using a semantic segmentation that identifies important physics-based features.

Scalable and Flexible Deep Bayesian Optimization with Auxiliary Information for Scientific Problems
Samuel Kim, Peter Y. Lu, Charlotte Loh, Jamie Smith, Jasper Snoek, Marin Soljačić
[ arXiv:2104.11667 ]
Abstract: Bayesian optimization (BO) is a popular paradigm for global optimization of expensive black-box functions, but there are many domains where the function is not completely black-box. The data may have some known structure, e.g. symmetries, and the data generation process can yield useful intermediate or auxiliary information in addition to the value of the optimization objective. However, surrogate models traditionally employed in BO, such as Gaussian Processes (GPs), scale poorly with dataset size and struggle to incorporate known structure or auxiliary information. Instead, we propose performing BO on complex, structured problems by using Bayesian Neural Networks (BNNs), a class of scalable surrogate models that have the representation power and flexibility to handle structured data and exploit auxiliary information. We demonstrate BO on a number of realistic problems in physics and chemistry, including topology optimization of photonic crystal materials using convolutional neural networks, and chemical property optimization of molecules using graph neural networks. On these complex tasks, we show that BNNs often outperform GPs as surrogate models for BO in terms of both sampling efficiency and computational cost.

A Compound Poisson Generator approach to Point-Source Inference in Astrophysics
Gabriel H. Collin, Nicholas L. Rodd, Tyler Erjavec, Kerstin Perez
The Astrophysical Journal, 2022, Volume 260, Number 2
[ arXiv:2104.04529 | code ]
Abstract: The identification and description of point sources is one of the oldest problems in astronomy; yet, even today the correct statistical treatment for point sources remains one of the field's hardest problems. For dim or crowded sources, likelihood-based inference methods are required to estimate the uncertainty on the characteristics of the source population. In this work, a new parametric likelihood is constructed for this problem using Compound Poisson Generator (CPG) functionals, which incorporate instrumental effects from first principles. We demonstrate that the CPG approach exhibits a number of advantages over Non-Poissonian Template Fitting (NPTF) - an existing parametric likelihood method - in a series of test scenarios in the context of X-ray astronomy. These demonstrations show that the effect of the point-spread function, effective area, and choice of point-source spatial distribution cannot, in general, be factorised as they are in the NPTF construction, while the new CPG construction is validated in these scenarios. Separately, an examination of the diffuse-flux emission limit is used to show that most simple choices of priors on the standard parameterisation of the population model can result in unexpected biases: when a model comprising both a point-source population and diffuse component is applied to this limit, nearly all observed flux will be assigned to either the population or to the diffuse component. A new parametrisation is presented for these priors which is demonstrated to properly estimate the uncertainties in this limit. In this choice of priors, the CPG correctly identifies that the fraction of flux assigned to the population model cannot be constrained by the data.

Why is AI hard and Physics simple?
Daniel A. Roberts
[ arXiv:2104.00008 ]
Abstract: We discuss why AI is hard and why physics is simple.
We discuss how physical intuition and the approach of theoretical physics can be brought to bear on the field of artificial intelligence and specifically machine learning. We suggest that the underlying project of machine learning and the underlying project of physics are strongly coupled through the principle of sparsity, and we call upon theoretical physicists to work on AI as physicists. As a first step in that direction, we discuss an upcoming book on the principles of deep learning theory that attempts to realize this approach.

Machine Learning the 6th Dimension: Stellar Radial Velocities from 5D Phase-Space Correlations
Adriana Dropulic, Bryan Ostdiek, Laura J. Chang, Hongwan Liu, Timothy Cohen, and Mariangela Lisanti
The Astrophysical Journal Letters, 2021, 915, L14
[ arXiv:2103.14039 ]
Abstract: The Gaia satellite will observe the positions and velocities of over a billion Milky Way stars. In the early data releases, the majority of observed stars do not have complete 6D phase-space information. In this Letter, we demonstrate the ability to infer the missing line-of-sight velocities until more spectroscopic observations become available. We utilize a novel neural network architecture that, after being trained on a subset of data with complete phase-space information, takes in a star's 5D astrometry (angular coordinates, proper motions, and parallax) and outputs a predicted line-of-sight velocity with an associated uncertainty. Working with a mock Gaia catalog, we show that the network can successfully recover the distributions and correlations of each velocity component for stars that fall within ∼5 kpc of the Sun. We also demonstrate that the network can accurately reconstruct the velocity distribution of a kinematic substructure in the stellar halo that is spatially uniform, even when it comprises a small fraction of the total star count.

Modern Machine Learning and Particle Physics
Matthew D. Schwartz
Harvard Data Science Review, 2021, Issue 3.2, 13 May
[ arXiv:2103.12226 ]
Abstract: Over the past five years, modern machine learning has been quietly revolutionizing particle physics. Old methodology is becoming outdated and entirely new ways of thinking about data are becoming commonplace. This article will review some aspects of the natural synergy between modern machine learning and particle physics, focusing on applications at the Large Hadron Collider. A sampling of examples is given, from signal/background discrimination tasks using supervised learning to direct data-driven approaches. Some comments on persistent challenges and possible future directions for the field are included at the end.

Deep learning: a statistical viewpoint
Peter L. Bartlett, Andrea Montanari, and Alexander Rakhlin
[ arXiv:2103.09177 ]
Abstract: The remarkable practical success of deep learning has revealed some major surprises from a theoretical perspective. In particular, simple gradient methods easily find near-optimal solutions to non-convex optimization problems, and despite giving a near-perfect fit to training data without any explicit effort to control model complexity, these methods exhibit excellent predictive accuracy. We conjecture that specific principles underlie these phenomena: that overparametrization allows gradient methods to find interpolating solutions, that these methods implicitly impose regularization, and that overparametrization leads to benign overfitting. We survey recent theoretical progress that provides examples illustrating these principles in simpler settings. We first review classical uniform convergence results and why they fall short of explaining aspects of the behavior of deep learning methods. We give examples of implicit regularization in simple settings, where gradient methods lead to minimal norm functions that perfectly fit the training data. Then we review prediction methods that exhibit benign overfitting, focusing on regression problems with quadratic loss. For these methods, we can decompose the prediction rule into a simple component that is useful for prediction and a spiky component that is useful for overfitting but, in a favorable setting, does not harm prediction accuracy. We focus specifically on the linear regime for neural networks, where the network can be approximated by a linear model. In this regime, we demonstrate the success of gradient flow, and we consider benign overfitting with two-layer networks, giving an exact asymptotic analysis that precisely demonstrates the impact of overparametrization. We conclude by highlighting the key challenges that arise in extending these insights to realistic deep learning settings.

hls4ml: An Open-Source Codesign Workflow to Empower Scientific Low-Power Machine Learning Devices
Farah Fahim, Benjamin Hawks, Christian Herwig, James Hirschauer, Sergo Jindariani, Nhan Tran, Luca P. Carloni, Giuseppe Di Guglielmo, Philip Harris, Jeffrey Krupa, Dylan Rankin, Manuel Blanco Valentin, Josiah Hester, Yingyi Luo, John Mamish, Seda Ogrenci-Memik, Thea Aarrestad, Hamza Javed, Vladimir Loncar, Maurizio Pierini, Adrian Alan Pol, Sioni Summers, Javier Duarte, Scott Hauck, Shih-Chieh Hsu, Jennifer Ngadiuba, Mia Liu, Duc Hoang, Edward Kreinar, Zhenbin Wu
[ arXiv:2103.05579 ]
Abstract: Accessible machine learning algorithms, software, and diagnostic tools for energy-efficient devices and systems are extremely valuable across a broad range of application domains. In scientific domains, real-time near-sensor processing can drastically improve experimental design and accelerate scientific discoveries. To support domain scientists, we have developed hls4ml, an open-source software-hardware codesign workflow to interpret and translate machine learning algorithms for implementation with both FPGA and ASIC technologies. We expand on previous hls4ml work by extending capabilities and techniques towards low-power implementations and increased usability: new Python APIs, quantization-aware pruning, end-to-end FPGA workflows, long pipeline kernels for low power, and new device backends, including an ASIC workflow. Taken together, these and continued efforts in hls4ml will arm a new generation of domain scientists with accessible, efficient, and powerful tools for machine-learning-accelerated discovery.

The Luminous and Double-Peaked Type Ic Supernova 2019stc: Evidence for Multiple Energy Sources
Sebastian Gomez, Edo Berger, Griffin Hosseinzadeh, Peter K. Blanchard, Matt Nicholl, V. Ashley Villar
The Astrophysical Journal, 2021, Vol. 913, Article 143
[ arXiv:2103.02611 ]
Abstract:

On the Minimal Error of Empirical Risk Minimization
Gil Kur, Alexander Rakhlin
[ arXiv:2102.12066 ]
Abstract: We study the minimal error of the Empirical Risk Minimization (ERM) procedure in the task of regression, both in the random and the fixed design settings. Our sharp lower bounds shed light on the possibility (or impossibility) of adapting to simplicity of the model generating the data. In the fixed design setting, we show that the error is governed by the global complexity of the entire class.
In contrast, in random design, ERM may only adapt to simpler models if the local neighborhoods around the regression function are nearly as complex as the class itself, a somewhat counter-intuitive conclusion. We provide sharp lower bounds for the performance of ERM for both Donsker and non-Donsker classes. We also discuss our results through the lens of recent studies on interpolation in overparameterized models.

Topological obstructions to autoencoding
Joshua Batson, C. Grace Haaf, Yonatan Kahn, Daniel A. Roberts
Journal of High Energy Physics, 2021, Issue 4, Article 280
[ arXiv:2102.08380 ]
Abstract: Autoencoders have been proposed as a powerful tool for model-independent anomaly detection in high-energy physics. The operating principle is that events which do not belong to the space of training data will be reconstructed poorly, thus flagging them as anomalies. We point out that in a variety of examples of interest, the connection between large reconstruction error and anomalies is not so clear. In particular, for data sets with nontrivial topology, there will always be points that erroneously seem anomalous due to global issues. Conversely, neural networks typically have an inductive bias or prior to locally interpolate such that undersampled or rare events may be reconstructed with small error, despite actually being the desired anomalies. Taken together, these facts are in tension with the simple picture of the autoencoder as an anomaly detector. Using a series of illustrative low-dimensional examples, we show explicitly how the intrinsic and extrinsic topology of the dataset affects the behavior of an autoencoder and how this topology is manifested in the latent space representation during training. We ground this analysis in the discussion of a mock "bump hunt" in which the autoencoder fails to identify an anomalous "signal" for reasons tied to the intrinsic topology of n-particle phase space.

On the convergence of group-sparse autoencoders
Emmanouil Theodosis, Bahareh Tolooshams, Pranay Tankala, Abiy Tasissa, Demba Ba
[ arXiv:2102.07003 ]
Abstract: Recent approaches in the theoretical analysis of model-based deep learning architectures have studied the convergence of gradient descent in shallow ReLU networks that arise from generative models whose hidden layers are sparse. Motivated by the success of architectures that impose structured forms of sparsity, we introduce and study a group-sparse autoencoder that accounts for a variety of generative models, and utilizes a group-sparse ReLU activation function to force the non-zero units at a given layer to occur in blocks. For clustering models, inputs that result in the same group of active units belong to the same cluster. We proceed to analyze the gradient dynamics of a shallow instance of the proposed autoencoder, trained with data adhering to a group-sparse generative model. In this setting, we theoretically prove the convergence of the network parameters to a neighborhood of the generating matrix. We validate our model through numerical analysis and highlight the superior performance of networks with a group-sparse ReLU compared to networks that utilize traditional ReLUs, both in sparse coding and in parameter recovery tasks. We also provide real data experiments to corroborate the simulated results, and emphasize the clustering capabilities of structured sparsity models.

Path integral contour deformations for observables in SU(N) gauge theory
William Detmold, Gurtej Kanwar, Henry Lamm, Michael L. Wagman, Neill C. Warrington
Physical Review D, 2021, Vol. 103, Issue 9, Article 094517
[ arXiv:2101.12668 ]
Abstract: Path integral contour deformations have been shown to mitigate sign and signal-to-noise problems associated with phase fluctuations in lattice field theories. We define a family of contour deformations applicable to SU(N) lattice gauge theory that can reduce sign and signal-to-noise problems associated with complex actions and complex observables. For observables, these contours can be used to define deformed observables with identical expectation value but different variance. As a proof-of-principle, we apply machine learning techniques to optimize the deformed observables associated with Wilson loops in two-dimensional SU(2) and SU(3) gauge theory. We study loops consisting of up to 64 plaquettes and achieve variance reduction of up to 4 orders of magnitude.

The LHC Olympics 2020: A Community Challenge for Anomaly Detection in High Energy Physics
Gregor Kasieczka (ed), Benjamin Nachman (ed), David Shih (ed), Oz Amram, Anders Andreassen, Kees Benkendorfer, Blaz Bortolato, Gustaaf Brooijmans, Florencia Canelli, Jack H. Collins, Biwei Dai, Felipe F. De Freitas, Barry M. Dillon, Ioan-Mihail Dinu, Zhongtian Dong, Julien Donini, Javier Duarte, D. A. Faroughy, Julia Gonski, Philip Harris, Alan Kahn, Jernej F. Kamenik, Charanjit K. Khosa, Patrick Komiske, Luc Le Pottier, Pablo Martín-Ramiro, Andrej Matevc, Eric Metodiev, Vinicius Mikuni, Inês Ochoa, Sang Eon Park, Maurizio Pierini, Dylan Rankin, Veronica Sanz, Nilai Sarda, Uros Seljak, Aleks Smolkovic, George Stein, Cristina Mantilla Suarez, Manuel Szewc, Jesse Thaler, Steven Tsan, Silviu-Marian Udrescu, Louis Vaslin, Jean-Roch Vlimant, Daniel Williams, Mikaeel Yunus
Reports on Progress in Physics, 2021, Volume 84, Number 12
[ arXiv:2101.08320 ]
Abstract: A new paradigm for data-driven, model-agnostic new physics searches at colliders is emerging, and aims to leverage recent breakthroughs in anomaly detection and machine learning. In order to develop and benchmark new anomaly detection methods within this framework, it is essential to have standard datasets. To this end, we have created the LHC Olympics 2020, a community challenge accompanied by a set of simulated collider events. Participants in these Olympics have developed their methods using an R&D dataset and then tested them on black boxes: datasets with an unknown anomaly (or not). This paper will review the LHC Olympics 2020 challenge, including an overview of the competition, a description of methods deployed in the competition, lessons learned from the experience, and implications for data analyses with future datasets as well as future colliders.

Introduction to Normalizing Flows for Lattice Field Theory
Michael S. Albergo, Denis Boyda, Daniel C. Hackett, Gurtej Kanwar, Kyle Cranmer, Sébastien Racanière, Danilo Jimenez Rezende, and Phiala E. Shanahan
[ arXiv:2101.08176 ]
Abstract: This notebook tutorial demonstrates a method for sampling Boltzmann distributions of lattice field theories using a class of machine learning models known as normalizing flows. The ideas and approaches proposed in arXiv:1904.12072, arXiv:2002.02428, and arXiv:2003.06413 are reviewed and a concrete implementation of the framework is presented. We apply this framework to a lattice scalar field theory and to U(1) gauge theory, explicitly encoding gauge symmetries in the flow-based approach to the latter. This presentation is intended to be interactive and working with the attached Jupyter notebook is recommended.
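The core object in the tutorial above (arXiv:2101.08176) is an affine coupling layer, stacked into a flow that maps Gaussian noise to field configurations. The sketch below is a minimal stand-in, not the notebook's code: the 1D flattened lattice, the layer sizes, and all names are illustrative assumptions, and the reweighting against the lattice action exp(-S(φ)) is only indicated in a comment.

```python
import torch
import torch.nn as nn

L = 8                                   # toy lattice: L sites, flattened
even = (torch.arange(L) % 2).float()    # checkerboard partition of sites
masks = [even, 1.0 - even]              # alternate which half is frozen

class AffineCoupling(nn.Module):
    """Freeze the masked sites; scale and shift the rest, conditioned on them."""
    def __init__(self, mask, hidden=64):
        super().__init__()
        self.register_buffer("mask", mask)
        self.net = nn.Sequential(
            nn.Linear(L, hidden), nn.Tanh(), nn.Linear(hidden, 2 * L),
        )

    def forward(self, phi):
        frozen = phi * self.mask
        s, t = self.net(frozen).chunk(2, dim=-1)
        active = 1.0 - self.mask
        phi = frozen + active * (phi * torch.exp(s) + t)
        log_jac = (active * s).sum(dim=-1)   # log|det J| of this layer
        return phi, log_jac

def sample(layers, batch=1024):
    """Push Gaussian noise through the flow, tracking log q(phi) for
    reweighting or Metropolis accept/reject against exp(-S(phi))."""
    z = torch.randn(batch, L)
    log_q = torch.distributions.Normal(0.0, 1.0).log_prob(z).sum(dim=-1)
    phi = z
    for layer in layers:
        phi, log_jac = layer(phi)
        log_q = log_q - log_jac
    return phi, log_q

flow = [AffineCoupling(m) for m in masks * 2]   # 4 alternating layers
phi, log_q = sample(flow)
```

Training such a stack typically minimizes the reverse KL divergence between log q(φ) and −S(φ) up to a constant; the notebook develops a fuller version of this construction, including gauge-equivariant couplings for the U(1) case.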
E Pluribus Unum Ex Machina: Learning from Many Collider Events at Once Benjamin Nachman and Jesse Thaler Physical Review D, 2021, Vol. 103, Issue 11, Article 116013 [ arXiv:2101.07263 | code ] Abstract There have been a number of recent proposals to enhance the performance of machine learning strategies for collider physics by combining many distinct events into a single ensemble feature. To evaluate the efficacy of these proposals, we study the connection between single-event classifiers and multi-event classifiers under the assumption that collider events are independent and identically distributed (IID). We show how one can build optimal multi-event classifiers from single-event classifiers, and we also show how to construct multi-event classifiers such that they produce optimal single-event classifiers. This is illustrated for a Gaussian example as well as for classification tasks relevant for searches and measurements at the Large Hadron Collider. We extend our discussion to regression tasks by showing how they can be phrased in terms of parametrized classifiers. Empirically, we find that training a single-event (per-instance) classifier is more effective than training a multi-event (per-ensemble) classifier, as least for the cases we studied, and we relate this fact to properties of the loss function gradient in the two cases. While we did not identify a clear benefit from using multi-event classifiers in the collider context, we speculate on the potential value of these methods in cases involving only approximate independence, as relevant for jet substructure studies. Fast convolutional neural networks on FPGAs with hls4ml Thea Aarrestad, Vladimir Loncar, Nicolò Ghielmetti, Maurizio Pierini, Sioni Summers, Jennifer Ngadiuba, Christoffer Petersson, Hampus Linander, Yutaro Iiyama, Giuseppe Di Guglielmo, Javier Duarte, Philip Harris, Dylan Rankin, Sergo Jindariani, Kevin Pedro, Nhan Tran, Mia Liu, Edward Kreinar, Zhenbin Wu, Duc Hoang Machine Learning Science and Technology, 2021, Volume 2, Issue 4, Article 045015 [ arXiv:2101.05108 ] Abstract We introduce an automated tool for deploying ultra low-latency, low-power deep neural networks with convolutional layers on FPGAs. By extending the hls4ml library, we demonstrate an inference latency of 5μs using convolutional architectures, targeting microsecond latency applications like those at the CERN Large Hadron Collider. Considering benchmark models trained on the Street View House Numbers Dataset, we demonstrate various methods for model compression in order to fit the computational constraints of a typical FPGA device used in trigger and data acquisition systems of particle detectors. In particular, we discuss pruning and quantization-aware training, and demonstrate how resource utilization can be significantly reduced with little to no loss in model accuracy. We show that the FPGA critical resource consumption can be reduced by 97% with zero loss in model accuracy, and by 99% when tolerating a 6% accuracy degradation. Detection and Parameter Estimation of Gravitational Waves from Binary Neutron-Star Mergers in Real LIGO Data using Deep Learning Plamen G. Krastev, Kiranjyot Gill, V. Ashley Villar, Edo Berger Physics Letters B, 2021, Vol. 815, Article 136161 [ arXiv:2012.13101 ] Abstract One of the key challenges of real-time detection and parameter estimation of gravitational waves from compact binary mergers is the computational cost of conventional matched-filtering and Bayesian inference approaches. 
In particular, the application of these methods to the full signal parameter space available to the gravitational-wave detectors, and/or real-time parameter estimation, is computationally prohibitive. On the other hand, rapid detection and inference are critical for prompt follow-up of the electromagnetic and astro-particle counterparts accompanying important transients, such as binary neutron-star and black-hole neutron-star mergers. Training deep neural networks to identify specific signals and learn a computationally efficient representation of the mapping between gravitational-wave signals and their parameters allows both detection and inference to be done quickly and reliably, with high sensitivity and accuracy. In this work we apply a deep-learning approach to rapidly identify and characterize transient gravitational-wave signals from binary neutron-star mergers in real LIGO data. We show for the first time that artificial neural networks can promptly detect and characterize binary neutron-star gravitational-wave signals in real LIGO data, and distinguish them from noise and signals from coalescing black-hole binaries. We illustrate this key result by demonstrating that our deep-learning framework correctly classifies all gravitational-wave events from the Gravitational-Wave Transient Catalog, GWTC-1 [Phys. Rev. X 9 (2019), 031040]. These results emphasize the importance of using realistic gravitational-wave detector data in machine learning approaches, and represent a step towards achieving real-time detection and inference of gravitational waves.

Field of Junctions: Extracting Boundary Structure at Low SNR
Dor Verbin, Todd Zickler
[ arXiv:2011.13866 ]

Abstract: We introduce a bottom-up model for simultaneously finding many boundary elements in an image, including contours, corners and junctions. The model explains boundary shape in each small patch using a 'generalized M-junction' comprising M angles and a freely-moving vertex. Images are analyzed using non-convex optimization to cooperatively find M+2 junction values at every location, with spatial consistency being enforced by a novel regularizer that reduces curvature while preserving corners and junctions. The resulting 'field of junctions' is simultaneously a contour detector, corner/junction detector, and boundary-aware smoothing of regional appearance. Notably, its unified analysis of contours, corners, junctions and uniform regions allows it to succeed at high noise levels, where other methods for segmentation and boundary detection fail.

AI Poincaré: Machine Learning Conservation Laws from Trajectories
Ziming Liu and Max Tegmark
Physical Review Letters, 2021, Volume 126, Issue 18, Article 180604 [ arXiv:2011.04698 ]

Abstract: We present AI Poincaré, a machine learning algorithm for auto-discovering conserved quantities using trajectory data from unknown dynamical systems. We test it on five Hamiltonian systems, including the gravitational 3-body problem, and find that it discovers not only all exactly conserved quantities, but also periodic orbits, phase transitions and breakdown timescales for approximate conservation laws.

Parameter Inference from Event Ensembles and the Top-Quark Mass
Forrest Flesher, Katherine Fraser, Charles Hutchison, Bryan Ostdiek, Matthew D. Schwartz
Journal of High Energy Physics, 2021, Article 58 [ arXiv:2011.04666 ]

Abstract: One of the key tasks of any particle collider is measurement. In practice, this is often done by fitting data to a simulation, which depends on many parameters.
Sometimes, when the effects of varying different parameters are highly correlated, a large ensemble of data may be needed to resolve parameter-space degeneracies. An important example is measuring the top-quark mass, where other physical and unphysical parameters in the simulation must be marginalized over when fitting the top-quark mass parameter. We compare three different methodologies for top-quark mass measurement: a classical histogram fitting procedure, similar to one commonly used in experiment, optionally augmented with soft-drop jet grooming; a machine-learning method called DCTR; and a linear regression approach, either using a least-squares fit or with a dense linearly-activated neural network. Despite the fact that individual events are totally uncorrelated, we find that the linear regression methods work most effectively when we input an ensemble of events sorted by mass, rather than training them on individual events. Although all methods provide robust extraction of the top-quark mass parameter, the linear network does marginally best and is remarkably simple. For the top study, we conclude that the Monte-Carlo-based uncertainty on current extractions of the top-quark mass from LHC data can be reduced significantly (by perhaps a factor of 2) using networks trained on sorted event ensembles. More generally, machine learning from ensembles for parameter estimation has broad potential for collider physics measurements.

Quasi Anomalous Knowledge: Searching for new physics with embedded knowledge
Sang Eon Park, Dylan Rankin, Silviu-Marian Udrescu, Mikaeel Yunus, Philip Harris
Journal of High Energy Physics, 2021, Article 30 [ arXiv:2011.03550 | code ]

Abstract: Discoveries of new phenomena often involve a dedicated search for a hypothetical physics signature. Recently, novel deep learning techniques have emerged for anomaly detection in the absence of a signal prior. However, by ignoring signal priors, the sensitivity of these approaches is significantly reduced. We present a new strategy dubbed Quasi Anomalous Knowledge (QUAK), whereby we introduce alternative signal priors that capture some of the salient features of new physics signatures, allowing for the recovery of sensitivity even when the alternative signal is incorrect. This approach can be applied to a broad range of physics models and neural network architectures. In this paper, we apply QUAK to anomaly detection of new physics events at the CERN Large Hadron Collider utilizing variational autoencoders with normalizing flow.

Learning to Unknot
Sergei Gukov, James Halverson, Fabian Ruehle, and Piotr Sułkowski
Machine Learning: Science and Technology, 2021, Volume 2, Number 2, Article 025035 [ arXiv:2010.16263 ]

Abstract: We introduce natural language processing into the study of knot theory, as made natural by the braid word representation of knots. We study the UNKNOT problem of determining whether or not a given knot is the unknot. After describing an algorithm to randomly generate $N$-crossing braids and their knot closures and discussing the induced prior on the distribution of knots, we apply binary classification to the UNKNOT decision problem. We find that the Reformer and shared-QK Transformer network architectures outperform fully-connected networks, though all perform well. Perhaps surprisingly, we find that accuracy increases with the length of the braid word, and that the networks learn a direct correlation between the confidence of their predictions and the degree of the Jones polynomial.
Finally, we utilize reinforcement learning (RL) to find sequences of Markov moves and braid relations that simplify knots and can identify unknots by explicitly giving the sequence of unknotting actions. Trust region policy optimization (TRPO) performs consistently well for a wide range of crossing numbers and thoroughly outperforms other RL algorithms and random walkers. Studying these actions, we find that braid relations are more useful in simplifying to the unknot than one of the Markov moves.

Enhancing searches for resonances with machine learning and moment decomposition
Ouail Kitouni, Benjamin Nachman, Constantin Weisser, and Mike Williams
Journal of High Energy Physics, 2021, Article 70 [ arXiv:2010.09745 | code ]

Abstract: A key challenge in searches for resonant new physics is that classifiers trained to enhance potential signals must not induce localized structures. Such structures could result in a false signal when the background is estimated from data using sideband methods. A variety of techniques have been developed to construct classifiers which are independent of the resonant feature (often a mass). Such strategies are sufficient to avoid localized structures, but are not necessary. We develop a new set of tools using a novel moment loss function (Moment Decomposition, or MoDe) which relax the assumption of independence without creating structures in the background. By allowing classifiers to be more flexible, we enhance the sensitivity to new physics without compromising the fidelity of the background estimation.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7226521372795105, "perplexity": 2072.23817368218}, "config": {"markdown_headings": false, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030337338.11/warc/CC-MAIN-20221002150039-20221002180039-00477.warc.gz"}
https://arxiv-export-lb.library.cornell.edu/abs/2002.01459v1
# Expression and interactions of stereo-chemically active lone pairs and their relation to structural distortions and thermal conductivity

Abstract: Stereo-chemically active lone pairs are typically described as an important non-bonding effect, and large interest has centered on understanding the derived effect of lone pair expression on physical properties such as the thermal conductivity. To manipulate such properties, it is essential to understand the conditions that lead to lone pair expression and to provide a quantitative chemical description. Here we first use density functional theory calculations to establish the presence of stereo-chemically active lone pairs on antimony in $\text{MnSb}_{2}\text{O}_{4}$. The lone pairs are formed through a similar mechanism to those in binary post-transition metal compounds in an oxidation state of two less than their main group number, where the degree of orbital interaction determines the expression of the lone pair. In $\text{MnSb}_{2}\text{O}_{4}$ the Sb lone pairs interact through a void space in the crystal structure, and they minimize their mutual repulsion by introducing a deflection angle. This angle increases significantly with decreasing Sb-Sb distance, thus showing the highly destabilizing nature of the lone pair interactions. Analysis of the chemical bonding in the structure shows that it is dominated by polar covalent interactions. A database search of related ternary chalcogenide structures shows that for structures with a lone pair the degree of lone pair expression is largely determined by whether the antimony-chalcogen units are connected or not, suggesting a cooperative effect. Isolated $\text{SbX}_3$ units have larger X-Sb-X bond angles, and therefore weaker lone pair expression than connected units. Since increased lone pair expression is equivalent to an increased orbital interaction (covalent bonding), which typically leads to increased heat conduction, this can explain the previously established correlation between larger bond angles and lower thermal conductivity.

Comments: 30 pages (including supporting information), 8 figures
Subjects: Materials Science (cond-mat.mtrl-sci)
Journal reference: IUCrJ (2020). 7, 480-489
DOI: 10.1107/S2052252520003619
Cite as: arXiv:2002.01459 [cond-mat.mtrl-sci] (or arXiv:2002.01459v1 [cond-mat.mtrl-sci] for this version)

## Submission history

From: Kasper Tolborg
[v1] Tue, 4 Feb 2020 18:36:46 GMT (3577kb)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7489717602729797, "perplexity": 2777.321109540755}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362919.65/warc/CC-MAIN-20211203212721-20211204002721-00358.warc.gz"}
http://ijscai.iraj.in/paper_detail.php?paper_id=15546&name=Feeding_Hand-Crafted_Features_for_Enhancing_The_Performance_of_Convolutional_Neural_Networks_Based_Age/Gender_Estimation
International Journal of Soft Computing And Artificial Intelligence (IJSCAI), Volume-7, Issue-2 (Nov, 2019)

Paper Title: Feeding Hand-Crafted Features for Enhancing The Performance of Convolutional Neural Networks Based Age/Gender Estimation

Abstract: The convolutional neural network (CNN) is believed to find the right features for a given problem, and thus hand-crafted features for the problem have been neglected recently. In this paper, we show that studying and using an appropriate feature for the problem may still be important, as it can enhance the performance of CNN-based algorithms. Specifically, we propose age and gender estimation methods based on the CNN, which takes as input the face image and its Gabor filter responses. This is based on the domain knowledge that the Gabor response has been one of the most effective features for face-related problems. Precisely, we first derive several Gabor filters with different parameters and then apply them to the given image to obtain several Gabor responses. The stack of the input and its Gabor responses is fed through a $1\times 1$ convolution to the CNN, so that the input of the CNN is a fusion of the input image and the Gabor responses. Experiments show that our method helps the CNN to find face-related features at the earlier layers and thus improves its performance.

Keywords: Convolutional Neural Network (CNN), Hand-Crafted Features, Age Classification Network.

Author(s): Sepidehsadat Hosseini, Nam Ik Cho, Monkey King, Bajie Zhu, Seng Tang

Published on 2019-07-26
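As an illustration of the input-construction step this abstract describes, the sketch below builds a small Gabor filter bank and stacks the responses with a grayscale image; the filter parameters and the function itself are our own illustrative choices, not the values used in the paper.

import cv2
import numpy as np

def gabor_stack(gray_img: np.ndarray, n_orientations: int = 4) -> np.ndarray:
    """Return an (H, W, 1 + n_orientations) stack: the image plus Gabor responses."""
    img = gray_img.astype(np.float32)
    channels = [img]
    for i in range(n_orientations):
        theta = i * np.pi / n_orientations  # evenly spaced orientations
        kernel = cv2.getGaborKernel(ksize=(21, 21), sigma=4.0, theta=theta,
                                    lambd=10.0, gamma=0.5, psi=0.0)
        channels.append(cv2.filter2D(img, -1, kernel))
    return np.stack(channels, axis=-1)

The resulting tensor would then be fed to the network, where a 1x1 convolution can fuse the raw pixels with the hand-crafted responses, as the abstract describes.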
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5849863886833191, "perplexity": 1482.6240691825979}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600402130615.94/warc/CC-MAIN-20201001030529-20201001060529-00676.warc.gz"}
https://rlcard.org/algorithms.html
# Algorithms¶

## Deep Monte-Carlo¶

Deep Monte-Carlo (DMC) is a very effective algorithm for card games. This is the only algorithm that shows human-level performance on complex games such as Dou Dizhu.

## Deep-Q Learning¶

Deep-Q Learning (DQN) [paper] is a basic reinforcement learning (RL) algorithm. We wrap DQN as an example to show how RL algorithms can be connected to the environments. In the DQN agent, the following classes are implemented (a minimal illustrative sketch of the memory buffer appears at the end of this page):

• DQNAgent: The agent class that interacts with the environment.
• Memory: A memory buffer that manages the storing and sampling of transitions.
• Estimator: The neural network that is used to make predictions.

## NFSP¶

Neural Fictitious Self-Play (NFSP) [paper] is an end-to-end approach to solving card games with deep reinforcement learning. NFSP has an inner RL agent and a supervised agent that is trained on the data generated by the RL agent. In the toolkit, we use DQN as the RL agent.

## CFR (chance sampling)¶

Counterfactual Regret Minimization (CFR) [paper] is a regret minimization method for solving imperfect information games.
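Below is a minimal, generic Python sketch of the kind of replay memory a DQN agent relies on; it is illustrative only and is not RLCard's actual Memory class or API.

import random
from collections import deque, namedtuple

Transition = namedtuple("Transition", ["state", "action", "reward", "next_state", "done"])

class Memory:
    """Fixed-size buffer that stores transitions and samples random minibatches."""

    def __init__(self, capacity: int = 10000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions are evicted automatically

    def save(self, state, action, reward, next_state, done):
        self.buffer.append(Transition(state, action, reward, next_state, done))

    def sample(self, batch_size: int):
        # Uniform random sampling breaks the temporal correlation between transitions.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))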
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7820964455604553, "perplexity": 4379.176611527546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964362230.18/warc/CC-MAIN-20211202145130-20211202175130-00502.warc.gz"}
http://etna.ricam.oeaw.ac.at/volumes/2011-2020/vol44/abstract.php?vol=44&pages=624-638
## Perturbation of partitioned linear response eigenvalue problems

Zhongming Teng, Linzhang Lu, and Ren-Cang Li

### Abstract

This paper is concerned with bounds for the linear response eigenvalue problem for $H=\begin{bmatrix} 0 & K \\ M & 0 \end{bmatrix}$, where $K$ and $M$ admit a $2\times 2$ block partitioning. Bounds on how its eigenvalues change when $K$ and $M$ are perturbed are obtained. They are of linear order with respect to the diagonal block perturbations and of quadratic order with respect to the off-diagonal block perturbations in $K$ and $M$. The result is helpful in understanding how the Ritz values move towards eigenvalues in some efficient numerical algorithms for the linear response eigenvalue problem. Numerical experiments are presented to support the analysis.

Full Text (PDF) [420 KB], BibTeX

### Key words

linear response eigenvalue problem, random phase approximation, perturbation, quadratic perturbation bound

AMS subject classifications: 15A42, 65F15
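As a small numerical illustration of this block structure (toy sizes and random entries; this does not reproduce the paper's perturbation bounds): the eigenvalues of $H$ come in $\pm$ pairs whose squares are the eigenvalues of $KM$.

import numpy as np

rng = np.random.default_rng(0)
n = 4
K = rng.standard_normal((n, n))
M = rng.standard_normal((n, n))
H = np.block([[np.zeros((n, n)), K],
              [M, np.zeros((n, n))]])  # H = [[0, K], [M, 0]]

lam = np.linalg.eigvals(H)      # 2n eigenvalues, paired as +/- lambda
mu = np.linalg.eigvals(K @ M)   # n eigenvalues

print(np.sort_complex(lam**2))  # each eigenvalue of K @ M appears twice here
print(np.sort_complex(mu))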
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9510281085968018, "perplexity": 965.6638103699185}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738819.78/warc/CC-MAIN-20200811180239-20200811210239-00533.warc.gz"}
https://socratic.org/questions/what-is-the-percent-composition-of-a-carbon-in-heptane-c-7h-16
Chemistry Topics

What is the percent composition of carbon in heptane, $C_7H_{16}$?

Formula mass of $C_7H_{16} = 12 \times 7 + 16 \times 1 = 100$. Of these 100 parts, $12 \times 7 = 84$ parts are carbon and $16 \times 1 = 16$ parts are hydrogen, so heptane is 84% carbon by mass.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 2, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7785631418228149, "perplexity": 10471.825395099866}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304883.8/warc/CC-MAIN-20220129092458-20220129122458-00427.warc.gz"}
http://mathematica.stackexchange.com/questions/4696/how-to-set-different-labelstyles-for-controls-in-manipulate/4698
# How to set different labelstyles for controls in manipulate?

Background:

Manipulate[
 {jj, kk},
 {{jj, 2, "Select j"}, 1, 11},
 {{kk, 2, "Select k"}, 1, 11},
 LabelStyle -> {Bold, Medium}
]

In the example above the fonts of the labels "Select j" and "Select k" are, as expected, set to bold and medium size.

Question: Is there a way to set the label style for each variable individually?

-

You can use Style for the labels, wrapping each one individually:

Manipulate[{jj, kk},
 {{jj, 2, Style["Select j", Bold, Medium]}, 1, 11},
 {{kk, 2, Style["Select k", Italic, Large]}, 1, 11}]
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3145520091056824, "perplexity": 19414.559998953355}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-18/segments/1461861700245.92/warc/CC-MAIN-20160428164140-00160-ip-10-239-7-51.ec2.internal.warc.gz"}
https://www.dlubal.com/en-US/support-and-learning/support/knowledge-base/001451
Loading According to EN 1991-1-4 and Safety Against Overturning of Circular Cylinders
Technical Article 001451, 06/14/2017

This article describes the determination of force coefficients under wind load and the calculation of a stability factor against overturning.

• Factor of safety against overturning < 1: the structural component is at risk of overturning.
• Factor of safety against overturning = 1: the stability moment and the overturning moment are equal. The model is unstable, and overturning cannot be ruled out.
• Factor of safety against overturning > 1: the model is not at risk of overturning.

Example

As an example, consider a circular cylinder with a diameter of 2.5 m and a height of 8 m, located in wind load zone 2 with terrain category 3.

Fundamental value of the basic velocity: $v_{b0} = 25.0\ \mathrm{m/s}$
Directional factor: $c_{dir} = 1$
Season factor: $c_{season} = 1$
Air density at an atmospheric pressure of 1,013 hPa and T = 10 °C: $\rho = 1.25\ \mathrm{kg/m^3}$
Kinematic viscosity of air: $\nu = 15 \cdot 10^{-6}\ \mathrm{m^2/s}$
Basic velocity: $v_b = c_{dir} \cdot c_{season} \cdot v_{b0} = 25.0\ \mathrm{m/s}$
Basic velocity pressure: $q_b = \frac{1}{2} \cdot \rho \cdot v_b^2 = 0.391\ \mathrm{kN/m^2}$
Peak velocity pressure: $q_p = 1.5 \cdot q_b = 0.586\ \mathrm{kN/m^2}$
Peak velocity: $v_{ze} = \sqrt{\frac{2 \cdot q_p}{\rho}} = 30.619\ \mathrm{m/s}$
Equivalent surface roughness: $k = 0.2\ \mathrm{mm}$ (galvanized steel)
Ratio of equivalent surface roughness to width: $\frac{k}{b} = 8 \cdot 10^{-5}$
Reynolds number: $R_e = \frac{b \cdot v_{ze}}{\nu} = 5.1 \cdot 10^6$
Force coefficient of cylinders without free-end flow: $c_{f0} = 1.2 + \dfrac{0.18 \cdot \log_{10}\left(10 \cdot \frac{k}{b}\right)}{1 + 0.4 \cdot \log_{10}\left(\frac{R_e}{10^6}\right)} = 0.7666$
Effective slenderness: $\lambda = \frac{l}{b} = 3.2$
End-effect factor: $\psi_\lambda = 0.65$
Structural factor: $c_s c_d = 1$
Reference area: $A_{ref} = l \cdot b = 20\ \mathrm{m^2}$
Force coefficient: $c_f = c_{f0} \cdot \psi_\lambda = 0.498$
Wind force: $F_w = c_s c_d \cdot c_f \cdot q_p \cdot A_{ref} = 5.835\ \mathrm{kN}$
Surface load due to wind: $w = \frac{F_w}{A_{ref}} = 0.292\ \mathrm{kN/m^2}$

Stability Factor due to Overturning

Height of the circular cylinder: $h = 6\ \mathrm{m}$
Distance between supports: $a = 1.35\ \mathrm{m}$
$F_G = 18.495\ \mathrm{kN}$
Overturning moment: $M_K = F_w \cdot \frac{h}{2} = 13.128\ \mathrm{kNm}$
Stability moment: $M_S = F_G \cdot \frac{a}{2} = 12.484\ \mathrm{kNm}$
Factor of safety against overturning: $\eta = \frac{M_S}{M_K} = 0.951$

If you run the calculation in RFEM, you can recognize from the position of the resultant that it lies behind the overturning edge of the circular cylinder. Thus, the model would be unstable if the supports were not additionally secured against pull-out.

Reference
[1] Eurocode 1: Actions on structures - Part 1-4: General actions - Wind actions; EN 1991-1-4:2005 + A1:2010 + AC:2010
[2] National Annex - Nationally determined parameters - Eurocode 1: Actions on structures - Part 1-4: General actions - Wind actions.
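Purely as a numerical cross-check of the force-coefficient chain above, here is a short Python sketch; the variable names are ours, and the $c_{f0}$ expression follows the formula as written in the example.

import math

rho = 1.25       # air density [kg/m^3]
v_b0 = 25.0      # fundamental value of the basic velocity [m/s]
b, l = 2.5, 8.0  # cylinder diameter and length [m]
k = 0.2e-3       # equivalent surface roughness [m]
nu = 15e-6       # kinematic viscosity of air [m^2/s]

q_b = 0.5 * rho * v_b0**2        # basic velocity pressure [Pa]
q_p = 1.5 * q_b                  # peak velocity pressure [Pa]
v_ze = math.sqrt(2 * q_p / rho)  # peak velocity [m/s]
Re = b * v_ze / nu               # Reynolds number

c_f0 = 1.2 + 0.18 * math.log10(10 * k / b) / (1 + 0.4 * math.log10(Re / 1e6))
c_f = c_f0 * 0.65                # apply end-effect factor psi_lambda
A_ref = l * b                    # reference area [m^2]
F_w = 1.0 * c_f * q_p * A_ref / 1000  # wind force [kN], with c_s c_d = 1

print(round(c_f0, 4), round(c_f, 3), round(F_w, 3))
# -> approximately 0.7655 0.498 5.831, in line with the article's rounded values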
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6202957630157471, "perplexity": 4831.011472846775}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648594.80/warc/CC-MAIN-20180323200519-20180323220519-00243.warc.gz"}
https://repository.upenn.edu/dissertations/AAI9026666/
Heads and tails: Energetic and structural bases of binding and electron transfer performance of cofactors at the Q(A) site of the reaction center protein from Rhodobacter sphaeroides

Abstract

Factors controlling cofactor binding and electron transfer function at the primary quinone, $Q_A$, site in isolated reaction center protein from Rhodobacter sphaeroides are determined from effects of quinone cofactor head group and tail structure alterations on: (1) $Q_A$ site binding free energy in water ($\Delta G^\circ_{B,W}$) and hexane ($\Delta G^\circ_{B,H}$), (2) cofactor electrochemical potential in the site relative to dimethylformamide solution, and (3) temperature and reaction free energy dependences of the electron transfer rate constants for $Q_A$ reduction by bacteriopheophytin ($k_1$) and charge recombination with the oxidized bacteriochlorophyll dimer ($k_b$). A thermodynamic cycle formalism is developed to resolve intrinsic ligand-protein interaction and aqueous solvation contributions to binding. Relative to hexane, the aqueous solvation contribution to $\Delta G^\circ_{B,W}$ is specified by $0.8\,\Delta G^\circ_{tr}$, where $\Delta G^\circ_{tr}$ is the quinone solvent transfer free energy (range: -0.5 to -6.8 kcal/mole). Effects of systematic variation of the native isoprene tail structure on $\Delta G^\circ_{B,H}$ and $\Delta G^\circ_{B,W}$ at the $Q_A$ and secondary, $Q_B$, sites reveal: (1) a defined tail binding domain spanning the first three isoprene units, and (2) a strong binding specificity for the isoprene relative to saturated alkyl structures ($>4.1$ kcal/mole). Tail structure does not significantly influence electron transfer rates (variation of $k_1$ and $k_b$ $<5$-fold). Comparison of $Q_A$ site $\Delta G^\circ_{B,H}$ values of quinone and analog head groups in which one or both carbonyl groups are removed shows that one carbonyl oxygen atom dominates hydrogen bond contact with the protein ($\Delta H = -3.5$ kcal/mole). In contrast, both oxygen atoms participate in the semiquinone-site interaction, as shown by a 166 mV loss of in situ cofactor redox couple stability relative to DMF associated with single removal. The rationally-selected exotic cofactors tetrafluoro- and trinitro-fluorenone, and m-dinitrobenzene display $k_1$ and $k_b$ dependences on temperature (7 to 295 K) and reaction free energy ($\Delta G^\circ_{et}$) that are comparable with quinones. These results indicate that: (1) structural elements of the native quinone-$Q_A$ site interaction are not essential for electron transfer function, and (2) the values of $k_1$ and $k_b$ appear to be determined at the $Q_A$ site primarily through the contribution of the in situ electrochemical free energy of the cofactor to $\Delta G^\circ_{et}$.

Subject Area

Biochemistry | Biophysics

Recommended Citation

Warncke, Kurt, "Heads and tails: Energetic and structural bases of binding and electron transfer performance of cofactors at the Q(A) site of the reaction center protein from Rhodobacter sphaeroides" (1990). Dissertations available from ProQuest. AAI9026666. https://repository.upenn.edu/dissertations/AAI9026666
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5463805794715881, "perplexity": 7462.57771199506}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711001.28/warc/CC-MAIN-20221205000525-20221205030525-00061.warc.gz"}
https://jsd-cog.org/danger-in-rejecting-light-2/
1. How does God regard sins of ignorance?
“And the times of this ignorance God winked at; but now commandeth all men everywhere to repent,” Acts 17:30.

2. To whom is sin imputed?
“Therefore to him that knoweth to do good, and doeth it not, to him it is sin,” James 4:17.

3. In what words did Christ teach the same truth?
“Jesus said unto them, If ye were blind, ye should have no sin: but now ye say, We see: therefore your sin remaineth.” “If I had not come and spoken unto them, they had not had sin: but now they have no cloak [Margin: excuse] for their sin,” John 9:41; 15:22. See John 3:19.

4. In view of this, what instruction does He give?
“Walk while ye have the light, lest darkness come upon you . . . . While ye have light, believe in the light, that ye may be children of light,” John 12:35, 36.

5. Who courts the light?
“Every one that doeth evil hateth the light . . . . But he that doeth truth cometh to the light, that his deeds may be made manifest, that they are wrought in God,” John 3:20, 21.
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8125240802764893, "perplexity": 21021.003888770247}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107874637.23/warc/CC-MAIN-20201021010156-20201021040156-00028.warc.gz"}
https://quizlet.com/8496388/test-3-flash-cards/
# test 3 (29 terms)

Enzymes are potent catalysts because they:
dramatically lower the activation energy for the reactions they catalyze.

Which of the following is true of the binding energy derived from enzyme-substrate interactions?
It is used to hold functional groups in optimal orientations for reactions to occur.

The concept of "induced fit" refers to the model that says:
substrate binding induces a conformational change in the enzyme, which can enhance the binding of substrate to enzyme and/or bring functional groups into proper orientation for the rapid change of substrate to product.

For an enzyme that follows Michaelis-Menten kinetics, which of the following statements about a plot of v0 vs. [S] is false?
At very high [S], the velocity curve approaches a horizontal line that intersects the y-axis at Km.

Which of these statements about enzyme-catalyzed reactions is false?
The activation energy for the catalyzed reaction is the same as for the uncatalyzed reaction, but the equilibrium constant is more favorable in the forward direction for the enzyme-catalyzed reaction.

Which of the following statements about allosteric control of enzymatic activity is false?
Negative allosteric effectors compete with substrates for their binding sites.

A metabolic pathway proceeds according to the scheme R->S->T->U->V->W. A regulatory enzyme, X, catalyzes the first reaction in the pathway, R->S. Which of the following is most likely correct for this pathway?
The last product, W, is likely to be a negative regulator of X, leading to feedback inhibition.

Which of the following describes how trypsinogen is converted to trypsin?
Proteolysis of trypsinogen forms trypsin.

Enzyme X exhibits maximum enzyme activity at pH = 7.0. X shows a sharp decrease in activity as the pH drops below 6.4. One likely interpretation of this pH activity is that:
a critical His residue on the enzyme is involved in the catalysis.

Which one of the following is true of the pentoses found in nucleic acids, DNA, and RNA?
The pentoses are always in the β-furanose form.

In double-stranded DNA, cytosine typically base-pairs with:
guanine.

Which of the following is not true of naturally occurring DNA?
The ratio of A+T/G+C is constant for all natural DNAs.

In naturally occurring double-stranded DNA:
the two strands have complementary sequences.

B-form DNA in vivo has a _____-handed helix, is ______ Å in diameter, with a rise of _____ Å per base pair.
right; 20; 3.4

Which of the following is a palindromic sequence?
GTATAC / CATATG

Double-stranded regions of RNA:
can form between two complementary regions of the same single strand of RNA.

When double-stranded DNA is heated at neutral pH, which change occurs?
The hydrogen bonds between A's and T's break.

In living cells, nucleotides and their derivatives can serve as: carriers of metabolic energy, enzyme cofactors, intracellular signaling molecules, precursors for nucleic acid synthesis:
all of the above.

Triple-helical DNA structures can result from Hoogsteen interactions. These interactions are primarily:
hydrogen bonds involving the bases.

The biological role of restriction enzymes is to:
degrade foreign DNA that enters a cell.

Certain restriction enzymes produce cohesive (sticky) ends. This means that they:
make a staggered double-strand cut, leaving ends with a few nucleotides of single-stranded DNA protruding.
In the laboratory, recombinant plasmids are commonly introduced into bacterial cells by:
transformation-heat shock of the cells incubated with plasmid DNA in the presence of CaCl2.

A PCR reaction mixture needs to contain all of the following except:
a heat-stable restriction endonuclease.

Which of the following statements about the polymerase chain reaction (PCR) is false?
DNA is amplified at many points within a cellular genome.

Which of the following statements about type II restriction enzymes is false?
They cleave and ligate DNA.

The E. coli recombinant plasmid pBR322 has been widely utilized in genetic engineering experiments. pBR322 has all of the following features except:
a number of palindromic sequences near the EcoRI site, which permit the plasmid to assume a conformation that protects newly inserted DNA from nuclease degradation.

Which of the following statements regarding plasmid cloning vectors is correct?
The copy number of plasmids may vary from a few, to 10-20, to possibly several hundred within a cell, depending on the plasmid.

Michaelis-Menten equation:
$v_0 = \dfrac{V_{max}[S]}{K_m + [S]}$

Hypochromic effect:
the decrease in UV absorbance that results from base stacking between complementary strands in double-stranded DNA and RNA.
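As a quick numerical reading of the Michaelis-Menten card above (the values are made up, just to illustrate the limiting behavior referenced in the kinetics questions):

def mm_rate(s, vmax, km):
    """Michaelis-Menten initial velocity: v0 = Vmax*[S] / (Km + [S])."""
    return vmax * s / (km + s)

vmax, km = 100.0, 2.0          # arbitrary units
print(mm_rate(km, vmax, km))   # at [S] = Km, v0 = Vmax/2 -> 50.0
print(mm_rate(1e6, vmax, km))  # at very high [S], v0 approaches Vmax (~100)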
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8267882466316223, "perplexity": 5336.129954848588}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698542648.35/warc/CC-MAIN-20161202170902-00291-ip-10-31-129-80.ec2.internal.warc.gz"}
https://www.envoyproxy.io/docs/envoy/v1.17.4/configuration/http/http_filters/jwt_authn_filter
JWT Authentication¶

This HTTP filter can be used to verify JSON Web Tokens (JWT). It will verify the signature, audiences and issuer of a JWT. It will also check its time restrictions, such as expiration and nbf (not before) time. If the JWT verification fails, the request will be rejected. If the JWT verification succeeds, its payload can be forwarded to the upstream for further authorization if desired.

JWKS is needed to verify JWT signatures. It can be specified in the filter config or fetched remotely from a JWKS server.

The following JWT alg values are supported: ES256, ES384, ES512, HS256, HS384, HS512, RS256, RS384, RS512, PS256, PS384, PS512, EdDSA

Configuration¶

This filter should be configured with the name envoy.filters.http.jwt_authn. This HTTP filter config has two fields:

• Field providers specifies how a JWT should be verified, such as where to extract the token, where to fetch the public key (JWKS) and where to output its payload.
• Field rules specifies matching rules and their requirements. If a request matches a rule, its requirement applies. The requirement specifies which JWT providers should be used.

JwtProvider¶

JwtProvider specifies how a JWT should be verified. It has the following fields:

• issuer: the principal that issued the JWT, usually a URL or an email address.
• audiences: a list of JWT audiences allowed to access. A JWT containing any of these audiences will be accepted. If not specified, the audiences in the JWT will not be checked.
• local_jwks: fetch JWKS from a local data source, either a local file or an embedded inline string.
• remote_jwks: fetch JWKS from a remote HTTP server, with a configurable cache duration.
• forward: if true, the JWT will be forwarded to the upstream.
• from_headers: extract the JWT from HTTP headers.
• from_params: extract the JWT from query parameters.

Default Extract Location¶

If from_headers and from_params are empty, the default location to extract the JWT is the HTTP header

Authorization: Bearer <token>

and the query parameter key access_token, as

/path?access_token=<JWT>

If a request has two tokens, one from the header and the other from the query parameter, all of them must be valid.

In the filter config, providers is a map from provider_name to a JwtProvider. The provider_name must be unique; it is referred to by the provider_name field of a JwtRequirement.

Important: For remote_jwks, a jwks_cluster cluster is required. Due to this requirement, OpenID Connect Discovery is not supported, since the URL to fetch JWKS is in the response of the discovery, and it is not easy to set up a cluster config for a dynamic URL.

Remote JWKS config example¶

providers:
  provider_name1:
    issuer: https://example.com
    audiences:   # audience list elided in the original page
    remote_jwks:
      http_uri:
        uri: https://example.com/jwks.json
        cluster: example_jwks_cluster
        timeout: 1s
      cache_duration:
        seconds: 300

The example above fetches JWKS from a remote server at the URL https://example.com/jwks.json. The token will be extracted from the default extract locations. The token will not be forwarded to the upstream, and the JWT payload will not be added to the request headers. The following cluster example_jwks_cluster is needed to fetch the JWKS.
clusters:
- name: example_jwks_cluster
  type: STRICT_DNS
  load_assignment:
    cluster_name: example_jwks_cluster
    endpoints:
    - lb_endpoints:
      - endpoint:
          address:
            socket_address:
              address: example.com   # host assumed from the JWKS URI above
              port_value: 443
  transport_socket:
    name: envoy.transport_sockets.tls

Inline JWKS config example¶

Another config example using an inline JWKS:

providers:
  provider_name2:
    issuer: https://example2.com
    local_jwks:
      inline_string: PUBLIC-KEY
    from_headers:
    - name: jwt-assertion
    forward: true
    forward_payload_header: x-jwt-payload

The example above uses an inline string to specify the JWKS. The JWT will be extracted from the HTTP header jwt-assertion: <JWT>, and its payload will be forwarded to the upstream in the header x-jwt-payload: base64url_encoded(jwt_payload_in_JSON).

RequirementRule¶

RequirementRule has two fields:

• Field match specifies how a request can be matched; e.g. by HTTP headers, by query parameters, or by path prefixes.
• Field requires specifies the JWT requirement, e.g. which provider is required.

Important:

• If a request matches multiple rules, the first matched rule will apply.
• If the matched rule has an empty requires field, JWT verification is not required.
• If a request doesn't match any rules, JWT verification is not required.

Single requirement config example¶

providers:
  jwt_provider1:
    issuer: https://example.com
    audiences:
    - audience1
    local_jwks:
      inline_string: PUBLIC-KEY
rules:
- match:
    prefix: /health
- match:
    prefix: /api
  requires:
    provider_and_audiences:
      provider_name: jwt_provider1
      audiences:
      - api_audience
- match:
    prefix: /
  requires:
    provider_name: jwt_provider1

The config above uses single requirement rules; each rule may have either an empty requirement or a single requirement with one provider name.

Group requirement config example¶

providers:
  provider1:
    issuer: https://provider1.com
    local_jwks:
      inline_string: PUBLIC-KEY
  provider2:
    issuer: https://provider2.com
    local_jwks:
      inline_string: PUBLIC-KEY
rules:
- match:
    prefix: /any
  requires:
    requires_any:
      requirements:
      - provider_name: provider1
      - provider_name: provider2
- match:
    prefix: /all
  requires:
    requires_all:
      requirements:
      - provider_name: provider1
      - provider_name: provider2

The config above uses more complex group requirements:

• The first rule specifies requires_any; if either the provider1 or provider2 requirement is satisfied, the request is OK to proceed.
• The second rule specifies requires_all; only if both the provider1 and provider2 requirements are satisfied is the request OK to proceed.
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.15629714727401733, "perplexity": 19071.473104131444}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103661137.41/warc/CC-MAIN-20220630031950-20220630061950-00009.warc.gz"}
http://clay6.com/qa/962/the-number-of-arbitrary-constants-in-the-general-solution-of-a-differential
# The number of arbitrary constants in the general solution of a differential equation of fourth order is $(A)\;0\qquad(B)\;2\qquad(C)\;3\qquad(D)\;4$

Toolbox:
• The number of arbitrary constants in a solution of a differential equation of order n is equal to its order.

The given differential equation is of fourth order, so the number of arbitrary constants is 4. Hence the correct answer is $D$.
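A quick check of this rule with SymPy (assuming a recent version): solving the simplest fourth-order equation, $y''''(x) = 0$, yields a general solution with exactly four arbitrary constants.

import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")
sol = sp.dsolve(sp.Eq(y(x).diff(x, 4), 0))
print(sol)  # y(x) = C1 + C2*x + C3*x**2 + C4*x**3, i.e. four constants C1..C4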
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9635086059570312, "perplexity": 82.28604600049631}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171171.24/warc/CC-MAIN-20170219104611-00398-ip-10-171-10-108.ec2.internal.warc.gz"}
https://www.gradesaver.com/textbooks/math/algebra/elementary-algebra/chapter-3-equations-and-problem-solving-3-1-solving-first-degree-equations-problem-set-3-1-page-100/21
## Elementary Algebra

$b=0.27$

Using the properties of equality, the value of the variable that satisfies the given equation $b+0.19=0.46$ is \begin{array}{l}\require{cancel} b=0.46-0.19 \\\\ b=0.27 .\end{array}
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9945158362388611, "perplexity": 3952.1482674470485}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-47/segments/1542039743184.39/warc/CC-MAIN-20181116194306-20181116220306-00177.warc.gz"}
http://poisotlab.io/EcologicalNetwork.jl/latest/community/nlinks/
# Number of links and connectance # EcologicalNetwork.linksFunction. Number of links in a network links(N::EcoNetwork) For all type of networks, this is the sum of the adjacency matrix. Note that for quantitative networks, this is the cumulative sum of link weights. # EcologicalNetwork.link_numberFunction. Number of links in a quantitative network link_number(N::QuantitativeNetwork) In quantitative networks only, returns the number of non-zero interactions. # EcologicalNetwork.links_varFunction. Variance in the expected number of links links_var(N::ProbabilisticNetwork) Expected variance of the number of links for a probabilistic network. # EcologicalNetwork.linkage_densityFunction. linkage_density(N::DeterministicNetwork) Number of links divided by species richness. ## Connectance # EcologicalNetwork.connectanceFunction. Connectance connectance(N::EcoNetwork) Number of links divided by the number of possible interactions. In unipartite networks, this is $L/S^2$. In bipartite networks, this is $L/(T × B)$. Connectance of a quantitative network connectance(N::QuantitativeNetwork) Connectance of a quantitative network – the information on link weight is ignored. # EcologicalNetwork.connectance_varFunction. Variance in the expected connectance connectance_var(N::ProbabilisticNetwork) Expected variance of the connectance for a probabilistic matrix, measured as the variance of the number of links divided by the squared size of the matrix.
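To make the definitions above concrete, here is a short generic Python illustration of links and connectance on a plain adjacency matrix; this is not the Julia API documented here, just the same arithmetic.

import numpy as np

A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]])  # unipartite network with S = 3 species

L = int(A.sum())           # number of links: the sum of the adjacency matrix
S = A.shape[0]
connectance = L / S**2     # L / S^2 for a unipartite network
print(L, connectance)      # 4 links, connectance = 4/9, about 0.444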
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8717333674430847, "perplexity": 1733.891125722349}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267864546.30/warc/CC-MAIN-20180622143142-20180622163142-00258.warc.gz"}
http://www.maa.org/press/periodicals/convergence/gauss-titan-of-science
# Gauss: Titan of Science

Author(s): Jon Choate, reviewer

Gauss: Titan of Science, G. Waldo Dunnington (with additions by Jeremy Gray), 2004, reprint of 1955 edition, 537 pp., $51.95 (member price $41.50), cloth. ISBN 0-88385-547-X. Washington, DC: The Mathematical Association of America, 800-331-1622.

If you are interested in an in-depth look at how one of the world's most prolific mathematicians lived his life, then you will enjoy G. Waldo Dunnington's Gauss: Titan of Science. Gauss's life as a professor at the University of Göttingen is described in great detail, and one is left with a good look at what it was like to live and work in a prestigious German university during the first half of the nineteenth century, amidst the constant political turmoil in Europe at that time. Gauss was very much the applied mathematician, and Dunnington traces in some detail the genesis of several of his more important contributions: his work on least squares analysis, resulting from trying to make sense of data collected in attempts to find new planets; his work on geodesics, which came out of his completing a survey of the part of Germany where he lived; and his work in non-Euclidean geometry, which resulted from an early meeting with Nicolai Bolyai. Gauss did not travel much, but he did have extensive correspondence with many of the mathematical figures of his time, and where relevant, parts of these communications are given. The text does not cite much of the mathematics of Gauss's work because it tends to be very dense and written in Latin. However, an extensive overview of his work, providing a good account of all that Gauss accomplished, is included as an appendix at the end of the book. Because of the amount of detail, this is not an easy read, but if you are interested in who Gauss interacted with, where his ideas came from, and what it was like to live in his era, you will find reading this book both rewarding and informative.

Jon Choate, Mathematics Dept., Groton School, Groton, MA

Jon Choate, reviewer, "Gauss: Titan of Science," Convergence (July 2007)
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4815981686115265, "perplexity": 951.6483021044173}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443737913406.61/warc/CC-MAIN-20151001221833-00037-ip-10-137-6-227.ec2.internal.warc.gz"}
http://crypto.stackexchange.com/questions/6541/is-it-safe-for-the-chacha8-nonce-to-be-deterministic/6544
# Is it safe for the ChaCha8 nonce to be deterministic? ChaCha8 takes an 8-byte nonce (or IV) that must not repeat for the same key. Generating this nonce randomly makes me very nervous about collisions. Is it safe to generate this nonce deterministically? For example, the first 6 bytes could be the number of milliseconds since the Unix epoch, with the last 2 bytes being an unsigned counter that wraps around. This will prevent any collisions from happening for roughly 9000 years, as long as no more than 65535 streams get encrypted in a millisecond (things get more complicated if the same encryption key is being used by multiple machines/threads, but let's ignore that for now). - Use a larger nonce. See XSalsa20. Will post more tomorrow. – CodesInChaos Mar 1 '13 at 22:47 You could expand the nonce size to 192 bits, similar to how XSalsa20 expands the nonce size of Salsa. – CodesInChaos Jun 1 at 17:19 @CodesInChaos "tomorrow" :P – orlp Jun 1 at 17:43 Procrastination... – CodesInChaos Jun 1 at 17:46 Yes, it is safe. The only requirement for the nonce in Salsa/ChaCha is that it be unique; being predictable is not an issue, so a counter is fine. As CodesInChaos indicated, I believe extending the XSalsa20 construction to an "XChaCha20" would also work if you want a larger nonce, but I have nothing concrete, so I will leave the details to him/her. - Looking at the reference implementation, it looks like the IV is 64 bytes, not 8. In ECRYPT_ivsetup, it's a pointer to uint8_t, which is then treated as a pair of quartets (i.e., 8) of 8-byte values. - You're misunderstanding something. The input to the permutation is 64 bytes, but that includes the key, stream-id (nonce), offset, and a constant. – CodesInChaos Mar 1 '13 at 22:40 And ChaCha operates on 16 words of 4 bytes each. The IV setup sets the offset to 0 and sets the 8-byte nonce. – CodesInChaos Mar 1 '13 at 23:07 Apologies. I wasn't able to find any real documentation on ChaCha, so I went to the source. I've obviously misread something in the implementation. – Stephen Touset Mar 1 '13 at 23:42 While ChaCha doesn't have much documentation, you can look at Salsa20's documentation. They're almost identical. – CodesInChaos Mar 2 '13 at 10:41
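To make the questioner's scheme concrete, here is a minimal Python sketch of a 48-bit-millisecond-plus-16-bit-counter nonce. The function name and structure are illustrative assumptions (this is not from any ChaCha library), and it is single-threaded by construction:

```python
import time
import itertools

_counter = itertools.count()  # not thread-safe; assumes a single thread

def make_nonce():
    """Build an 8-byte deterministic nonce: 48-bit ms timestamp + 16-bit counter.

    Hypothetical helper mirroring the scheme in the question. Collisions are
    avoided as long as fewer than 2**16 nonces are drawn per millisecond and
    the key is not shared across machines/threads.
    """
    ms = int(time.time() * 1000) & ((1 << 48) - 1)  # low 48 bits of Unix ms
    ctr = next(_counter) & 0xFFFF                   # 16-bit wrapping counter
    return ms.to_bytes(6, "big") + ctr.to_bytes(2, "big")

print(make_nonce().hex())
```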
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2547801434993744, "perplexity": 2644.521998561325}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257824995.51/warc/CC-MAIN-20160723071024-00000-ip-10-185-27-174.ec2.internal.warc.gz"}
http://mathhelpforum.com/differential-equations/193253-homogeneous-ode-print.html
# Homogeneous ODE
• December 2nd 2011, 11:17 AM losm1 Homogeneous ODE The equation is $xy' = y \ln \frac{y}{x}$. I have tried to substitute $v = \frac{y}{x}$ and I am stuck at $\frac{1}{dx}x = v (\ln v - 1)\frac{1}{dv}$. To be exact, I do not know where to go from there in a calculus sense.
• December 2nd 2011, 12:07 PM Darkprince Re: Homogeneous ODE So you have $\frac{1}{x}dx = \frac{1}{v\ln v - v}\,dv = \frac{1}{v(\ln v - 1)}\,dv$. Now the integral of $\frac{1}{v(\ln v - 1)}$ with respect to $v$ is $\ln(\ln v - 1)$. Then proceed accordingly, hope I helped :)
• December 2nd 2011, 12:08 PM alexmahone Re: Homogeneous ODE Quote: Originally Posted by losm1 The equation is $xy' = y \ln \frac{y}{x}$. I have tried to substitute $v = \frac{y}{x}$ and I am stuck at $\frac{1}{dx}x = v (\ln v - 1)\frac{1}{dv}$. To be exact, I do not know where to go from there in a calculus sense. $\int\frac{dv}{v(\ln v-1)}=\ln (\ln v-1)+C$
• December 3rd 2011, 03:19 AM losm1 Re: Homogeneous ODE In my example the differentials dx and dv are in the denominator: $\frac{1}{dx}x = v (\ln v - 1)\frac{1}{dv}$. I'm having trouble applying your formula in this case. Can you please clarify further?
• December 3rd 2011, 04:13 AM Prove It Re: Homogeneous ODE Quote: Originally Posted by losm1 The equation is $xy' = y \ln \frac{y}{x}$. I have tried to substitute $v = \frac{y}{x}$ and I am stuck at $\frac{1}{dx}x = v (\ln v - 1)\frac{1}{dv}$. To be exact, I do not know where to go from there in a calculus sense. Make the substitution \displaystyle \begin{align*} v = \frac{y}{x} \implies y = v\,x \implies \frac{dy}{dx} = v + x\,\frac{dv}{dx} \end{align*} and the DE becomes \displaystyle \begin{align*} x\,\frac{dy}{dx} &= y\ln{\left(\frac{y}{x}\right)} \\ x\left(v + x\,\frac{dv}{dx}\right) &= v\,x\ln{v} \\ v + x\,\frac{dv}{dx} &= v\ln{v} \\ x\,\frac{dv}{dx} &= v\ln{v} - v \\ x\,\frac{dv}{dx} &= v\left(\ln{v} - 1\right) \\ \frac{1}{v\left(\ln{v} - 1\right)}\,\frac{dv}{dx} &= \frac{1}{x} \\ \int{\frac{1}{v\left(\ln{v} - 1\right)}\,\frac{dv}{dx}\,dx} &= \int{\frac{1}{x}\,dx} \\ \int{\frac{1}{\ln{v} - 1}\,\frac{1}{v}\,dv} &= \ln{|x|} + C_1 \\ \int{\frac{1}{u}\,du} &= \ln{|x|} + C_1 \textrm{ after making the substitution }u = \ln{v} - 1 \implies du = \frac{1}{v}\,dv \\ \ln{|u|} + C_2 &= \ln{|x|} + C_1 \\ \ln{|u|} - \ln{|x|} &= C \textrm{ where }C = C_1 - C_2 \\ \ln{\left|\frac{u}{x}\right|} &= C \\ \ln{\left|\frac{\ln{v} - 1}{x}\right|} &= C \\ \ln{\left|\frac{\ln{\left(\frac{y}{x}\right)} - 1}{x}\right|} &= C \\ \frac{\ln{\left(\frac{y}{x}\right)} - 1}{x} &= A \textrm{ where } A = \pm e^C\end{align*} You could get y in terms of x if you wanted to.
• December 3rd 2011, 05:17 AM Darkprince Re: Homogeneous ODE Quote: Originally Posted by losm1 In my example the differentials dx and dv are in the denominator: $\frac{1}{dx}x = v (\ln v - 1)\frac{1}{dv}$. I'm having trouble applying your formula in this case. Can you please clarify further? So you have $\frac{x}{dx} = \frac{v(\ln v - 1)}{dv}$, which implies $x\,dv = v(\ln v - 1)\,dx$, which implies $\frac{1}{x}dx = \frac{1}{v(\ln v - 1)}\,dv$.
• December 3rd 2011, 09:47 AM tom@ballooncalculus Re: Homogeneous ODE Yes, that is the idea. However, just in case an overview helps... http://www.ballooncalculus.org/draw/double/five.png http://www.ballooncalculus.org/draw/double/fivea.png http://www.ballooncalculus.org/draw/double/fiveb.png ... where (key in spoiler) ... Spoiler: http://www.ballooncalculus.org/asy/chain.png ... is the chain rule.
Straight continuous lines differentiate downwards (integrate up) with respect to the main variable (in this case x), and the straight dashed line does the same but with respect to the dashed balloon expression (the inner function of the composite, which is subject to the chain rule). Don't integrate - balloontegrate! Balloon Calculus; standard integrals, derivatives and methods Balloon Calculus Drawing with LaTeX and Asymptote!
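As a cross-check on the closed-form answer derived above, here is a short SymPy sketch; it assumes SymPy's dsolve can handle this first-order homogeneous ODE (it normally can), and the symbol names are arbitrary:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# The ODE from the thread: x y' = y ln(y/x)
ode = sp.Eq(x * y(x).diff(x), y(x) * sp.log(y(x) / x))
print(sp.dsolve(ode, y(x)))  # expected: something equivalent to y = x*exp(C1*x + 1)

# Independent check: y = x*exp(A*x + 1) satisfies the ODE for any constant A,
# which matches (ln(y/x) - 1)/x = A from the derivation above.
A = sp.symbols('A', real=True)
cand = x * sp.exp(A * x + 1)
residual = sp.simplify(x * cand.diff(x) - cand * sp.log(cand / x))
print(residual)  # should print 0
```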
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 14, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000098943710327, "perplexity": 4672.260195300459}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-07/segments/1454701157012.30/warc/CC-MAIN-20160205193917-00111-ip-10-236-182-209.ec2.internal.warc.gz"}
https://kluedo.ub.uni-kl.de/frontdoor/index/index/docId/1535
## Theory and Numerics of Open System Thermodynamics
• A general framework for the thermodynamics of open systems is developed in both the spatial and the material setting. Special emphasis is placed on the balance of mass, which is enhanced by additional source and flux terms. Different solution strategies within the finite element technique are derived and compared. A number of numerical examples illustrate the features of the proposed approach.
• German title: Theorie und Numerik der Thermodynamik Offener Systeme
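For orientation, the "enhanced" balance of mass mentioned in the abstract typically takes the following form in open-system thermodynamics; the notation here (reference density ρ₀, mass flux R, mass source R₀) is a common convention assumed for illustration, not quoted from the thesis:

```latex
% Material-setting balance of mass for an open system: the rate of change of
% the reference density is balanced by the divergence of an extra mass flux R
% plus a local mass source R0 (both terms vanish for a classical closed system).
\dot{\rho}_0 \;=\; \operatorname{Div}\mathbf{R} \;+\; R_0
```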
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9111805558204651, "perplexity": 1112.1962775169504}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-30/segments/1469257823802.12/warc/CC-MAIN-20160723071023-00091-ip-10-185-27-174.ec2.internal.warc.gz"}
https://www.arxiv-vanity.com/papers/hep-lat/9411067/
###### Abstract

We study the valence approximation in lattice QCD of hadrons, where the cloud quarks and antiquarks are deleted by truncating the backward time propagation (Z graphs) in the connected insertions, whereas the sea quarks are eliminated via the quenched approximation and in the disconnected insertions. It is shown that the ratios of isovector to isoscalar matrix elements in the nucleon reproduce the SU(6) quark model predictions in a lattice QCD calculation. We also discuss how the hadron masses are affected. UK/94-03 Oct. 1994 hep-lat/9411067

Quark Model From Lattice QCD

Keh-Fei Liu* and Shao-Jing Dong, Dept. of Physics and Astronomy, Univ. of Kentucky, Lexington, KY 40506 (*Talk presented at Int. Conf. High Energy Phys., Glasgow, July 1994)

## 1 Introduction

In addition to its classification scheme, the quark model is, by and large, quite successful in delineating the spectrum and structure of mesons and baryons. One often wonders what the nature of the approximation is, especially in view of the advent of quantum chromodynamics (QCD). In order to answer this question, we need to understand first where the quark model is successful and where it fails. To begin with, we need to define what we mean by the quark model. We consider the simplest approach, which includes the following ingredients:

• The Fock space is restricted to the valence quarks only.
• These valence quarks, be they dressed constituent quarks or bare quarks, are confined in a potential or a bag. To this zeroth order, the hadron wavefunctions involving u, d, and s quarks are classified by wavefunctions that are totally symmetric in the flavor-spin and orbital space according to SU(6), and totally antisymmetric/symmetric in the color space for the baryons/mesons.
• The degeneracy within the multiplets is lifted by the different quark masses and by a residual interaction between the quarks which is weak compared to the confining potential. The one-gluon exchange potential is usually taken as this residual interaction to describe the hyperfine and fine splittings of the hadron masses.

Given what we mean by the quark model, it is easier to understand where the quark model succeeds and fails. It is successful in describing hadron masses, relations among coupling and decay constants, magnetic moments, the Okubo-Zweig rule, etc. It is worth noting that all of these are based on the valence picture, aided by the SU(6) group structure of the flavor-spin and orbital wavefunctions. On the other hand, it fails to account for the U(1) anomaly (the η′ mass), the proton spin crisis, and the πN σ term. All of these problems involve large contributions from disconnected insertions involving sea quarks [1]. It is natural not to expect the valence quark model to work there. There are other places where the valence quark model does not work well. These include ππ and πN scatterings, current algebra relations, and the form factors of the nucleon, which are better described by meson effective theories with chiral symmetry taken into account. For example, ππ scattering is well described in chiral perturbation theory, while πN scattering, the nucleon electromagnetic, axial, and pseudoscalar form factors (especially the neutron charge radius), and the Goldberger-Treiman relation are all quite well given in the skyrmion approach [2]. One common theme of these models is chiral symmetry, which involves the meson cloud and hence higher Fock spaces beyond the valence one.

## 2 Valence Approximation

It is then clear that there are three ingredients in the classification of the quarks, i.e.
the valence, the cloud, and the sea quarks. The question is how one defines them unambiguously and in a model-independent way in QCD. It has been shown recently [3] that in evaluating the hadronic tensor in deep inelastic scattering, the three topologically distinct contractions of quark fields lead to three quark-line skeleton diagrams. The self-contraction of the current leading to a quark loop is separated from the quark lines joining the nucleon interpolating fields. This disconnected insertion (D.I.) refers to quark lines which are, of course, still connected by gluon lines. The D.I. defines the sea parton. One class of connected insertion (C.I.) involves an antiquark propagating backwards in time between the currents; this defines the cloud antiquark. Another class of C.I. involves a quark propagating forward in time between the currents; this is defined to be the sum of the valence and cloud quarks. Thus, in the parton model, the antiquark distribution should be written as

$$\bar{q}^i(x) = \bar{q}^i_c(x) + \bar{q}^i_s(x) \qquad (1)$$

to denote the respective origins for each flavor i. Similarly, the quark distribution is written as

$$q^i(x) = q^i_V(x) + q^i_c(x) + q^i_s(x) \qquad (2)$$

Since the cloud and sea quarks are produced in pairs with the corresponding antiquarks, we define the valence distribution $q^i_V(x)$ so that it alone is responsible for the baryon number, i.e. $\int dx\, u_V(x) = 2$ and $\int dx\, d_V(x) = 1$ for the proton.

We can reveal the role of these quarks in the nucleon matrix elements, which involve the three-point function with one current insertion. The D.I. in the three-point function gives the sea-quark contribution to the matrix element. It has been shown that this diagram indeed makes large contributions to the flavor-singlet scalar and axial charges [4], so that the discrepancy between the valence quark model and experiment in the πN σ term and the flavor-singlet axial charge can be understood. Thus we conclude that, in order to simulate the valence quark model, the first step is to eliminate the quark loops. This can be done in the well-known quenched approximation by setting the fermion determinant to a constant.

In order to reveal the effect of the cloud degree of freedom, we have calculated the ratios of the isoscalar to isovector axial and scalar charges in a quenched lattice calculation. The ratio of the isoscalar (the C.I. part) to isovector axial charge can be written as

$$R_A = \frac{\langle p|\bar{u}\gamma_\mu\gamma_5 u + \bar{d}\gamma_\mu\gamma_5 d|p\rangle}{\langle p|\bar{u}\gamma_\mu\gamma_5 u - \bar{d}\gamma_\mu\gamma_5 d|p\rangle}\bigg|_{\text{C.I.}} = \frac{\Delta u + \Delta d}{\Delta u - \Delta d}\bigg|_{\text{C.I.}} \qquad (3)$$

where $\Delta u(x)$ ($\Delta d(x)$) is the polarized parton distribution of the u (d) quark and antiquark in the C.I. In the non-relativistic case, the isovector axial charge $\Delta u - \Delta d$ is 5/3 and the isoscalar charge $\Delta u + \Delta d$ for the C.I. is 1. Thus, the ratio should be 3/5. Our lattice results, based on quenched lattices with the Wilson hopping parameter κ ranging between 0.154 and 0.105 — corresponding to quark masses between the strange and about twice the charm mass — are plotted in Fig. 1 as a function of the quark mass ma. We indeed find this ratio to be 3/5 for the heavy quarks in Fig. 1. This is to be expected, because the cloud antiquarks, which involve Z-graphs, are suppressed for heavy, non-relativistic quarks. Interestingly, the ratio dips under 3/5 for light quarks. We interpret this as due to the cloud quarks and antiquarks, since in relativistic valence quark models (i.e. with no cloud or sea quarks) the ratio remains 3/5.

To verify that this is indeed caused by the cloud antiquarks from the backward time propagation, we perform the following approximation. In the Wilson lattice action, the backward time hopping is prescribed by the term involving $(1 + \gamma_4)$. We amputate this term from the quark matrix in our calculation of the quark propagators. As a result, the quarks are limited to propagating forward in time, so there are no Z-graphs and hence no cloud quarks and antiquarks. The Fock space is limited to three valence quarks.
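To make the amputation concrete, here is a schematic Python/NumPy sketch of the temporal part of a Wilson hopping term with a switch that drops the backward-time piece. The function, the scalar stand-ins for the SU(3) links, and the chiral-basis γ₄ are illustrative assumptions for a toy model, not the authors' code, and sign/normalization conventions vary between implementations:

```python
import numpy as np

# Euclidean gamma_4 in a common chiral basis; only its projector structure matters.
gamma4 = np.diag([1, 1, -1, -1]).astype(complex)
I4 = np.eye(4, dtype=complex)

def hop_time(psi_fwd, psi_bwd, U4_here, U4_back, kappa, valence=False):
    """Temporal Wilson hopping applied to a 4-spinor at one site (toy model).

    psi_fwd / psi_bwd: spinors on the forward/backward time neighbours.
    U4_here / U4_back: temporal gauge links, reduced to complex scalars here
                       (a full implementation would use 3x3 SU(3) matrices).
    valence=True amputates the (1 + gamma4) backward-time hopping, so quarks
    can only propagate forward in time: no Z-graphs, hence no cloud
    quarks/antiquarks -- the "valence approximation" described above.
    """
    out = kappa * U4_here * (I4 - gamma4) @ psi_fwd       # forward-time hopping
    if not valence:
        out += kappa * U4_back * (I4 + gamma4) @ psi_bwd  # backward-time hopping
    return out

# Tiny usage example with random data:
rng = np.random.default_rng(0)
psi_f, psi_b = rng.normal(size=4) + 0j, rng.normal(size=4) + 0j
print(hop_time(psi_f, psi_b, 1.0, 1.0, kappa=0.154))
print(hop_time(psi_f, psi_b, 1.0, 1.0, kappa=0.154, valence=True))
```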
Thus we shall refer to this as the valence approximation, and we believe it simulates what the naive quark model is supposed to describe by design. After making this valence approximation for the light quarks (κ near 0.154; the quark mass turns out to differ from before only at the perturbative one-loop level, which is very small), we find that the ratio becomes 3/5, with errors less than the size of the circles in Fig. 1. Since the valence quark model prediction is well reproduced by the valence approximation, we believe this proves our point that the deviation of the ratio from 3/5 in Fig. 1 is caused by the backward time propagation, i.e. the cloud quarks and antiquarks.

A similar situation occurs in the scalar matrix elements. In the parton model description of the forward matrix element, the ratio of the isovector to isoscalar scalar charge of the proton for the C.I. is approximated, according to eqs. (1) and (2), as

$$R_S = \frac{\langle p|\bar{u}u - \bar{d}d|p\rangle}{\langle p|\bar{u}u + \bar{d}d|p\rangle}\bigg|_{\text{C.I.}} = \frac{1 + 2\int dx\,[\bar{u}_c(x) - \bar{d}_c(x)]}{3 + 2\int dx\,[\bar{u}_c(x) + \bar{d}_c(x)]} \qquad (4)$$

Since the quark/antiquark numbers are positive definite, this ratio is bounded. For heavy quarks, where the cloud antiquarks are suppressed, the ratio is indeed 1/3 (see Fig. 2). For lighter quarks we find that the ratio is in fact less than 1/3. The lattice results of the valence approximation for the light quarks, shown as the circles in Fig. 2, turn out to be 1/3. This shows that the deviation of $R_S$ from 1/3 is caused by the cloud quarks and antiquarks. With these findings, we obtain an upper bound for the violation of the Gottfried sum rule [3]; this clearly shows that $\int dx\,[\bar{u}(x) - \bar{d}(x)]$ is negative, quite consistent with the experimental result.

To further explore the consequences of the valence approximation, we calculate the baryon masses. Plotted in Fig. 3 are the masses of the N, Δ, ρ, and π as functions of the quark mass on our lattice in the quenched approximation. We see that the hyperfine splittings between the Δ and the N, and between the ρ and the π, grow as the quark mass approaches the chiral limit, as expected. However, it is surprising to learn that in the valence approximation the Δ and the N become degenerate within errors, as do the ρ and the π, as shown in Fig. 4. Since one-gluon exchange is not switched off in the valence approximation, the hyperfine splitting is probably not due to the one-gluon exchange potential, as commonly believed. Since the degeneracy is a direct consequence of eliminating the cloud quark/antiquark degree of freedom, one can speculate that the splitting has something to do with the cloud. It seems that a chiral soliton like the skyrmion might delineate a more accurate dynamical picture than the one-gluon-exchange spin-spin interaction.

To conclude, we find that the valence approximation in QCD reproduces the SU(6) results of the valence quark model better than we anticipated. Especially in the hadron masses, the results seem to indicate that there are no hyperfine splittings, modulo the uncertainty due to statistical and systematic errors.

## References

Figure Captions:

Fig. 1 The ratio $R_A$ of eq. (3) as a function of the quark mass ma.

Fig. 2 The ratio $R_S$ of eq. (4) as a function of the quark mass ma.

Fig. 3 Masses of the N, Δ, ρ, and π (in lattice units) as a function of the quark mass ma in the quenched approximation.

Fig. 4 The same as in Fig. 3 with the valence approximation.
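For reference, the 3/5 and 1/3 benchmarks quoted above follow from elementary SU(6) quark-model bookkeeping; the values below are standard textbook results, restated here for convenience rather than taken from the paper's lattice data:

```latex
% Non-relativistic SU(6) proton values (valence quarks only):
%   \Delta u = 4/3, \quad \Delta d = -1/3
R_A = \frac{\Delta u + \Delta d}{\Delta u - \Delta d}
    = \frac{4/3 - 1/3}{4/3 + 1/3} = \frac{3}{5},
%
% and with the cloud antiquarks switched off (\bar{u}_c = \bar{d}_c = 0),
% eq. (4) reduces to
R_S = \frac{1 + 0}{3 + 0} = \frac{1}{3}.
```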
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9705812335014343, "perplexity": 630.6297061068404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323585382.32/warc/CC-MAIN-20211021071407-20211021101407-00650.warc.gz"}