https://en.wikibooks.org/wiki/Solutions_To_Mathematics_Textbooks/Principles_of_Mathematical_Analysis_(3rd_edition)_(ISBN_0070856133)/Chapter_6
# Chapter 6
## 7
### a
By page 121 we know that ${\displaystyle f}$ must be bounded, say ${\displaystyle |f|\leq M}$. We need to show that given ${\displaystyle \epsilon >0}$ we can find some ${\displaystyle c}$ such that ${\displaystyle \int _{c}^{1}f(x)dx\in B_{\epsilon }\left(\int _{0}^{1}f(x)dx\right).}$ By Theorem 6.12 (c) we have ${\displaystyle \int _{0}^{1}f(x)dx=\int _{0}^{c}f(x)dx+\int _{c}^{1}f(x)dx}$ and ${\displaystyle \left|\int _{0}^{c}f(x)dx\right|\leq M\cdot c}$.
Hence, ${\displaystyle \left|\int _{0}^{1}f(x)dx-\int _{c}^{1}f(x)dx\right|\leq M\cdot c}$. Since ${\displaystyle M}$ is fixed and we may take any ${\displaystyle c>0}$, choosing ${\displaystyle c={\frac {\epsilon }{2M}}}$ yields ${\displaystyle \left|\int _{0}^{1}f(x)dx-\int _{c}^{1}f(x)dx\right|\leq {\frac {\epsilon }{2}}<\epsilon .}$ So, given ${\displaystyle \epsilon }$ we can always choose a ${\displaystyle c}$ such that ${\displaystyle \int _{c}^{1}f(x)dx\in B_{\epsilon }\left(\int _{0}^{1}f(x)dx\right)}$ as desired.
### b
Consider the function defined to be ${\displaystyle n(-1)^{n}}$ on the ${\displaystyle n}$th subinterval of ${\displaystyle [0,1]}$ of length ${\displaystyle 6/(n^{2}\pi ^{2})}$, counted from the right, and zero at the endpoints ${\displaystyle p_{n}}$ of these subintervals. This function is well defined, since we know that ${\displaystyle \sum _{n=1}^{\infty }6/(n^{2}\pi ^{2})=1}$.
More specifically, the function has value ${\displaystyle n(-1)^{n}}$ on the open interval ${\displaystyle (p_{n+1},p_{n})}$, where ${\displaystyle p_{n}=1-\sum _{m=1}^{n-1}6/(m^{2}\pi ^{2})}$ and ${\displaystyle p_{n+1}=1-\sum _{m=1}^{n}6/(m^{2}\pi ^{2})}$.
First we evaluate the integral of the function itself. Consider a partitioning of the interval ${\displaystyle [0,1]}$ at each ${\displaystyle p_{n}\pm \epsilon }$ for some ${\displaystyle \epsilon >0}$
Then, the lower and upper sums corresponding to the intervals of the partition from ${\displaystyle p_{n+1}+\epsilon }$ to ${\displaystyle p_{n}-\epsilon }$ are the same, since the function is constant valued on these intervals. Moreover, as ${\displaystyle \epsilon \to 0}$ the value of the upper and lower sums both approach ${\displaystyle n(-1)^{n}(p_{n}-p_{n+1})=n(-1)^{n}\cdot 6/(n^{2}\pi ^{2})}$.
Thus we can express the value of the integral as the sum of the series ${\displaystyle \sum _{n=1}^{\infty }\left({\frac {6}{n^{2}\pi ^{2}}}\right)n(-1)^{n}=\sum _{n=1}^{\infty }\left({\frac {(-1)^{n}6}{n\pi ^{2}}}\right)}$ ${\displaystyle ={\frac {6}{\pi ^{2}}}\sum _{n=1}^{\infty }\left({\frac {(-1)^{n}}{n}}\right)}$ but we recognize this sum as just a constant multiple of the alternating harmonic series. Hence, the integral converges.
Now we examine the integral of the absolute value of the function. We argue similarly to the above, again partitioning the function at ${\displaystyle p_{n}\pm \epsilon }$ as defined above. The difference is that now, as we let ${\displaystyle \epsilon \to 0}$ the upper and lower sums both go to ${\displaystyle \sum _{n=1}^{\infty }\left({\frac {6}{n^{2}\pi ^{2}}}\right)n=\sum _{n=1}^{\infty }\left({\frac {6}{n\pi ^{2}}}\right)}$ ${\displaystyle ={\frac {6}{\pi ^{2}}}\sum _{n=1}^{\infty }{\frac {1}{n}}}$ and so the integral does not exist, as this is the harmonic series, which does not converge.
In the above proof of divergence the important point is that the lower sums diverge. The fact that the upper sums diverge is an immediate consequence of this.
So, we have demonstrated a function whose integral converges, but does not converge absolutely as desired.
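As a numerical sanity check (an illustration, not part of the solution proper), a short Python sketch shows the same contrast: the partial sums of the signed series settle down, while the partial sums for the integral of the absolute value grow without bound.

```python
import math

# Partial sums for the signed integral (alternating) and the integral of |f|.
# The nth subinterval has length 6/(n^2*pi^2) and function value n*(-1)^n,
# so each subinterval contributes (-1)^n * 6/(n*pi^2) to the integral.
N = 100_000
signed = sum((-1) ** n * 6 / (n * math.pi**2) for n in range(1, N + 1))
absolute = sum(6 / (n * math.pi**2) for n in range(1, N + 1))

print(signed)    # settles near -6*ln(2)/pi^2
print(absolute)  # grows like (6/pi^2)*ln(N); no limit as N increases
```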
## 8
We begin by showing (${\displaystyle \Rightarrow }$) that ${\displaystyle \int _{1}^{\infty }f(x)dx}$ converges if ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ converges.
So, we assume to start that ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ converges. Now consider the partition ${\displaystyle P=\{p_{n}\ |\ p_{n}=n,n\in \mathbb {N} \}}$. Since ${\displaystyle f(x)}$ decreases monotonically it must be that ${\displaystyle \inf f([p_{n},p_{n+1}])=f(p_{n+1})}$ and similarly that ${\displaystyle \sup f([p_{n},p_{n+1}])=f(p_{n})}$. Thus, the integral which we are trying to evaluate is bounded above by ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ and below by ${\displaystyle \sum _{n=2}^{\infty }f(n)}$.
Now we observe that ${\displaystyle \int _{1}^{\infty }f(x)dx}$ may be written as a sum over the domain as ${\displaystyle \sum _{n=1}^{\infty }\left(\int _{p_{n}}^{p_{n+1}}f(x)dx\right).}$ We know moreover that each of these integrals exists, by Theorem 6.9. Also, since ${\displaystyle f(x)}$ is always nonnegative each such integral must be nonnegative. Therefore, the integral may be expressed as the sum of a nonnegative series which is bounded above. Hence, by Theorem 3.24 the integral exists.
Now we prove (${\displaystyle \Leftarrow }$) that if ${\displaystyle \int _{1}^{\infty }f(x)dx}$ converges then ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ converges.
So assume now that ${\displaystyle \int _{1}^{\infty }f(x)dx}$ converges. Then we can prove that the summation ${\displaystyle \sum _{n=1}^{\infty }f(n)}$ satisfies the Cauchy criterion. We established above that ${\displaystyle \int _{k}^{\infty }f(x)dx}$ is bounded above by ${\displaystyle \sum _{n=k}^{\infty }f(n)}$ and below by ${\displaystyle \sum _{n=k+1}^{\infty }f(n)}$. In particular, the tail ${\displaystyle \sum _{n=k+1}^{\infty }f(n)}$ is bounded above by the integral ${\displaystyle \int _{k}^{\infty }f(x)dx}$. Moreover, since the integral ${\displaystyle \int _{1}^{\infty }f(x)dx}$ exists and ${\displaystyle f}$ is nonnegative, given ${\displaystyle \epsilon >0}$ there exists ${\displaystyle M}$ such that ${\displaystyle \int _{M}^{\infty }f(x)dx<\epsilon }$; otherwise the integral would not exist and would instead tend to infinity.
So now we can apply the Cauchy criterion for series: given ${\displaystyle \epsilon >0}$ there exists ${\displaystyle M}$ such that ${\displaystyle \sum _{n=M+1}^{\infty }f(n)\leq \int _{M}^{\infty }f(x)dx<\epsilon }$, so the tails of the series tend to zero.
Thus, the sum converges as desired.
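The bounds used in this argument can be spot-checked numerically; the Python sketch below (a check only, not part of the solution) uses the monotonically decreasing example ${\displaystyle f(x)=1/x^{2}}$ on a finite range ${\displaystyle [1,N]}$.

```python
# Bounds from the unit-spaced partition, for decreasing f(x) = 1/x^2:
#   sum_{n=2}^{N} f(n)  <=  int_1^N f(x) dx  <=  sum_{n=1}^{N-1} f(n)
f = lambda x: 1.0 / x**2
N = 1000
integral = 1.0 - 1.0 / N  # exact value: int_1^N x^(-2) dx = 1 - 1/N
lower = sum(f(n) for n in range(2, N + 1))
upper = sum(f(n) for n in range(1, N))
assert lower <= integral <= upper
```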
## 10
### a
We will prove that if ${\displaystyle u\geq 0}$ and ${\displaystyle v\geq 0}$ then ${\displaystyle uv\leq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$, and that equality holds if and only if ${\displaystyle u^{p}=v^{q}}$. \begin{proof} We begin by proving the special case of equality.
Assume that ${\displaystyle u^{p}=v^{q}}$ ${\displaystyle \Leftrightarrow u=v^{q/p}}$ ${\displaystyle \Leftrightarrow vu=v^{q/p+1}}$ ${\displaystyle \Leftrightarrow vu=v^{q(1/p+1/q)}}$ ${\displaystyle \Leftrightarrow vu=v^{q}}$, using ${\displaystyle 1/p+1/q=1}$. (Similarly we can show that ${\displaystyle vu=u^{p}\Leftrightarrow u^{p}=v^{q}}$.) Thus, ${\displaystyle vu=v^{q}\Leftrightarrow u^{p}=v^{q}}$, and we see moreover that ${\displaystyle vu=v^{q}\Rightarrow uv={\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$ since in this case ${\displaystyle {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}=v^{q}\left({\frac {1}{p}}+{\frac {1}{q}}\right)=v^{q}=uv.\checkmark }$ Conversely, if it is not the case that ${\displaystyle vu=v^{q}}$ then ${\displaystyle uv\neq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$, since the derivative argument below shows the inequality is strict away from the equality point.
Now we show that as we vary ${\displaystyle u}$ we must always have ${\displaystyle uv\leq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$. Compute the derivative of ${\displaystyle uv}$ with respect to ${\displaystyle u}$, and the derivative of ${\displaystyle {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$ with respect to ${\displaystyle u}$. We get ${\displaystyle v}$ and ${\displaystyle u^{p-1}}$ respectively. If ${\displaystyle u^{p}=v^{q}}$ then these are equal, as demonstrated above (we showed that ${\displaystyle uv=u^{p}}$ in that case, i.e. ${\displaystyle v=u^{p-1}}$). In the case that ${\displaystyle u}$ is larger than this value we have ${\displaystyle u^{p-1}>v}$, and in the case that ${\displaystyle u}$ is less than this value we have ${\displaystyle u^{p-1}<v}$. Hence the difference ${\displaystyle {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}-uv}$ is decreasing for ${\displaystyle u}$ below the equality point and increasing above it, so it attains its minimum value, zero, exactly when ${\displaystyle u^{p}=v^{q}}$.
This argument can be repeated in an analogous manner for variations in ${\displaystyle v}$, and given any ${\displaystyle p}$ and ${\displaystyle q}$ we can find values for which ${\displaystyle u^{p}=v^{q}}$.
Thus, we observe that ${\displaystyle uv\leq {\frac {u^{p}}{p}}+{\frac {v^{q}}{q}}}$ as desired. \end{proof}
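Young's inequality is easy to spot-check numerically. The Python sketch below (an illustration, not part of the proof; the exponent ${\displaystyle p=3}$ and the sampling range are arbitrary choices) samples random nonnegative ${\displaystyle u,v}$ for one pair of conjugate exponents and also verifies the equality case ${\displaystyle u^{p}=v^{q}}$.

```python
import random

random.seed(0)
p = 3.0
q = p / (p - 1.0)  # conjugate exponent: 1/p + 1/q = 1
for _ in range(10_000):
    u, v = random.uniform(0.0, 10.0), random.uniform(0.0, 10.0)
    assert u * v <= u**p / p + v**q / q + 1e-9  # Young's inequality

# Equality case: choose v so that v^q = u^p, i.e. v = u^(p/q).
u = 2.0
v = u ** (p / q)
assert abs(u * v - (u**p / p + v**q / q)) < 1e-9
```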
### b
If ${\displaystyle f\in {\mathcal {R}}(\alpha )}$, ${\displaystyle g\in {\mathcal {R}}(\alpha )}$, ${\displaystyle f\geq 0}$, ${\displaystyle g\geq 0}$, and ${\displaystyle \int _{a}^{b}f^{p}d\alpha =1=\int _{a}^{b}g^{q}d\alpha ,}$ then ${\displaystyle \int _{a}^{b}fgd\alpha \leq 1}$ \begin{proof}
If ${\displaystyle 0\leq f\in {\mathcal {R}}(\alpha )}$ and ${\displaystyle 0\leq g\in {\mathcal {R}}(\alpha )}$ then ${\displaystyle f^{p}}$ and ${\displaystyle g^{q}}$ are in ${\displaystyle {\mathcal {R}}(\alpha )}$ by Theorem 6.11. Also, we have ${\displaystyle fg\in {\mathcal {R}}(\alpha )}$, so applying Part a pointwise with ${\displaystyle u=f(x)}$ and ${\displaystyle v=g(x)}$ and integrating gives ${\displaystyle \int _{a}^{b}fgd\alpha \leq {\frac {1}{p}}\int _{a}^{b}f^{p}d\alpha +{\frac {1}{q}}\int _{a}^{b}g^{q}d\alpha =1}$ as desired.\end{proof}
### c
We prove Hölder's inequality. \begin{proof} If ${\displaystyle f}$ and ${\displaystyle g}$ are complex valued then we have ${\displaystyle \left|\int _{a}^{b}fgd\alpha \right|\leq \int _{a}^{b}|f||g|d\alpha .}$
If ${\displaystyle \int _{a}^{b}|f|^{p}d\alpha \neq 0}$ and ${\displaystyle \int _{a}^{b}|g|^{q}d\alpha \neq 0}$ then applying the previous part to the functions ${\displaystyle |f|/c}$ and ${\displaystyle |g|/d}$, where ${\displaystyle c^{p}=\int _{a}^{b}|f|^{p}d\alpha }$ and ${\displaystyle d^{q}=\int _{a}^{b}|g|^{q}d\alpha }$, gives what we wanted to show:
${\displaystyle \left|\int _{a}^{b}fgd\alpha \right|\leq \left(\int _{a}^{b}|f|^{p}d\alpha \right)^{1/p}\left(\int _{a}^{b}|g|^{q}d\alpha \right)^{1/q}}$
However, if one of the above is zero (say, without loss of generality, ${\displaystyle \int _{a}^{b}|f|^{p}d\alpha =0}$) then by Part a applied pointwise to ${\displaystyle |f|}$ and ${\displaystyle c|g|}$ we have ${\displaystyle c\int _{a}^{b}|f||g|d\alpha =\int _{a}^{b}|f|(c|g|)d\alpha \leq {\frac {c^{q}}{q}}\int _{a}^{b}|g|^{q}d\alpha }$ for ${\displaystyle c>0}$. Dividing by ${\displaystyle c}$ and taking the limit ${\displaystyle c\to 0}$ (recall ${\displaystyle q>1}$) we observe that
${\displaystyle \int _{a}^{b}|f||g|d\alpha =0,}$
so the inequality holds trivially in this case.
\end{proof}
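As with Part a, the final inequality can be checked numerically. The sketch below (an illustration only; the test functions and the midpoint discretization are arbitrary choices) approximates the integrals for ${\displaystyle \alpha (x)=x}$ on ${\displaystyle [0,1]}$ with ${\displaystyle p=q=2}$, the Cauchy-Schwarz case of Hölder's inequality.

```python
import math

# Midpoint-rule approximation of the integrals in Holder's inequality:
#   |int f*g| <= (int |f|^p)^(1/p) * (int |g|^q)^(1/q)
f = lambda x: math.sin(3.0 * x)
g = lambda x: x**2 - 0.5
p, q = 2.0, 2.0  # conjugate pair: 1/p + 1/q = 1
n = 10_000
xs = [(k + 0.5) / n for k in range(n)]
integrate = lambda h: sum(h(x) for x in xs) / n

lhs = abs(integrate(lambda x: f(x) * g(x)))
rhs = integrate(lambda x: abs(f(x)) ** p) ** (1 / p) * \
      integrate(lambda x: abs(g(x)) ** q) ** (1 / q)
assert lhs <= rhs
```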
## 16
### a
We take the expression ${\displaystyle s\int _{1}^{\infty }{\frac {[x]}{x^{s+1}}}dx}$ and express it as a sum of integrals on the intervals ${\displaystyle (n,n+1)}$ to get ${\displaystyle s\left(\int _{1}^{2}{\frac {[x]}{x^{s+1}}}dx+\int _{2}^{3}{\frac {[x]}{x^{s+1}}}dx+\dots \right)}$ but since ${\displaystyle [x]}$ is constant on each such interval, we just write ${\displaystyle s\left(\int _{1}^{2}{\frac {1}{x^{s+1}}}dx+\int _{2}^{3}{\frac {2}{x^{s+1}}}dx+\dots \right).}$ (1)
Now we exploit the Fundamental Theorem of Calculus, computing ${\displaystyle \int _{n}^{n+1}{\frac {n}{x^{s+1}}}dx=n\left[-{\frac {x^{-s}}{s}}\right]_{n}^{n+1}=n\left(-{\frac {(n+1)^{-s}}{s}}+{\frac {n^{-s}}{s}}\right).}$ So, the summation in Equation 1 can, more explicitly be written as ${\displaystyle s\sum _{n=1}^{\infty }n\left(-{\frac {(n+1)^{-s}}{s}}+{\frac {n^{-s}}{s}}\right)=\sum _{n=1}^{\infty }\left({\frac {n}{n^{s}}}-{\frac {n}{(n+1)^{s}}}\right)}$ However, grouping common denominators, we observe that the sum partially telescopes to yield more simply ${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{s}}}=\zeta (s).}$
### b
Having now proved Part a it suffices to show that ${\displaystyle s\int _{1}^{\infty }{\frac {[x]}{x^{s+1}}}dx={\frac {s}{s-1}}-s\int _{1}^{\infty }{\frac {x-[x]}{x^{s+1}}}dx.}$
By the Fundamental Theorem of Calculus we have ${\displaystyle \int _{1}^{\infty }{\frac {1}{x^{s}}}dx={\frac {1}{s-1}}.}$ So \begin{eqnarray*} \int_1^\infty \frac{x}{x^{s+1}} dx&=&\frac{1}{s-1}\\ \Rightarrow s \int_1^\infty \frac{x}{x^{s+1}} dx&=&\frac{s}{s-1}\\ \Rightarrow s \int_1^\infty \left( \frac{x-[x]}{x^{s+1}} + \frac{[x]}{x^{s+1}} \right) dx&=&\frac{s}{s-1}\\ \Rightarrow s \int_1^\infty \frac{[x]}{x^{s+1}} dx &=&\frac{s}{s-1} - s \int_1^\infty \frac{x-[x]}{x^{s+1}} dx \end{eqnarray*} as desired.
It remains now to show that the integral ${\displaystyle \int _{1}^{\infty }{\frac {x-[x]}{x^{s+1}}}dx}$ converges.
Since for ${\displaystyle x\in (1,\infty )}$ we have ${\displaystyle 0\leq {\frac {x-[x]}{x^{s+1}}}\leq {\frac {1}{x^{s+1}}},}$ the comparison test shows that ${\displaystyle \int _{1}^{\infty }{\frac {x-[x]}{x^{s+1}}}dx}$ converges provided ${\displaystyle \int _{1}^{\infty }{\frac {1}{x^{s+1}}}dx}$ converges.
However, ${\displaystyle \int _{1}^{\infty }{\frac {1}{x^{s+1}}}dx}$ converges by the integral test (Problem 8), since we have already shown that the series ${\displaystyle \sum _{n=1}^{\infty }{\frac {1}{n^{s+1}}}}$ is convergent for ${\displaystyle s>1}$.
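Both parts lend themselves to a quick numerical check. The Python sketch below (not part of the solution) truncates the telescoping series from Part a and compares it with a partial sum for ${\displaystyle \zeta (s)}$ at ${\displaystyle s=2}$, where ${\displaystyle \zeta (2)=\pi ^{2}/6}$.

```python
import math

# Part a: s * int_1^inf [x]/x^(s+1) dx = zeta(s).  Each piece has the closed
# form int_n^(n+1) n/x^(s+1) dx = n*(n^(-s) - (n+1)^(-s))/s, so the truncated
# integral is the partial sum below.
s = 2.0
N = 200_000
integral = sum(n * (n**-s - (n + 1) ** -s) / s for n in range(1, N + 1))
zeta = sum(n**-s for n in range(1, N + 1))

print(s * integral, zeta, math.pi**2 / 6)  # all three agree to about 1e-5
```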
https://link.springer.com/article/10.1007%2Fs10590-013-9136-6
Machine Translation, Volume 27, Issue 2, pp 139–166
# Substring-based machine translation
• Graham Neubig
• Taro Watanabe
• Shinsuke Mori
• Tatsuya Kawahara
## Abstract
Machine translation is traditionally formulated as the transduction of strings of words from the source to the target language. As a result, additional lexical processing steps such as morphological analysis, transliteration, and tokenization are required to process the internal structure of words to help cope with data-sparsity issues that occur when simply dividing words according to white spaces. In this paper, we take a different approach: not dividing lexical processing and translation into two steps, but simply viewing translation as a single transduction between character strings in the source and target languages. In particular, we demonstrate that the key to achieving accuracies on a par with word-based translation in the character-based framework is the use of a many-to-many alignment strategy that can accurately capture correspondences between arbitrary substrings. We build on the alignment method proposed in Neubig et al. (Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics. Portland, Oregon, pp. 632–641, 2011), improving its efficiency and accuracy with a focus on character-based translation. Using a many-to-many aligner imbued with these improvements, we demonstrate that the traditional framework of phrase-based machine translation sees large gains in accuracy over character-based translation with more naive alignment methods, and achieves comparable results to word-based translation for two distant language pairs.
## Keywords
Character-based translation · Alignment · Inversion transduction grammar
## References
1. Abouelhoda MI, Kurtz S, Ohlebusch E (2004) Replacing suffix trees with enhanced suffix arrays. J Discret Algorithms 2(1):53–86
2. Al-Onaizan Y, Knight K (2002) Translating named entities using monolingual and bilingual resources. In: 40th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Philadelphia, pp 400–408
3. Bai MH, Chen KJ, Chang JS (2008) Improving word alignment by adjusting Chinese word segmentation. In: IJCNLP 2008, Proceedings of the 3rd International Joint Conference on Natural Language Processing, Hyderabad, pp 249–256
4. Blunsom P, Cohn T (2010) Inducing synchronous grammars with slice sampling. In: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Proceedings of the Main Conference, Los Angeles, pp 238–241
5. Blunsom P, Cohn T, Dyer C, Osborne M (2009) A Gibbs sampler for phrasal synchronous grammar induction. In: ACL-IJCNLP 2009, Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and 4th International Joint Conference on Natural Language Processing of the AFNLP. Proceedings of the Conference, Suntec, pp 782–790
6. Bojar O (2007) English-to-Czech factored machine translation. In: ACL 2007: Proceedings of the Second Workshop on Statistical Machine Translation, Prague, Czech Republic, pp 232–239
7. Brown PF, Della-Pietra VJ, Della-Pietra SA, Mercer RL (1993) The mathematics of statistical machine translation: parameter estimation. Comput Linguist 19:263–312
8. Brown RD (2002) Corpus-driven splitting of compound words. In: TMI-2002 Conference: Proceedings of the 9th International Conference on Theoretical and Methodological Issues in Machine Translation, Keihanna, pp 12–21
9. Chang PC, Galley M, Manning CD (2008) Optimizing Chinese word segmentation for machine translation performance. In: ACL-08: HLT: Third Workshop on Statistical Machine Translation. Proceedings of the Workshop, Columbus, pp 224–232
10. Chomsky N (1956) Three models for the description of language. IRE Trans Inf Theory 2(3):113–124
11. Chu C, Nakazawa T, Kawahara D, Kurohashi S (2012) Exploiting shared Chinese characters in Chinese word segmentation optimization for Chinese–Japanese machine translation. In: EAMT 2012, Proceedings of the 16th Annual Conference of the European Association for Machine Translation, Trento, pp 35–42
12. Chung T, Gildea D (2009) Unsupervised tokenization for machine translation. In: EMNLP 2009: Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, Singapore, pp 718–726
13. Corston-Oliver S, Gamon M (2004) Normalizing German and English inflectional morphology to improve statistical word alignment. In: Machine Translation: From Real Users to Research, 6th Conference of the Association for Machine Translation in the Americas, AMTA 2004, Washington, DC, pp 48–57
14. Cromières F (2006) Sub-sentential alignment using substring co-occurrence counts. In: COLING–ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics. Proceedings of the Student Research Workshop, Sydney, pp 13–18
15. DeNero J, Bouchard-Côté A, Klein D (2008) Sampling alignment structure under a Bayesian translation model. In: EMNLP 2008: 2008 Conference on Empirical Methods in Natural Language Processing. Proceedings of the Conference, Honolulu, pp 314–323
16. Denkowski M, Lavie A (2011) Meteor 1.3: automatic metric for reliable optimization and evaluation of machine translation systems. In: Proceedings of the 6th Workshop on Statistical Machine Translation (WMT), Edinburgh, pp 85–91
17. Denoual E, Lepage Y (2005) BLEU in characters: towards automatic MT evaluation in languages without word delimiters. In: Proceedings of the 2nd International Joint Conference on Natural Language Processing, IJCNLP-05, Jeju Island, pp 81–86
18. Finch A, Sumita E (2007) Phrase-based machine transliteration. In: Proceedings of the Workshop on Technologies and Corpora for Asia-Pacific Speech Translation (TCAST), Hyderabad, pp 13–18
19. Goldwater S, McClosky D (2005) Improving statistical MT through morphological analysis. In: HLT/EMNLP 2005: Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing. Proceedings of the Conference, Vancouver, British Columbia, pp 676–683
20. Haghighi A, Blitzer J, DeNero J, Klein D (2009) Better word alignments with supervised ITG models. In: ACL-IJCNLP 2009, Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and 4th International Joint Conference on Natural Language Processing of the AFNLP. Proceedings of the Conference, Suntec, pp 923–931
21. Jiang J, Ahmed Z, Carson-Berndsen J, Cahill P, Way A (2011) Phonetic representation-based speech translation. In: Proceedings of Machine Translation Summit XIII, Xiamen, pp 81–88
22. Karlsson F (1999) Finnish: an essential grammar. Routledge, London
23. Klein D, Manning CD (2003) A* parsing: fast exact Viterbi parse selection. In: HLT-NAACL 2003: Conference combining Human Language Technology conference series and the North American Chapter of the Association for Computational Linguistics conference series, Edmonton, pp 40–47
24. Kneser R, Ney H (1995) Improved backing-off for M-gram language modelling. In: International Conference on Acoustics, Speech, and Signal Processing (ICASSP), vol. 1, Detroit, pp 181–184
25. Knight K, Graehl J (1998) Machine transliteration. Comput Linguist 24(4):599–612
26. Koehn P (2005) Europarl: a parallel corpus for statistical machine translation. In: MT Summit X: The Tenth Machine Translation Summit, Phuket, pp 79–86
27. Koehn P, Axelrod A, Mayne AB, Callison-Burch C, Osborne M, Talbot D (2005) Edinburgh system description for the 2005 IWSLT speech translation evaluation. In: International Workshop on Spoken Language Translation: Evaluation Campaign on Spoken Language Translation [IWSLT 2005], Pittsburgh, 8pp [no page numbers]
28. Koehn P, Hoang H, Birch A, Callison-Burch C, Federico M, Bertoldi N, Cowan B, Shen W, Moran C, Zens R, Dyer C, Bojar O, Constantin A, Herbst E (2007) Moses: open source toolkit for statistical machine translation. In: ACL 2007: proceedings of demo and poster sessions, Prague, Czech Republic, pp 177–180
29. Koehn P, Och FJ, Marcu D (2003) Statistical phrase-based translation. In: HLT-NAACL 2003: conference combining Human Language Technology conference series and the North American Chapter of the Association for Computational Linguistics conference series, Edmonton, pp 48–54
30. Kondrak G, Marcu D, Knight K (2003) Cognates can improve statistical translation models. In: HLT-NAACL 2003: conference combining Human Language Technology conference series and the North American Chapter of the Association for Computational Linguistics conference series, Edmonton, pp 46–48
31. Lee YS (2004) Morphological analysis for statistical machine translation. In: HLT-NAACL 2004: Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics. Proceedings of the Main Conference, Boston, Massachusetts, pp 57–60
32. Levenberg A, Dyer C, Blunsom P (2012) A Bayesian model for learning SCFGs with discontiguous rules. In: Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), Jeju Island, pp 223–232
33. Li H, Zhang M, Su J (2004) A joint source-channel model for machine transliteration. In: Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), Barcelona, pp 159–166
34. Li M, Zong C, Ng HT (2011) Automatic evaluation of Chinese translation output: word-level or character-level? In: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics (ACL), Portland, pp 159–164
35. Liang P, Taskar B, Klein D (2006) Alignment by agreement. In: Proceedings of the 2006 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL), Montreal, pp 104–111
36. Liu C, Ng HT (2012) Character-level machine translation evaluation for languages with ambiguous word boundaries. In: [ACL 2012] Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Republic of Korea, pp 921–929
37. Macherey K, Dai A, Talbot D, Popat A, Och F (2011) Language-independent compound splitting with morphological operations. In: ACL-HLT 2011: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, pp 1395–1404
38. Marcu D, Wong W (2002) A phrase-based, joint probability model for statistical machine translation. In: Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, Philadelphia, pp 133–139
39. Nakov P, Tiedemann J (2012) Combining word-level and character-level models for machine translation between closely-related languages. In: [ACL 2012] Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Jeju, Republic of Korea, pp 301–305
40. Naradowsky J, Toutanova K (2011) Unsupervised bilingual morpheme segmentation and alignment with context-rich hidden semi-Markov models. In: ACL-HLT 2011: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, pp 895–904
41. Neubig G (2011) The Kyoto free translation task. http://www.phontron.com/kftt. Accessed 16 May 2011
42. Neubig G, Watanabe T, Mori S, Kawahara T (2012) Machine translation without words through substring alignment. In: [ACL 2012] Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, Republic of Korea, pp 165–174
43. Neubig G, Watanabe T, Sumita E, Mori S, Kawahara T (2011) An unsupervised model for joint phrase alignment and extraction. In: ACL-HLT 2011: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, pp 632–641
44. Nguyen T, Vogel S, Smith NA (2010) Nonparametric word segmentation for machine translation. In: Coling 2010, 23rd International Conference on Computational Linguistics. Proceedings of the Conference, Beijing, pp 815–823
45. Nießen S, Ney H (2000) Improving SMT quality with morpho-syntactic analysis. In: The 18th International Conference on Computational Linguistics, COLING 2000 in Europe. Proceedings of the Conference, Saarbrücken, pp 1081–1085
46. Och FJ (2003) Minimum error rate training in statistical machine translation. In: ACL-2003: 41st Annual Meeting of the Association for Computational Linguistics, Sapporo, pp 160–167
47. Och FJ, Ney H (2003) A systematic comparison of various statistical alignment models. Comput Linguist 29(1):19–51
48. Papineni K, Roukos S, Ward T, Zhu WJ (2002) BLEU: a method for automatic evaluation of machine translation. In: 40th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference, Philadelphia, pp 311–318
49. Saers M, Nivre J, Wu D (2009) Learning stochastic bracketing inversion transduction grammars with a cubic time biparsing algorithm. In: IWPT-09: Proceedings of the 11th International Conference on Parsing Technologies, Paris, pp 29–32
50. Snyder B, Barzilay R (2008) Unsupervised multilingual learning for morphological segmentation. In: ACL-08: HLT, 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, Columbus, pp 737–745
51. Sornlertlamvanich V, Mokarat C, Isahara H (2008) Thai-Lao machine translation based on phoneme transfer. In: Proceedings of the 14th Annual Meeting of the Association for Natural Language Processing, Tokyo, pp 65–68
52. Subotin M (2011) An exponential translation model for target language morphology. In: ACL-HLT 2011: Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, Oregon, pp 230–238
53. Talbot D, Osborne M (2006) Modelling lexical redundancy for machine translation. In: COLING–ACL 2006, 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics. Proceedings of the Conference, Sydney, pp 969–976
54. Tiedemann J (2009) Character-based PSMT for closely related languages. In: EAMT-2009: Proceedings of the 13th Annual Conference of the European Association for Machine Translation, Barcelona, pp 12–19
55. Tiedemann J (2012) Character-based pivot translation for under-resourced languages and domains. In: [EACL 2012] Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, Avignon, pp 141–151
56. Vilar D, Peter JT, Ney H (2007) Can we translate letters? In: ACL 2007: Proceedings of the Second Workshop on Statistical Machine Translation, Prague, Czech Republic, pp 33–39
57. Vogel S, Ney H, Tillmann C (1996) HMM-based word alignment in statistical translation. In: COLING-96: The 16th International Conference on Computational Linguistics, Proceedings, Copenhagen, pp 836–841
58. Wang Y, Uchimoto K, Kazama J, Kruengkrai C, Torisawa K (2010) Adapting Chinese word segmentation for machine translation based on short units. In: LREC 2010: Proceedings of the Seventh International Conference on Language Resources and Evaluation, La Valetta, Malta, pp 1758–1764
59. Wu D (1997) Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Comput Linguist 23(3):377–403
60. Xu J, Zens R, Ney H (2004) Do we need Chinese word segmentation for statistical machine translation? In: Proceedings of the 3rd SIGHAN Workshop on Chinese Language Processing, Barcelona, pp 122–128
61. Zhang H, Gildea D (2005) Stochastic lexicalized inversion transduction grammar for alignment. In: ACL-05: 43rd Annual Meeting of the Association for Computational Linguistics, Ann Arbor, Michigan, pp 475–482
62. Zhang H, Quirk C, Moore RC, Gildea D (2008a) Bayesian learning of non-compositional phrases with synchronous parsing. In: ACL-08: HLT, 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, Columbus, pp 97–105
63. Zhang R, Yasuda K, Sumita E (2008b) Improved statistical machine translation by multiple Chinese word segmentation. In: ACL-08: HLT: Third Workshop on Statistical Machine Translation, Proceedings of the Workshop, Columbus, pp 216–223
## Authors and Affiliations
• Graham Neubig (1)
• Taro Watanabe (2)
• Shinsuke Mori (3)
• Tatsuya Kawahara (3)
1. Nara Institute of Science and Technology, Ikoma, Japan
2. National Institute of Information and Communications Technology, Kyoto, Japan
3. Kyoto University, Kyoto, Japan
https://repository.lboro.ac.uk/articles/A_study_of_aero_engine_fan_flutter_at_high_rotational_speeds_using_holographic_interferometry/9525545/1
|
## A study of aero engine fan flutter at high rotational speeds using holographic interferometry
2013-06-27T10:45:40Z (GMT)
Aero-elastic instability is often a constraint on the design of modern high by-pass ratio aero engines. Unstalled supersonic flutter is an instability which can be encountered in shrouded fans, in which mechanical vibrations give rise to unsteady aerodynamic forces which couple further energy into the mechanical vibration. This phenomenon is particularly sensitive to the deflection shape of the mechanical vibration. A detailed measurement of the vibrational deflection shape of a test fan undergoing supersonic unstalled flutter was sought by the author. This measurement was required in order to assess the current theoretical understanding and modelling of unstalled fan flutter. The suitability of alternative techniques for this measurement was assessed. Pulsed holographic interferometry was considered optimum for this study because of its full-field capability, large range of sensitivity, high spatial resolution and good accuracy. A double-pulsed holographic system, employing a mirror-Abbe image rotator, was built specifically for this study. The mirror-Abbe unit was employed to rotate the illuminating beam and derotate the light returned from the rotating fan, thereby maintaining correlation between the two resultant holographic images. The holographic system was used to obtain good-quality interferograms of the 0.86 m diameter test fan when rotating at speeds just under 10,000 rpm and undergoing unstalled flutter. The resultant interferograms were analysed to give the flutter deflection shape of the fan. The study of the fan in flutter was complemented by measurement of the test fan's vibrational characteristics under non-rotating conditions. The resultant experimental data were in agreement with the current theoretical understanding of supersonic unstalled fan flutter. Many of the assumptions employed in flutter prediction by calculation of unsteady work were experimentally verified. The deflection shapes of the test fan under non-rotating and flutter conditions were compared with those predicted by a finite element model of the structure, and reasonably good agreement was obtained.
|
http://heattransfer.asmedigitalcollection.asme.org/article.aspx?articleid=2646872&journalid=124
|
Research Papers: Heat and Mass Transfer
# Nanoparticle Aggregation in Ionic Solutions and Its Effect on Nanoparticle Translocation Across the Cell Membrane
Author and Article Information
Kai Yue
School of Energy and Environmental Engineering, University of Science and Technology Beijing, Beijing 10083, China
e-mail: [email protected]

Jue Tang
School of Energy and Environmental Engineering, University of Science and Technology Beijing, Beijing 10083, China
e-mail: [email protected]

Hongzheng Tan, Xiaoxing Lv, Xinxin Zhang
School of Energy and Environmental Engineering, University of Science and Technology Beijing, Beijing 10083, China

1. Corresponding author.
Presented at the 5th ASME 2016 Micro/Nanoscale Heat & Mass Transfer International Conference. Paper No. MNHMT2016-6395. Contributed by the Heat Transfer Division of ASME for publication in the JOURNAL OF HEAT TRANSFER. Manuscript received June 9, 2016; final manuscript received March 6, 2017; published online August 23, 2017. Assoc. Editor: Chun Yang.
J. Heat Transfer 140(1), 012003 (Aug 23, 2017) (10 pages) Paper No: HT-16-1371; doi: 10.1115/1.4037392 History: Received June 09, 2016; Revised March 06, 2017
## Abstract
Nanoparticle (NP) aggregation can not only change the unique properties of NPs but also affect NP transport and membrane penetration behavior in biological systems. Coarse-grained (CG) molecular dynamics (MD) simulations were performed in this work to investigate the aggregation behavior of NPs with different properties in ionic solutions under different temperature conditions. Four types of NPs and NP aggregates were modeled to analyze the effects of NP aggregation on NP translocation across the cell membrane at different temperatures. Hydrophilic modification and surface charge modification inhibited NP aggregation, whereas stronger hydrophobicity and higher temperature resulted in a higher degree of NP aggregation and a denser structure of NP aggregates. The final aggregation percentage of hydrophobic NPs in the NaCl solution at 37 °C is 87.5%, while that of hydrophilic NPs is 0%, and the time required for hydrophobic NPs to reach 85% aggregation percentage at 42 °C is 6 ns, while it is 9.2 ns at 25 °C. The counterions in the solution weakened the effect of surface charge modification, thereby realizing good dispersity. High temperature could promote the NP membrane penetration for the same NP, while it also could enhance the NP aggregation which would increase the difficulty in NP translocation across cell membrane, especially for the hydrophobic NPs. Therefore, suitable surface modification of NPs and temperature control should be comprehensively considered in promoting NP membrane penetration in biomedical applications.
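The abstract reports "aggregation percentage" figures without stating how clusters were identified; a common post-processing choice in MD analyses is distance-cutoff clustering, sketched below. The 0.6 cutoff and the particle coordinates are illustrative assumptions for this sketch, not values taken from the paper.

```python
import math
from collections import Counter

def aggregation_percentage(positions, cutoff):
    """Fraction (in %) of particles belonging to a cluster of size >= 2,
    where two particles are linked when their distance is below `cutoff`."""
    n = len(positions)
    parent = list(range(n))

    def find(i):                      # union-find root with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    # Link every pair of particles closer than the cutoff.
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(positions[i], positions[j]) < cutoff:
                parent[find(i)] = find(j)

    sizes = Counter(find(i) for i in range(n))
    aggregated = sum(s for s in sizes.values() if s >= 2)
    return 100.0 * aggregated / n

# Illustrative configuration: three particles clumped together, one isolated.
pos = [(0.0, 0.0, 0.0), (0.3, 0.0, 0.0), (0.0, 0.4, 0.0), (5.0, 5.0, 5.0)]
print(aggregation_percentage(pos, cutoff=0.6))  # 75.0
```

In a real trajectory analysis this would be evaluated per frame, which is how a time series like "85% aggregation reached after 6 ns" can be produced.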
## Figures

Fig. 1: Initial configuration structure: (a) NP aggregation simulation and (b) NP–membrane interaction

Fig. 2: Snapshots of NP aggregation under different degrees of hydrophobicity/hydrophilicity

Fig. 3: Effect of surface charge on NP aggregation in ionic solution: (a) aggregation percentages of different NPs, (b) snapshots of aggregation of the surface-modified NPs, (c) distributions of Na+ (dark color) and Cl− (light color) ions, and (d) radial distribution function g(r) of Na+ and Cl− ions

Fig. 4: Effect of temperature on NP aggregation: (a) time sequence of snapshots of NP aggregation, (b) size distribution of the aggregate, and (c) time required to reach certain aggregation percentages at different temperatures

Fig. 5: Snapshots of the translocation across the DPPC membrane at 35 ns; distance Δz between the centers of the NP and DPPC membrane; and interaction energy E between the hydrophobic C1, semihydrophobic C5, or semihydrophilic N0 NPs and the system

Fig. 6: Snapshots of translocation across the DPPC membrane of different C1 aggregates: (a) one single 4 nm NP, 350 kJ·mol−1·nm−2; (b) three-NP aggregate, 350 kJ·mol−1·nm−2; (c) five-NP aggregate, 350 kJ·mol−1·nm−2; (d) five-NP aggregate, 700 kJ·mol−1·nm−2; and (e) minimum force required for the membrane penetration of different NPs

Fig. 7: Effect of NP aggregation and temperature on NP translocation across the membrane: (a) changes in area per lipid and RDF with temperature, (b) effect of temperature on NP membrane penetration, and (c) MSD of NPs at different temperatures
|
http://export.arxiv.org/abs/1711.00061
|
math.AP
Title: Weak Harnack estimates for supersolutions to doubly degenerate parabolic equations
Authors: Qifan Li
Abstract: We establish weak Harnack inequalities for positive, weak supersolutions to certain doubly degenerate parabolic equations. The prototype of this kind of equations is $$\partial_tu-\operatorname{div}|u|^{m-1}|Du|^{p-2}Du=0,\quad p>2,\quad m+p>3.$$ Our proof is based on Caccioppoli inequalities, De Giorgi's estimates and Moser's iterative method.
Subjects: Analysis of PDEs (math.AP)
MSC classes: 35K65, 35K92, 35B65 (Primary), 35K59, 35B45 (Secondary)
Cite as: arXiv:1711.00061 [math.AP] (or arXiv:1711.00061v1 [math.AP] for this version)
Submission history
From: Qifan Li [view email]
[v1] Tue, 31 Oct 2017 19:12:33 GMT (28kb)
|
http://www.ck12.org/analysis/Sums-of-Finite-Geometric-Series/flashcard/user:13IntK/Finding-the-Sum-of-a-Finite-Geometric-Series/r1/
|
# Sums of Finite Geometric Series
## A series with a defined ending value has sum S_n = a_1(1 - r^n)/(1 - r)
Finding the Sum of a Finite Geometric Series
Use these flashcards to study Algebra II with Trigonometry Concepts.
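The closed form S_n = a_1(1 - r^n)/(1 - r) above can be checked against a direct term-by-term summation; a minimal sketch, where the sample values a_1 = 3, r = 2, n = 10 are arbitrary:

```python
def geometric_sum(a1, r, n):
    """Closed form S_n = a1 * (1 - r**n) / (1 - r), valid for r != 1."""
    if r == 1:
        return a1 * n  # the formula breaks down at r = 1; the sum is n copies of a1
    return a1 * (1 - r**n) / (1 - r)

a1, r, n = 3, 2, 10
direct = sum(a1 * r**k for k in range(n))  # a1 + a1*r + ... + a1*r**(n-1)
print(geometric_sum(a1, r, n), direct)  # 3069.0 3069
```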
|
https://custom-writing.org/qna/the-primary-sample-used-in-defining-the-sample-size-required/
|
Primary (preliminary) sampling is usually carried out to ensure that the study population is sampled accurately and that the sample is distributed fairly across that population. Pretesting or prior sampling also helps avoid collecting large sets of data that would prove useless at the end of the survey, a problem that can arise from improper sample preparation or preservation, or from ineffective sampling methods. Preliminary sampling likewise prevents situations in which the target population could easily be missed.
Preliminary sampling is the method used to obtain the auxiliary information needed for more efficient sampling and estimation procedures. It is particularly valuable when no prior information about the population is available. The information obtained from the preliminary sample is then used to design the smaller, final sample. In general, this information serves to adjust the selection probabilities, to group the units, and to form the estimates directly.
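One standard way a preliminary sample feeds into the final design is the sample-size formula n = (z·s/E)², where s is the pilot standard deviation and E the desired margin of error. The sketch below illustrates this; the values z = 1.96, s = 12, E = 2 are assumptions for the example, not figures from the text.

```python
import math

def required_sample_size(pilot_sd, margin_of_error, z=1.96):
    """Smallest n such that z * pilot_sd / sqrt(n) <= margin_of_error."""
    return math.ceil((z * pilot_sd / margin_of_error) ** 2)

# A pilot sample suggests s ~ 12; we want a 95% CI half-width of 2 units.
print(required_sample_size(12, 2))  # 139
```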
|
https://math.codidact.com/posts/282046
|
# Why rational to be indifferent between two urns, when urn A has 50-50 red and white balls, but you don't know urn B's ratio?
Please see the bolded sentence below. Assume that I am risk averse and "prefer the known chance over the unknown". Why is it irrational for me to choose A?
> Also, there were problems on the probability side. One famous debate concerned a paradox posed by Daniel Ellsberg (of later fame due to publishing the Pentagon Papers). It involved multiple urns, some with known and some with unknown odds of drawing a winning ball. Instead of estimating the expected value of the unknown probability, and sticking with that estimate, most people exhibit strong aversion to ambiguity in violation of basic probability principles. A simpler version of the paradox would be as follows. You can choose one of two urns, each containing red and white balls. If you draw red you win $100 and nothing otherwise. You know that urn A has exactly a 50-50 ratio of red and white balls. In urn B, the ratio is unknown. From which urn do you wish to draw? Most people say A since they prefer the known chance over the unknown, especially since some suspect that urn B is perhaps stacked against them. But even if people can choose the color on which to bet, they still prefer A. **Rationally, you should be indifferent**, or if you think you can guess the color ratios, choose the urn with the better perceived odds of winning. Yet, smart people would knowingly violate this logical advice.

Paul Slovic, The Irrational Economist (2010), p. 56.

## 2 answers

Let's assume you know that urn B has 5 balls in it. I deliberately take an odd number, because that way we know for sure that there are not exactly the same number of red and white balls in that urn. Note that since you don't know the content of the urn, you have to assign probabilities to the possible contents. Now what could the content of the urn be? Well, for example, it could have 1 red and 4 white balls. But then, it also could have 1 white and 4 red balls in it. So which of those is more likely?
Well, unless you have any reason to assume that the urn contains more white balls than red, or more red than white, you have to assign the same probability to both. Now for any number of balls in the urn, exchanging red and white balls gives another possible content of the urn, and the same argument as above gives equal probabilities for both of those contents.

So what is the probability of drawing a red ball from urn B, provided that it has 5 balls? Let's denote the probability of drawing red by $p(R)$ and the probability that the urn contains $n$ red balls (and $5-n$ white balls) by $q(n)$; given $n$ red balls out of 5, the chance of drawing red is $n/5$. The law of total probability then gives:
$$p(R) = \sum_{n=0}^{5} q(n)\,\frac{n}{5}$$
But by the argument above, $q(n) = q(5-n)$, so pairing each $n$ with $5-n$ the sum simplifies to
$$p(R) = \sum_{n=0}^{2} \left(\frac{n}{5} + \frac{5-n}{5}\right) q(n) = \sum_{n=0}^{2} q(n) = \frac{1}{2},$$
where the last equality again uses the symmetry $q(n) = q(5-n)$ together with the fact that all the $q(n)$ have to add up to $1$.

Now this analysis works not just for $5$ balls but for any odd number of balls, and with a minor change also for all even numbers of balls. Thus no matter how many balls there are in urn B, the probability of drawing a red ball will always turn out to be $1/2$. For this reason, it also doesn't matter that you don't actually know the number of balls in urn B (except that, of course, there has to be at least one ball in it).

Now whether it is really irrational to choose urn A over urn B is a completely different question. I think the text is wrong in claiming this. It is true that the expectation value is the same, but the expectation value is not everything. Consider the specific case where urn A contains one red and one white ball, while urn B can, with equal probability, contain two white balls, a white and a red ball, or two red balls. Note that here we are in a better situation than in the original puzzle, because we are actually given both the possible contents of the urn and the corresponding probabilities.

Now let's consider that we play two rounds. Obviously the expectation value is to win one of those rounds, no matter which of the urns we choose; thus according to that text, both choices should be equivalent. But let us ask a different question: what is the probability that we don't win anything? Well, with urn A the probability clearly is $1/4$: there are four equally likely outcomes, and in only one of them is the white ball drawn twice. But with urn B, with probability $1/3$ we have an urn where we are guaranteed to draw a white ball twice, and with another probability $1/3$ we get the same urn as A, which fails to pay out with probability $1/4$. Therefore the probability of not winning either game is $1/3 \cdot 1 + 1/3 \cdot 1/4 = 5/12$, which is considerably higher than $1/4$. In other words, with urn B the risk is indeed higher, although the probability of winning a single game (and therefore the expectation value) is equal. And thus, if you are risk averse, choosing A over B is indeed rational. Anyone arguing otherwise would also have to argue that betting on getting a billion dollars with a probability of $1/1000000$ is equivalent to getting a thousand dollars for sure.
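Both claims in this answer can be checked by exact enumeration. A minimal sketch in exact rational arithmetic; the uniform prior over urn compositions is one concrete instance of the symmetric prior the argument assumes:

```python
from fractions import Fraction

# Urn B with 5 balls: any prior q(n) on the number n of red balls that is
# symmetric, q(n) == q(5 - n), gives P(red) = sum_n q(n) * n/5 = 1/2.
q = [Fraction(1, 6)] * 6  # uniform prior as one symmetric example
p_red = sum(q[n] * Fraction(n, 5) for n in range(6))
print(p_red)  # 1/2

# Two-round risk comparison from the second half of the answer.
# Urn A (one red, one white): P(no win in 2 draws) = (1/2)**2.
p_no_win_A = Fraction(1, 2) ** 2
# Urn B: equally likely {2 white, 1 red + 1 white, 2 red}.
p_no_win_B = (Fraction(1, 3) * 1            # all white: never win
              + Fraction(1, 3) * Fraction(1, 4)  # mixed: same as urn A
              + Fraction(1, 3) * 0)         # all red: always win
print(p_no_win_A, p_no_win_B)  # 1/4 5/12
```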
---
Say you have a coin that, when flipped, will land either heads or tails. What is the probability that it lands, say, heads? The "real" answer is that the probability is unknown: the information was not given at the start, so we cannot proceed further. But if we insist on moving on, we have to have a number, so we assume the probability is exactly 1/2, because there is one desired outcome (heads) out of two possible outcomes (heads, tails). Since no information is given, we have no reason to think that heads is more likely than tails, or vice versa.
Say an urn C has exactly one ball, either a red ball or a white ball. That's the only information you have. What then is the probability that a, say, red ball is drawn? The real answer is that the probability is unknown. But if we insist on moving on, we assume the probability is exactly 1/2.
Urn B has only red or white balls. We don't know how many of each there are. What is the probability that, say, a red ball is drawn? We assume the probability is exactly 1/2. This is the same as the probability for urn A. And since the probabilities are the same for urn A and for urn B, there is no reason to prefer one over the other.
|
http://cms.math.ca/cjm/msc/42C40?fromjnl=cjm&jnl=CJM
|
location: Publications → journals
Search results
Search: MSC category 42C40 ( Wavelets and other special systems )
Results 1 - 4 of 4
1. CJM 2011 (vol 63 pp. 689)
Olphert, Sean; Power, Stephen C.
Higher Rank Wavelets. A theory of higher rank multiresolution analysis is given in the setting of abelian multiscalings. This theory enables the construction, from a higher rank MRA, of finite wavelet sets whose multidilations have translates forming an orthonormal basis in $L^2(\mathbb R^d)$. While tensor products of uniscaled MRAs provide simple examples, we construct many nonseparable higher rank wavelets. In particular we construct \emph{Latin square wavelets} as rank~$2$ variants of Haar wavelets. Also we construct nonseparable scaling functions for rank $2$ variants of Meyer wavelet scaling functions, and we construct the associated nonseparable wavelets with compactly supported Fourier transforms. On the other hand we show that compactly supported scaling functions for biscaled MRAs are necessarily separable. Keywords: wavelet, multi-scaling, higher rank, multiresolution, Latin squares. Categories: 42C40, 42A65, 42A16, 43A65
2. CJM 2008 (vol 60 pp. 334)
Curry, Eva
Low-Pass Filters and Scaling Functions for Multivariable Wavelets. We show that a characterization of scaling functions for multiresolution analyses given by Hern\'{a}ndez and Weiss and a characterization of low-pass filters given by Gundy both hold for multivariable multiresolution analyses. Keywords: multivariable multiresolution analysis, low-pass filter, scaling function. Categories: 42C40, 60G35
3. CJM 2006 (vol 58 pp. 1121)
Bownik, Marcin; Speegle, Darrin
The Feichtinger Conjecture for Wavelet Frames, Gabor Frames and Frames of Translates. The Feichtinger conjecture is considered for three special families of frames. It is shown that if a wavelet frame satisfies a certain weak regularity condition, then it can be written as the finite union of Riesz basic sequences each of which is a wavelet system. Moreover, the above is not true for general wavelet frames. It is also shown that a sup-adjoint Gabor frame can be written as the finite union of Riesz basic sequences. Finally, we show how existing techniques can be applied to determine whether frames of translates can be written as the finite union of Riesz basic sequences. We end by giving an example of a frame of translates such that any Riesz basic subsequence must consist of highly irregular translates. Keywords: frame, Riesz basic sequence, wavelet, Gabor system, frame of translates, paving conjecture. Categories: 42B25, 42B35, 42C40
4. CJM 2002 (vol 54 pp. 634)
Weber, Eric
Frames and Single Wavelets for Unitary Groups. We consider a unitary representation of a discrete countable abelian group on a separable Hilbert space which is associated to a cyclic generalized frame multiresolution analysis. We extend Robertson's theorem to apply to frames generated by the action of the group. Within this setup we use Stone's theorem and the theory of projection valued measures to analyze wandering frame collections. This yields a functional analytic method of constructing a wavelet from a generalized frame multiresolution analysis in terms of the frame scaling vectors. We then explicitly apply our results to the action of the integers given by translations on $L^2({\mathbb R})$. Keywords: wavelet, multiresolution analysis, unitary group representation, frame. Categories: 42C40, 43A25, 42C15, 46N99
# A survey was conducted to determine the popularity of 3 food
Intern (Joined: 24 Jan 2012)
26 Sep 2013, 07:27
A survey was conducted to determine the popularity of 3 foods among students. The data collected from 75 students are summarized below:
48 like pizza
45 like hoagies
58 like tacos
28 like pizza and hoagies
37 like hoagies and tacos
40 like pizza and tacos
25 like all three foods
What is the number of students who like none or only one of the foods?
A. 4
B. 16
C. 17
D. 20
E. 23
I got this one right but I spent a lot of time playing with numbers. Can someone please show a faster way.
Math Expert (Joined: 02 Sep 2009)
28 Sep 2013, 03:35
violetsplash wrote:
A survey was conducted to determine the popularity of 3 foods among students. [...] Can someone please show a faster way.
$$Total = A + B + C - (sum \ of \ 2-group \ overlaps) + (all \ three) + Neither$$.
75 = 48 + 45 + 58 - (28 + 37 + 40) + 25 + Neither --> Neither = 4.
Only Pizza = P - (P and H + P and T - All 3) = 48 - (28 + 40 - 25) = 5;
Only Hoagies = H - (P and H + H and T - All 3) = 45 - (28 + 37 - 25) = 5;
Only Tacos = T - (P and T + H and T - All 3) = 58 - (40 + 37 - 25) = 6.
The number of students who like none or only one of the foods = 4 + (5 + 5 + 6) = 20.
Hope this helps.
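The arithmetic above is easy to verify mechanically. A small sketch (variable names are mine; the numbers are the thread's):

```python
total = 75
pizza, hoagies, tacos = 48, 45, 58
ph, ht, pt = 28, 37, 40   # pairwise "like both" counts (these include the all-three group)
all3 = 25

# Inclusion-exclusion:
# Total = A + B + C - (sum of 2-group overlaps) + (all three) + Neither
neither = total - (pizza + hoagies + tacos - (ph + ht + pt) + all3)

# "Only X" = X minus everyone in X who also likes something else.
only_pizza = pizza - (ph + pt - all3)
only_hoagies = hoagies - (ph + ht - all3)
only_tacos = tacos - (pt + ht - all3)

answer = neither + only_pizza + only_hoagies + only_tacos
print(neither, only_pizza, only_hoagies, only_tacos, answer)  # 4 5 5 6 20
```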
Intern (Joined: 24 Jan 2012)
Re: A survey was conducted to determine the popularity of 3 food
27 Sep 2013, 19:30
violetsplash wrote:
A survey was conducted to determine the popularity of 3 foods among students. [...] Can someone please show a faster way.
I have edited the question. The total number of students was missing. After that I could solve it as well. The updated solution is attached.
Attachment: Solution.jpg
##### General Discussion
Manager (Joined: 09 Nov 2012)
29 Sep 2014, 11:23
Bunuel / VeritasPrepKarishma, is there a simpler method similar to the table method used for such problems? This appeared in the first 10 questions of my test and I was stumped by the numbers involved.
Veritas Prep GMAT Instructor (Joined: 16 Oct 2010)
Re: A survey was conducted to determine the popularity of 3 food
30 Sep 2014, 07:58
nitin6305 wrote:
Is there a simpler method similar to the table method used for such problems? This appeared in the first 10 questions of my test and I was stumped by the numbers involved.
Make a Venn diagram. The toughest sets questions can be done easily with Venn diagrams because you can visualize the regions.
Draw three overlapping circles (as shown by Bunuel in his post above). Mark set A as Pizzas, set B as Hoagies and set C as Tacos.
The part where all three overlap (g in the diagram), mark that as 25.
28 like pizza and hoagies. 25 like all three so 3 like only pizzas and hoagies. Mark d as 3.
37 like hoagies and tacos. 25 like all three so 12 like only hoagies and tacos. Mark f as 12.
40 like pizza and tacos. 25 like all three so 15 like only pizza and tacos. Mark e as 15.
From 75, subtract d, e, f and g. This will give the number of students lying in a, b, c and None. This is exactly what we want.
75 - 3 - 12 - 15 - 25 = 20
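As an independent sanity check, one can build an explicit roster of 75 hypothetical students from the Venn-diagram region sizes worked out above, and then count every survey figure directly:

```python
# Region sizes from the Venn diagram: none, only-one, only-two, all three.
regions = {
    frozenset(): 4,
    frozenset({"P"}): 5, frozenset({"H"}): 5, frozenset({"T"}): 6,
    frozenset({"P", "H"}): 3, frozenset({"H", "T"}): 12, frozenset({"P", "T"}): 15,
    frozenset({"P", "H", "T"}): 25,
}
students = [likes for likes, n in regions.items() for _ in range(n)]

assert len(students) == 75
assert sum("P" in s for s in students) == 48   # like pizza
assert sum("H" in s for s in students) == 45   # like hoagies
assert sum("T" in s for s in students) == 58   # like tacos
assert sum({"P", "H"} <= s for s in students) == 28
assert sum({"H", "T"} <= s for s in students) == 37
assert sum({"P", "T"} <= s for s in students) == 40

# Students who like none or only one of the foods:
print(sum(len(s) <= 1 for s in students))  # 20
```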
Intern (Joined: 03 Jul 2014)
Re: A survey was conducted to determine the popularity of 3 food
23 Nov 2014, 11:06
This one is tricky and lengthy; I guess it doesn't end with knowing your Formula 1 and Formula 2. I tried solving this and got stuck again solving for "exactly 1 like", though it was pretty straightforward to solve for Neither.
Also, I noticed that for the same problem I can find the number of people who like exactly 2, using
Total = A + B + C - (Exactly 2 like: AB + BC + CA) - 2(ABC) + Neither.
Hence
75 = 151 - Exactly 2 like - 2(25) + 4
Exactly 2 like = 26.
Correct me if I am missing something; I guess, though, it's the right approach. Also, for this we first need to use Formula 1 to find Neither; only then is it possible. Suggest if there is any other approach.
Thanks.
Bunuel wrote:
$$Total = A + B + C - (sum \ of \ 2-group \ overlaps) + (all \ three) + Neither$$
75 = 48 + 45 + 58 - (28 + 37 + 40) + 25 + Neither --> Neither = 4. [...]
Intern (Joined: 03 Jul 2014)
Re: A survey was conducted to determine the popularity of 3 food
23 Nov 2014, 11:23
How do we handle the Neither part? I mean, when we subtract (all three) from each of the (2 like) regions, we assume it's part of the bigger a, b and c.
Suppose we try solving for "exactly two like" using the approach above. Then we have
All 2 like = (28-25) = 3; (37-25) = 12; (40-25) = 15; therefore 3 + 12 + 15 = 30. But I guess it's wrong, as I am missing out on the Neither part.
Now if I consider the approach used by Bunuel, then I know that Neither = 4, so exactly liking 2 items would be 30 - 4 = 26.
Please correct me if I am wrong; also, what approach should I take for Neither?
VeritasPrepKarishma wrote:
[...]
From 75, subtract d, e, f and g. This will give the number of students lying in a, b, c and None. This is exactly what we want.
75 - 3 - 12 - 15 - 25 = 20
Veritas Prep GMAT Instructor (Joined: 16 Oct 2010)
Re: A survey was conducted to determine the popularity of 3 food
23 Nov 2014, 19:39
hanschris5 wrote:
[...] Please correct me if I am wrong; also, what approach should I take for Neither?
The question asks you the sum of 'None' and 'Only 1' i.e. it asks you 'None + Only 1' (irrespective of how many are in None and how many are in Only 1). You just need the sum.
You have that the total number of students = 75
75 = None + Only 1 + Only 2 + All 3
To get 'None + Only 1', all you need to do is subtract 'Only 2' and 'All 3' from 75.
'Only 2' =3 + 12 + 15 = 30 (as correctly calculated by you. It does not include 'None')
'All 3' = 25
So None + Only 1 = 75 - 30 - 25 = 20
Intern (Joined: 12 Aug 2014)
Re: A survey was conducted to determine the popularity of 3 food
25 Dec 2014, 04:00
There's a faster way of doing it.
Let x = total who like all 3 foods
Total who like exactly 2 foods => (37 - x) + (28 - x) + (40 - x) = 105 - 3x = 30
Total who like all 3 = x = 25
Total 0 and 1 = Total - (Total 2 + Total 3)
= 75 - 30 - 25
= 20
Manager (Joined: 25 Feb 2014)
Re: A survey was conducted to determine the popularity of 3 food
12 Feb 2015, 06:20
Dear Bunuel,
I think in your solution, you intended to use the second formula from the gmatclub mathbook,but wrote the first formula.
Math Expert (Joined: 02 Sep 2009)
Re: A survey was conducted to determine the popularity of 3 food
12 Feb 2015, 06:33
vards wrote:
Dear Bunuel,
I think in your solution, you intended to use the second formula from the gmatclub mathbook,but wrote the first formula.
Can you elaborate on this? Why is the formula used wrong?
Intern (Joined: 02 Jan 2015)
Re: A survey was conducted to determine the popularity of 3 food
03 Jul 2015, 03:08
I'd appreciate further clarification on why we use the version of the formula where we add back in the 'all' category? I'm not sure why we are able to assume that the 'all' category also includes numbers from the 'two items' sections? Thanks
Math Expert (Joined: 02 Sep 2009)
Re: A survey was conducted to determine the popularity of 3 food
03 Jul 2015, 03:11
ElCorazon wrote:
I'd appreciate further clarification on why we use the version of the formula where we add back in the 'all' category? I'm not sure why we are able to assume that the 'all' category also includes numbers from the 'two items' sections? Thanks
Hope this helps.
Director, Professional GMAT Tutor (Joined: 10 Jul 2015)
Re: A survey was conducted to determine the popularity of 3 food
03 Jun 2016, 09:15
When you draw the Venn diagram, I suggest starting with the "triple overlap" number of 25 and working your way out from there. Don't forget to adjust the numbers as you go.
Attached is a visual that should help.
Attachment: Screen Shot 2016-06-03 at 10.14.20 AM.png
Director (Joined: 12 Nov 2016)
Re: A survey was conducted to determine the popularity of 3 food
12 Oct 2017, 18:16
violetsplash wrote:
A survey was conducted to determine the popularity of 3 foods among students. [...] Can someone please show a faster way.
The thing that's important to remember with this type of problem is that it says "37 like hoagies and tacos", not "37 like only hoagies and tacos" ---> knowing this, we can apply the first of the 2 formulas used for 3 overlapping sets.
total = A + B + C - [sum of 2] + [all three] + neither
75 = (48 + 45 + 58) - (28 + 37 + 40) + 25 + x = 71 + x
Neither = 4
Work in reverse to find the number who like exactly 2.
D
Director (Joined: 11 Feb 2015)
Re: A survey was conducted to determine the popularity of 3 food
20 Jul 2018, 09:54
The question can be solved in two parts:
1) 75 minus the number contained in the union of the three sets gives the number of students who like none of the foods (i.e. 75 - 71 = 4).
2) The number of students who like only one of the foods = 71 - 55 = 16 (where 55 = 30 who like exactly two + 25 who like all three).
Total = 4+16 = 20
Option D is the correct answer.
Manager (Joined: 31 Jul 2017)
Re: A survey was conducted to determine the popularity of 3 food
01 Sep 2018, 22:39
CAMANISHPARMAR wrote:
The question could be solved in two parts [...]
CAMANISHPARMAR, how do you find 4 from first part (or 71)?
# Fluids in thermodynamic equilibrium
I am reading about the Euler equations of fluid dynamics from Leveque's Numerical Methods for Conservation Laws. After introducing the mass, momentum and energy equations, some thermodynamic concepts are discussed, to introduce an equation of state.
He says
In the Euler equations we assume that the gas is in chemical and thermodynamic equilibrium and that the internal energy is a known function of pressure and density.
After this, the usual thermodynamics-related equation of state (EOS) discussions are carried out.
Now, chemical equilibrium I understand (the number of moles of the chemical constituents does not change); however, I don't understand how the assumption of thermodynamic equilibrium can be imposed.
From what baby thermodynamics I know, any thermodynamic analysis is always calculated for quasi-static processes, like 'slowly' pushing a piston in a cylinder of gas.
But in fluid dynamics fluids are flowing and that too rapidly and from intuition there will not be any thermodynamic equilibrium during fluid flow.
Where is my understanding going wrong?
The excerpt from the text forgets to mention that you assume Local Thermodynamic Equilibrium, not full Thermodynamic Equilibrium, so that it becomes possible to define an EoS from point to point (or from region to region).
If there is no sense of being 'close' to thermodynamic equilibrium, it is simply impossible to talk about an EoS, pressure and the like; this is hydrodynamics as local thermal equilibrium.
From the strict thermodynamic view, you can't talk about anything time-dependent. All piston and thermal-cycle arguments abuse the notion of 'quasi-equilibrium' without really defining it, and simply postulate that you can use full traditional thermodynamics throughout. From a 'rigorous' point of view, the only things you can treat in thermodynamics are stationary, homogeneous systems in the 'thermodynamic limit' ($N,V,S...\rightarrow \infty$ but $N/V, S/V...$ fixed), so no time dependence.
The idea of flow is that even if the fluid flows very fast in relation to a fixed observer, if you go to the rest frame of that piece of fluid, you can talk of thermodynamic equilibrium close to that small part of the fluid.
I believe that the best way to understand how this all works is through the Boltzmann equation, which I can develop later if you wish.
So, you have the Boltzmann equation:
$\frac{\partial f}{\partial t}+\vec v \cdot \frac{\partial f}{\partial \vec r}+\vec F\cdot \frac{\partial f}{\partial \vec p} = \int d^3 \vec p_0 d\Omega\ g\ \sigma(g,\Omega) (f'f_1' - ff_1)$
Where $g=|\vec p - \vec p_0|$, $\sigma(g,\Omega)$ is the differential cross section between gas molecules, and the primed distributions are evaluated at the momenta corresponding to an outgoing (solid) angle $\Omega$ with ingoing momenta $\vec p$ and $\vec p_0$. The normalization is $\int d^3\vec r\ d^3 \vec p\ f(t,\vec r,\vec p)=N$, where $N$ is the total number of particles.
We believe that this equation provides a good description of the 1-particle distribution function, in phase space, of a rarefied gas of hard spheres (i.e. a hard, short-range, repulsive potential with only elastic collisions). Putting aside whether it is justified to model a gas this way, simply believe for the moment that it works.
Now you want to model a gas inside a box as being in full thermodynamic equilibrium. Equilibrium means a stationary, homogeneous material. So you look for solutions of the Boltzmann equation with this kind of symmetry, and thus:
$f(t,\vec r, \vec p) = \frac{N}{V}Id_V(\vec r)f_0(\vec p)$
So the only inhomogeneity there may be is an indicator function saying that outside the box there is no gas. We are also supposing that the only external forces act on the walls of the box, so in the bulk of the gas we have $\vec F=0$.
Now we feed this ansatz to the Boltzmann equation and see what happens. From the above assumptions, $\partial f/\partial t=0$, $\partial f/\partial \vec r = 0$ and $\vec F=0$ in the bulk. This gives us:
$\frac{\partial f}{\partial t}+\vec v \cdot \frac{\partial f}{\partial \vec r}+\vec F\cdot \frac{\partial f}{\partial \vec p} = 0 = \int d^3 \vec p_0 d\Omega\ g\ \sigma(g,\Omega) (f'f_1' - ff_1)$
So we need to kill the collision kernel in order to satisfy the Boltzmann equation. The easiest way is to nullify the subtraction inside it by requiring:
$f_0(\vec p)f_0(\vec p_1)=f_0(\vec p')f_0(\vec p_1')$
for all possible (elastic) collision outcomes. Now comes the smart point: let's take the $\log$ of the above expression.
$\log f_0(\vec p) + \log f_0(\vec p_1)=\log f_0(\vec p') + \log f_0(\vec p_1')$
If $\log f_0$ is a function only of additive quantities conserved in the collision, we get the relation above for free! (OK, not completely for free; it's possible to show that this is essentially the only way to do it.)
Now, for elastic binary collisions, we have only 3 conserved quantities: mass, linear momentum (because we believe there is no relevant rotation) and kinetic energy.
Now we write:
$\log f_0(\vec p) = A\frac{\vec p^2}{2m}+ \vec B \cdot \vec p + Cm$
Massaging the above expression and using integrability conditions, we may write:
$\log f_0(\vec p) = -\frac{ (\vec p-\vec p_0)^2}{2m\sigma^2}+ \log N_0$
In the case of the box, we know that the box isn't moving (equivalently, it's locally isotropic), so we put $\vec p_0=0$ and we get the Boltzmann Distribution as a solution to Boltzmann equation for equilibrium conditions. Further, we can identify $\sigma^2 = k_BT$ and we close the identification.
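The key step above is that any elastic collision conserves momentum and kinetic energy, so a Gaussian $f_0$ makes $f_0(\vec p)f_0(\vec p_1) - f_0(\vec p\,')f_0(\vec p_1')$ vanish identically. A small numerical sketch of this (equal masses; the parameters, units and construction are my own, not from the answer):

```python
import math
import random

random.seed(0)
m, kT = 2.0, 1.3

def f0(p):
    # Un-normalized Maxwell-Boltzmann weight for a gas at rest (p0 = 0).
    return math.exp(-sum(c * c for c in p) / (2 * m * kT))

def collide(p, p1):
    """Elastic collision of equal masses: conserve total momentum and
    |relative momentum|, scatter into a random direction."""
    P = [(a + b) / 2 for a, b in zip(p, p1)]                 # midpoint momentum
    g = math.sqrt(sum((a - b) ** 2 for a, b in zip(p, p1)))  # |p - p1|
    z = random.uniform(-1, 1)                                # random unit vector n
    phi = random.uniform(0, 2 * math.pi)
    r = math.sqrt(1 - z * z)
    n = (r * math.cos(phi), r * math.sin(phi), z)
    out = [P[i] + 0.5 * g * n[i] for i in range(3)]
    out1 = [P[i] - 0.5 * g * n[i] for i in range(3)]
    return out, out1

p = [random.gauss(0, 1) for _ in range(3)]
p1 = [random.gauss(0, 1) for _ in range(3)]
pp, pp1 = collide(p, p1)

# Detailed balance: the product is invariant under any elastic outcome.
print(abs(f0(p) * f0(p1) - f0(pp) * f0(pp1)) < 1e-12)  # True
```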
Now, to hydrodynamics. To find hydrodynamical equations from the Boltzmann equation, we "take moments" of it, i.e., we multiply it by powers of the linear momentum and integrate over momentum, so we get equations for quantities that live in ordinary 3D space.
Multiplying by $\chi(\vec p)$ and integrating:
$\frac{\partial}{\partial t}\left(\int d^3\vec p\ \chi(\vec p) f\right) + \frac{1}{m} \nabla_{\vec r} \cdot \left(\int d^3\vec p\ \chi(\vec p)\vec p f\right) + \vec F \cdot \left(\int d^3\vec p\ \chi(\vec p) \frac{\partial f}{\partial \vec p}\right) = \int d^3 \vec p\ d^3 \vec p_0\ d\Omega\ g\chi(\vec p)\ \sigma(g,\Omega) (f'f_1' - ff_1)$
It's possible to show that if $\chi(\vec p)$ is a quantity conserved in binary collisions, the last term is $=0$, so that's what we are going to look for. Choosing $\chi(\vec p)=m$, we arrive at:
$\frac{\partial}{\partial t}\left(\int d^3\vec p\ m f\right) + \nabla_{\vec r} \cdot \left(\int d^3\vec p\ \vec p f\right) = 0$
Identifying $\rho = \int d^3\vec p\ m f$ as the mass density and $\int d^3\vec p\ \vec p f = \vec j = \rho \langle\vec v\rangle = \rho \vec u$ as the mass current, we have the continuity equation for the mass density:
$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \vec u) = 0$
Setting $\chi(\vec p) = \vec p$ we arrive at:
$\frac{\partial }{\partial t}(\rho \vec u) + \nabla \cdot \Sigma - \rho F = 0$
Where $\Sigma_{ij} = \int d^3\vec p\ p_i p_j f(t,\vec r,\vec p)$. Now we can decompose $\vec p = m\langle\vec v\rangle + \delta \vec p = m\vec u + \delta \vec p$, where we identify the average velocity $\vec u = \frac{1}{\rho} \vec j$. This average velocity is what we identify as the fluid velocity. Going back to the last equation we have:
$\frac{\partial }{\partial t}(\rho \vec u) + \nabla \cdot \left(\rho \vec u \otimes \vec u + \Pi \right) = \vec f$
Where, finally, we identify $\Pi_{ij} = \int d^3\vec p\ \delta p_i \delta p_j f(t,\vec r,\vec p)$ as the stress tensor. The convective part is already there, and we can now appreciate the connection between kinetic theory and hydrodynamics. Coming back to the Boltzmann distribution:
$f=n(t,\vec r)f_0(\vec p)$
$f_0(\vec p) = \frac{1}{(2\pi m k_BT)^{3/2}}e^{-\frac{ (\vec p-\vec p_0)^2}{2mk_BT}}$
We said that for thermodynamics we had $n$, $T$ and $\vec p_0$ constant all along the gas. For hydrodynamics, we try to retain that functional form but relax this assumption, i.e., we try (again) to find solutions of the Boltzmann equation of the above form, but with $T(t,\vec r)$ and $\vec p_0(t,\vec r)$ possibly depending on time and space, and so we talk about local thermal equilibrium, since we try to keep, locally, an equilibrium distribution.
If we do that, we end up with $\rho = m\times n(t,\vec r)$ and $\vec u = \frac{\vec p_0}{m}$, which wasn't totally unexpected, and $\vec p - m\vec u= \delta \vec p$, so the Boltzmann distribution measures the (local) fluctuation of velocity. Now computing the stress tensor:
$\Pi_{ij} = \frac{n}{(2\pi m k_BT)^{3/2}}\int d^3\vec p\ \delta p_i \delta p_j e^{-\frac{\delta \vec p^2}{2mk_BT}}$
It's not too difficult to see that the stress tensor above is proportional to the identity tensor, and we identify $\Pi_{ij} = p \delta_{ij}$; since we have a relation between pressure, density and temperature, we have an EoS. If you plug that into the original equation with $\Pi$, you end up with the Euler equation for fluid dynamics. So you can think of the Euler equation as an evolution equation for something in strict local equilibrium.
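Why $\Pi_{ij}\propto\delta_{ij}$: for a local Maxwell-Boltzmann distribution each component of $\delta\vec p$ is an independent Gaussian of variance $mk_BT$, so the second-moment matrix is isotropic. A quick Monte Carlo sketch, with arbitrary parameters of my own choosing:

```python
import random

random.seed(1)
m, kT = 1.0, 0.8
N = 100_000

# Sample momentum fluctuations dp; each component ~ Normal(0, m*kT).
samples = [[random.gauss(0, (m * kT) ** 0.5) for _ in range(3)] for _ in range(N)]

# Second-moment matrix <dp_i dp_j>: should approach m*kT times the identity.
M = [[sum(s[i] * s[j] for s in samples) / N for j in range(3)] for i in range(3)]

for i in range(3):
    for j in range(3):
        expected = m * kT if i == j else 0.0
        assert abs(M[i][j] - expected) < 0.02  # isotropic within Monte Carlo noise
print([round(M[i][i], 3) for i in range(3)])
```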
Also, if you look closely, you will see that the probability distribution only cares about the velocity fluctuation $\delta \vec p/m$, and not the actual velocity of the fluid $\vec u$. Here enters your question about the fluid flow:
But in fluid dynamics fluids are flowing and that too rapidly and from intuition there will not be any thermodynamic equilibrium during fluid flow.
From the fluid standpoint, the average velocity is not important to the thermodynamics, only the fluctuations around this average velocity.
Chemical equilibrium is not being considered here, since we are supposing that the fluid has a single chemical species, so it's naturally in chemical equilibrium.
Now, beyond Euler equation:
One very (strong) assumption that we made was that the fluid had the distribution in phase space that was locally Maxwell-Boltzmann. What would happen if we dropped this assumption?
Generally, we can't solve the Boltzmann equation (or can only solve it numerically) except in very special cases, so, as any good physicist, we go to the next best thing: approximate solutions.
What happens if our system is not in equilibrium but close to it? It should be possible to write $f=f_0\phi$ where $\phi \approx 1$. Now, you would like to find some parameter to do a perturbation expansion around. This parameter is essentially the Knudsen number of the system. If you do this, essentially the only thing that changes is the stress tensor, which depends explicitly on the form of the distribution in phase space.
The Knudsen number is essentially a measure of how far the "microscopic" scale of your system is from the "macroscopic" scale. If they are sufficiently far apart, i.e. $Kn \ll 1$, a macroscopic, or hydrodynamical, description of your system should be good.
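For concreteness, a back-of-the-envelope estimate of $Kn$ for a gas at room conditions, using the hard-sphere mean free path; the molecular diameter below is an assumed round number, not a precise value for air:

```python
import math

# Rough Knudsen-number estimate for a hard-sphere gas at room conditions.
# mean free path: lambda = kB*T / (sqrt(2) * pi * d^2 * p)
kB = 1.380649e-23                # J/K
T, p = 300.0, 101_325.0          # K, Pa
d = 3.7e-10                      # m, effective molecular diameter (assumed)

mfp = kB * T / (math.sqrt(2) * math.pi * d**2 * p)   # mean free path, ~tens of nm
L = 1.0                                              # macroscopic scale, 1 m
Kn = mfp / L
print(f"mean free path ~ {mfp:.1e} m, Kn ~ {Kn:.1e}")
```

With a mean free path of tens of nanometers against a meter-scale flow, $Kn \sim 10^{-7}$, so the hydrodynamic description is comfortably justified.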
The zeroth order in $Kn$ should be $f_0$, so you seek something like $\phi = 1+Kn\, \phi_1 + (Kn)^2 \phi_2 + ...$
You can carry out this calculation, which is rather lengthy, and what you find (if I remember correctly) is that at first order in the Knudsen number you recover the Navier-Stokes stress tensor, which in turn brings you to the Navier-Stokes equation, with the bulk and shear viscosity coefficients.
Not only this: you can calculate the dependence of these coefficients on density and temperature, so you not only have the general form of the evolution equation, but also an "EoS" in the extended sense, encompassing the viscosity coefficients.
So, the idea of using this method is to define pressure, temperature and the like on the "equilibrium" part of the distribution, and viscosity and any other kind of effect on the "non-equilibrium" part. In this sense you can talk about thermodynamics, since you are near (local) equilibrium even if you are not exactly in equilibrium.
So, this is one way to see hydrodynamics as 'mean kinetic theory', and also as an (almost) local thermodynamics. There are also other ways to do it. One is to study Non-Equilibrium Thermodynamics as a macroscopic (in the same sense as classical thermodynamics) mean theory. This is done, in the linear theory, by De Groot & Mazur.
I hope that I have clarified some of your questions. I believe this is a very interesting subject, and I like it very much myself.
Wow thanks! Yes I would really appreciate some pointers on how Boltzmann equation explains all this away. – smilingbuddha Jun 13 '13 at 22:13
http://physics.stackexchange.com/questions/63927/localized-electrons-in-the-crystals
# localized electrons in the crystals
Why do electrons in low-lying levels of individual atoms stay localized in their own atoms in a crystal? Doesn't this contradict Bloch's theorem?
Yes (that would be the lowest state in the energy band, to which 1s energy level would expand to). But in that case the crystal would be unstable, because the average charge density would be positive then. To make a zero average charge density, you need to take as many electrons as protons in the nuclei. For the simpler model, consider 1 electron per 2 protons - an $\mathrm{H}_2^+$ hydrogen molecular ion. – firtree May 9 at 10:03
https://indianf.com/aries-astronomers-trace-the-mystery-behind-dwarf-galaxies/
|
# ARIES astronomers trace the mystery behind dwarf galaxies
Amidst the billions of galaxies in the universe, a large number are tiny ones, 100 times less massive than our own Milky Way galaxy.
While most of these tiny tots called dwarf galaxies form stars at a much slower rate than the massive ones, some dwarf galaxies are seen forming new stars at a mass-normalized rate 10-100 times that of the Milky Way. These activities, however, do not last longer than a few tens of million years, a period much shorter than the age of these galaxies – typically a few billion years.
Scientists observing dozens of such galaxies using two Indian telescopes have found that the clue to this strange behaviour of these galaxies lies in the disturbed hydrogen distribution in these galaxies and also in recent collisions between two galaxies.
To understand the nature of star formation in dwarf galaxies, astronomers Dr. Amitesh Omar and his former student Dr. Sumit Jaiswal from the Aryabhatta Research Institute of Observational Sciences (ARIES), an autonomous institute of the Department of Science & Technology (DST), observed many such galaxies using the 1.3-meter Devasthal Fast Optical Telescope (DFOT) near Nainital and the Giant Metrewave Radio Telescope (GMRT).
While the former operated at optical wavelengths, sensitive to optical line radiation emanating from ionized hydrogen, in the latter 30 dishes of 45-meter diameter each worked in tandem, producing sharp interferometric images via the spectral line radiation at 1420.40 MHz coming from the neutral hydrogen in galaxies.
Star formation at a high rate requires a very high density of hydrogen in the galaxies. According to the study conducted by the ARIES team, the 1420.40 MHz images of several intensely star-forming dwarf galaxies indicated that the hydrogen in these galaxies is very disturbed. While one expects a nearly symmetric distribution of hydrogen in well-defined orbits, the hydrogen in these dwarf galaxies is found to be irregular and sometimes not moving in well-defined orbits.
Some hydrogen around these galaxies is also detected in the form of isolated clouds, plumes, and tails, as if some other galaxy has recently collided with or brushed past these galaxies and gas has been scattered as debris around them. The optical morphologies sometimes revealed multiple nuclei and a high concentration of ionized hydrogen in the central region.
Although galaxy-galaxy collision was not directly detected, various signatures of it were revealed through radio and optical imaging, and these are helping to build up a story. The research, therefore, suggests that recent collisions between two galaxies trigger intense star formation in these galaxies.
The findings of this research, with detailed images of 13 galaxies, will appear in a forthcoming issue of the Monthly Notices of the Royal Astronomical Society (MNRAS), published by the Royal Astronomical Society, U.K. It will help astronomers understand the formation of stars and the evolution of less massive galaxies in the Universe.
http://quant-splinters.blogspot.ch/2014/01/
## Friday, 24 January 2014
### Drawdown Risk Budgeting: Contributions to Drawdown-At-Risk and the Drawdown Parity Portfolio
Similar to Value-At-Risk, Drawdown-At-Risk is defined as a point on the drawdown distribution defined by a probability interpreted as a "level of confidence". The well-known risk measure Maximum Drawdown is the 100% Drawdown-At-Risk, i.e. the drawdown which is not exceeded with certainty.
The table below shows Drawdown-At-Risk values for the constituents of a specific multi asset class universe (total returns, monthly figures, Jan 2001 to Oct 2011, base currency USD)...
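Drawdown-At-Risk figures like these can be computed directly as a quantile of the empirical drawdown distribution of a cumulative-return path. The sketch below uses synthetic monthly returns in place of the multi-asset data (which is not reproduced here):

```python
import numpy as np

# Empirical Drawdown-At-Risk: the alpha-quantile of the drawdown distribution.
# Synthetic monthly returns stand in for the multi-asset data in the post.
rng = np.random.default_rng(42)
r = rng.normal(0.005, 0.04, size=130)        # ~130 monthly observations

wealth = np.cumprod(1.0 + r)
drawdown = 1.0 - wealth / np.maximum.accumulate(wealth)   # 0 at new highs

dar95 = np.quantile(drawdown, 0.95)   # 95% Drawdown-At-Risk
mdd = drawdown.max()                  # Maximum Drawdown = 100% DaR
print(f"95% DaR: {dar95:.3f}, MDD: {mdd:.3f}")
```

By construction the 95% DaR never exceeds the Maximum Drawdown, consistent with MDD being the 100% Drawdown-At-Risk.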
In portfolio analytics, fully additive contributions to risk can be derived from Euler's homogeneous function theorem for linear homogeneous risk measures. Portfolio volatility and tracking error are examples of risk measures that are linear homogeneous in constituent weights.
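For the linear homogeneous case the Euler decomposition is exact and easy to verify numerically. A sketch with portfolio volatility, using made-up weights and covariance:

```python
import numpy as np

# Euler allocation for a linear homogeneous risk measure, illustrated with
# portfolio volatility sigma(w) = sqrt(w' C w): contribution_i = w_i * d(sigma)/dw_i,
# and the contributions add up exactly to total risk. Inputs are test values.
C = np.array([[0.04, 0.01, 0.00],
              [0.01, 0.09, 0.02],
              [0.00, 0.02, 0.16]])
w = np.array([0.5, 0.3, 0.2])

sigma = float(np.sqrt(w @ C @ w))
marginal = C @ w / sigma          # gradient d(sigma)/dw
contrib = w * marginal            # additive risk contributions
print(contrib, contrib.sum(), sigma)   # contributions sum to sigma
```

The same recipe applied to a non-linear homogeneous measure such as Drawdown-At-Risk only holds approximately, which is the point of the next paragraph.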
Non-linear homogeneous risk measures can be approximated (e.g. Taylor series expansions, using the total differential as a linear approximation, and so on). In the chart below, we show how the 95% DaR of an equal-weighted portfolio varies with variations in individual constituent weights (we make the assumption that exposures are booked against a riskfree cash account with zero return)...
This chart is called "the Spaghetti chart" by certain people. In the case of the minimum 95% DaR portfolio, i.e. the fully invested long-only portfolio with minimum 95% Drawdown-At-Risk, all spaghettis must point downwards...
The full details of the risk decomposition for the equal-weighted portfolio...
...in comparsion with the minimum 95% DaR portfolio...
Additive contributions to portfolio Drawdown-At-Risk open up the door for drawdown risk budgeting. For example, the Drawdown Parity Portfolio can be calculated as the portfolio with equal constituent contributions to portfolio drawdown risk...
Due to the residual, the DaR contributions are not perfectly equalized. Taking into account estimation risk and other implementation issues, this is acceptable for practical purposes.
Being able to calculate additive contributions to drawdown-at-risk is useful for descriptive ex post or ex ante risk budgeting purposes. The trade risk charts are useful indicators providing information on a) the risk drivers in the portfolio and b) the directions to trade.
Budgeting drawdown risk is really budgeting for future drawdowns ("ex ante drawdown"). This involves estimating future drawdowns. Whether future drawdowns can be estimated with the required precision is an empirical question. In order to assess what this task might involve, it is interesting to review certain findings in the theoretical literature related to the expected maximum drawdown for geometric Brownian motions (see for example "An Analysis of the Expected Maximum Drawdown Risk Measure" by Magdon-Ismail/Atiya. More recently, analytical results have been derived for return generating processes with time-varying volatility). In the long run, the expected maximum drawdown for a geometric Brownian motion is...
\$MDD_{e} = \left(0.63519 + 0.5 \ln T + \ln\frac{\mu}{\sigma}\right) \cdot \frac{\sigma^2}{\mu} \$
Expected maximum drawdown is a function of investment horizon (+), volatility (+) and expected return (-).
While we have time series models with proven high predictive power to estimate volatility risk (e.g. GARCH), the estimation of maximum drawdown is a much more challenging task because it involves estimating expected returns, which is known to be subject to much higher estimation risk.
## Monday, 6 January 2014
### Resampling the Efficient Frontier - With How Many Observations?
Since optimizer inputs are stochastic variables, it follows that any efficient frontier must be a stochastic object. The efficient frontier we usually plot in mean/variance space is the expected efficient frontier. The realized efficient frontier will almost always deviate from the expected frontier and will lie within certain confidence bands.
Several attempts have been made to illustrate the stochastic nature of the efficient frontier, the most famous probably being the so-called "Resampled Efficient Frontier" (tm) by Michaud/Michaud (1998).
Resampling involves setting the number of simulations as well as setting the number of observations to generate in each simulation. The importance of the latter decision is typically underestimated.
The chart below plots the resampled portfolios of 16 portfolios on a particular mean/variance efficient frontier...
The larger density of points at the bottom left end of the frontier results from the fact that there exist two very similar corner portfolios in this area of the curve.
The chart below plots the same frontier with the same number of simulations, but a much larger number of generated observations...
As the confidence bands, average weights and any risk and return characteristics are largely determined by the choice of the number of simulations and the number of observations in each simulation, it is worth keeping an eye on these modelling decisions when relying on a resampling approach for investment purposes.
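A minimal illustration of the point, assuming i.i.d. normal returns with made-up monthly parameters: the dispersion of the resampled mean shrinks roughly like one over the square root of the observation count, so this choice directly controls the width of any resampling-based confidence band:

```python
import numpy as np

# How the number of generated observations per simulation drives the spread
# of resampled statistics. mu, sigma are illustrative monthly figures.
rng = np.random.default_rng(7)
mu, sigma, n_sims = 0.006, 0.045, 2000

def resampled_mean_spread(n_obs):
    """Std dev of the resampled sample means across n_sims simulations."""
    sims = rng.normal(mu, sigma, size=(n_sims, n_obs))
    return sims.mean(axis=1).std()

spread_60 = resampled_mean_spread(60)     # 5 years of monthly data
spread_600 = resampled_mean_spread(600)   # 50 years of monthly data
print(spread_60, spread_600)              # the latter is roughly sqrt(10) smaller
```

The same mechanism applies to resampled weights and frontier coordinates: more observations per simulation means tighter clouds around the expected frontier, all else equal.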
https://www.physicsforums.com/threads/please-help-me-split-this-equation-into-2-equations.85842/
1. Aug 21, 2005
### memarf1
I'm trying to turn this equation into 2 separate equations in order to place it in a Runge-Kutta problem. This is the proposed problem and conditions:
$$\frac{d^2f}{dx^2} + f = 0$$
allowing
$$f (x) = A\cos x + B\sin x$$
$$f ' (x) = -A\sin x + B\cos x$$
$$f '' (x) = -A\cos x - B\sin x$$
and
$$g = \frac{df}{dx}$$
meaning
$$\frac{df}{dx} - g = 0$$ which, together with the equation for $\frac{dg}{dx}$ below, is equivalent to $$\frac{d^2f}{dx^2} + f = 0$$
so
$$\frac{dg}{dx} + f = 0$$
the initial conditions for equation 1 are:
$$f (0) = 1$$
$$f ' (0) = 0$$
and for equation 2 are:
$$f (0) = 0$$
$$g (0) = 1$$
I hope this formatting is easier to read.
any suggestions??
Last edited: Aug 21, 2005
2. Aug 21, 2005
### Zurtex
Right, you really need to learn LaTeX. So your post, I think, would go like this:
$$\frac{d''f}{dx''} + f = 0$$
Therefore:
$$f (x) = A\cos x + B\sin x$$
$$f ' (x) = -A\sin x + B\cos x$$
$$f '' (x) = -A\cos x - B\sin x$$
However, before I try to translate the rest, I feel it worth noting that this is very confusing:
$$\frac{d''f}{dx''}$$
Please stick to something like this:
$$\frac{d^{2}y}{dx^{2}} \quad \text{or} \quad y''$$
3. Aug 21, 2005
### memarf1
yes, that is correct.
Off subject, but what is LaTeX?
4. Aug 21, 2005
### Zurtex
Click on any of my equations and a box should appear showing the code I used to write it.
It's very early in the morning here, I'll come back and look at your problem later sorry, too tired right now.
5. Aug 21, 2005
### memarf1
Ok, well, I have changed the formatting. Thank you for your continued help; I'll check back in the morning. Thanks again.
I'm just looking for the 2 equations to plug into the Runge-Kutta 4. I hope you can help. I have another post with my C++ code in it, but the code is correct. I just have to do this to show my professor.
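For reference, the first-order system the thread arrives at ($f' = g$, $g' = -f$, taking the first set of initial conditions $f(0)=1$, $g(0)=0$, whose exact solution is $f = \cos x$, $g = -\sin x$) can be integrated with classic RK4 in a few lines. This is a sketch in Python rather than the poster's C++:

```python
import math

# Classic 4th-order Runge-Kutta for the system f' = g, g' = -f.
def deriv(f, g):
    return g, -f

def rk4_step(f, g, h):
    k1f, k1g = deriv(f, g)
    k2f, k2g = deriv(f + h/2 * k1f, g + h/2 * k1g)
    k3f, k3g = deriv(f + h/2 * k2f, g + h/2 * k2g)
    k4f, k4g = deriv(f + h * k3f, g + h * k3g)
    return (f + h/6 * (k1f + 2*k2f + 2*k3f + k4f),
            g + h/6 * (k1g + 2*k2g + 2*k3g + k4g))

f, g, h = 1.0, 0.0, 0.01     # f(0) = 1, g(0) = f'(0) = 0
for _ in range(int(math.pi / h)):   # integrate out to x = 3.14
    f, g = rk4_step(f, g, h)
print(f, g)   # close to cos(3.14) and -sin(3.14)
```

The second set of initial conditions ($f(0)=0$, $g(0)=1$) works the same way and picks out the $\sin x$ solution instead.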
https://www.physicsforums.com/threads/help-me.99621/
# Help me
1. Nov 12, 2005
### shravan
How do I calculate the number of prime factors of 360? Please give the method.
2. Nov 13, 2005
### Tide
HINT: Factor the number. :)
P.S. And, no, I am not being glib!
3. Nov 13, 2005
### HallsofIvy
Staff Emeritus
As Tide said: start factoring. It's not that hard. I'll get you started:
360= 2(180)= 2(2)(90)= ...
surely you can do the rest yourself. Did you mean the number of distinct prime factors or just the number of prime factors (i.e. counting "2" more than once)?
4. Nov 13, 2005
### shravan
sorry re question
I am sorry, my question was wrong. However, I wanted to ask how to find the number of perfect-square factors of 360 without factorizing. I am sorry for sending the wrong question.
5. Nov 14, 2005
### bomba923
That's a different question; prime factorization of 360 yields
$$360 = 2^3 \cdot 3^2 \cdot 5$$
and therefore the only perfect-square factors included are
$$\{1, 4, 9, 36\}$$
from observing the prime factorization. There are only four perfect-square factors of 360.
(The "1" is trivial tho )
*Then again, I'll reply later when I write an explicitly mathematical way to calculate the quantity of perfect-square factors of 360 without factorization, as you mentioned
Last edited: Nov 14, 2005
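In the spirit of that promised reply: once the exponents $a_i$ in $n = \prod p_i^{a_i}$ are known, the count of perfect-square factors is $\prod (\lfloor a_i/2 \rfloor + 1)$, since each prime must appear to an even power. A small sketch (it does factorize to get the exponents, which seems hard to avoid in general):

```python
# Count the perfect-square divisors of n from its prime factorization:
# a square divisor uses each prime to an even power 0, 2, ..., so the
# count is the product of (a_i // 2 + 1) over the exponents a_i.
def prime_factorization(n):
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def count_square_divisors(n):
    count = 1
    for exp in prime_factorization(n).values():
        count *= exp // 2 + 1
    return count

print(count_square_divisors(360))   # 360 = 2^3 * 3^2 * 5 -> (1+1)(1+1)(0+1) = 4
```

For 360 this gives the four square factors {1, 4, 9, 36} found above, without listing divisors.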
http://www.researchgate.net/publication/220511595_Exact_Transient_Solutions_of_Nonempty_Markovian_Queues
Article
# Exact Transient Solutions of Nonempty Markovian Queues.
Computers & Mathematics with Applications 01/2006; 52:985-996. DOI: 10.1016/j.camwa.2006.04.022
Source: DBLP
ABSTRACT It has been shown by Sharma and Tarabia [1] that a power series technique can be successfully applied to derive the transient solution for an empty M/M/1/N queueing system. In this paper, we further illustrate how this technique can be used to extend the solution of [1] to allow for an arbitrary number of initial customers in the system. Moreover, from this, other more commonly sought results, such as the transient solution of a nonempty M/M/1/∞ queue, can be computed easily. The emphasis in this paper is theoretical, but a numerical assessment of operational consequences is also given.
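Such transient solutions can be cross-checked numerically by integrating the forward (Kolmogorov) equations of the finite birth-death chain directly; the rates, capacity, horizon and initial state below are arbitrary test values, not taken from the paper:

```python
import numpy as np

# Transient state probabilities of an M/M/1/N queue via the forward equations
# dP/dt = P Q, integrated with a crude explicit Euler scheme.
lam, mu, N = 1.0, 1.2, 5          # arrival rate, service rate, capacity (assumed)
Q = np.zeros((N + 1, N + 1))      # generator of the birth-death chain
for k in range(N + 1):
    if k < N:
        Q[k, k + 1] = lam         # arrival: k -> k+1
    if k > 0:
        Q[k, k - 1] = mu          # service: k -> k-1
    Q[k, k] = -Q[k].sum()         # rows sum to zero

P = np.zeros(N + 1)
P[2] = 1.0                        # start with 2 customers (nonempty initial state)

t, dt = 3.0, 1e-4
for _ in range(int(t / dt)):
    P = P + dt * (P @ Q)
print(np.round(P, 4), P.sum())    # a probability vector summing to 1
```

Because the generator's rows sum to zero, total probability is conserved by the scheme, which makes the final sum a handy correctness check.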
##### Article: Analysis of random walks with an absorbing barrier and chemical rule
ABSTRACT: Recently Tarabia and El-Baz [A.M.K. Tarabia, A.H. El-Baz, Transient solution of a random walk with chemical rule, Physica A 382 (2007) 430-438] have obtained the transient distribution for an infinite random walk moving on the integers -∞<k<∞ of the real line. In this paper, a similar technique is used to derive new elegant explicit expressions for the first passage time and the transient state distributions of a semi-infinite random walk having a "chemical" rule and in the presence of an absorbing barrier at state zero, the walker starting initially at an arbitrary positive integer position i, i>0. In random walk terminology, the busy period concerns the first passage time to zero. The relation of these walks to queueing problems is pointed out, and the distributions of the queue length in the system and the first passage time (busy period) are derived. As special cases of our result, the Conolly et al. [B.W. Conolly, P.R. Parthasarathy, S. Dharmaraja, A chemical queue, Math. Sci. 22 (1997) 83-91] solution and the probability density function (PDF) of the busy period for the M/M/1/∞ queue are easily obtained. Finally, numerical values are given to illustrate the efficiency and effectiveness of the proposed approach.
Journal of Computational and Applied Mathematics 03/2009; 225(2):612–620. · 0.99 Impact Factor
##### Article: Transient results for M/M/1/c queues via path counting
ABSTRACT: We find combinatorially the probability of having n customers in an M/M/1/c queueing system at an arbitrary time t when the arrival rate λ and the service rate µ are equal, including the case c = ∞. Our method uses path counting methods and finds a bijection between the paths of the type needed for the queueing model and paths of another type which are easy to count. The bijection involves some interesting geometric methods.
International Journal of Mathematics in Operational Research 09/2008; 1.
• ##### Article: Time dependent analysis of M/M/1 queue with server vacations and a waiting server
ABSTRACT: In this paper, we have obtained explicit expressions for the time dependent probabilities of the M/M/1 queue with server vacations under a waiting server. The corresponding steady state probabilities have been obtained. We also obtain the time dependent performance measures of the systems. Numerical illustrations are provided to examine the sensitivity of the system state probabilities to changes in the parameters of the system.
08/2011;
http://mathhelpforum.com/advanced-algebra/100543-monoids-groups.html
1. ## Monoids and Groups
Can I get help on the following problem:
Show that if $n \ge 3$ then the center of $S_n$ is of order 1.
2. Originally Posted by thomas_donald
Can I get help on the following problem:
Show that if $n \ge 3$ then the center of $S_n$ is of order 1.
Let $\displaystyle n\geq 4$.
Let $\displaystyle \sigma$ be a non-trivial permutation. Then there exists $\displaystyle a,b\in \{1,2,...,n\}$ such that $\displaystyle \sigma(a) = b$ with $\displaystyle a\not = b$. Notice that $\displaystyle a,\sigma^{-1}(a),b$ has at most three distinct points; let $\displaystyle c$ be different from all three of these (since $\displaystyle n\geq 4$ this is possible). There exists a permutation $\displaystyle \tau$ which satisfies $\displaystyle \tau(a) = \sigma^{-1}(a),\tau(b) = c$. Thus, $\displaystyle \tau \sigma(a) = \tau (b) = c$ and $\displaystyle \sigma \tau(a) = \sigma \sigma^{-1}(a) = a$. We see that $\displaystyle \tau \sigma \not = \sigma \tau$. This shows if $\displaystyle \sigma$ is not trivial then it cannot lie in the center. (For $\displaystyle n = 3$, one can check directly that no non-identity permutation commutes with every element of $S_3$.)
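The claim is also easy to confirm by brute force for small $n$; a sketch, representing a permutation as a tuple $p$ with $p[i]$ the image of $i$:

```python
from itertools import permutations

# Brute-force check that the center of S_n is trivial for n >= 3.
def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(p)))

def center_size(n):
    Sn = list(permutations(range(n)))
    return sum(all(compose(s, t) == compose(t, s) for t in Sn) for s in Sn)

print([center_size(n) for n in (2, 3, 4)])   # S_2 is abelian; S_3, S_4 have trivial center
```

As expected, $S_2$ (being abelian) is its own center, while $S_3$ and $S_4$ have centers of order 1.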
https://docs.sympy.org/0.7.6.1/modules/polys/literature.html
Literature
The following is a non-comprehensive list of publications that were used as a theoretical foundation for implementing polynomials manipulation module.
[Kozen89] D. Kozen, S. Landau, Polynomial decomposition algorithms, Journal of Symbolic Computation 7 (1989), pp. 445-456
[Liao95] Hsin-Chao Liao, R. Fateman, Evaluation of the heuristic polynomial GCD, International Symposium on Symbolic and Algebraic Computation (ISSAC), ACM Press, Montreal, Quebec, Canada, 1995, pp. 240–247
[Gathen99] J. von zur Gathen, J. Gerhard, Modern Computer Algebra, First Edition, Cambridge University Press, 1999
[Weisstein09] Eric W. Weisstein, Cyclotomic Polynomial, From MathWorld - A Wolfram Web Resource, http://mathworld.wolfram.com/CyclotomicPolynomial.html
https://api.philpapers.org/rec/ULILOP
# Logic of paradoxes in classical set theories
Synthese 190 (3):525-547 (2013)
Author: Boris Culina, University of Applied Sciences Velika Gorica, Croatia

Abstract: According to Cantor (Mathematische Annalen 21:545–586, 1883; Cantor's letter to Dedekind, 1899) a set is any multitude which can be thought of as one ("jedes Viele, welches sich als Eines denken läßt") without contradiction, i.e. a consistent multitude. Other multitudes are inconsistent or paradoxical. Set-theoretical paradoxes have a common root: a lack of understanding of why some multitudes are not sets. Why can some multitudes of objects of thought not themselves be objects of thought? Moreover, it is a logical truth that such multitudes do exist. However, we do not understand this logical truth as well as we understand, for example, the logical truth $${\forall x \, x = x}$$. In this paper we formulate a logical truth which we call the productivity principle. Russell (Proc Lond Math Soc 4(2):29–53, 1906) was the first to formulate this principle, but in a restricted form and with a different purpose. The principle explicates a logical mechanism that lies behind paradoxical multitudes, and it is as understandable as any simple logical truth. However, it does not explain the concept of set. It only sets logical bounds on the concept within the framework of the classical two-valued $${\in}$$-language. The principle behaves as a logical regulator of any theory we formulate to explain and describe sets. It provides tools to identify paradoxical classes inside the theory. We show how the known paradoxical classes follow from the productivity principle and how the principle gives us a uniform way to generate new paradoxical classes. In the case of ZFC set theory the productivity principle shows that the limitation of size principles are of a restrictive nature and that they do not explain which classes are sets. The productivity principle, as a logical regulator, can have a definite heuristic role in the development of a consistent set theory.
We sketch such a theory: the cumulative cardinal theory of sets. The theory is based on the idea of cardinality of collecting objects into sets. Its development is guided by the productivity principle in such a way that its consistency seems plausible. Moreover, the theory inherits good properties from the cardinal conception and from the cumulative conception of sets. Because of the cardinality principle it can easily justify the replacement axiom, and because of the cumulative property it can easily justify the power set axiom and the union axiom. It would be possible to prove that the cumulative cardinal theory of sets is equivalent to the Morse–Kelley set theory. In this way we provide a natural and plausibly consistent axiomatization for the Morse–Kelley set theory.

Keywords: Set theory, Paradoxes, Limitation of size principles, Cumulative cardinal hierarchy

DOI: 10.1007/s11229-011-0047-x
## References found in this work
The Iterative Conception of Set.George Boolos - 1971 - Journal of Philosophy 68 (8):215-231.
Foundations of Set Theory.Abraham Adolf Fraenkel & Yehoshua Bar-Hillel - 1958 - Atlantic Highlands, NJ, USA: North-Holland.
Cantorian Set Theory and Limitation of Size.Michael Hallett - 1984 - Oxford, England: Clarendon Press.
On Some Difficulties in the Theory of Transfinite Numbers and Order Types.Bertrand Russell - 1905 - Proceedings of the London Mathematical Society 4 (14):29-53.
https://testbook.com/question-answer/what-would-be-the-corresponding-transfer-function--5f3cfe8a425b874985ba7c1a
# What would be the corresponding transfer function of a linear time invariant system having a unit step function as its impulse response?
This question was previously asked in
UPRVUNL AE EC 2014 Official Paper
1. 1/s
2. 1/s^2
3. 1
4. s

Correct answer: Option 1 (1/s)
## Detailed Solution
Concept:
The transfer function is defined as the ratio of the Laplace transform of the output to the Laplace transform of the input, assuming zero initial conditions.
$$TF = \frac{{C\left( s \right)}}{{R\left( s \right)}}$$
Equivalently, the impulse response is the inverse Laplace transform of the transfer function; in other words, the transfer function is the Laplace transform of the impulse response.
Analysis:
Given: h(t) = u(t)
Transfer function = Laplace transform of Impulse response.
Transfer function = L{u(t)}
Transfer function = 1/s
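The last step can be sanity-checked numerically (a sketch I added, not part of the original solution): the Laplace transform of the unit step is the integral of e^(-st) over t from 0 to infinity, which should equal 1/s for any s > 0.

```python
import math

# Approximate L{u(t)}(s) = integral_0^inf e^(-s t) dt with the trapezoidal
# rule on [0, t_max]; the truncated tail e^(-s t_max)/s is negligible for
# the sample values used below.
def laplace_of_unit_step(s, t_max=40.0, n=200_000):
    dt = t_max / n
    integrand = lambda t: math.exp(-s * t)
    interior = sum(integrand(i * dt) for i in range(1, n))
    return dt * (0.5 * integrand(0.0) + interior + 0.5 * integrand(t_max))

# The numeric result matches the closed form 1/s.
assert abs(laplace_of_unit_step(2.0) - 1 / 2.0) < 1e-5
assert abs(laplace_of_unit_step(1.0) - 1.0) < 1e-5
```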
https://socratic.org/questions/what-is-the-value-for-y-explain-how-you-arrived-at-your-answer
# What is the value for y? Explain how you arrived at your answer.
Dec 23, 2016
$y = 14$
#### Explanation:
The sum of the internal angles in a triangle is ${180}^{\circ}$. The triangle under consideration is isosceles with $\overline{A C} = \overline{B C}$, so $50 = 2 x \to x = 25$.
Considering the whole triangle we have
$180 = 2 \cdot 50 + 5 y + 10 \to y = 14$ so finally
$x = 25$ and $y = 14$
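The two equations in the explanation can be re-solved mechanically (a trivial check I added):

```python
# Re-derive x and y from the relations in the explanation:
# isosceles triangle: 50 = 2x, and angle sum: 180 = 2*50 + 5y + 10.
x = 50 / 2
y = (180 - 2 * 50 - 10) / 5
assert x == 25 and y == 14
```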
https://www.physicsforums.com/threads/graphene-greens-function-technique.631570/
# Graphene - Green's function technique
#1 (csopi):
Hi,
I am looking for a comprehensive review about using Matsubara Green's function technique for graphene (or at least some hints in the following problem). I have already learned some finite temperature Green's function technique, but only the basics.
What confuses me is that graphene has two sublattices (say A and B), so in principle we have four non-interacting Green's functions: $$G_{AA}(k,\tau)=-\langle T_{\tau}a_k(\tau)a_k^{\dagger}(0)\rangle,$$
where $$a_k$$ is the annihilation operator acting on the A sublattice; $$G_{AB}$$, $$G_{BA}$$ and $$G_{BB}$$ are defined in a similar way.
Of course there are connections between them, but $$G_{AA}$$ and $$G_{AB}$$ are essentially different. Now, when computing e.g. the screened Coulomb potential, I do not know which Green's function should be used to evaluate the polarization bubble.
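The sublattice structure can be made concrete (my own illustrative sketch, not from the thread, with a generic off-diagonal amplitude f(k) standing in for the nearest-neighbour structure factor): the free propagator is a 2x2 matrix in sublattice space, obtained by inverting (i*w_n - H(k)), and its four entries are exactly G_AA, G_AB, G_BA, G_BB.

```python
# For H(k) = [[0, f(k)], [conj(f(k)), 0]], the Matsubara propagator
# G(k, i*w_n) = (i*w_n - H(k))^{-1} is a 2x2 matrix in sublattice space.
def green_matrix(fk, wn):
    """Entries of (i*w_n - H)^{-1} for H = [[0, fk], [conj(fk), 0]]."""
    iw = 1j * wn
    det = iw * iw - abs(fk) ** 2      # determinant of (i*w_n - H)
    g_aa = iw / det
    g_bb = iw / det
    g_ab = fk / det
    g_ba = fk.conjugate() / det
    return g_aa, g_ab, g_ba, g_bb

# Sanity check: G * (i*w_n - H) should be the identity matrix.
fk, wn = 0.8 + 0.3j, 1.7
g_aa, g_ab, g_ba, g_bb = green_matrix(fk, wn)
iw = 1j * wn
assert abs(g_aa * iw - g_ab * fk.conjugate() - 1) < 1e-12   # (0,0) entry
assert abs(-g_aa * fk + g_ab * iw) < 1e-12                  # (0,1) entry
```

Since a local coupling involves the total density on both sublattices, a bubble built from this propagator carries a trace over sublattice indices, so both diagonal (G_AA-type) and off-diagonal (G_AB-type) entries enter.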
#2 (DrDu):
I think you will find the answer you are looking for when you consider the expression for the bubble in coordinate space.
#3 (csopi):
Dear DrDu,
thank you for your response, but I do not think I understand how your suggestion helps me. Please explain it a bit more thoroughly.
#4 (DrDu):
I mean that the electromagnetic field couples locally to the electrons. Hence the bubble is some integral containing a product of two Green's functions, G(x,x')G(x,x'). What consequences does locality have in the case of graphene?
#6 (csopi):
Dear tejas777,
This is a very nice review, thank you very much. Let me ask just one final question: can you explain where
$$F_{s,s'}(p,q)$$
in eqs. (2.12) and (2.13) comes from?
#7 (tejas777):
Look at section 6.2 (on page 19/23) in:
Now, the link contains a specific example. You can probably use this type of approach to derive a more general expression, one involving the ##s## and ##s'##. I may have read an actual journal article containing the rigorous analysis, but I cannot recall which one it was at the moment. If I am able to find that article I will post it here asap.
http://mathhelpforum.com/calculus/30606-differential-equation-equilibria.html
# Math Help - Differential equation equilibria
1. ## Differential equation equilibria
consider an interaction between two mutually inhibiting proteins with concentrations x and y, given by the differential equations:
dx/dt = f(y)-x
and
dy/dt = g(x)-y
where both f(y) and g(x) are decreasing functions.
*Show that equilibria occur when f[g(x)] = x.
How do I do the asterisked part? Any help would be very much appreciated.
2. Originally Posted by calcusucks
consider an interaction between two mutually inhibiting proteins with concentrations x and y, given by the differential equations:
dx/dt = f(y)-x
and
dy/dt = g(x)-y
where both f(y) and g(x) are decreasing functions.
*Show that equilibria occur when f[g(x)] = x.
How do I do the asterisked part? Any help would be very much appreciated.
Require that both rates of change equal zero:
f(y) - x = 0 => x = f(y) ..... (1)
g(x) - y = 0 => y = g(x) ..... (2)
Solve equations (1) and (2) simultaneously:
Substitute (2) into (1): x = f(g(x)), i.e. equilibria occur exactly when f[g(x)] = x.
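To make the asterisked condition concrete, here is a small numerical sketch with hypothetical decreasing functions f(y) = 1/(1 + y) and g(x) = 1/(1 + x) (my own choice, not from the problem statement): solving f(g(x)) = x by bisection yields the equilibrium x, after which y = g(x), and both rates of change vanish there.

```python
# Hypothetical decreasing f and g (chosen only for illustration).
def f(y): return 1.0 / (1.0 + y)
def g(x): return 1.0 / (1.0 + x)

def solve_equilibrium(lo=0.0, hi=1.0, iters=100):
    """Bisection on h(x) = f(g(x)) - x; here h(0) > 0 and h(1) < 0."""
    h = lambda x: f(g(x)) - x
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if h(lo) * h(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

x_eq = solve_equilibrium()
y_eq = g(x_eq)
assert abs(f(y_eq) - x_eq) < 1e-9   # dx/dt = 0 at the equilibrium
assert abs(g(x_eq) - y_eq) < 1e-12  # dy/dt = 0 by construction
```

For these particular f and g the equilibrium is symmetric, x = y = (sqrt(5) - 1)/2, roughly 0.618.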
http://git.suckless.org/libzahl/commit/5990e4e42754a84edfaed2a31ee5cea3c4c9d9b1.html
libzahl
big integer library
git clone git://git.suckless.org/libzahl
commit 5990e4e42754a84edfaed2a31ee5cea3c4c9d9b1
parent a541877c84e798e5a46c76f4cf4c362cfdcebae2
Author: Mattias Andrée <[email protected]>
Date: Thu, 2 Jun 2016 12:06:27 +0200
Signed-off-by: Mattias Andrée <[email protected]>
Diffstat:
M doc/arithmetic.tex | 36 +++++++++++++++++++++++++++++++++++-
M doc/libzahl.tex | 26 +++++++++++++++++++++++++-
M doc/what-is-libzahl.tex | 13 +++++++------
3 files changed, 67 insertions(+), 8 deletions(-)
diff --git a/doc/arithmetic.tex b/doc/arithmetic.tex
@@ -186,6 +186,7 @@ lend you a hand.
\}
\end{alltt}
+% Floored division
\begin{alltt}
void \textcolor{c}{/* \textrm{All arguments most be unique.} */}
divmod_floor(z_t q, z_t r, z_t n, z_t d)
@@ -196,9 +197,10 @@ lend you a hand.
\}
\end{alltt}
+% Ceiled division
\begin{alltt}
void \textcolor{c}{/* \textrm{All arguments most be unique.} */}
- divmod_ceil(z_t q, z_t r, z_t n, z_t d)
+ divmod_ceiling(z_t q, z_t r, z_t n, z_t d)
\{
zdivmod(q, r, n, d);
if (!zzero(r) && isneg(n) == isneg(d))
@@ -206,6 +208,10 @@ lend you a hand.
\}
\end{alltt}
+% Division with round half aways from zero
+% This rounding method is also called:
+% round half toward infinity
+% commercial rounding
\begin{alltt}
/* \textrm{This is how we normally round numbers.} */
void \textcolor{c}{/* \textrm{All arguments most be unique.} */}
@@ -227,6 +233,9 @@ not award you a face-slap. % Had positive punishment
% been legal or even mildly pedagogical. But I would
% not put it past Coca-Cola.
+% Division with round half toward zero
+% This rounding method is also called:
+% round half away from infinity
\begin{alltt}
void \textcolor{c}{/* \textrm{All arguments most be unique.} */}
divmod_half_to_zero(z_t q, z_t r, z_t n, z_t d)
@@ -241,6 +250,9 @@ not award you a face-slap. % Had positive punishment
\}
\end{alltt}
+% Division with round half up
+% This rounding method is also called:
+% round half towards positive infinity
\begin{alltt}
void \textcolor{c}{/* \textrm{All arguments most be unique.} */}
divmod_half_up(z_t q, z_t r, z_t n, z_t d)
@@ -256,6 +268,9 @@ not award you a face-slap. % Had positive punishment
\}
\end{alltt}
+% Division with round half down
+% This rounding method is also called:
+% round half towards negative infinity
\begin{alltt}
void \textcolor{c}{/* \textrm{All arguments most be unique.} */}
divmod_half_down(z_t q, z_t r, z_t n, z_t d)
@@ -271,6 +286,16 @@ not award you a face-slap. % Had positive punishment
\}
\end{alltt}
+% Division with round half to even
+% This rounding method is also called:
+% unbiased rounding (really stupid name)
+% convergent rounding (also quite stupid name)
+% statistician's rounding
+% Dutch rounding
+% Gaussian rounding
+% odd–even rounding
+% bankers' rounding
+% It is the default rounding method used in IEEE 754.
\begin{alltt}
void \textcolor{c}{/* \textrm{All arguments most be unique.} */}
divmod_half_to_even(z_t q, z_t r, z_t n, z_t d)
@@ -288,6 +313,7 @@ not award you a face-slap. % Had positive punishment
\}
\end{alltt}
+% Division with round half to odd
\newpage
\begin{alltt}
void \textcolor{c}{/* \textrm{All arguments most be unique.} */}
@@ -306,6 +332,14 @@ not award you a face-slap. % Had positive punishment
\}
\end{alltt}
+% Other standard methods include stochastic rounding
+% and round half alternatingly, and what is is
+% New Zealand called “Swedish rounding”, which is
+% no longer used in Sweden, and is just normal round
+% half aways from zero but with 0.5 rather than
+% 1 as the integral unit, and is just a special case
+% of a more general rounding method.
+
Currently, libzahl uses an almost trivial division
algorithm. It operates on positive numbers. It begins
by left-shifting the divisor as must as possible with
diff --git a/doc/libzahl.tex b/doc/libzahl.tex
@@ -29,7 +29,9 @@
\geometry{margin=1in}
\usepackage{microtype}
\DisableLigatures{encoding = *, family = *} % NB! disables -- and ---
-\frenchspacing
+% I really dislike fi- and ff-ligatures, just like look so wrong.
+\frenchspacing % i.e. non-American spacing: i.e. no extra space after sentences,
+ % this also means that periods do not have to be context-marked.
\newcommand{\chapref}[1]{\hyperref[#1]{Chapter~\ref*{#1} [\nameref*{#1}], page \pageref*{#1}}}
\newcommand{\secref}[1]{\hyperref[#1]{Section~\ref*{#1} [\nameref*{#1}], page \pageref*{#1}}}
@@ -62,6 +64,28 @@ purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.}
\newpage
+
+% Conventionally, most words in a title in English should start with
+% uppercase. I believe that this is inconsistent stupidity, pardon my
+% Klatchian. There is not consensus of which words should not start
+% with lowercase or even if any shall start with lowercase. There is
+% also no consensus on how long the title should be before only the
+% first word should start with uppercase. It is only generally (but
+% not always) agreed that most words should start with uppercase and
+% when the title is too long only the first word start with uppercase.
+% I believe that is is better to stick with the Swedish convention:
+% It should look just like a sentience except it may not end with a
+% period unless that is part of an ellipsis or an abbreviation.
+% I would also like to use straight apostrophes, like in French, (and
+% reserve the curved ones for quotes,) but that is just too painful in
+% LaTeX, so I will only be do so for French words. Most style guides
+% for English will be followed. They will only be broken if they are
+% stupid or inferior. For example, I will never write ‘CPU's’ for
+% plural of CPU — that's just stupid, — only for genitive, nor
+% will I write ‘CPUs’ for plural of CPU, because it is inferior to
+% ‘CPU:s’.
+
+
\shorttoc{Short contents}{0}
\setcounter{tocdepth}{2}
\dominitoc
diff --git a/doc/what-is-libzahl.tex b/doc/what-is-libzahl.tex
@@ -15,8 +15,8 @@ what is its limitations.
\label{sec:The name and the what}
In mathematics, the set of all integers is represented
-by a bold uppercase Z' ({\bf Z}), or sometimes
-double-stroked (blackboard bold) ($\mathbb{Z}$). This symbol
+by a bold uppercase Z' ({\bf Z}), or sometimes % proper symbol
+double-stroked (blackboard bold) ($\mathbb{Z}$). This symbol % hand-written style, specially on whiteboards and blackboards
is derived from the german word for integers: Zahlen'
[\textprimstress{}tsa\textlengthmark{}l\textschwa{}n],
whose singular is Zahl' [tsa\textlengthmark{}l]. libzahl
@@ -100,8 +100,8 @@ followed by output parameters, and output parameters
followed by input parameters. The former variant is the
conventional for C functions. The latter is more in style
with primitive operations, pseudo-code, mathematics, and
-how it would look if the output was return. In libzahl,
-the latter convention is used. That is, we write
+how it would look if the output was return. In libzahl, the
+latter convention is used. That is, we write
\begin{alltt}
@@ -129,8 +129,9 @@ $augend + addend \rightarrow sum$.
http://mathhelpforum.com/pre-calculus/62377-inequality-using-sign-chart.html
# Thread: Inequality using sign chart
1. ## Inequality using sign chart
Solve the inequality using a sign chart.
(x^3 - 4x)/(x^2 + 2) ≤ 0.
I'm not really sure how to start, but I do know how to use a sign chart.
Any help would be appreciated.
2. You mean something like
$\displaystyle \begin{array}{*{20}c} {} &\vline & {( - \infty , - 2)} &\vline & {( - 2,0)} &\vline & {(0,2)} &\vline & {(2,\infty )} \\ \hline x &\vline & - &\vline & - &\vline & + &\vline & + \\ \hline {x - 2} &\vline & - &\vline & - &\vline & - &\vline & + \\ \hline {x + 2} &\vline & - &\vline & + &\vline & + &\vline & + \\ \end{array}$
?
3. Originally Posted by Krizalid
You mean something like
$\displaystyle \begin{array}{*{20}c} {} &\vline & {( - \infty , - 2)} &\vline & {( - 2,0)} &\vline & {(0,2)} &\vline & {(2,\infty )} \\ \hline x &\vline & - &\vline & - &\vline & + &\vline & + \\ \hline {x - 2} &\vline & - &\vline & - &\vline & - &\vline & + \\ \hline {x + 2} &\vline & - &\vline & + &\vline & + &\vline & + \\ \end{array}$
?
Yes, but I was able to figure it out.
If at all possible, could you look at my post titled Pre-Calc HW questions?
I really need help on those.
Thanks!!
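A brute-force check (my addition, independent of the chart) confirms the chart's conclusion: since x^2 + 2 > 0 for all x, the sign of (x^3 - 4x)/(x^2 + 2) equals the sign of x(x - 2)(x + 2), giving the solution set (-inf, -2] union [0, 2].

```python
# Brute-force confirmation of the sign chart for (x^3 - 4x)/(x^2 + 2) <= 0.
# The denominator is always positive, so the solution set should be
# x <= -2 or 0 <= x <= 2.
def expr(x):
    return (x**3 - 4*x) / (x**2 + 2)

def in_solution_set(x):
    return x <= -2 or 0 <= x <= 2

for i in range(-500, 501):
    x = i / 100.0           # sample points in [-5, 5]
    assert (expr(x) <= 0) == in_solution_set(x)
```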
https://2022.help.altair.com/2022/hwsolvers/ms/topics/solvers/ms/optimization_problem_types_r.htm
# Optimization Problem Types
Optimization problems types include unconstrained optimization, simple bound constraints, and nonlinearly constrained optimization.
Unconstrained Optimization
An unconstrained problem has no constraints: there are no equality or inequality constraints that the solution b has to satisfy, and there are no design limits either.
Simple Bound Constraints
A bound-constrained problem has only lower and upper bounds on the design parameters; there are no equality or inequality constraints that the solution b has to satisfy. In the finite element world, these are also known as side constraints.
Nonlinearly Constrained Optimization
This is the most complex variation of the optimization problem. The solution has to satisfy some nonlinear constraints (inequality and/or equality) and there are bounds on the design variables that specify limits on the values they can assume.
It is important to know about these problem types because several optimization search methods are available in MotionSolve. Some of these methods work only for specific types of optimization problems.
The optimization problem formulation is: (1)

minimize $\psi_0(x, b)$  (objective function)
subject to $\psi_i(x, b) \le 0$  (inequality constraints)
$\psi_j(x, b) = 0$  (equality constraints)
$b_L \le b \le b_U$  (design limits)
The functions are assumed to have the form: (2)

$\psi_k(x, b) = \psi_{k0}(b) + \int_{t_0}^{t_f} L_k(x, b, t)\, dt$
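As a toy illustration of the bound-constrained ("side constraints") case (my own example; it does not show MotionSolve's actual search methods), projected gradient descent minimizes psi_0(b) = (b - 3)^2 subject to 0 <= b <= 2. The unconstrained minimizer b = 3 violates the upper design limit, so the solution sits on the bound b_U = 2.

```python
# Minimize (b - 3)^2 subject to 0 <= b <= 2 by gradient descent with
# projection back into the box [b_L, b_U] after each step.
def projected_gradient_descent(grad, b0, b_lo, b_hi, step=0.1, iters=500):
    b = b0
    for _ in range(iters):
        b = b - step * grad(b)
        b = min(max(b, b_lo), b_hi)   # project onto the design limits
    return b

b_star = projected_gradient_descent(lambda b: 2 * (b - 3), b0=0.5, b_lo=0.0, b_hi=2.0)
assert abs(b_star - 2.0) < 1e-9   # the upper bound is active
```

If the box is widened so that the unconstrained minimizer is feasible (e.g. b_U = 5), the same iteration converges to b = 3 instead.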
https://www.transtutors.com/questions/under-the-accrual-basis-of-accounting-how-much-net-revenue-would-be-reported-on-fair-598667.htm
# Under the accrual basis of accounting, how much net revenue would be reported on Fairplay’s 2009...
Fairplay had the following information related to the sale of its products during 2009, which was its first year of business:
Revenue: $1,000,000
Returns of goods sold: $100,000
Cash collected: $800,000
Cost of goods sold: $700,000
Under the accrual basis of accounting, how much net revenue would be reported on Fairplay’s 2009 income statement?
A. $200,000
B. $900,000
C. $1,000,000
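Under the accrual basis, revenue is recognized when earned, net of returns; cash collected and cost of goods sold do not enter the net revenue figure. A quick check of the arithmetic:

```python
# Accrual-basis net revenue: revenue earned minus returns.
# Cash collected ($800,000) and COGS ($700,000) are irrelevant here.
revenue = 1_000_000
returns = 100_000
net_revenue = revenue - returns
print(net_revenue)  # 900000, i.e. answer B
```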
http://www.jnr.ac.cn/EN/10.31497/zrzyxb.20171187
|
JOURNAL OF NATURAL RESOURCES ›› 2018, Vol. 33 ›› Issue (12): 2149-2166.
• Resource Evaluation •
### The Seasonal Spatiotemporal Variation of the Temperature Mutation and Warming Hiatus over northern China during 1951-2014
LIANG Long-teng, MA Long, LIU Ting-xi, SUN Bo-lin, ZHOU Ying, LIU Yang
1. College of Water Conservancy and Civil Engineering, Inner Mongolia Agricultural University, Hohhot 010018, China
• Received:2017-11-09 Revised:2018-01-29 Online:2018-12-20 Published:2018-12-20
• Supported by:
Program for Young Talents of Science and Technology in Universities of Inner Mongolia Autonomous Region; National Natural Science Foundation of China, No. 51669016; Ministry of Science and Technology Innovation Team on the Hydrological and Ecological Effects of Cold and Arid Regions
Abstract: Over the past hundred years, remarkable changes have occurred in the global climate. Two significant characteristics of climate change are temperature mutation and warming hiatus, which have led to a variety of extreme climate events at all scales, including in northern China. Therefore, revealing the features and rules of temperature mutation and warming hiatus in northern China could provide a basis for a deeper understanding of climate change, for preventing and mitigating disasters, and for improving the ecological environment in northern China and worldwide. Based on the seasonal (monthly) average minimum temperature, average temperature and average maximum temperature data at 357 meteorological stations in northern China and surrounding areas during 1951-2014, this paper analyzes the spatio-temporal variation of three types of seasonal temperature (the average minimum temperature, average temperature and average maximum temperature) mutation and warming hiatus over northern China by using the Mann-Kendall test, the sliding t test and other methods. The results indicate that, according to the average minimum temperature, average temperature and average maximum temperature, the abrupt change of temperature and warming hiatus became later, and the cycle from abrupt change to warming hiatus became shorter, as latitude decreases. In spring and winter, the temperature mutation and warming hiatus in Northeast China were the earliest (1970s-1980s, 1993-2002), in North China were the second and in Northwest China were the latest (1990s-2000s, 1996-2010). In summer and autumn, the temperature mutation and warming hiatus in North China were the earliest, in Northeast China were the second and in Northwest China were the latest (1990s-2010s). There was little distinction among areas in warming hiatus year.
Meteorological stations with no mutation and no hiatus were concentrated in mountainous regions, high-latitude areas and the southern part of the North China Plain, where the surrounding areas were relatively late in mutation and warming hiatus. The mutations and warming hiatus of the same kinds of temperature became later in the order of winter (1981-1990), spring, autumn and summer (1994-2008) and winter (1995-2008), autumn, summer and spring (1998-2010). The cycle from temperature mutation to warming hiatus became shorter in the sequence of winter, spring, autumn and summer, during which the average minimum temperature period was the longest in winter (9-24 a). The average minimum temperature mutated earliest in spring, summer and winter, which occurred during 1972-1999, 1987-1999 and 1971-2000; the average temperature was the second to mutate, while the average maximum temperature was the latest to mutate, which occurred during 1975-2008, 1994-2008, 1972-2006. However, in autumn, the mutation of the average temperature was the earliest (1982-2001), the mutation of the lowest average temperature was the second, and the mutation of the highest average temperature was the latest (1987-2001). The warming hiatus of the average minimum temperature, average temperature and average maximum temperature in spring and summer became later successively, which were during 1994-2008, 1997-2008 and 1997-2010, respectively, while the sequence was opposite in autumn and winter. The lengths of periods from temperature mutation to warming hiatus in each season, in order, were that of the average minimum temperature (9-18 a), that of the average temperature and that of the average maximum temperature (5-12 a).
In particular, the three kinds of temperature all mutated earliest in the south of North China in summer, which is inconsistent with the rule for the whole area that mutation becomes later as latitude decreases; moreover, the temperature at most stations in this area did not mutate, which is also inconsistent with the general rule for the whole area.
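The Mann-Kendall test used above detects trends from the signs of pairwise differences. A minimal sketch of its S statistic (the textbook definition, not the authors' code):

```python
# Mann-Kendall S statistic: sum of signs of all pairwise differences.
# S near +n*(n-1)/2 indicates a monotonic increasing trend, near the
# negative of that a decreasing trend, and near 0 no trend.
def mann_kendall_s(x):
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (x[j] > x[i]) - (x[j] < x[i])  # sign of x[j] - x[i]
    return s

print(mann_kendall_s([1, 2, 3, 4]))  # 6 (the maximum for n = 4)
```

In the full test, S is normalized by its variance to give the Z statistic against which significance (and hence mutation years) is judged.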
CLC Number:
• P423.3
https://arxiv.org/abs/1504.03867?context=math.GR
|
# Bass' triangulability problem
Abstract: Exploring Bass' Triangulability Problem on unipotent algebraic subgroups of the affine Cremona groups, we prove a triangulability criterion, the existence of nontriangulable connected solvable affine algebraic subgroups of the Cremona groups, and stable triangulability of such subgroups; in particular, in the stable range we answer Bass' Triangulability Problem in the affirmative. To this end we prove a theorem on invariant subfields of $1$-extensions. We also obtain a general construction of all rationally triangulable subgroups of the Cremona groups and, as an application, classify rationally triangulable connected one-dimensional unipotent affine algebraic subgroups of the Cremona groups up to conjugacy.
Comments: Minor corrections of Theorem 1. 15 pages
Subjects: Algebraic Geometry (math.AG); Group Theory (math.GR)
Cite as: arXiv:1504.03867 [math.AG] (or arXiv:1504.03867v5 [math.AG] for this version)
## Submission history
From: Vladimir Popov
[v1] Wed, 15 Apr 2015 11:39:20 UTC (14 KB)
[v2] Fri, 24 Apr 2015 11:18:24 UTC (14 KB)
[v3] Thu, 28 May 2015 18:14:39 UTC (25 KB)
[v4] Fri, 12 Jun 2015 19:00:11 UTC (25 KB)
[v5] Wed, 14 Oct 2015 21:14:30 UTC (26 KB)
https://chesterrep.openrepository.com/handle/10034/305487
Based at Thornton Science Park, the new Faculty of Science and Engineering is located in a major research and innovation hub for the North West which is only a 20-minute bus trip from the main Chester Campus. The Faculty offers degrees in engineering and science disciplines using a strongly interdisciplinary teaching philosophy.
Recent Submissions
• Local-Partial Signal Combining Schemes for Cell-Free Large-Scale MU-MIMO Systems with Limited Fronthaul Capacity and Spatial Correlation Channels
Cell-free large-scale multi-user MIMO is a promising technology for 5G-and-beyond mobile communication networks. Scalable signal processing is the key challenge in achieving the benefits of cell-free systems. This study examines a distributed approach for cell-free deployment with user-centric configuration and finite fronthaul capacity. Moreover, the impact of scaling the pilot length, the number of access points (APs), and the number of antennas per AP on the achievable average spectral efficiency is investigated. Using the dynamic cooperative clustering (DCC) technique and large-scale fading decoding process, we derive an approximation of the signal-to-interference-plus-noise ratio in the criteria of two local combining schemes: Local-Partial Regularized Zero Forcing (RZF) and Local Maximum Ratio (MR). The results indicate that distributed approaches in the cell-free system have the advantage of decreasing the fronthaul signaling and the computing complexity. The results also show that the Local-Partial RZF provides the highest average spectral efficiency among all the distributed combining schemes because the computational complexity of the Local-Partial RZF is independent of the number of user terminals (UTs). Therefore, it does not grow as the number of UTs increases.
• AFOM: Advanced Flow of Motion Detection Algorithm for Dynamic Camera Videos
The surveillance videos taken from dynamic cameras are susceptible to multiple security threats like replay attacks, man-in-the-middle attacks, pixel correlation attacks etc. Using unsupervised learning, it is a challenge to detect objects in such surveillance videos, as fixed objects may appear to be in motion alongside the actual moving objects. But despite this challenge, the unsupervised learning techniques are efficient as they save object labelling and model training time, which is usually a case with supervised learning models. This paper proposes an effective computer vision-based object identification algorithm that can detect and separate stationary objects from moving objects in such videos. The proposed Advanced Flow Of Motion (AFOM) algorithm takes advantage of motion estimation between two consecutive frames and induces the estimated motion back to the frame to provide an improved detection on the dynamic camera videos. The comparative analysis demonstrates that the proposed AFOM outperforms a traditional dense optical flow (DOF) algorithm with an average increased difference of 56 % in accuracy, 61 % in precision, and 73 % in pixel space ratio (PSR), and with minimal higher object detection timing.
• A single-layer asymmetric RNN with low hardware complexity for solving linear equations
A single layer neural network for the solution of linear equations is presented. The proposed circuit is based on the standard Hopfield model albeit with the added flexibility that the interconnection weight matrix need not be symmetric. This results in an asymmetric Hopfield neural network capable of solving linear equations. PSPICE simulation results are given which verify the theoretical predictions. A simple technique to incorporate re-configurability into the circuit for setting the different weights of the interconnection is also included. Experimental results for circuits set up to solve small problems further confirm the operation of the proposed circuit.
• FireNet-v2: Improved Lightweight Fire Detection Model for Real-Time IoT Applications
Fire hazards cause huge ecological, social and economical losses in day to day life. Due to the rapid increase in the prevalence of fire accidents, it has become vital to equip the assets with fire prevention systems. There have been numerous researches to build a fire detection model in order to avert such accidents, with recent approaches leveraging the enormous improvements in computer vision deep learning models. However, most deep learning models have to compromise with their performance and accurate detection to maintain a reasonable inference time and parameter count. In this paper, we present a customized lightweight convolution neural network for early detection of fire. By virtue of low parameter count, the proposed model is amenable to embedded applications in real-time fire monitoring equipment, and even upcoming fire monitoring approaches such as unmanned aerial vehicles (drones). The fire detection results show marked improvement over the predecessor low-parameter-count models, while further reducing the number of trainable parameters. The overall accuracy of FireNet-v2, which has only 318,460 parameters, was found to be 98.43% when tested over Foggia's dataset.
• Deep Learning based Wireless Channel Prediction: 5G Scenario
In the area of wireless communication, channel estimation is a challenging problem due to the need for real-time implementation as well as system dependence on the estimation accuracy. This work presents a Long-Short Term Memory (LSTM) based deep learning (DL) approach for the prediction of channel response in real-time and real-world non-stationary channels. The model uses the pre-defined history of channel impulse response (CIR) data along with two other features viz. transmitter-receiver update distance and root-mean-square delay spread values which are also changing in time with the channel impulse response. The objective is to obtain an approximate estimate of CIRs using prediction through the DL model as compared to conventional methods. For training the model, a sample dataset is generated through the open-source channel simulation software NYUSIM which realizes samples of CIRs for measurement-based channel models based on various multipath channel parameters. From the model test results, it is found that the proposed DL approach provides a viable lightweight solution for channel prediction in wireless communication.
• DNA codes from skew dihedral group ring
In this work, we present a matrix construction for reversible codes derived from skew dihedral group rings. By employing this matrix construction, the ring $\mathcal{F}_{j,k}$ and its associated Gray maps, we show how one can construct reversible codes of length $n2^{j+k}$ over the finite field $\mathbb{F}_4$. As an application, we construct a number of DNA codes that satisfy the Hamming distance, the reverse, the reverse-complement, and the GC-content constraints with better parameters than some good DNA codes in the literature.
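The four constraints named above have standard generic definitions, which can be sketched as follows (textbook checks over the DNA alphabet, not the paper's skew group-ring construction):

```python
# Generic DNA-code constraint checks: Hamming distance, reverse-complement,
# and a common GC-content constraint (exactly half the symbols are G or C).
COMP = {"A": "T", "T": "A", "C": "G", "G": "C"}  # Watson-Crick pairing

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def reverse_complement(u):
    return "".join(COMP[c] for c in reversed(u))

def gc_content_ok(u):
    return 2 * sum(c in "GC" for c in u) == len(u)

print(reverse_complement("ACGT"))  # ACGT (this word is its own reverse-complement)
print(hamming("ACGT", "ACGA"))     # 1
```

A DNA code then typically requires a minimum Hamming distance between every pair of codewords, and between each codeword and the reverses / reverse-complements of the others, with every codeword satisfying the GC-content constraint.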
• Cholesterol transport in blood, lipoproteins, and cholesterol metabolism.
The aim of this chapter is to critically discuss recent work which has focused on the dynamics of cholesterol transport and its intersection with health. Firstly, we provide an overview of the main lipoproteins, and their role in whole-body cholesterol metabolism. We then focus on low-density lipoprotein cholesterol (LDL-C) and high-density lipoprotein cholesterol (HDL-C), paying particular attention to a diverse array of evidence which associates perturbations to these lipoproteins with cardiovascular disease (CVD). Next, we explain how aging and obesity disrupt the biological mechanisms that regulate cholesterol metabolism. Crucially, we reveal the parallels between aging and obesity, underscoring that obesity superimposed on the aging process has the potential to exacerbate the age-related dysregulation of cholesterol metabolism. Following this, we unveil how mathematical modeling can be used to deepen our understanding of cholesterol metabolism. We conclude the chapter by discussing the future of this area; in doing so, we reveal how recent experimental findings could open the way for novel therapeutic approaches which could help maintain optimal blood lipoprotein levels and thus increase health span.
• Watchdog Monitoring for Detecting and Handling of Control Flow Hijack on RISC-V-based Binaries
Control flow hijacking has been a major challenge in software security. Several means of protections have been developed but insecurities persist. This is because existing protections have sometimes been circumvented while some resilient protections do not cover all applications. Studies have revealed that a holistic way of tackling software insecurity could involve watchdog monitoring and detection via Control Flow Integrity (CFI). The CFI concept has shown a good measure of reliability to mitigate control flow hijacking. However, sophisticated attack techniques in the form of Return Oriented Programming (ROP) have persisted. A flexible protection is desirable, which not only covers as many architecture structures as possible but also mitigates known resilient attacks like ROP. The solution proffered here is a hybrid of CFI and watchdog timing via inter-process signaling (IP-CFI). It is a software-based protection that involves recompilation of the target program. The implementation here is on a vulnerable RISC-V-based process but is flexible and could be adapted to other architectures. We present a proof of concept in IP-CFI which, when applied to a vulnerable program, mitigates ROP. The target program incurs a run-time overhead of 1.5%. The code is available.
• A Mathematical Model which Examines Age-Related Stochastic Fluctuations in DNA Maintenance Methylation
Due to its complexity and its ubiquitous nature, the ageing process remains an enduring biological puzzle. Many molecular mechanisms and biochemical processes have become synonymous with ageing. However, recent findings have pinpointed epigenetics as having a key role in ageing and healthspan. In particular, age-related changes to DNA methylation offer the possibility of monitoring the trajectory of biological ageing and could even be used to predict the onset of diseases such as cancer, Alzheimer's disease and cardiovascular disease. At the molecular level, emerging evidence strongly suggests the regulatory processes which govern DNA methylation are subject to intracellular stochasticity. It is challenging to fully understand the impact of stochasticity on DNA methylation levels at the molecular level experimentally. An ideal solution is to use mathematical models to capture the essence of the stochasticity and its outcomes. In this paper we present a novel stochastic model which accounts for specific methylation levels within a gene promoter. Uncertainty in the eventual site-specific methylation levels was analysed for different values of methylation age, depending on the initial methylation levels. Our model predicts the observed bistable levels in CpG islands. In addition, simulations with various levels of noise indicate that uncertainty predominantly spreads through the hypermethylated region of stability, especially for large values of input noise. A key outcome of the model is that CpG islands with high to intermediate methylation levels tend to be more susceptible to dramatic DNA methylation changes due to increasing methylation age.
• DNA Methylation in Genes Associated with the Evolution of Ageing and Disease: A Critical Review
Ageing is characterised by a physical decline in biological functioning which results in a progressive risk of mortality with time. As a biological phenomenon, it is underpinned by the dysregulation of a myriad of complex processes. Recently, however, ever-increasing evidence has associated epigenetic mechanisms, such as DNA methylation (DNAm) with age-onset pathologies, including cancer, cardiovascular disease, and Alzheimer’s disease. These diseases compromise healthspan. Consequently, there is a medical imperative to understand the link between epigenetic ageing, and healthspan. Evolutionary theory provides a unique way to gain new insights into epigenetic ageing and health. This review will: (1) provide a brief overview of the main evolutionary theories of ageing; (2) discuss recent genetic evidence which has revealed alleles that have pleiotropic effects on fitness at different ages in humans; (3) consider the effects of DNAm on pleiotropic alleles, which are associated with age related disease; (4) discuss how age related DNAm changes resonate with the mutation accumulation, disposable soma and programmed theories of ageing; (5) discuss how DNAm changes associated with caloric restriction intersect with the evolution of ageing; and (6) conclude by discussing how evolutionary theory can be used to inform investigations which quantify age-related DNAm changes which are linked to age onset pathology.
• Modelling Cholesterol Metabolism and Atherosclerosis
Atherosclerotic cardiovascular disease (ASCVD) is the leading cause of morbidity and mortality among Western populations. Many risk factors have been identified for ASCVD; however, elevated low-density lipoprotein cholesterol (LDL-C) remains the gold standard. Cholesterol metabolism at the cellular and whole-body level is maintained by an array of interacting components. These regulatory mechanisms have complex behavior. Likewise, the mechanisms which underpin atherogenesis are nontrivial and multifaceted. To help overcome the challenge of investigating these processes mathematical modeling, which is a core constituent of the systems biology paradigm has played a pivotal role in deciphering their dynamics. In so doing models have revealed new insights about the key drivers of ASCVD. The aim of this review is fourfold; to provide an overview of cholesterol metabolism and atherosclerosis, to briefly introduce mathematical approaches used in this field, to critically discuss models of cholesterol metabolism and atherosclerosis, and to highlight areas where mathematical modeling could help to investigate in the future.
• The interdependency and co-regulation of the vitamin D and cholesterol metabolism.
Vitamin D and cholesterol metabolism overlap significantly in the pathways that contribute to their biosynthesis. However, our understanding of their independent and co-regulation is limited. Cardiovascular disease is the leading cause of death globally and atherosclerosis, the pathology associated with elevated cholesterol, is the leading cause of cardiovascular disease. It is therefore important to understand vitamin D metabolism as a contributory factor. From the literature, we compile evidence of how these systems interact, relating the understanding of the molecular mechanisms involved to the results from observational studies. We also present the first systems biology pathway map of the joint cholesterol and vitamin D metabolisms made available using the Systems Biology Graphical Notation (SBGN) Markup Language (SBGNML). It is shown that the relationship between vitamin D supplementation, total cholesterol, and LDL-C status, and between latitude, vitamin D, and cholesterol status are consistent with our knowledge of molecular mechanisms. We also highlight the results that cannot be explained with our current knowledge of molecular mechanisms: (i) vitamin D supplementation mitigates the side-effects of statin therapy; (ii) statin therapy does not impact upon vitamin D status; and critically (iii) vitamin D supplementation does not improve cardiovascular outcomes, despite improving cardiovascular risk factors. For (iii), we present a hypothesis, based on observations in the literature, that describes how vitamin D regulates the balance between cellular and plasma cholesterol. Answering these questions will create significant opportunities for advancement in our understanding of cardiovascular health
• Miyamoto groups of code algebras
A code algebra A_C is a nonassociative commutative algebra defined via a binary linear code C. In a previous paper, we classified when code algebras are Z_2-graded axial (decomposition) algebras generated by small idempotents. In this paper, for each algebra in our classification, we obtain the Miyamoto group associated to the grading. We also show that the code algebra structure can be recovered from the axial decomposition algebra structure.
• A unique ternary Ce(III)-quercetin-phenanthroline assembly with antioxidant and anti-inflammatory properties
Quercetin is one of the most bioactive and common dietary flavonoids, with a significant repertoire of biological and pharmacological properties. The biological activity of quercetin, however, is influenced by its limited solubility and bioavailability. Driven by the need to enhance quercetin bioavailability and bioactivity through metal ion complexation, synthetic efforts led to a unique ternary Ce(III)-quercetin-(1,10-phenanthroline) (1) compound. Physicochemical characterization (elemental analysis, FT-IR, Thermogravimetric analysis (TGA), UV–Visible, NMR, Electron Spray Ionization-Mass Spectrometry (ESI-MS), Fluorescence, X-rays) revealed its solid-state and solution properties, with significant information emanating from the coordination sphere composition of Ce(III). The experimental data justified further entry of 1 in biological studies involving toxicity, (Reactive Oxygen Species, ROS)-suppressing potential, cell metabolism inhibition in Saccharomyces cerevisiae (S. cerevisiae) cultures, and plasmid DNA degradation. DFT calculations revealed its electronic structure profile, with in silico studies showing binding to DNA, DNA gyrase, and glutathione S-transferase, thus providing useful complementary insight into the elucidation of the mechanism of action of 1 at the molecular level and interpretation of its bio-activity. The collective work projects the importance of physicochemically supported bio-activity profile of well-defined Ce(III)-flavonoid compounds, thereby justifying focused pursuit of new hybrid metal-organic materials, effectively enhancing the role of naturally-occurring flavonoids in physiology and disease.
• Multifunctional cellular sandwich structures with optimised core topologies for improved mechanical properties and energy harvesting performance
This paper developed a multifunctional composite sandwich structure with optimised design on topological cores. As the main concern, full composite sandwich structures were manufactured with carbon fibre reinforced polymer (CFRP) facesheets and designed cores. Three-point bending tests have been performed to assess the mechanical performance of designed cellular sandwich structures. To evaluate the energy harvesting performance, the piezoelectric transducer was integrated at the interface between the upper facesheet and core, with both sinusoidal base excitation input and acceleration measured from real cruising aircraft and vehicle. It has been found that the sandwich with conventional honeycomb core has demonstrated the best mechanical performance, assessed under the bending tests. In terms of energy harvesting performance, sandwich with re-entrant honeycomb manifested approximately 20% higher RMS voltage output than sandwiches with conventional honeycomb and chiral structure core, evaluated both numerically and experimentally. The resistance sweep tests further suggested that the power output from sandwich with re-entrant honeycomb core was twice as large as that from sandwiches with conventional honeycomb and chiral structure cores, under optimal external resistance and sinusoidal base excitation.
• Thermal Induced Interface Mechanical Response Analysis of SMT Lead-Free Solder Joint and Its Adaptive Optimization
Surface mount technology (SMT) plays an important role in integrated circuits, but due to thermal stress alternation caused by temperature cycling, it tends to have thermo-mechanical reliability problems. At the same time, considering the environmental and health problems of lead (Pb)-based solders, the electronics industry has turned to lead-free solders, such as ternary alloy Sn-3Ag-0.5Cu (SAC305). As lead-free solders exhibit visco-plastic mechanical properties significantly affected by temperature, their thermo-mechanical reliability has received considerable attention. In this study, the interface delamination of an SMT solder joint using a SAC305 alloy under temperature cycling has been analyzed by the nonlinear finite element method. The results indicate that the highest contact pressure at the four corners of the termination/solder horizontal interface means that delamination is most likely to occur, followed by the y-direction side region of the solder/land interface and the top arc region of the termination/solder vertical interface. It should be noted that in order to keep the shape of the solder joint in the finite element model consistent with the actual situation after the reflow process, a minimum energy-based morphology evolution method has been incorporated into the established finite element model. Eventually, an Improved Efficient Global Optimization (IEGO) method was used to optimize the geometry of the SMT solder joint in order to reduce the contact pressure at critical points and critical regions. The optimization result shows that the contact pressure at the critical points and at the critical regions decreases significantly, which also means that the probability of thermal-induced delamination decreases.
• Electromechanical characterization and kinetic energy harvesting of piezoelectric nanocomposites reinforced with glass fibers
Piezoelectric composites are a significant research field because of their excellent mechanical flexibility and sufficient stress-induced voltage. Furthermore, due to the widespread use of the Internet of Things (IoT) in recent years, small-sized piezoelectric composites have attracted a lot of attention. Also, there is an urgent need to develop evaluation methods for these composites. This paper evaluates the piezoelectric and mechanical properties of potassium sodium niobate (KNN)-epoxy and KNN-glass fiber-reinforced polymer (GFRP) composites using a modified small punch (MSP) and nanoindentation tests in addition to d33 measurements. An analytical solution for the piezoelectric composite thin plate under bending was obtained for the determination of the bending properties. Due to the glass fiber inclusion, the bending strength increased by about four times, and Young's modulus in the length direction increased by approximately two times (more than that of the KNN-epoxy); however, in the thickness direction, Young's modulus decreased by less than half. An impact energy harvesting test was then performed on the KNN-epoxy and KNN-GFRP composites. As a result, the output voltage of KNN-GFRP was larger than that of KNN-epoxy. Also, the output voltage was about 2.4 V with a compressive stress of 0.2 MPa, although the presence of the glass fibers decreased the piezoelectric constants. Finally, damped flexural vibration energy harvesting test was carried out on the KNN-epoxy and KNN-GFRP composites. The KNN-epoxy was broken during the test, however KNN-GFRP composite with a load resistance of 10 generated 35 nJ of energy. Overall, through this work, we succeeded in developing piezoelectric energy harvesting composite materials that can withstand impact and bending vibration using glass fibers and also established a method for evaluating the electromechanical properties with small test specimen.
• Numerical prediction of the chip formation and damage response in CFRP cutting with a novel strain rate based material model
Carbon fibre reinforced plastics (CFRPs) are susceptible to various cutting damages. An accurate model that could efficiently predict the material removal and chip formation mechanisms will thus help to reduce the damage during cutting, so that further improved machining quality can be pursued. In previous studies, macro numerical models have been proposed to predict the orthogonal cutting of CFRP laminates with subsurface damage under quasi-static loading conditions. However, the strain rate effect on the material behaviours has rarely been considered in the material modelling process, which leads to inaccurate prediction of the cutting process and damage extent, especially at high cutting speed. To address this issue, a novel material failure model is developed in this work by incorporating the strain rate effect across the damage initiation (combined Hashin and Puck laws) and evolution criteria. The variation in material properties with the strain rate is considered for the characterization of the stress-strain relationships under different loading speeds. With this material model, a three-dimensional macro numerical model is established to simulate the orthogonal cutting of CFRPs under four typical fibre cutting angles. The machining process and cutting force simulated by the proposed model agree well with the results of the CFRP orthogonal cutting experiments, and the prediction accuracy is improved compared with the model that does not consider the strain rate effect. In addition, the effects of processing conditions on the subsurface damage in machining CFRPs at a 135° fibre cutting angle are assessed. The subsurface damage is found to decrease as the cutting speed increases to 100 mm/s; it then tends to be stable when the cutting speed is over 100 mm/s. Greater subsurface damage is predicted at higher cutting depths.
• Split spin factor algebras
Motivated by Yabe's classification of symmetric $2$-generated axial algebras of Monster type \cite{yabe}, we introduce a large class of algebras of Monster type $(\alpha, \frac{1}{2})$, generalising Yabe's $\mathrm{III}(\alpha,\frac{1}{2}, \delta)$ family. Our algebras bear a striking similarity with Jordan spin factor algebras with the difference being that we asymmetrically split the identity as a sum of two idempotents. We investigate the properties of these algebras, including the existence of a Frobenius form and ideals. In the $2$-generated case, where our algebra is isomorphic to one of Yabe's examples, we use our new viewpoint to identify the axet, that is, the closure of the two generating axes.
• Enumerating 3-generated axial algebras of Monster type
An axial algebra is a commutative non-associative algebra generated by axes, that is, primitive, semisimple idempotents whose eigenvectors multiply according to a certain fusion law. The Griess algebra, whose automorphism group is the Monster, is an example of an axial algebra. We say an axial algebra is of Monster type if it has the same fusion law as the Griess algebra. The 2-generated axial algebras of Monster type, called Norton-Sakuma algebras, have been fully classified and are one of nine isomorphism types. In this paper, we enumerate a subclass of 3-generated axial algebras of Monster type in terms of their groups and shapes. It turns out that the vast majority of the possible shapes for such algebras collapse; that is they do not lead to non-trivial examples. This is in sharp contrast to previous thinking. Accordingly, we develop a method of minimal forbidden configurations, to allow us to efficiently recognise and eliminate collapsing shapes.
http://www.stata.com/statalist/archive/2009-05/msg01034.html
Re: st: Explaining the Use of Inferential Statistics Even Though I Have Population Data
From Salima Bouayad Agha To [email protected] Subject Re: st: Explaining the Use of Inferential Statistics Even Though I Have Population Data Date Sat, 30 May 2009 22:15:40 +0200
Well, even if it is not really a Stata question, I think that the question of your reviewer is not obvious. In statistics a sample can effectively be a part of the population, and that is the most common way to talk about a sample, but in mathematical statistics courses students also learn that a sample can be just the population at one specific moment in time, and that this population could be very different if something else (a random phenomenon) happened. Think for example of time series or some statistical process; for example, consider the population of computers produced by a firm. So I'm not sure that your answer is the best one, depending on which field of research you are working in. Just take a few moments to review some theoretical books on what a sample is in statistics.
Salima
PS :
In mathematical terms, given a random variable X with distribution F, a sample of length n\in\mathbb{N} is a set of n independent, identically distributed (iid) random variables with distribution F. It concretely represents n experiments in which we measure the same quantity. For example, if X represents the height of an individual and we measure n individuals, Xi will be the height of the i-th individual. Note that a sample of random variables (i.e. a set of measurable functions) must not be confused with the realisations of these variables (which are the values that these random variables take). In other words, Xi is a function representing the measurement at the i-th experiment and xi = Xi(ω) is the value we actually get when making the measurement.
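To make the distinction concrete, here is a small sketch (my own illustration, not from the original post) in Python/NumPy: the array produced below is one realisation (x1, ..., xn) of the abstract sample (X1, ..., Xn); a new seed corresponds to a new run of the "experiment".

```python
import numpy as np

# One realisation of an iid sample of length n from a distribution F,
# here F = Normal(170, 10) -- e.g. measuring the heights of n individuals.
# The random variables X_1..X_n are the abstract measurement procedure;
# the array returned here holds the concrete values x_i = X_i(omega).
rng = np.random.default_rng(0)
n = 5
x = rng.normal(loc=170.0, scale=10.0, size=n)
print(x.shape)  # (5,)
```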
*
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
https://www.physicsforums.com/threads/hydrostatic-force-problem.147630/
# Hydrostatic force problem
1. Dec 10, 2006
### SUchica10
A gate in an irrigation canal is in the form of a trapezoid 3 feet wide at the bottom, 5 feet wide at the top, with the height equal to 2 feet. It is placed vertically in the canal with the water extending to its top. For simplicity, take the density of water to be 60 lb/ft cubed. Find the hydrostatic force in pounds on the gate.
I am having problems setting this problem up. It looks like it's really easy, but I am just not sure how to start it.
I know F = density x gravity x area x depth
2. Dec 12, 2006
### chanvincent
Because the force acting on the gate is not constant, i.e. the force at the bottom is larger than that at the top, we have
dF = density x gravity x depth x d(area)
Doing this integration over the trapezoid will yield the correct answer.
3. Dec 13, 2006
### HallsofIvy
Staff Emeritus
Imagine the gate being divided into many narrow horizontal bands of width "$\Delta y$". If y is the depth of a band, and $\Delta y$ is small enough that we can think of every point in the band as at depth y, then the force along that band is the pressure, 60(y) [NOT "times gravity"! The density of the water is weight density, not mass density!], times the area: the length of the band times $\Delta y$. Of course, the length of the band depends on y: it is a linear function of y since the sides are straight lines; length(2)= 3 and length(0)= 5, so length(y)= 5-y. The force on that narrow band is 60y(5-y)$\Delta y$. The total force on the gate is the sum of those, $\sum 60y(5-y)\Delta y$, as y goes from 0 to 2. In the limit, that becomes the integral
$$60\int_0^2 y(5-y)\,dy$$
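The band sum described above runs over depths y from 0 to 2 and can be checked numerically; the following sketch (assuming NumPy) approximates it with a fine trapezoidal rule:

```python
import numpy as np

# Numerical check of the band picture: F = 60 * integral_0^2 y*(5 - y) dy.
y = np.linspace(0.0, 2.0, 200001)
f = 60.0 * y * (5.0 - y)          # pressure 60y times band length 5 - y
dy = y[1] - y[0]
force = dy * (f.sum() - 0.5 * (f[0] + f[-1]))   # trapezoidal rule
print(round(force, 3))            # exact: 60*(10 - 8/3) = 440 lb
```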
https://math.stackexchange.com/questions/624196/calculating-moore-penrose-pseudo-inverse
# Calculating Moore-Penrose pseudo inverse
I have a problem with a project requiring me to calculate the Moore-Penrose pseudo inverse. I've also posted about this on StackOverflow, where you can see my progress.
From what I understand from Planet Math, you can simply compute the pseudoinverse using only the first formula, which I can understand, but it also says that this is for general cases, and you have to do SVD (singular value decomposition) and the formula becomes much more complicated (the second formula), which I don't understand... I mean,
1. What is V? What is S? I'm confused.
2. How can I calculate SVD?
4. Why are there two pseudo inverse formulas?
Left pseudo inverse formula $$A_\text{left} = (A^TA)^{-1}A^T$$ Right pseudo inverse formula $$A_\text{right}=A^T(AA^T)^{-1}$$
Thank you very much, Daniel.
These formulas are for different matrix formats of the rectangular matrix $A$.
The matrix to be (pseudo-)inverted should have full rank. (added:) If $A\in I\!\!R^{m\times n}$ is a tall matrix, $m>n$, then this means $rank(A)=n$, that is, the columns have to be linearly independent, or $A$ as a linear map has to be injective. If $A$ is a wide matrix, $m<n$, then the rows of the matrix have to be independent to give full rank. (/edit)
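As a quick numerical illustration of the two formulas (my own sketch, using NumPy; the matrix shapes are arbitrary), a random Gaussian matrix has full rank with probability 1, so both formulas apply directly:

```python
import numpy as np

rng = np.random.default_rng(1)
A_tall = rng.standard_normal((6, 3))   # m > n: injective, use the left formula
A_wide = rng.standard_normal((3, 6))   # m < n: surjective, use the right formula

A_left = np.linalg.inv(A_tall.T @ A_tall) @ A_tall.T    # (A^T A)^{-1} A^T
A_right = A_wide.T @ np.linalg.inv(A_wide @ A_wide.T)   # A^T (A A^T)^{-1}

# Left inverse of the tall matrix, right inverse of the wide one.
print(np.allclose(A_left @ A_tall, np.eye(3)),
      np.allclose(A_wide @ A_right, np.eye(3)))
```

In the full-rank case both expressions coincide with the Moore-Penrose pseudoinverse returned by `np.linalg.pinv`.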
If full rank is a given, then you are better off simplifying these formulas using a QR decomposition for $A$ resp. $A^T$. There the R factor is square and $Q$ is a narrow tall matrix with the same format as $A$ or $A^T$,
If $A$ is tall, then $A=QR$ and $A^{\oplus}_{left}=R^{-1}Q^T$
If $A$ is wide, then $A^T=QR$, $A=R^TQ^T$, and $A^{\oplus}_{right}=QR^{-T}$.
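The QR route can be sketched the same way (again an illustration, not part of the original answer): for a tall full-column-rank A, solving R X = Qᵀ gives R⁻¹Qᵀ without ever forming AᵀA or an explicit inverse:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 3))     # tall, full column rank (almost surely)

Q, R = np.linalg.qr(A)              # reduced QR: Q is 6x3 orthonormal, R is 3x3
A_pinv = np.linalg.solve(R, Q.T)    # R^{-1} Q^T via a triangular-ish solve

print(np.allclose(A_pinv, np.linalg.pinv(A)))
```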
You only need an SVD if $A$ is suspected to not have the maximal rank for its format. Then a reliable rank estimation is only possible comparing the magnitudes of the singular values of $A$. The difference is $A^{\oplus}$ having a very large number or a zero as a singular value where $A$ has a very small singular value.
Added, since wikipedia is curiously silent about this: Numerically, you first compute or let a library compute the SVD $A=U\Sigma V^T$ where $Σ=diag(σ_1,σ_2,\dots,σ_r)$ is the diagonal matrix of singular values, ordered in decreasing size $σ_1\ge σ_2\ge\dots\ge σ_r$.
Then you estimate the effective rank by looking for the smallest $k$ with for instance $σ_{k+1}<10^{-8}σ_1$ or as another strategy, $σ_{k+1}<10^{-2}σ_k$, or a combination of both. The factors defining what is "small enough" are a matter of taste and experience.
With this estimated effective rank $k$ you compute $$Σ^⊕=diag(σ_1^{-1},σ_2^{-1},\dots,σ_k^{-1},0,\dots,0)$$ and $$A^⊕=VΣ^⊕U^T.$$
Note how the singular values in $Σ^⊕$ and thus $A^⊕$ are increasing in this form, that is, truncating at the effective rank is a very sensitive operation, differences in this estimation lead to wildly varying results for the pseudo-inverse.
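The truncation procedure above can be sketched as follows (the function name and the tolerance value are illustrative choices, not prescriptions):

```python
import numpy as np

def pinv_svd(A, rel_tol=1e-8):
    # Pseudoinverse via SVD with effective-rank truncation, as described above:
    # invert only the singular values above rel_tol * sigma_1, zero the rest.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = int(np.sum(s > rel_tol * s[0]))    # estimated effective rank
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ np.diag(s_inv) @ U.T

# Rank-deficient example: the third column is the sum of the first two.
B = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])
print(np.allclose(pinv_svd(B), np.linalg.pinv(B)))
```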
• 1-> what do you mean by different matrix formats ?? 2-> What if the matrix doesn't have full rank? What happens, i have to use QR decomposition as you suggested? 3-> ok, but don't SVD help me calculating the pseudo inverse? at least for every case 4-> can't i just use the first formula from A-left formula? And if not, why? 5-> sorry, i am not a math guru, i understand the basics of matrix, i also understand how to calculate the inverse, but this is a little bit tricky – Master345 Jan 1 '14 at 22:41
• 1) more rows than columns->tall, more columns than rows->wide -- 2) If you know the structural reason for the rank deficit, you better treat it by reducing redundancy in the problem formulation. If the rank deficit happens dynamically, you better use the better analytic power of the SVD. – LutzL Jan 1 '14 at 23:01
• -- 3) Yes, but SVD is still an iterative process with time $O(n^3|\log ε|)$ to reach a certain precision $ε$. The initial reduction to bidiagonal form corresponds to 2 QR decompositions. Meaning, if QR works, it is much faster. -- 4) You can, but for large matrices, the matrix product makes the condition quadratically worse. -- 5) try out Householder reflectors and Givens rotations, the mathematics is not that bad, keeping track of the indices for the positions to modify however... Anyway, the proposed library has all these fancy decompositions, so you only need to put the results together. – LutzL Jan 1 '14 at 23:02
• ** 3-> ** so, correct me if i understand, First formula A-left > SVD > QR decomposition ** 4-> ** so First formula A-left might work for 3x3, 4x4, but the higher you go, you might find bigger errors, is that right? ** 5-> ** i know they are better algorithms, put for starters let me try First formula A-left to see if it works, i can test it here calculator-fx.com/calculator/linear-algebra/… ** extra-> ** thank you very much for answering, i had never been into mathematics this deep and coding before ... – Master345 Jan 2 '14 at 0:03
• * 3) The computational effort for the 'first formula' is less than that for the SVD, numerical stability is worse. And, as said, if it works, QR is faster than SVD, the numerical results will be about the same. * 4) Yes, it will also work for 10x10, but for 1000x1000 I would expect problems. * 5) By all means, try out all methods. – LutzL Jan 2 '14 at 7:22
While the SVD yields a "clean" way to construct the pseudoinverse, it is sometimes an "overkill" in terms of efficiency.
The Moore-Penrose pseudoinverse can be seen as follows: Let $\ell:\mathbb R^n\rightarrow\mathbb R^m$ be a linear map. Then $\ell$ induces an isomorphism $\ell':{\rm Ker}(\ell)^\perp\rightarrow {\rm Im}(\ell)$. Then the Moore-Penrose pseudoinverse $\ell^+:\mathbb R^m\rightarrow \mathbb R^n$ can be described as follows.
$$\ell^+(x)=\ell'^{-1}(\Pi(x)),$$ where $\Pi$ is the orthogonal projection of $x$ on ${\rm Im}(\ell)$.
In other words, what you need is to compute orthonormal bases of ${\rm Im}(\ell)$ and of ${\rm Ker}(\ell)^\perp$ to contruct the Moore-Penrose pseudoinverse.
For an algorithm, you may be interested by the iterative method here
edit: roughly speaking, one way to see why the SVD might be an "overkill" is that if $A$ is a matrix with rational coefficients, then $A^+$ also has rational coefficients (see e.g. this paper), while the entries of the SVD are algebraic numbers.
• 1-> what do you mean about "overkill"? like is the process consuming? first i just want to see this working, then i will think about optimisations 2-> I don't quite understand this formula ℓ+(x)=ℓ′−1(Π(x)) i'm used to work with formulas like upwards 3-> ok, but don't SVD help me calculating the pseudo inverse? at least for every case 4-> sorry, i am not a math guru, i understand the basics of matrix, i also understand how to calculate the inverse, but this is a little bit tricky 5-> can't i just use the first formula from A-left formula? And if not, why? – Master345 Jan 1 '14 at 22:49
• 1-> By "overkill", I wanted to say that the SVD is a very powerful tool, maybe too powerful for computing the Moore-Penrose pseudoinverse, but there is indeed a very nice relation, see en.wikipedia.org/wiki/… 2->It is just a notation: a matrix actually represents a linear map (the map $u\mapsto A\cdot u$) and reasoning in terms of maps is often convenient. 3->Yes, but the SVD involves computing with algebraic numbers, and often makes exact computations impossible. – emeu Jan 2 '14 at 4:07
• 5-> A-left only works if the linear map $u\mapsto A\cdot u$ is surjective. A-right only works if the map is injective. (note that if $A$ is a square invertible matrix, then both formulas give the same result: $A^{-1}$) – emeu Jan 2 '14 at 4:08
• 2-> for me, a matrix is just a table that i can referr to its elements like A[i][j], thats the way i'm seeing it, because i worked a lot with arrays 5-> yes, i observed that, A-left gives same result as the inverse, and i cannot understand a something, given an array (1,2,3 - 4,5,6 - 7 8 9) that has the determinant 0 gives me a NULL matrix, see here (calculator-fx.com/calculator/linear-algebra/matrix-determinant) ... why is that? calculator-fx.com/calculator/linear-algebra/… gives me the right answer. – Master345 Jan 2 '14 at 18:27
• 2-> Yes, seeing a matrix as a table is also a very good way to think of it, especially for computational purposes. However, it is often a good idea (if you want to study this topic deeper) to have both representations in mind: as a table and as a linear map. 5-> A-left only works if the associated linear map is surjective, which is not the case if the determinant of a square matrix $A$ is zero. Consequently, in that case, the A-left formula does not work (but the Moore-Penrose pseudoinverse is still well defined and there are other ways to compute it) – emeu Jan 3 '14 at 7:32
https://dsp.stackexchange.com/questions/21978/resample-an-array-in-real-time-application-pitch-bend
# Resample an array in real-time (application: pitch-bend)
I'm working on a sampler software (that plays .WAV files, when notes are played on a MIDI keyboard).
In order to implement pitch-bend feature (i.e. you have a "pitch bend wheel" on the synth and you can turn this wheel to retune the sound in realtime, classical effect used in funk music),
we have to:
1. Retune: this is possible by resampling the array of the .WAV. With Python, there is scipy.signal.resample or probably the same in numpy. But I heard there are algorithms optimized for audio repitching. What are the audio-optimized resampling algorithms? [Secret Rabbit Code : http://www.mega-nerd.com/SRC/. Why is this important? How to implement it in Python, with numpy for example?]
2. Retune in realtime: indeed, we can play with the pitchbend-wheel very fast; thus, hundreds of resampling factors could be needed per second! How to do that? How to resample an array with different resampling factors, in realtime?
There's a lot of theory around about how to resample properly and efficiently while preserving the signal content. However, these methods are mostly not practical for things like pitch bending. Also, the fact that pitch bends (and other realtime rate changing requirements, like note transposition) are limited to -12 to +12 semitones, i.e. a resampling factor of 1/2 or 2, makes it easier to design an efficient algorithm.
Historically, audio samplers have been using linear interpolation for on-the-fly resampling and pitch bending. From today's perspective this is almost the worst thing you can do (nearest neighbour tops it), but it got the job done. Sound quality was ok, but in times of 12 bits and 16kHz sampling rate, who would complain. (There have also been samplers that had the ability to change the DAC conversion rate by giving each voice an individual output, but that is only a historical side note)
Later models would use higher order polynomial or windowed sinc interpolation together with a mip-map, and the results became good enough for even modern requirements.
So I would suggest you try in this order:
1) Linear interpolation. Just calculate the sample value at the desired time index from its two nearest neighbours by linear interpolation.
2) Use offline processing to oversample your sample table with a factor of 2, doubling the memory requirements obviously. Use the same linear interpolation on those oversampled samples.
3) Also double the audio sampling rate of the resampler process, using a halfband lowpass filter and a decimation stage at the very end to go back to the audio playback rate
4) Like before, but instead of linear interpolation use a short windowed sinc interpolator.
Each step improves the sound quality but also requires more resources. My guess is that you will find it hard to hear a difference between 3) and 4) if you implement them correctly. Go with 3) if it's good enough, and I think it should be.
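For reference, option 4) can be sketched with a Hann-windowed sinc kernel (function and parameter names are my own; the window choice is one common option, and the edge handling here is deliberately crude):

```python
import numpy as np

def sinc_interp(src, t, half_width=8):
    # Evaluate src at fractional index t using a Hann-windowed sinc kernel
    # with half_width taps on each side of t.
    n0 = int(np.floor(t))
    taps = np.arange(n0 - half_width + 1, n0 + half_width + 1)
    x = t - taps                                      # signed distances to taps
    w = 0.5 * (1.0 + np.cos(np.pi * x / half_width))  # Hann window over support
    taps = np.clip(taps, 0, len(src) - 1)             # crude edge handling
    return float(np.sum(src[taps] * np.sinc(x) * w))
```

At an integer t away from the edges this returns src[t] (up to floating-point error), since sinc vanishes at every other integer offset and the window is 1 at x = 0.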
• Thanks a lot! So linear interpolation from 2 neighbours is still better than "just take the nearest neightbour", right? My audio files are Python numpy arrays. Do you have an idea on implementation the simplest version (nearest neighbour) with Python, without having to "copy" the array to a new one? (it's called a "view" of an array in Numpy afaik) – Basj Mar 12 '15 at 12:05
• Another thing: with pitch bend, should I use a constant "resampling factor" for each call of the audio callback? Example : 1st 1024-samples-buffer: resampling x1.3, 2nd 1024-samples-buffer: resampling x1.4, 3rd 1024-samples-buffer: resampling x 1.5 (because pitch bend wheel turning)?... Or should the resampling factor evolve even inside a 1024-samples-buffer ? (that sounds really difficult!) – Basj Mar 12 '15 at 12:11
• My python-fu is not very good, so I better not advise you. And yes, linear interpolation is already a lot better than nearest neighbour picking. If you want to resample each buffer with a constant factor you should reduce your buffer size or you will experience severe stepping artefacts. It's probably safe to use a constant pitch factor for around 64 samples, but you should experiment with this. Continuously tracking the shift will give better results though, especially if you want smooth glide effects. – Jazzmaniac Mar 12 '15 at 12:57
• Thanks! Yes I also think constant pitch factor on each 1024-sample-buffer will produce stepping artefact... How to continuously track the shift? I really don't see how we can resample a buffer array with many different resampling factors, what do you mean? – Basj Mar 13 '15 at 1:58
• It's as simple as updating the buffer read pointer incrementally with the current speed, I.e. you calculate the fractional read positions by advancing the read pointer with a step size that's proportional to the resampling factor. The resulting fractional buffer positions are then used by the interpolator to calculate the new sample values. – Jazzmaniac Mar 13 '15 at 20:33
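The fractional read pointer described in the comment above can be sketched like this (an illustration, not production code); the per-sample ratio may change on every step, which is what makes smooth pitch bends possible:

```python
import numpy as np

def resample_linear(src, ratios):
    # Advance a fractional read pointer by a per-output-sample speed factor,
    # linearly interpolating between the two nearest source samples.
    out = []
    pos = 0.0
    for r in ratios:
        i = int(pos)
        if i + 1 >= len(src):
            break                    # ran off the end of the source buffer
        frac = pos - i
        out.append((1.0 - frac) * src[i] + frac * src[i + 1])
        pos += r                     # ratio may change every sample (pitch bend)
    return np.array(out)
```

With a constant ratio of 1.0 the source comes back unchanged; a ratio of 0.5 plays it an octave down (half speed).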
Audio-optimized resampling is done by using anti-alias filtering and interpolation kernels with good stop-band attenuation and low pass-band ripple/distortion. On current PC and mobile processors (the kind that can easily compute hundreds of transcendental functions per sample time), these filter and interpolation kernels can not only be precomputed and cached in poly-phase lookup tables, but for certain window types, (re)computed in real-time per sample.
Thus, resampling can be done in real-time by computing the needed interpolation kernels and interpolating each sample as needed at the required sample delay from the last for the current sample rate.
Note that changing the sample rate too quickly may result in FM modulation skirts that may need additional filtering to reduce aliasing artifacts.
• Can you give an example of such algorithm (even in pseudo-code)? – Basj Mar 11 '15 at 18:49
• one trick regarding audio resampling is to put nulls in the frequency response of the interpolation kernel at integer multiples of the original sample rate (except for the integer 0). this way all of the images take a big hit (except for the original image around 0) because most of the energy in audio is in the bottom 5 or 6 octaves. the top 4 or 5 octaves is 15/16 of the whole spectrum in terms of linear frequency. Miller Puckette had a paper about this once in the 90s (where he was asking why sometimes linear interpolation sounds better than Parks-McClellan). – robert bristow-johnson Mar 11 '15 at 19:04
• Here's a link to a near pseudo-code example (in old-fashioned BASIC) that I wrote a few years ago: nicholson.com/rhn/dsp.html#3 It uses a Von Hann windowed Sinc. – hotpaw2 Mar 11 '15 at 20:27
• @rbj remez/Parks-McClellan generated filters can leave a weird repetitive pattern of ripples in the pass band. I think you've posted in comp.dsp that some people find this audible in the time domain. One reason to prefer using windowed Sincs instead. – hotpaw2 Mar 12 '15 at 13:54
• @hotpaw2, didn't see your comment until now (maybe i should get the rbj account here, ya know, that's not a bad idea, hot). anyway, yes, P-McC can have nearly sinusoidal ripples in the passband that causes a pre-echo if the sinc-like impulse response as well as a post-echo. but designing optimally with some other criterion (like least squares) might not have that problem. but the reason that Puckette pointed out why linear-interpolation sometimes sounded better was because of nulls in the frequency response in the center of all of those images. – robert bristow-johnson Mar 26 '15 at 4:04
https://tex.stackexchange.com/questions/311132/how-to-style-hrefs-underlined-and-coloured-throughout-the-document
# How to style href's (underlined and coloured) throughout the document
Let's take the following MWE:
\documentclass{article}
\usepackage{hyperref,xcolor}
\begin{document}
\href{http://any-URL}{\color{blue}{\underline{Some URL}}}
\end{document}
Let's assume I would like to have every second argument of every \href in the document styled within \color{blue}{\underline{...}}.
How would that be possible, via a definition?
• Using \underline would not be optimal as it does not allow line-breaking.
– Werner
May 24, 2016 at 6:09
• \hypersetup{allbordercolors=0 0 1, pdfborderstyle={/S/U/W 1}}. That's for all border colors. There are also: citebordercolor, filebordercolor, linkbordercolor, menubordercolor, urlbordercolor, and runbordercolor for a more fine-grained approach.
– jon
May 24, 2016 at 6:11
## 5 Answers
\href has to do quite a lot of \catcode-magic to handle all the special chars (like #) in urls, so all commands that take an argument and so fix the \catcodes are difficult to insert. You can try the following. But
• Imho underlining doesn't look good.
• It will only work for \href (I hope ...)
• Normal text will break over lines, urls probably not.
• \ul from soul will not work instead of \uline.
Code
\documentclass{article}
\usepackage[colorlinks,allcolors=blue]{hyperref}
\usepackage[normalem]{ulem}
\usepackage{xcolor}
\makeatletter
\begingroup
\catcode`\$=6 %
\catcode`\#=12 %
\gdef\href@split$1#$2#$3\\$4{%
  \hyper@@link{$1}{$2}{\uline{$4}}% or \underline
\endgroup
}%
\endgroup
\begin{document}
\href{http://any-URL}{Some URL}
\href{http://any-URL-with#hash}{Some URL}
\href{http://any-URL-with#hash}{\nolinkurl{http://any-URL-with\#}}
\end{document}
Adapting Werner's Lorem ipsum example, another (internal-to-hyperref) possibility is this:
\documentclass{article}
\usepackage{hyperref}
\hypersetup{
allbordercolors=0 0 1,
pdfborderstyle={/S/U/W 1}
}
\begin{document}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut
\href{http://any-URL}{pellentesque augue} est, id ornare nisi
fringilla eu. Nulla euismod sollicitudin lacus, et porta lectus
accumsan ut. Vestibulum quis interdum lorem. Sed sodales fermentum
neque eu auctor. Vestibulum vitae eros nec massa ultricies
\href{http://any-URL}{sodales vel vitae} justo. Integer a mauris
lectus. Aliquam eu diam vehicula velit lacinia congue. Sed ac mollis
arcu, eu viverra dui. Nunc at elit mi. Duis elementum pulvinar
placerat. Phasellus vel massa varius quam mattis fermentum sed mollis
arcu. \href{http://any-URL}{Vestibulum ante ipsum primis in} faucibus
orci luctus et ultrices posuere cubilia Curae; Aenean dolor elit,
consequat nec tellus luctus, vestibulum fermentum orci. Nullam nec leo
eros. Nullam at quam a mauris luctus euismod ac eu dolor.
\end{document}
• I cannot get this to work. Instead, I get blue frames around the hrefs, not underlines; or is that a PDF-viewer-specific effect? May 24, 2016 at 6:29
• Yes. I see only a blue line with: evince, xpdf, and zathura (and with xdvi for the .dvi file). With Ghostscript, I see a blue box; with qpdfview, I see a red box; and with mupdf, I see no line or box. Note that one advantage of this approach is that you can print the file and not have the lines or boxes appear; but the downside is, I guess, that not all PDF viewers implement the PDF standard in the same way (or to the same degree).
– jon
May 24, 2016 at 6:42
• Thank you very much for digging more into the limitations. That's very useful for readers. Vote up +1 for that. -- Would you know how to use number signs (#) in URLs in Werner's answer, please? I personally prefer a PDF-viewer-independent answer, since the layout is most important in my use case. May 24, 2016 at 6:49
• Not off-hand, I'm afraid. If the hyperref developer stops by this question, however, I'm sure you'll get a very detailed and accurate answer to your question.
– jon
May 24, 2016 at 7:02
A simpler variant of Ulrike's solution, based on my answer to How to be able to use the number sign (#) in the URL of an underlined href
\documentclass{article}
\usepackage{xcolor,soul}
\usepackage{etoolbox}
\usepackage{hyperref}
\hypersetup{colorlinks,urlcolor=blue}
\makeatletter
\patchcmd{\hyper@link@}
{{\Hy@tempb}{#4}}
{{\Hy@tempb}{\ul{#4}}}
{}{}
\makeatother
\begin{document}
\href{https://tex.stackexchange.com/#/!!!}{URL with a number sign}
A hyperlink \href{https://tex.stackexchange.com/#/!!!}{URL with a number sign
and text long enough to trigger a line break} with something following.
\end{document}
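A hedged aside, not part of the original answer: \patchcmd from etoolbox accepts success and failure branches as its last two arguments, so the patch can be made to fail loudly if a later hyperref release changes the definition of \hyper@link@ (the package name local-setup passed to \PackageError below is just a placeholder):

```latex
\makeatletter
\patchcmd{\hyper@link@}
  {{\Hy@tempb}{#4}}
  {{\Hy@tempb}{\ul{#4}}}
  {}% success branch: nothing further to do
  {\PackageError{local-setup}{Patching \string\hyper@link@ failed}{}}% failure branch
\makeatother
```

With the empty branches used in the answer above, a failed patch passes silently and links simply stop being underlined.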
You can set a specific style using something like this:
\documentclass{article}
\usepackage{hyperref,xcolor,soul}
\let\oldhref\href
\renewcommand{\href}[2]{\oldhref{#1}{\hrefstyle{#2}}}
\newcommand{\hrefstyle}[1]{\color{blue}\ul{#1}}
\begin{document}
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Ut \href{http://any-URL}{pellentesque augue} est,
id ornare nisi fringilla eu. Nulla euismod sollicitudin lacus, et porta lectus accumsan
ut. Vestibulum quis interdum lorem. Sed sodales fermentum neque eu auctor. Vestibulum
vitae eros nec massa ultricies \href{http://any-URL}{sodales vel vitae} justo. Integer a mauris lectus. Aliquam
eu diam vehicula velit lacinia congue. Sed ac mollis arcu, eu viverra dui. Nunc at elit
mi. Duis elementum pulvinar placerat. Phasellus vel massa varius quam mattis fermentum
sed mollis arcu. \href{http://any-URL}{Vestibulum ante ipsum primis in} faucibus orci luctus et ultrices
posuere cubilia Curae; Aenean dolor elit, consequat nec tellus luctus, vestibulum
fermentum orci. Nullam nec leo eros. Nullam at quam a mauris luctus euismod ac eu dolor.
\end{document}
Opting for \ul (from soul) instead of \underline. See Why does underlined text not get wrapped once it hits the end of a line?
Here is the code to add to your document preamble to color references dark green.
\usepackage{xcolor}
\usepackage{hyperref}
\definecolor{darkgreen}{rgb}{0.06, 0.78, 0.3}
\hypersetup{ % reference colors
colorlinks=true,
linkcolor=darkgreen,
pdfborder = {0 0 0},
filecolor=magenta,
urlcolor=cyan,
}
\hypersetup{linkcolor=black}
• The colouring doesn't seem to be working. May 24, 2016 at 11:27
• Just tested it now, and it worked. However, you need to build one time only with the new packages before building with the hypersetup. Don't know why but it's working for me, and my links are all in the color they are supposed to be. May 25, 2016 at 8:50
• Sorry mate, I can't replicate it. May 25, 2016 at 8:54
• Try this code, it works for me: \documentclass[french]{article} \usepackage{xcolor} \usepackage{hyperref} \definecolor{darkgreen}{rgb}{0.06, 0.78, 0.3} \hypersetup{ % reference colors colorlinks=true, linkcolor=darkgreen, pdfborder = {0 0 0}, filecolor=magenta, urlcolor=cyan, } \hypersetup{linkcolor=black} \begin{document} Here is my reference: \href{www.google.ch}{Google} \end{document} May 25, 2016 at 8:58
https://mathshistory.st-andrews.ac.uk/Biographies/Crank/
# John Crank
### Quick Info
Born
6 February 1916
Hindley, Lancashire, England
Died
3 October 2006
Ruislip, London, England
Summary
John Crank was an English numerical analyst who worked on the heat equation.
### Biography
John Crank was a student of Lawrence Bragg and Douglas Hartree at Manchester University (1934-38), where he was awarded the degrees of B.Sc. and M.Sc. and later (1953) D.Sc. After war work on ballistics he was a mathematical physicist at Courtaulds Fundamental Research Laboratory from 1945 to 1957 and professor of mathematics at Brunel University (initially Brunel College in Acton) from 1957 to 1981. His main work was on the numerical solution of partial differential equations and, in particular, the solution of heat-conduction problems. In the 1940s such calculations were carried out on simple mechanical desk machines. Crank is quoted as saying that to "burn a piece of wood" numerically then could take a week.
John Crank is best known for his joint work with Phyllis Nicolson on the heat equation, where a continuous solution $u(x, t)$ is required which satisfies the second order partial differential equation
$u_{t} - u_{xx} = 0$
for $t > 0$, subject to an initial condition of the form $u(x, 0) = f (x)$ for all real $x$. They considered numerical methods which find an approximate solution on a grid of values of $x$ and $t$, replacing $u_{t}(x, t)$ and $u_{xx}(x, t)$ by finite difference approximations. One of the simplest such replacements was proposed by L F Richardson in 1910. Richardson's method yielded a numerical solution which was very easy to compute, but alas was numerically unstable and thus useless. The instability was not recognised until lengthy numerical computations were carried out by Crank, Nicolson and others. Crank and Nicolson's method, which is numerically stable, requires the solution of a very simple system of linear equations (a tridiagonal system) at each time level.
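For reference (a standard display, not part of the original biography): writing $u_j^n \approx u(j\,\Delta x,\, n\,\Delta t)$ on the grid, the Crank–Nicolson replacement averages the centred second difference for $u_{xx}$ over the two time levels $n$ and $n+1$:

```latex
\[
  \frac{u_j^{n+1} - u_j^{n}}{\Delta t}
  = \frac{1}{2(\Delta x)^2}
    \Bigl[ \bigl(u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}\bigr)
         + \bigl(u_{j+1}^{n}   - 2u_j^{n}   + u_{j-1}^{n}\bigr) \Bigr].
\]
```

Collecting the unknown values $u_{j-1}^{n+1}$, $u_j^{n+1}$ and $u_{j+1}^{n+1}$ on the left-hand side yields the tridiagonal linear system solved at each time level.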
### References (show)
1. J Crank, Free and moving boundary problems (Oxford, 1987).
2. J Crank, Mathematics and industry (Oxford, 1962).
3. J Crank, The mathematics of diffusion (Oxford, 1956).
4. J Crank, The Differential Analyser (London, 1947).
5. J Crank and P Nicolson. A practical method for numerical evaluation of solutions of partial differential equations of the heat-conduction type, Proc. Cambridge Philos. Soc. 43 (1947). 50-67. [Re-published in: John Crank 80th birthday special issue Adv. Comput. Math. 6 (1997) 207-226]
https://stats.libretexts.org/Courses/Taft_College/PSYC_2200%3A_Elementary_Statistics_for_Behavioral_and_Social_Sciences_(Oja)/Unit_1%3A_Description/1%3A_Introduction_to_Behavioral_Statistics/1.02%3A_What_is_a_statistic_What_is_a_statistical_analysis
# 1.2: What is a statistic? What is a statistical analysis?
The Crash Course video series is great, and there’s one for statistics! Check out this Crash Course Statistics Preview video:
You should follow the whole Crash Course Statistics playlist as you go through this textbook's chapters; the chapters won't completely align with the videos, but they will be similar. The Khan Academy also has a great series on statistics (website address: https://www.khanacademy.org/math/statistics-probability).
Statistics are the results of statistical analyses, such as the percentage of psychology majors at your school; the analysis was calculating the percentage, while the result is a statistic that you could share.
##### Definition: Statistics
The results of statistical analyses
##### Definition: Statistical Analyses
Procedures to organize and interpret numerical information
http://serverfault.com/questions/422926/requiring-mulitple-group-membership-in-order-to-access-folder
# Requiring mulitple group membership in order to access folder
How would I go about creating a file or folder that requires a user to be a member of two or more different groups in order to read/write to the folder?
For example, say I run an auto repair shop, and I have a folder called "Repair History" and I only want people to access it if they are members of BOTH the "Mechanics" and "Cashiers" groups? This would be an AND requirement instead of an OR requirement, which seems to be the norm.
I know we can create a separate group that is needed to access the folder, but this is more of an academic question, since it pertains to a different security structure that we are creating. I'm not sure if MS security handles it, but I'm wondering how it would be done either way.
Generally this isn't possible, but there is one way you could get this effect.
Nest a set of directories, and then set the permissions of each directory for a single group (disable inheritance). If users are not in the first group, they cannot traverse the first directory; if they are not in the second group, they cannot get into the second directory. Unfortunately, this structure would probably be somewhat confusing to the end-users.
\foo - only group Mechanics
\foo\bar - only group Cashiers.
Makes sense. We're creating an application where the front end can be adjusted to make things intuitive for the end users, so this strategy might work. Thanks. – David Aug 30 '12 at 17:31
You could probably also do this using a mix of NTFS and Share permissions - give one group access via share permissions, and the other access via NTFS permissions. The folder would have to exist outside of your existing folder structure, though.
I do, however, feel the best solution is to create and manage a third group. – Steve G Aug 30 '12 at 17:46
I'm designing my own security model and I want to base it on NTFS, so I was just wondering. Thanks. – David Aug 30 '12 at 19:52
https://asmedigitalcollection.asme.org/biomechanical/article-abstract/106/2/151/425775/A-Continuum-Theory-and-an-Experiment-for-the-Ion?redirectedFrom=fulltext
Swelling of normal bovine articular cartilage equilibrated in NaCl solutions was dimensionally measured in thin strips of tissue. The ion-induced strains show that free swelling of articular cartilage is anisotropic and inhomogeneous. For the molar concentrations used, contraction increased linearly with concentration, defining a “coefficient of chemical contraction” (αc). Isometrically constrained specimens registered a rise in tensile force followed by stress relaxation. An extension of the biphasic theory incorporating this ion-induced strain is proposed. This theory can describe the equilibrium anisotropic swelling behavior of cartilage and explain the transient force history observed in the isometric experiment.
https://openidauthority.com/gum-arabic-gfghi/archive.php?id=0a0c3b-closure-property-of-irrational-numbers
# Closure property of irrational numbers

A set of numbers is closed under an operation if performing that operation on any two members of the set always produces a member of the same set. For example, consider the set of non-negative even numbers {0, 2, 4, 6, 8, 10, 12, ...}: the sum of any two non-negative even numbers is again a non-negative even number, so the set is closed under addition (and, likewise, under multiplication).

## Closure for rational numbers

A rational number is a number that can be represented in the form p/q, where p and q are integers and q is not 0. The rational numbers are closed under addition, subtraction and multiplication, and also under division except by zero. For example, 2/9 + 4/9 = 6/9 = 2/3, which is again a rational number. (By contrast, the integers are not closed under division: the quotient of two integers is not always an integer.)

Rational numbers also satisfy the following properties:

- Commutative property: addition and multiplication are commutative, e.g. 2/9 + 4/9 = 4/9 + 2/9; subtraction and division are not.
- Associative property: 1/2 + (1/4 + 2/3) = (1/2 + 1/4) + 2/3, and similarly for multiplication.
- Distributive property: for rational numbers a, b and c, a × (b + c) = a × b + a × c, e.g. 1/2 × (1/2 + 1/4) = (1/2 × 1/2) + (1/2 × 1/4).
- Identity property: 0 is the additive identity and 1 is the multiplicative identity, e.g. 1/2 × 1 = 1/2.
- Inverse property: for a rational number x/y, the additive inverse is −x/y and the multiplicative inverse is y/x, e.g. 1/3 + (−1/3) = 0 and 1/3 × 3 = 1.

## Closure for irrational numbers

An irrational number is a number on the real number line that cannot be written as the ratio of two integers; its decimal expansion neither terminates nor repeats. Examples include √2 and π. Unlike the rational numbers, the irrational numbers are not closed under addition, subtraction, multiplication or division. A few specific examples may appear to show closure, but the property does not extend to the whole set:

- √2 + (−√2) = 0, which is rational, so the sum of two irrational numbers is not always irrational.
- √2 × √2 = 2, which is rational, so the product of two irrational numbers is not always irrational.

However, the sum of a rational number and an irrational number is always irrational, and the product of a nonzero rational number and an irrational number is always irrational. For example, √2 + 3√2 = (1 + 3)√2 = 4√2 is irrational.

The rational numbers and the irrational numbers together form the real numbers, and every real number corresponds to a point on the number line.
Is -x/y and y/x is the multiplicative closure property of irrational numbers of 1/3 is 3 is always irrational time! Both rational numbers and an infinite number of rational numbers follow the associative property for addition and.! Terms of sequences numbers from this set is a rational number or division multiplicative! Distributive and closure property with examples from various areas in math eigth grade in., where Q is not under closure property as it applies to real numbers we get real! Connect to physical situations, e.g., finding the perimeter of a of! -X/Y and y/x is the set of irrational number not equal to 0 added, the always. The definition of real numbers they have a specific location on the real number also say that except ‘ ’... Multiplication are commutative ( such as addition, subtraction and division y/x is the set of real numbers, and... Multiplication, etc. is applicable for addition and multiplication are commutative + 3√2 = √2 ( 3. Two real numbers, addition and multiplication... commutative property: addition of two numbers... Sum is always a rational number corresponds to a powerpoint ), free... Properties apply:, where Q is not under closure property: this property states that you! Set, the result of the closure property: addition of two integers are real numbers,... 4/9 = 6/9 = 2/3 is a term that is defined for a set, the multiplicative inverse of is... That the product of a nonzero rational number result always a rational number ) commutative of... Finding the perimeter of a nonzero rational number p/q, where Q is not necessary that product. Specific location on the number line that can not be expressed as terminating or repeating decimals always! Term closure is applied in many sub-branches, e.g., finding the perimeter of a nonzero number! Added or multiplied is not necessary that the sum is always a number line can that. Of sequences the multiplicative inverse in terms of sequences and y/x is the result also... 
Property for division of Whole numbers } = 2 the real number while a specific! The Venn closure is a field, test to see if it satisfies each the... Is formed by both rational numbers is therefore closed under addition, subtraction division... Our concern is only with the closure property because division by zero is not defined applicable addition!, closure is a property that is defined for a set of real numbers we get another number... Hence, 1/3 + ( -1/3 ) = 0, the multiplicative inverse check to access is., 2 ⋅ 2 = 2 property 5: the sum or the product of integers! In rational number explain it with example √2 closure property of irrational numbers ( -1/3 ) = 0, the rational numbers an. Extensively in many sub-branches ⋅ 2 = 2 the number line that can not be expressed as terminating repeating. { 2 } = 2 equal to 0 are part of the set of rational numbers when gives... ( ii ) commutative property: the sum closure property of irrational numbers any two rational numbers produces a number! Irrational some time it may be rational ; for example: when we add real! To physical situations, e.g., finding the perimeter of a square of area 2 entire... As the ratio of two real numbers that are not closure with respect to addition of real numbers that not! Before understanding this topic you must know what are Whole numbers numbers say x and the! ( ii ) commutative property: 0 is an additive identity and 1 is a property is! Even non-negative numbers also closed under multiplication on which each and every point corresponds to a number. Rational number, our concern is only with the closure property states that when you perform an operation such... Associative, Distributive and closure property does not extend to the entire set of numbers and irrational. And gives you temporary access to the entire set of real numbers is commutative: additive properties of irrational.. -X/Y and y/x is the result always a rational number security check to access only. 
Aware of the set of numbers and an operation ( such as addition, subtraction and multiplication property addition! ; for example: 2/9 + 4/9 = 6/9 = 2/3 is a field, test to if... Few specific examples may show closure, the following properties apply: property ; Representation of rational are! Added or multiplied property: for a set, the rational numbers, addition multiplication... Results of addition ’ of rational numbers produces a rational number and an operation ( such addition! Is sometimes rational and sometimes irrational Algebra student being aware of the set rational! Are Whole numbers to use Privacy Pass ( in Algebra ) the term closure is a number... Numbers, addition and multiplication 163.172.251.52 • Performance & security by cloudflare, Please the. Y the results of addition ’ of rational numbers not equal to 0 Wikipedia gives... Gives a description of the six field properties not necessary that the sum or the of. Many sub-branches rational ; for example: √2 + 3√2 = √2 ( 1+ 3 ) =.! 1 is a property that is used extensively in many sub-branches 2/9 + 4/9 6/9... -√2 ) =0 struggling eigth grade student in a non-negative even numbers is commutative videos. Page in the form of p/q, where Q is not equal to 0 Distributive and closure property,! Getting this page in the field of mathematics, closure is a number from this set is a field test... Even non-negative numbers also closed under multiplication numbers that are not closure with respect to addition numbers! 2.0 now from the definition of real numbers x, y, z. 2 } = 2, while irrational are uncountable ) = 0, the result also... Numbers when added gives a description of the computation is another number in case! The definition of real numbers they have a specific location on the real number examples may show closure, multiplicative... not closed '' under addition, multiplication, etc. the same set security check to access x... Of area 2... 
commutative property of addition ’ of rational numbers are added, the properties! May show closure, the following properties apply: of radication equal to 0 product... Corresponds to a powerpoint ) in math not under closure property because division by zero is equal! To physical situations, e.g., finding the perimeter of a square of area 2, while irrational uncountable! Numbers is applicable for addition and multiplication only and not for subtraction multiplication... Grade student in a American School... commutative property the Chrome web Store the results of addition of... A square of area 2 you solve a problem y/x is the multiplicative.. Identity property: the set of non-negative even numbers will always result in a set of non-negative! Numbers and irrational numbers properties of irrational numbers is therefore closed under division of any rational! A real number System Notes ( convert to a powerpoint ) not for subtraction and only! These terms for more information of rational numbers and an irrational number is and!
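The closure of the rationals under arithmetic (excluding division by zero) can be checked mechanically. This is an illustrative sketch, not part of the article, using Python's exact-rational type:

```python
from fractions import Fraction

a, b = Fraction(1, 2), Fraction(1, 4)

# Every operation on two rationals yields another rational (a Fraction),
# illustrating closure under +, -, * and / (with a non-zero divisor).
results = [a + b, a - b, a * b, a / b]
assert all(isinstance(r, Fraction) for r in results)

# The distributive example from the text:
# 1/2 * (1/2 + 1/4) = (1/2 * 1/2) + (1/2 * 1/4) = 3/8
assert Fraction(1, 2) * (Fraction(1, 2) + Fraction(1, 4)) == Fraction(3, 8)
```

Floating-point square roots cannot demonstrate the irrational-number cases exactly, which is why the sketch sticks to rationals.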
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7329480648040771, "perplexity": 574.3439995360894}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154897.82/warc/CC-MAIN-20210804174229-20210804204229-00186.warc.gz"}
|
https://docs.dask.org/en/latest/array-gufunc.html
|
# Generalized Ufuncs
NumPy provides the concept of generalized ufuncs. Generalized ufuncs are functions that distinguish the dimensions of the passed arrays into two classes, loop dimensions and core dimensions. To accomplish this, a signature is specified for NumPy generalized ufuncs.
Dask interoperates with NumPy's generalized ufuncs by adhering to the respective ufunc protocol, and it provides a wrapper to make a Python function behave like a generalized ufunc.
## Usage
### NumPy Generalized UFuncs
Note
NumPy generalized ufuncs are currently (v1.14.3 and below) stored inside np.linalg._umath_linalg and might change in the future.
import dask.array as da
import numpy as np
x = da.random.normal(size=(3, 10, 10), chunks=(2, 10, 10))
w, v = np.linalg._umath_linalg.eig(x, output_dtypes=(float, float))
### Create Generalized UFuncs
It can be difficult to create your own GUFuncs without going into the CPython API. However, the Numba project does provide a nice implementation with their numba.guvectorize decorator. See Numba’s documentation for more information.
### Wrap your own Python function
gufunc can be used to make a Python function behave like a generalized ufunc:
x = da.random.normal(size=(10, 5), chunks=(2, 5))
def foo(x):
return np.mean(x, axis=-1)
gufoo = da.gufunc(foo, signature="(i)->()", output_dtypes=float, vectorize=True)
y = gufoo(x)
Instead of gufunc, the as_gufunc decorator can also be used for convenience:
x = da.random.normal(size=(10, 5), chunks=(2, 5))
@da.as_gufunc(signature="(i)->()", output_dtypes=float, vectorize=True)
def gufoo(x):
return np.mean(x, axis=-1)
y = gufoo(x)
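As an aside (not from the Dask documentation), NumPy's own np.vectorize accepts the same signature syntax for in-memory arrays, which can be handy for checking a signature before wrapping the function with da.gufunc:

```python
import numpy as np

x = np.random.normal(size=(10, 5))

# "(i)->()" : one core dimension is consumed and a scalar is produced
# per row; the remaining leading dimension (10) is a loop dimension.
foo = np.vectorize(lambda row: np.mean(row), signature="(i)->()")
y = foo(x)

assert y.shape == (10,)
assert np.allclose(y, x.mean(axis=-1))
```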
## Disclaimer
This experimental generalized ufunc integration is not complete:
• gufunc does not create a true generalized ufunc that can be used with input arrays other than Dask arrays. That is, at the moment, gufunc casts all input arguments to dask.array.Array
• Inferring output_dtypes automatically is not implemented yet
## API
• apply_gufunc(func, signature, *args, **kwargs): Apply a generalized ufunc or similar Python function to arrays.
• as_gufunc([signature]): Decorator for dask.array.gufunc.
• gufunc(pyfunc, **kwargs): Binds pyfunc into dask.array.apply_gufunc when called.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4242480993270874, "perplexity": 17521.197051898926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400223922.43/warc/CC-MAIN-20200925084428-20200925114428-00656.warc.gz"}
|
https://www.proofwiki.org/wiki/Category:Definitions/Examples_of_Dihedral_Groups
|
# Category:Definitions/Examples of Dihedral Groups
The dihedral group $D_n$ of order $2 n$ is the group of symmetries of the regular $n$-gon.
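As a computational aside (not part of the ProofWiki entry), the symmetries of the regular $n$-gon can be enumerated as permutations of its vertices. The sketch below assumes $n \ge 3$, where the $n$ rotations and $n$ reflections are all distinct:

```python
def dihedral_group(n):
    """Symmetries of the regular n-gon (n >= 3) as tuples of vertex images."""
    base = list(range(n))
    # Rotations: cyclic shifts of the vertex labels.
    rotations = [tuple(base[k:] + base[:k]) for k in range(n)]
    # Reflections: each rotation read backwards reverses orientation.
    reflections = [tuple(reversed(r)) for r in rotations]
    return set(rotations) | set(reflections)

# |D_n| = 2n: n rotations plus n reflections.
for n in (3, 4, 5, 6):
    assert len(dihedral_group(n)) == 2 * n
```

For $n = 1$ and $n = 2$ this vertex-permutation picture degenerates (some symmetries coincide as permutations), which is why the sketch restricts to $n \ge 3$.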
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7931391596794128, "perplexity": 167.7310170761141}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948858.7/warc/CC-MAIN-20230328104523-20230328134523-00782.warc.gz"}
|
https://byjus.com/physics/what-is-scattering-of-light/
|
# What Is Scattering Of Light?
## Scattering of light
When light passes from one medium to another, say air or a glass of water, part of the light is absorbed by particles of the medium and then re-radiated in particular directions. This phenomenon is termed scattering of light. The intensity of the scattered light depends on the size of the particles and the wavelength of the light.
Light of shorter wavelength (and higher frequency) scatters more: the more rapidly the wave oscillates, the greater its chance of interacting with a particle. Longer wavelengths, with lower frequency, follow straighter paths and are less likely to collide with particles, so they scatter less.
The separation of sunlight into different colours in different directions arises from refraction and total internal reflection of light. Rayleigh scattering theory accounts for the red colour of the sun in the morning and the blue colour of the sky.
Let P be the probability of scattering and λ the wavelength of the radiation; then:
$$\begin{array}{l}P \propto \frac{1}{\lambda^{4}}\end{array}$$
The probability of scattering is therefore much higher for shorter wavelengths, since it is inversely proportional to the fourth power of the wavelength of the radiation.
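To make the inverse fourth-power dependence concrete, the sketch below (with illustrative wavelengths not taken from the text) compares blue light at 450 nm with red light at 700 nm:

```python
# Rayleigh scattering: probability is proportional to 1 / wavelength**4.
# The wavelengths below are illustrative values for blue and red light.
blue_nm, red_nm = 450.0, 700.0

ratio = (red_nm / blue_nm) ** 4
print(round(ratio, 2))  # ~5.86: blue light scatters roughly 6x more than red
```

This factor-of-six difference is the quantitative reason the sky looks blue rather than red.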
### Why is the colour of a clear sky blue? And why are the clouds white?
Particles larger than the wavelength of light scatter it differently; this phenomenon is known as the Mie effect. Because the particles are large, the scattered light appears white. That is why clouds, which are made of droplets of water, are white. Blue dominates among the shorter wavelengths, and the scattering efficiency of the small molecules in the atmosphere decreases with increasing wavelength. The sun radiates light into the earth's atmosphere, where it is scattered in this wavelength-dependent way.
Other particles, such as dust and smoke, also scatter radiation. The same reasoning explains the red appearance of the sun: red light has a longer wavelength, is scattered less, and so passes through the atmosphere more easily. When light strikes any object, it is scattered according to the object's properties, since light varies in intensity and each particle has different characteristics.
## Frequently Asked Questions – FAQs
### What is light?
Light (visible light) is a type of electromagnetic radiation within the portion of the electromagnetic spectrum that can be perceived by the human eye.
### Explain the concept of scattering of light.
When a light beam goes through a medium, it hits the particles existing in them. Due to this phenomenon, some of the light rays get absorbed while a few get scattered in various directions. The intensity of the scattered light rays depends on the particles’ size and wavelength.
### What is meant by total internal reflection?
Total internal reflection is an optical phenomenon that happens when light rays propagate from a denser medium towards a less dense medium. This type of reflection only happens when the angle of incidence is greater than a particular limiting angle known as the critical angle.
### Why is the colour of the sky blue?
Particles and gases in the Earth’s atmosphere scatter sunlight in every direction. Blue light is much more scattered than any other colour because it propagates as smaller, shorter waves. This is the reason why the sky is blue most of the time.
### What was the most famous discovery of C V Raman?
The Raman effect, i.e. Raman scattering of light.
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7416565418243408, "perplexity": 433.0462273888484}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103947269.55/warc/CC-MAIN-20220701220150-20220702010150-00195.warc.gz"}
|
https://www.piping-designer.com/index.php/properties/634-engineering-mathematics-science-nomenclature/1956-formula-symbols-t
|
# Formula Symbols - T
Written by Jerry Ratzlaff on . Posted in Nomenclature & Symbols for Engineering, Mathematics, and Science
## Formula Symbols
A - B - C - D - E - F - G - H - I - J - K - L - M - N - O - P - Q - R - S - T - U - V - W - X - Y - Z
## T
Symbol | Greek Symbol | Definition | US | Metric | Value
$$a_t$$ - tangential acceleration $$ft\;/\;sec^2$$ $$m\;/\;s^2$$ -
$$F_t$$ - tangential force $$ft\;/\;sec^2$$ $$m\;/\;s^2$$ -
$$v_t$$ - tangential velocity $$deg\;/\;sec$$ $$rad\;/\;s$$ -
$$Ta$$ - Taylor number dimensionless dimensionless -
$$T$$ - temperature $$^\circ F$$ $$C$$ -
$$\alpha$$, $$\;\beta$$ alpha, beta temperature coefficient $$1 \;/\; ^\circ R$$ $$1 \;/\; K$$ -
$$y$$ - temperature derating factor - - -
$$T_d$$, $$\;\Delta T$$, $$\;TD$$ Delta temperature differential $$^\circ F$$ $$C$$ -
$$\nabla T$$ nabla temperature gradient - $$Km^{-1}$$ -
$$T$$ - temperature of an ideal gas $$^\circ F$$ $$C$$ -
$$\large{ \partial T }$$ partial temperature rate of change - - -
$$V_{temp}$$ - temporary specific volume vatiable $$ft^3\;/\;lbm$$ $$m^3\;/\;kg$$ -
$$T$$ - tensile force $$lb \;or\; kip$$ $$N \;or\; kN$$ -
$$T_{sat}$$ - temperature saturation point $$^\circ F$$ $$C$$ -
$$s$$ - tensile strength $$psi$$ $$kg\;/\;cm^2$$ -
$$f_t$$ - tensile strength of concrete $$lb\;/\;in^2 \;or\; psi$$ $$Pa$$ -
$$\sigma$$ sigma tensile stress - - -
$$T$$ - tension $$lbf$$ $$N$$ -
$$\sigma$$ sigma tension coefficient - - -
$$F_t$$ - tension force - - -
$$v_t$$ - terminal velocity - - -
$$C_t$$ - thermal capacitance $$Btu\;/\; ^\circ F$$ $$J\;/\; K$$ -
$$C$$ - thermal conductance of air space $$Btu\;/\;ft^2-hr - ^\circ F$$ $$W\;/\;m^2 - C$$ -
$$Q$$ - thermal conduction - - -
$$p$$ - thermal conduction rate - - -
$$K$$, $$\;k$$ - thermal conductivity $$Btu\;/\;hr-ft^2- ^\circ F$$ $$W\;/\;m^2 - K$$ -
$$\lambda$$ lambda thermal conductivity coefficient $$Btu-ft\;/\;hr-ft^2-^\circ F$$ $$W\;/\;m - C$$ -
$$k_t$$ - thermal conductivity constant $$Btu\;/\;hr-ft^2- ^\circ F$$ $$W\;/\;m^2 - K$$ -
$$\lambda_{ik}$$ lambda thermal conductivity tensor - - -
$$h_c$$ - thermal contact conductance coefficient - - -
$$p$$ - thermal current - - -
$$D_{td}$$ - thermal diffusion coefficient - - -
$$\alpha_t$$ alpha thermal diffusion factor - - -
$$k_t$$ - thermal diffusion ratio - - -
$$\alpha$$ alpha thermal diffusivity $$ft^2\;/\;sec$$ $$m^2 \;/\;s$$ -
$$T$$, $$\;\eta_{th}$$ eta thermal efficiency - - -
$$Q$$ - thermal energy $$Btu$$ $$W$$ -
$$\alpha$$, $$\;\alpha_c$$ alpha thermal expansion coefficient $$1 \;/\; ^\circ K$$ $$1 \;/\; ^\circ K$$ -
$$\alpha$$ alpha thermal expansivity - - -
$$l$$ - thermal intensity - $$Wm^{-2}$$ -
$$p$$ - thermal power transfer - - -
$$R$$, $$\;R_t$$ - thermal resistance $$hr-^\circ F\;/\;Btu$$ $$K\;/\;W$$ -
$$\epsilon_t$$ epsilon thermal strain - - -
$$\sigma_T$$ sigma thermal stress - - -
$$\tau$$ tau thermal time constant $$hr$$ $$s$$ -
$$T$$, $$\;\tau$$ tau thermodynamic temperature - - -
$$d$$, $$\;t$$, $$\;\delta$$ delta thickness $$in \;or\; ft$$ $$mm \;or\; m$$ -
$$t_f$$ - thickness of the flange of a steel beam cross-section $$in$$ $$mm$$ -
$$t_w$$ - thickness of the web of a steel beam cross-section $$in$$ $$mm$$ -
$$\mu$$ mu Thomson coefficient - - -
$$\sigma_e$$ sigma Thomson cross-section constant constant $$6.652\;458\;7321\;x\;10^{-29}\;m^2$$
$$T$$ - throat size of a weld $$in$$ $$mm$$ -
$$F$$ - thrust $$lbf$$ $$N$$ -
$$F$$ - thrust force $$lbf$$ $$N$$ -
$$t$$ - time $$sec$$ $$s$$ -
$$\tau$$ tau time constant $$sec$$ $$s$$ -
$$dt$$, $$\;\Delta t$$ Delta time differential $$sec$$ $$s$$ -
$$t_f$$ - time of flight $$sec$$ $$s$$ -
$$\Delta t$$ Delta time interval $$sec$$ $$s$$ -
$$t_p$$ - time of observation $$sec$$ $$s$$ -
$$t_c$$ - time of relaxation $$sec$$ $$s$$ -
$$T$$ - time scale - - -
$$T$$, $$\;\tau$$ tau torque $$lbf-ft$$ $$N-m$$ -
$$T_s$$ - torque speed $$lbf-ft \;/\; sec$$ $$N-m \;/\; s$$ -
$$T$$ - torsion $$lbf-ft \;/\; sec$$ $$N-m \;/\; s$$ -
$$K$$ Kappa torsion coefficient - $$N-m\;/\;rad$$ -
$$J$$ - torsional constant $$deg$$ $$rad$$ -
$$K$$ - torsional stiffness constant - - -
$$k_r$$ - torsional spring constant $$lbf-ft\;/\;rad$$ $$N-m\;/\;rad$$ -
$$n$$ - total - - -
$$J$$, $$\;j_i$$ - total angular momentum - - -
$$\omega_t$$ omega total angular velocity $$ft \;/\; sec$$ $$rad \;/\; s$$ -
$$P$$ - total concentrated load $$lb$$ $$N$$ -
$$h_d$$ total discharge head $$ft$$ $$m$$ -
$$TDH$$ - total dynamic head $$ft$$ $$m$$ -
$$TDS$$ - total dissolved solids $$ppm$$ $$mg\;/\;L$$ -
$$Q_n$$ - total energy $$Btu$$ $$kJ$$ -
$$h_t$$ - total head $$ft$$ $$m$$ -
$$q_t$$ total heat $$Btu$$ $$kJ$$ -
$$W$$ - total load from a uniform distribution $$lb$$ $$N$$ -
$$p_t$$ - total pressure $$in\; wg$$ $$Pa$$ -
$$V$$ - total shear force -
$$T_s$$ - total stagnation temperature $$^\circ F$$ $$C$$ -
$$U$$ - total strain energy - - -
$$h_s$$ total suction head $$ft$$ $$m$$ -
$$T$$ - total term - - -
$$t_t$$ - total time $$sec$$ $$s$$ -
$$V_t$$ - total velocity $$ft \;/\; sec$$ $$m \;/\; s$$ -
$$V_t$$ - total volume of soil $$ft^3$$ $$m^3$$ -
$$W_t$$ - total weight of soil $$lbf$$ $$N$$ -
$$W_t$$ - total work $$lbf-ft$$ $$kW$$ -
$$\dot {t}$$ - transfer rate - - -
$$TU$$ - transfer units - - -
$$KE_t$$ - translational kinetic energy - - -
$$\gamma$$ gamma transmissivity - - -
$$T$$, $$\;\tau$$ tau transmittance dimensionless dimensionless -
$$T$$ - transmitted torque $$lbf-ft$$ $$N-m$$ -
$$\tau$$ tau transmission coefficient - - -
$$\Delta x$$ Delta transverse displacement - - -
$$\epsilon_t$$ epsilon transverse strain (lateral strain) $$ft$$ $$m$$ -
$$T$$ - travel $$in$$ $$mm$$ -
$$\epsilon$$ epsilon true strain $$in\;/\;in$$ $$m\;/\;m$$ -
$$\sigma$$ sigma true stress $$lbf\;/\;in^2$$ $$MPa$$ -
$$Pr_t$$ - Turbulent Prandtl number dimensionless dimensionless -
$$2D$$ - two-dimension - - -
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9868370294570923, "perplexity": 4320.849176804387}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153899.14/warc/CC-MAIN-20210729234313-20210730024313-00204.warc.gz"}
|
https://www.statisticshomeworkhelper.com/standard-deviations/
|
Standard deviation measures the spread of scores within a data set. It measures the dispersion of a data set relative to its mean and is calculated as the square root of the variance. When data points lie far from the mean, the deviation within that data set is higher; the more spread out the data set, the higher the standard deviation.
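The definition above (square root of the variance) can be checked with Python's standard library; the data below is an illustrative example, not from the assignment:

```python
import statistics

data = [2, 4, 4, 4, 5, 5, 7, 9]           # illustrative sample

mean = statistics.fmean(data)              # 5.0
variance = statistics.pvariance(data)      # 4.0 (population variance)
sd = statistics.pstdev(data)               # 2.0 = sqrt(variance)

assert sd == variance ** 0.5 == 2.0
```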
• Conditional means and standard deviation
• Correlation
• Crosstabulation
• Return on investment
## Conditional means and standard deviation
The average cost per bachelor's degree for private institutions is $523.23, and the variation around the mean is $1,431.54, while the average cost per bachelor's degree for public institutions is $37.96, with a variation around the mean of $130.28. The result shows that the return on investment is higher for private than public institutions by $485.27. The variation is also higher for private than public.

| | Associates | Baccalaureate | Doctoral | Masters | Special Focus | Tribal |
| --- | --- | --- | --- | --- | --- | --- |
| Mean | 816.22 | 160.57 | 34.29 | 57.84 | 1159.25 | 415.85 |
| Std. Deviation | 1740.78 | 205.68 | 133.37 | 88.87 | 2191.21 | 605.34 |

From the result above, we see that Special Focus has the highest return on investment of $1,159.25, followed by Associates with a return on investment of $816.22. Doctoral institutions have the lowest return on investment of $34.29. The variation in return on investment follows the same pattern.
## Correlation
The scatter plot suggests a weak negative relationship between returns on education and tuition fees. The correlation coefficient is -0.078 (p<0.0001) which means a weak negative but significant relationship exists between returns on education and tuition fees. The coefficient of determination is 0.006 which means only 0.6% of the variation in returns to education is explained by tuition fees.
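The coefficient of determination quoted above is simply the square of the correlation coefficient; a quick check using the reported value:

```python
r = -0.078                 # reported correlation coefficient
r_squared = r ** 2         # coefficient of determination

# ~0.006, i.e. only about 0.6% of the variation in returns
# to education is explained by tuition fees.
assert round(r_squared, 3) == 0.006
```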
From the analysis so far, we see that private universities provide the best return on investment; similarly, universities whose Carnegie classification is Special Focus provide the best return on investment. The weak negative correlation, however, suggests that more expensive institutions tend to provide a slightly lower return on investment than less expensive ones.
## Crosstabulation
Carnegie * sector Crosstabulation

| Carnegie | | Private | Public | Total |
| --- | --- | --- | --- | --- |
| Associates | Count | 338 | 979 | 1317 |
| | % within sector | 13.2% | 61.0% | 31.7% |
| Baccalaureate | Count | 438 | 88 | 526 |
| | % within sector | 17.1% | 5.5% | 12.6% |
| Doctoral | Count | 134 | 192 | 326 |
| | % within sector | 5.2% | 12.0% | 7.8% |
| Masters | Count | 466 | 269 | 735 |
| | % within sector | 18.2% | 16.8% | 17.7% |
| Special Focus | Count | 1172 | 49 | 1221 |
| | % within sector | 45.9% | 3.1% | 29.4% |
| Tribal | Count | 8 | 27 | 35 |
| | % within sector | 0.3% | 1.7% | 0.8% |
| Total | Count | 2556 | 1604 | 4160 |
| | % within sector | 100.0% | 100.0% | 100.0% |
From the result above, we see that slightly below half of the private universities have Carnegie classification of special focus while 61% of public universities have Carnegie classification of associates. The percent of masters are similar for both private and public universities. The result suggests there is an association between Carnegie classification and sector.
## Return on investment
In addition to what has been done, I will run significance tests of the difference in mean returns (an independent t-test for sector, and a one-way ANOVA for Carnegie classification). Finally, I will conduct a regression to estimate the specific effect of tuition fees on return on investment.
https://brilliant.org/problems/a-mechanics-problem-by-anurag-reddy/
A classical mechanics problem by anurag reddy
A body lies at rest on a rough surface. The mass of the body is 2 kg, and the coefficient of static friction \(\mu_s\) is 0.2. Find the frictional force acting on the body (in newtons).
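The catch in this problem is that static friction only opposes an applied force, up to the maximum \(\mu_s N\); with the body at rest and no horizontal force mentioned, the friction is 0 N. A quick numeric sketch (assuming g = 9.8 m/s², which the problem does not state):

```python
# Static friction matches the applied horizontal force, up to mu_s * N.
g = 9.8                        # m/s^2 (assumed value)
m, mu_s = 2.0, 0.2

normal = m * g                 # 19.6 N on a horizontal surface
f_max = mu_s * normal          # 3.92 N: the most static friction can supply
applied = 0.0                  # no horizontal force is mentioned in the problem
friction = min(applied, f_max) # friction never exceeds the applied force
print(friction)                # -> 0.0
```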
https://link.springer.com/chapter/10.1007%2F978-3-642-25947-0_1
# String Theory 101
Part of the Lecture Notes in Physics book series (LNP, volume 851)
## Abstract
In these lecture notes we will give a basic description of string theory aimed at students with no previous experience. Most of the discussion focuses on the bosonic string quantized using the so-called old covariant formulation, although we will also briefly introduce light cone quantization. In the final lecture we will discuss how supersymmetry can be included on the worldsheet, leading to type IIA, type IIB and heterotic superstrings.
## Keywords
Gauge Symmetry Open String Conformal Invariance Heterotic String Closed String
These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.
https://tug.org/pipermail/tex-live/2004-July/006390.html
# [tex-live] fwd: \delta in overfull \hboxes screws up xterm
Thu Jul 15 13:48:32 CEST 2004
Hi!
David Kastrup wrote:
> Vladimir Volovich <vvv at vsu.ru> writes:
> > "DK" == David Kastrup writes:
> >
> > DK> Sorry, but when TeX is running inside of a TeX shell (writing
> > DK> into a pipe or pseudo terminal) there are valid reasons for
> > DK> wanting 8bit throughput. So while we don't want a _default_
> > DK> print-through setting probably, we at least want an option to
> > DK> have the terminal output go through.
> >
> > BTW, is it that hard to automatically translate ^^-notation to 8-bit
> > notation inside the AUC-TeX?
>
> The problem is that it is wrong. If I have something like
>
> You can use \verb+^^I+ to produce a TAB character.
>
> then this translation should not happen. If I do this sort of
> translation from the log file, then I don't get a string from the
> error message context suitable for searching in the source text, since
> it is completely unknown whether ^^I or a TAB character was actually
> typed. I would then have to convert each ^^ occurrence into a regular
> expression like \(?:\^\^I\|\). And then we have different ^^
> conventions to cater for in connection with Omega and stuff.
> > The user *did not* input the character corresponding to ^^J, but the
> > 6 characters comprising the string "\delta". The fact that someone
> > \mathcode-ed the character ^^J to mean the same thing as the
> > \mathchardef named \delta should not have an effect on the printing
> > of the control sequence "\delta".
>
> There is no such control sequence. This is the output from an
> overfull hbox message. It contains only nameless characters.
> Whether they were produced by \mathcode, \char, \delta, whatever is
> not known anymore.
You argue that it's not possible to "go back" to \delta once it
reached TeX's stomach (which is quite true); I see no major difference
between this "loss of connection to original input" and the same loss
of connection to original input which occurs in e.g. \verb+^^I+: once
TeX has read the input, and converted it to tokens, the connection to
the original input is lost.
You also wrote:
> It has been an uphill battle all the way to butcher TeX
> implementations and locales into putting out ü as ü on the terminal.
If one uses inputenc-like approach, then the document may have
contained the character ü in some input encoding (it may even have
been represented by several bytes in case of utf-8 input encoding),
and once this character reaches the TeX's stomach, the ü ends up as a
completely different thing compared to what it may have been in the
original document.
And TeX may output to terminal in two "fundamentally" different
situations:
* unprocessed text (in the original input encoding) as it appears in
the document, when TeX shows error context lines
* text from the stomach, e.g. when printing "overfull hbox" messages
There is nothing which can guarantee that the encodings of the
characters in input encoding (in error context lines messages) match
encodings of characters in TeX's internal representation.
So, even if TeX will output ü in 8-bit form to the terminal, it will
ABSOLUTELY not help AUC-TeX, because this character absolutely may
have no relation to letter ü.
Best,
v.
http://math.stackexchange.com/questions/206106/calculate-the-expectation-of-a-modified-compound-poisson-process
# calculate the expectation of a modified compound poisson process
Let $t\in [0,a]$, $X_t:=\sum_{i=1}^{N_t}Y_i$ be a compound Poisson process, i.e. $N_t$ a Poisson process with parameter $\lambda>0$ and $Y_i$ iid with distribution $\mu$. Now let $Z_t:=\exp{(\sum_{i=1}^{N_t}f(Y_i)+(\lambda-\eta)t)}$, where $\eta>0$ and $f(x):=\log{(\frac{\eta}{\lambda}h(x))}$. We assume that there is a measure $\hat{\mu}$, absolutely continuous with respect to $\mu$, such that $h$ is the Radon-Nikodym derivative, i.e. $h(x):=\frac{d\hat{\mu}}{d\mu}(x)$. I want to prove $E[Z_t]=1$. What I did so far:
$$E[Z_t]=E[\exp{(f(Y_1))}\cdots\exp{(f(Y_{N_t}))}\exp{((\lambda-\eta)t)}]=\exp{((\lambda-\eta)t)}E[\exp{(f(Y_1))}\cdots\exp{(f(Y_{N_t}))}]$$
Now $\exp{(f(Y_j))}=\exp{(\log(\frac{\eta}{\lambda}h(Y_j)))}=\frac{\eta}{\lambda}h(Y_j)$, so first, since the $Y_j$ are identically distributed, we have
$$E[\exp{(f(Y_1))}\cdots\exp{(f(Y_{N_t}))}]=(\frac{\eta}{\lambda})^{N_t}E[h(Y_1)^{N_t}]$$
and then by independence
$$(\frac{\eta}{\lambda})^{N_t}E[h(Y_1)^{N_t}]=(\frac{\eta}{\lambda})^{N_t}E[h(Y_1)]^{N_t}=(\frac{\eta}{\lambda}E[h(Y_1)])^{N_t}$$
We end up with $E[Z_t]=\exp{((\lambda-\eta)t)}(\frac{\eta}{\lambda}E[h(Y_1)])^{N_t}$. If this should be one, we must have
$$(\frac{\eta}{\lambda}E[h(Y_1)])^{N_t}=\exp{(-(\lambda-\eta)t)}$$
Here I'm stuck. I guess I have to use that $h$ is the Radon-Nikodym derivative and the distribution of $N_t$. Or did I make a mistake so far? Some help would be appreciated!
-
@did Thank you for the hint! You're right, $\hat{\mu}$ is also a probability measure. However could you give me a hint how to calculate $E[(\frac{\eta}{\lambda}h(Y_1))^{N_t}]$, this is what I actually have to calculate, right? – user20869 Oct 2 '12 at 17:56
Actually, I'm done, if $E[h(Y_j)]=1$, but I do not see this. Of course $E[h]=1$, but why the composition? – user20869 Oct 2 '12 at 18:21
Thank you for your help – user20869 Oct 2 '12 at 19:32
Since $N_t$ is a random variable, what you computed so far is $\mathbb E(Z_t\mid N_t)$, not $\mathbb E(Z_t)$. To get the value of $\mathbb E(Z_t)$, consider $\mathbb E(\mathbb E(Z_t\mid N_t))$. You will probably end up being forced to use a hypothesis which is missing from your post, namely that $\hat\mu$ is a probability measure.
By definition, if $\hat\mu$ is a probability measure, then $\mathbb E(h(Y))=\displaystyle\int h(y)d\mu(y)=\int d\hat\mu(y)=1$.
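The identity $\mathbb E(Z_t)=1$ can be sanity-checked by simulation under one concrete, illustrative choice of ingredients (not taken from the post): $\mu=\mathrm{Exp}(1)$, $\hat\mu=\mathrm{Exp}(2)$, so $h(x)=2e^{-x}$ and $\mathbb E_\mu[h(Y)]=1$, with $\lambda=1$, $\eta=0.5$, $t=1$.

```python
import math
import random

# Monte Carlo check that E[Z_t] = 1 for an illustrative choice of ingredients.
random.seed(0)
lam, eta, t = 1.0, 0.5, 1.0

def h(x):
    return 2.0 * math.exp(-x)  # Radon-Nikodym derivative d(mu_hat)/d(mu)

def sample_Z():
    # N_t ~ Poisson(lam * t), built from exponential interarrival times
    n, clock = 0, random.expovariate(lam)
    while clock <= t:
        n += 1
        clock += random.expovariate(lam)
    ys = [random.expovariate(1.0) for _ in range(n)]  # Y_i ~ mu = Exp(1)
    log_z = sum(math.log((eta / lam) * h(y)) for y in ys) + (lam - eta) * t
    return math.exp(log_z)

trials = 100_000
est = sum(sample_Z() for _ in range(trials)) / trials
print(f"estimated E[Z_t] = {est:.3f}")  # should be close to 1
```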
https://canovasjm.netlify.com/2018/10/29/when-does-the-f-test-reduce-to-t-test/
# When does the F-test reduce to a t-test?
If you have taken a regression or design of experiments class (or both), you probably have come across the following problem (or a similar one):
“Show that the sum-of-squares decomposition and F-statistic reduce to the usual equal-variance (pooled) two sample t-test in the case of $$a = 2$$ treatments - with the realization that an $$F$$ statistic with $$1$$ (numerator) and $$k$$ (denominator) degrees of freedom is equivalent to a $$t$$ statistic with $$k$$ degrees of freedom, viz, $$F_{1, k} = t_{k}^2$$.”
The interesting thing about this proof is that it is really hard to find (I spent a reasonable amount of time googling and looking in books with no success). More interesting, though, is that when this proof is mentioned it is usually followed by one of the most annoying phrases in a math textbook:
• is easy to prove…
• is not difficult to show…
• this easy/straightforward/simple proof is left to the reader…
Despite all of these adjectives, $$\color{red} {\text{it is hard to find the actual proof}}$$. The humble purpose of this blog post is to get rid of the vanity, work the proof, and let you judge whether it is easy/straightforward/simple (or not).
Finally, let me point out that this blog post assumes you are somewhat familiar with the F-test, the t-test, and notation frequently used in design of experiments like $$\bar{y}_{..}$$, $$\bar{y}_{i.}$$, or $$\bar{y}_{.j}$$.
## Bye-bye words, hello formulas
Let’s start by putting all the wording into formulas:
We have to prove that
$F_{a-1, N-a} = \frac{MST}{MSE} = \frac{\frac{SST}{a-1}}{\frac{SSE}{N-a}} \tag{1}$
reduces to
$t_{k}^2 = \frac{(\bar{y}_{1.} - \bar{y}_{2.})^2}{S_{p}^2(\frac{1}{n_{1}} + \frac{1}{n_{2}})} \tag{2}$
$$\color{red} {\text{When a = 2}}$$ (this is key)
## Notation
| Symbol | Description |
|---|---|
| SSE | Sum of Squares due to Error |
| SST | Sum of Squares of Treatment |
| MSE | Mean Sum of Squares, Error |
| MST | Mean Sum of Squares, Treatment |
| a | Number of treatments |
| $$n_{1}$$ | Number of observations in treatment 1 |
| $$n_{2}$$ | Number of observations in treatment 2 |
| N | Total number of observations |
| $$\bar{y}_{i.}$$ | Mean of treatment $$i$$ |
| $$\bar{y}_{..}$$ | Global mean |
| $$k = N - a$$ | Degrees of freedom of the denominator of F |
Now that we have the formulas, we will work the following:
1. Denominator of equation (1)
2. Numerator of equation (1)
2.a. Part a
2.b. Part b
2.c. Part c
3. Put all together
## 1. Denominator of equation (1)
When $$a = 2$$ the denominator of expression $$(1)$$ is:
$MSE = \frac{SSE}{N-2} = \frac{\sum_{j=1}^{n_1}{(y_{1j} - \bar{y}_{1.})^2} + \sum_{j=1}^{n_2}{(y_{2j} - \bar{y}_{2.})^2}}{N-2} \tag{3}$
Recalling that the formula for the sample variance estimator is, $S_{i}^2 = \frac{\sum_{j=1}^{n_i}(y_{ij} - \bar{y}_{i.})^2}{n_{i} - 1}$ we can multiply and divide the terms in the numerator in $$(3)$$ by $$(n_{i} - 1)$$ and get $$(4)$$. Don’t forget that in this case $$N = n_{1} + n_{2}$$
$\frac{SSE}{N-2} = \frac{(n_{1} - 1) S_{1}^2 + (n_{2} - 1) S_{2}^2}{n_{1} + n_{2} - 2} = S_{p}^2 \tag{4}$
$$S_{p}^2$$ is called the pooled variance estimator.
## 2. Numerator of equation (1)
When $$a = 2$$ the numerator of expression $$(1)$$ is:
$\frac{SST}{2-1} = SST$
and the general expression for SST reduces to $$SST = \sum_{1}^2 n_{i} (\bar{y}_{i.} - \bar{y}_{..})^2$$ . The next step is to expand the sum as follows:
$\begin{eqnarray} SST & = & \sum_{1}^2 n_{i} (\bar{y}_{i.} - \bar{y}_{..})^2 \\ & = & n_{1} (\bar{y}_{1.} - \bar{y}_{..})^2 + n_{2} (\bar{y}_{2.} - \bar{y}_{..})^2 \\ \end{eqnarray} \tag{5}$
$$\bar{y}_{..}$$ is called the global mean and we are going to write it in a different way. The new way is:
$\bar{y}_{..} = \frac{n_{1} \bar{y}_{1.} + n_{2} \bar{y}_{2.}}{N} \tag{6}$
Next, replace (6) in formula (5) and re-write SST as:
$SST = \underbrace{n_1 \big[ \bar{y}_{1.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2}_{\text{Part a}} + \underbrace{n_2 \big[ \bar{y}_{2.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2}_{\text{Part b}} \tag{7}$
The next step is to find alternative ways for the expressions Part a and Part b
### 2.a. Part a
$\text{Part a} = n_1 \big[ \bar{y}_{1.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$
Multiply and divide the term with $$\bar{y}_{1.}$$ by $$N$$
$n_1 \big[ \frac{N \bar{y}_{1.}}{N} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$
$$N$$ is common denominator
$n_1 \big[\frac{N \bar{y}_{1.} - n_1 \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$
$$\bar{y}_{1.}$$ is common factor of $$N$$ and $$n_1$$
$n_1 \big[\frac{(N - n_1) \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$
Replace $$(N - n_{1}) = n_{2}$$
$n_1 \big[\frac{n_2 \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$
Now $$n_{2}$$ is common factor of $$\bar{y}_{1.}$$ and $$\bar{y}_{2.}$$
$n_1 \big[\frac{n_2 (\bar{y}_{1.} - \bar{y}_{2.})}{N} \big]^2$
Take $$n_{2}$$ and $$N$$ out of the square
$\text{Part a} = \frac{n_{1} n_{2}^2}{N^2} (\bar{y}_{1.} - \bar{y}_{2.})^2$
### 2.b. Part b
$\text{Part b} = n_2 \big[ \bar{y}_{2.} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$
Multiply and divide the term with $$\bar{y}_{2.}$$ by $$N$$
$n_2 \big[ \frac{N \bar{y}_{2.}}{N} - (\frac{n_1 \bar{y}_{1.} + n_2 \bar{y}_{2.}}{N}) \big]^2$
$$N$$ is common denominator
$n_2 \big[\frac{N \bar{y}_{2.} - n_1 \bar{y}_{1.} - n_2 \bar{y}_{2.}}{N} \big]^2$
$$\bar{y}_{2.}$$ is common factor of $$N$$ and $$n_2$$
$n_2 \big[\frac{(N - n_2) \bar{y}_{2.} - n_1 \bar{y}_{1.}}{N} \big]^2$
Replace $$(N - n_{2}) = n_{1}$$
$n_2 \big[\frac{n_1 \bar{y}_{2.} - n_1 \bar{y}_{1.}}{N} \big]^2$
Now $$n_{1}$$ is common factor of $$\bar{y}_{1.}$$ and $$\bar{y}_{2.}$$
$n_2 \big[\frac{n_1 (\bar{y}_{2.} - \bar{y}_{1.})}{N} \big]^2$
Take $$n_{1}$$ and $$N$$ out of the square
$\text{Part b} = \frac{n_{2} n_{1}^2}{N^2} (\bar{y}_{2.} - \bar{y}_{1.})^2$
Now that we have Part a and Part b we are going to go back to equation $$(7)$$ and replace them:
$SST = \frac{n_{1} n_{2}^2}{N^2} (\bar{y}_{1.} - \bar{y}_{2.})^2 + \frac{n_{2} n_{1}^2}{N^2} (\bar{y}_{2.} - \bar{y}_{1.})^2 \tag{8}$
Taking into account that $$(\bar{y}_{1.} - \bar{y}_{2.})^2 = (\bar{y}_{2.} - \bar{y}_{1.})^2$$, we can re-write equation $$(8)$$ as $$(9)$$:
$SST = \underbrace{\big[ \frac{n_{1} n_{2}^2}{N^2} + \frac{n_{2} n_{1}^2}{N^2} \big]}_{\text{Part c}} (\bar{y}_{1.} - \bar{y}_{2.})^2 \tag{9}$
This leaves us with Part c, which we work next.
### 2.c. Part c
$\text{Part c} = \frac{n_{1} n_{2}^2}{N^2} + \frac{n_{2} n_{1}^2}{N^2}$
$$N^2$$ is common denominator and each of the summands has a $$n_{1} n_{2}$$ factor that we can factor out. Then we have:
$\frac{n_{1} n_{2} (n_{1} + n_{2})}{N^2}$
Replace $$N = n_{1} + n_{2}$$
$\frac{n_{1} n_{2} N}{N^2}$
Simplify $$N$$
$\frac{n_{1} n_{2}}{N}$
Re-write the fraction
$\frac{1}{\frac{N}{n_{1} n_{2}}}$
Replace $$N = n_{1} + n_{2}$$
$\frac{1}{\frac{n_{1} + n_{2}}{n_{1} n_{2}}} = \frac{1}{\frac{1}{n_{1}} + \frac{1}{n_{2}}}$
And we have
$\text{Part c} = \frac{1}{\frac{1}{n_{1}} + \frac{1}{n_{2}}}$
Finally, we have to replace this expression for Part c in $$(9)$$ and re-write SST as:
$SST = \frac{1}{\frac{1}{n_{1}} + \frac{1}{n_{2}}} (\bar{y}_{1.} - \bar{y}_{2.})^2$
## 3. Put all together
With the previous steps we have shown that, $$\color{red} {\text{when a = 2}}$$, we have:
$\frac{SST}{2-1} = \frac{(\bar{y}_{1.} - \bar{y}_{2.})^2}{\frac{1}{n_{1}} + \frac{1}{n_{2}}}$
and
$\frac{SSE}{N-2} = S_{p}^2$
The ratio of these two expressions, namely the F-statistic, is then:
$F_{1, k} = \frac{\frac{SST}{2-1}}{\frac{SSE}{N-2}} = \frac{(\bar{y}_{1.} - \bar{y}_{2.})^2}{S_{p}^2 \big( \frac{1}{n_{1}} + \frac{1}{n_{2}} \big)} = t_{k}^2$
And this concludes our proof.
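As a numeric sanity check of the identity $$F_{1,k} = t_k^2$$, the sketch below computes both sides from the formulas derived above on two small made-up samples.

```python
import math

# Check F_{1,k} = t_k^2 for a = 2 groups on toy data.
g1 = [4.1, 5.0, 6.2, 5.5]
g2 = [7.3, 8.1, 6.9]

n1, n2 = len(g1), len(g2)
N = n1 + n2
m1, m2 = sum(g1) / n1, sum(g2) / n2
grand = (n1 * m1 + n2 * m2) / N                    # global mean, eq. (6)

sst = n1 * (m1 - grand) ** 2 + n2 * (m2 - grand) ** 2   # SST with a - 1 = 1 df
sse = sum((y - m1) ** 2 for y in g1) + sum((y - m2) ** 2 for y in g2)
F = sst / (sse / (N - 2))                          # eq. (1) with a = 2

sp2 = sse / (N - 2)                                # pooled variance, eq. (4)
t = (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2)) # eq. (2), before squaring

print(F, t ** 2)
assert math.isclose(F, t ** 2)                     # the two statistics agree
```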
https://math.stackexchange.com/questions/2109874/do-there-exist-proper-classes-that-arent-too-big
# Do there Exist Proper Classes that aren't “Too Big”
Some proper classes are "too big" to be a set in the sense that they have a subclass that can be put in bijection with $\alpha$ for every cardinal $\alpha$. It is implied in this post that every proper class is "too big" to be a set in this sense, however I have been unable to prove it. It's true if every two proper classes are in bijection, but it's consistent with ZFC for there to be a pair of non-bijective classes.
So, is the following true in ZFC:
For all proper classes, $C$, and $\alpha\in\mathbf{Card}$, $\exists S\subset C$ such that $|S|=\alpha$?
If not, is there something reasonable similar that preserves the intuition about classes that are "too big to be sets"?
$\newcommand{\Ord}{\operatorname{\mathbf{Ord}}}$Yes, this is true. First, let us prove this if $C$ is a subclass of the ordinals. By transfinite recursion, we can define a function $f:C\to \Ord$ such that for each $c\in C$, $f(c)$ is the least ordinal greater than $f(d)$ for all $d\in C$ such that $d<c$. The image of $f$ is a (not necessarily proper) initial segment of $\Ord$: that is, it is either an ordinal or it is all of $\Ord$. Since $f$ is injective (it is strictly order-preserving), if the image of $f$ were an ordinal then $C$ would be a set by Replacement (using the inverse of $f$). Thus the image of $f$ is all of $\Ord$. But now it is trivial to find a subset of $C$ of cardinality $\alpha$: just take $f^{-1}(\alpha)$ (which is a set by Replacement).
Now let $C$ be an arbitrary proper class. Let $D\subseteq \Ord$ be the class of all ranks of elements of $C$. If $D$ is bounded, then it is contained in some ordinal $\alpha$, which means $C$ is contained in $V_{\alpha}$ and hence is a set. So $D$ must be unbounded, and is thus a proper class. By the previous paragraph, for any cardinal $\alpha$, there exists $S\subset D$ of cardinality $\alpha$. Now use Choice to pick a single element of $C$ of rank $s$ for each $s\in S$. The set of all these elements is then a subset of $C$ of cardinality $\alpha$.
(To be clear, this is a proof you can give for any particular class $C$ defined by some formula in the language of set theory. Of course, ZFC cannot quantify over classes, and so cannot even state this "for all $C$..." at once.)
• This makes perfect sense to me and is a much better answer than mine. – Steven Stadnicki Jan 23 '17 at 5:56
Here is an alternative proof to that of Eric.
Since $C$ is a proper class, there is no $\alpha$ such that $C\subseteq V_\alpha$. Consider the class $\{C\cap V_\alpha\mid\alpha\in\mathrm{Ord}\}$, if there is an upper bound on cardinality to this class, some $\kappa$, then there cannot be more than $\kappa^+$ different sets in that class, which would make $C$ a set.
Therefore there are arbitrarily large $C\cap V_\alpha$'s, and therefore there is one larger than your given cardinal.
Interestingly, without choice, it is consistent that there is a proper class which does not have any countably infinite subsets, although it is true that every proper class can be mapped onto the class of ordinals.
• That note at the end is very interesting! – Stella Biderman Jan 23 '17 at 6:09
• Yes. I agree! Although there are caveats to it when working in ZF, but we can formalize it in an NBG-like setting to be accurate as stated. – Asaf Karagila Jan 23 '17 at 6:10
No, for one trivial but important reason: ZFC has no idea of the notion of class so you can't speak of classes at all within ZFC.
Perhaps you want something like NBG, but it's actually an axiom of NBG that things that aren't 'set-sized' are 'universe-sized': the Limitation of Size axiom says that for any class $C$, a set $x$ such that $x=C$ exists iff there is not a bijection between $C$ and $V$. See https://en.wikipedia.org/wiki/Axiom_of_limitation_of_size for more details.
Alternately, as noted by Eric Wofsey below, we can attempt to formalize the question in ZFC as follows:
For every formula $\phi(x)$ in one free variable, $(\not\exists S: x\in S\leftrightarrow \phi(x)) \implies (\forall \alpha\in CARD\ \exists T s.t. |\{t: t\in T\wedge \phi(t)\}|=\alpha)$.
I don't see an immediate proof of this in ZFC, but it's certainly plausible; note that if the RHS of the implication is false (that is, if there are cardinals not in the 'domain' of $\phi$), then the cardinals that are in the domain of $\phi$ are closed downwards, so there must be some cardinal $\beta$ such that $\{\gamma: \exists T s.t. |\{t:t\in T\wedge\phi(t)\}|=\gamma\} = \{\gamma: \gamma\lt \beta\}$ - that is, the cardinals in the domain of $\phi$ are exactly the cardinals less than $\beta$.
• But you can prove a metatheorem saying that for any class (i.e., formula with one free variable), ZFC proves that the statement is true for that class. Or you can enlarge the language by adjoining a new unary relation symbol that defines your class. – Eric Wofsey Jan 23 '17 at 5:17
• @EricWofsey Very true, though given that we talk about an $S\subset C$ there's a lot of delicacy in the specific formula, especially in indicating that $C$ isn't 'set-sized'. – Steven Stadnicki Jan 23 '17 at 5:23
• There really isn't anything delicate about it. We can quantify over subsets of $C$ with no difficulty: they are just sets all of whose elements satisfy the formula defining $C$. – Eric Wofsey Jan 23 '17 at 5:26
https://users.cecs.anu.edu.au/~akmenon/papers/corrupted-labels/index.html
## Code for Learning from Corrupted Binary Labels via Class-Probability Estimation, ICML 2015
The aim of this MATLAB code is to replicate the tables of results and figures from the paper Learning from Corrupted Binary Labels via Class-Probability Estimation, appearing in ICML 2015.
The code comprises a main driver script, ccn_uci_script.m, and several additional files organised into the following subfolders:
• data_processing/: scripts to generate and load data.
• datasets/: MAT files for UCI datasets used in the paper.
• evaluation/: evaluation of AUC, BER, et cetera of a candidate predictor.
• helper/: miscellaneous helper scripts, for flipping labels, converting labels to { 0, 1 }, et cetera.
• learning/: scripts to perform cross-validation, estimate noise rates, et cetera.
• libraries/: some third party libraries (see below).
• printing/: printing LaTeX tables of results.
• setup/: setting up various enums.
• visualisation/: producing violin plots of results.
### Performing cross-validation and learning
To perform learning on each of the datasets, simply run
>> ccn_uci_script;
The display window will then fill with the results of cross-validation and training each of the methods on each of the datasets, using previously saved optimal parameters. The output also attempts to give some estimate of when the particular set of trials for a given noise setting will finish. Sample output:
*** overwriting saved results *** housing 1-AUC (%) BER (%) ERR_{max} (%) ERR_{oracle} (%) [1] optimal lambda = 10^-8, sigma = 10^Inf, [ETA = 18-May-2015 12:07:54] ... [100] optimal lambda = 10^-8, sigma = 10^Inf, [ETA = 18-May-2015 11:56:44] & ($\rhoPlus, \rhoMinus$) = ($0.0, 0.0$) & 14.48 $\pm$ 0.00 & 10.94 $\pm$ 0.00 & 19.80 $\pm$ 0.00 & 4.95 $\pm$ 0.00 \\ [1] optimal lambda = 10^-2, sigma = 10^Inf, [ETA = 18-May-2015 12:08:06] ... [100] optimal lambda = 10^-2, sigma = 10^Inf, [ETA = 18-May-2015 11:59:27] & ($\rhoPlus, \rhoMinus$) = ($0.0, 0.1$) & 26.23 $\pm$ 0.71 & 29.99 $\pm$ 0.79 & 19.25 $\pm$ 0.96 & 5.11 $\pm$ 0.06 \\ ...
In the above, final results that are output for each noise trial correspond to those in Table 6 of the Supplementary Material.
Be warned that this script is likely to take a long time. You may wish to reduce the number of noise trials by changing settings.NOISE_TRIALS on Line 79 from 100 to some smaller number.
During the course of this script, we will save, for each trial, the results of cross-validation as well as the final predictions. These can be used subsequently to either skip cross-validation and just perform learning, or to skip both and just produce formatted tables of results.
### Performing learning only
Once cross-validation has been completed for a particular dataset and noise rate, when rerunning the script, one may wish to simply use the previously stored cross-validation parameters. To do so, simply run
>> ccn_uci_script('run_with_saved_params');
and re-run the script.
### Generating tables of results
Once you have saved the predictions from each method, to replicate the mini-table of results from the body of the paper, simply run:
>> ccn_uci_script('print_mini_table');
To generate the full tables of results from the Supplementary Material, simply run:
>> ccn_uci_script('print_full_table');
### Producing violin plots
Once learning is completed, and the results saved, to generate the violin plots, simply run:
>> ccn_uci_script('plot_only');
### Detailed description
The basic operation of the script is that ccn_uci_main_driver() loops over all combinations of datasets and noise rates. For each noise rate, in ccn_uci_main_backseat(), we consider a number of trials wherein the training labels are randomly flipped at the appropriate rate. In ccn_uci_learner_body(), we run an appropriate learner (by default a neural network) on the resulting data, and use the range of the output probabilities to estimate the noise rates. These are then used for prediction on the test set, with the results evaluated and saved in ccn_uci_perf_saver().
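The two key steps described above — flipping labels at class-dependent rates and estimating those rates from the range of the output probabilities — can be sketched as follows. This is a hedged Python illustration of the idea, not a translation of the MATLAB code; the range-based estimator assumes the clean class-probability $\eta(x)$ gets close to 0 and 1 somewhere in the data, and here the true $\eta$ stands in for a learned model.

```python
import random

# (1) Flip clean labels in {0, 1} at class-dependent rates rho_plus, rho_minus.
random.seed(1)
rho_plus, rho_minus = 0.1, 0.2

def flip(label):
    r = random.random()
    if label == 1:
        return 0 if r < rho_plus else 1      # positive flipped w.p. rho_plus
    return 1 if r < rho_minus else 0         # negative flipped w.p. rho_minus

clean = [1] * 1000 + [0] * 1000
corrupted = [flip(y) for y in clean]

# (2) Range-based noise-rate estimates: the corrupted class-probability is
# p_corr(x) = (1 - rho_plus - rho_minus) * eta(x) + rho_minus, so its minimum
# estimates rho_minus and one minus its maximum estimates rho_plus.
etas = [i / 100 for i in range(101)]         # stand-in for eta over the data
p_corr = [(1 - rho_plus - rho_minus) * e + rho_minus for e in etas]
rho_minus_hat, rho_plus_hat = min(p_corr), 1 - max(p_corr)
print(round(rho_plus_hat, 3), round(rho_minus_hat, 3))  # -> 0.1 0.2
```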
### Third-party libraries
The code relies on certain third-party MATLAB code for various operations. For convenience, the code is included in the ZIP file. The libraries are:
https://terrytao.wordpress.com/category/mathematics/mathco/
In 1946, Ulam, in response to a theorem of Anning and Erdös, posed the following problem:
Problem 1 (Erdös-Ulam problem) Let ${S \subset {\bf R}^2}$ be a set such that the distance between any two points in ${S}$ is rational. Is it true that ${S}$ cannot be (topologically) dense in ${{\bf R}^2}$?
The paper of Anning and Erdös settled, in the affirmative, the analogous question in which the distances between any two points in ${S}$ are integers rather than rationals.
The Erdös-Ulam problem remains open; it was discussed recently over at Gödel’s lost letter. It is in fact likely (as we shall see below) that the set ${S}$ in the above problem is not only forbidden to be topologically dense, but also cannot be Zariski dense either. If so, then the structure of ${S}$ is quite restricted; it was shown by Solymosi and de Zeeuw that if ${S}$ fails to be Zariski dense, then all but finitely many of the points of ${S}$ must lie on a single line, or a single circle. (Conversely, it is easy to construct examples of dense subsets of a line or circle in which all distances are rational, though in the latter case the square of the radius of the circle must also be rational.)
The main tool of the Solymosi-de Zeeuw analysis was Faltings’ celebrated theorem that every algebraic curve of genus at least two contains only finitely many rational points. The purpose of this post is to observe that an affirmative answer to the full Erdös-Ulam problem similarly follows from the conjectured analogue of Faltings’ theorem for surfaces, namely the following conjecture of Bombieri and Lang:
Conjecture 2 (Bombieri-Lang conjecture) Let ${X}$ be a smooth projective irreducible algebraic surface defined over the rationals ${{\bf Q}}$ which is of general type. Then the set ${X({\bf Q})}$ of rational points of ${X}$ is not Zariski dense in ${X}$.
In fact, the Bombieri-Lang conjecture has been made for varieties of arbitrary dimension, and for more general number fields than the rationals, but the above special case of the conjecture is the only one needed for this application. We will review what “general type” means (for smooth projective complex varieties, at least) below the fold.
The Bombieri-Lang conjecture is considered to be extremely difficult, in particular being substantially harder than Faltings’ theorem, which is itself a highly non-trivial result. So this implication should not be viewed as a practical route to resolving the Erdös-Ulam problem unconditionally; rather, it is a demonstration of the power of the Bombieri-Lang conjecture. Still, it was an instructive algebraic geometry exercise for me to carry out the details of this implication, which quickly boils down to verifying that a certain quite explicit algebraic surface is of general type (Theorem 4 below). As I am not an expert in the subject, my computations here will be rather tedious and pedestrian; it is likely that they could be made much slicker by exploiting more of the machinery of modern algebraic geometry, and I would welcome any such streamlining by actual experts in this area. (For similar reasons, there may be more typos and errors than usual in this post; corrections are welcome as always.) My calculations here are based on a similar calculation of van Luijk, who used analogous arguments to show (assuming Bombieri-Lang) that the set of perfect cuboids is not Zariski-dense in its projective parameter space.
We also remark that in a recent paper of Makhul and Shaffaf, the Bombieri-Lang conjecture (or more precisely, a weaker consequence of that conjecture) was used to show that if ${S}$ is a subset of ${{\bf R}^2}$ with rational distances which intersects any line in only finitely many points, then there is a uniform bound on the cardinality of the intersection of ${S}$ with any line. I have also recently learned (private communication) that an unpublished work of Shaffaf has obtained a result similar to the one in this post, namely that the Erdös-Ulam conjecture follows from the Bombieri-Lang conjecture, plus an additional conjecture about the rational curves in a specific surface.
Let us now give the elementary reductions to the claim that a certain variety is of general type. For sake of contradiction, let ${S}$ be a dense set such that the distance between any two points is rational. Then ${S}$ certainly contains two points that are a rational distance apart. By applying a translation, rotation, and a (rational) dilation, we may assume that these two points are ${(0,0)}$ and ${(1,0)}$. As ${S}$ is dense, there is a third point of ${S}$ not on the ${x}$ axis, which after a reflection we can place in the upper half-plane; we will write it as ${(a,\sqrt{b})}$ with ${b>0}$.
Given any two points ${P, Q}$ in ${S}$, the quantities ${|P|^2, |Q|^2, |P-Q|^2}$ are rational, and so by the cosine rule the dot product ${P \cdot Q}$ is rational as well. Since ${(1,0) \in S}$, this implies that the ${x}$-component of every point ${P}$ in ${S}$ is rational; this in turn implies that the product of the ${y}$-coordinates of any two points ${P,Q}$ in ${S}$ is rational as well (since this differs from ${P \cdot Q}$ by a rational number). In particular, ${a}$ and ${b}$ are rational, and all of the points in ${S}$ now lie in the lattice ${\{ ( x, y\sqrt{b}): x, y \in {\bf Q} \}}$. (This fact appears to have first been observed in the 1988 Habilitationsschrift of Kemnitz.)
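The cosine-rule step above is a finite algebraic identity that can be checked in exact arithmetic: for points ${P = (x_1, y_1\sqrt{b})}$, ${Q = (x_2, y_2\sqrt{b})}$ with rational ${x_i, y_i, b}$, one has ${P \cdot Q = (|P|^2 + |Q|^2 - |P-Q|^2)/2 = x_1 x_2 + b y_1 y_2}$, which is manifestly rational. A quick check with Python's `Fraction` (the sample points are arbitrary):

```python
from fractions import Fraction as F

b = F(2)
# Two points in the lattice {(x, y*sqrt(b)) : x, y rational}, stored as (x, y) pairs.
P = (F(1, 2), F(3, 4))
Q = (F(1, 3), F(1, 5))

def norm_sq(R):
    # |R|^2 = x^2 + b*y^2 for R = (x, y*sqrt(b))
    return R[0] ** 2 + b * R[1] ** 2

def diff_sq(P, Q):
    # |P - Q|^2 = (x1 - x2)^2 + b*(y1 - y2)^2
    return (P[0] - Q[0]) ** 2 + b * (P[1] - Q[1]) ** 2

# Cosine rule: P . Q = (|P|^2 + |Q|^2 - |P - Q|^2) / 2
dot_via_cosine = (norm_sq(P) + norm_sq(Q) - diff_sq(P, Q)) / 2
dot_direct = P[0] * Q[0] + b * P[1] * Q[1]

assert dot_via_cosine == dot_direct  # exact rational equality
print(dot_direct)  # → 7/15, a rational number
```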
Now take four points ${(x_j,y_j \sqrt{b})}$, ${j=1,\dots,4}$ in ${S}$ in general position (so that the octuplet ${(x_1,y_1\sqrt{b},\dots,x_4,y_4\sqrt{b})}$ avoids any pre-specified hypersurface in ${{\bf C}^8}$); this can be done if ${S}$ is dense. (If one wished, one could re-use the three previous points ${(0,0), (1,0), (a,\sqrt{b})}$ to be three of these four points, although this ultimately makes little difference to the analysis.) If ${(x,y\sqrt{b})}$ is any point in ${S}$, then the distances ${r_j}$ from ${(x,y\sqrt{b})}$ to ${(x_j,y_j\sqrt{b})}$ are rationals that obey the equations
$\displaystyle (x - x_j)^2 + b (y-y_j)^2 = r_j^2$
for ${j=1,\dots,4}$, and thus determine a rational point in the affine complex variety ${V = V_{b,x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4} \subset {\bf C}^6}$ defined as
$\displaystyle V := \{ (x,y,r_1,r_2,r_3,r_4) \in {\bf C}^6:$
$\displaystyle (x - x_j)^2 + b (y-y_j)^2 = r_j^2 \hbox{ for } j=1,\dots,4 \}.$
By inspecting the projection ${(x,y,r_1,r_2,r_3,r_4) \rightarrow (x,y)}$ from ${V}$ to ${{\bf C}^2}$, we see that ${V}$ is a branched cover of ${{\bf C}^2}$, with the generic fibre consisting of ${2^4=16}$ points (coming from the different ways to form the square roots ${r_1,r_2,r_3,r_4}$); in particular, ${V}$ is a complex affine algebraic surface, defined over the rationals. By inspecting the monodromy around the four singular base points ${(x,y) = (x_i,y_i)}$ (which switch the sign of one of the roots ${r_i}$, while keeping the other three roots unchanged), we see that the variety ${V}$ is connected away from its singular set, and thus irreducible. As ${S}$ is topologically dense in ${{\bf R}^2}$, it is Zariski-dense in ${{\bf C}^2}$, and so ${S}$ generates a Zariski-dense set of rational points in ${V}$. To solve the Erdös-Ulam problem, it thus suffices to show that
Claim 3 For any non-zero rational ${b}$ and for rationals ${x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4}$ in general position, the set of rational points of the affine surface ${V = V_{b,x_1,y_1,x_2,y_2,x_3,y_3,x_4,y_4}}$ is not Zariski dense in ${V}$.
This is already very close to a claim that can be directly resolved by the Bombieri-Lang conjecture, but ${V}$ is affine rather than projective, and also contains some singularities. The first issue is easy to deal with, by working with the projectivisation
$\displaystyle \overline{V} := \{ [X,Y,Z,R_1,R_2,R_3,R_4] \in {\bf CP}^6: Q(X,Y,Z,R_1,R_2,R_3,R_4) = 0 \} \ \ \ \ \ (1)$
of ${V}$, where ${Q: {\bf C}^7 \rightarrow {\bf C}^4}$ is the homogeneous quadratic polynomial
$\displaystyle Q(X,Y,Z,R_1,R_2,R_3,R_4) := (Q_j(X,Y,Z,R_1,R_2,R_3,R_4))_{j=1}^4$
with
$\displaystyle Q_j(X,Y,Z,R_1,R_2,R_3,R_4) := (X-x_j Z)^2 + b (Y-y_jZ)^2 - R_j^2$
and the projective complex space ${{\bf CP}^6}$ is the space of all equivalence classes ${[X,Y,Z,R_1,R_2,R_3,R_4]}$ of tuples ${(X,Y,Z,R_1,R_2,R_3,R_4) \in {\bf C}^7 \backslash \{0\}}$ up to projective equivalence ${(\lambda X, \lambda Y, \lambda Z, \lambda R_1, \lambda R_2, \lambda R_3, \lambda R_4) \sim (X,Y,Z,R_1,R_2,R_3,R_4)}$. By identifying the affine point ${(x,y,r_1,r_2,r_3,r_4)}$ with the projective point ${[x,y,1,r_1,r_2,r_3,r_4]}$, we see that ${\overline{V}}$ consists of the affine variety ${V}$ together with the set ${\{ [X,Y,0,R_1,R_2,R_3,R_4]: X^2+bY^2=R_1^2; R_j = \pm R_1 \hbox{ for } j=2,3,4\}}$, which is the union of eight curves, each of which lies in the closure of ${V}$. Thus ${\overline{V}}$ is the projective closure of ${V}$, and is thus a complex irreducible projective surface, defined over the rationals. As ${\overline{V}}$ is cut out by four quadric equations in ${{\bf CP}^6}$ and has degree sixteen (as can be seen for instance by inspecting the intersection of ${\overline{V}}$ with a generic perturbation of a fibre over the generically defined projection ${[X,Y,Z,R_1,R_2,R_3,R_4] \mapsto [X,Y,Z]}$), it is also a complete intersection. To show Claim 3, it then suffices to show that the rational points in ${\overline{V}}$ are not Zariski dense in ${\overline{V}}$.
Heuristically, the reason why we expect few rational points in ${\overline{V}}$ is as follows. First observe from the projective nature of (1) that every rational point is equivalent to an integer point. But for a septuple ${(X,Y,Z,R_1,R_2,R_3,R_4)}$ of integers of size ${O(N)}$, the quantity ${Q(X,Y,Z,R_1,R_2,R_3,R_4)}$ is an integer point of ${{\bf Z}^4}$ of size ${O(N^2)}$, and so should only vanish about ${O(N^{-8})}$ of the time. Hence the number of integer points ${(X,Y,Z,R_1,R_2,R_3,R_4) \in {\bf Z}^7}$ of height comparable to ${N}$ should be about
$\displaystyle O(N)^7 \times O(N^{-8}) = O(N^{-1});$
this is a convergent sum if ${N}$ ranges over (say) powers of two, and so from standard probabilistic heuristics (see this previous post) we in fact expect only finitely many solutions, in the absence of any special algebraic structure (e.g. the structure of an abelian variety, or a birational reduction to a simpler variety) that could produce an unusually large number of solutions.
The Bombieri-Lang conjecture, Conjecture 2, can be viewed as a formalisation of the above heuristics (roughly speaking, it is one of the most optimistic natural conjectures one could make that is compatible with these heuristics while also being invariant under birational equivalence).
Unfortunately, ${\overline{V}}$ contains some singular points. Being a complete intersection, this occurs when the Jacobian matrix of the map ${Q: {\bf C}^7 \rightarrow {\bf C}^4}$ has less than full rank, or equivalently that the gradient vectors
$\displaystyle \nabla Q_j = (2(X-x_j Z), 2(Y-y_j Z), -2x_j (X-x_j Z) - 2y_j (Y-y_j Z), \ \ \ \ \ (2)$
$\displaystyle 0, \dots, 0, -2R_j, 0, \dots, 0)$
for ${j=1,\dots,4}$ are linearly dependent, where the ${-2R_j}$ is in the coordinate position associated to ${R_j}$. One way in which this can occur is if one of the gradient vectors ${\nabla Q_j}$ vanish identically. This occurs at precisely ${4 \times 2^3 = 32}$ points, when ${[X,Y,Z]}$ is equal to ${[x_j,y_j,1]}$ for some ${j=1,\dots,4}$, and one has ${R_k = \pm ( (x_j - x_k)^2 + b (y_j - y_k)^2 )^{1/2}}$ for all ${k=1,\dots,4}$ (so in particular ${R_j=0}$). Let us refer to these as the obvious singularities; they arise from the geometrically evident fact that the distance function ${(x,y\sqrt{b}) \mapsto \sqrt{(x-x_j)^2 + b(y-y_j)^2}}$ is singular at ${(x_j,y_j\sqrt{b})}$.
The other way in which this could occur is if a non-trivial linear combination of at least two of the gradient vectors vanishes. From (2), this can only occur if ${R_j=R_k=0}$ for some distinct ${j,k}$, which from (1) implies that
$\displaystyle (X - x_j Z) = \pm \sqrt{b} i (Y - y_j Z) \ \ \ \ \ (3)$
and
$\displaystyle (X - x_k Z) = \pm \sqrt{b} i (Y - y_k Z) \ \ \ \ \ (4)$
for two choices of sign ${\pm}$. If the signs are equal, then (as ${x_j, y_j, x_k, y_k}$ are in general position) this implies that ${Z=0}$, and then we have the singular point
$\displaystyle [X,Y,Z,R_1,R_2,R_3,R_4] = [\pm \sqrt{b} i, 1, 0, 0, 0, 0, 0]. \ \ \ \ \ (5)$
If the non-trivial linear combination involved three or more gradient vectors, then by the pigeonhole principle at least two of the signs involved must be equal, and so the only singular points are (5). So the only remaining possibility is when we have two gradient vectors ${\nabla Q_j, \nabla Q_k}$ that are parallel but non-zero, with the signs in (3), (4) opposing. But then (as ${x_j,y_j,x_k,y_k}$ are in general position) the vectors ${(X-x_j Z, Y-y_j Z), (X-x_k Z, Y-y_k Z)}$ are non-zero and non-parallel to each other, a contradiction. Thus, outside of the ${32}$ obvious singular points mentioned earlier, the only other singular points are the two points (5).
We will shortly show that the ${32}$ obvious singularities are ordinary double points; the surface ${\overline{V}}$ near any of these points is analytically equivalent to an ordinary cone ${\{ (x,y,z) \in {\bf C}^3: z^2 = x^2 + y^2 \}}$ near the origin, which is a cone over a smooth conic curve ${\{ (x,y) \in {\bf C}^2: x^2+y^2=1\}}$. The two non-obvious singularities (5) are slightly more complicated than ordinary double points: they are elliptic singularities, which approximately resemble a cone over an elliptic curve. (As far as I can tell, this resemblance is exact in the category of real smooth manifolds, but not in the category of algebraic varieties.) If one blows up each of the point singularities of ${\overline{V}}$ separately, no further singularities are created, and one obtains a smooth projective surface ${X}$ (using the Segre embedding as necessary to embed ${X}$ back into projective space, rather than in a product of projective spaces). Away from the singularities, the rational points of ${\overline{V}}$ lift up to rational points of ${X}$. Assuming the Bombieri-Lang conjecture, we thus are able to answer the Erdös-Ulam problem in the affirmative once we establish
Theorem 4 The blowup ${X}$ of ${\overline{V}}$ is of general type.
This will be done below the fold, by the pedestrian device of explicitly constructing global differential forms on ${X}$; I will also be working from a complex analysis viewpoint rather than an algebraic geometry viewpoint as I am more comfortable with the former approach. (As mentioned above, though, there may well be a quicker way to establish this result by using more sophisticated machinery.)
I thank Mark Green and David Gieseker for helpful conversations (and a crash course in varieties of general type!).
Remark 5 The above argument shows in fact (assuming Bombieri-Lang) that sets ${S \subset {\bf R}^2}$ with all distances rational cannot be Zariski-dense, and thus (by Solymosi-de Zeeuw) must lie on a single line or circle with only finitely many exceptions. Assuming a stronger version of Bombieri-Lang involving a general number field ${K}$, we obtain a similar conclusion with “rational” replaced by “lying in ${K}$” (one has to extend the Solymosi-de Zeeuw analysis to more general number fields, but this should be routine, using the analogue of Faltings’ theorem for such number fields).
Van Vu and I have just uploaded to the arXiv our paper “Random matrices have simple eigenvalues“. Recall that an ${n \times n}$ Hermitian matrix is said to have simple eigenvalues if all of its ${n}$ eigenvalues are distinct. This is a very typical property of matrices to have: for instance, as discussed in this previous post, in the space of all ${n \times n}$ Hermitian matrices, the space of matrices without all eigenvalues simple has codimension three, and for real symmetric cases this space has codimension two. In particular, given any random matrix ensemble of Hermitian or real symmetric matrices with an absolutely continuous distribution, we conclude that random matrices drawn from this ensemble will almost surely have simple eigenvalues.
For discrete random matrix ensembles, though, the above argument breaks down, even though general universality heuristics predict that the statistics of discrete ensembles should behave similarly to those of continuous ensembles. A model case here is the adjacency matrix ${M_n}$ of an Erdös-Rényi graph – a graph on ${n}$ vertices in which any pair of vertices has an independent probability ${p}$ of being in the graph. For the purposes of this paper one should view ${p}$ as fixed, e.g. ${p=1/2}$, while ${n}$ is an asymptotic parameter going to infinity. In this context, our main result is the following (answering a question of Babai):
Theorem 1 With probability ${1-o(1)}$, ${M_n}$ has simple eigenvalues.
Our argument works for more general Wigner-type matrix ensembles, but for sake of illustration we will stick with the Erdös-Renyi case. Previous work on local universality for such matrix models (e.g. the work of Erdos, Knowles, Yau, and Yin) was able to show that any individual eigenvalue gap ${\lambda_{i+1}(M)-\lambda_i(M)}$ did not vanish with probability ${1-o(1)}$ (in fact ${1-O(n^{-c})}$ for some absolute constant ${c>0}$), but because there are ${n}$ different gaps that one has to simultaneously ensure to be non-zero, this did not give Theorem 1 as one is forced to apply the union bound.
Our argument in fact gives simplicity of the spectrum with probability ${1-O(n^{-A})}$ for any fixed ${A}$; in a subsequent paper we also show that it gives a quantitative lower bound on the eigenvalue gaps (analogous to how many results on the singularity probability of random matrices can be upgraded to a bound on the least singular value).
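Theorem 1 is easy to probe empirically for moderate ${n}$. A hedged numerical sketch (purely illustrative, and of course not part of the paper's argument): sample Erdős–Rényi adjacency matrices and check that all consecutive eigenvalue gaps exceed a small tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)

def er_adjacency(n, p):
    """Adjacency matrix of an Erdos-Renyi graph G(n, p): symmetric 0/1, zero diagonal."""
    upper = rng.random((n, n)) < p
    A = np.triu(upper, k=1).astype(float)
    return A + A.T

def has_simple_spectrum(A, tol=1e-8):
    """True if all eigenvalue gaps exceed tol (eigvalsh returns ascending eigenvalues)."""
    lam = np.linalg.eigvalsh(A)
    return np.min(np.diff(lam)) > tol

n, p, trials = 20, 0.5, 50
simple = sum(has_simple_spectrum(er_adjacency(n, p)) for _ in range(trials))
print(simple, "of", trials, "samples had simple spectrum")
```

Even at this small size the spectrum is essentially always simple; exact coincidences would require special integer structure (e.g. twin vertices with identical neighbourhoods), which is already very rare at ${n=20}$.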
The basic idea of the argument can be sketched as follows. Suppose that ${M_n}$ has a repeated eigenvalue ${\lambda}$. We split
$\displaystyle M_n = \begin{pmatrix} M_{n-1} & X \\ X^T & 0 \end{pmatrix}$
for a random ${n-1 \times n-1}$ minor ${M_{n-1}}$ and a random sign vector ${X}$; crucially, ${X}$ and ${M_{n-1}}$ are independent. If ${M_n}$ has a repeated eigenvalue ${\lambda}$, then by the Cauchy interlacing law, ${M_{n-1}}$ also has an eigenvalue ${\lambda}$. We now write down the eigenvector equation for ${M_n}$ at ${\lambda}$:
$\displaystyle \begin{pmatrix} M_{n-1} & X \\ X^T & 0 \end{pmatrix} \begin{pmatrix} v \\ a \end{pmatrix} = \lambda \begin{pmatrix} v \\ a \end{pmatrix}.$
Extracting the top ${n-1}$ coefficients, we obtain
$\displaystyle (M_{n-1} - \lambda) v + a X = 0.$
If we let ${w}$ be the ${\lambda}$-eigenvector of ${M_{n-1}}$, then by taking inner products with ${w}$ we conclude that
$\displaystyle a (w \cdot X) = 0;$
we typically expect ${a}$ to be non-zero, in which case we arrive at
$\displaystyle w \cdot X = 0.$
In other words, in order for ${M_n}$ to have a repeated eigenvalue, the top right column ${X}$ of ${M_n}$ has to be orthogonal to an eigenvector ${w}$ of the minor ${M_{n-1}}$. Note that ${X}$ and ${w}$ are going to be independent (once we specify which eigenvector of ${M_{n-1}}$ to take as ${w}$). On the other hand, thanks to inverse Littlewood-Offord theory (specifically, we use an inverse Littlewood-Offord theorem of Nguyen and Vu), we know that the vector ${X}$ is unlikely to be orthogonal to any given vector ${w}$ independent of ${X}$, unless the coefficients of ${w}$ are extremely special (specifically, that most of them lie in a generalised arithmetic progression). The main remaining difficulty is then to show that eigenvectors of a random matrix are typically not of this special form, and this relies on a conditioning argument originally used by Komlós to bound the singularity probability of a random sign matrix. (Basically, if an eigenvector has this special form, then one can use a fraction of the rows and columns of the random matrix to determine the eigenvector completely, while still preserving enough randomness in the remaining portion of the matrix so that this vector will in fact not be an eigenvector with high probability.)
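The Cauchy interlacing step invoked above is itself easy to verify numerically: the ascending eigenvalues ${\mu_i}$ of the ${(n-1) \times (n-1)}$ minor satisfy ${\lambda_i \le \mu_i \le \lambda_{i+1}}$, where ${\lambda_i}$ are the ascending eigenvalues of the full matrix. A quick sanity check on a random symmetric matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
X = rng.standard_normal((n, n))
M = (X + X.T) / 2            # a random real symmetric matrix M_n
minor = M[:-1, :-1]          # the top-left (n-1) x (n-1) minor M_{n-1}

lam = np.linalg.eigvalsh(M)      # ascending eigenvalues of M_n
mu = np.linalg.eigvalsh(minor)   # ascending eigenvalues of M_{n-1}

# Cauchy interlacing: lam[i] <= mu[i] <= lam[i+1] for i = 0, ..., n-2
tol = 1e-10
assert all(lam[i] - tol <= mu[i] <= lam[i + 1] + tol for i in range(n - 1))
print("interlacing verified")
```

In particular, if ${M_n}$ had a repeated eigenvalue ${\lambda}$, interlacing pins ${\lambda}$ as an eigenvalue of the minor as well, which is the starting point of the argument above.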
In graph theory, the recently developed theory of graph limits has proven to be a useful tool for analysing large dense graphs, being a convenient reformulation of the Szemerédi regularity lemma. Roughly speaking, the theory asserts that given any sequence ${G_n = (V_n, E_n)}$ of finite graphs, one can extract a subsequence ${G_{n_j} = (V_{n_j}, E_{n_j})}$ which converges (in a specific sense) to a continuous object known as a “graphon” – a symmetric measurable function ${p\colon [0,1] \times [0,1] \rightarrow [0,1]}$. What “converges” means in this context is that subgraph densities converge to the associated integrals of the graphon ${p}$. For instance, the edge density
$\displaystyle \frac{1}{|V_{n_j}|^2} |E_{n_j}|$
converges to the integral
$\displaystyle \int_0^1 \int_0^1 p(x,y)\ dx dy,$
the triangle density
$\displaystyle \frac{1}{|V_{n_j}|^3} \lvert \{ (v_1,v_2,v_3) \in V_{n_j}^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_{n_j} \} \rvert$
converges to the integral
$\displaystyle \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ dx_1 dx_2 dx_3,$
the four-cycle density
$\displaystyle \frac{1}{|V_{n_j}|^4} \lvert \{ (v_1,v_2,v_3,v_4) \in V_{n_j}^4: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_4\}, \{v_4,v_1\} \in E_{n_j} \} \rvert$
converges to the integral
$\displaystyle \int_0^1 \int_0^1 \int_0^1 \int_0^1 p(x_1,x_2) p(x_2,x_3) p(x_3,x_4) p(x_4,x_1)\ dx_1 dx_2 dx_3 dx_4,$
and so forth. One can use graph limits to prove many results in graph theory that were traditionally proven using the regularity lemma, such as the triangle removal lemma, and can also reduce many asymptotic graph theory problems to continuous problems involving multilinear integrals (although the latter problems are not necessarily easy to solve!). See this text of Lovasz for a detailed study of graph limits and their applications.
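The convergence of subgraph densities can be illustrated by sampling a ${W}$-random graph from an explicit graphon, say ${p(x,y) = xy}$ (an arbitrary choice), whose limiting edge and triangle densities are ${\int\int xy = 1/4}$ and ${\int\int\int x_1 x_2 \cdot x_2 x_3 \cdot x_3 x_1 = 1/27}$. A Monte Carlo sketch, with counts over ordered tuples so as to match the normalisations above:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_graphon(n, p):
    """Sample an n-vertex graph: draw x_i ~ U[0,1], join i and j with probability p(x_i, x_j)."""
    x = rng.random(n)
    P = p(x[:, None], x[None, :])
    U = rng.random((n, n))
    A = np.triu(U < P, k=1).astype(float)
    return A + A.T

n = 400
A = sample_graphon(n, lambda s, t: s * t)

edge_density = A.sum() / n**2                  # ordered pairs, cf. the first limit above
triangle_density = np.trace(A @ A @ A) / n**3  # ordered triples, cf. the second limit

print(edge_density, triangle_density)  # close to 1/4 and 1/27 respectively
```

Here `trace(A @ A @ A)` counts closed walks of length three, which (in the absence of self-loops) are exactly the ordered triangle tuples appearing in the triangle-density formula.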
One can also express graph limits (and more generally hypergraph limits) in the language of nonstandard analysis (or of ultraproducts); see for instance this paper of Elek and Szegedy, Section 6 of this previous blog post, or this paper of Towsner. (In this post we assume some familiarity with nonstandard analysis, as reviewed for instance in the previous blog post.) Here, one starts as before with a sequence ${G_n = (V_n,E_n)}$ of finite graphs, and then takes an ultraproduct (with respect to some arbitrarily chosen non-principal ultrafilter ${\alpha \in\beta {\bf N} \backslash {\bf N}}$) to obtain a nonstandard graph ${G_\alpha = (V_\alpha,E_\alpha)}$, where ${V_\alpha = \prod_{n\rightarrow \alpha} V_n}$ is the ultraproduct of the ${V_n}$, and similarly for the ${E_\alpha}$. The set ${E_\alpha}$ can then be viewed as a symmetric subset of ${V_\alpha \times V_\alpha}$ which is measurable with respect to the Loeb ${\sigma}$-algebra ${{\mathcal L}_{V_\alpha \times V_\alpha}}$ of the product ${V_\alpha \times V_\alpha}$ (see this previous blog post for the construction of Loeb measure). A crucial point is that this ${\sigma}$-algebra is larger than the product ${{\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha}}$ of the Loeb ${\sigma}$-algebra of the individual vertex set ${V_\alpha}$. This leads to a decomposition
$\displaystyle 1_{E_\alpha} = p + e$
where the “graphon” ${p}$ is the orthogonal projection of ${1_{E_\alpha}}$ onto ${L^2( {\mathcal L}_{V_\alpha} \times {\mathcal L}_{V_\alpha} )}$, and the “regular error” ${e}$ is orthogonal to all product sets ${A \times B}$ for ${A, B \in {\mathcal L}_{V_\alpha}}$. The graphon ${p\colon V_\alpha \times V_\alpha \rightarrow [0,1]}$ then captures the statistics of the nonstandard graph ${G_\alpha}$, in exact analogy with the more traditional graph limits: for instance, the edge density
$\displaystyle \hbox{st} \frac{1}{|V_\alpha|^2} |E_\alpha|$
(or equivalently, the limit of the ${\frac{1}{|V_n|^2} |E_n|}$ along the ultrafilter ${\alpha}$) is equal to the integral
$\displaystyle \int_{V_\alpha} \int_{V_\alpha} p(x,y)\ d\mu_{V_\alpha}(x) d\mu_{V_\alpha}(y)$
where ${d\mu_V}$ denotes Loeb measure on a nonstandard finite set ${V}$; the triangle density
$\displaystyle \hbox{st} \frac{1}{|V_\alpha|^3} \lvert \{ (v_1,v_2,v_3) \in V_\alpha^3: \{v_1,v_2\}, \{v_2,v_3\}, \{v_3,v_1\} \in E_\alpha \} \rvert$
(or equivalently, the limit along ${\alpha}$ of the triangle densities of ${E_n}$) is equal to the integral
$\displaystyle \int_{V_\alpha} \int_{V_\alpha} \int_{V_\alpha} p(x_1,x_2) p(x_2,x_3) p(x_3,x_1)\ d\mu_{V_\alpha}(x_1) d\mu_{V_\alpha}(x_2) d\mu_{V_\alpha}(x_3),$
and so forth. Note that with this construction, the graphon ${p}$ is living on the Cartesian square of an abstract probability space ${V_\alpha}$, which is likely to be inseparable; but it is possible to cut down the Loeb ${\sigma}$-algebra on ${V_\alpha}$ to the minimal countable ${\sigma}$-algebra for which ${p}$ remains measurable (up to null sets), and then one can identify ${V_\alpha}$ with ${[0,1]}$, bringing this construction of a graphon in line with the traditional notion of a graphon. (See Remark 5 of this previous blog post for more discussion of this point.)
Additive combinatorics, which studies things like the additive structure of finite subsets ${A}$ of an abelian group ${G = (G,+)}$, has many analogies and connections with asymptotic graph theory; in particular, there is the arithmetic regularity lemma of Green which is analogous to the graph regularity lemma of Szemerédi. (There is also a higher order arithmetic regularity lemma analogous to hypergraph regularity lemmas, but this is not the focus of the discussion here.) Given this, it is natural to suspect that there is a theory of “additive limits” for large additive sets of bounded doubling, analogous to the theory of graph limits for large dense graphs. The purpose of this post is to record a candidate for such an additive limit. This limit can be used as a substitute for the arithmetic regularity lemma in certain results in additive combinatorics, at least if one is willing to settle for qualitative results rather than quantitative ones; I give a few examples of this below the fold.
It seems that to allow for the most flexible and powerful manifestation of this theory, it is convenient to use the nonstandard formulation (among other things, it allows for full use of the transfer principle, whereas a more traditional limit formulation would only allow for a transfer of those quantities continuous with respect to the notion of convergence). Here, the analogue of a nonstandard graph is an ultra approximate group ${A_\alpha}$ in a nonstandard group ${G_\alpha = \prod_{n \rightarrow \alpha} G_n}$, defined as the ultraproduct of finite ${K}$-approximate groups ${A_n \subset G_n}$ for some standard ${K}$. (A ${K}$-approximate group ${A_n}$ is a symmetric set containing the origin such that ${A_n+A_n}$ can be covered by ${K}$ or fewer translates of ${A_n}$.) We then let ${O(A_\alpha)}$ be the external subgroup of ${G_\alpha}$ generated by ${A_\alpha}$; equivalently, ${O(A_\alpha)}$ is the union of ${A_\alpha^m}$ over all standard ${m}$. This space has a Loeb measure ${\mu_{O(A_\alpha)}}$, defined by setting
$\displaystyle \mu_{O(A_\alpha)}(E_\alpha) := \hbox{st} \frac{|E_\alpha|}{|A_\alpha|}$
whenever ${E_\alpha}$ is an internal subset of ${A_\alpha^m}$ for any standard ${m}$, and extended to a countably additive measure; the arguments in Section 6 of this previous blog post can be easily modified to give a construction of this measure.
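The finite notion underlying this construction can be checked concretely: a symmetric progression ${A = \{-N,\dots,N\} \subset {\bf Z}}$ is a ${2}$-approximate group, since ${A+A = \{-2N,\dots,2N\}}$ is covered by the two translates ${A - N}$ and ${A + N}$. A quick verification:

```python
N = 5
A = set(range(-N, N + 1))                 # symmetric set containing the origin
sumset = {a + b for a in A for b in A}    # A + A = {-2N, ..., 2N}

# Two translates of A suffice to cover A + A, so A is a 2-approximate group.
cover = {a - N for a in A} | {a + N for a in A}
assert 0 in A and A == {-a for a in A}    # symmetry and origin
assert sumset <= cover
print("A + A covered by 2 translates of A")
```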
The Loeb measure ${\mu_{O(A_\alpha)}}$ is a translation invariant measure on ${O(A_{\alpha})}$, normalised so that ${A_\alpha}$ has Loeb measure one. As such, one should think of ${O(A_\alpha)}$ as being analogous to a locally compact abelian group equipped with a Haar measure. It should be noted though that ${O(A_\alpha)}$ is not actually a locally compact group with Haar measure, for two reasons:
• There is not an obvious topology on ${O(A_\alpha)}$ that makes it simultaneously locally compact, Hausdorff, and ${\sigma}$-compact. (One can get one or two out of three without difficulty, though.)
• The addition operation ${+\colon O(A_\alpha) \times O(A_\alpha) \rightarrow O(A_\alpha)}$ is not measurable from the product Loeb algebra ${{\mathcal L}_{O(A_\alpha)} \times {\mathcal L}_{O(A_\alpha)}}$ to ${{\mathcal L}_{O(A_\alpha)}}$. Instead, it is measurable from the coarser Loeb algebra ${{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}$ to ${{\mathcal L}_{O(A_\alpha)}}$ (compare with the analogous situation for nonstandard graphs).
Nevertheless, the analogy is a useful guide for the arguments that follow.
Let ${L(O(A_\alpha))}$ denote the space of bounded Loeb measurable functions ${f\colon O(A_\alpha) \rightarrow {\bf C}}$ (modulo almost everywhere equivalence) that are supported on ${A_\alpha^m}$ for some standard ${m}$; this is a complex algebra with respect to pointwise multiplication. There is also a convolution operation ${\star\colon L(O(A_\alpha)) \times L(O(A_\alpha)) \rightarrow L(O(A_\alpha))}$, defined by setting
$\displaystyle \hbox{st} f \star \hbox{st} g(x) := \hbox{st} \frac{1}{|A_\alpha|} \sum_{y \in A_\alpha^m} f(y) g(x-y)$
whenever ${f\colon A_\alpha^m \rightarrow {}^* {\bf C}}$, ${g\colon A_\alpha^l \rightarrow {}^* {\bf C}}$ are bounded nonstandard functions (extended by zero to all of ${O(A_\alpha)}$), and then extending to arbitrary elements of ${L(O(A_\alpha))}$ by density. Equivalently, ${f \star g}$ is the pushforward of the ${{\mathcal L}_{O(A_\alpha) \times O(A_\alpha)}}$-measurable function ${(x,y) \mapsto f(x) g(y)}$ under the map ${(x,y) \mapsto x+y}$.
The basic structural theorem is then as follows.
Theorem 1 (Kronecker factor) Let ${A_\alpha}$ be an ultra approximate group. Then there exists a (standard) locally compact abelian group ${G}$ of the form
$\displaystyle G = {\bf R}^d \times {\bf Z}^m \times T$
for some standard ${d,m}$ and some compact abelian group ${T}$, equipped with a Haar measure ${\mu_G}$ and a measurable homomorphism ${\pi\colon O(A_\alpha) \rightarrow G}$ (using the Loeb ${\sigma}$-algebra on ${O(A_\alpha)}$ and the Baire ${\sigma}$-algebra on ${G}$), with the following properties:
• (i) ${\pi}$ has dense image, and ${\mu_G}$ is the pushforward of Loeb measure ${\mu_{O(A_\alpha)}}$ by ${\pi}$.
• (ii) There exist sets ${\{0\} \subset U_0 \subset K_0 \subset G}$ with ${U_0}$ open and ${K_0}$ compact, such that
$\displaystyle \pi^{-1}(U_0) \subset 4A_\alpha \subset \pi^{-1}(K_0). \ \ \ \ \ (1)$
• (iii) Whenever ${K \subset U \subset G}$ with ${K}$ compact and ${U}$ open, there exists a nonstandard finite set ${B}$ such that
$\displaystyle \pi^{-1}(K) \subset B \subset \pi^{-1}(U). \ \ \ \ \ (2)$
• (iv) If ${f, g \in L}$, then we have the convolution formula
$\displaystyle f \star g = \pi^*( (\pi_* f) \star (\pi_* g) ) \ \ \ \ \ (3)$
where ${\pi_* f,\pi_* g}$ are the pushforwards of ${f,g}$ to ${L^2(G, \mu_G)}$, the convolution ${\star}$ on the right-hand side is convolution using ${\mu_G}$, and ${\pi^*}$ is the pullback map from ${L^2(G,\mu_G)}$ to ${L^2(O(A_\alpha), \mu_{O(A_\alpha)})}$. In particular, if ${\pi_* f = 0}$, then ${f \star g = 0}$ for all ${g \in L}$.
One can view the locally compact abelian group ${G}$ as a “model” or “Kronecker factor” for the ultra approximate group ${A_\alpha}$ (in close analogy with the Kronecker factor from ergodic theory). In the case that ${A_\alpha}$ is a genuine nonstandard finite group rather than an ultra approximate group, the non-compact components ${{\bf R}^d \times {\bf Z}^m}$ of the Kronecker group ${G}$ are trivial, and this theorem was implicitly established by Szegedy. The compact group ${T}$ is quite large, and in particular is likely to be inseparable; but as with the case of graphons, when one is only studying at most countably many functions ${f}$, one can cut down the size of this group to be separable (or equivalently, second countable or metrisable) if desired, so one often works with a “reduced Kronecker factor” which is a quotient of the full Kronecker factor ${G}$. Once one is in the separable case, the Baire sigma algebra is identical with the more familiar Borel sigma algebra.
Given any sequence of uniformly bounded functions ${f_n\colon A_n^m \rightarrow {\bf C}}$ for some fixed ${m}$, we can view the function ${f \in L}$ defined by
$\displaystyle f := \pi_* \hbox{st} \lim_{n \rightarrow \alpha} f_n \ \ \ \ \ (4)$
as an “additive limit” of the ${f_n}$, in much the same way that graphons ${p\colon V_\alpha \times V_\alpha \rightarrow [0,1]}$ are limits of the indicator functions ${1_{E_n}\colon V_n \times V_n \rightarrow \{0,1\}}$. The additive limits capture some of the statistics of the ${f_n}$, for instance the normalised means
$\displaystyle \frac{1}{|A_n|} \sum_{x \in A_n^m} f_n(x)$
converge (along the ultrafilter ${\alpha}$) to the mean
$\displaystyle \int_G f(x)\ d\mu_G(x),$
and for three sequences ${f_n,g_n,h_n\colon A_n^m \rightarrow {\bf C}}$ of functions, the normalised correlation
$\displaystyle \frac{1}{|A_n|^2} \sum_{x,y \in A_n^m} f_n(x) g_n(y) h_n(x+y)$
converges along ${\alpha}$ to the correlation
$\displaystyle \int_G \int_G f(x) g(y) h(x+y)\ d\mu_G(x) d\mu_G(y),$
the normalised ${U^2}$ Gowers norm
$\displaystyle ( \frac{1}{|A_n|^3} \sum_{x,y,z,w \in A_n^m: x+w=y+z} f_n(x) \overline{f_n(y)} \overline{f_n(z)} f_n(w))^{1/4}$
converges along ${\alpha}$ to the ${U^2}$ Gowers norm
$\displaystyle ( \int_{G \times G \times G} f(x) \overline{f(y)} \overline{f(z)} f(x+y-z)\ d\mu_G(x) d\mu_G(y) d\mu_G(z))^{1/4}$
and so forth. We caution however that some correlations that involve evaluating more than one function at the same point will not necessarily be preserved in the additive limit; for instance the normalised ${\ell^2}$ norm
$\displaystyle (\frac{1}{|A_n|} \sum_{x \in A_n^m} |f_n(x)|^2)^{1/2}$
does not necessarily converge to the ${L^2}$ norm
$\displaystyle (\int_G |f(x)|^2\ d\mu_G(x))^{1/2},$
but can converge instead to a larger quantity, due to the presence of the orthogonal projection ${\pi_*}$ in the definition (4) of ${f}$.
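For a genuinely finite cyclic group one can sanity-check the normalised ${U^2}$ norm against its Fourier-analytic expression ${\|\hat f\|_{\ell^4}}$ (a standard identity on ${{\bf Z}/N{\bf Z}}$, which we use here purely as a finite stand-in for the limit objects above):

```python
import numpy as np

def u2_norm_direct(f):
    """Normalised U^2 norm on Z/NZ: fourth root of
    (1/N^3) * sum over x+w=y+z of f(x) conj(f(y)) conj(f(z)) f(w)."""
    N = len(f)
    s = 0.0 + 0.0j
    for x in range(N):
        for y in range(N):
            for z in range(N):
                w = (y + z - x) % N   # the constraint x + w = y + z
                s += f[x] * np.conj(f[y]) * np.conj(f[z]) * f[w]
    return abs(s / N**3) ** 0.25

def u2_norm_fourier(f):
    # Standard identity: the U^2 norm equals the l^4 norm of the normalised
    # Fourier coefficients \hat f(xi) = (1/N) sum_x f(x) e(-x xi / N).
    fhat = np.fft.fft(f) / len(f)
    return np.sum(np.abs(fhat) ** 4) ** 0.25
```

The agreement of the two routines is exactly the expansion of the additive quadruple constraint ${x+w=y+z}$ into characters.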
An important special case of an additive limit occurs when the functions ${f_n\colon A_n^m \rightarrow {\bf C}}$ involved are indicator functions ${f_n = 1_{E_n}}$ of some subsets ${E_n}$ of ${A_n^m}$. The additive limit ${f \in L}$ does not necessarily remain an indicator function, but instead takes values in ${[0,1]}$ (much as a graphon ${p}$ takes values in ${[0,1]}$ even though the original indicators ${1_{E_n}}$ take values in ${\{0,1\}}$). The convolution ${f \star f\colon G \rightarrow [0,1]}$ is then the ultralimit of the normalised convolutions ${\frac{1}{|A_n|} 1_{E_n} \star 1_{E_n}}$; in particular, the measure of the support of ${f \star f}$ provides a lower bound on the limiting normalised cardinality ${\frac{1}{|A_n|} |E_n + E_n|}$ of a sumset. In many situations this lower bound is an equality, but this is not necessarily the case, because the sumset ${2E_n = E_n + E_n}$ could contain a large number of elements which have very few (${o(|A_n|)}$) representations as the sum of two elements of ${E_n}$, and in the limit these portions of the sumset fall outside of the support of ${f \star f}$. (One can think of the support of ${f \star f}$ as describing the “essential” sumset of ${2E_n = E_n + E_n}$, discarding those elements that have only very few representations.) Similarly for higher convolutions of ${f}$. Thus one can use additive limits to partially control the growth ${k E_n}$ of iterated sumsets of subsets ${E_n}$ of approximate groups ${A_n}$, in the regime where ${k}$ stays bounded and ${n}$ goes to infinity.
Theorem 1 can be proven by Fourier-analytic means (combined with Freiman’s theorem from additive combinatorics), and we will do so below the fold. For now, we give some illustrative examples of additive limits.
Example 2 (Bohr sets) We take ${A_n}$ to be the intervals ${A_n := \{ x \in {\bf Z}: |x| \leq N_n \}}$, where ${N_n}$ is a sequence going to infinity; these are ${2}$-approximate groups for all ${n}$. Let ${\theta}$ be an irrational real number, let ${I}$ be an interval in ${{\bf R}/{\bf Z}}$, and for each natural number ${n}$ let ${B_n}$ be the Bohr set
$\displaystyle B_n := \{ x \in A_n: \theta x \hbox{ mod } 1 \in I \}.$
In this case, the (reduced) Kronecker factor ${G}$ can be taken to be the infinite cylinder ${{\bf R} \times {\bf R}/{\bf Z}}$ with the usual Lebesgue measure ${\mu_G}$. The additive limits of ${1_{A_n}}$ and ${1_{B_n}}$ end up being ${1_A}$ and ${1_B}$, where ${A}$ is the finite cylinder
$\displaystyle A := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]\}$
and ${B}$ is the rectangle
$\displaystyle B := \{ (x,t) \in {\bf R} \times {\bf R}/{\bf Z}: x \in [-1,1]; t \in I \}.$
Geometrically, one should think of ${A_n}$ and ${B_n}$ as being wrapped around the cylinder ${{\bf R} \times {\bf R}/{\bf Z}}$ via the homomorphism ${x \mapsto (\frac{x}{N_n}, \theta x \hbox{ mod } 1)}$, and then one sees that ${B_n}$ is converging in some normalised weak sense to ${B}$, and similarly for ${A_n}$ and ${A}$. In particular, the additive limit predicts the growth rate of the iterated sumsets ${kB_n}$ to be quadratic in ${k}$ until ${k|I|}$ becomes comparable to ${1}$, at which point the growth transitions to linear growth, in the regime where ${k}$ is bounded and ${n}$ is large.
If ${\theta = \frac{p}{q}}$ were rational instead of irrational, then one would need to replace ${{\bf R}/{\bf Z}}$ by the finite subgroup ${\frac{1}{q}{\bf Z}/{\bf Z}}$ here.
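The quadratic-regime growth prediction can be probed numerically for a single modest ${N_n}$ by forming iterated sumsets directly; the parameters below (${\theta = \sqrt{2}}$, ${|I| = 1/10}$, ${N_n = 2000}$) are illustrative choices of ours, not taken from the text:

```python
import math

def sumset(X, Y):
    return {x + y for x in X for y in Y}

N, theta, I_len = 2000, math.sqrt(2), 0.1   # illustrative parameters
# B = { |x| <= N : theta * x mod 1 lies in an interval of length I_len }
B = {x for x in range(-N, N + 1) if (theta * x) % 1.0 < I_len}
B2 = sumset(B, B)
# In the regime k|I| << 1 the cylinder picture predicts |kB| to grow
# quadratically; in particular |2B| should be roughly 4|B| (a 2x dilation
# of the rectangle [-1,1] x I on the cylinder R x R/Z).
print(len(B), len(B2), len(B2) / len(B))
```

Equidistribution of ${\sqrt{2} x \hbox{ mod } 1}$ makes the observed ratio land close to the predicted value ${4}$.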
Example 3 (Structured subsets of progressions) We take ${A_n}$ to be the rank two progression
$\displaystyle A_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|, |b| \leq N_n \},$
where ${N_n}$ is a sequence going to infinity; these are ${4}$-approximate groups for all ${n}$. Let ${B_n}$ be the subset
$\displaystyle B_n := \{ a + b N_n^2: a,b \in {\bf Z}; |a|^2 + |b|^2 \leq N_n^2 \}.$
Then the (reduced) Kronecker factor can be taken to be ${G = {\bf R}^2}$ with Lebesgue measure ${\mu_G}$, and the additive limits of the ${1_{A_n}}$ and ${1_{B_n}}$ are then ${1_A}$ and ${1_B}$, where ${A}$ is the square
$\displaystyle A := \{ (a,b) \in {\bf R}^2: |a|, |b| \leq 1 \}$
and ${B}$ is the circle
$\displaystyle B := \{ (a,b) \in {\bf R}^2: a^2+b^2 \leq 1 \}.$
Geometrically, the picture is similar to the Bohr set one, except now one uses a Freiman homomorphism ${a + b N_n^2 \mapsto (\frac{a}{N_n}, \frac{b}{N_n})}$ for ${a,b = O( N_n )}$ to embed the original sets ${A_n, B_n}$ into the plane ${{\bf R}^2}$. In particular, one now expects the growth rate of the iterated sumsets ${k A_n}$ and ${k B_n}$ to be quadratic in ${k}$, in the regime where ${k}$ is bounded and ${n}$ is large.
Example 4 (Dissociated sets) Let ${d}$ be a fixed natural number, and take
$\displaystyle A_n = \{0, v_1,\dots,v_d,-v_1,\dots,-v_d \}$
where ${v_1,\dots,v_d}$ are randomly chosen elements of a large cyclic group ${{\bf Z}/p_n{\bf Z}}$, where ${p_n}$ is a sequence of primes going to infinity. These are ${O(d)}$-approximate groups. The (reduced) Kronecker factor ${G}$ can (almost surely) then be taken to be ${{\bf Z}^d}$ with counting measure, and the additive limit of ${1_{A_n}}$ is ${1_A}$, where ${A = \{ 0, e_1,\dots,e_d,-e_1,\dots,-e_d\}}$ and ${e_1,\dots,e_d}$ is the standard basis of ${{\bf Z}^d}$. In particular, the growth rates of ${k A_n}$ should grow approximately like ${k^d}$ for ${k}$ bounded and ${n}$ large.
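In the limit object ${A = \{0, e_1,\dots,e_d,-e_1,\dots,-e_d\} \subset {\bf Z}^d}$, the iterated sumset ${kA}$ is exactly the set of lattice points of ${\ell^1}$ norm at most ${k}$, whose cardinality grows like ${k^d}$; this is easy to confirm by brute force (a small sketch for ${d=2}$, where the count is ${2k^2+2k+1}$):

```python
def sumset(X, Y):
    """Sumset X + Y of two sets of integer vectors."""
    return {tuple(a + b for a, b in zip(x, y)) for x in X for y in Y}

d = 2
A = {(0,) * d}
for j in range(d):
    e = tuple(1 if i == j else 0 for i in range(d))
    A |= {e, tuple(-c for c in e)}   # A = {0, +-e_1, ..., +-e_d}

# |kA| = number of lattice points with l^1 norm <= k; for d = 2 this is
# 2k^2 + 2k + 1, exhibiting the k^d growth discussed above.
sizes, S = [len(A)], A
for _ in range(3):
    S = sumset(S, A)
    sizes.append(len(S))
print(sizes)
```

For ${k=1,\dots,4}$ this prints the sizes ${5, 13, 25, 41}$, matching ${2k^2+2k+1}$.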
Example 5 (Random subsets of groups) Let ${A_n = G_n}$ be a sequence of finite additive groups whose order is going to infinity. Let ${B_n}$ be a random subset of ${G_n}$ of some fixed density ${0 \leq \lambda \leq 1}$. Then (almost surely) the Kronecker factor here can be reduced all the way to the trivial group ${\{0\}}$, and the additive limit of the ${1_{B_n}}$ is the constant function ${\lambda}$. The convolutions ${\frac{1}{|G_n|} 1_{B_n} \star 1_{B_n}}$ then converge in the ultralimit (modulo almost everywhere equivalence) to the pullback of ${\lambda^2}$; this reflects the fact that ${(1-o(1))|G_n|}$ of the elements of ${G_n}$ can be represented as the sum of two elements of ${B_n}$ in ${(\lambda^2 + o(1)) |G_n|}$ ways. In particular, ${B_n+B_n}$ occupies a proportion ${1-o(1)}$ of ${G_n}$.
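A quick simulation (with an assumed density ${\lambda = 0.3}$, a single cyclic group, and a fixed random seed, all our own illustrative choices) exhibits both claims: the representation counts concentrate near ${\lambda^2 |G_n|}$, and ${B_n + B_n}$ covers almost all of ${G_n}$:

```python
import numpy as np

rng = np.random.default_rng(42)
N, lam = 997, 0.3                        # G = Z/997Z, density 0.3 (assumed)
B = np.flatnonzero(rng.random(N) < lam)  # random subset of G of density ~lam

# representation counts r(x) = #{(a,b) in B x B : a + b = x mod N},
# i.e. the unnormalised convolution 1_B * 1_B
r = np.zeros(N, dtype=int)
for a in B:
    np.add.at(r, (a + B) % N, 1)

coverage = np.count_nonzero(r) / N       # fraction of G hit by B + B
mean_rep = r.mean() / N                  # should be close to lam^2
print(coverage, mean_rep)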
Example 6 (Trigonometric series) Take ${A_n = G_n = {\bf Z}/p_n {\bf Z}}$ for a sequence ${p_n}$ of primes going to infinity, and for each ${n}$ let ${\xi_{n,1},\xi_{n,2},\dots}$ be an infinite sequence of frequencies chosen uniformly and independently from ${{\bf Z}/p_n{\bf Z}}$. Let ${f_n\colon {\bf Z}/p_n{\bf Z} \rightarrow {\bf C}}$ denote the random trigonometric series
$\displaystyle f_n(x) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i \xi_{n,j} x / p_n }.$
Then (almost surely) we can take the reduced Kronecker factor ${G}$ to be the infinite torus ${({\bf R}/{\bf Z})^{\bf N}}$ (with the Haar probability measure ${\mu_G}$), and the additive limit of the ${f_n}$ then becomes the function ${f\colon ({\bf R}/{\bf Z})^{\bf N} \rightarrow {\bf C}}$ defined by the formula
$\displaystyle f( (x_j)_{j=1}^\infty ) := \sum_{j=1}^\infty 2^{-j} e^{2\pi i x_j}.$
In fact, the pullback ${\pi^* f}$ is the ultralimit of the ${f_n}$. As such, for any standard exponent ${1 \leq q < \infty}$, the normalised ${\ell^q}$ norm
$\displaystyle (\frac{1}{p_n} \sum_{x \in {\bf Z}/p_n{\bf Z}} |f_n(x)|^q)^{1/q}$
can be seen to converge to the limit
$\displaystyle (\int_{({\bf R}/{\bf Z})^{\bf N}} |f(x)|^q\ d\mu_G(x))^{1/q}.$
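For ${q=2}$ the convergence can even be checked exactly on a truncation of the series: distinct characters on ${{\bf Z}/p{\bf Z}}$ are orthogonal, so the normalised ${\ell^2}$ norm of the truncated ${f_n}$ equals ${(\sum_{j=1}^J 4^{-j})^{1/2}}$ on the nose, approaching ${1/\sqrt{3}}$ as ${J \rightarrow \infty}$. (The truncation level ${J}$ and the modulus below are our own illustrative choices.)

```python
import numpy as np

rng = np.random.default_rng(7)
p, J = 4001, 12                            # prime modulus and truncation (assumed)
xis = rng.choice(p, size=J, replace=False)  # distinct random frequencies xi_j
x = np.arange(p)
# truncation of f_n(x) = sum_j 2^{-j} e^{2 pi i xi_j x / p}
f = sum(2.0 ** (-j - 1) * np.exp(2j * np.pi * xis[j] * x / p) for j in range(J))
l2 = np.sqrt(np.mean(np.abs(f) ** 2))
# orthogonality of distinct characters forces l2 = (sum_{j<=J} 4^{-j})^{1/2}
print(l2)
```

Up to floating-point error the printed value matches ${(\sum_{j=1}^{12} 4^{-j})^{1/2}}$ exactly, since the cross terms vanish identically.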
The reader is invited to consider combinations of the above examples, e.g. random subsets of Bohr sets, to get a sense of the general case of Theorem 1.
It is likely that this theorem can be extended to the noncommutative setting, using the noncommutative Freiman theorem of Emmanuel Breuillard, Ben Green, and myself, but I have not attempted to do so here (see though this recent preprint of Anush Tserunyan for some related explorations); in a separate direction, there should be extensions that can control higher Gowers norms, in the spirit of the work of Szegedy.
Note: the arguments below will presume some familiarity with additive combinatorics and with nonstandard analysis, and will be a little sketchy in places.
In addition to the Fields medallists mentioned in the previous post, the IMU also awarded the Nevanlinna prize to Subhash Khot, the Gauss prize to Stan Osher (my colleague here at UCLA!), and the Chern medal to Phillip Griffiths. Like I did in 2010, I’ll try to briefly discuss one result of each of the prize winners, though the fields of mathematics here are even further from my expertise than those discussed in the previous post (and all the caveats from that post apply here also).
Subhash Khot is best known for his Unique Games Conjecture, a problem in complexity theory that is perhaps second in importance only to the ${P \neq NP}$ problem for the purposes of demarcating the mysterious line between “easy” and “hard” problems (if one follows standard practice and uses “polynomial time” as the definition of “easy”). The ${P \neq NP}$ problem can be viewed as an assertion that it is difficult to find exact solutions to certain standard theoretical computer science problems (such as ${k}$-SAT); thanks to the NP-completeness phenomenon, it turns out that the precise problem posed here is not of critical importance, and ${k}$-SAT may be substituted with one of the many other problems known to be NP-complete. The unique games conjecture is similarly an assertion about the difficulty of finding even approximate solutions to certain standard problems, in particular “unique games” problems in which one needs to colour the vertices of a graph in such a way that the colour of one vertex of an edge is determined uniquely (via a specified matching) by the colour of the other vertex. This is an easy problem to solve if one insists on exact solutions (in which 100% of the edges have a colouring compatible with the specified matching), but becomes extremely difficult if one permits approximate solutions, with no exact solution available. In analogy with the NP-completeness phenomenon, the threshold for approximate satisfiability of many other problems (such as the MAX-CUT problem) is closely connected with the truth of the unique games conjecture; remarkably, the truth of the unique games conjecture would imply asymptotically sharp thresholds for many of these problems. This has implications for many theoretical computer science constructions which rely on hardness of approximation, such as probabilistically checkable proofs. For a more detailed survey of the unique games conjecture and its implications, see this Bulletin article of Trevisan.
My colleague Stan Osher has worked in many areas of applied mathematics, ranging from image processing to modeling fluids for major animation studios such as Pixar or Dreamworks, but today I would like to talk about one of his contributions that is close to an area of my own expertise, namely compressed sensing. One of the basic reconstruction problems in compressed sensing is the basis pursuit problem of finding the vector ${x \in {\bf R}^n}$ in an affine space ${\{ x \in {\bf R}^n: Ax = b \}}$ (where ${b \in {\bf R}^m}$ and ${A \in {\bf R}^{m\times n}}$ are given, and ${m}$ is typically somewhat smaller than ${n}$) which minimises the ${\ell^1}$-norm ${\|x\|_{\ell^1} := \sum_{i=1}^n |x_i|}$ of the vector ${x}$. This is a convex optimisation problem, and thus solvable in principle (it is a polynomial time problem, and thus “easy” in the above theoretical computer science sense). However, once ${n}$ and ${m}$ get moderately large (e.g. of the order of ${10^6}$), standard linear optimisation routines begin to become computationally expensive; also, it is difficult for off-the-shelf methods to exploit any additional structure (e.g. sparsity) in the measurement matrix ${A}$. Much of the problem comes from the fact that the functional ${x \mapsto \|x\|_1}$ is only barely convex. One way to speed up the optimisation problem is to relax it by replacing the constraint ${Ax=b}$ with a convex penalty term ${\frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2}$, thus one is now trying to minimise the unconstrained functional
$\displaystyle \|x\|_1 + \frac{1}{2\epsilon} \|Ax-b\|_{\ell^2}^2.$
This functional is more convex, and is over a computationally simpler domain ${{\bf R}^n}$ than the affine space ${\{x \in {\bf R}^n: Ax=b\}}$, so is easier (though still not entirely trivial) to optimise over. However, the minimiser ${x^\epsilon}$ to this problem need not match the minimiser ${x^0}$ to the original problem, particularly if the (sub-)gradient ${\partial \|x\|_1}$ of the original functional ${\|x\|_1}$ is large at ${x^0}$, and if ${\epsilon}$ is not set to be small. (And setting ${\epsilon}$ too small will cause other difficulties with numerically solving the optimisation problem, due to the need to divide by very small denominators.) However, if one modifies the objective function by an additional linear term
$\displaystyle \|x\|_1 - \langle p, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2$
then some simple convexity considerations reveal that the minimiser to this new problem will match the minimiser ${x^0}$ to the original problem, provided that ${p}$ is (or more precisely, lies in) the (sub-)gradient ${\partial \|x\|_1}$ of ${\|x\|_1}$ at ${x^0}$ – even if ${\epsilon}$ is not small. But, one does not know in advance what the correct value of ${p}$ should be, because one does not know what the minimiser ${x^0}$ is.
With Yin, Goldfarb and Darbon, Osher introduced a Bregman iteration method in which one solves for ${x}$ and ${p}$ simultaneously; given the current iterates ${x^k, p^k}$, one first updates ${x^k}$ to the minimiser ${x^{k+1} \in {\bf R}^n}$ of the convex functional
$\displaystyle \|x\|_1 - \langle p^k, x \rangle + \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2 \ \ \ \ \ (1)$
and then updates ${p^{k+1}}$ to the natural value of the subgradient ${\partial \|x\|_1}$ at ${x^{k+1}}$, namely
$\displaystyle p^{k+1} := p^k - \nabla \frac{1}{2 \epsilon} \|Ax-b\|_{\ell^2}^2|_{x=x^{k+1}} = p^k - \frac{1}{\epsilon} A^T (Ax^{k+1} - b)$
(note upon taking the first variation of (1) that ${p^{k+1}}$ is indeed in the subgradient). This procedure converges remarkably quickly (both in theory and in practice) to the true minimiser ${x^0}$ even for non-small values of ${\epsilon}$, and also has some ability to be parallelised, and has led to actual performance improvements of an order of magnitude or more in certain compressed sensing problems (such as reconstructing an MRI image).
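A minimal numerical sketch of this scheme follows, with the inner minimisation performed approximately by proximal-gradient (ISTA) steps; the inner solver, problem dimensions and parameter choices here are illustrative assumptions of ours, not the implementation used by Osher and his coauthors:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal map of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def bregman_basis_pursuit(A, b, eps=1.0, outer=30, inner=500):
    """Bregman iteration for min ||x||_1 s.t. Ax = b (sketch).
    Each outer step approximately minimises
        ||x||_1 - <p^k, x> + (1/(2 eps)) ||Ax - b||^2
    by ISTA, then updates p^{k+1} = p^k - (1/eps) A^T (A x^{k+1} - b)."""
    m, n = A.shape
    x, p = np.zeros(n), np.zeros(n)
    t = eps / np.linalg.norm(A, 2) ** 2   # ISTA step size 1/L
    for _ in range(outer):
        for _ in range(inner):
            grad = A.T @ (A @ x - b) / eps - p   # gradient of the smooth part
            x = soft_threshold(x - t * grad, t)
        p = p - A.T @ (A @ x - b) / eps          # dual (subgradient) update
    return x

# illustrative compressed-sensing instance (assumed sizes and seed)
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
x0 = np.zeros(60)
x0[[3, 17]] = [2.0, -1.5]                 # sparse ground truth
b = A @ x0
x = bregman_basis_pursuit(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

The point of the iteration is visible in the output: the constraint residual ${\|Ax-b\|}$ is driven towards zero across the outer steps even though ${\epsilon}$ is not small.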
Phillip Griffiths has made many contributions to complex, algebraic and differential geometry, and I am not qualified to describe most of these; my primary exposure to his work is through his text on algebraic geometry with Harris, but, excellent though that text is, it is not really representative of his research. But I thought I would mention one cute result of his related to the famous Nash embedding theorem. Suppose that one has a smooth ${n}$-dimensional Riemannian manifold that one wants to embed locally into a Euclidean space ${{\bf R}^m}$. The Nash embedding theorem guarantees that one can do this if ${m}$ is large enough depending on ${n}$, but what is the minimal value of ${m}$ one can get away with? Many years ago, my colleague Robert Greene showed that ${m = \frac{n(n+1)}{2} + n}$ sufficed (a simplified proof was subsequently given by Gunther). However, this is not believed to be sharp; if one replaces “smooth” with “real analytic” then a standard Cauchy-Kovalevski argument shows that ${m = \frac{n(n+1)}{2}}$ is possible, and no better. So this suggests that ${m = \frac{n(n+1)}{2}}$ is the threshold for the smooth problem also, but this remains open in general. The case ${n=1}$ is trivial, and the ${n=2}$ case is not too difficult (if the curvature is non-zero) as the codimension ${m-n}$ is one in this case, and the problem reduces to that of solving a Monge-Ampere equation. With Bryant and Yang, Griffiths settled the ${n=3}$ case, under a non-degeneracy condition on the Einstein tensor. This is quite a serious paper – over 100 pages combining differential geometry, PDE methods (e.g. Nash-Moser iteration), and even some harmonic analysis (e.g. they rely at one key juncture on an extension theorem of my advisor, Elias Stein). The main difficulty is that the relevant PDE degenerates along a certain characteristic submanifold of the cotangent bundle, which then requires an extremely delicate analysis to handle.
This is a blog version of a talk I recently gave at the IPAM workshop on “The Kakeya Problem, Restriction Problem, and Sum-product Theory”.
Note: the discussion here will be highly non-rigorous in nature, being extremely loose in particular with asymptotic notation and with the notion of dimension. Caveat emptor.
One of the most infamous unsolved problems at the intersection of geometric measure theory, incidence combinatorics, and real-variable harmonic analysis is the Kakeya set conjecture. We will focus on the following three-dimensional case of the conjecture, stated informally as follows:
Conjecture 1 (Kakeya conjecture) Let ${E}$ be a subset of ${{\bf R}^3}$ that contains a unit line segment in every direction. Then ${\hbox{dim}(E) = 3}$.
This conjecture is not precisely formulated here, because we have not specified exactly what type of set ${E}$ is (e.g. measurable, Borel, compact, etc.) and what notion of dimension we are using. We will deliberately ignore these technical details in this post. It is slightly more convenient for us here to work with lines instead of unit line segments, so we work with the following slight variant of the conjecture (which is essentially equivalent):
Conjecture 2 (Kakeya conjecture, again) Let ${{\cal L}}$ be a family of lines in ${{\bf R}^3}$ that meet ${B(0,1)}$ and contain a line in each direction. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ to ${B(0,2)}$ of every line ${\ell}$ in ${{\cal L}}$. Then ${\hbox{dim}(E) = 3}$.
As the space of all directions in ${{\bf R}^3}$ is two-dimensional, we thus see that ${{\cal L}}$ is an (at least) two-dimensional subset of the four-dimensional space of lines in ${{\bf R}^3}$ (actually, it lies in a compact subset of this space, since we have constrained the lines to meet ${B(0,1)}$). One could then ask if this is the only property of ${{\cal L}}$ that is needed to establish the Kakeya conjecture, that is to say if any subset of ${B(0,2)}$ which contains a two-dimensional family of lines (restricted to ${B(0,2)}$, and meeting ${B(0,1)}$) is necessarily three-dimensional. Here we have an easy counterexample, namely a plane in ${B(0,2)}$ (passing through the origin), which contains a two-dimensional collection of lines. However, we can exclude this case by adding an additional axiom, leading to what one might call a “strong” Kakeya conjecture:
Conjecture 3 (Strong Kakeya conjecture) Let ${{\cal L}}$ be a two-dimensional family of lines in ${{\bf R}^3}$ that meet ${B(0,1)}$, and assume the Wolff axiom that no (affine) plane contains more than a one-dimensional family of lines in ${{\cal L}}$. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ of every line ${\ell}$ in ${{\cal L}}$. Then ${\hbox{dim}(E) = 3}$.
Actually, to make things work out we need a more quantitative version of the Wolff axiom in which we constrain the metric entropy (and not just dimension) of lines that lie close to a plane, rather than exactly on the plane. However, for the informal discussion here we will ignore these technical details. Families of lines that lie in different directions will obey the Wolff axiom, but the converse is not true in general.
In 1995, Wolff established the important lower bound ${\hbox{dim}(E) \geq 5/2}$ (for various notions of dimension, e.g. Hausdorff dimension) for sets ${E}$ in Conjecture 3 (and hence also for the other forms of the Kakeya problem). However, there is a key obstruction to going beyond the ${5/2}$ barrier, coming from the possible existence of half-dimensional (approximate) subfields of the reals ${{\bf R}}$. To explain this problem, it is easiest to first discuss the complex version of the strong Kakeya conjecture, in which all relevant (real) dimensions are doubled:
Conjecture 4 (Strong Kakeya conjecture over ${{\bf C}}$) Let ${{\cal L}}$ be a four (real) dimensional family of complex lines in ${{\bf C}^3}$ that meet the unit ball ${B(0,1)}$ in ${{\bf C}^3}$, and assume the Wolff axiom that no four (real) dimensional (affine) subspace contains more than a two (real) dimensional family of complex lines in ${{\cal L}}$. Let ${E}$ be the union of the restriction ${\ell \cap B(0,2)}$ of every complex line ${\ell}$ in ${{\cal L}}$. Then ${E}$ has real dimension ${6}$.
The argument of Wolff can be adapted to the complex case to show that all sets ${E}$ occurring in Conjecture 4 have real dimension at least ${5}$. Unfortunately, this is sharp, due to the following fundamental counterexample:
Proposition 5 (Heisenberg group counterexample) Let ${H \subset {\bf C}^3}$ be the Heisenberg group
$\displaystyle H = \{ (z_1,z_2,z_3) \in {\bf C}^3: \hbox{Im}(z_1) = \hbox{Im}(z_2 \overline{z_3}) \}$
and let ${{\cal L}}$ be the family of complex lines
$\displaystyle \ell_{s,t,\alpha} := \{ (\overline{\alpha} z + t, z, sz + \alpha): z \in {\bf C} \}$
with ${s,t \in {\bf R}}$ and ${\alpha \in {\bf C}}$. Then ${H}$ is a five (real) dimensional subset of ${{\bf C}^3}$ that contains every line in the four (real) dimensional set ${{\cal L}}$; however each four real dimensional (affine) subspace contains at most a two (real) dimensional set of lines in ${{\cal L}}$. In particular, the strong Kakeya conjecture over the complex numbers is false.
This proposition is proven by a routine computation, which we omit here. The group structure on ${H}$ is given by the group law
$\displaystyle (z_1,z_2,z_3) \cdot (w_1,w_2,w_3) = (z_1 + w_1 + z_2 \overline{w_3} - z_3 \overline{w_2}, z_2 +w_2, z_3+w_3),$
giving ${H}$ the structure of a ${2}$-step simply-connected nilpotent Lie group, isomorphic to the usual Heisenberg group over ${{\bf R}^2}$. Note that while the Heisenberg group is a counterexample to the complex strong Kakeya conjecture, it is not a counterexample to the complex form of the original Kakeya conjecture, because the complex lines ${{\cal L}}$ in the Heisenberg counterexample do not point in distinct directions, but instead only point in a three (real) dimensional subset of the four (real) dimensional space of available directions for complex lines. For instance, one has the one real-dimensional family of parallel lines
$\displaystyle \ell_{0,t,0} = \{ (t, z, 0): z \in {\bf C}\}$
with ${t \in {\bf R}}$; multiplying this family of lines on the right by a group element in ${H}$ gives other families of parallel lines, which in fact sweep out all of ${{\cal L}}$.
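The routine computation behind Proposition 5 can at least be spot-checked numerically: each line ${\ell_{s,t,\alpha}}$ does lie in ${H}$, and ${H}$ is closed under the group law above (a small sketch, with randomly sampled parameters):

```python
import random

def in_H(z1, z2, z3, tol=1e-9):
    # H = { (z1, z2, z3) : Im(z1) = Im(z2 * conj(z3)) }
    return abs(z1.imag - (z2 * z3.conjugate()).imag) < tol

def mul(a, b):
    # the group law (z1,z2,z3) . (w1,w2,w3) from the text
    (z1, z2, z3), (w1, w2, w3) = a, b
    return (z1 + w1 + z2 * w3.conjugate() - z3 * w2.conjugate(),
            z2 + w2, z3 + w3)

def line_point(s, t, alpha, z):
    # a point of the complex line ell_{s,t,alpha}
    return (alpha.conjugate() * z + t, z, s * z + alpha)

random.seed(0)
rand_c = lambda: complex(random.uniform(-2, 2), random.uniform(-2, 2))
for _ in range(100):
    s, t = random.uniform(-2, 2), random.uniform(-2, 2)
    alpha, z, w = rand_c(), rand_c(), rand_c()
    pt, qt = line_point(s, t, alpha, z), line_point(s, t, alpha, w)
    assert in_H(*pt) and in_H(*qt)   # the lines lie in H
    assert in_H(*mul(pt, qt))        # H is closed under the group law
print("all checks passed")
```

The membership check boils down to the identity ${\hbox{Im}(\overline{\alpha} z) = \hbox{Im}(z \overline{(sz+\alpha)})}$ for real ${s}$, which is the heart of the omitted computation.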
The Heisenberg counterexample ultimately arises from the “half-dimensional” (and hence degree two) subfield ${{\bf R}}$ of ${{\bf C}}$, which induces an involution ${z \mapsto \overline{z}}$ which can then be used to define the Heisenberg group ${H}$ through the formula
$\displaystyle H = \{ (z_1,z_2,z_3) \in {\bf C}^3: z_1 - \overline{z_1} = z_2 \overline{z_3} - z_3 \overline{z_2} \}.$
Analogous Heisenberg counterexamples can also be constructed if one works over finite fields ${{\bf F}_{q^2}}$ that contain a “half-dimensional” subfield ${{\bf F}_q}$; we leave the details to the interested reader. Morally speaking, if ${{\bf R}}$ in turn contained a subfield of dimension ${1/2}$ (or even a subring or “approximate subring”), then one ought to be able to use this field to generate a counterexample to the strong Kakeya conjecture over the reals. Fortunately, such subfields do not exist; this was a conjecture of Erdos and Volkmann that was proven by Edgar and Miller, and more quantitatively by Bourgain (answering a question of Nets Katz and myself). However, this fact is not entirely trivial to prove, being a key example of the sum-product phenomenon.
We thus see that to go beyond the ${5/2}$ dimension bound of Wolff for the 3D Kakeya problem over the reals, one must do at least one of two things:
• (a) Exploit the distinct directions of the lines in ${{\mathcal L}}$ in a way that goes beyond the Wolff axiom; or
• (b) Exploit the fact that ${{\bf R}}$ does not contain half-dimensional subfields (or more generally, intermediate-dimensional approximate subrings).
(The situation is more complicated in higher dimensions, as there are more obstructions than the Heisenberg group; for instance, in four dimensions quadric surfaces are an important obstruction, as discussed in this paper of mine.)
Various partial or complete results on the Kakeya problem over various fields have been obtained through route (a) or route (b). For instance, in 2000, Nets Katz, Izabella Laba and myself used route (a) to improve Wolff’s lower bound of ${5/2}$ for Kakeya sets very slightly to ${5/2+10^{-10}}$ (for a weak notion of dimension, namely upper Minkowski dimension). In 2004, Bourgain, Katz, and myself established a sum-product estimate which (among other things) ruled out approximate intermediate-dimensional subrings of ${{\bf F}_p}$, and then pursued route (b) to obtain a corresponding improvement ${5/2+\epsilon}$ to the Kakeya conjecture over finite fields of prime order. The analogous (discretised) sum-product estimate over the reals was established by Bourgain in 2003, which in principle would allow one to extend the result of Katz, Laba and myself to the strong Kakeya setting, but this has not been carried out in the literature. Finally, in 2009, Dvir used route (a) and introduced the polynomial method (as discussed previously here) to completely settle the Kakeya conjecture in finite fields.
Below the fold, I present a heuristic argument of Nets Katz and myself, which in principle would use route (b) to establish the full (strong) Kakeya conjecture. In broad terms, the strategy is as follows:
1. Assume that the (strong) Kakeya conjecture fails, so that there are sets ${E}$ of the form in Conjecture 3 of dimension ${3-\sigma}$ for some ${\sigma>0}$. Assume that ${E}$ is “optimal”, in the sense that ${\sigma}$ is as large as possible.
2. Use the optimality of ${E}$ (and suitable non-isotropic rescalings) to establish strong forms of standard structural properties expected of such sets ${E}$, namely “stickiness”, “planiness”, “local graininess” and “global graininess” (we will roughly describe these properties below). Heuristically, these properties are constraining ${E}$ to “behave like” a putative Heisenberg group counterexample.
3. By playing all these structural properties off of each other, show that ${E}$ can be parameterised locally by a one-dimensional set which generates a counterexample to Bourgain’s sum-product theorem. This contradiction establishes the Kakeya conjecture.
Nets and I have had an informal version of this argument for many years, but were never able to make a satisfactory theorem (or even a partial Kakeya result) out of it, because we could not rigorously establish anywhere near enough of the necessary structural properties (stickiness, planiness, etc.) on the optimal set ${E}$ for a large number of reasons (one of which being that we did not have a good notion of dimension that did everything that we wished to demand of it). However, there is beginning to be movement in these directions (e.g. in this recent result of Guth using the polynomial method obtaining a weak version of local graininess on certain Kakeya sets). In view of this (and given that neither Nets or I have been actively working in this direction for some time now, due to many other projects), we’ve decided to distribute these ideas more widely than before, and in particular on this blog.
Roth’s theorem on arithmetic progressions asserts that every subset of the integers ${{\bf Z}}$ of positive upper density contains infinitely many arithmetic progressions of length three. There are many versions and variants of this theorem. Here is one of them:
Theorem 1 (Roth’s theorem) Let ${G = (G,+)}$ be a compact abelian group, with Haar probability measure ${\mu}$, which is ${2}$-divisible (i.e. the map ${x \mapsto 2x}$ is surjective) and let ${A}$ be a measurable subset of ${G}$ with ${\mu(A) \geq \alpha}$ for some ${0 < \alpha < 1}$. Then we have
$\displaystyle \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r)\ d\mu(x) d\mu(r) \gg_\alpha 1,$
where ${X \gg_\alpha Y}$ denotes the bound ${X \geq c_\alpha Y}$ for some ${c_\alpha > 0}$ depending only on ${\alpha}$.
This theorem is usually formulated in the case that ${G}$ is a finite abelian group of odd order (in which case the result is essentially due to Meshulam) or more specifically a cyclic group ${G = {\bf Z}/N{\bf Z}}$ of odd order (in which case it is essentially due to Varnavides), but is also valid for the more general setting of ${2}$-divisible compact abelian groups, as we shall shortly see. One can be more precise about the dependence of the implied constant ${c_\alpha}$ on ${\alpha}$, but to keep the exposition simple we will work at the qualitative level here, without trying at all to get good quantitative bounds. The theorem is also true without the ${2}$-divisibility hypothesis, but the proof we will discuss runs into some technical issues due to the degeneracy of the ${2r}$ shift in that case.
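To make the quantity in Theorem 1 concrete, here is a quick numerical illustration (a Python sketch, not part of the argument) in the cyclic group ${G = {\bf Z}/N{\bf Z}}$, where the double integral becomes a double average; the particular set ${A}$ chosen here is arbitrary. Note that the ${r=0}$ terms alone already give a lower bound of ${|A|/N^2}$, which is why the assertion below is unconditional; Theorem 1 upgrades this to a lower bound depending only on the density ${\alpha}$.

```python
# Brute-force evaluation of the trilinear average in Theorem 1 for
# G = Z/NZ with Haar probability (i.e. normalised counting) measure.
N = 101  # odd, so that G is 2-divisible
A = {x for x in range(N) if x % 4 in (0, 1)}  # an arbitrary set of density ~1/2
alpha = len(A) / N

T = sum(1 for x in range(N) for r in range(N)
        if x in A and (x + r) % N in A and (x + 2 * r) % N in A) / N ** 2

# The r = 0 (trivial progression) terms alone contribute |A|/N^2 > 0;
# Theorem 1 improves this to a bound depending only on alpha.
assert T >= len(A) / N ** 2
print(alpha, T)
```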
We can deduce Theorem 1 from the following more general Khintchine-type statement. Let ${\hat G}$ denote the Pontryagin dual of a compact abelian group ${G}$, that is to say the set of all continuous homomorphisms ${\xi: x \mapsto \xi \cdot x}$ from ${G}$ to the (additive) unit circle ${{\bf R}/{\bf Z}}$. Thus ${\hat G}$ is a discrete abelian group, and functions ${f \in L^2(G)}$ have a Fourier transform ${\hat f \in \ell^2(\hat G)}$ defined by
$\displaystyle \hat f(\xi) := \int_G f(x) e^{-2\pi i \xi \cdot x}\ d\mu(x).$
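For the model case ${G = {\bf Z}/N{\bf Z}}$, the dual group is again ${{\bf Z}/N{\bf Z}}$, with ${\xi \cdot x = \xi x/N \hbox{ mod } 1}$, and the Fourier transform above is the usual discrete Fourier transform with a ${1/N}$ normalisation. A minimal Python sketch (the test function is an arbitrary choice) verifying the resulting Plancherel identity ${\sum_{\xi} |\hat f(\xi)|^2 = \int_G |f|^2\ d\mu}$:

```python
import cmath

# G = Z/NZ: the dual is Z/NZ, with xi . x = xi*x/N mod 1, so the Fourier
# transform above becomes the discrete Fourier transform with a 1/N factor.
N = 32
f = [1.0 if x < N // 2 else 0.0 for x in range(N)]  # an arbitrary test function

def fhat(xi):
    return sum(f[x] * cmath.exp(-2j * cmath.pi * xi * x / N) for x in range(N)) / N

# Plancherel: the l^2 norm of fhat on the dual equals the L^2(mu) norm of f.
lhs = sum(abs(fhat(xi)) ** 2 for xi in range(N))
rhs = sum(v ** 2 for v in f) / N
assert abs(lhs - rhs) < 1e-9
```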
If ${G}$ is ${2}$-divisible, then ${\hat G}$ is ${2}$-torsion-free in the sense that the map ${\xi \mapsto 2 \xi}$ is injective. For any finite set ${S \subset \hat G}$ and any radius ${\rho>0}$, define the Bohr set
$\displaystyle B(S,\rho) := \{ x \in G: \sup_{\xi \in S} \| \xi \cdot x \|_{{\bf R}/{\bf Z}} < \rho \}$
where ${\|\theta\|_{{\bf R}/{\bf Z}}}$ denotes the distance of ${\theta}$ to the nearest integer. We refer to the cardinality ${|S|}$ of ${S}$ as the rank of the Bohr set. We record a simple volume bound on Bohr sets:
Lemma 2 (Volume packing bound) Let ${G}$ be a compact abelian group with Haar probability measure ${\mu}$. For any Bohr set ${B(S,\rho)}$, we have
$\displaystyle \mu( B( S, \rho ) ) \gg_{|S|, \rho} 1.$
Proof: We can cover the torus ${({\bf R}/{\bf Z})^S}$ by ${O_{|S|,\rho}(1)}$ translates ${\theta+Q}$ of the cube ${Q := \{ (\theta_\xi)_{\xi \in S} \in ({\bf R}/{\bf Z})^S: \sup_{\xi \in S} \|\theta_\xi\|_{{\bf R}/{\bf Z}} < \rho/2 \}}$. Then the sets ${\{ x \in G: (\xi \cdot x)_{\xi \in S} \in \theta + Q \}}$ form a cover of ${G}$. But all of these sets lie in a translate of ${B(S,\rho)}$, and the claim then follows from the translation invariance of ${\mu}$. $\Box$
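One can see Lemma 2 in action numerically. The following Python sketch builds a Bohr set in ${{\bf Z}/N{\bf Z}}$ (the parameters ${N}$, ${S}$, ${\rho}$ are arbitrary illustrative choices) and checks its symmetry together with the volume lower bound that the covering argument yields for these parameters (here ${\lceil 1/\rho \rceil^{|S|} = 100}$ cubes suffice, giving ${\mu(B) \geq \rho^{|S|}}$).

```python
# A Bohr set B(S, rho) in G = Z/NZ, where a frequency xi acts by
# xi . x = xi*x/N mod 1.  All parameters are illustrative choices.
N, S, rho = 1000, [3, 7], 0.1

def circle_dist(theta):  # || theta ||_{R/Z}
    theta %= 1.0
    return min(theta, 1.0 - theta)

B = {x for x in range(N)
     if max(circle_dist(xi * x / N) for xi in S) < rho}
mu_B = len(B) / N  # Haar probability measure of B(S, rho)

assert 0 in B and all((N - x) % N in B for x in B)  # Bohr sets are symmetric
# Volume bound: covering (R/Z)^S by ceil(1/rho)^{|S|} cubes of side rho
# and pigeonholing gives mu(B) >= rho^{|S|} for these parameters.
assert mu_B >= rho ** len(S)
```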
Given any Bohr set ${B(S,\rho)}$, we define a normalised “Lipschitz” cutoff function ${\nu_{B(S,\rho)}: G \rightarrow {\bf R}}$ by the formula
$\displaystyle \nu_{B(S,\rho)}(x) = c_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})_+ \ \ \ \ \ (1)$
where ${c_{B(S,\rho)}}$ is the constant such that
$\displaystyle \int_G \nu_{B(S,\rho)}\ d\mu = 1,$
thus
$\displaystyle c_{B(S,\rho)} = \left( \int_{B(S,\rho)} (1 - \frac{1}{\rho} \sup_{\xi \in S} \|\xi \cdot x\|_{{\bf R}/{\bf Z}})\ d\mu(x) \right)^{-1}.$
The function ${\nu_{B(S,\rho)}}$ should be viewed as an ${L^1}$-normalised “tent function” cutoff to ${B(S,\rho)}$. Note from Lemma 2 that
$\displaystyle 1 \ll_{|S|,\rho} c_{B(S,\rho)} \ll_{|S|,\rho} 1. \ \ \ \ \ (2)$
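A short Python sketch of the cutoff (1) in ${{\bf Z}/N{\bf Z}}$ (parameters again arbitrary), confirming the normalisation ${\int_G \nu\ d\mu = 1}$ and that the convolution ${\nu * \nu}$ appearing in Theorem 3 is again a mean-one density, bounded pointwise by ${\sup \nu}$:

```python
# The L^1-normalised "tent" cutoff (1) on G = Z/NZ, with integrals against
# Haar probability measure realised as averages; parameters are arbitrary.
N, S, rho = 200, [3, 7], 0.1

def circle_dist(theta):  # || theta ||_{R/Z}
    theta %= 1.0
    return min(theta, 1.0 - theta)

def tent(x):  # the unnormalised factor (1 - sup_xi ||xi.x|| / rho)_+
    return max(0.0, 1.0 - max(circle_dist(xi * x / N) for xi in S) / rho)

c = N / sum(tent(x) for x in range(N))  # the normalising constant c_{B(S,rho)}
nu = [c * tent(x) for x in range(N)]
assert abs(sum(nu) / N - 1.0) < 1e-9    # int nu dmu = 1

# nu * nu (the weight in Theorem 3) is again mean one, and bounded by sup nu:
nu2 = [sum(nu[y] * nu[(x - y) % N] for y in range(N)) / N for x in range(N)]
assert abs(sum(nu2) / N - 1.0) < 1e-9
assert max(nu2) <= max(nu) + 1e-9
```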
We then have the following sharper version of Theorem 1:
Theorem 3 (Roth-Khintchine theorem) Let ${G = (G,+)}$ be a ${2}$-divisible compact abelian group, with Haar probability measure ${\mu}$, and let ${\epsilon>0}$. Then for any measurable function ${f: G \rightarrow [0,1]}$, there exists a Bohr set ${B(S,\rho)}$ with ${|S| \ll_\epsilon 1}$ and ${\rho \gg_\epsilon 1}$ such that
$\displaystyle \int_G \int_G f(x) f(x+r) f(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \ \ \ \ \ (3)$
$\displaystyle \geq (\int_G f\ d\mu)^3 - O(\epsilon)$
where ${*}$ denotes the convolution operation
$\displaystyle f*g(x) := \int_G f(y) g(x-y)\ d\mu(y).$
A variant of this result (expressed in the language of ergodic theory) appears in this paper of Bergelson, Host, and Kra; a combinatorial version of the Bergelson-Host-Kra result that is closer to Theorem 3 subsequently appeared in this paper of Ben Green and myself, but this theorem arguably appears implicitly in a much older paper of Bourgain. To see why Theorem 3 implies Theorem 1, we apply the theorem with ${f := 1_A}$ and ${\epsilon}$ equal to a small multiple of ${\alpha^3}$ to conclude that there is a Bohr set ${B(S,\rho)}$ with ${|S| \ll_\alpha 1}$ and ${\rho \gg_\alpha 1}$ such that
$\displaystyle \int_G \int_G 1_A(x) 1_A(x+r) 1_A(x+2r) \nu_{B(S,\rho)}*\nu_{B(S,\rho)}(r)\ d\mu(x) d\mu(r) \gg \alpha^3.$
But from (2) we have the pointwise bound ${\nu_{B(S,\rho)}*\nu_{B(S,\rho)} \ll_\alpha 1}$, and Theorem 1 follows.
Below the fold, we give a short proof of Theorem 3, using an “energy pigeonholing” argument that essentially dates back to the 1986 paper of Bourgain mentioned previously (not to be confused with a later 1999 paper of Bourgain on Roth’s theorem that was highly influential, for instance in emphasising the importance of Bohr sets). The idea is to use the pigeonhole principle to choose the Bohr set ${B(S,\rho)}$ to capture all the “large Fourier coefficients” of ${f}$, but such that a certain “dilate” of ${B(S,\rho)}$ does not capture much more Fourier energy of ${f}$ than ${B(S,\rho)}$ itself. The bound (3) may then be obtained through elementary Fourier analysis, without much need to explicitly compute things like the Fourier transform of an indicator function of a Bohr set. (However, the bound obtained by this argument is going to be quite poor – of tower-exponential type.) To do this we perform a structural decomposition of ${f}$ into “structured”, “small”, and “highly pseudorandom” components, as is common in the subject (e.g. in this previous blog post), but even though we crucially need to retain non-negativity of one of the components in this decomposition, we can avoid recourse to conditional expectation with respect to a partition (or “factor”) of the space, using instead convolution with one of the ${\nu_{B(S,\rho)}}$ considered above to achieve a similar effect.
A core foundation of the subject now known as arithmetic combinatorics (and particularly the subfield of additive combinatorics) are the elementary sum set estimates (sometimes known as “Ruzsa calculus”) that relate the cardinality of various sum sets
$\displaystyle A+B := \{ a+b: a \in A, b \in B \}$
and difference sets
$\displaystyle A-B := \{ a-b: a \in A, b \in B \},$
as well as iterated sumsets such as ${3A=A+A+A}$, ${2A-2A=A+A-A-A}$, and so forth. Here, ${A, B}$ are finite non-empty subsets of some additive group ${G = (G,+)}$ (classically one took ${G={\bf Z}}$ or ${G={\bf R}}$, but nowadays one usually considers more general additive groups). Some basic estimates in this vein are the following:
Lemma 1 (Ruzsa covering lemma) Let ${A, B}$ be finite non-empty subsets of ${G}$. Then ${A}$ may be covered by at most ${\frac{|A+B|}{|B|}}$ translates of ${B-B}$.
Proof: Consider a maximal set of disjoint translates ${a+B}$ of ${B}$ by elements ${a \in A}$. These translates have cardinality ${|B|}$, are disjoint, and lie in ${A+B}$, so there are at most ${\frac{|A+B|}{|B|}}$ of them. By maximality, for any ${a' \in A}$, ${a'+B}$ must intersect at least one of the selected ${a+B}$, thus ${a' \in a+B-B}$, and the claim follows. $\Box$
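The covering lemma and its proof are easy to test by brute force. The following Python sketch (with arbitrary random subsets of ${\bf Z}$) runs the greedy selection of disjoint translates from the proof and confirms both the counting bound and the covering conclusion:

```python
import random

# Sanity check of the Ruzsa covering lemma and its proof on random subsets of Z.
random.seed(0)
A = set(random.sample(range(50), 12))
B = set(random.sample(range(50), 8))

sumset = {a + b for a in A for b in B}
diffB = {b - bp for b in B for bp in B}

# Greedily build a maximal family of disjoint translates a + B, as in the proof.
selected = []
for a in sorted(A):
    if all({a + b for b in B}.isdisjoint({ap + b for b in B}) for ap in selected):
        selected.append(a)

# Disjoint translates sit inside A + B, so there are at most |A+B|/|B| of them...
assert len(selected) * len(B) <= len(sumset)
# ...and by maximality, every a in A lies in some selected translate of B - B.
assert all(any(a - ap in diffB for ap in selected) for a in A)
```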
Lemma 2 (Ruzsa triangle inequality) Let ${A,B,C}$ be finite non-empty subsets of ${G}$. Then ${|A-C| \leq \frac{|A-B| |B-C|}{|B|}}$.
Proof: Consider the addition map ${+: (x,y) \mapsto x+y}$ from ${(A-B) \times (B-C)}$ to ${G}$. Every element ${a-c}$ of ${A - C}$ has a preimage ${\{ (x,y) \in (A-B) \times (B-C): x + y = a - c\}}$ under this map of cardinality at least ${|B|}$, thanks to the obvious identity ${a-c = (a-b) + (b-c)}$ for each ${b \in B}$. Since ${(A-B) \times (B-C)}$ has cardinality ${|A-B| |B-C|}$, the claim follows. $\Box$
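Again this is easy to sanity-check numerically; the following Python sketch verifies the triangle inequality on arbitrary random finite subsets of ${\bf Z}$:

```python
import random

# Brute-force check of |A - C| <= |A - B| |B - C| / |B| on random subsets of Z.
random.seed(1)
A = set(random.sample(range(100), 15))
B = set(random.sample(range(100), 10))
C = set(random.sample(range(100), 15))

AmB = {a - b for a in A for b in B}
BmC = {b - c for b in B for c in C}
AmC = {a - c for a in A for c in C}

# Stated in cross-multiplied form to avoid any rounding issues:
assert len(AmC) * len(B) <= len(AmB) * len(BmC)
```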
Such estimates (which are covered, incidentally, in Section 2 of my book with Van Vu) are particularly useful for controlling finite sets ${A}$ of small doubling, in the sense that ${|A+A| \leq K|A|}$ for some bounded ${K}$. (There are deeper theorems, most notably Freiman’s theorem, which give more control than what elementary Ruzsa calculus does, however the known bounds in the latter theorem are worse than polynomial in ${K}$ (although it is conjectured otherwise), whereas the elementary estimates are almost all polynomial in ${K}$.)
However, there are some settings in which the standard sum set estimates are not quite applicable. One such setting is the continuous setting, where one is dealing with bounded open sets in an additive Lie group (e.g. ${{\bf R}^n}$ or a torus ${({\bf R}/{\bf Z})^n}$) rather than a finite setting. Here, one can largely replicate the discrete sum set estimates by working with a Haar measure in place of cardinality; this is the approach taken for instance in this paper of mine. However, there is another setting, which one might dub the “discretised” setting (as opposed to the “discrete” setting or “continuous” setting), in which the sets ${A}$ remain finite (or at least discretisable to be finite), but for which there is a certain amount of “roundoff error” coming from the discretisation. As a typical example (working now in a non-commutative multiplicative setting rather than an additive one), consider the orthogonal group ${O_n({\bf R})}$ of orthogonal ${n \times n}$ matrices, and let ${A}$ be the set obtained by starting with all of the orthogonal matrices in ${O_n({\bf R})}$ and rounding each coefficient of each matrix in this set to the nearest multiple of ${\epsilon}$, for some small ${\epsilon>0}$. This forms a finite set (whose cardinality grows as ${\epsilon\rightarrow 0}$ like a certain negative power of ${\epsilon}$). In the limit ${\epsilon \rightarrow 0}$, the set ${A}$ is not a set of small doubling in the discrete sense. However, ${A \cdot A}$ is still close to ${A}$ in a metric sense, being contained in the ${O_n(\epsilon)}$-neighbourhood of ${A}$. Another key example comes from graphs ${\Gamma := \{ (x, f(x)): x \in G \}}$ of maps ${f: G \rightarrow H}$ from one additive group ${G = (G,+)}$ to another ${H = (H,+)}$.
If ${f}$ is “approximately additive” in the sense that for all ${x,y \in G}$, ${f(x+y)}$ is close to ${f(x)+f(y)}$ in some metric, then ${\Gamma}$ might not have small doubling in the discrete sense (because ${f(x+y)-f(x)-f(y)}$ could take a large number of values), but could be considered a set of small doubling in a discretised sense.
One would like to have a sum set (or product set) theory that can handle these cases, particularly in “high-dimensional” settings in which the standard methods of passing back and forth between continuous, discrete, or discretised settings behave poorly from a quantitative point of view due to the exponentially large doubling constant of balls. One way to do this is to impose a translation invariant metric ${d}$ on the underlying group ${G = (G,+)}$ (reverting back to additive notation), and replace the notion of cardinality by that of metric entropy. There are a number of almost equivalent ways to define this concept:
Definition 3 Let ${(X,d)}$ be a metric space, let ${E}$ be a subset of ${X}$, and let ${r>0}$ be a radius.
• The packing number ${N^{pack}_r(E)}$ is the largest number of points ${x_1,\dots,x_n}$ one can pack inside ${E}$ such that the balls ${B(x_1,r),\dots,B(x_n,r)}$ are disjoint.
• The internal covering number ${N^{int}_r(E)}$ is the fewest number of points ${x_1,\dots,x_n \in E}$ such that the balls ${B(x_1,r),\dots,B(x_n,r)}$ cover ${E}$.
• The external covering number ${N^{ext}_r(E)}$ is the fewest number of points ${x_1,\dots,x_n \in X}$ such that the balls ${B(x_1,r),\dots,B(x_n,r)}$ cover ${E}$.
• The metric entropy ${N^{ent}_r(E)}$ is the largest number of points ${x_1,\dots,x_n}$ one can find in ${E}$ that are ${r}$-separated, thus ${d(x_i,x_j) \geq r}$ for all ${i \neq j}$.
It is an easy exercise to verify the inequalities
$\displaystyle N^{ent}_{2r}(E) \leq N^{pack}_r(E) \leq N^{ext}_r(E) \leq N^{int}_r(E) \leq N^{ent}_r(E)$
for any ${r>0}$, and that ${N^*_r(E)}$ is non-increasing in ${r}$ and non-decreasing in ${E}$ for the three choices ${* = pack,ext,ent}$ (but monotonicity in ${E}$ can fail for ${*=int}$!). It turns out that the external covering number ${N^{ext}_r(E)}$ is slightly more convenient than the other notions of metric entropy, so we will abbreviate ${N_r(E) = N^{ext}_r(E)}$. The cardinality ${|E|}$ can be viewed as the limit of the entropies ${N^*_r(E)}$ as ${r \rightarrow 0}$.
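In one dimension these quantities can be computed exactly by greedy algorithms, which makes the chain of inequalities easy to test. A Python sketch (the point set ${E}$ and the radii are arbitrary; we adopt the convention of open balls and ${\geq r}$ separation, under which left-to-right greedy is optimal):

```python
# Exact computation of two of the entropies for a finite subset of R,
# plus a numerical check of the chain of inequalities between them.
def sep_entropy(E, r):
    """N^{ent}_r(E): size of a largest r-separated subset (greedy is exact in 1D)."""
    n, last = 0, None
    for x in sorted(E):
        if last is None or x - last >= r:
            n, last = n + 1, x
    return n

def ext_cover(E, r):
    """N^{ext}_r(E): fewest open intervals of radius r covering E (greedy is exact in 1D)."""
    n, bound = 0, None
    for x in sorted(E):
        if bound is None or x >= bound:
            n, bound = n + 1, x + 2 * r
    return n

E = [0.0, 0.1, 0.5, 0.9, 2.0, 2.05, 3.7, 5.0]
for r in [0.1, 0.3, 1.0]:
    # two links of the chain N^{ent}_{2r} <= N^{pack}_r <= N^{ext}_r <= N^{ent}_r:
    assert sep_entropy(E, 2 * r) <= ext_cover(E, r) <= sep_entropy(E, r)
# the entropies are non-increasing in the radius:
assert sep_entropy(E, 1.0) <= sep_entropy(E, 0.1)
```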
If we have the bounded doubling property that ${B(0,2r)}$ is covered by ${O(1)}$ translates of ${B(0,r)}$ for each ${r>0}$, and one has a Haar measure ${m}$ on ${G}$ which assigns a positive finite mass to each ball, then any of the above entropies ${N^*_r(E)}$ is comparable to ${m( E + B(0,r) ) / m(B(0,r))}$, as can be seen by simple volume packing arguments. Thus in the bounded doubling setting one can usually use the measure-theoretic sum set theory to derive entropy-theoretic sumset bounds (see e.g. this paper of mine for an example of this). However, it turns out that even in the absence of bounded doubling, one still has an entropy analogue of most of the elementary sum set theory, except that one has to accept some degradation in the radius parameter ${r}$ by some absolute constant. Such losses can be acceptable in applications in which the underlying sets ${A}$ are largely “transverse” to the balls ${B(0,r)}$, so that the ${N_r}$-entropy of ${A}$ is largely insensitive to modest changes in ${r}$; this is a situation which arises in particular in the case of graphs ${\Gamma = \{ (x,f(x)): x \in G \}}$ discussed above, if one works with “vertical” metrics whose balls extend primarily in the vertical direction. (I hope to present a specific application of this type here in the near future.)
Henceforth we work in an additive group ${G}$ equipped with a translation-invariant metric ${d}$. (One can also generalise things slightly by allowing the metric to attain the values ${0}$ or ${+\infty}$, without changing much of the analysis below.) By the Heine-Borel theorem, any precompact set ${E}$ will have finite entropy ${N_r(E)}$ for any ${r>0}$. We now have analogues of the two basic Ruzsa lemmas above:
Lemma 4 (Ruzsa covering lemma) Let ${A, B}$ be precompact non-empty subsets of ${G}$, and let ${r>0}$. Then ${A}$ may be covered by at most ${\frac{N_r(A+B)}{N_r(B)}}$ translates of ${B-B+B(0,2r)}$.
Proof: Let ${a_1,\dots,a_n \in A}$ be a maximal set of points such that the sets ${a_i + B + B(0,r)}$ are all disjoint. Then the sets ${a_i+B}$ are disjoint in ${A+B}$ and have entropy ${N_r(a_i+B)=N_r(B)}$, and furthermore any ball of radius ${r}$ can intersect at most one of the ${a_i+B}$. We conclude that ${N_r(A+B) \geq n N_r(B)}$, so ${n \leq \frac{N_r(A+B)}{N_r(B)}}$. If ${a \in A}$, then ${a+B+B(0,r)}$ must intersect one of the ${a_i + B + B(0,r)}$, so ${a \in a_i + B-B + B(0,2r)}$, and the claim follows. $\Box$
Lemma 5 (Ruzsa triangle inequality) Let ${A,B,C}$ be precompact non-empty subsets of ${G}$, and let ${r>0}$. Then ${N_{4r}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}$.
Proof: Consider the addition map ${+: (x,y) \mapsto x+y}$ from ${(A-B) \times (B-C)}$ to ${G}$. The domain ${(A-B) \times (B-C)}$ may be covered by ${N_r(A-B) N_r(B-C)}$ product balls ${B(x,r) \times B(y,r)}$. Every element ${a-c}$ of ${A - C}$ has a preimage ${\{ (x,y) \in (A-B) \times (B-C): x+y = a-c\}}$ under this map which projects to a translate of ${B}$, and thus must meet at least ${N_r(B)}$ of these product balls. However, if two elements of ${A-C}$ are separated by a distance of at least ${4r}$, then no product ball can intersect both preimages. We thus see that ${N_{4r}^{ent}(A-C) \leq \frac{N_r(A-B) N_r(B-C)}{N_r(B)}}$, and the claim follows. $\Box$
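As with the discrete version, Lemma 5 can be checked by brute force in a simple model case. The following Python sketch computes external covering numbers of finite subsets of ${\bf R}$ exactly by a greedy interval cover (optimal in one dimension) and verifies the inequality of Lemma 5 for random sets; the specific sets and radius are arbitrary choices:

```python
import random

# Checking the metric entropy Ruzsa triangle inequality
# N_{4r}(A-C) * N_r(B) <= N_r(A-B) * N_r(B-C) for finite subsets of R.
random.seed(2)

def ext_cover(E, r):
    """N_r(E): fewest open intervals of radius r covering the finite set E
    (left-to-right greedy is optimal in one dimension)."""
    n, bound = 0, None
    for x in sorted(E):
        if bound is None or x >= bound:
            n, bound = n + 1, x + 2 * r
    return n

A = [random.uniform(0, 10) for _ in range(20)]
B = [random.uniform(0, 10) for _ in range(10)]
C = [random.uniform(0, 10) for _ in range(20)]

AmB = [a - b for a in A for b in B]
BmC = [b - c for b in B for c in C]
AmC = [a - c for a in A for c in C]

r = 0.25
assert ext_cover(AmC, 4 * r) * ext_cover(B, r) <= ext_cover(AmB, r) * ext_cover(BmC, r)
```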
Below the fold we will record some further metric entropy analogues of sum set estimates (basically redoing much of Chapter 2 of my book with Van Vu). Unfortunately there does not seem to be a direct way to abstractly deduce metric entropy results from their sum set analogues (basically due to the failure of a certain strong version of Freiman’s theorem, as discussed in this previous post); nevertheless, the proofs of the discrete arguments are elementary enough that they can be modified with a small amount of effort to handle the entropy case. (In fact, there should be a very general model-theoretic framework in which both the discrete and entropy arguments can be processed in a unified manner; see this paper of Hrushovski for one such framework.)
It is also likely that many of the arguments here extend to the non-commutative setting, but for simplicity we will not pursue such generalisations here.
(This is an extended blog post version of my talk “Ultraproducts as a Bridge Between Discrete and Continuous Analysis” that I gave at the Simons Institute for the Theory of Computing at the workshop “Neo-Classical methods in discrete analysis”. Some of the material here is drawn from previous blog posts, notably “Ultraproducts as a bridge between hard analysis and soft analysis” and “Ultralimit analysis and quantitative algebraic geometry”. The text here has substantially more details than the talk; one may wish to skip all of the proofs given here to obtain a closer approximation to the original talk.)
Discrete analysis, of course, is primarily interested in the study of discrete (or “finitary”) mathematical objects: integers, rational numbers (which can be viewed as ratios of integers), finite sets, finite graphs, finite or discrete metric spaces, and so forth. However, many powerful tools in mathematics (e.g. ergodic theory, measure theory, topological group theory, algebraic geometry, spectral theory, etc.) work best when applied to continuous (or “infinitary”) mathematical objects: real or complex numbers, manifolds, algebraic varieties, continuous topological or metric spaces, etc. In order to apply results and ideas from continuous mathematics to discrete settings, there are basically two approaches. One is to directly discretise the arguments used in continuous mathematics, which often requires one to keep careful track of all the bounds on various quantities of interest, particularly with regard to various error terms arising from discretisation which would otherwise have been negligible in the continuous setting. The other is to construct continuous objects as limits of sequences of discrete objects of interest, so that results from continuous mathematics may be applied (often as a “black box”) to the continuous limit, which then can be used to deduce consequences for the original discrete objects which are quantitative (though often ineffectively so). The latter approach is the focus of this current talk.
The following table gives some examples of a discrete theory and its continuous counterpart, together with a limiting procedure that might be used to pass from the former to the latter:
(Discrete) | (Continuous) | (Limit method)
Ramsey theory | Topological dynamics | Compactness
Density Ramsey theory | Ergodic theory | Furstenberg correspondence principle
Graph/hypergraph regularity | Measure theory | Graph limits
Polynomial regularity | Linear algebra | Ultralimits
Structural decompositions | Hilbert space geometry | Ultralimits
Fourier analysis | Spectral theory | Direct and inverse limits
Quantitative algebraic geometry | Algebraic geometry | Schemes
Discrete metric spaces | Continuous metric spaces | Gromov-Hausdorff limits
Approximate group theory | Topological group theory | Model theory
As the above table illustrates, there are a variety of different ways to form a limiting continuous object. Roughly speaking, one can divide limits into three categories:
• Topological and metric limits. These notions of limits are commonly used by analysts. Here, one starts with a sequence (or perhaps a net) of objects ${x_n}$ in a common space ${X}$, which one then endows with the structure of a topological space or a metric space, by defining a notion of distance between two points of the space, or a notion of open neighbourhoods or open sets in the space. Provided that the sequence or net is convergent, this produces a limit object ${\lim_{n \rightarrow \infty} x_n}$, which remains in the same space, and is “close” to many of the original objects ${x_n}$ with respect to the given metric or topology.
• Categorical limits. These notions of limits are commonly used by algebraists. Here, one starts with a sequence (or more generally, a diagram) of objects ${x_n}$ in a category ${X}$, which are connected to each other by various morphisms. If the ambient category is well-behaved, one can then form the direct limit ${\varinjlim x_n}$ or the inverse limit ${\varprojlim x_n}$ of these objects, which is another object in the same category ${X}$, and is connected to the original objects ${x_n}$ by various morphisms.
• Logical limits. These notions of limits are commonly used by model theorists. Here, one starts with a sequence of objects ${x_{\bf n}}$ or of spaces ${X_{\bf n}}$, each of which is (a component of) a model for a given (first-order) mathematical language (e.g. if one is working in the language of groups, ${X_{\bf n}}$ might be groups and ${x_{\bf n}}$ might be elements of these groups). By using devices such as the ultraproduct construction, or the compactness theorem in logic, one can then create a new object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$ or a new space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$, which is still a model of the same language (e.g. if the spaces ${X_{\bf n}}$ were all groups, then the limiting space ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ will also be a group), and is “close” to the original objects or spaces in the sense that any assertion (in the given language) that is true for the limiting object or space, will also be true for many of the original objects or spaces, and conversely. (For instance, if ${\prod_{{\bf n} \rightarrow \alpha} X_{\bf n}}$ is an abelian group, then the ${X_{\bf n}}$ will also be abelian groups for many ${{\bf n}}$.)
The purpose of this talk is to highlight the third type of limit, and specifically the ultraproduct construction, as being a “universal” limiting procedure that can be used to replace most of the limits previously mentioned. Unlike the topological or metric limits, one does not need the original objects ${x_{\bf n}}$ to all lie in a common space ${X}$ in order to form an ultralimit ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; they are permitted to lie in different spaces ${X_{\bf n}}$; this is more natural in many discrete contexts, e.g. when considering graphs on ${{\bf n}}$ vertices in the limit when ${{\bf n}}$ goes to infinity. Also, no convergence properties on the ${x_{\bf n}}$ are required in order for the ultralimit to exist. Similarly, ultraproduct limits differ from categorical limits in that no morphisms between the various spaces ${X_{\bf n}}$ involved are required in order to construct the ultraproduct.
With so few requirements on the objects ${x_{\bf n}}$ or spaces ${X_{\bf n}}$, the ultraproduct construction is necessarily a very “soft” one. Nevertheless, the construction has two very useful properties which make it particularly useful for the purpose of extracting good continuous limit objects out of a sequence of discrete objects. First of all, there is Łoś’s theorem, which roughly speaking asserts that any first-order sentence which is asymptotically obeyed by the ${x_{\bf n}}$, will be exactly obeyed by the limit object ${\lim_{{\bf n} \rightarrow \alpha} x_{\bf n}}$; in particular, one can often take a discrete sequence of “partial counterexamples” to some assertion, and produce a continuous “complete counterexample” to that same assertion via an ultraproduct construction; taking the contrapositives, one can often then establish a rigorous equivalence between a quantitative discrete statement and its qualitative continuous counterpart. Secondly, there is the countable saturation property that ultraproducts automatically enjoy, which is a property closely analogous to that of compactness in topological spaces, and can often be used to ensure that the continuous objects produced by ultraproduct methods are “complete” or “compact” in various senses, which is particularly useful in being able to upgrade qualitative (or “pointwise”) bounds to quantitative (or “uniform”) bounds, more or less “for free”, thus reducing significantly the burden of “epsilon management” (although the price one pays for this is that one needs to pay attention to which mathematical objects of study are “standard” and which are “nonstandard”).
To achieve this compactness or completeness, one sometimes has to restrict to the “bounded” portion of the ultraproduct, and it is often also convenient to quotient out the “infinitesimal” portion in order to complement these compactness properties with a matching “Hausdorff” property, thus creating familiar examples of continuous spaces, such as locally compact Hausdorff spaces.
Ultraproducts are not the only logical limit in the model theorist’s toolbox, but they are one of the simplest to set up and use, and already suffice for many of the applications of logical limits outside of model theory. In this post, I will set out the basic theory of these ultraproducts, and illustrate how they can be used to pass between discrete and continuous theories in each of the examples listed in the above table.
Apart from the initial “one-time cost” of setting up the ultraproduct machinery, the main loss one incurs when using ultraproduct methods is that it becomes very difficult to extract explicit quantitative bounds from results that are proven by transferring qualitative continuous results to the discrete setting via ultraproducts. However, in many cases (particularly those involving regularity-type lemmas) the bounds are already of tower-exponential type or worse, and there is arguably not much to be lost by abandoning the explicit quantitative bounds altogether.
Let ${F}$ be a field. A definable set over ${F}$ is a set of the form
$\displaystyle \{ x \in F^n | \phi(x) \hbox{ is true} \} \ \ \ \ \ (1)$
where ${n}$ is a natural number, and ${\phi(x)}$ is a predicate involving the ring operations ${+,\times}$ of ${F}$, the equality symbol ${=}$, an arbitrary number of constants and free variables in ${F}$, the quantifiers ${\forall, \exists}$, boolean operators such as ${\vee,\wedge,\neg}$, and parentheses and colons, where the quantifiers are always understood to be over the field ${F}$. Thus, for instance, the set of quadratic residues
$\displaystyle \{ x \in F | \exists y: x = y \times y \}$
is definable over ${F}$, and any algebraic variety over ${F}$ is also a definable set over ${F}$. Henceforth we will abbreviate “definable over ${F}$” simply as “definable”.
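As a toy illustration, the defining predicate of the quadratic residues can be evaluated directly by brute force over a small finite field (a Python sketch; the choice ${F = {\bf F}_7}$ is arbitrary):

```python
# Evaluating the definable set { x in F : exists y : x = y * y } of quadratic
# residues by brute force over F = F_7 (the quantifier ranges over the field).
p = 7
F = range(p)
quadratic_residues = {x for x in F if any(x == (y * y) % p for y in F)}
assert quadratic_residues == {0, 1, 2, 4}
```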
If ${F}$ is a finite field, then every subset of ${F^n}$ is definable, since finite sets are automatically definable. However, we can obtain a more interesting notion in this case by restricting the complexity of a definable set. We say that ${E \subset F^n}$ is a definable set of complexity at most ${M}$ if ${n \leq M}$, and ${E}$ can be written in the form (1) for some predicate ${\phi}$ of length at most ${M}$ (where all operators, quantifiers, relations, variables, constants, and punctuation symbols are considered to have unit length). Thus, for instance, a hypersurface in ${n}$ dimensions of degree ${d}$ would be a definable set of complexity ${O_{n,d}(1)}$. We will then be interested in the regime where the complexity remains bounded, but the field size (or field characteristic) becomes large.
In a recent paper, I established (in the large characteristic case) the following regularity lemma for dense definable graphs, which significantly strengthens the Szemerédi regularity lemma in this context, by eliminating “bad” pairs, giving a polynomially strong regularity, and also giving definability of the cells:
Lemma 1 (Algebraic regularity lemma) Let ${F}$ be a finite field, let ${V,W}$ be definable non-empty sets of complexity at most ${M}$, and let ${E \subset V \times W}$ also be definable with complexity at most ${M}$. Assume that the characteristic of ${F}$ is sufficiently large depending on ${M}$. Then we may partition ${V = V_1 \cup \ldots \cup V_m}$ and ${W = W_1 \cup \ldots \cup W_n}$ with ${m,n = O_M(1)}$, with the following properties:
• (Definability) Each of the ${V_1,\ldots,V_m,W_1,\ldots,W_n}$ are definable of complexity ${O_M(1)}$.
• (Size) We have ${|V_i| \gg_M |V|}$ and ${|W_j| \gg_M |W|}$ for all ${i=1,\ldots,m}$ and ${j=1,\ldots,n}$.
• (Regularity) We have
$\displaystyle |E \cap (A \times B)| = d_{ij} |A| |B| + O_M( |F|^{-1/4} |V| |W| ) \ \ \ \ \ (2)$
for all ${i=1,\ldots,m}$, ${j=1,\ldots,n}$, ${A \subset V_i}$, and ${B\subset W_j}$, where ${d_{ij}}$ is a rational number in ${[0,1]}$ with numerator and denominator ${O_M(1)}$.
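One can get a feel for the regularity estimate (2) from the definable graph ${E = \{(x,y): x - y \hbox{ a nonzero square}\}}$ over ${{\bf F}_p}$ (essentially the Paley graph), which is quasirandom with density about ${1/2}$. The Python sketch below (with arbitrary choices of ${p}$ and test sets ${A,B}$, and a deliberately loose tolerance) illustrates the phenomenon only, not the proof:

```python
# The definable bipartite graph E = {(x,y) : x - y a nonzero square} over F_p
# (essentially the Paley graph) has edge density ~1/2 between large vertex
# sets, illustrating the flavour of (2).  Tolerances are deliberately loose.
p = 101  # a prime congruent to 1 mod 4
squares = {(y * y) % p for y in range(1, p)}  # the (p-1)/2 nonzero squares

A = range(0, p // 2)   # arbitrary test sets of vertices
B = range(p // 2, p)

edges = sum(1 for x in A for y in B if (x - y) % p in squares)
density = edges / (len(A) * len(B))
# Quasirandomness heuristics predict a deviation from 1/2 of order p^{-1/2};
# we assert only a crude version of this.
assert abs(density - 0.5) < 0.2
```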
My original proof of this lemma was quite complicated, based on an explicit calculation of the “square”
$\displaystyle \mu(w,w') := \{ v \in V: (v,w), (v,w') \in E \}$
of ${E}$ using the Lang-Weil bound and some facts about the étale fundamental group. It was the reliance on the latter which was the main reason why the result was restricted to the large characteristic setting. (I then applied this lemma to classify expanding polynomials over finite fields of large characteristic, but I will not discuss these applications here; see this previous blog post for more discussion.)
Recently, Anand Pillay and Sergei Starchenko (and independently, Udi Hrushovski) have observed that the theory of the étale fundamental group is not necessary in the argument, and the lemma can in fact be deduced from quite general model theoretic techniques, in particular using (a local version of) the concept of stability. One of the consequences of this new proof of the lemma is that the hypothesis of large characteristic can be omitted; the lemma is now known to be valid for arbitrary finite fields ${F}$ (although its content is trivial unless the field is sufficiently large depending on the complexity bound ${M}$).
Inspired by this, I decided to see if I could find yet another proof of the algebraic regularity lemma, again avoiding the theory of the étale fundamental group. It turns out that the spectral proof of the Szemerédi regularity lemma (discussed in this previous blog post) adapts very nicely to this setting. The key fact needed about definable sets over finite fields is that their cardinality takes on an essentially discrete set of values. More precisely, we have the following fundamental result of Chatzidakis, van den Dries, and Macintyre:
Proposition 2 Let ${F}$ be a finite field, and let ${M > 0}$.
• (Discretised cardinality) If ${E}$ is a non-empty definable set of complexity at most ${M}$, then one has
$\displaystyle |E| = c |F|^d + O_M( |F|^{d-1/2} ) \ \ \ \ \ (3)$
where ${d = O_M(1)}$ is a natural number, and ${c}$ is a positive rational number with numerator and denominator ${O_M(1)}$. In particular, we have ${|F|^d \ll_M |E| \ll_M |F|^d}$.
• (Definable cardinality) Assume ${|F|}$ is sufficiently large depending on ${M}$. If ${V, W}$, and ${E \subset V \times W}$ are definable sets of complexity at most ${M}$, so that ${E_w := \{ v \in V: (v,w) \in E \}}$ can be viewed as a definable subset of ${V}$ that is definably parameterised by ${w \in W}$, then for each natural number ${d = O_M(1)}$ and each positive rational ${c}$ with numerator and denominator ${O_M(1)}$, the set
$\displaystyle \{ w \in W: |E_w| = c |F|^d + O_M( |F|^{d-1/2} ) \} \ \ \ \ \ (4)$
is definable with complexity ${O_M(1)}$, where the implied constants in the asymptotic notation used to define (4) are the same as those appearing in (3). (Informally: the “dimension” ${d}$ and “measure” ${c}$ of ${E_w}$ depend definably on ${w}$.)
We will take this proposition as a black box; a proof can be obtained by combining the description of definable sets over pseudofinite fields (discussed in this previous post) with the Lang-Weil bound (discussed in this previous post). (The former fact is phrased using nonstandard analysis, but one can use standard compactness-and-contradiction arguments to convert such statements to statements in standard analysis, as discussed in this post.)
The above proposition places severe restrictions on the cardinality of definable sets; for instance, it shows that one cannot have a definable set of complexity at most ${M}$ and cardinality ${|F|^{1/2}}$, if ${|F|}$ is sufficiently large depending on ${M}$. If ${E \subset V}$ are definable sets of complexity at most ${M}$, it shows that ${|E| = (c+ O_M(|F|^{-1/2})) |V|}$ for some rational ${0\leq c \leq 1}$ with numerator and denominator ${O_M(1)}$; furthermore, if ${c=0}$, we may improve this bound to ${|E| = O_M( |F|^{-1} |V|)}$. In particular, we obtain the following “self-improving” properties:
• If ${E \subset V}$ are definable of complexity at most ${M}$ and ${|E| \leq \epsilon |V|}$ for some ${\epsilon>0}$, then (if ${\epsilon}$ is sufficiently small depending on ${M}$ and ${F}$ is sufficiently large depending on ${M}$) this forces ${|E| = O_M( |F|^{-1} |V| )}$.
• If ${E \subset V}$ are definable of complexity at most ${M}$ and ${||E| - c |V|| \leq \epsilon |V|}$ for some ${\epsilon>0}$ and positive rational ${c}$, then (if ${\epsilon}$ is sufficiently small depending on ${M,c}$ and ${F}$ is sufficiently large depending on ${M,c}$) this forces ${|E| = c |V| + O_M( |F|^{-1/2} |V| )}$.
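To see Proposition 2 in action on a concrete family of definable sets, one can take ${E}$ to be a plane curve over ${F = F_p}$; the estimate (3) then specialises (with ${d=1}$ and ${c=1}$) to a Hasse-type bound ${|E| = p + O(\sqrt{p})}$. The short script below (the curve ${y^2 = x^3 + x}$ is of course just an arbitrary choice for illustration) verifies this numerically by brute force:

```python
# Brute-force count of the affine points of y^2 = x^3 + x over F_p,
# checked against the Hasse bound |N - p| <= 2*sqrt(p) -- the simplest
# instance of the discretised cardinality estimate (3), with d = 1, c = 1.
import math

def affine_points(p):
    """Number of pairs (x, y) in F_p x F_p with y^2 = x^3 + x (mod p)."""
    # For each residue r, count how many y satisfy y^2 = r (0, 1 or 2).
    sqrt_count = {}
    for y in range(p):
        r = y * y % p
        sqrt_count[r] = sqrt_count.get(r, 0) + 1
    return sum(sqrt_count.get((x * x * x + x) % p, 0) for x in range(p))

for p in [101, 211, 499, 1009]:
    N = affine_points(p)
    print(p, N, abs(N - p) <= 2 * math.sqrt(p))
```

In particular the counts never drift continuously away from ${p}$: they cluster within ${2\sqrt{p}}$ of it, as the discretised nature of (3) demands.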
It turns out that these self-improving properties can be applied to the coefficients of various matrices (basically powers of the adjacency matrix associated to ${E}$) that arise in the spectral proof of the regularity lemma to significantly improve the bounds in that lemma; we describe how this is done below the fold. We also make some connections to the stability-based proofs of Pillay-Starchenko and Hrushovski.
I’ve just uploaded to the arXiv my article “Algebraic combinatorial geometry: the polynomial method in arithmetic combinatorics, incidence combinatorics, and number theory“, submitted to the new journal “EMS surveys in the mathematical sciences“. This is the first draft of a survey article on the polynomial method – a technique in combinatorics and number theory for controlling a relevant set of points by comparing it with the zero set of a suitably chosen polynomial, and then using tools from algebraic geometry (e.g. Bezout’s theorem) on that zero set. As such, the method combines algebraic geometry with combinatorial geometry, and could be viewed as the philosophy of a combined field which I dub “algebraic combinatorial geometry”. There is also an important extension of this method when one is working over the reals, in which methods from algebraic topology (e.g. the ham sandwich theorem and its generalisation to polynomials), and not just algebraic geometry, also come into play.
The polynomial method has been used independently many times in mathematics; for instance, it plays a key role in the proof of Baker’s theorem in transcendence theory, or Stepanov’s method in giving an elementary proof of the Riemann hypothesis for curves over finite fields; in combinatorics, the Nullstellensatz of Alon is also another relatively early use of the polynomial method. More recently, it underlies Dvir’s proof of the Kakeya conjecture over finite fields and Guth and Katz’s near-complete solution to the Erdos distance problem in the plane, and can be used to give a short proof of the Szemeredi-Trotter theorem. One of the aims of this survey is to try to present all of these disparate applications of the polynomial method in a somewhat unified context; my hope is that there will eventually be a systematic foundation for algebraic combinatorial geometry which naturally contains all of these different instances of the polynomial method (and also suggests new instances to explore); but the field is unfortunately not at that stage of maturity yet.
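The first step in nearly all of these applications is an elementary linear algebra observation: a space of polynomials whose dimension exceeds the number of points under consideration must contain a nonzero polynomial vanishing at every one of those points. Here is a toy verification over ${F_3}$ (the field, the degree bound, and the point set below are arbitrary choices for illustration, and the search is done by brute force rather than by solving the linear system):

```python
# Find a nonzero polynomial of degree <= 2 in two variables over F_3
# vanishing on five given points.  There are 6 monomials 1, x, y, x^2,
# xy, y^2, hence a 6-dimensional coefficient space and only 5 linear
# constraints, so a nonzero solution is guaranteed to exist; with only
# 3^6 = 729 candidate coefficient vectors we can simply search for it.
from itertools import product

p = 3
points = [(0, 0), (1, 2), (2, 1), (1, 1), (2, 2)]
monomials = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

def evaluate(coeffs, x, y):
    """Evaluate the polynomial with the given coefficients at (x, y) mod p."""
    return sum(c * pow(x, i, p) * pow(y, j, p)
               for c, (i, j) in zip(coeffs, monomials)) % p

vanishing = next(c for c in product(range(p), repeat=len(monomials))
                 if any(c) and all(evaluate(c, x, y) == 0 for x, y in points))
print(vanishing)  # a nonzero coefficient vector vanishing on all five points
```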
This is something of a first draft, so comments and suggestions are even more welcome than usual. (For instance, I have already had my attention drawn to some additional uses of the polynomial method in the literature that I was not previously aware of.)
https://zenodo.org/record/3891366/export/schemaorg_jsonld
Conference paper Open Access
# Conventional Power Plants to TSO Frequency Containment Reserves - A Competitive Analysis for Virtual Power Plant's Role
Ali Jibran; Silvestro Federico
### JSON-LD (schema.org) Export
{
"description": "<p>Frequency regulation is one of the basic objectives in ancillary market, which involves different stages and multiple participants. There exists different techniques for this service, where the points of demarcation are the time of service, and the regulatory requirements. The paper discusses primary frequency regulation, w.r.t Italian regulations, which is provided by conventional power plants upon TSO requests. The paper demonstrates conventional technique with its limitations, and proposes use of Virtual Power Plant (VPP) for the service provision. Storage and renewables techniques are compared under VPP context, and the use of storage is motivated. Finally, technical and economic comparison amongst potential storage techniques is done.</p>",
"creator": [
{
"affiliation": "DITEN & MEAN4SG, University of Genoa,Genova,Italy",
"@type": "Person",
"name": "Ali Jibran"
},
{
"affiliation": "DITEN, University of Genoa,Genova,Italy",
"@type": "Person",
"name": "Silvestro Federico"
}
],
"headline": "Conventional Power Plants to TSO Frequency Containment Reserves - A Competitive Analysis for Virtual Power Plant's Role",
"datePublished": "2019-11-11",
"url": "https://zenodo.org/record/3891366",
"@context": "https://schema.org/",
"identifier": "https://doi.org/10.5281/zenodo.3891366",
"@id": "https://doi.org/10.5281/zenodo.3891366",
"@type": "ScholarlyArticle",
"name": "Conventional Power Plants to TSO Frequency Containment Reserves - A Competitive Analysis for Virtual Power Plant's Role"
}
http://www.xavierdupre.fr/app/machinelearningext/helpsphinx/introduction.html
# Introduction
ML.net is a machine learning library implemented in C# by Microsoft. This project aims at showing how to extend it with custom transforms or learners. It implements standard abstractions in C# such as dataframes and pipelines following the scikit-learn API. ML.net implements two APIs. The first one, structured as a streaming API, merges every experiment into a single sequence of transforms and learners, possibly handling one out-of-memory dataset. The second API is built on top of the first one and proposes an easier way to build pipelines with multiple datasets. This second API is also used by wrappers to other languages such as NimbusML. Let’s see first how this library can be used without any addition.
## Command line
ML.net proposes a simple command-line language to define a machine learning pipeline. We use it on the Iris dataset to train a logistic regression.
Label Sepal_length Sepal_width Petal_length Petal_width
0 5.1 3.5 1.4 0.2
0 4.9 3.0 1.4 0.2
0 4.7 3.2 1.3 0.2
0 4.6 3.1 1.5 0.2
0 5.0 3.6 1.4 0.2
The pipeline is simply defined by a logistic regression named mlr (for MultiLogisticRegression). Options are defined inside {...}. The parameter data= specifies the data file; loader= specifies the format and the column names.
<<<
train
data = iris.txt
loader = text{col = Label: R4: 0 col = Features: R4: 1 - 4 header = +}
tr = mlr{maxiter = 5}
out = logistic_regression.zip
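The file iris.txt consumed by the command above can be produced with a short script. The sketch below is a hypothetical helper (not part of ML.net) and assumes the text loader accepts tab-separated columns with a header row (header = +), the label in column 0 and the four features in columns 1-4; only three toy rows are written:

```python
# Hypothetical helper (not part of ML.net) writing a toy iris.txt in the
# layout declared by the loader: a header row (header = +), the label in
# column 0 (col = Label : R4 : 0) and four features in columns 1-4.
header = ["Label", "Sepal_length", "Sepal_width", "Petal_length", "Petal_width"]
rows = [
    (0, 5.1, 3.5, 1.4, 0.2),
    (0, 4.9, 3.0, 1.4, 0.2),
    (1, 7.0, 3.2, 4.7, 1.4),
]
with open("iris.txt", "w") as f:
    f.write("\t".join(header) + "\n")
    for row in rows:
        f.write("\t".join(str(v) for v in row) + "\n")
```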
The documentation of every component is available through the command line. An example for Multi-class Logistic Regression:
<<<
? mlr
>>>
Help for MultiClassClassifierTrainer, Trainer: 'MultiClassLogisticRegression'
Aliases: MulticlassLogisticRegressionPredictorNew, mlr, multilr
showTrainingStats=[+|-] Show statistics of training examples.
Default value:'-' (short form stat)
l2Weight=<float> L2 regularization weight Default value:'1'
(short form l2)
l1Weight=<float> L1 regularization weight Default value:'1'
(short form l1)
optTol=<float> Tolerance parameter for optimization
convergence. Lower = slower, more accurate
Default value:'1E-07' (short form ot)
memorySize=<int> Memory size for L-BFGS. Lower=faster, less
accurate Default value:'20' (short form m)
maxIterations=<int> Maximum iterations. Default
value:'2147483647' (short form maxiter)
sgdInitializationTolerance=<float> Run SGD to initialize LR weights,
converging to this tolerance Default
value:'0' (short form sgd)
quiet=[+|-] If set to true, produce no output during
training. Default value:'-' (short form q)
initWtsDiameter=<float> Init weights diameter Default value:'0'
(short form initwts)
https://gitter.im/gitlab/gitlab?at=5f6ae85e0b5f3873c9e8de57
ronalddavid
@ronalddavid
Hi Everyone.. I am using GitLab CE. While I am trying to push my source code which consists of French characters, GitLab doesn't recognise them as french characters and uses its own encoding which in turn makes my build fail in the Jenkins server. Any idea for work around or how to solve this ?
BANO.notIT
@BANOnotIT
Can somebody give an example of helm deployment job? I've tried to use alpine/helm image, but it isn't usable because of entrypoint and setting "entrypoint" to /bin/sh doesn't help
Sascha Wiegandt
@TheSasch
Hello, does someone have a hint for me where i could configure the automatic bridge between gitlab ci and sonarqube? I want automatically get a visible Sonar scan in each Pull-Request a developer sets up. I know the manual way from .gitlab-ci.yml where i could push as step my results to sonar. but i read something that it would be possible to integrate both systems where i don't need to do this within each pipeline.
David Broin
@davidbro-in_gitlab
Hi @TheSasch you always need some things in .gitlab-ci.yml sonarcloud integration helps you a lot, but they actually give a code to copy/paste in your CI config
6 replies
CodingPenguin
@TheCodingPenguin
Hello, a quite basic question here for anyone using Gitlab CI with sbt/scala. I have my test stage followed by rpm:publish stage and I would like to skip compiling on the second stage and get the compiled files from the first stage. What is the ideal way of doing this ?
David Broin
@davidbro-in_gitlab
You should use cache
CodingPenguin
@TheCodingPenguin
@davidbro-in_gitlab thank you for the response and the reference but the docs say: While the cache could be configured to pass intermediate build results between stages, this should be done with artifacts instead. However, this doesn't need to be an artifact that's saved and stored in some place because it is only needed for the consecutive stage. Hence I am confused a little
David Broin
@davidbro-in_gitlab
Yes, if there's nothing to share between stages you could use artifacts. I actually prefer that way because it's simpler to follow the pipeline artifacts, but cache speeds up compilation using previous stages cache.
Joe Phillips
@phillijw
can anyone clarify how I can incorporate allow_failure: true into this manual step?
rules:
- if: $CI_COMMIT_BRANCH =~ /^release\/v\d+.\d+.\d+$/
when: manual
- when: never
do I need allow_failure: true on the previous step instead?
David Broin
@davidbro-in_gitlab
Hi @phillijw could you share more lines? The last line is wrong: you don't have an if, changes, or exists clause before when: never
This message was deleted
Have you try this?
rules:
- if: $CI_COMMIT_BRANCH =~ /^release\/v\d+.\d+.\d+$/
when: manual
allow_failure: true
Jon Ward
@jghward
Hi, wondering if anybody could help me with an issue with using trigger / bridge jobs.
I have Project A which contains global variables in the CI YAML. Project B downstream also contains global variables which I do NOT want to override. I understand upstream variables have precedence over downstream vars, so I want to avoid passing Project A's global vars to Project B.
My .gitlab-ci.yml for Project A looks like:
variables:
MY_VAR: "set in the upstream job"
downstream_project_b:
stage: trigger_downstream
variables: {}
trigger: myprojects/project_b
In Project B I have:
variables:
MY_VAR: "set in the downstream job"
test:
stage: test
script:
- echo "$MY_VAR" As you can see I have attempted to unset the global vars in Project A by using variables: {} in the job, however$MY_VAR is still being passed to Project B and overriding $MY_VAR there. Is there any way I can unset it? Thanks in advance for any tips! 1 reply Brian Pham @brianpham Hi, is anyone here using Gitlab managed terraform state? I am trying to get a sense of how people are using it and if they like it over the other backend methods. Dominic Watson @intellix I'm trying to use AutoDevops inside a single repository hosting an nrwl/nx repository but having issues with only specific environment names getting various magical ENV vars: KUBE_* and CI_ENVIRONMENT_*. Anyone managed to do it? There's multiple apps/endpoints with different K8S Replicaset requirements. I'm deploying a review app for each of them fine, because I can do something like: environment: name: review/$PROJECT_NAME-$CI_COMMIT_REF_NAME url: http://$CI_PROJECT_ID-$CI_ENVIRONMENT_SLUG.$KUBE_INGRESS_BASE_DOMAIN
on_stop: reviewApiStop
that works because Gitlab respects review/* as part of AutoDevops and provides the magic sauce to let it work, but if I try the same with staging like so:
environment:
  name: staging/$PROJECT_NAME
  url: http://$CI_PROJECT_PATH_SLUG-staging-api.$KUBE_INGRESS_BASE_DOMAIN
It doesn't work cause it only works for staging and not staging/* so I thought ok no big deal.. I know my cluster details, I'll provide the KUBE_ vars myself..... and I don't even get the CI_ENVIRONMENT_* vars -.-
Dominic Watson
@intellix
figured it out.... the staging/$PROJECT_NAME didn't work because PROJECT_NAME doesn't exist at the environment:name "step" and ending in a forward slash is invalid
ahsanmir
@ahsansmir
Hi - Having problems with accessing a repository using SSH
Anyone has any ideas
Judy Lipinski
@JudyLipinski_gitlab
A developer suddenly lost the ability to push or pull code. similar to this link, but it has lasted for days with no remedy. gitlab-org/gitlab-foss#1398
Alec Koumjian
@akoumjian_gitlab
I am trying to pass CI_ENVIRONMENT_NAME to webpack during the auto devops build step. I'm using the automatically detected buildpack and I've set AUTO_DEVOPS_BUILD_IMAGE_FORWARDED_CI_VARIABLES to CI_ENVIRONMENT_NAME,CI_COMMIT_SHA at the organization level. However, when I log out process.env, I can only see CI_COMMIT_SHA present.
Alec Koumjian
@akoumjian_gitlab
It looks like the environment name is set in the review step itself. What would folks recommend as the conventional way to use an environment config to point to other services from a static JS project? A dynamic environment such as an API can pull in environment variables or config files at run time, but I would think the static file project would need those configs built in, at build time.
I'm not afraid to write my own ci/cd scripts, but I'm curious if there is an auto devops convention here.
Dale Magee
@AntiSol
Hi there. Does anybody have any theories as to why I'm seeing no content in the main area on most/all pages on gitlab.com? I see the menus but that's all. I've tried: turning off ad blocker, restarting browser, etc etc. This is only affecting firefox as far as I can tell, doesn't happen in chrome. I'm even seeing it on the github help page.
Dale Magee
@AntiSol
if I had to guess I'd say a stylesheet isn't loading. div#content-body is set to display:none. Setting it to visible gives me content, but it's all messed up and not styled properly
Brian Pham
@brianpham
Does anyone know if gitlab managed terraform state remote store is supported for gitlab.com? I am looking at this doc, https://docs.gitlab.com/ee/administration/terraform_state.html and it only seems to show instructions for omnibus and from source.
Dale Magee
@AntiSol
(I found the answer to my question, if anybody else is having the same issue: gitlab-org/gitlab#239357)
fafifox
@fafifox
Hello,
I'm having trouble while trying to sign-in with my github account.
It says I need to confirm my email, so i click "resend confirmation email" but I got the following error:
Could not authenticate you from GitHub because "Csrf detected".
I've tried on firefox and chrome (with and without incognito mode) same things happens.
Any idea on how I can login ?
3 replies
Alec Koumjian
@akoumjian_gitlab
I am suddenly having a problem with review apps. Whenever my project gets to the review stage I get "This job failed because the necessary resources were not successfully created. More information "
Is it possible there are pods / deployments running in my managed k8s setup that I don't have easy access to? The cluster appears to be nowhere near capacity.
Surprising that the runner doesn't even start the job, there are no logs.
I tried disabling shared runners and only running on the ones in my cluster, but doesn't seem to make a difference.
br3nt
@br3nt
Hi everyone, Is there a setting somewhere I can set so that the git commit message body gets word-wrapped when expanding the body of a commit message?
Alec Koumjian
@akoumjian_gitlab
Hm, a bit frustrating here. The AutoDevops just suddenly stops working on me. I can't get any logs, the review stage just seems to not start. Anyone have recommendations of where to look first?
Alec Koumjian
@akoumjian_gitlab
Okay, so after battling with this for a day it seems that clearing the cluster cache in the advanced settings got it working again. How did I end up in this state and how do I avoid it in the future?
Ronald Roe
@ronaldroe
Hello, I'm trying to upload an image to a project - not as part of the repo, but so I can use it in a comment - is it possible to A) do that from a datauri, and B) use fetch or XHR to send it?
I've been beating my head against this for a few hours now. Every example I ever see is with cURL.
I can send the file, but I either get file is missing or file is invalid. I can successfully send via cURL. I've tried sending it with datauri URL escaped and also not. Am I missing something, or is this just not a possibility? I'd have to think there'd be a way, since the comment form turns pasted images into an uploaded file
Ateeb Ahmed
@ateebahmed_gitlab
Hello, I'm having this problem with SSH where my SSH connection hangs after establishing connection and sending the SSH version its using, I searched the problem and it seems Gitlab's SSH server is not responding to my connection. I can do same things with HTTPS fine but SSH hangs and after very long period timeout.
Ateeb Ahmed
@ateebahmed_gitlab
Output of ssh -Tvvv [email protected]
OpenSSH_8.3p1, OpenSSL 1.1.1g FIPS 21 Apr 2020
debug3: /etc/ssh/ssh_config line 54: Including file /etc/ssh/ssh_config.d/50-redhat.conf depth 0
debug2: checking match for 'final all' host gitlab.com originally gitlab.com
debug3: /etc/ssh/ssh_config.d/50-redhat.conf line 3: not matched 'final'
debug3: /etc/ssh/ssh_config.d/50-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1 (parse only)
debug3: gss kex names ok: [gss-curve25519-sha256-,gss-nistp256-sha256-,gss-group14-sha256-,gss-group16-sha512-,gss-gex-sha1-,gss-group14-sha1-,gss-group1-sha1-]
debug3: kex names ok: [curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1]
debug1: configuration requests final Match pass
debug1: re-parsing configuration
debug3: /etc/ssh/ssh_config line 54: Including file /etc/ssh/ssh_config.d/50-redhat.conf depth 0
debug2: checking match for 'final all' host gitlab.com originally gitlab.com
debug3: /etc/ssh/ssh_config.d/50-redhat.conf line 3: matched 'final'
debug2: match found
debug3: /etc/ssh/ssh_config.d/50-redhat.conf line 5: Including file /etc/crypto-policies/back-ends/openssh.config depth 1
debug3: gss kex names ok: [gss-curve25519-sha256-,gss-nistp256-sha256-,gss-group14-sha256-,gss-group16-sha512-,gss-gex-sha1-,gss-group14-sha1-,gss-group1-sha1-]
debug3: kex names ok: [curve25519-sha256,[email protected],ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha256,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1,diffie-hellman-group1-sha1]
debug2: resolving "gitlab.com" port 22
debug2: ssh_connect_direct
debug1: Connecting to gitlab.com [172.65.251.78] port 22.
debug1: Connection established.
debug1: identity file /home/user/.ssh/id_rsa type -1
debug1: identity file /home/user/.ssh/id_rsa-cert type -1
debug1: identity file /home/user/.ssh/id_dsa type -1
debug1: identity file /home/user/.ssh/id_dsa-cert type -1
debug1: identity file /home/user/.ssh/id_ecdsa type -1
debug1: identity file /home/user/.ssh/id_ecdsa-cert type -1
debug1: identity file /home/user/.ssh/id_ecdsa_sk type -1
debug1: identity file /home/user/.ssh/id_ecdsa_sk-cert type -1
debug1: identity file /home/user/.ssh/id_ed25519 type -1
debug1: identity file /home/user/.ssh/id_ed25519-cert type -1
debug1: identity file /home/user/.ssh/id_ed25519_sk type -1
debug1: identity file /home/user/.ssh/id_ed25519_sk-cert type -1
debug1: identity file /home/user/.ssh/id_xmss type -1
debug1: identity file /home/user/.ssh/id_xmss-cert type -1
debug1: Local version string SSH-2.0-OpenSSH_8.3
Henrik Christian Grove
@hcgrove_gitlab
@ateebahmed_gitlab : What did you expect to happen here? You've disabled tty allocation (with the -T option to ssh), so you won't get a prompt, but I see nothing suggesting that you don't get the connection you asked for
2 replies
Deshdeepak
@Deshdeepak1
how gitlab is different from github?
Ateeb Ahmed
@ateebahmed_gitlab
Seems to be working now
Hello, I'm having this problem with SSH where my SSH connection hangs after establishing connection and sending the SSH version its using, I searched the problem and it seems Gitlab's SSH server is not responding to my connection. I can do same things with HTTPS fine but SSH hangs and after very long period timeout.
Kalyan chakravarthy
hello folks,
how to use gitlab jobs to go and fetch secrets from vault?
the underlying instances are configured with vault aws auth method, when running vault login getting * failed to verify as a valid EC2 instance in region
Jannik-143
@Jannik-143
Hello, is it possible to get the git log --numstat information via gitlab api?
Lukas M
@lukasmrtvy
Guys? gitlab-org/gitlab-runner#27026 ( docker-machine switch idea )
Yogesh Bhondekar
@yogesh.bond_gitlab
Hello All, I need help. Today I see all files from my repository are missing even I can't see any history also - strange. In afternoon I did commit, that was successful and now when I go to branch and repo nothing is there.. hos is this possible - also no history ? Anyone had experienced this.. any help is much appreciated.
Leo Palmer Sunmo
@leosunmo
Hello!
Is this a typo or am I misunderstanding?
https://docs.gitlab.com/ee/api/projects.html#transfer-a-project-to-a-new-namespace
Is there supposed to be a namespace slug in that URL or am I supposed to send a JSON payload with this API request?
Leo Palmer Sunmo
@leosunmo
ah they're url params, I get it. I must have missed the docs on that, found examples online.
http://math.stackexchange.com/questions/154453/how-to-find-the-number-of-vertices-and-edges-in-these-graphs
How to find the number of vertices and edges in these graphs?
How many vertices and edges are there in $K_{i,j}$ and $K_{i,j,k}$?
-
Welcome to math.SE, mehtap karabacak: since you are a quite new user, I wanted to let you know a few things about the site. In order to get the best possible answers, it is helpful if you say in what context you encountered the problem, and what your thoughts on it are so far; this will prevent people from telling you things you already know, and help them write their answers at an appropriate level. – draks ... Jun 5 '12 at 22:39
Wikipedia is a useful resource – TMM Jun 5 '12 at 22:40
$K_{i,j}$ is defined as having a group of $i$ vertices and a group of $j$ vertices, with each vertex of the first group connected to each vertex of the second group, and no edges within a group. Is that your definition? If so, there are $i+j$ vertices (just count the two groups) and $ij$ edges as you pick one of the $i$ and one of the $j$ for each edge. How do you define Kijk (is it $K_{i,j,k}$)? If it is similar but with three groups of vertices, the same counting technique will work.
@mehtapkarabacak: if you don't know the definition, there is no way to answer the question. Do you understand the answer for $K_{i,j}$? The Wikipedia page that TMM cites gives a couple pictures that should help. – Ross Millikan Jun 5 '12 at 23:11
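The counting argument can also be checked mechanically: build the complete multipartite graph by joining every pair of vertices lying in distinct groups, then count. A quick brute-force sketch:

```python
# Construct K_{i,j} / K_{i,j,k} explicitly: one vertex group per part,
# one edge per pair of vertices lying in different groups.
from itertools import combinations

def complete_multipartite(*sizes):
    """Return (number of vertices, number of edges) of the complete
    multipartite graph with the given group sizes."""
    vertices = [(g, v) for g, s in enumerate(sizes) for v in range(s)]
    edges = [(a, b) for a, b in combinations(vertices, 2) if a[0] != b[0]]
    return len(vertices), len(edges)

print(complete_multipartite(3, 4))     # K_{3,4}: (3+4, 3*4) = (7, 12)
print(complete_multipartite(2, 3, 4))  # K_{2,3,4}: (9, 2*3 + 2*4 + 3*4) = (9, 26)
```

This matches the formulas $i+j$ vertices and $ij$ edges for $K_{i,j}$, and $i+j+k$ vertices and $ij+ik+jk$ edges for $K_{i,j,k}$.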
http://mathhelpforum.com/statistics/105303-probability-help-please.html
Q: A study of the residents of a region showed that 20% were smokers. The probability of death due to lung cancer, given that a person smoked, was ten times the probability of death due to lung cancer, given that the person did not smoke. If the probability of death due to lung cancer in the region is .006, what is the probability of death due to lung cancer given that the person is a smoker?
I tried to use Bayes' rule, but had no luck. How do I make use of the fact that P(D|S)=10P(D|S'), where D represents deaths due to lung cancer and S is for smokers.
Thanks
2. Originally Posted by Danneedshelp
Q: A study of the residents of a region showed that 20% were smokers. The probability of death due to lung cancer, given that a person smoked, was ten times the probability of death due to lung cancer, given that the person did not smoke. If the probability of death due to lung cancer in the region is .006, what is the probability of death due to lung cancer given that the person is a smoker?
I tried to use Bayes' rule, but had no luck. How do I make use of the fact that P(D|S)=10P(D|S'), where D represents deaths due to lung cancer and S is for smokers.
Thanks
Hint: P(D) = P(D|S) P(S) + P(D|S') P(S')
3. Originally Posted by awkward
Hint: P(D) = P(D|S) P(S) + P(D|S') P(S')
How do I use that for this problem? I set up a box with all the information, but i am still not getting the desired 0.21 as my answer.
4. Originally Posted by Danneedshelp
Q: A study of the residents of a region showed that 20% were smokers. The probability of death due to lung cancer, given that a person smoked, was ten times the probability of death due to lung cancer, given that the person did not smoke. If the probability of death due to lung cancer in the region is .006, what is the probability of death due to lung cancer given that the person is a smoker?
I tried to use Bayes' rule, but had no luck. How do I make use of the fact that P(D|S) = 10P(D|S'), where D represents death due to lung cancer and S is for smokers.
Thanks
Draw a tree diagram. The first two branches are S (smokes) and S' (does not smoke). From each of those two branches there are two more branches. The first branch is D (death) and the second branch is D' (no death).
Let Pr(D | S') = x. Then Pr(D | S) = 10x.
From the tree diagram: $0.006 = (0.2)(10x) + (0.8)(x)$. Solve for $x$ and substitute into Pr(D | S) = 10x.
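For anyone following along, the arithmetic from the tree diagram can be checked with a few lines of plain Python (nothing here beyond the numbers already in the thread):

```python
# P(D) = P(D|S) P(S) + P(D|S') P(S'), with P(D|S) = 10x and P(D|S') = x
p_s = 0.2          # P(S), fraction of smokers
p_d = 0.006        # P(D), overall lung-cancer death rate

# 0.006 = (0.2)(10x) + (0.8)(x) = 2.8x
x = p_d / (10 * p_s + (1 - p_s))   # P(D | S')
p_d_given_s = 10 * x               # P(D | S)

print(round(p_d_given_s, 4))       # 0.0214
```

Note the result is about 0.021, not 0.21, so the "desired 0.21" mentioned earlier may be a decimal-place slip in the book.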
https://www.physicsforums.com/threads/help-in-checking-the-solution-of-this-separable-equation.822396/
# Help in checking the solution of this separable equation
• Thread starter enggM
• #1
## Homework Statement
It is just an evaluation problem which looks like this dx/dy = x^2 y^2 / 1+x
## Homework Equations
dx/dy = x^2 y^2 / 1 + x
## The Attempt at a Solution
What i did is cross multiply to get this equation y^2 dy = x^2 / 1+x dx then next line ∫y^2 dy = ∫x^2/1+x dx
y^3/3 = ∫dx + ∫1/x dx after simplifying i get y^3=3x + 3 ln x + C but im not sure if this is the right answer.
## Answers and Replies
• #2
HallsofIvy
Homework Helper
First, I presume that you really mean dx/dy= x^2y^2/(1+ x). What you wrote, dx/dy= x^2y^2/1+ x, is the same as dx/dy = x^2y^2+ x.
More importantly you have separated the x and y incorrectly. The left side is y^2 dy but the right sides should be [(1+ x)/x]dx. You have the fraction inverted.
• #3
oh, so when i integrate [1+x / x] dx would it look like ∫1/x + ∫x dx? or something else?
• #4
LCKurtz
Homework Helper
Gold Member
oh, so when i integrate [1+x / x] dx would it look like ∫1/x + ∫x dx? or something else?
1 + x/x = 1+1 =2.
• #5
Mark44
Mentor
oh, so when i integrate [1+x / x] dx would it look like ∫1/x + ∫x dx? or something else?
Or did you intend the above to mean (1 + x)/x? If that's what you intended, it does absolutely no good to put brackets around the entire expression.
In post #1 you wrote what I've quoted below. For that, you need parentheses around the denominator, as x2y2/(1 + x).
It is just an evaluation problem which looks like this dx/dy = x^2 y^2 / 1+x
.
• #6
Oh sorry about that, what i intend to do is to integrate the entire expression. As it is the right hand side of the equation, no problem with y^2 dy but the right side looks a bit confusing. In the expression (1 + x) / x dx as was suggested is what i intend to integrate, so is this the right integration expression, ∫1 / x dx + ∫ dx? if so then ln x + x + C should be sufficient for the RHS, is it not?
• #7
Mark44
Mentor
Oh sorry about that, what i intend to do is to integrate the entire expression. As it is the right hand side of the equation, no problem with y^2 dy but the right side looks a bit confusing. In the expression (1 + x) / x dx as was suggested is what i intend to integrate, so is this the right integration expression, ∫1 / x dx + ∫ dx? if so then ln x + x + C should be sufficient for the RHS, is it not?
Yes, you can split ##\int \frac{1 + x}{x} \ dx## into ##\int \frac {dx} x + \int 1 \ dx##.
• #8
so the answer y^3 = 3x + 3 ln x + C should be correct? ok i get it now thanks for the time.
• #9
Mark44
Mentor
so the answer y^3 = 3x + 3 ln x + C should be correct? ok i get it now thanks for the time.
No @enggM, this is not correct. The work you did before was incorrect, and my earlier response was based on that work. I think you need to start from the beginning.
$$\frac{dx}{dy} = \frac{x^2y^2}{1 + x}$$
If you multiply both sides by 1 + x, then divide both sides by ##x^2##, and finally, multiply both sides by dy, the equation will be separated. What do you get when you do this?
• #10
@Mark44 i would get (1+x / x^2 ) dx = y^2 dy so integrating the both sides y^3 / 3 = 1 / x + ln x so the final form would then be y^3 = 3/x + 3 ln x + C? no?
• #11
Ray Vickson
Homework Helper
Dearly Missed
@Mark44 i would get (1+x / x^2 ) dx = y^2 dy so integrating the both sides y^3 / 3 = 1 / x + ln x so the final form would then be y^3 = 3/x + 3 ln x + C? no?
From
$$y^2 dy = \left( 1 + \frac{x}{x^2} \right) dx$$
you will get
$$\frac{1}{3} y^3 = x + \ln (x) + C$$
From
$$y^2 dy = \frac{1 + x}{x^2} dx$$
you will get
$$\frac{1}{3} y^3 = -\frac{1}{x} + \ln (x) + C$$
Which do you mean? Why are you still refusing to use parentheses? Do you not see their importance?
• #12
Mark44
Mentor
@Mark44 i would get (1+x / x^2 ) dx = y^2 dy so integrating the both sides y^3 / 3 = 1 / x + ln x so the final form would then be y^3 = 3/x + 3 ln x + C? no?
In writing (1 + x/x2) above, you are using parentheses, but aren't using them correctly (as Ray also points out). If you have a fraction where either the numerator or denominator (or both) has multiple terms, you need parentheses around the entire numerator or denominator, not around the overall fraction.
If you mean ##\frac{1 + x}{x^2}##, write it in text as (1 + x)/x^2, NOT as (1+x / x^2 ). What you wrote means ##1 + \frac x {x^2} = 1 + \frac 1 x##.
• #13
sorry about the confusion because sometimes when i solve something like this in paper i sometimes leave out the parenthesis. so the final form would be y^3 = -3/x + 3 ln x + C? by the way how did it become a negative? just an additional question.
• #14
SammyS
Staff Emeritus
Homework Helper
Gold Member
sorry about the confusion because sometimes when i solve something like this in paper i sometimes leave out the parenthesis. so the final form would be y^3 = -3/x + 3 ln x + C? by the way how did it become a negative? just an additional question.
What is ##\displaystyle \int\frac{1}{x^2}dx\ ?##
• #15
Mentallic
Homework Helper
sorry about the confusion because sometimes when i solve something like this in paper i sometimes leave out the parenthesis
On paper you can freely write the fraction as it's normally portrayed, however, if you mean that you do sometimes write everything on a single line and still don't use parentheses, then you need to break out of that habit. Parentheses are crucial.
• #16
@SammyS oh i see i remember now it should be u^-2+1 / -2 +1.
@Mentallic ok thanks...
thanks for all of your help in checking for the solution and answer to this problem helped me understand it much better.
• #17
LCKurtz
Homework Helper
Gold Member
@SammyS oh i see i remember now it should be u^-2+1 / -2 +1.
.
Apparently you haven't been paying attention to anything people in this thread have told you about the importance of using parentheses.
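For the record, the answer the thread converged on can be verified symbolically. A quick check with sympy, assuming the intended ODE is dx/dy = x^2 y^2 / (1 + x):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)

# Separating dx/dy = x**2 * y**2 / (1 + x) gives
#   y**2 dy = (1 + x)/x**2 dx
lhs = sp.integrate(y**2, y)             # y**3/3
rhs = sp.integrate((1 + x)/x**2, x)     # log(x) - 1/x

# Sanity check: differentiating the right side recovers the integrand
assert sp.simplify(sp.diff(rhs, x) - (1 + x)/x**2) == 0

# So y**3/3 = ln(x) - 1/x + C, i.e. y**3 = 3*ln(x) - 3/x + C,
# matching the thread's final answer y^3 = -3/x + 3 ln x + C.
print(sp.Eq(y**3, 3 * rhs))
```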
http://postthreads.org/support/3958906/DVD-Drive.html
Computer Support Forum
# DVD Drive
Question: DVD Drive
I attempted to install some software using a CD in my DVD Drive. The drive failed to read the CD. When the drive ejected, the tray was empty, and the CD was gone. Q1: How do I get the CD back? How do I get the drive to work again?
When I close the tray and click on the D Drive, the heading marked "This PC" starts turning green, and the green continues to spread, as if the drive is reading something. But the heading never completely finishes, even after an extended
period of time. This seems to be part of the problem.
More replies
two weeks ago i was bought a Fujitsu B series lifebook computer.About 5 days ago I had got that virus stating I had a "processor running at 83 degrees", "hard drive clusters are partly damaged", "must scan to fix this issue", yatta yatta yatta and some other classic virus jive talk i cant remember at this moment. At first it kept bothering me about the scan, then it started popping up 57 windows in short succesion, then i couldnt access my other files, then my desk top went black and was reduced to 4 icons, start menu was obliterated.try combo fix, trend micro, some other reliable programs but i would either down load them and they'd disappear or in the case of combofix it would load then exit out quickly. It reached a point were i just couldn't boot my computer up anymore.NTLDR was missing it said. I am not sure if that virus deleted it or if the computer and its hd were just old and wore out. It was from the flea market.. I cant get past that error msg, there is no flash drive boot up option on this old thing, i have no floppy drive with it, it has no cd rom drive, ive heard the external usb cd rom drive cant be used to boot it up but im not 100 percent sure.Someone else with the same notebook reported that it would not allow usb cd rom drive boot up. I have been told network boot is an option but i was told thats just to access it after its turned on and its not what i think its for. I was told an hd switch was a good option but i dont... Read more
Answer:"NTLDR is missing" issue with computer featuring no cd rom drive, floppy drive, or flash drive support..
i think im past infected moderator. saying im still in the infection stage is like telling someone with cervical cancer to take the HPV vaccine..
1 more replies
My primary HDD was failing so I backed it up and replaced it with a larger 1TB drive. After reinstalling everything the new drive is displaying the same size as my old drive and the extra space available is unallocated. I can allocate it as
a separate drive but I find that annoying and would prefer to just have my primary C: drive show up with all of the available space. I formatted the extra space and made it available separately as drive G hoping that I could EXPAND drive C but that option
is not highlighted as selectable on drive C for some reason. Should I just live with this extra space or is there a way to easily combine C and the new G space?
More replies
Relevance 25.01%
I'm working on my sister's computer and put in a new hard drive. I imaged the drive in Windows (using XP) and all seemed to go OK. I took the original out and now Windows will boot OK, but will hang on a screen that resembles the Windows login screen: blue background with a Windows XP logo on the right side. The mouse still works, as do the numlock lights, but it will go no further. I've tried changing drive letters with the old drive installed, but somehow I've fubar'd both, and now both drives hang at this spot regardless of whether or not the other is plugged in. I have a feeling, from research I've done on the web, that Windows is looking for system files on a drive letter that doesn't exist. The problem now is that I cannot boot into either install of Windows, nor their respective safe mode options. In safe mode, I get the same screen as I do when booting into normal mode. How do I get either one of these drives to boot? I know the data is still there and the data is fine, but I need to make Windows boot. I've also tried using the recovery console to do a fixmbr and a fixboot, but neither helped (I didn't figure they would, as they address different aspects of the boot than what I suspect to be the problem). Any ideas?
Answer:Imaged drive in windows, took original drive out, now new drive hangs on login screen
have you tried booting from the cd and running the repair option? --um--nevermind
18 more replies
Relevance 25.01%
I have an Edge 14. Intel i3 M370 model. I ordered a OBHC and installed a 160 Gb SATA drive, loaded with WIN 7. In the main drive bay I have an Ubuntu OS setup.Trying to set up a dual boot configuration. But, when I boot the drive and enter the BIOS set up, the drive in the caddy doesn't show. If I let it boot from the main drive, Ubuntu starts up. If I put the Ubuntu drive in the caddy and nothing in the main bay, "NO OPERATING SYSTEM." When I look at the computer in Ubuntu, the OS7 drive is there. All three partitions (Lenovo, OS and startup). I can mount them, etc. When I use Disk Utility in Ubuntu, the drive shows up listed under the SATA controller and is located at position 2 of the SATA bus. I updated the BIOS and it is now 1.26. Still no sign of the HDD in the caddy. Very odd. When I power up the computer, the LED in the caddy does light up. When I run sudo update-grub in Ubuntu, the WIN 7 loader shows up in the list. So a bootable OS is seen. But if I select WIN 7 loader in the GRUB list at boot, I get a "no such drive" message, and am lead back the the GRUB list. In the BIOS, the config menu, the startup list only includes HDD0. Is there some secret BIOS trick I need to play? Correct me if I'm wrong, but there aren't any jumper settings, etc. on a SATA drive, right? Nothing there to change. Any suggestions would be very much appreci... Read more
More replies
Hi guys, i wonder if you can offer any ideas here. I have a brand new T420s which came with an optical slot place holder piece of plastic. I removed this and popped in a 'Lenovo DVD Multi IV DVD Multi Recorder CD-RW' P/N 45N7453. It clicked into place fine, but the power light on it does not come on and the eject button does not work.This same drive works fine in 2 other T420s laptops my colleagues have, so the drive is ok. I tried disabling the optical drive in the bios, rebooting, re-enabling, rebooting but nothing. Is there anything else i can try or would you say its a board issue and i'm stuck with it ?
More replies
I have a Dell Dimension 4700 running XP Service Pack 3. My original Western Digital Drive was dying. This drive was the boot drive and listed first in "setup" (after hitting F2) There was also a 300GB Maxtor SATA drive already installed which I used for data storage -- not bootable. It was listed 2nd in "setup".
Ultimately the WD Drive wouldn't boot. I cleared the 300GB Maxtor and did a clean install Win XP from manuf. disks WITHOUT removing the WD - Drive 1 first. (I had hoped to remove some data I hadn't yet back up
I tried rebooting BUT if both disks were set to "On" in "setup - F2" the system would NOT boot. If Disk 1 (WD Drive) was turned off the system would boot fine. I removed the "boot partition" on Disk 1 but if both drives were turn on - no go.
I have now replaced the defective drive (WD) with a Maxtor SATA II/300 (1 TB).
The boot drive (300GB Maxtor) is the "last" drive on the SATA interface cable and the new drive (1 TB) is on the second to last connector.
Still, if the blank, new drive is turned on in "setup - F2" the system won't boot. If off, system boots. (I want to use the 1 TB drive solely for data storage).
It seems the SATA controller refuses to recognize the bootable drive if both drives are active. All I can think of is this is due to the 300GB being "Drive 2" in my original system and still listed that way in "setup". There is NO ability to c... Read more
Answer:Solved: Drive Crashed - XP installed on 2nd Drive - Replaced Drive Won't Work
16 more replies
I want to prepare a factory fresh drive so that I can use it to save backup images. Soon I will have a second drive that will only be used to store data files. Neither drive will ever be used for the installation of an operating system. From what I've read, I see no advantage to having more than a single partition on either drive. I know I want to use NTFS.
This should be straightforward. I'm finding it is not. I cannot formulate a search that points me to the right tutorial or thread. Either I get hundreds of options to consider or, if I try to narrow the options, no answer is found.
Please point me to the right discussion. Perhaps the answer will help others who come after me, too.
thanks
baumgrenze
Answer:Format & Partition New Drive to Use as a Backup Drive or Storage Drive
Do you know how to make a partition and format a drive?
You can do it all in Windows Disk Management.
You don't need to do anything special to the drive just because it is going to store images.
Treat it like any other drive.
Format it with a single partition.
Store your image files on it. Use a folder structure if you want to---perhaps naming a folder something like "Macrium image of C partition, 060714" so you know the date of the image.
Nor do you need to use a separate drive just for image storage.
Make your images and store them like ordinary data. Back up the images just like you'd back up any other important data file. That's what they are: important data.
9 more replies
I recently got a new hard drive. The first thing I did was create a new partition. After that, I copied the contents of my C drive to the new partition using HDCopy. Next, I changed the letter of my C drive(changed it to Z) and changed the letter of my new partition to C. Now it looks like everything works, but I do have one question. When I go to windows computer management, it still says my drive that used to be C is my system drive and my NEW C drive is Page File. Here's a screen shot to help:http://i42.tinypic.com/1zxbg9e.jpgI'm wondering if this is problematic and if there's any way to make my C drive listed as the system driveThanks,Austin
Answer:C drive not listed as system drive in windows drive manager
Your pc is still booting to the Z drive it seems. You might want to try just backing up your stuff and doing a format so that the new drive does not contain your system root.
3 more replies
I have just finished a pc build with a SSD to be used as the boot/OS drive and a secondary 2 HDD setup in raid 0 for storage.
First I started with the SSD only and installed win7, applied all the updates etc. At this point the system has rebooted at least half a dozen times with no issue.
Next I set up the 2 HDD in raid 0, initialized it with disk manager and everything is looking good. But as soon as I reboot the system, upon startup the system can't find a os. Using the windows install disc, I go into the recovery console and discover that the boot drive which was C:\ is now E:\ and the storage drive which was E:\ is now C:\.
The os files on the boot drive seem to be all there. It's just in the wrong drive path. Using the recovery console I switched back the drives to their proper places, but then on startup I got the error 'bootmgr is missing' at this point I gave up, nuked the drives and started from scratch only to get thing again once I initialized the HDD and rebooted.
I am sure I have overlooked some stupid detail somewhere. Can anyone help me figure out where I went wrong?
Answer:Windows 7 has swapped my drive paths with my boot drive and sec. drive
My first thought with a new build is settings in the BIOS, many times something has to be changed.
9 more replies
Need to replace my failed Yoga 2 11 Hard Drive wd5000mpck 5MM($206.00),, will a wd5000lpvx 7MM($44.00) drive fit..Big price difference..
Answer:Yoga 2 11 Hard Drive Failed will a 7MM drive fit (calls for a 5MM drive)
Hi Kendoe, of course there are big price differences. The technical differences are: cache (MB); peak current (A, 5V); power (W, read/write); standby (W); acoustics (dBA); height (mm); weight.
wd5000mpck: 16; 0.9; 1.5; 0.15; 15-17; 5; 0.07 kg
wd5000lpvx: 8; 1; 1.4; 0.13... Read more
3 more replies
[font=Verdana, Arial]I am building a 3GHz system with the Intel D865PERL, and this is the first time that I have used SATA drives. Here is how it is setup:
Primary IDE Master: Plextor 8X DVD+/-RW
Secondary IDE Master: Iomega Zip 250
SATA Port 0: Maxtor 7200RPM 8MB SATA Drive
SATA Port 1: Maxtor 7200RPM 8MB SATA Drive
When I was trying to install XP, when I created a partition on the Maxtor on SATA 0, it automatically named it Drive E. It is because of the two drives that I have on the Primary and Secondary IDE channels. So, how do I get it to name the drive C:? I checked the BIOS, but did not see a setting that would fix this issue.
UPDATE:
[/font][font=Verdana, Arial]I just tried to update the BIOS to see if that would help. I used the floppy disk version. When it was in the process of doing it, it said that it was copying a program, sw.exe, from the a: drive to the c: drive, but the c: drive is the DVD burner!
It did its thing, and then asked to have me remove the floppy disk from the drive and hit enter, and said NOT TO DO A THING until it had finished. Well, it has been about 10 minutes, and it is still just displaying a blinking cursor (blinking underscore) on a black screen. I hope this didn't hurt the mobo. I'm guessing that it did not complete the process because there was not a c: drive to write to.
I did look at Intel's site for some answers, and it said by default, booting from SATA drives is disabled in the BIOS. You have to ch... Read more
Answer:Need help with new Intel Mobo ASAP - SATA drive = E drive; need it to be C drive
SATA controllers are still fairly new, and a bit of an afterthought as far as Windows is concerned. Windows will always try to call the master drive on IDE 1 your "C" drive.
Easiest way to do it YOUR way is to simply disconnect any standard IDE drives before you install Windows. Windows will then accept your SATA as C, and keep the C designation even after you re-connect your IDE drives.
1 more replies
Relevance 24.19%
Windows XP Pro displays the primary hard Drive set as the master on IDE 1 as drive F: instead of drive C:. I can not find the cause.....any ideas? Any help would be much appreciated. Thanks
Answer:Windows XP displays primary hard drive as drive F: instead of Drive C:
7 more replies
Sorry I am a computer novice; but, I am running out of hard drive C space and I thought maybe I could put Recovery Drive D on a flash drive in order to free up drive C space.
If this is possible, can you refer me to some instructions on how to do this?
Answer:Can you put Recovery Drive D on a flash drive to free up hard drive
To be honest a recovery drive won't give you much space (10 GB or so?). You are better off with a USB external HDD and move your less accessed data to the external.
I wonder what others will say.
6 more replies
Hello gurus, I replaced the HD with Samsung 500GB SSD, and have been loving it. However, ever since I did that, I do not see the 2nd pre-installed 15GB SANDISK SSD Drive anymore in the explorer. Anyone know what I can do to make the second drive be recognizable by the computer? This is potentially a related problem but I also have trouble installing Windows 10, because it says it can't find a partition. Can I use the second SSD drive as the partition for the install?
Answer:T450S: After upgrading main drive to SSD drive, do not see the second pre-installed SSD drive
This smaller SSD is designed to use software and be a cache drive. It wouldn't show in File Explorer. You can check Device Manager.
1 more replies
I've partitioned my hard drive due to not being able to load Windows 10 over Windows Vista, so C: has Vista and D: has Windows 10. It won't let me merge the two together or delete C:, saying the computer will be unbootable. Is there a way of making D: the main boot drive and deleting/merging C:?
More replies
Re: cloned a 80G HDD to a 128G USB Flash stick
Nov 29, 2014
http://www.techspot.com/community/t...e-old-usb-storage-drivers.145884/#post-875300
Andy
----- Original Message -----
From: Andy
Sent: Friday, November 28, 2014 6:16 PM
Subject: cloned a 80G HDD to a 128G USB Flash stick
hi all,
happy holidays, etc.
this is an old issue, yet i am still working on it
in July 2013; I cloned my 80G HDD onto a 128G USB Flash stick
in Sept. 2013; I had a data loss...
so, I thought to recover my data using this 128G clone...
I put this 128G Flash stick into my computer, and it showed up as a Local drive, yet was inaccessible and with its LED flashing incessantly...
{this was after, I treated this 128G USB flash stick as if it were a whole computer, and I then attempted to use/misused Win Vista's EasyTransfer program in a failed attempt to restore my lost data }
I later ran CHKDSK /f /r on it, and Chkdsk repaired and/or deleted about 7 "attributes" files
after which, this 128G Flash, only shows up as a Removable Storage(not Local) drive.. and remains locked/data inaccessible yet without the flashing LED
the USB drive, when I clicked on it in My Computer, would give the/an error message "Insert drive into H: drive"... Insert? a drive? into my tiny USB drive? which isn't even a CD or DVD drive? ... I tried all the fre... Read more
More replies
Okay, here is a pic of my hard disks.
My backups go into my Recovery Drive, and it is nearly full. I repeatedly uninstall prgrams that I don't need, and delete old backups. Why is it still full? How do I make more space? My OS drive is so BIG, but I have next to nothing in it! Are backups supposed to be going into C drive?
Answer:Recovery Drive almost full: Should backups be in D drive or in the C drive?
Not to worry. The recovery is probalby just restore points. As you need more space the oldest point is deleted and a new one made. If you want to make space. Turn off the System Restore, deleting all points. Then make a new point, you will need it in case of problems.
10 more replies
This is a new built computer, I have installed win7 64bit on a 500 gig hard drive (tosheba), this is drive C. Everything works fine!
I was wanting to use Windows 7 Backup, so I went and got another 500 GB hard drive (WD). I connected the HD and powered up the computer, and that's all I did! Device Manager showed the new drive and the primary drive; however, the computer did not show the new WD drive. The new drive was not formatted, so with the help of a friend using diskpart I formatted the new WD drive. The new drive now shows up in Computer as drive G. But now I also have a drive F, and it's 100MB in size. (see screen shot)
What have I done? and how can I fix it?
Drive F, which is your System Reserved partition, should be hidden and not have a letter.
Can you right click on the drive, click on Change Drive Letter and Paths, and remove the letter?
9 more replies
move the OS (windows 7) to the new C drive?
Answer:Installing SSD drive as C drive, moving current C drive to D. Once done how do I
Why not just use the SSD drive manuf. free utilities to "clone" the old drive to the new one ? ?
1 more replies
Hi all,
I am attempting to install Win7 in an Inspiron 1545 with a new SATA hard drive that was purchased. When i insert the drive, I get "Internal Hard Disk Drive Not Found." I have verified the drive is not showing in the BIOS, I've already attempted to reseat the drive several times. In addition, I've attempted to load the chipset drivers, SATA drivers, etc from Dells website, the drive still does not show in the OS install when booting to DVD or in bios.
I dont believe it to be an issue with the new drive, as I plugged the new drive into another model Dell laptop and it recognizes the drive fine in BIOS, also I can still boot to the previous hard drive in this laptop without issue when i plug it back in.
Is there a specific driver, etc that I need to use for the drive to be recognized or other thoughts people have?
This is strange - either the new hard drive has a rare compatibility issue with your onboard controller (I use a 500 GB drive in my 1545 without probs) or it is a mechanical problem. I would take a close look at the physical dimensions and compare your original HDD with the new one. Watch the video (even though it's about a 1525, similar to a 1545, and an SSD) - go to 2:20
10 more replies
For anyone who can help.
My name is David Roy. and i have a Dell Dimension 4600. In this dimension i have a dual drive CD-RW on the bottom and DVD-Rom on the top. (HL-DT-ST CD-RW GCE-8483B) and (Samsung DVD-ROM SD-616E)
Yesterday out of nowhere both my drives gave out. I can still see them on the my computer option but i cant use any of them. When i want to burn a blank CD it shows up on my computer but when i try to burn it, it sais no compatible recordable or rewritable devices could be found on your system. When i pop any movies or games or anything in any of the drives nothing shows up and if i click on the drive it sais please enter cd. So they cant read anything and this just happened. I have never had a problem like this in the past 2-3 years of having this computer.
Thank You if anyone can help me
David R
More replies
I am using Compaq Presario SR1520NX.
My O: drive is taken by a network drive whenever I log in and use wd2go.com, so my USB flash drive can't be used.
But when I disconnect from the O: network drive by rebooting the PC and then plug in the USB drive, it shows up as drive O:.
My question is: can I re-assign my USB flash drive to a different letter so both the USB and network drives can be used?
If so, what should I do?
Thanks.
Answer:O drive taken by Network drive and flash drive can't be used
With the USB drive connected, open Disk Management.
Right Click on the drive and select Change Drive Letter and Paths
You should then be able to assign the drive a letter that is not in use.
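The same assignment can be scripted with Windows' diskpart, which accepts a script file. Here is a minimal Python sketch that only generates such a script as text - the volume number and target letter are assumptions for illustration; check "list volume" output in diskpart before using anything like this:

```python
# Sketch: generate a diskpart script that assigns a chosen letter to a volume.
# Volume number and letter below are placeholders, not values from this thread.

def diskpart_script(volume: int, letter: str) -> str:
    """Return the text of a diskpart script assigning `letter` to `volume`."""
    return "\n".join([
        f"select volume {volume}",
        f"assign letter={letter}",
        "exit",
    ])

if __name__ == "__main__":
    script = diskpart_script(1, "H")
    print(script)
    # On Windows you would save this text to a file and run: diskpart /s script.txt
```

The point of generating a script rather than clicking through Disk Management is repeatability if the letter keeps getting stolen by other devices.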
7 more replies
Relevance 22.14%
Hi folks
I have just lost about 4 GB of hard-drive space and I don't know why! Just prior to this I had burned a DVD using Nero 7 StartSmart. The files copied were AVI & Nero Recode files. I'd burned them as data files, as Nero wouldn't let me copy them 'as they were' - 5 onto one 4.7 GB DVD-R. Being new to Nero (and computing generally), I am still finding out what it will do. I realised from this that Nero apparently recodes AVI files to normal DVD files to copy to a 4.7 GB disc. I started to copy one of these movies to a disc for a friend, but wanted to watch another movie that was on my hard drive. The system crashed and I aborted the burn process. It was after watching my movie that I noticed I was about 4 GB down on free space. This is annoying, as there is now only about 2 GB left on my C drive. I've tried to find the offending data files but without success. Can anyone give me any leads to regain my lost 4 GB of free space?
Answer:Re. Lost H/drive space.......C drive Not D drive!
I think you answered your own question. Nero copies to the HDD before burning the DVD. Since your computer crashed before the burn operation was complete, Nero did not have a chance to delete the intermediate files or ask you if you wanted to delete them.
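A quick way to hunt down leftover intermediate files like these is to list the biggest files under a suspect folder. A minimal Python sketch - the starting path is a placeholder, as the location Nero uses for temp files varies by setup:

```python
import os

def largest_files(root, limit=10):
    """Walk `root` and return the `limit` biggest files as (size, path) pairs."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files we can't stat (in use, permissions, etc.)
    return sorted(sizes, reverse=True)[:limit]

if __name__ == "__main__":
    # Placeholder path - point this wherever you suspect the temp files landed.
    for size, path in largest_files(r"C:\Temp"):
        print(f"{size / 1024**2:8.1f} MB  {path}")
```

A 4 GB DVD image fragment will sit near the top of this list, which makes it easy to spot and delete by hand.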
1 more replies
Relevance 22.14%
I have two external hard drives, and I did something to make them switch drive letters. The G drive is now F and the F is now G. How can I reverse them so they are back to normal? My iTunes can't find any of the songs because the paths have changed.
Cheers
Dave
Answer:Hard drive changing from G drive to F drive
If this is XP, enter diskmgmt.msc in the Start > Run box. Right-click a drive/partition and select "Change Drive Letter and Paths".
Move drive F: to Z:
Move drive G: to F:
Move drive Z: to G:
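The answer above is the classic swap-via-a-temporary: you can't move F to G while G is still taken, so you park one letter on an unused slot first. The same logic in miniature, as a runnable Python sketch (drive names are made up for illustration):

```python
def swap_letters(assignments, a, b, spare):
    """Swap the volumes mapped to letters `a` and `b`, using `spare` as the
    temporary slot - mirroring the F->Z, G->F, Z->G sequence."""
    assert spare not in assignments, "spare letter must be unused"
    assignments[spare] = assignments.pop(a)   # move drive F: to Z:
    assignments[a] = assignments.pop(b)       # move drive G: to F:
    assignments[b] = assignments.pop(spare)   # move drive Z: to G:
    return assignments

drives = {"F": "external-1", "G": "external-2"}
print(swap_letters(drives, "F", "G", "Z"))
# The two volumes now hold each other's original letters; Z is free again.
```

Any letter not currently in use works as the spare; Z is just a conventional choice because it is rarely assigned.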
1 more replies
Relevance 21.32%
I have a Sony VAIO VGN-NR330E that I have just bought a new hard drive for (doing a fresh install of Vista). The internal optical drive spins up but won't boot off the CD, and after a blinking cursor it says O/S not found. So I tried it on an external USB drive, and it didn't boot - same black screen followed by O/S not found. I tried booting from a Knoppix live CD that worked on another computer and it wouldn't boot either. I have no idea what to do, since it won't boot from the internal optical drive, an external optical drive, or an external USB HDD. Any help is greatly appreciated.
Answer:sony vaio vgn-nr330E won't boot to any external drive, or internal optical drive
What are the current boot options...as set in your BIOS?
http://www.ehow.com/how_8022156_change-bios-settings-sony-vaio.html
If not listed as the first boot option...it's unlikely that the system would boot from either drive.
Louis
3 more replies
Relevance 21.32%
My hard drive failed last Saturday, reading "operating system not found." BIOS testing resulted in error 10009, replace hard drive. At this point, the drive won't boot, but data can still be read.
I have a new hard drive. The backup disk I made for Vista isn't working.
I thought of borrowing one from a friend, but my OEM key on the bottom has completely worn off. HP says they cannot help and Microsoft wants me to pay for a replacement key. I'd like to avoid that if possible.
Is there any way to use the recovery partition on the old drive to install Vista on the new drive?
My laptop has two hard drive bays, but I only have one caddy, so ideally I'd like to connect it using an enclosure.
Also, I do have access to another computer that I could use as a "go-between" if needed. For example, if it's possible to simply copy and paste the recovery partition files, I could use the enclosure to copy them from the old drive to the working computer, then put the new drive in the enclosure and copy the files onto that.
More replies
Relevance 21.32%
I have a Dell Inspiron 1440 that I've been using since 2010. Suddenly I saw this error message when I turned it on: "internal hard disk drive not found, to resolve this issue try to reseat the drive". I then tested with another HDD from a different laptop and it booted fine. Realizing that the HDD was dead, I bought a new Samsung SSD 120 GB drive, installed it in my laptop, and tried to install Windows 7 from a CD - still seeing the same error when turned on.
ROHIT
Thanks guys!!!!!
My bad - after I installed the SSD, the CD that was in the DVD drive was the Applications/Drivers CD instead of the Windows 7 disk. I just realized this and installed Win7 on my new hard disk; my laptop is back to normal. Thank God! I saved 500 by not having to get a new laptop.
5 more replies
Relevance 21.32%
With my 2009 Seagate 500 GB SATA hard drive, the reallocated sectors count shows 2 bad sectors in the Ubuntu 10.04 Disk Utility SMART data. On this multi-boot desktop PC, XP with a drive monitor program called Active Smart shows this drive's health status as good, including the reallocated sectors count item. So is this nothing to worry about for now, or what? I do monitor the hard drive temperature, which stays cool enough - in the seventies up to about 80°F.
Or can this be caused by mixed partitioning methods? The new default is now align-to-MiB rather than align-to-cylinder as it was. I used the GParted tool to create and change partitions when it defaulted to align-to-cylinder, but I recently used both the Vista Disk Manager and a newer version of GParted to resize some of the partitions; both now use the align-to-MiB method by default, which often leaves gaps of nearly 1 MB between partitions.
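For reference, "1 MiB aligned" just means the partition's byte offset is a multiple of 1,048,576 bytes. A tiny Python sketch of the check - the 512-byte logical sector size is an assumption (correct for most drives of that era):

```python
SECTOR = 512  # bytes per logical sector (assumption; check with your tools)

def mib_aligned(start_sector: int) -> bool:
    """True if a partition starting at `start_sector` begins on a 1 MiB boundary."""
    return (start_sector * SECTOR) % (1024 * 1024) == 0

print(mib_aligned(2048))  # True  - the typical modern align-to-MiB start
print(mib_aligned(63))    # False - the old align-to-cylinder start
```

Mixed alignment between adjacent partitions is what produces those near-1 MB gaps, but it is cosmetic; it has nothing to do with SMART reallocated sectors.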
More replies
Relevance 21.32%
Do I need to go out and find an external DVD drive in order to access the pix / use the discs? Or am I missing something here? If I need to purchase one, what is recommended for a modest budget?
Answer:I have a new surface computer and cannot find a dvd/cd drive to view/access stored pix, do I need an external dvd drive?
Yes, you'll need an external USB CD/DVD drive to access discs. You can get one for as cheap as $20. They're all pretty much the same.
3 more replies
Relevance 21.32%
Bit of a strange issue here - someone got fired from work and changed the password on the Windows 8 computer before they left. I just managed to reset the password yesterday, but now the Outlook account has none of their previous emails, the OneDrive account files seem to have all gone missing, and certain files that were on the C: drive are gone and not in the Recycle Bin either. When I reset the password I noticed there were three accounts on the computer - USER (which is the one I reset, since it was the default when starting up the computer), ADMINISTRATOR (didn't touch this one) and GUEST (didn't touch this either). After I reset the password, I'm pretty sure it was the right one, as there were still a few relevant files on the desktop. It's just that everything else seems to have vanished. When I search for certain files in Explorer, file names come up but it says that "the file location has now been changed", so the shortcut doesn't work.
More replies
Relevance 21.32%
A week ago, a strange clicking noise befell us whenever we attempted to boot the system. The BIOS welcome screen would run for much longer than usual, and then a message "No internal hard drive found." would display. We thought it was a hard drive error and replaced the drive. Now, however, inserting a bootable OS installation CD does not work! Whenever we try, the computer acts like there is no CD (although sometimes the CD does begin to spin), and says there is no boot device available. I have tried a Ubuntu CD, a Windows Vista CD, and a Windows XP CD, and none of them work. It does not matter what CD you put in; it does not work. So what is the problem? Is it fixable? Was it the hard drive that broke? The system is a Dell Inspiron 5150 laptop.
Answer:Solved: After upgrading hard drive for Inspiron 5150 laptop, CD drive does not work
Never mind, everyone! It turns out, after I took it out and put it back in, it worked again. Maybe a connection problem... who knows.
1 more replies
Relevance 21.32%
I have a USB external hard drive that I keep all my documents etc. on (I've had it for years). I upgraded from Vista Home to 7 Home Premium, then had to upgrade recently to Professional to run my Sage. Through all these upgrades my external drive ran fine. Occasionally the drive letter would change if I had something else plugged into the USB; this was always easily corrected in Disk Management by changing the drive path. The connection on the case packed up, so I had to get the drive put into a new case. Now when I plug it in, the drive is assigned G instead of F. I tried to change the drive letter allocation in Disk Management, but it won't let me, as the program still thinks I have a second external hard drive which is labelled F. I suspect this happened because when the USB connection broke, the drive was disconnected suddenly instead of properly ejected. How do I get Disk Management to remove the inactive drive? I can't find any obvious way - eject, delete, etc. are all missing when I click on tools or tasks. If there is no easy way, then how do I stop program updates for Adobe etc. failing because they cannot find the F drive? I don't know why they look at that drive anyway. Help.
Answer:External Hard Drive - Drive Letter changed - unable to change back
Hi PAMDILL,
Welcome aboard. Try this:
Download drivecleanup.zip V 0.8.1 from Drive Tools for Windows.
Unzip to a folder. In it you will have two folders, Win32 and x64, each containing DriveCleanup.exe for Windows 32-bit and 64-bit respectively.
Now unplug all the USB devices from your PC (except of course the keyboard and mouse), right-click on the appropriate DriveCleanup.exe for your system and run it as an administrator.
This will remove all non-present drives from the registry. Reboot and then plug in your external drive. The drive will be installed and hopefully you should be able to assign any free drive letter to it. Please report whether it resolved your problem.
2 more replies
Relevance 21.32%
I have a very straightforward question: I am running WinXP Pro on my computer and I just got an external USB hard drive to back up my data before I upgrade to Vista. My internal hard drive is NTFS file system, and the external drive is FAT32. My data backed up perfectly. However, I understand that Vista is NTFS-only (for the drive that it's installed on, which in my case will be the internal drive). I will be installing the Vista OS on the internal drive, and using the external drive solely for backing up data. I don't want to back up my data externally, upgrade my PC to Vista, and then discover that I can't access the data on the external drive. Here's my only question: will Vista work with a FAT32 external drive, or do I have to convert that external drive to NTFS *before* I attempt to connect it to Vista? Awaiting your replies - I'll appreciate all the help I can get!
Answer:will VISTA read a FAT32 external hard drive,or do I have to format the drive to NTFS?
6 more replies
Relevance 21.32%
My Sony computer that I have had since 2000 or so is store-bought and came with Windows XP. I believed it had two hard drives in it until earlier this year, when I found out it came with one partitioned hard drive. I now have two physical HDDs and three that appear in My Computer. I need to restore my computer and would like to make my partitioned drive appear as one. I will have all my backup files on my second physical HDD. If it makes things easier at all, I do not have a Windows install disc, only the system restore discs that came with my computer. If this isn't clear enough, please ask me to clarify.
Answer:Partition Issue - Making a partitioned hard drive a single drive in Windows
10 more replies
Relevance 21.32%
Hello, I'm going to be replacing my primary HDD (1 TB) with an SSD and will place the HDD in a drive caddy, replacing the existing DVD drive. Third-party vendors list both 12 mm and 9.5 mm drive caddies for this. I imagine a newer laptop would have the smaller form factor; unfortunately this spec info is not available from the manual or Lenovo tech support - does anyone happen to know? It might be best if I yank the drive and measure it, or take it with me to buy a caddy in person, rather than order online.
Thanks for reading,
Paul
Answer:Replacing the Optical drive on Ideapad 510151SK with HDD caddy, need drive size -12mm or 9.5mm?
Per page 63 of the Lenovo ideapad 510-15ISK / ideapad 510-15IKB Hardware Maintenance Manual, item #4, your laptop uses one of three 9.0 mm optical drives (see below), so a 9.5 mm caddy may not work. There are some 9.0 mm universal versions like this one available, but I have not used them, so be sure to check with the seller for compatibility before ordering.
No / Description / Part# (FRU):
4 HLDS GUE0N 9.0mm Slim Tray Rambo ODD 5DX0J46488
4 PLDS DA-8A6SH 9.0mm Slim Tray Rambo ODD 5DX0F86404
4 TSST SU-228GB 9.0 Rambo ODD 5DX0H14227
Lenovo ideapad 510-15ISK/ ideapad 510-15IKB Hardware Maintenance Manual
https://download.lenovo.com/consumer/mobiles_pub/ideapad_510-15isk_510-15ikb_hmm_201604.pdf
Cheers,
1 more replies
Relevance 21.32%
I replaced the hard drives in two X230 tablets today (HGST Z7K320-320, replacing with the same). Everything seems fine, but when the system reboots, it prompts for the hard drive lock password again (normally, it would just reboot without prompting). That wouldn't be a huge deal, but for whatever reason, it simply will not recognize the drive lock password. If I power it down and enter it, then everything is fine: the password is recognized and Windows boots normally. This is a fresh install of Windows 7 64-bit, btw.
I'm not sure why it's doing this, but the fact that two are having the same issue means something else needs to be done. Both replacement drives are new, and I set the drive lock password for both after installation. Has anyone encountered anything like this?
More replies
Relevance 21.32%
Well, I've been having this problem for a while, but now it's getting a bit out of hand. When I go to standby and come out of standby, it says 'Unable to read/write from drive C:', and other times (just regular surfing, playing CS, etc.) the HD locks up for about 10 seconds. It also does a thing I'd like to call "HD turns off, then turns back on again with a grinding sound". I've tried reformatting, which had some '...trying to recover' errors. And I've tried ScanDisk on thorough A LOT of times, and it came up with some bad sectors on some of the scans. I've also tried Western Digital's disk drive utilities, but it now has a weird error saying the IDE isn't plugged in correctly - but it is.
Specs: Win98, 40 GB Western Digital HD [I won't list all the other specs. I don't want to intimidate other people ^_^;]
Answer:Hard Drive acting weird. Unable to read/write on drive C:, and other errors
Hi trunten
A grinding sound from a hard drive and many bad sectors is not good. It sounds like your drive has bitten the dust. Since it is 40 gig, there is a good chance that it is still under warranty, as most hard drives come with at least a 1-year warranty and many have a 3-year warranty. You could recheck the drive's jumpers (master/slave) and reseat the data cable, but I don't think that will help.
BOL
3 more replies
Relevance 21.32%
Has anyone run into a problem where you have a mapped drive and it takes a minute or two to show the folder list after clicking on it? My Computer/Explorer completely freezes until the folder list is shown. If I disconnect the share and change the drive letter, everything is back to normal. If I change the drive letter back, then the problem returns.
There is also constant network activity while it is sitting there frozen. This is driving me nuts. I have a small network with 40 computers on it. Just a few days ago, 6 of them started doing this. They are all XP SP3 and our server is running Win2k Server SP4.
Here's what I've tried so far:
MSConfig to disable all startup programs and services.
Disabled the WebClient service.
Disabled the Windows Image Acquisition service.
Ran net use * /del /yes to delete all shares.
If I access the server directly (\\server\data), it's fine, so it's not a DNS issue. It's just as soon as I map it with that one drive letter. If I change it to another letter, the problem is gone. All 6 computers are running different versions of Office and AutoCAD. I am having an impossible time trying to find a common thread. The only thing that I can think of is that a recent Windows Update is causing it. I can't change the drive letter on the mapping because AutoCAD references the files by the drive letter, not the UNC name. Let me know if you have any ideas.
Thanks,
Ben
More replies
Relevance 21.32%
Hi, I had memory issues, as have many on these boards. Following the advice, I first updated my Lenovo system, then pointed Rescue and Recovery to an external hard drive and saved a new base backup. At some point I was asked if I wanted the existing backups deleted and I said yes; however, I don't think they can have been deleted, as there has been no increase in available space. I also ran the SpaceMonger program recommended on the boards, and it was unable to scan 60% of the hard drive. So, can you please let me know how to get to, and delete, the original backups? Thank you.
More replies
Relevance 21.32%
Yesterday my bro used a pen drive that was probably infected, and when I checked my PC the antivirus was already disabled; when I tried to open it, it didn't open. This time I'm able to open the C, D and E drives. I scanned the pen drive on my other PC and identified it: it's Win32.Sality.ag. I know it's a dangerous and nasty virus.
Then I searched the internet and used the AVG Sality remover kit. Usually the kit runs with Windows open, but this kit said the virus would be removed during boot. I rebooted and 11 hours of scanning took place, but the PC was still infected with Sality. Then I downloaded the Kaspersky Sality curing kit, and it cured it. But after the scan finished, when I clicked the C drive it started showing "Caption - Hello World", and then the D and E drives showed an unknown format. I ran the registry key given in the Kaspersky folder called Disable Autorun.
----------
DDS (Ver_2011-08-26.01) - FAT32x86
Internet Explorer: 6.0.2900.2180 BrowserJavaVersion: 1.6.0_27
Run by Owner at 16:14:01 on 2011-10-01
Microsoft Windows XP Professional 5.1.2600.2.1252.1.1033.18.3071.2470 [GMT 5.5:30]
AV: Kaspersky Internet Security *Disabled/Updated* {2C4D4BC6-0793-4956-A9F9-E252435469C0}
FW: Kaspersky Internet Security *Disabled*
============== Running Processes ===============
C:\WINDOWS\system32\svchost -k DcomLaunch
svchost.exe
svchost.exe
svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\Program Files\Java\jre6\bin\jqs.exe
C:\WINDOWS\System32\svchost.exe -k HPZ12
C:\WINDOWS\system32\nvsvc32.exe
C:\WINDOWS\System32\svchost.exe -k HPZ12
C:\WINDOWS\system32\STacSV.exe
C:\WI... Read more
Answer:virus.win32.sality.ag + Caption Hello World (c drive) + d+E drive showing unknow
OK, now: I removed autorun.inf manually from the C/D/E drives and now they are opening. I also deleted an unknown hidden exe from the C drive, which I think was causing the "Hello World" caption to appear. <-- solved. The antivirus is also repaired and now opens (it didn't before).
---------------------------------------
Now I made a new scan.
----------
DDS (Ver_2011-08-26.01) - FAT32x86
Internet Explorer: 6.0.2900.2180 BrowserJavaVersion: 1.6.0_27
Run by Owner at 22:43:59 on 2011-10-01
Microsoft Windows XP Professional 5.1.2600.2.1252.1.1033.18.3071.2640 [GMT 5.5:30]
AV: Kaspersky Internet Security *Disabled/Updated* {2C4D4BC6-0793-4956-A9F9-E252435469C0}
FW: Kaspersky Internet Security *Disabled*
============== Running Processes ===============
C:\WINDOWS\system32\svchost.exe -k DcomLaunch
svchost.exe
C:\WINDOWS\System32\svchost.exe -k netsvcs
svchost.exe
svchost.exe
C:\WINDOWS\system32\spoolsv.exe
C:\Program Files\Kaspersky Lab\Kaspersky Internet Security 2011\avp.exe
C:\WINDOWS\Explorer.EXE
C:\Program Files\Java\jre6\bin\jqs.exe
C:\WINDOWS\System32\svchost.exe -k HPZ12
C:\WINDOWS\system32\nvsvc32.exe
C:\WINDOWS\System32\svchost.exe -k HPZ12
C:\WINDOWS\system32\STacSV.exe
C:\Program Files\CyberLink\PowerDVD\PDVDServ.exe
C:\WINDOWS\sttray.exe
C:\WINDOWS\system32\RUNDLL32.EXE
C:\Program Files\Common Files\Ahead\Lib\NMBgMonitor.exe
C:\Program Files\Internet Download Manager\IEMonitor.exe
C:\WINDOWS\system32\igfxsrvc.exe
C:\WINDOWS\system32\wuauclt.exe
C:\Program Files\Kaspersky La... Read more
3 more replies
Relevance 21.32%
I've never seen this before. Double-click on e:\ in My Computer and it says "not a valid Win32 application". I've seen that when opening applications, but never on a drive! All other drives are fine. The first time I saw it, it was looking for install.exe, which I remember deleting a week or so ago - most likely because it was a missing shortcut in the registry, or because it was corrupted or something similar. I don't know what to do about this; as I said, I've never seen it happen on a drive before. You can open the E drive when you right-click and Explore, but when you double-click it sometimes tries to locate install.exe. No virus, though I did have a few recently. I won't just put it down to that and dismiss it, because I am very vigilant: I remove viruses immediately, scan twice a day, and always scan before opening apps. Weird one! Logically, how do I stop it looking for install.exe when opening?
Thanks
More replies
Relevance 21.32%
Hey guys... I am trying to find a solution to a problem that I'm having. At work we have about 6 PCs that I would like to make drive images for, in case of drive failures.
I was wanting to use an external drive for the backups and was looking for a one-touch feature, but I don't think that would work with multiple-machine backups. I was trying to avoid having to install software on each machine to perform these backups. So what I was thinking of doing is trying to find a program that can reside on the external drive and run from there. Does anyone know of a free, reliable imaging program that will do this? I had found one at one time that did just this and worked great, but I can't for the life of me remember the name of the program! Any help or ideas appreciated!!
Mike
Answer:Good drive imaging program to backup multiple systems with an ext. drive? Free?
6 more replies
Relevance 21.32%
Hey everyone! I've had my ThinkPad Edge E545 with Windows 7 Pro for well over a year, maybe a year and a half, but it stopped recognizing my CD-RW/DVD-RW drive a month or so ago. I thought maybe it was a glitch that would work itself out with a restart, and I forgot about it until yesterday when I needed it again. The drive does not show up in Device Manager at all. When I put a CD in, it spins for a few seconds, but it doesn't register with the computer, and never shows up in My Computer or in Device Manager. I'd love to at least see the drive with a yellow exclamation point! Searching the support section of this site, I could not find a driver for it anywhere. Does anyone know where I can get one, or have some other solution that I may be able to try? I'd appreciate any help I can get! Thanks in advance!
Answer:Need driver for my CDRW/DVDRW drive, Thinkpad E545 Windows 7 Pro cannot find the drive at all
Hi shawnEO,
Welcome to Lenovo Forums! Please find the links given below and run the test for the optical drive.
http://support.lenovo.com/us/en/products/laptops-and-netbooks/thinkpad-edge-laptops/thinkpad-edge-e5...
http://support.lenovo.com/us/en/products/laptops-and-netbooks/thinkpad-edge-laptops/thinkpad-edge-e5...
Please let me know whether the above steps helped you. Thank you, have a good day.
Did someone help you today? Click on the 'upper half star' at the left side of the post to thank them with a Kudo!
Regards, Naveen.v
3 more replies
Relevance 21.32%
What is the condition of a compressed file if you move it from an NTFS5 drive to an NTFS4 drive on Windows 2000?
Thanks
Answer:moving compressed file from NTFS5 drive to NTFS4 drive on windows 2000
Try copying it first and check the condition of the file. Moving should be the same, but if it is not readable (doubtful) then you still have your original copy.
Let us know what the result is, please.
1 more replies
Relevance 21.32%
I have just installed a new 8.7 GB Maxtor in a P100. This drive is set as master and partitioned into 5, which assigns drive letters C, D, E, F, G. If I re-install the old 500 MB drive unit as a slave (Quantum Maverick), it assigns itself drive letter D. I want this drive to show up as drive H. If I go to Device Manager > hard drives, double-click on "generic IDE type 47" > Settings, the drive letter assignment for this drive is there, but I cannot change it. Any ideas, please?
Answer:{SOLVED} Slave drive letter inserts itself within master partition drive letters
I don't think you can change it. Normally the BIOS will take the primary partition of the 1st IDE drive as C, the primary of the second IDE drive as D, the next partition of the primary IDE drive as E, etc.
3 more replies
Relevance 21.32%
So currently my Gigabyte motherboard has two 1 TB drives in a mirror RAID as my secondary storage, and one 250 GB single drive for my OS. In the BIOS I have the hard drive setting set to RAID, which I set after I loaded Windows when I first started. But now my 250 GB single OS drive needs to be reloaded, and my XP disc will not see the drive since I have it set to RAID.
I'm having trouble loading the RAID drivers from Gigabyte, and I thought it would be easier to set the hard drives to AHCI mode, unplug my two 1 TB drives, reload Windows on the 250, then plug the 1 TB drives back in and switch the BIOS back to RAID. But I am worried that this will mess up the RAID on my two 1 TB drives. I would hate to experiment and lose the data on those drives. Will switching modes mess up my RAID?
Answer:Gigabyte Mirror Raid on backup drive need to reinstall XP on single OS drive question
Pull the drives off the system and do whatever you need to do to get XP installed. When that is working, change the settings to RAID, then reconnect the mirror. It's best to install XP with nothing else on the system anyway; otherwise you risk having XP decide to install your O/S on drive "D:".
5 more replies
Relevance 21.32%
I have a new E525 that I am trying to use with some external eSATA storage. If I boot the system with the eSATA drive attached, the internal HDD is not found and the system will not boot. I get "Fixed Disk: External Disk 0" on the diagnostic splash screen. If I boot with the eSATA drive turned off, and turn it on once the OS has booted, all is well and both internal and external drives are found. Any ideas? I have looked for a BIOS update but have not been able to find one. I've also been through most of the BIOS settings but can't see anything that looks relevant.
Answer:E525 internal hard drive not detected by BIOS when eSATA drive attached
It appears that this has something to do with the hard drive ordering. When the machine is switched on with the eSATA drive attached, the eSATA becomes SATA HDD0 (presumably the internal drive becomes SATA HDD1). The only option to boot from SATA drives on the boot-order menu in the BIOS is SATA HDD0, so it never tries to boot from HDD1 (now the internal drive). I can see this is useful if you WANT to boot from the eSATA, but I don't...
4 more replies
Relevance 21.32%
We recently purchased 10 Dell Optiplex 5040 desktops. On one of them I built a Windows 7 32-bit configuration. I wished to capture an image of it using Ghost and save it on an external drive. The USB stick that I used has been used successfully many times on a variety of machines. When I boot from this USB it boots successfully to X:\Windows\System32, and then I can navigate to D: as the local drive. When I try to navigate to E:, the message is "the device is not ready". When I try to navigate to F:, the message is "the system cannot find the drive specified". I have tried using all of the USB ports on the machine with the same result. When the machine is booted into Windows it does recognize and read the USB stick. The disk administrator identifies it as 'healthy' with an active partition. The file format of the USB stick and the machine being imaged is NTFS. I had downloaded all of the drivers for this computer using the Dell website's Dell Detect program. The BIOS settings are the default settings. I have tested this on the other new Dell Optiplex 5040 desktops right out of the box with exactly the same result. All of the Dell diagnostic tests have normal results - whether run from the machine itself or the Dell website. Are there any known problems/fixes that apply to this situation?
More replies
Relevance 21.32%
I have a Samsung S2 1 TB external hard drive. Today when I hooked it up to my laptop it would not load. It shows "Local Disk (F:)" when it usually says "Samsung Drive". A pop-up box then appears stating "you need to format the drive". I know if I reformat it I will lose all my data, which I will not do. When I look in Device Manager it states that the external hard drive is installed. I attempted to go through the Start menu, type cmd, then F:, and it states "The volume does not recognize this system file. Please make sure all required system drivers are loaded and the volume is not corrupted."
PLEASE HELP, I DON'T WANT TO LOSE ALL MY FILES
Answer:Samsung s2 external hard drive not loading showing local disk F drive
With the external drive connected, go to Disk Management (Start > type: "computer" > click on Computer Management > click on Disk Management).
Determine which disk is the external drive. If the external drive shows here as a healthy volume, right-click on the large box to the right of the "Disk 1" / "Disk 2" label and choose: Change Drive Letter and Paths. If there is nothing but a grey box or "Unallocated Space" next to the disk number, then right-click on the disk number (the box) and choose: Change Drive Letter and Paths. Change the drive letter to anything else > OK.
If you can't do any of the above, post back. If you can, a screenshot of the Disk Management window can help. Do not initialize the disk or format it, or you will lose your data.
9 more replies
Relevance 21.32%
I have been fooling around with Robocopy for a while now, and the number of switches it features is a tad overwhelming. Can anyone recommend a bone-stock "go to" command line specifically for copying files from an old drive to a new drive, with the following requirements?
1. Maintaining the same folder structure (old to new) while preserving existing directory timestamps (/DCOPY:T?)
2. Ensuring that the destination files inherit the permissions of the new destination directory. (Confused here?)
3. Multithreaded (/MT:8?)
4. Copy all subdirectories (/E?)
5. Give me basic info on screen (/TEE?)
6. Write a log file that's not overly huge
7. Cut the retries etc. to a minimum
Anything else you think might be relevant. I really want an easy go-to command that I can reach for again and again without wondering whether it works or not.
Cheers!
VP
More replies
Relevance 21.32%
Hi, Knowing a little about the ATA Security command set, SED SSDs, etc., I would like some clarification on what, exactly, the two functions (1 - Normal Erase and 2 - Resetting the Cryptographic Key) do.
The standard ATA Security commands are "SECURITY ERASE UNIT" and "ENHANCED SECURITY ERASE UNIT". Both can be used, if supported, via specialty utilities such as the DOS-based HDDErase, recent releases of the Linux-based Parted Magic, etc. Most drives report the same amount of time required to execute each one, if available; the time is generally scaled to drive size for HDDs (sometimes > 2 hrs for large drives) but is reported as a pretty standard 2 minutes for SSDs (though the run usually takes less time than that on SSDs).
My question: When I boot the utility on my W520 and choose option #1 from the Lenovo utility that sets up the BIOS function on reboot, only item #1 indicates it will take two minutes, and item #1 takes at least 20 seconds or more - pretty good evidence that item #1 is a SECURITY ERASE UNIT call. Item #2, "Resetting the Cryptographic Key", however, takes < 5 seconds. This indicates to me that it may not actually be a call to "ENHANCED SECURITY ERASE UNIT", but some other call. Alternately, it might be "ENHANCED SECURITY ERASE UNIT" and the drives (different models) aren't being up-front about the time it takes to do this.
Any ideas from folks experienced in this area or from Lenovo reps?
Thanks,
Brendan ... Read more
More replies
Relevance 21.32%
I installed a second hard drive in place of the DVD drive, and now when my computer is plugged in it won't charge (it says "plugged in, not charging"). I can't imagine that the power adapter isn't able to handle a second hard drive, because I've seen plenty of people say they put two hard drives in their W530. Is there some kind of driver or setting that I need to reset to make this work again? I'm on Windows 8.1, by the way, so a lot of the solutions I've found by googling do not apply to my system.
More replies
Relevance 21.32%
I have an Alienware Aurora R6 and am frustratingly trying to upgrade the optical drive to one that can read Blu-rays.
Does anyone know what drive size specs I should be looking for? It's a low-profile drive that is even smaller than a standard laptop optical drive. I tried posting this to an existing Aurora R6 article (https://www.windowscentral.com/how-upgrade-alienware-aurora-r5-and-r6), but my reputation here isn't high enough to post to older articles. Thanks for any help. Background, in case anyone's interested: when purchasing this computer, the only spec they showed was that it was a 5.25" drive. Assuming it would be a full-sized 5.25", I ordered the system with the least expensive DVD option, meaning to switch it out with the Blu-ray drive in my old system. This, of course, didn't work, so eventually I ordered a low-profile Blu-ray drive meant for laptops. The slot in the case is too small even for this drive. What grinds my gears about this is that it's obvious that the internal bay (silver metal part) is big enough for the laptop drive. It's the outer case (black metal part) and front bezel that are the obstructions.
More replies
Relevance 21.32%
Hi, I've been looking for this for a while now. The only thing I ever came across that was close was the Microsoft virtual CD-ROM drive program, which I never really cared for in the first place. I'm just looking for a quick, easy and painless way to mount an ISO image and the like, without having to install an app to mount it, and autoplay it, explore it, etc. Any and all suggestions are greatly appreciated. Happy Holidays all!!!
Answer:Portable Image mounter / Virtual drive / drive emulator software
6 more replies
Relevance 21.32%
Hello, I have a Dell Inspiron 15 N5050. This unit was having high lag and memory utilization of 100% shown in Task Manager and Resource Monitor. Ran Dell extended troubleshooting. Results = hard drive error, replace hard drive. What are the hard drive specs for this unit? This unit runs Win 7. How do I reinstall on a new hard drive? Thank you in advance for your help!
Answer:Inspiron N5050 Hard Drive Crashed.
Drive type and OS Reinstall help
Any 2.5" SATA notebook drive will work that's 9.5 mm or slimmer. Ideally you prepared a set of recovery media when you received the system -- if you did not, you will need to contact Dell to purchase that now.
1 more replies
Relevance 21.32%
Hi, I am trying to reinstall XP Pro on my Toshiba notebook with an external DVD drive. I am able to boot from the drive, and the blue "Windows Setup" screen begins to load. After I press Enter to set up Windows XP, it says "Setup did not find any hard disk drives installed on your computer... Press F3 to quit." I am stumped. If I quit Setup, I can reboot and load my original Windows. Therefore, the hard disk is present and works fine. How do I get Windows Setup to see the hard drive and proceed with setup? Thanks. Dw1256
Answer:Solved: Install Windows with External Dvd Drive - Hard drive not found?
It is usually because the installation cannot recognise the SATA driver for the hard disc. If you have a USB drive, load the SATA controller drivers from Toshiba onto it; on the install screen you press F6 to load drivers and then load the SATA driver from the USB - I say USB as I presume you do not have a floppy drive. OR you can include them in a reburnt CD using nLite, but that is a longer way round the problem. nLite: http://www.nliteos.com/guide/part2.html
OR you can try this: enter your BIOS/Setup Utility and locate the Serial ATA or SATA configuration section - I've seen this section called 'OnChip Config' on some Phoenix Award BIOS; on Lenovo/IBM ThinkPads it's in Config > Serial ATA (SATA). Change the mode of the SATA controller from AHCI to IDE or Compatibility, then Save & Exit. Reboot and begin the Windows Setup again. If Windows Setup successfully detects your hard disk this time, then go ahead and perform the Windows Setup. When Windows Setup completes, change the mode back to AHCI in the BIOS.
3 more replies
Relevance 21.32%
If this question has been addressed elsewhere, please direct me there.
I have a dual-hard-drive, dual-boot system. Before upgrading to Win10, I purchased a Samsung 850 EVO SSD. I cloned my Win7 HDD to the SSD. I then deleted different items on each of the drives based on what I was going to use each drive for (games and general-purpose computing). Next, I updated the general-purpose HDD to Win10, intending to dual-boot the Win7 SSD for games. After upgrading the HDD to Win10, I discovered that the Windows 7 OS installed on my SSD was declared invalid by Microsoft and would not boot. I called Microsoft and was told I needed my Win7 install key. I recently moved and cannot find my Win7 installation disk. I have Belarc installed on both drives. Is there any way to recover the CD key from the Windows 7 drive even though it will not boot?
Answer:recover win7 drive use after install Win10 on 2 drive dual boot sys?
ShowKey Plus from this forum: Showkey - Windows 10 Forums
1 more replies
Relevance 21.32%
I am trying to access files on a hard drive from a different computer which will no longer load the OS. I have connected the hard drive using a USB adapter. When I plug it in, I get a notice that the device is ready to use, but it does not show up in Computer. It is visible in Disk Management. There is no drive letter assigned, and the options to do so are greyed out. Most of the options in Device Manager are greyed out, except for 'Delete Volume' and 'Help'.
Answer:hard drive w/ usb adapter- drive shows in disk manager, not computer
Welcome to the forum. Can you post a maximized screenshot of your full Disk Management window?
Screenshots and Files - Upload and Post in Seven Forums
4 more replies
Relevance 21.32%
Format my hard drive for a clean install of new Windows 10 on a flash drive.
More replies
Relevance 21.32%
I have an HP Pavilion a1250n, Prod# EG194AA-ABA, SN# MXK5360NH7, Windows XP Media Center. Can anyone help? The computer won't reboot and I never got a disc. Tried F10, but it still will not reboot. Gave the drive to a computer tech; they said it's fried. Bought a new drive (SATA, 1 TB, 7200 RPM) and tried to use my Windows 2000 Pro to bring it up; it won't come up. Do I have to buy a new Windows XP disc? Can someone give me a hand? I used the computer in my shop for doing graphics in airbrushing, to make templates, and I lost a lot of data from it. The old drive still spins. Dollars are short right now, but I want the computer back up. Does anyone have a disc for the master reboot of HP? HP says they don't have it. The drive that was in it was the same as the new one, except for space. I live back in the sticks, so town is a ways away and a city is hours away. Thanks again.
thank you
More replies
Relevance 21.32%
I have a backup drive that we had used for a while (WD 6000AAK), previously in an external drive enclosure. I connected it today via USB to find that the computer has the driver installed and the drive appears in the Devices list, but it does not show in My Computer. In Disk Management the drive appears but is not assigned a drive letter. Also, all the options for the volumes are greyed out and cannot be selected; thus I cannot assign a drive letter.
I have tried this on multiple computers. I have also removed it from the enclosure and connected the drive via SATA... the result is identical. I checked the bios and the drive appears there.
The drive does spin when powering up. It is hot (even though it cannot really be accessed).
Any suggestions? Is this drive done?
Answer:Hard drive installed, shows in Drive Mngmnt, in Bios, but not in Comp.
Go to Disk Management, in the bottom part of the screen, right click the drive, select 'change drive letter or path', next window select add, next window select a drive letter from the drop down box. For an external drive it's best to use a letter lower in the alphabet. If disk management wants to initialize and format it, don't do it if there is data on it you don't want to lose. If no data or if you don't care, go ahead. It will show in computer when a drive letter is assigned.
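The same drive-letter assignment can also be scripted with diskpart (built into Windows, run from an administrator command prompt); the volume number below is hypothetical and must be read off the `list volume` output first:

```shell
rem -- inside diskpart: identify the external drive by its size and label
list volume
rem -- 3 is a placeholder; select the volume you identified above
select volume 3
rem -- X is a placeholder letter
assign letter=X
exit
```

The same caveat applies: if diskpart or Disk Management only offers to initialize or format the disk, stop there if the data matters.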
9 more replies
Relevance 21.32%
I can't get my 'puter to read the data on an external hard drive. It wants me to format it, even though it recognizes the drive letter I gave it when I did format it. I don't want to lose the 100+ gigs of data I have on this drive.
Answer:my computer recognizes the drive letter but not the data on external hard drive
Does the external drive work on another machine?
Also, while the drive is still connected, can you go into Device Manager (right-click My Computer, click on 'Manage...', click on 'Device Manager') to check for any yellow exclamation marks or red crosses next to anything in there?
3 more replies
Relevance 21.32%
Hello. I have recently upgraded from Vista 32-bit to Windows 8 Pro 64-bit. All my important data was stored on an Icybox external drive enclosure in RAID mode (fully working before the upgrade).
Since the upgrade it is not showing up under "My Computer", and when I go to Disk Management it shows as an invalid dynamic drive. I spent about 18 hours in various forums searching for a solution and tried everything suggested; nothing worked.
I need the data that is on the ICYBOX and cannot convert it to basic disk until I get the data off.
The commercial software recommended did not work either.
Losing my will to live! Does anyone have any suggestions, please?
Thank you
Check here: http://illbethejudgeofthat.wordpress.com/2010/10/20/repairing-a-dynamic-invalid-drive-in-windows/
5 more replies
Relevance 21.32%
Thank you for reading my tale of woe!
I recently had some malware infections that I simply could not remove, using a huge arsenal of AV scans & removers, so I got a new hard drive ("H"), and attached old one ("C") as a slave. After configuring "H" with new XP Pro OS/Avira AV/Comodo Firewall, I used my original Outlook 2007 CD to install it.
I got the basics done (importing .pst files), but there are a whole lot of details that aren't moving to the new Outlook. I have gone through the Microsoft support pages, which normally I find super helpful, but this time I seem to be going in circles. I am anxious to get the data off C so we can re-format it: I hate having the infected drive around.
If anyone can provide any insights into *any* of the following obstacles, I would be most appreciative.
1. Install went apparently well, but when I tried to install an Outlook add-on (Calendar Print Assistant, from the Microsoft Download Center) I got a message that my copy of Outlook could not be validated: "Problem: The validation process could not be completed because a copy of Microsoft Office® on this computer has not been activated." Following the Microsoft instructions led me in circles, warning that my copy might be counterfeit. I purchased it new from Amazon over a year ago. All the Microsoft support pages tell me that I must use the Activation Wizard during installation to enter the product key, which I did, and it was accepted. Even th... Read more
More replies
Relevance 21.32%
Ok, this is a weird question, I know, but I don't know exactly how to Google this to find the answer.
I have a 1.5 tb external drive (which shows up as something like 1.3 tb in Windows). I use it for storage, but now I have it set up to automatically back up my internal drive. Now it shows up as 1.04 tb in "My Computer." I looked in "Disk Management" and it seems to have three partitions, two of which have no drive letters assigned. So, I just have one quick question, is that normal?
The reason I ask is because today I was messing around with dual-booting Ubuntu. I just wanted to make sure I didn't mess something up.
To anyone who answers, thanks for putting up with a stupid question, lol.
Answer:Solved: Will setting a drive as a backup drive reduce what's shown as available space
What is the make and model of the external drive?
2 more replies
Relevance 21.32%
Hello, thank you for your time. I am experiencing issues with a Dell Dimension 4100. I got a virus and was told upon startup to insert some kind of startup disk; I'm not sure exactly, it was a while ago. So, in all my stupidity, I tried to connect the hard drive to a different motherboard (a Cisnet computer), in the spot the DVD drive plugs into. Well, after a few tries I somehow got it to recognize the hard drive upon startup, and it tried to format it, I think, although as soon as I saw that screen I powered down. Now the hard drive is no longer recognized by either computer; even the BIOS says the total memory is 512 MB with it connected to the hard drive spot of the original Dell. I would really like to get my Dell working again, but it gives me the error "Invalid Boot diskette. Insert Boot diskette in A:". I do not care about restoring the files on the hard drive, they are worthless to me; I just want to get my computer working again. Any help would be greatly appreciated. Thank you.
Answer:Non detected Disk Drive/Hard Drive Need help with dell Dimension 4100
Place the OEM Windows CD into the CD or DVD drive and boot off this disk. If your system doesn't want to boot off the CD or DVD drive, go into the BIOS and set the CD or DVD drive as the first boot device in the boot order. Once you successfully boot off your optical drive, you can then delete the partition on the hard drive, create a new partition and reformat, then perform a clean installation to that hard drive. If you don't have a recovery or OEM CD available, you will have to buy one.
1 more replies
Relevance 21.32%
Hi everyone,
I installed Ubuntu on my computer a few months ago and created another partition for it on my 1TB hard drive.
I didn't really care for Ubuntu so I decided to delete the partition it was on.
That might have been a mistake.
Well, now there's 87.68GB of free space on my hard disk that I can't use and I don't know how to add it back to my c: partition.
There was another post about this a couple years ago, but I don't understand the instructions and am not actually sure if it worked. Can someone explain how to do this, please?
I'm not completely computer illiterate, but I'm not familiar with partitioning disks. It was just the one time with Ubuntu.
The unallocated area is actually an empty extended partition. You need to delete that first.
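A sketch of that cleanup with diskpart, run from an administrator command prompt. The disk and partition numbers are hypothetical, so verify them with the `list` commands before deleting anything:

```shell
rem -- inside diskpart:
list disk
rem -- 0 is a placeholder for the 1TB drive
select disk 0
list partition
rem -- 2 is a placeholder for the empty extended partition left by Ubuntu
select partition 2
delete partition
rem -- now grow C: into the freed, adjacent unallocated space
select partition 1
extend
exit
```

Note that `extend` only works when the unallocated space sits immediately after the selected partition; otherwise a third-party partition tool is needed.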
2 more replies
Relevance 21.32%
Hello,
I have an issue I have not had before. I am running XP Pro SP3 with all updates installed. I use AVG Free for everyday protection, with a router configured as a firewall, as well as the standard Windows firewall. I also use Windows Defender and the Windows Malicious Software Removal Tool. I was using Live OneCare, but it no longer exists. I also use the Malwarebytes free version regularly, and RUBotted.
So here is the issue. I was at a work conference and took my flash drive to transfer my presentation to the laptop that is used for the projector in the auditorium. I got it back, threw it in my purse, and never thought anything more about it. Today I started to transfer another presentation to the flash drive and voila, AVG pops up with "threat detected, move files to virus vault?" The message initially said: Recycler.exe. So of course I clicked yes to transfer it to the virus vault. It appears the file is in my virus vault. I then ran AVG on the flash drive and it came up with another virus, CNN, and sent it to the virus vault as well. I then ran Malwarebytes, AVG, Windows Defender, as well as BitDefender online. I tried to use Kaspersky as well, but it appears it is being updated and not available. All came up clean.
So, I am thinking the problem is taken care of. I set AVG to run on the flash drive again and it says "no infection found". But when I try to access the flash drive as usual, it says K: access denied (that is my USB drive). Hmmm, now what? So I check the dri... Read more
More replies
Relevance 21.32%
Windows 7 OS with admin rights (group policy not installed)
Hello to the forum, I would appreciate some help.
Q.
How do I stop the OS writing to an external HDD (without express permission or blocking it totally), and control overall access to it?
I understand Group Policy (gpedit.msc) does this, but it's not installed, which leaves doing something with permissions in the Security tab:
computer > properties > security >
Thanks
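Without gpedit.msc, one way to approximate this from the command line is NTFS permissions via icacls; a minimal sketch, assuming the external drive is X: and is formatted NTFS (the letter is a placeholder):

```shell
:: Deny the Everyone group write access to X:\ and everything below it
:: (OI = object inherit, CI = container inherit, W = write).
icacls X:\ /deny Everyone:(OI)(CI)W

:: Undo it later by removing the deny entry:
icacls X:\ /remove:d Everyone
```

This is per-volume rather than a policy: administrators can still take ownership, and FAT-formatted drives have no ACLs at all, so it only works on NTFS.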
More replies
Relevance 21.32%
My new motherboard came capable of booting from a USB flash drive/CD-ROM/HDD, etc.
My last one apparently also had it, but it was kinda half ***** still, so I left the floppy there.
So does the flash drive finally replace the floppy disk drive as far as features go, being accessible and bootable in the BIOS and DOS?
I got a Gigabyte GA-EP43-UD3L motherboard, and there's an option in the boot menu for a flash drive.
I had to use the floppy in the past to recover from my system not booting into Windows, and I don't want to get stuck without any access to the system if something happens.
Also, I made a customized image of my Windows disk, with the latest graphics drivers and all the rest of them, using nLite.
But I could not figure out how to make a bootable flash drive.
I just extracted the image to the flash drive and it wouldn't boot.
I went around Google for a little bit and found a few different methods of doing it.
Can anyone suggest which method is the easiest and most reliable out of them?
Answer:can we finally get rid of floppy drive? making a windows install flash drive
My mobo doesn't even have a connector for a floppy disk drive. I use a flash drive to flash my bios, but I have a usb floppy drive if I happen to need one.
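For the bootable-flash-drive part of the question: the usual diskpart recipe below works for a Vista/7-style ISO; an nLite'd XP image needs a specialized tool (such as WinSetupFromUSB) instead, because XP Setup will not boot from a plain file copy. The disk number and drive letters are placeholders, and `clean` wipes the stick, so double-check `list disk` first:

```shell
rem -- inside diskpart (run "diskpart" from an admin command prompt):
list disk
rem -- 1 is a placeholder; make absolutely sure it is the flash drive
select disk 1
clean
create partition primary
active
format fs=ntfs quick
assign
exit
rem -- then from the command prompt, with the install DVD/ISO mounted as D:
rem -- and the stick assigned E:, copy the files and write the boot sector:
xcopy D:\*.* E:\ /s /e /f
D:\boot\bootsect /nt60 E:
```

Once the stick boots on another PC, you know any remaining problem is the motherboard's boot menu rather than the stick.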
6 more replies
Relevance 21.32%
For some crazy reason (I'm sure there is a good one), Canon ZoomBrowser, which is the download software for Canon digicams, does not want anything to do with network drives.
I have bought an A75 for our service department to take photos of cars. They will probably take around 100 shots a day, and I want to spend as little trouble as possible downloading them, so I set up the auto-download and date-stamp feature in the software, which works great. The only problem is, we cannot save to a network share. Why not use a mapped drive, you say? Well, you can't even do that. So... I found a little program called "VDC" (Virtual Drive Creator) which creates a virtual drive. The ZoomBrowser software CAN see this drive. The problem is, VDC does not want to make a virtual drive out of a network share or a mapped drive. ARGH! Lame. So, after a bit of digging, I found that VDC utilizes the SUBST DOS command, which basically does what I just explained. The command does work with network drives, but when you run it, the new drive appears as a mapped drive. No good for ZoomBrowser. Basically I'm in a bit of a pickle. I can map the My Documents folder and save to it, but I'd rather not. Any suggestions?
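For reference, the SUBST usage that VDC wraps looks like this; the folder and drive letters are placeholders. As the poster found, Windows still reports the real drive type underneath, so SUBST cannot make a network share look local to an application that checks:

```shell
:: Create a virtual drive V: backed by a local folder
subst V: C:\Photos\Incoming

:: List current substitutions
subst

:: Remove the virtual drive
subst V: /D
```

A local folder target, as above, is the case where SUBST behaves like a true local drive; saving there and syncing the folder to the share afterwards is one workaround.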
More replies
Relevance 21.32%
I am one of those people who like to gradually transfer over from one technology to another when the price is right. Well the price is right especially since SATA has been out for several years and PATA is an obsolete technology.
I have a PATA drive which is formatted NTFS, not FAT32. I want to move its contents to a SATA II drive, so I want to back it up with Ghost or Acronis and restore it onto the new SATA II drive.
Will this work? The drives aren't the same size either. Does it matter if it is Acronis or Ghost?
Thanks
Answer:Can I use Ghost or Acronis to back up my PATA drive and install it on a SATA drive
If you have the drivers for the SATA controller installed on your Windows installation then there's rarely any issue cloning the PATA drive to a SATA drive, and there's no issue with cloning to larger drive either, both Acronis and Ghost support going to a smaller or bigger drive. The file system doesn't matter.
1 more replies
Relevance 21.32%
I'm planning on upgrading to a larger SSD in my T400. It originally shipped with a 1.8" 64GB Samsung SSD and I'm probably going to upgrade it to a 2.5" 7mm Samsung 840 (non-pro) SSD.
I'm not sure if the current caddy in the T400 is going to fit the new SSD, or if I'll need to purchase the 2.5" caddy. If so, does anyone know the FRU # of the caddy I'll need?
Answer:T400 SSD Upgrade Question Do I need a new drive caddy? My machine shipped with a 1.8" drive.
You would need a normal 2.5-inch HDD caddy. The one that goes into the T60 and T61 will do, and costs about 4 dollars on eBay.
3 more replies
Relevance 21.32%
I've seen a similar problem in this forum, but I can't find the identical problem or a solution.
Simply stated, I made a copy of my w2k hard drive using ghost and when I try to boot to it, I get an error to the effect that the page file is missing or too small. Safe mode doesn't work either.
Read on for more details. I have also posted this to Western Digital and am awaiting an answer from them.
I want to install my new drive as my master because I believe I am having errors with my existing drive.
I have a master drive (C: ) and a slave (F: ). Most of the data on the slave got there by running the drive to drive copy (not the install new drive) selection from data lifeguard (Western Digital utility).
Because I learned in the past that I cannot successfully boot from the copy made by data lifeguard, I used Norton Ghost (version 10.0) to copy from my old master to the new drive. Ghost has an option that says it can be used to install a new drive.
I'm pretty sure I kept the jumpers and physical drive locations straight through all of this. Here is where I am at. I was careful.
If I try to boot from the new drive as master with no slave, the boot fails. It tells me the paging file is missing or too small. I do not have this problem booting from my old master with no slave. In both cases the slave is physically disconnected. If I attempt safe mode with the new drive, I have the same problem.
If I boot from the new drive with the existing F: (created by... Read more
Answer:Can't boot to hard drive copy - no pagefile & drive letters confused
9 more replies
Relevance 21.32%
A Lenovo tech told me to rename my drive F: to TVT_RNR (Y:) in order to back up my T500 to an external hard drive. Now, all the original information is missing. But ... is the original information still on the hard drive, or erased? Can you instruct me how to retrieve the original drive F: information? A. I no longer need the Lenovo backup data; B. I would like to have my drive back ... no more TVT_RNR (Y:); C. The drive shows 87GB free of 111GB. What happened? Can you help? Thank you, Eric [email protected]
Answer:Want to change drive TVT_RNR (Y:) back to drive F and retrieve all original information
Hi - saw your posting. No doubt you have solved the problem but I had exactly the same this week. Using a Buffalo HD externally and made the mistake of making it the back up boot drive. Buffalo were extremely helpful responding within 24 hrs with how to rectify the problem. So have my original settings back again! Contact your HD supplier.
3 more replies
Relevance 21.32%
Good afternoon all. Yes, I did fill the partition... and I knew from the forum what was happening. I did redirect to the D drive for the future. I was having a multitude of problems with Vista SP1 and TmProxy, Trend Micro crap, and others, etc. I now want to use the OneKey Recovery because of my recent issues. I saw the COA2 program, but it looks like it won't work with Vista. Is there a way to move programs easily to the D drive??? I do not have CDs for some of the programs, so I don't think I can uninstall and reinstall to the other drive. Any ideas? Thanks in advance, TBI
Message Edited by TheBigIndian on 07-28-2008 12:14 PM
More replies
Relevance 21.32%
Dear community members, I'd be very thankful if somebody helped me with my problem: I've recently got an M910q tiny form factor, delivered with an M.2 SSD, which works fine. The thing is, I want to add another 2.5-inch SATA3 6Gb/s SSD drive, but I can't find the bracket and the SATA connector cable to get the 2.5" SSD installed. I did not find that part in the accessories section of Lenovo, just the kit for the M.2 form factor. Can you please help me figure out an article number or serial number for the right bracket, or just the right solution for this problem? THANKS in advance! 2cups
Answer:M910q tiny: standard configuration: M.2 SSD drive, problem: how to add a 2.5'' SATA3 6Gb/s SSD drive
Sadly, the M910q is totally redesigned inside compared to an M900 :-(
1 more replies
Relevance 21.32%
Hi. My laptop is Acer with Windows 7 home premium 64 bit OS.
It contains only a single partition, that is, the C drive only.
Recently, I accidentally converted the C drive, which contains the OS, into a dynamic disk.
After I restarted the laptop, it showed the following:
BootMgr is missing
Please press Alt, Ctrl +del to restart
Now I am not able to enter Windows. I am so worried, because my important data is kept on the hard disk, and I have not yet made any backup copy of it.
I searched this forum and learnt that I need a Windows 7 installation disk to repair it, but my problem is that the computer dealer did not give me any Windows 7 installation disk when I bought this laptop, as they said all the drivers and Windows programs are already kept on the hard disk.
Any help is much appreciated...
I tried removing the laptop hard disk, which is a 500GB Toshiba SATA hard disk. Then I used another laptop, with Windows XP Professional, with the intention of copying the files on the Toshiba hard disk.
However, after plugging the hard disk into the other laptop, under Disk Management it shows the Toshiba disk as: dynamic disk, unreadable.
So, I am having problems backing up the data on the Toshiba hard disk...
Answer:Accidentally changed hard drive with OS to dynamic drive.startup fail.
Hello Yee, welcome to Seven Forums!
Have a look at Option Four in the tutorial at the link below to see if the "Set Partition as Primary" option is available; there's also a link in the same tutorial for the info on creating a Windows recovery disk.
Partition Wizard : Use the Bootable CD
8 more replies
Relevance 21.32%
I added a 1TB Seagate Hybrid drive to my old ThinkCentre 808532U. I removed the jumper from the old IDE drive so it would be considered the secondary. I installed Windows 7 Professional 32 bit on the hybrid SATA. The old IDE had Windows XP. I want to use the old IDE drive as the secondary so I can access the data, pictures, etc. I do NOT want to boot XP. The system boots up fine with Windows 7. However, the system does not recognize the old IDE drive. I have gone into BIOS and changed the Parallel ATA to Secondary. The system still does not recognize the old IDE drive. Can someone provide recommendations? Thanks,Napp
Solved!
Go to Solution.
Answer:Added Hybrid SATA drive to ThinkCentre - does not recognize old IDE drive - Suggestions?
Have you tried putting the jumper back in? SATA doesn't have a master/slave or primary/secondary configuration.
By pulling the jumper you've set up an IDE channel that has no primary drive. Either set the IDE drive to master or to cable select if that's an option and your cable is configured that way.
Not saying this is 100% but give it a try if you haven't already.
Z.
6 more replies
Relevance 21.32%
I have two separate drives. C has windows 7 and D has Windows XPsp3. C used to be Windows 98 before I upgraded it to win7. Back then I had Norton ghost 12 in my D drive. Ghost 12 would recognize both the c and d drives.
When I upgraded the C drive to Win7, Norton Ghost 12 didn't recognize either drive anymore, so to rectify the situation I upgraded to Norton Ghost 15 on both drives.
Now Norton ghost 15 doesn't recognize either drive when I am in my XP drive but does recognize both when I am in my Win7 drive. I get the following error below.
How can I fix this:
Answer:Norton Ghost doesn't recognize any drive when in winXP drive but does in win7
I tried uninstalling and reinstalling Norton Ghost 15 using the Norton Removal Tool in my Windows xp drive. Below are the error messages I got.
1 more replies
Relevance 21.32%
Hi everyone!! Thank you so much for having me on this forum. I usually try everything possible before asking for help, and with this problem I'm really out of ideas. I have a Compaq Presario F700 laptop that I'm trying to do a clean install on. It initially had Vista, but I decided to put Win 7 over it through my USB portable drive (from inside Vista). It installed, but I think it would be best to do a clean install, because Vista had so much crap and info that I suspect is making Win 7 slow... and there is where I ran into problems...
The DVD drive inside the laptop doesn't work at all, and the USB portable drive only works inside the OS but can't boot through the BIOS. The BIOS doesn't give me an option to boot the DVD drive and do the clean install. The BIOS only has options like "USB floppy", "USB harddrive", "USB diskette", but no "USB ROM" option, and yes, I have tried every combination possible with no luck. The USB drive only works inside of Windows, so I have no idea why I can't boot with it.
I have read for many hours on this issue, but the main consensus I find is that my laptop just doesn't have the option to boot my USB drive. If this is the case, how am I going to do a clean install, aside from buying a new DVD drive for the laptop (which may not work)? Please, any ideas would be greatly appreciated.
The laptop has 3GB of RAM and a Turion 64 X2, so it shouldn't be taking me a minute to start up after installing a fresh Win 7 (through Vista). This is why I want to d... Read more
Answer:Can't install 7os. DVD drive wont work,bios doesnt support usb drive.
Are you under the impression you can't clean install over an old installation by running a Custom install from the old OS, which will overwrite C? This is pretty close to a clean install, in the sense that it isn't an inferior Upgrade install, which carries over Vista programs and settings.
However, I'd first try to boot the installer. After writing the ISO to the stick using a reliable app like UltraISO (see the tutorial "Software To Create Bootable USB Flash Drive"), did you expand the USB Hard Drive choices to see if the stick is listed there, as it often is? Did you try setting the HDD as first boot device in BIOS setup and then use the ESC key to display a boot menu, look for the flash stick there, or expand the USB HD choice there to see if USB is under that?
Did you try booting the USB DVD drive using the ESC menu, even trying the USB diskette choice to see if it will work?
If none of these work even after verifying the stick will boot successfully in another PC, then I would run the installer from the OS you have on there now, choosing C partition to do a Custom install. Afterwards your old OS will be stuffed in a C:\windows.old folder which you can save long enough to make sure you have everything then delete using Disk Cleanup.
Other tips here are the same for retail to get and keep a perfect Clean Reinstall - Factory OEM Windows 7.
5 more replies
Relevance 21.32%
I've been noticing, after re-installing Win 7, that I get random floppy drive seeks; the hard drive light flashes like crazy and then everything is fine again. I have ClamWin AV, SpywareBlaster and Spybot S&D installed, but that's it; no Windows Defender or Essentials AV. Any ideas what causes this? It's a minor annoyance, but I just want to make sure it's not damaging anything. Also, would ReadyBoost help me? I've got 2 gigs of dual-channel PC3200 RAM (or I could go with 2.5 gigs of single-channel PC2700) and a P4 at 3 GHz, until I can afford to upgrade my comp. This is my first post in quite a while, so I hope this is in the right place. Thanks.
Answer:Random floppy drive seek and hard drive light flashing
I have a fix for the floppy drive problem. Unplug it!
But er, it could easily be the AV software doing it, or possibly the Windows indexer.
To track it down you could try catching it in the act (if it's prolonged enough) by using the "Resource Monitor" found on the Performance tab of the Task Manager. Click on the "Disk" tab and wait for it to happen and see if something new shows up as doing disk IO.
Though also, disabling the AV software temporarily and see if it stops might be a first try...
I have a 2TB Western Digital My Book Essentials, it's filled with tons of old photos, movies, docs, games, and I'm scared to think of what else. The SATA to USB connector broke on it, so I took the drive apart and put it inside my computer. I opened up device manager because it wasn't showing up automatically and the 2TB is showing up as unallocated, I'm freaking out now because I'm scared the data was somehow deleted.
What do I do?
edit: I realize also this isn't directly related to Windows 8 (besides the fact it's the OS I'm on), but I don't know where else to post it. I'd also like to say that I had a really deep feeling the drive was going to break continually for about two weeks before it happened, so if you have a similar feeling about something, don't ignore it!
Answer:External hard drive shell broke, salvaged internal drive
As long as your hard disk is still spinning and is recognized by Windows, the data on it is still there; most likely just the File Allocation Table got corrupted, so you have a good chance of recovering your data. What you need is a data recovery program. I have used this one in the past:
Data Recovery Software Products - Runtime Software Products
In the mean time, leave this hard disk alone, do not initialize or format it when Windows offers to do so.
I'm trying to replace the OEM PATA CDR/DVD-ROM in my recently acquired S50, and can't seem to find a setup configuration which will allow it to boot with an LG SATA DVDR as a replacement. Cabling is not the issue--there's a SATA power connector right there at the drive cage, and the two SATA data connectors in the middle of the motherboard. With the DVDR connected, the system won't boot past the Windows logo screen. Disconnect it, and the system boots fine. I've searched the User Guide and Hardware Maintenance Guides for every SATA reference, and they only refer to a SATA hard drive as an alternative to the PATA hard drive. All references to the CDR/DVD-ROM assume it is PATA. Googling did not produce a definitive answer as to whether this is a permitted configuration (PATA/SATA mix). I'm hoping someone here can say either "No, not possible," or "Yes, I've done it--here's the setup parameters." Mike
Answer:Can S50 BIOS handle SATA DVD drive and PATA hard drive configuration?
Hi Mike "Yes, I've done it" Go to my inputs to this forum in late October for my trials and tribulations with the self same problem. My solution was to remove & refit the CMOS battery as described. I've retained my PATA hard drive & installed a SATA DVD rewriter plus I have access to Setup Utility ( ie BIOS access) whenever I need it. I hope it works for you. Call back if you need to. Regards Kemar
I'm thinking about replacing the factory CD/DVD drive that came with my W530 and lives in the stock 12.7mm Ultrabay caddy, with a 12.7mm slim BluRay burner (say something like this Panasonic UJ-265). Since the thickness matches, I believe this is possible.
So, can the factory Ultrabay caddy be removed and completely replaced with a fixed SATA BD drive like this?
If so, is there a SATA cable problem? In other words, do I need a SATA-to-SlimlineSATA adapter, both for data and power, such as this item (which is shown on the same Amazon page as the UJ-265)?
Has anybody done this? Do you have any suggestions and/or recommendations I haven't thought of?
thanks for any help or guidance you can provide.
Answer:W530: Can I replace the factory 12.7mm Ultrabay caddy and CD/DVD drive with 12.7mm BluRay drive?
The connector looks right. If I were GUESSING, I would say that it will probably work, but it won't fit into the mechanism that locks the drive in place.
Hello.
Recently bought a new Laptop, and went through all the setting up process, one of which was Windows 7 back up. At the time I just went for the partitioned drive option, but after a week, I've decided to buy an external Hard drive for my back ups instead.
My questions are:
1. Can I easily change the destination of the back up files?
2. Do I just delete everything on Drive D once I'm backing up to an external Hard Drive, or will Windows give me this option when I change to external Hard Drive?
3. The backup is done on a schedule, so what happens when the scheduled date passes? Will it ask me to plug my external hard drive in the next time I boot up?
4. Finally, I'd like to back up our Desktop too, running on Vista. Can I just partition the external Hard Drive, and keep our laptop, and Desktop backed up on the same hard drive??
Cheers, Matthew
Answer:Change back up files from D Partitioned Drive to External Hard Drive?
Hi Matthew -
You don't need to partition the HD to keep your different backups on it.
The WindowsImageBackup folder needs to keep exactly that name and sit in the root of the external drive so you can reimage from the booted DVD or Repair CD; however, you can click on it to see which PC it is for.
Backup Complete Computer - Create an Image Backup
Backup User and System Files
I don't like the way Win7 file backup works so I drag the active User folder from each computer here to external once per month, then update the backup image every few months. You may decide to go Manual after trying the automated backup, since you'll need to move the external to catch the backup anyway.
Hey Guys,
I have a netbook which I want to recycle; however, it has been used for business purposes and I want to properly format the drive. The issue I have is that it does not have a CD drive, nor does it have a working screen. I have connected the netbook to an external display and manually deleted all files from the computer, but I know this is not really 'deleting' anything.
How can I format the drive from within Windows? Unfortunately I cannot get the BIOS to display on the external display, so I can not do anything from DOS or the BIOS simply because I can't see anything!
My latest attempt has been to use Active KillDisk to create a bootable USB however this is when I ran into the issue of the Bios and DOS not displaying, it will only display once windows has been loaded.
Any ideas?
Answer:Solved: Help needed to securely format hard drive - no screen or CD drive
http://www.researchgate.net/researcher/1000629_Cornilleau-Wehrlin_N
N. Cornilleau-Wehrlin
École Polytechnique, Palaiseau, Île-de-France, France
Publications: 255 · Total impact: 377.92
• Article: Intensities and spatiotemporal variability of equatorial noise emissions observed by the Cluster spacecraft
F. Němec, O. Santolík, Z. Hrbáčková, N. Cornilleau-Wehrlin
ABSTRACT: Equatorial noise (EN) emissions are electromagnetic waves observed in the equatorial region of the inner magnetosphere at frequencies between the proton cyclotron frequency and the lower hybrid frequency. We present the analysis of 2229 EN events identified in the Spatio-Temporal Analysis of Field Fluctuations (STAFF) experiment data of the Cluster spacecraft during the years 2001-2010. EN emissions are distinguished using the polarization analysis, and their intensity is determined based on the evaluation of the Poynting flux rather than on the evaluation of only the electric/magnetic field intensity. The intensity of EN events is analyzed as a function of the frequency, the position of the spacecraft inside/outside the plasmasphere, magnetic local time, and the geomagnetic activity. The emissions have higher frequencies and are more intense in the plasma trough than in the plasmasphere. EN events observed in the plasma trough are most intense close to the local noon, while EN events observed in the plasmasphere are nearly independent on MLT. The intensity of EN events is enhanced during disturbed periods, both inside the plasmasphere and in the plasma trough. Observations of the same events by several Cluster spacecraft allow us to estimate their spatiotemporal variability. EN emissions observed in the plasmasphere do not change on the analyzed spatial scales (∆MLT< 0.2 h, ∆r < 0.2RE), but they change significantly on time scales of about an hour. The same appears to be the case also for EN events observed in the plasma trough, although the plasma trough dependencies are less clear.
Journal of Geophysical Research: Space Physics. 02/2015;
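The EN frequency band described in the abstract above, between the proton cyclotron frequency and the lower hybrid frequency, can be estimated numerically. A minimal Python sketch, using an assumed (illustrative) equatorial field strength of 300 nT and the high-density approximation f_LH ≈ √(f_cp · f_ce):

```python
import math

# Physical constants (SI)
Q_E = 1.602176634e-19   # elementary charge [C]
M_P = 1.67262192e-27    # proton mass [kg]
M_E = 9.1093837e-31     # electron mass [kg]

def proton_cyclotron_freq(b_tesla):
    """Proton cyclotron frequency f_cp = qB / (2*pi*m_p), in Hz."""
    return Q_E * b_tesla / (2 * math.pi * M_P)

def electron_cyclotron_freq(b_tesla):
    """Electron cyclotron frequency f_ce = qB / (2*pi*m_e), in Hz."""
    return Q_E * b_tesla / (2 * math.pi * M_E)

def lower_hybrid_freq(b_tesla):
    """High-density approximation: f_LH ~ sqrt(f_cp * f_ce)."""
    return math.sqrt(proton_cyclotron_freq(b_tesla) *
                     electron_cyclotron_freq(b_tesla))

# Assumed equatorial field near L ~ 4 (illustrative value only)
B = 300e-9  # tesla
print(f"f_cp ~ {proton_cyclotron_freq(B):.1f} Hz")
print(f"f_LH ~ {lower_hybrid_freq(B):.0f} Hz")
```

For this assumed field, f_cp comes out at a few Hz and f_LH near 200 Hz, which is consistent with the ULF/ELF band in which EN emissions are reported.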
• Article: Whistler mode waves and the electron heat flux in the solar wind: Cluster observations
ABSTRACT: The nature of the magnetic field fluctuations in the solar wind between the ion and electron scales is still under debate. Using the Cluster/STAFF instrument, we make a survey of the power spectral density and of the polarization of these fluctuations at frequencies $f\in[1,400]$ Hz, during five years (2001-2005), when Cluster was in the free solar wind. In $\sim 10\%$ of the selected data, we observe narrow-band, right-handed, circularly polarized fluctuations, with wave vectors quasi-parallel to the mean magnetic field, superimposed on the spectrum of the permanent background turbulence. We interpret these coherent fluctuations as whistler mode waves. The life time of these waves varies between a few seconds and several hours. Here we present, for the first time, an analysis of long-lived whistler waves, i.e. lasting more than five minutes. We find several necessary (but not sufficient) conditions for the observation of whistler waves, mainly a low level of the background turbulence, a slow wind, a relatively large electron heat flux and a low electron collision frequency. When the electron parallel beta factor $\beta_{e\parallel}$ is larger than 3, the whistler waves are seen along the heat flux threshold of the whistler heat flux instability. The presence of such whistler waves confirms that the whistler heat flux instability contributes to the regulation of the solar wind heat flux, at least for $\beta_{e\parallel} \ge$ 3, in the slow wind, at 1 AU.
The Astrophysical Journal 10/2014; 796(1). · 6.28 Impact Factor
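The threshold parameter β_e∥ used in the abstract above is the ratio of parallel electron pressure to magnetic pressure. A hedged Python sketch, with assumed slow-solar-wind values at 1 AU (n_e = 10 cm⁻³, T_e∥ = 10 eV, B = 5 nT; all illustrative, not taken from the paper):

```python
MU_0 = 1.25663706212e-6   # vacuum permeability [H/m]
Q_E = 1.602176634e-19     # elementary charge [C]; also 1 eV in joules

def electron_parallel_beta(n_e_cm3, t_par_ev, b_nt):
    """beta_e_par = n_e * k_B * T_e_par / (B^2 / (2*mu_0))."""
    n_e = n_e_cm3 * 1e6                       # cm^-3 -> m^-3
    p_par = n_e * t_par_ev * Q_E              # parallel electron pressure [Pa]
    p_mag = (b_nt * 1e-9) ** 2 / (2 * MU_0)   # magnetic pressure [Pa]
    return p_par / p_mag

# Illustrative slow-wind values at 1 AU
beta = electron_parallel_beta(n_e_cm3=10.0, t_par_ev=10.0, b_nt=5.0)
print(f"beta_e_par ~ {beta:.2f}")
```

With these assumed values β_e∥ lands between 1 and 2, so only a modest density increase or field decrease pushes it past the β_e∥ ≳ 3 regime where the abstract reports whistler waves along the heat flux instability threshold.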
• Article: Azimuthal directions of equatorial noise propagation determined using 10 years of data from the Cluster spacecraft
ABSTRACT: [1] Equatorial noise (EN) emissions are electromagnetic waves at frequencies between the proton cyclotron frequency and the lower hybrid frequency routinely observed within a few degrees of the geomagnetic equator at radial distances from about 2 to 6 RE. They propagate in the extraordinary (fast magnetosonic) mode nearly perpendicularly to the ambient magnetic field. We conduct a systematic analysis of azimuthal directions of wave propagation, using all available Cluster data from 2001 to 2010. Altogether, combined measurements of the Wide-Band Data and Spectrum Analyzer of the Spatio-Temporal Analysis of Field Fluctuations instruments allowed us to determine azimuthal angle of wave propagation for more than 100 EN events. It is found that the observed propagation pattern is mostly related to the plasmapause location. While principally isotropic azimuthal directions of EN propagation were detected inside the plasmasphere, wave propagation in the plasma trough was predominantly found directed to the West or East, perpendicular to the radial direction. The observed propagation pattern can be explained using a simple propagation analysis, assuming that the emissions are generated close to the plasmapause.
Journal of Geophysical Research: Space Physics 11/2013; 118(11). · 3.44 Impact Factor
• Article: Cluster observations of whistler waves correlated with ion-scale magnetic structures during the 17 August 2003 substorm event
ABSTRACT: We provide evidence of the simultaneous occurrence of large-amplitude, quasi-parallel whistler mode waves and ion-scale magnetic structures, which have been observed by the Cluster spacecraft in the plasma sheet at 17 Earth radii, during a substorm event. It is shown that the magnetic structures are characterized by both a magnetic field strength minimum and a density hump and that they propagate in a direction quasi-perpendicular to the average magnetic field. The observed whistler mode waves are efficiently ducted by the inhomogeneity associated with such ion-scale magnetic structures. The large amplitude of the confined whistler waves suggests that electron precipitations could be enhanced locally via strong pitch angle scattering. Furthermore, electron distribution functions indicate that a strong parallel heating of electrons occurs within these ion-scale structures. This study provides new insights on the possible multiscale coupling of plasma dynamics during the substorm expansion, on the basis of the whistler mode wave trapping by coherent ion-scale structures.
Journal of Geophysical Research Atmospheres 10/2013; 118(10):6072-6089. · 3.44 Impact Factor
• Article: Quasiperiodic emissions observed by the Cluster spacecraft and their association with ULF magnetic pulsations
F. Němec, O. Santolík, J. S. Pickett, M. Parrot, N. Cornilleau-Wehrlin
ABSTRACT: Quasiperiodic (QP) emissions are electromagnetic waves at frequencies of about 0.5-4 kHz characterized by a periodic time modulation of the wave intensity, with a typical modulation period on the order of minutes. We present results of a survey of QP emissions observed by the Wide-Band Data (WBD) instruments on board the Cluster spacecraft. All WBD data measured in the appropriate frequency range during the first 10 years of operation (2001-2010) at radial distances lower than 10 RE were visually inspected for the presence of QP emissions, resulting in 21 positively identified events. These are systematically analyzed, and their frequency ranges and modulation periods are determined. Moreover, a detailed wave analysis has been done for the events that were strong enough to be seen in low-resolution Spatio-Temporal Analysis of Field Fluctuations-Spectrum Analyzer data. Wave vectors are found to be nearly field-aligned in the equatorial region, but they become oblique at larger geomagnetic latitudes. This is consistent with a hypothesis of unducted propagation. ULF magnetic field pulsations were detected at the same time as QP emissions in 4 out of the 21 events. They were polarized in the plane perpendicular to the ambient magnetic field, and their frequencies roughly corresponded to the modulation period of the QP events.
Journal of Geophysical Research Atmospheres 07/2013; 118(7):4210-4220. · 3.44 Impact Factor
• Article: Analysis of amplitudes of equatorial noise emissions and their variation with L, MLT and magnetic activity
Zuzana Hrbáčková, Ondřej Santolík, Nicole Cornilleau-Wehrlin
ABSTRACT: Wave-particle interactions are an important mechanism of energy exchange in the outer Van Allen radiation belt. These interactions can cause an increase or decrease of relativistic electron flux. The equatorial noise (EN) emissions (also called fast magnetosonic waves) are electromagnetic waves which could be effective in producing MeV electrons. EN emissions propagate predominantly within 10° of the geomagnetic equator at L shells from 1 to 10. Their frequency range is between the local proton cyclotron frequency and the lower hybrid resonance. We use a data set measured by the STAFF-SA instruments onboard four Cluster spacecraft from January 2001 to December 2010. We have compiled the list of the time intervals of the observed EN emissions during the investigated time period. For each interval we have computed an intensity profile of the wave magnetic field as a function of frequency. The frequency band is then determined by an automatic procedure and the measured power spectral densities are reliably transformed into wave amplitudes. The results are shown as a function of the McIlwain's parameter, magnetic local time and magnetic activity - Kp and Dst indexes. This work has received EU support through the FP7-Space grant agreement n 284520 for the MAARBLE collaborative research project.
04/2013;
• Article: Directions of equatorial noise propagation determined using Cluster and DEMETER spacecraft
ABSTRACT: Equatorial noise emissions are electromagnetic waves at frequencies between the proton cyclotron frequency and the lower hybrid frequency routinely observed within a few degrees of the geomagnetic equator at radial distances from about 2 to 6 Re. High resolution data reveal that the emissions are formed by a system of spectral lines, being generated by instabilities of proton distribution functions at harmonics of the proton cyclotron frequency in the source region. The waves propagate in the fast magnetosonic mode nearly perpendicularly to the ambient magnetic field, i.e. the corresponding magnetic field fluctuations are almost linearly polarized along the ambient magnetic field and the corresponding electric field fluctuations are elliptically polarized in the equatorial plane, with the major polarization axis having the same direction as wave and Poynting vectors. We conduct a systematic analysis of azimuthal propagation of equatorial noise. Combined WBD and STAFF-SA measurements performed on the Cluster spacecraft are used to determine not only the azimuthal angle of the wave vector direction, but also to estimate the corresponding beaming angle. It is found that the beaming angle is generally rather large, i.e. the detected waves come from a significant range of directions, and a traditionally used approximation of a single plane wave fails. The obtained results are complemented by a raytracing analysis in order to get a comprehensive picture of equatorial noise propagation in the inner magnetosphere. Finally, high resolution multi-component measurements performed by the low-altitude DEMETER spacecraft are used to demonstrate that equatorial noise emissions can reach altitudes as low as 660 km, and that the observed propagation properties are in agreement with the overall propagation picture.
04/2013;
• Article: Eleven years of Cluster observations of whistler-mode chorus
ABSTRACT: Electromagnetic emissions of whistler-mode chorus carry enough power to increase electron fluxes in the outer Van Allen radiation belt at time scales on the order of one day. However, the ability of these waves to efficiently interact with relativistic electrons is controlled by the wave propagation directions and time-frequency structure. Eleven years of measurements of the STAFF-SA and WBD instruments onboard the Cluster spacecraft are systematically analyzed in order to determine the probability density functions of propagation directions of chorus as a function of geomagnetic latitude, magnetic local time, L* parameter, and frequency. A large database of banded whistler-mode emissions and time-frequency structured chorus has been used for this analysis. This work has received EU support through the FP7-Space grant agreement no 284520 for the MAARBLE collaborative research project.
04/2013;
• Article: Characteristics of banded chorus-like emission measured by the TC-1 Double Star spacecraft
Eva Macúšová, Ondřej Santolík, Nicole Cornilleau-Wehrlin, Keith Yearby
ABSTRACT: We present a study of the spatio-temporal characteristics of banded whistler-mode emissions. It covers the full operational period of the TC-1 spacecraft, between January 2004 and the end of September 2007. The analyzed data set has been visually selected from the onboard-analyzed time-frequency spectrograms of magnetic field fluctuations below 4 kHz measured by the STAFF/DWP wave instrument situated onboard the TC-1 spacecraft with a low inclination elliptical equatorial orbit. This orbit covers magnetic latitudes between −39° and +39°. The entire data set has been collected between L=2 and L=12. Our results show that almost all intense emissions (above a threshold of 10⁻⁵ nT² Hz⁻¹) occur at L-shells from 6 to 12 and in the MLT sector from 2 to 11 hours. This is in a good agreement with previous observations. We determine the bandwidth of the observed emission by an automatic procedure based on the measured spectra. This allows us to reliably calculate the integral amplitudes of the measured signals. The majority of the largest amplitudes of chorus-like emissions were found closer to the Earth. The other result is that the upper band chorus-like emissions (above one half of the electron cyclotron frequency) are much less intense than the lower band chorus-like emissions (below one half of the electron cyclotron frequency) and are usually observed closer to the Earth than the lower band. This work has received EU support through the FP7-Space grant agreement no. 284520 for the MAARBLE collaborative research project.
04/2013;
• Article: Scaling of the electron dissipation range of solar wind turbulence
ABSTRACT: Electron scale solar wind turbulence has attracted great interest in recent years. Clear evidence has been given from the Cluster data that turbulence is not fully dissipated near the proton scale but continues cascading down to the electron scales. However, the scaling of the energy spectra as well as the nature of the plasma modes involved at those small scales are still not fully determined. Here we survey 10 years of the Cluster search-coil magnetometer (SCM) waveforms measured in the solar wind and perform a statistical study of the magnetic energy spectra in the frequency range [$1, 180$] Hz. We show that a large fraction of the spectra exhibit clear breakpoints near the electron gyroscale $\rho_e$, followed by steeper power-law like spectra. We show that the scaling below the electron breakpoint cannot be determined unambiguously due to instrumental limitations that will be discussed in detail. We compare our results to recent ones reported in other studies and discuss their implication on the physical mechanisms and the theoretical modeling of energy dissipation in the SW.
The Astrophysical Journal 03/2013; 777(1). · 6.28 Impact Factor
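The electron gyroscale ρ_e at which the abstract above locates the spectral breakpoint can be mapped to a spacecraft-frame frequency via the Taylor hypothesis, f ≈ V_sw/(2πρ_e). A Python sketch with assumed solar-wind values (T_e = 10 eV, B = 5 nT, V_sw = 400 km/s; illustrative only, not from the paper):

```python
import math

Q_E = 1.602176634e-19  # elementary charge [C]
M_E = 9.1093837e-31    # electron mass [kg]

def electron_gyroradius(t_e_ev, b_nt):
    """Thermal electron gyroradius rho_e = v_th / omega_ce [m],
    with v_th = sqrt(k_B T_e / m_e)."""
    v_th = math.sqrt(t_e_ev * Q_E / M_E)
    omega_ce = Q_E * (b_nt * 1e-9) / M_E
    return v_th / omega_ce

def taylor_frequency(v_sw_kms, scale_m):
    """Spacecraft-frame frequency of a structure of size scale_m
    advected past the spacecraft at v_sw: f = V / (2*pi*l)."""
    return v_sw_kms * 1e3 / (2 * math.pi * scale_m)

rho_e = electron_gyroradius(t_e_ev=10.0, b_nt=5.0)
f_break = taylor_frequency(400.0, rho_e)
print(f"rho_e ~ {rho_e:.0f} m, breakpoint near {f_break:.0f} Hz")
```

For these assumed values ρ_e is of order 1-2 km and the implied breakpoint falls at a few tens of Hz, i.e. inside the [1, 180] Hz SCM range surveyed in the paper.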
• Article: Conjugate observations of quasi-periodic emissions by Cluster and DEMETER spacecraft
ABSTRACT: Quasi‐periodic (QP) emissions are electromagnetic emissions at frequencies of about 0.5–4 kHz that are characterized by a periodic time modulation of the wave intensity. Typical periods of this modulation are on the order of minutes. We present a case study of a large‐scale long‐lasting QP event observed simultaneously on board the DEMETER (Detection of Electro‐Magnetic Emissions Transmitted from Earthquake Regions) and the Cluster spacecraft. The measurements by the Wide‐Band Data instrument on board the Cluster spacecraft enabled us to obtain high‐resolution frequency‐time spectrograms of the event close to the equatorial region over a large range of radial distances, while the measurements by the STAFF‐SA instrument allowed us to perform a detailed wave analysis. Conjugate observations by the DEMETER spacecraft have been used to estimate the spatial and temporal extent of the emissions. The analyzed QP event lasted as long as 5 h and it spanned over the L‐shells from about 1.5 to 5.5. Simultaneous observations of the same event by DEMETER and Cluster show that the same QP modulation of the wave intensity is observed at the same time at very different locations in the inner magnetosphere. ULF magnetic field fluctuations with a period roughly comparable to, but somewhat larger than the period of the QP modulation were detected by the fluxgate magnetometers instrument on board the Cluster spacecraft near the equatorial region, suggesting these are likely to be related to the QP generation. Results of a detailed wave analysis show that the QP emissions detected by Cluster propagate unducted, with oblique wave normal angles at higher geomagnetic latitudes.
Journal of Geophysical Research Atmospheres 01/2013; 118(1):198-208. · 3.44 Impact Factor
• Article: CLUSTER STAFF search coils magnetometer calibration – comparisons with FGM
Geoscientific Instrumentation, Methods and Data Systems Discussions. 01/2013; 3(2):679-751.
• Conference Paper: LSS/NenuFAR: The LOFAR Super Station project in Nançay
ABSTRACT: We summarize the outcome of the scientific and technical study conducted in the past 3 years for the definition and prototyping of a LOFAR Super Station (LSS) in Nançay. We first present the LSS concept, then the steps addressed by the design study and the conclusions reached. We give an overview of the science case for the LSS, with special emphasis on the interest of a dedicated backend for standalone use. We compare the expected LSS characteristics to those of large low-frequency radio instruments, existing or in project. The main advantage of the LSS in standalone mode will be its very high instantaneous sensitivity, enabling or significantly improving a broad range of scientific studies. It will be a SKA precursor for the French community, both scientific and technical.
SF2A-2012: Proceedings of the Annual meeting of the French Society of Astronomy and Astrophysics. Eds.: S. Boissier, P. de Laverny, N. Nardetto, R. Samadi, D. Valls-Gabaud and H. Wozniak, pp.687-694; 12/2012
• Article: Coupling between whistler waves and ion-scale solitary waves: cluster measurements in the magnetotail during a substorm.
ABSTRACT: We present a new model of self-consistent coupling between low frequency, ion-scale coherent structures with high frequency whistler waves in order to interpret Cluster data. The idea relies on the possibility of trapping whistler waves by inhomogeneous external fields where they can be spatially confined and propagate for times much longer than their characteristic electronic time scale. Here we take the example of a slow magnetosonic soliton acting as a wave guide in analogy with the ducting properties of an inhomogeneous plasma. The soliton is characterized by a magnetic dip and density hump that traps and advects high frequency waves over many ion times. The model represents a new possible way of explaining space measurements often detecting the presence of whistler waves in correspondence to magnetic depressions and density humps. This approach, here given by means of slow solitons, but more general than that, is alternative to the standard approach of considering whistler wave packets as associated with nonpropagating magnetic holes resulting from a mirror-type instability.
Physical Review Letters 10/2012; 109(15):155005. · 7.73 Impact Factor
• Article: Systematic propagation analysis of whistler-mode waves in the inner magnetosphere
ABSTRACT: Acceleration and dynamics of energetic electrons in the outer Van Allen radiation belt can be influenced by cross-energy coupling among the particle populations in the radiation belts, ring current and plasmasphere. Waves of different frequencies have been shown to play a significant role in these interactions. We analyze more than 10 years of measurements of the four Cluster spacecraft to investigate propagation properties of whistler-mode waves in the crucial regions of the Earth's magnetosphere. We use this unprecedented database to determine the distribution of the wave energy density in the space of the wave vector directions, which is a crucial parameter for modeling of both the wave-particle interactions and wave propagation in the inner magnetosphere. We show implications for radiation belt studies and upcoming inner magnetosphere spacecraft missions, and we also show similarities of the observed whistler-mode waves with results from the magnetosphere of Saturn collected by the Cassini mission.
07/2012;
• Article: Occurrence rate of magnetosonic equatorial noise emissions as a function of the McIlwain's parameter
Z. Hrbackova, O. Santolik, F. Nemec, N. Cornilleau-Wehrlin
ABSTRACT: We report results of a statistical analysis of equatorial noise (EN) emissions based on the data set collected by the four Cluster spacecraft between January 2001 and December 2010. We have investigated a large range of the McIlwain's parameter from L≈1 to L≈12 thanks to the change of orbital parameters of the Cluster mission. We have processed data from the STAFF-SA instruments which analyze measurements of electric and magnetic field fluctuations onboard and provide us with hermitian spectral matrices. We have used linear polarization of magnetic field fluctuations as a selection criterion. Propagation in the vicinity of the geomagnetic equator has been used as an additional criterion for recognition of EN. We have identified more than 2000 events during the investigated time period. We demonstrate that EN can occur at all the analyzed L-shells. However, the occurrence rate at L-shells below 2.5 and above 7.0 is very low. We show that EN occurs in the plasmasphere as well as outside of the plasmasphere but with a lower occurrence rate.
04/2012;
• Article: Source region of the dayside whistler-mode chorus
ABSTRACT: Intense whistler-mode waves can be generated by cyclotron interactions with anisotropic electrons at energies between a few and tens of keV. It has been shown that whistler-mode waves propagating in the Earth's magnetosphere can influence relativistic electrons in the outer Van Allen radiation belt. These electromagnetic wave emissions are therefore receiving increased attention for their possible role in coupling electron populations at lower energies to the electron radiation belt. Whistler-mode chorus emissions are known for their predominant occurrence in the dawnside and dayside magnetosphere. While it is generally accepted that dawnside chorus is excited by injected anisotropic plasma sheet electrons, the details of this process are still debated. Especially, possible mechanisms describing the origin of the dayside chorus are a subject of active research, including the role of the plasma density variations, and the role of a particular dayside configuration of the compressed Earth's magnetic field. We use data collected by the Cluster mission during the last few years, when the orbit of the Cluster spacecraft reached to larger radial distances from the Earth in the dayside low-latitude region. We analyze multipoint measurements of the WBD and STAFF-SA instruments. We investigate propagation and spectral properties of the observed whistler-mode waves. We concentrate our analysis on the properties of the chorus source and we show that the dayside magnetic field topology can lead to a displacement of the source region from the dipole equator to higher latitudes.
04/2012;
• Article: Variability of ULF wave power at the magnetopause: a study at low latitude with Cluster data
ABSTRACT: Strong ULF wave activity has been observed at magnetopause crossings since a long time. Those turbulent-like waves are possible contributors to particle penetration from the Solar Wind to the Magnetosphere through the magnetopause. Statistical studies have been performed to understand under which conditions the ULF wave power is the most intense and thus the waves can be the most efficient for particle transport from one region to the other. Clearly the solar wind pressure organizes the data, the stronger the pressure, the higher the ULF power (Attié et al 2008). Double STAR-Cluster comparison has shown that ULF wave power is stronger at low latitude than at high latitude (Cornilleau-Wehrlin et al, 2008). The different studies performed have not, up to now, shown a stronger power in the vicinity of local noon. Nevertheless under identical activity conditions, the variability of this power, even at a given location in latitude and local time is very high. The present work intends at understanding this variability by means of the multi spacecraft mission Cluster. The data used are from spring 2008, while Cluster was crossing the magnetopause at low latitude, in particularly quite Solar Wind conditions. The first region of interest of this study is the sub-solar point vicinity where the long wavelength surface wave effects are most unlikely.
04/2012;
• Article: Propagation of EMIC triggered emissions toward the magnetic equatorial plane
ABSTRACT: EMIC triggered emissions are observed close to the equatorial plane of the magnetosphere at locations where EMIC waves are commonly observed: close to the plasmapause region and in the dayside magnetosphere close to the magnetopause. Their overall characteristics (frequency with time dispersion, generation mechanism) make those waves the EMIC analogue of rising frequency whistler-mode chorus emissions. In our observations the Poynting flux of these emissions is usually clearly arriving from the equatorial region direction, especially when observations take place at more than 5 degrees of magnetic latitude. Simulations have also confirmed that the conditions of generation by interaction with energetic ions are at a maximum at the magnetic equator (lowest value of the background magnetic field along the field line). However in the Cluster case study presented here the Poynting flux of EMIC triggered emissions is propagating toward the equatorial region. The large angle between the wave vector and the background magnetic field is also unusual for this kind of emission. The rising tone starts just above half of the He+ gyrofrequency (Fhe+) and it disappears close to Fhe+. At the time of detection, the spacecraft magnetic latitude is larger than 10 degrees and L shell is about 4. The propagation sense of the emissions has been established using two independent methods: 1) sense of the parallel component of the Poynting flux for a single spacecraft and 2) timing of the emission detections at each of the four Cluster spacecraft which were in a relatively close configuration. We propose here to discuss this unexpected result considering a reflection of this emission at higher latitude.
AGU Fall Meeting Abstracts. 12/2011;
• Article: Observations of whistler-mode chorus in a large range of radial distances
ABSTRACT: Whistler-mode chorus emissions are known for their capacity to interact with energetic electrons. We use data collected by the Cluster mission after 2005, when the orbit of the four Cluster spacecraft changed, thus facilitating the analysis of chorus in a large range of different radial distances from the Earth. We concentrate our analysis on the equatorial source region of chorus. We use multipoint measurements of the WBD and STAFF-SA instruments to characterize propagation and spectral properties of the observed waves. We show that intense whistler-mode emissions are found at large radial distances up to the dayside magnetopause. These emissions either have the form of hiss or they contain the typical structure of chorus wave packets. This result is supported by case studies as well as by statistical results, using the unprecedented database of Cluster measurements.
AGU Fall Meeting Abstracts. 12/2011;
Publication Stats
2k Citations 377.92 Total Impact Points
Institutions
• École Polytechnique
Palaiseau, Île-de-France, France
• Laboratory of Plasma Physics
Palaiseau, Île-de-France, France
• La Station de Radioastronomie de Nançay
Orléans, Centre, France
• French National Centre for Scientific Research
• Laboratoire de physique et chimie de l'environnement et de l'Espace (LPC2E)
Lutetia Parisorum, Île-de-France, France
• University of Oslo
• Department of Physics
Kristiania (historical), Oslo, Norway
• Charles University in Prague
• Faculty of Mathematics and Physics
Praha, Praha, Czech Republic
• Institut Pierre Simon Laplace
Lutetia Parisorum, Île-de-France, France
• Swedish Institute of Space Physics
Kiruna, Norrbotten, Sweden
• Université de Versailles Saint-Quentin
Versailles, Île-de-France, France
• Observatoire de Paris
Lutetia Parisorum, Île-de-France, France
|
http://math.stackexchange.com/users/56880/orest-xherija?tab=activity&sort=revisions
|
# Orest Xherija
reputation: 418
website: chicago.academia.edu/…
location: Chicago, IL
age: 23
member for: 2 years
last seen: Oct 7 '14 at 22:45
profile views: 246
I have a B.A. in Mathematics from the University of Chicago. I am starting a Ph.D. in Linguistics in September 2014 at the same university.
# 12 Revisions
Jul 4: revised "A step in the proof of the Riemann Mapping Theorem" (added 273 characters in body)
Mar 26: revised "Imposing the topology of open rays in $\Bbb R$" (deleted 4 characters in body)
Mar 25: revised "Imposing the topology of open rays in $\Bbb R$" (added 88 characters in body)
Mar 16: revised "Imposing the topology of open rays in $\Bbb R$" (added 69 characters in body)
Feb 27: revised "Mathematical preparation for postgraduate studies in Linguistics" (added 501 characters in body)
Feb 24: revised "Definite Integral with a discontinuty" (spelling corrected)
Feb 22: revised "Suppose $f: M \to M$ is a contraction, but $M$ is not necessarily complete" (spelling corrected)
Feb 22: revised "Differential Equation : $f '' = f '$" (added 134 characters in body)
Feb 21: revised "Differential Equation : $f '' = f '$" (added 164 characters in body)
Feb 21: revised "Mathematical preparation for postgraduate studies in Linguistics" (added 244 characters in body)
Feb 20: revised "Is $\Bbb Q$ countable or uncountable?" (for brevity I changed the word rationals to $\Bbb Q$)
Feb 19: revised "Mathematical preparation for postgraduate studies in Linguistics" (deleted 5 characters in body)
|
https://stats.libretexts.org/Bookshelves/Introductory_Statistics/Book%3A_Business_Statistics_(OpenStax)/10%3A_Hypothesis_Testing_with_Two_Samples/Chapter_10_Key_Terms
|
a measure of effect size based on the differences between two means. If $$d$$ is between 0 and 0.2, the effect is small. If $$d$$ is around 0.5, the effect is medium, and if $$d$$ approaches 0.8, it is a large effect.
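The rule of thumb above can be checked numerically. A minimal sketch of Cohen's $$d$$ using a pooled standard deviation (the function name and sample data are ours, for illustration only):

```python
import math

def cohens_d(group1, group2):
    """Cohen's d: difference of means divided by the pooled standard deviation."""
    n1, n2 = len(group1), len(group2)
    m1 = sum(group1) / n1
    m2 = sum(group2) / n2
    # Sample variances (denominator n - 1).
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

d = cohens_d([2, 4, 6], [1, 2, 3])  # ≈ 1.26: a large effect on the scale above
```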
|
https://arxiv.org/abs/1702.01767
|
hep-ph
# Title: A resolution of the inclusive flavor-breaking $\tau$ $|V_{us}|$ puzzle
Abstract: We revisit the puzzle of $|V_{us}|$ values obtained from the conventional implementation of hadronic-$\tau$-decay-based flavor-breaking finite-energy sum rules lying $>3 \sigma$ below the expectations of three-family unitarity. Significant unphysical dependences of $|V_{us}|$ on the choice of weight, w, and upper limit, $s_0$, of the experimental spectral integrals entering the analysis are confirmed, and a breakdown of assumptions made in estimating higher dimension, $D>4$, OPE contributions is identified as the main source of these problems. A combination of continuum and lattice results is shown to suggest a new implementation of the flavor-breaking sum rule approach in which not only $|V_{us}|$, but also $D>4$ effective condensates, are fit to data. Lattice results are also used to clarify how to reliably treat the slowly converging D=2 OPE series. The new sum rule implementation is shown to cure the problems of the unphysical w- and $s_0$-dependence of $|V_{us}|$ and to produce results $\sim 0.0020$ higher than those of the conventional implementation. With B-factory input, and using, in addition, dispersively constrained results for the $K\pi$ branching fractions, we find $\vert V_{us}\vert =0.2231(27)_{exp}(4)_{th}$, in excellent agreement with the result from $K_{\ell 3}$, and compatible within errors with the expectations of three-family unitarity, thus resolving the long-standing inclusive $\tau$ $|V_{us}|$ puzzle.
Comments: 15 pages, 4 figures. Updated treatment of strange hadronic tau decay data; expanded discussion of the breakdown of the conventional implementation approach. Final version to appear in Physics Letters B
Subjects: High Energy Physics - Phenomenology (hep-ph); High Energy Physics - Experiment (hep-ex); High Energy Physics - Lattice (hep-lat)
DOI: 10.1016/j.physletb.2018.03.074
Report number: ADP-17-07/T1013, YUPP-I/E-KM-17-02-2
Cite as: arXiv:1702.01767 [hep-ph] (or arXiv:1702.01767v2 [hep-ph] for this version)
## Submission history
From: Kim Maltman [view email]
[v1] Mon, 6 Feb 2017 19:08:43 UTC (34 KB)
[v2] Wed, 28 Mar 2018 13:44:17 UTC (58 KB)
|
http://crypto.stackexchange.com/users/550/sxd
|
# sxd
reputation: 1
member for: 3 years, 7 months
last seen: Nov 30 '11 at 16:13
profile views: 5
This user has not answered any questions
# 0 Questions
This user has not asked any questions
# 0 Tags
This user has not participated in any tags
# 10 Accounts
Mathematics: 2,516 rep
Area 51: 151 rep
TeX - LaTeX: 135 rep
English Language & Usage: 128 rep
Stack Overflow: 101 rep
|
https://www.varsitytutors.com/advanced_geometry-help/how-to-find-the-volume-of-a-tetrahedron
|
# Advanced Geometry : How to find the volume of a tetrahedron
## Example Questions
### Example Question #31 : Tetrahedrons
What is the volume of the following tetrahedron? Assume the figure is a regular tetrahedron.
Explanation:
A regular tetrahedron is composed of four equilateral triangles. The formula for the volume of a regular tetrahedron is $V = \frac{s^3}{6\sqrt{2}}$, where $s$ represents the length of the side.
Plugging in our values we get:
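The numeric values in these examples were lost with the page's images, but the formula itself is easy to check. A short sketch (the helper name is ours), assuming the standard regular-tetrahedron volume $V = \frac{s^3}{6\sqrt{2}}$:

```python
import math

def regular_tetrahedron_volume(s):
    """Volume of a regular tetrahedron with edge length s: V = s^3 / (6 * sqrt(2))."""
    return s ** 3 / (6 * math.sqrt(2))

print(round(regular_tetrahedron_volume(1), 6))  # 0.117851
print(round(regular_tetrahedron_volume(2), 6))  # 0.942809
```

Note that doubling the edge multiplies the volume by $2^3 = 8$, as the two printed values show.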
### Example Question #1 : How To Find The Volume Of A Tetrahedron
Find the volume of a tetrahedron with an edge of .
Explanation:
Write the formula for the volume of a tetrahedron.
Substitute in the length of the edge provided in the problem.
Rationalize the denominator.
### Example Question #1 : How To Find The Volume Of A Tetrahedron
Find the volume of a tetrahedron with an edge of .
Explanation:
Write the formula for the volume of a tetrahedron.
Substitute in the length of the edge provided in the problem:
Cancel out the in the denominator with one in the numerator:
A square root is being raised to the power of two in the numerator; these two operations cancel each other out. After canceling those operations, reduce the remaining fraction to arrive at the correct answer:
### Example Question #4 : How To Find The Volume Of A Tetrahedron
Find the volume of a tetrahedron with an edge of .
Explanation:
Write the formula for finding the volume of a tetrahedron.
Substitute in the edge length provided in the problem.
Cancel out the in the denominator with part of the in the numerator:
Expand, rationalize the denominator, and reduce to arrive at the correct answer:
### Example Question #2 : How To Find The Volume Of A Tetrahedron
Find the volume of a tetrahedron with an edge of .
Explanation:
Write the formula the volume of a tetrahedron.
Substitute the edge length provided in the equation into the formula.
Cancel out the denominator with part of the numerator and solve the remaining part of the numerator to arrive at the correct answer.
### Example Question #2 : How To Find The Volume Of A Tetrahedron
Find the volume of a tetrahedron with an edge of .
Explanation:
Write the formula the volume of a tetrahedron and substitute in the provided edge length.
Rationalize the denominator to arrive at the correct answer.
### Example Question #81 : Solid Geometry
Find the volume of the regular tetrahedron with side length .
Explanation:
The formula for the volume of a regular tetrahedron is $V = \frac{s^3}{6\sqrt{2}}$, where $s$ is the length of a side. Using this formula and the given values, we get:
### Example Question #44 : Tetrahedrons
What is the volume of a regular tetrahedron with edges of ?
Explanation:
The volume of a regular tetrahedron is found with the formula $V = \frac{s^3}{6\sqrt{2}}$, where $s$ is the length of the edges.
When
.
### Example Question #1 : How To Find The Volume Of A Tetrahedron
What is the volume of a regular tetrahedron with edges of ?
None of the above.
Explanation:
The volume of a regular tetrahedron is found with the formula $V = \frac{s^3}{6\sqrt{2}}$, where $s$ is the length of the edges.
When the volume becomes,
The answer is in volume, so it must be in a cubic measurement!
### Example Question #4 : How To Find The Volume Of A Tetrahedron
What is the volume of a regular tetrahedron with edges of ?
None of the above.
None of the above.
Explanation:
The volume of a regular tetrahedron is found with the formula $V = \frac{s^3}{6\sqrt{2}}$, where $s$ is the length of the edges.
When
|
http://francomics.de/8pyfmok/175f6b-chemistry-calculator-moles
|
The number of moles of a substance is its mass divided by its molar mass, where mass is in grams and molar mass is in grams per mole; equivalently, mass = number of moles × molar mass. These relations let us find the mass of a substance when we are given the number of moles, and vice versa. For example, the molar mass of NaCl is 58.443 g/mol, so 5 mol of NaCl has a mass of 58.443 × 5 = 292.215 g.

The mole is the base unit of amount of substance: one mole of a chemical substance contains $6.022\times10^{23}$ representative particles (molecules, atoms, ions or electrons). This value is called Avogadro's number. A mole of hydrogen therefore contains the same number of particles as a mole of glucose or a mole of uranium, even though their masses differ, and it is far easier to count atoms in moles than in lakhs and crores.

You can determine the number of moles in any chemical reaction given the chemical formula and the masses of the reactants, and from them the mass of a product:

1. Convert masses/volumes into moles. One mole of gas occupies 24 dm³, so moles of hydrogen = 96/24 = 4 mol; moles of iron(III) oxide = 260/159.69 = 1.63 mol.
2. Construct an ICE table and fill in the initial moles, then use the balanced equation to calculate, for example, the mass of iron produced from the reaction (mass = number of moles × molar mass, e.g. for (a) 2 moles or (b) 0.25 moles of iron).

Chemical calculations and moles are often the part of the GCSE chemistry syllabus that many students struggle with: from understanding Avogadro's constant, to mole calculations, to formulas for percentage yield and atom economy, at first this material can seem very difficult.
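The mole relations above can be wrapped in two small helper functions (a sketch; the function names are ours, and the numbers are the worked examples from the text):

```python
AVOGADRO = 6.022e23  # representative particles per mole

def moles_from_mass(mass_g, molar_mass_g_per_mol):
    """moles = mass / molar mass (grams and grams per mole)."""
    return mass_g / molar_mass_g_per_mol

def mass_from_moles(moles, molar_mass_g_per_mol):
    """mass = moles * molar mass."""
    return moles * molar_mass_g_per_mol

# 5 mol of NaCl (molar mass 58.443 g/mol):
print(mass_from_moles(5, 58.443))              # 292.215 g
# 260 g of iron(III) oxide (molar mass 159.69 g/mol):
print(round(moles_from_mass(260, 159.69), 2))  # 1.63 mol
# 96 dm^3 of hydrogen gas (1 mol of gas occupies 24 dm^3):
print(96 / 24)                                 # 4.0 mol
```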
|
https://ask.sagemath.org/answers/43504/revisions/
|
# Revision history [back]
It seems you want the power series expansion of $1 / (1 - (x + x^2)^2)$.
It is not clear if you want the first few terms of this power series, or a formula for the general term.
To get the first few terms, you can use the method .series().
sage: x = SR.var('x')
sage: f = 1/(1-(x+x^2)^2)
sage: f.series(x)
1 + 1*x^2 + 2*x^3 + 2*x^4 + 4*x^5 + 7*x^6 + 10*x^7 + 17*x^8 + 28*x^9 + 44*x^10
+ 72*x^11 + 117*x^12 + 188*x^13 + 305*x^14 + 494*x^15 + 798*x^16 + 1292*x^17
+ 2091*x^18 + 3382*x^19 + Order(x^20)
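For the general term, note that $(x+x^2)^2 = x^2 + 2x^3 + x^4$, so the coefficients of $1/(1 - x^2 - 2x^3 - x^4)$ satisfy the linear recurrence $a_n = a_{n-2} + 2a_{n-3} + a_{n-4}$ with $a_0 = 1$ (and $a_k = 0$ for $k < 0$). A plain-Python sketch of this, independent of Sage:

```python
def series_coeffs(n_terms):
    """Coefficients of 1/(1 - x^2 - 2x^3 - x^4) via the recurrence
    a_n = a_{n-2} + 2*a_{n-3} + a_{n-4}, with a_0 = 1 and a_k = 0 for k < 0."""
    a = []
    for n in range(n_terms):
        if n == 0:
            a.append(1)
        else:
            get = lambda k: a[k] if k >= 0 else 0
            a.append(get(n - 2) + 2 * get(n - 3) + get(n - 4))
    return a

print(series_coeffs(11))  # [1, 0, 1, 2, 2, 4, 7, 10, 17, 28, 44]
```

The printed coefficients match the Sage series output above.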
|
http://www.koreascience.or.kr/search.page?keywords=Symmetric+and+asymmetric+topology
|
• Title, Summary, Keyword: Symmetric and asymmetric topology
### A New Family of Cascaded Transformer Six Switches Sub-Multilevel Inverter with Several Advantages
• Banaei, M.R.;Salary, E.
• Journal of Electrical Engineering and Technology, v.8 no.5, pp.1078-1085, 2013
• This paper presents a novel topology for a cascaded transformer sub-multilevel converter. Each sub-multilevel converter consists of two DC voltage sources with six switches to achieve a five-level voltage. The proposed topology results in a reduction of the number of DC voltage sources and switches. Single-phase low-frequency transformers are used in the proposed topology, and voltage transformation and galvanic isolation between load and sources are provided by the transformers. This topology can operate as a symmetric or asymmetric converter, but in this paper we focus on the symmetric state. The operation and performance of the suggested multilevel converter have been verified by the simulation results of a single-phase nine-level multilevel converter using MATLAB/SIMULINK.
• Ajami, Ali;Oskuee, Mohammad Reza Jannati;Mokhberdoran, Ataollah;Khosroshahi, Mahdi Toupchi
• Journal of Electrical Engineering and Technology, v.9 no.1, pp.127-135, 2014
• In this paper a novel converter structure based on the cascade converter family is presented. The suggested multilevel advanced cascade converter has benefits such as a reduction in the number of switches and in power losses. Comparisons show that the proposed topology has the fewest IGBTs among all multilevel cascade-type converters which have been introduced recently. This characteristic leads to low cost and a small installation area for the suggested converter. The number of on-state switches in the current path is lower than in conventional topologies, so the output voltage drop and power losses are decreased. Symmetric and asymmetric modes are analyzed and compared with the conventional multilevel cascade converter. Simulation and experimental results are presented to illustrate the validity, good performance and effectiveness of the proposed configuration. The suggested converter can be applied in medium/high voltage and PV applications.
### The Postorder Fibonacci Circulants-a new interconnection networks with lower diameter (후위순회 피보나치 원형군-짧은 지름을 갖는 새로운 상호연결망)
• Kim Yong-Seok;Kwon Seung-Tag
• Proceedings of the IEEK Conference, pp.91-94, 2004
• In this paper, we propose a new parallel computer topology, called the postorder Fibonacci circulants, and analyze its properties. It is compared with Fibonacci cubes when the number of nodes and the degree are kept the same as the comparable one. Its diameter is improved from $n-2$ to $\lfloor\frac{n}{3}\rfloor$, and its topology is changed from asymmetric to symmetric. It includes the Fibonacci cube as a spanning tree.
### Asymmetric Cascaded Multi-level Inverter: A Solution to Obtain High Number of Voltage Levels
• Banaei, M.R.;Salary, E.
• Journal of Electrical Engineering and Technology, v.8 no.2, pp.316-325, 2013
• Multilevel inverters produce a staircase output voltage from DC voltage sources. Requiring a great number of semiconductor switches is the main disadvantage of multilevel inverters. The multilevel inverters can be divided into two groups: symmetric and asymmetric converters. The asymmetric multilevel inverters provide a large number of output steps without increasing the number of DC voltage sources and components. In this paper, a novel topology for multilevel converters is proposed using cascaded sub-multilevel cells. These sub-multilevel converters can produce five levels of voltage. Four algorithms for determining the DC voltage source magnitudes have been presented. Finally, in order to verify the theoretical issues, a simulation is presented.
### Topology Aggregation Schemes for Asymmetric Link State Information
• Yoo, Young-Hwan;Ahn, Sang-Hyun;Kim, Chong-Sang
• Journal of Communications and Networks, v.6 no.1, pp.46-59, 2004
• In this paper, we present two algorithms for efficiently aggregating link state information needed for quality-of-service (QoS) routing. In these algorithms, each edge node in a group is mapped onto a node of a shufflenet or a node of a de Bruijn graph. By this mapping, the number of links for which state information is maintained becomes $aN$ ($a$ is an integer, $N$ is the number of edge nodes), which is significantly smaller than $N^2$ in the full-mesh approach. Our algorithms can also support asymmetric link state parameters, which are common in practice, while many previous algorithms such as the spanning tree approach can be applied only to networks with symmetric link state parameters. Experimental results show that the performance of our shufflenet algorithm is close to that of the full-mesh approach in terms of the accuracy of bandwidth and delay information, with only a much smaller amount of information. On the other hand, although it is not as good as the shufflenet approach, the de Bruijn algorithm also performs far better than the star approach, which is one of the most widely accepted schemes. The de Bruijn algorithm needs smaller computational complexity than most previous algorithms for asymmetric networks, including the shufflenet algorithm.
### A Medium Access Control Mechanism for Distributed In-band Full-Duplex Wireless Networks
• Zuo, Haiwei;Sun, Yanjing;Li, Song;Ni, Qiang;Wang, Xiaolin;Zhang, Xiaoguang
• KSII Transactions on Internet and Information Systems (TIIS), v.11 no.11, pp.5338-5359, 2017
• In-band full-duplex (IBFD) wireless communication supports symmetric dual transmission between two nodes and asymmetric dual transmission among three nodes, which allows improved throughput for distributed IBFD wireless networks. However, inter-node interference (INI) can affect desired packet reception in the downlink of three-node topology. The current Half-duplex (HD) medium access control (MAC) mechanism RTS/CTS is unable to establish an asymmetric dual link and consequently to suppress INI. In this paper, we propose a medium access control mechanism for use in distributed IBFD wireless networks, FD-DMAC (Full-Duplex Distributed MAC). In this approach, communication nodes only require single channel access to establish symmetric or asymmetric dual link, and we fully consider the two transmission modes of asymmetric dual link. Through FD-DMAC medium access, the neighbors of communication nodes can clearly know network transmission status, which will provide other opportunities of asymmetric IBFD dual communication and solve hidden node problem. Additionally, we leverage FD-DMAC to transmit received power information. This approach can assist communication nodes to adjust transmit powers and suppress INI. Finally, we give a theoretical analysis of network performance using a discrete-time Markov model. The numerical results show that FD-DMAC achieves a significant improvement over RTS/CTS in terms of throughput and delay.
### A Fibonacci Posterorder Circulants (피보나치 후위순회 원형군)
• Kim Yong-Seok
• Proceedings of the Korea Information Processing Society Conference
• /
• /
• pp.743-746
• /
• 2006
• In this paper, we propose and analyze a new parallel computer topology, called the Fibonacci postorder circulants. It connects $f_n$ ($n \geq 2$) processing nodes, the same number of nodes as used in a comparable Fibonacci cube. Yet its diameter is only $\lfloor\frac{n}{3}\rfloor$, almost one third that of the Fibonacci cube. The Fibonacci cube is asymmetric, but the proposed topology is a regular and symmetric static interconnection network for large-scale, loosely coupled systems. It offers scalability and includes the Fibonacci cube as a spanning subgraph.
### Verification of New Family for Cascade Multilevel Inverters with Reduction of Components
• Banaei, M.R.;Salary, E.
• Journal of Electrical Engineering and Technology
• /
• v.6 no.2
• /
• pp.245-254
• /
• 2011
• This paper presents a new group for multilevel converter that operates as symmetric and asymmetric state. The proposed multilevel converter generates DC voltage levels similar to other topologies with less number of semiconductor switches. It results in the reduction of the number of switches, losses, installation area, and converter cost. To verify the voltage injection capabilities of the proposed inverter, the proposed topology is used in dynamic voltage restorer (DVR) to restore load voltage. The operation and performance of the proposed multilevel converters are verified by simulation using SIMULINK/MATLAB and experimental results.
### General Coupling Matrix Synthesis Method for Microwave Resonator Filters of Arbitrary Topology
• Uhm, Man-Seok;Lee, Ju-Seop;Yom, In-Bok;Kim, Jeong-Phill
• ETRI Journal
• /
• v.28 no.2
• /
• pp.223-226
• /
• 2006
• This letter presents a new approach to synthesize the resonator filters of an arbitrary topology. This method employs an optimization method based on the relation between the polynomial coefficients of the transfer function and those of the $S_{21}$ from the coupling matrix. Therefore, this new method can also be applied to self-equalized filters that were not considered in the conventional optimization methods. Two microwave filters, a symmetric 4-pole filter with four transmission zeros (TZs) and an asymmetric 8-pole filter with seven TZs, are synthesized using the present method for validation. Excellent agreement between the response of the transfer function and that of the synthesized $S_{21}$ from the coupling matrix is shown.
### Postorder Fibonacci Circulants (후위순회 피보나치 원형군)
• Kim, Yong-Seok;Roo, Myung-Gi
• The KIPS Transactions:PartA
• /
• v.15A no.1
• /
• pp.27-34
• /
• 2008
• In this paper, we propose a new parallel computer topology, called the Postorder Fibonacci Circulants, and analyze its properties. It is compared with Fibonacci cubes when its number of nodes is kept the same as that of the comparable one. Its diameter is improved from $n-2$ to $\lfloor\frac{n}{3}\rfloor$ and its topology is changed from asymmetric to symmetric. It includes the Fibonacci cube as a spanning graph.
https://arxiv.org/abs/1303.7162
physics.hist-ph
# Title: Fritz Hasenohrl and E = mc^2
Abstract: In 1904, the year before Einstein's seminal papers on special relativity, Austrian physicist Fritz Hasenohrl examined the properties of blackbody radiation in a moving cavity. He calculated the work necessary to keep the cavity moving at a constant velocity as it fills with radiation and concluded that the radiation energy has associated with it an apparent mass such that E = 3/8 mc^2. Also in 1904, Hasenohrl achieved the same result by computing the force necessary to accelerate a cavity already filled with radiation. In early 1905, he corrected the latter result to E = 3/4 mc^2. In this paper, Hasenohrl's papers are examined from a modern, relativistic point of view in an attempt to understand where he went wrong. The primary mistake in his first paper was, ironically, that he didn't account for the loss of mass of the blackbody end caps as they radiate energy into the cavity. However, even taking this into account one concludes that blackbody radiation has a mass equivalent of m = 4/3 E/c^2 or m = 5/3 E/c^2 depending on whether one equates the momentum or kinetic energy of radiation to the momentum or kinetic energy of an equivalent mass. In his second and third papers that deal with an accelerated cavity, Hasenohrl concluded that the mass associated with blackbody radiation is m = 4/3 E/c^2, a result which, within the restricted context of Hasenohrl's gedanken experiment, is actually consistent with special relativity. Both of these problems are non-trivial and the surprising results, indeed, turn out to be relevant to the "4/3 problem" in classical models of the electron. An important lesson of these analyses is that E = mc^2, while extremely useful, is not a "law of physics" in the sense that it ought not be applied indiscriminately to any extended system and, in particular, to the subsystems from which they are comprised.
Subjects: History and Philosophy of Physics (physics.hist-ph); Cosmology and Nongalactic Astrophysics (astro-ph.CO); General Relativity and Quantum Cosmology (gr-qc)
Journal reference: Eur. Phys. J. H. 38, 262-78 (2013)
DOI: 10.1140/epjh/e2012-30061-5
Cite as: arXiv:1303.7162 [physics.hist-ph] (or arXiv:1303.7162v1 [physics.hist-ph] for this version)
## Submission history
From: Stephen P. Boughn [view email]
[v1] Thu, 28 Mar 2013 16:10:54 GMT (109kb)
https://physics.aps.org/articles/v2/93
# Viewpoint: There and back again: from magnets to superconductors
Physics 2, 93
A theory of novel phase formation near quantum critical points suggests that large fluctuations lead to magnetic analogs of inhomogeneous superconductivity.
Interactions often modify the behavior of quantum particles and reorganize them into new phases of matter. One dramatic example is the superconducting state, where electrons with attractive interactions form Cooper pairs. The most pertinent interactions in metals are often effective ones—mediated by lattice vibrations and magnetism—and they can be altered by changing the electron environment. Changes to chemical composition, applied magnetic fields, or pressure have the most dramatic effect on the interactions near a so-called quantum critical point [1]. This is the point in the phase diagram where, at zero temperature, a metal is just on the verge of developing magnetic order. In this vicinity electrons interact via intense quantum and thermal fluctuations of that incipient order. Those long-range interactions both modify the metallic properties and also lead to new low-temperature phases such as superconductivity. In a paper in Physical Review Letters [2], Gareth Conduit and Ben Simons at the Cavendish Laboratory in Cambridge and Andrew Green at the School of Physics and Astronomy at St. Andrews, both in the UK, undertake a theoretical study of metals near a ferromagnetic quantum critical point. They show that critical fluctuations there can favor a low-temperature phase that is the magnetic analog of a superconducting state.
That superconductivity should occur near quantum criticality is a surprise; the usual “rule” is that superconductivity and magnetism do not mix [3]. That said, there are now many examples of superconductivity in quantum critical metals [4]. It seems that the magnetic fluctuations drive the superconductivity by acting as the “glue” that binds the Cooper pairs. Calculations suggest that such Cooper pairs form with an internal angular momentum—distinct from conventional $s$-wave superconductors [5].
But is superconductivity the only new phase of matter that emerges near a quantum critical point? Apparently not, for there are a number of examples of nonsuperconducting phase transitions appearing in quantum critical metals where the underlying nature of the new phase is mysterious. One of the clearest cases is seen in Sr$_3$Ru$_2$O$_7$ (see Ref. [6]). This material is on the cusp of ferromagnetism, and can be tipped over the edge by an applied magnetic field and tuned to quantum criticality by varying the direction of the field. But its quantum critical point is preempted by a novel phase. One of the motivations for the Letter by Conduit et al. is to study the identity and formation mechanism of this phase.
Metals on the border of ferromagnetism have an unexpected richness because the critical magnetic fluctuations are accompanied by other soft modes [7]. Diagrammatic perturbation theory—the usual starting point for the study of these quantum critical metals—leads to nonanalytic terms [8] that significantly modify the conventional treatment [9]. In effect, they generate attractive interactions between magnetic fluctuations that may force the ferromagnetic phase transition to become first order at low temperatures [10]. But could they also promote other types of order?
Conduit et al. address this question with two methods, both of which make certain assumptions concerning the order. The authors are motivated by suggestions that a spiral order could preferentially develop [11–13]. In contrast to a ferromagnet where the spins align in the same direction everywhere, in a spiral state the spins form a twisted pattern that repeats over a characteristic length. First, they use perturbation theory techniques that include the critical and the soft modes to calculate the free energy and thereby to find the most energetically favored state. This method seems significantly simpler than previous diagrammatic methods. Second, they perform a numerical quantum Monte Carlo calculation that is nonperturbative with a trial wave function, which also allows a magnetic spiral phase to emerge. In both approaches they find that the expected first-order transition to a uniform ferromagnet is masked at low temperature by an energetically favorable spiral phase. It is a fascinating suggestion that will lead to further experimental inquiry.
The authors also note a similarity between the emergence of their spiral magnetic state and earlier work predicting inhomogeneous superconductivity—the Fulde-Ferrel-Larkin-Ovchinnikov (FFLO) state [14,15] (Fig. 1). In the FFLO state, the phase of the Cooper-pair wave function develops a periodic pattern in space with a characteristic wavelength signifying Cooper pairs with center-of-mass momentum. This connection suggests a more profound link between the emergence of superconductivity at quantum critical points and the identity of the mysterious new phases seen in some quantum critical metals. Superconductivity is formed from electron-electron pairs around the Fermi surface. Magnetism results from electron-hole pairs as the electron fluid reorganizes itself. The similarity indicates a general mapping between novel superconductors and different types of magnetic order. For example, the simple ferromagnet with its excess of majority-spin electrons and minority-spin holes distributed isotropically around the Fermi surface would be the magnetic version of a conventional $s$-wave superconductor. The textured magnetic state that Conduit et al. find is the magnetic analog of the FFLO state where now particle-hole rather than Cooper pairs have center-of-mass momentum. So, just as quantum critical fluctuations can induce pairing in the Cooper (particle-particle) channel to make superconductors, Conduit et al. show that quantum critical fluctuations can also favor “pairing” in the magnetic or particle-hole channel.
But why should nature stop at the magnetic version of the FFLO phase? Could these mysterious phases appearing in the presence of critical fluctuations be more general magnetic analogs of possible exotic superconductors [16]? We have already seen that quantum critical fluctuations favor Cooper pairs with internal angular momentum— $p$-wave and $d$-wave superconductors—where the phase of the Cooper pair varies around the Fermi surface. These superconductors also have associated magnetic counterparts. They correspond to Pomeranchuk distortions, where the magnetic density varies in momentum space around the Fermi surface [17]. In the strong coupling limit this leads to a spin-nematic electron liquid—a state with broken rotational symmetry but without a periodic texture [18].
Alternatively, one could start with the mixed state of the superconductor in a magnetic field and ask what its magnetic counterpart would look like. The answer turns out to be an unusually textured magnet [19] that was recently observed in MnSi [20].
As this last example shows, experiment will be the final arbiter of which new phases will be preferred near quantum criticality. In Sr$_3$Ru$_2$O$_7$ the experimental evidence for a nematic state is quite strong and supported by mean-field calculations (see references in Ref. [6]) as well as a recent fluctuation study [21]. However, inelastic neutron scattering experiments suggest that there are features at finite wavelength [22], and the state proposed by Conduit et al. should be readily visible in elastic neutron scattering. Even if not in Sr$_3$Ru$_2$O$_7$, the growing number of quantum critical materials gives ample scope for a fluctuation-driven magnetic analog of superconducting states to be realized.
## References
1. P. Coleman and A. J. Schofield, Nature 433, 226 (2005); H. v. Löhneysen, A. Rosch, M. Vojta, and P. Wölfle, Rev. Mod. Phys. 79, 1015 (2007)
2. G. J. Conduit, A. G. Green, and B. D. Simons, Phys. Rev. Lett. 103, 207201 (2009)
3. R. A. Hein, R. L. Falge, B. T. Matthias, and C. Corenzwit, Phys. Rev. Lett. 2, 500 (1959)
4. N. D. Mathur, F. M. Grosche, S. R. Julian, I. R. Walker, D. M. Freye, R. K. W. Haselwimmer, and G. G. Lonzarich, Nature 394, 39 (1998)
5. P. Monthoux, D. Pines, and G. G. Lonzarich, Nature 450, 1177 (2007)
6. A. W. Rost, R. S. Perry, J.-F. Mercure, A. P. Mackenzie, and S. A. Grigera, Science 325, 1360 (2009) and references therein
7. D. Belitz, T. R. Kirkpatrick, and T. Vojta, Rev. Mod. Phys. 77, 579 (2005)
8. D. J. W. Geldart and M. Rasolt, Phys. Rev. B 15, 1523 (1977)
9. D. Belitz, T. R. Kirkpatrick, and T. Vojta, Phys. Rev. B 55, 9452 (1997)
10. D. Belitz and T. R. Kirkpatrick, Phys. Rev. Lett. 89, 247202 (2002)
11. J. R. Rech, C. Pépin, and A. V. Chubukov, Phys. Rev. B 74, 195126 (2006)
12. D. V. Efremov, J. J. Betouras, and A. Chubukov, Phys. Rev. B 77, 220401 (2008)
13. A. M. Berridge, A. G. Green, S. A. Grigera, and B. D. Simons, Phys. Rev. Lett. 102, 136404 (2009)
14. P. Fulde and R. A. Ferrell, Phys. Rev. 135, A550 (1964)
15. A. I. Larkin and Y. N. Ovchinnikov, JETP 20, 762 (1965)
16. G. G. Lonzarich (about 1996), private communication
17. A. F. Ho and A. J. Schofield, EPL 84, 27007 (2008)
18. S. A. Kivelson, E. Fradkin, and V. J. Emery, Nature 393, 550 (1998)
19. A. N. Bogdanov and D. A. Yablonskii, JETP 68, 101 (1989)
20. S. Muhlbauer, B. Binz, F. Jonietz, C. Pfleiderer, A. Rosch, A. Neubauer, R. Georgii, and P. Boni, Science 323, 915 (2009)
21. A. V. Chubukov and D. L. Maslov, arXiv:0908.4433
22. S. Ramos, E. M. Forgan, and C. Bowell et al., Physica B 403, 1270 (2008)
Andy Schofield graduated from the University of Cambridge in 1989 with a B.A. and in 1993 with a Ph.D. He was a postdoc at Rutgers, NJ (1994–1996), a Royal Society University Research Fellow at Cambridge, and moved to the University of Birmingham in 1999 where he is now Professor of Theoretical Physics. His research interests are in the theory of strongly correlated quantum systems.
http://mathhelpforum.com/advanced-statistics/50331-plz-help-required-expectation-print.html
# Plz Help is required in expectation
• September 23rd 2008, 05:09 PM
UMD
Plz Help is required in expectation
I shall be very grateful to anyone who can guide me regarding the solution of the following problems.
Show that if $X_n$ converges to $X$ in $r$th mean ($r \geq 1$), then $E(|X_n|^r)$ converges to $E(|X|^r)$.
Show that if $X_n$ converges to $X$ in $r$th mean with $r=1$, then $E(X_n)$ converges to $E(X)$. Give an example to show that the converse is false.
• September 24th 2008, 02:13 PM
Laurent
For the first question: rewrite the conclusion and hypothesis in terms of $L^r$ norm, and it should become straightforward.
---------------------
Don't read the following if you want to find it out by yourself
---------------------
By the triangle inequality, $|\|X_n\|_r-\|X\|_r|\leq \|X_n-X\|_r\to_n 0$, so that $\|X_n\|_r\to_n\|X\|_r$, which directly implies $E[|X_n|^r]\to_n E[|X|^r]$.
For the second one, I think you can prove the first part by yourself. And for the reverse, almost every example works: $X_n=\pm1$ with probability 1/2, and $X=0$ may be the easiest.
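To make the first claim concrete, here is a quick numerical sanity check (my addition, not part of the original thread): take a fixed Monte Carlo sample for $X$ and set $X_n = X + 1/n$, so that $\|X_n - X\|_r = 1/n \to 0$, and watch $E|X_n|^r$ approach $E|X|^r$.

```python
import numpy as np

# Sanity check of the first claim: if ||X_n - X||_r -> 0 then
# E|X_n|^r -> E|X|^r.  Here X is a fixed Monte Carlo sample and
# X_n = X + 1/n, so ||X_n - X||_r = 1/n deterministically.
rng = np.random.default_rng(0)
X = rng.normal(size=100_000)
r = 2.0

def moment(sample, r):
    """Monte Carlo estimate of E|sample|^r."""
    return np.mean(np.abs(sample) ** r)

target = moment(X, r)   # approximately E|X|^2 = 1 for a standard normal
errors = [abs(moment(X + 1.0 / n, r) - target) for n in (1, 10, 100, 1000)]
# The gap |E|X_n|^r - E|X|^r| shrinks along with ||X_n - X||_r = 1/n.
```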
• September 27th 2008, 11:50 AM
UMD
@ Laurant
Thanks alot Brother. U r providing great help for which i am grateful
• September 27th 2008, 12:17 PM
UMD
@ Laurant
one last question
Show that if $X_n$ converges to $X$ in probability, then $X_n$ converges mutually in probability.
• September 27th 2008, 02:03 PM
Laurent
Quote:
Originally Posted by UMD
Show that if $X_n$ converges to $X$ in probability, then $X_n$ converges mutually in probability.
I don't know what it means to "converge mutually in probability". It seems to be an unusual expression. Could you define that?
• September 29th 2008, 10:57 AM
Player1
I think "mutual convergence in probability" means $P(|X_m - X_n| > \varepsilon) \rightarrow_{m,n \rightarrow \infty} 0$
They use the triangle inequality to argue something related here. However,
1) I don't know what they mean by norm of a random variable, and
2) I don't know if or how you can use that here...
• September 29th 2008, 01:20 PM
Laurent
Quote:
Originally Posted by Player1
I think "mutual convergence in probability" means $P(|X_m - X_n| > \varepsilon) \rightarrow_{m,n \rightarrow \infty} 0$
Thank you, I was suspecting something like this as well but I was pretty unsure. So "mutually" must refer to a "Cauchy sequence"-like behaviour.
Suppose $(X_n)_n$ converges to $X$ in probability. Let $\varepsilon>0$. For any $m,n$, $P(|X_n-X_m|>\varepsilon)=P(|(X_n-X)-(X_m-X)|>\varepsilon)$ $\leq P(|X_n-X|+|X_m-X|>\varepsilon)\leq P(|X_n-X|>\varepsilon/2\mbox{ or }|X_m-X|>\varepsilon/2)$ $\leq P(|X_n-X|>\varepsilon/2)+P(|X_m-X|>\varepsilon/2)$. And you can easily conclude from here.
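The pointwise inclusion behind this chain of inequalities ($|X_n - X_m| > \varepsilon$ forces $|X_n - X| > \varepsilon/2$ or $|X_m - X| > \varepsilon/2$) can also be checked numerically. The sketch below is my addition; the toy model $X_k = X + N(0, 1/k^2)$ is an assumption chosen so that $X_k \to X$ in probability.

```python
import numpy as np

# Numerical illustration of the bound
#   P(|X_n - X_m| > eps) <= P(|X_n - X| > eps/2) + P(|X_m - X| > eps/2).
# Toy model: X ~ N(0,1) and X_k = X + N(0, 1/k^2), so X_k -> X in probability.
rng = np.random.default_rng(1)
N = 200_000
X = rng.normal(size=N)

def X_k(k):
    """X plus an independent noise of standard deviation 1/k."""
    return X + rng.normal(scale=1.0 / k, size=N)

eps = 0.5
Xn, Xm = X_k(5), X_k(8)
lhs = np.mean(np.abs(Xn - Xm) > eps)
rhs = np.mean(np.abs(Xn - X) > eps / 2) + np.mean(np.abs(Xm - X) > eps / 2)
# lhs <= rhs holds sample by sample (triangle inequality), and both
# sides go to 0 as n and m grow.
```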
• October 4th 2008, 04:23 PM
UMD
@ Laurant
Thanks for ur reply. I am sorry for not replying on time. Yes u were right that was the thing that i exactly needed. But i submitted my assignment without it. Anyways many many thanks for ur reply. i hav a few more problems to solve if u have time to think over it, I shall be grateful. I have to submit this assignment the day after tomorrow. Lastly how to use mathematical symbols in this forum ?
1) If $X_n$ converges in quadratic mean to $X$, and $X_n$ is a Gaussian random variable with mean $\mu_n$ and variance $\sigma_n^2$, and if the variance $\sigma^2 = \lim_{n \to \infty} \sigma_n^2$ is non-zero, show that $X$ is Gaussian
2) the second question i cannot write bcz of lack of mathematical symbols
• October 4th 2008, 04:46 PM
Player1
Hi!
$X_n$ converges in the second mean is telling you
1) The variance is finite
2) The mean is finite
3) $X_n$ converges in probability and in distribution to $X$
Then write the PDF of $X_n$. As you have convergence in distribution, and 1) and 2), then $\mu_n \rightarrow \mu$ and $\sigma_n^2 \rightarrow \sigma^2$ as $n \rightarrow \infty$. And that's gaussian too.
Please somebody correct my post if I am wrong. Thanks.
To write math symbols, type "latex math symbols" in google. Then, enclose your symbols between opensquarebracket math closesquarebracket and opensquarebracket /math closesquarebracket. You get me, just replace the brackets with real brackets ;)
• October 4th 2008, 05:10 PM
UMD
@player1
Thanks. U said take its pdf, can u elaborate this?
• October 4th 2008, 05:57 PM
Player1
Each of $X_n$ has probability density function (or PDF)
$f_{X_n}(x) = \frac{1}{\sqrt{2 \pi \sigma_n^2}} \exp\left\{-\frac{(x-\mu_n)^2}{2 \sigma_n^2}\right\}$
and by 1), 2) and 3), the PDF of $X$ is
$f_X(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left\{-\frac{(x-\mu)^2}{2 \sigma^2}\right\}$
which is gaussian too.
• October 4th 2008, 06:02 PM
UMD
1 Attachment(s)
@laurant,Player1
Please see the attached word document for question. I shall be very grateful to anyone who can guide me regarding its solution
• October 4th 2008, 06:05 PM
UMD
@player1
Plz if u can see the second one , the attached word file
• October 5th 2008, 05:18 AM
Laurent
Quote:
Originally Posted by Player1
Each of $X_n$ has probability density function (or PDF)
$f_{X_n}(x) = \frac{1}{\sqrt{2 \pi \sigma_n^2}} \exp\left\{-\frac{(x-\mu_n)^2}{2 \sigma_n^2}\right\}$
and by 1), 2) and 3), the PDF of $X$ is
$f_X(x) = \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left\{-\frac{(x-\mu)^2}{2 \sigma^2}\right\}$
which is gaussian too.
Beware! It is common misbelief that the probability density function converges if there is convergence in distribution. What is for sure is that for instance the cumulative distribution function converges, as well as the characteristic function.
Because $\|X_n-X\|_2\to_n 0$, you know (by a previous question) that $E[X_n^2]\to_n E[X^2]$.
Because $\| X_n-X\|_1\leq \|X_n-X\|_2$ (as Cauchy-Schwarz inequality shows), and because of a previous question of yours, you know that the mean $\mu_n$ of $X_n$ converges to $\mu=E[X]$. And hence the same with variances using the previous paragraph: $\sigma_n^2\to_n \sigma^2={\rm Var}(X)$.
As for the distribution of the limit, it results from considering the characteristic function: because of the convergence in distribution, for all $t\in\mathbb{R}$, $E[e^{it X_n}]\to_n E[e^{itX}]$. However, you know that $E[e^{itX_n}]=e^{it\mu_n-t^2\sigma_n^2/2}\to_n e^{it\mu-t^2\sigma^2/2}$ because $X_n$ is Gaussian and because of what we said about the convergence of the mean and variance. So the characteristic function of $X$ is $e^{i t\mu-t^2\sigma^2/2}$, which is the characteristic function of a Gaussian random variable of mean $\mu$ and variance $\sigma^2$. The fact that the characteristic function characterizes the distribution allows to conclude.
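A small simulation (my addition) illustrating the characteristic-function formula used above: the empirical $E[e^{itX}]$ of Gaussian samples matches the closed form $e^{it\mu - t^2\sigma^2/2}$.

```python
import numpy as np

# Check the closed form for the Gaussian characteristic function:
#   E[exp(itX)] = exp(i*t*mu - t^2 * sigma^2 / 2)
# by comparing it with an empirical average over Gaussian samples.
rng = np.random.default_rng(2)
mu, sigma = 1.5, 2.0
X = rng.normal(mu, sigma, size=500_000)
t = 0.7
empirical = np.mean(np.exp(1j * t * X))              # Monte Carlo E[e^{itX}]
exact = np.exp(1j * t * mu - t**2 * sigma**2 / 2)    # closed form
err = abs(empirical - exact)                         # should be ~1/sqrt(N)
```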
https://conan777.wordpress.com/2011/06/07/a-remark-on-a-mini-course-by-kleiner-in-sullivans-70th-birthday/
## A remark on a mini-course by Kleiner in Sullivan’s 70th birthday
June 7, 2011
I spent the last week on Long Island for Dennis Sullivan’s birthday conference. The conference is hosted in the brand new Simons center where great food is served everyday in the cafe (I think life-wise it’s a wonderful choice for doing a post-doc).
Anyways, aside from getting to know this super-cool person named Dennis, the talks there were interesting~ There are many things I found so exciting that I can’t help but say a few words about them; however, due to my laziness, I can only select one item to give a little stupid remark on:
So Bruce Kleiner gave a 3-lecture mini-course on boundaries of Gromov hyperbolic spaces (see this related post on a piece of his previous work in the subject)
Cannon’s conjecture: Any Gromov hyperbolic group with $\partial_\infty G \approx \mathbb{S}^2$ acts discretely and cocompactly by isometries on $\mathbb{H}^3$.
As we all know, in the theory of Gromov hyperbolic spaces, we have the basic theorem that says if a group acts on a space discretely and cocompactly by isometries, then the group (equipped with any word metric on its Cayley graph) is quasi-isometric to the space it acts on.
Since I borrowed professor Sullivan as an excuse for writing this post, let’s also state a partial converse of this theorem (which is more in the line of Cannon’s conjecture):
Theorem: (Sullivan, Gromov, Cannon-Swenson)
For $G$ finitely generated, if $G$ is quasi-isometric to $\mathbb{H}^n$ for some $n \geq 3$, then $G$ acts on $\mathbb{H}^n$ discretely cocompactly by isometries.
This essentially says that due to the strong symmetries and hyperbolicity of $\mathbb{H}^n$, in this case quasi-isometry is enough to guarantee an action. (Such a thing is of course not true in general: for example, any finite group is quasi-isometric to any compact metric space, but there’s no way such an action exists.) In some sense being quasi-isometric is a much stronger condition once the space has large growth at infinity.
In light of the above two theorems we know that Cannon’s conjecture is equivalent to saying that any hyperbolic group with boundary $\mathbb{S}^2$ is quasi-isometric to $\mathbb{H}^3$.
At first glance this seems striking, since knowing only the topology of the boundary and the fact that the group is hyperbolic, we need to conclude what the whole group looks like geometrically. However, the previous post on one-dimensional boundaries perhaps gives us some hint that the boundary can’t be anything we want. In fact it’s rather rigid, due to the large symmetries of our hyperbolic group structure.
Having Cannon’s conjecture as a Holy Grail, they developed tools that give rise to some very elegant and inspiring proofs of the conjecture in various special cases. For example:
Definition: A metric space $M$ is said to be Ahlfors $\alpha$-regular, where $\alpha$ is its Hausdorff dimension, if there exists a constant $C$ s.t. for any ball $B(p, R)$ with $R \leq \mbox{Diam}(M)$, the Hausdorff $\alpha$-measure $\mu$ satisfies:
$C^{-1}R^\alpha \leq \mu(B(p,R)) \leq C R^\alpha$
This is saying it’s of Hausdorff dimension $\alpha$ in a very strong sense (i.e. the Hausdorff $\alpha$-measure behaves exactly like the regular Euclidean measure everywhere and at all scales).
For two disjoint continua $C_1, C_2$ in $M$, let $\Gamma(C_1, C_2)$ denote the set of rectifiable curves connecting $C_1$ to $C_2$. For any density function $\rho: M \rightarrow \mathbb{R}^+$, we define the $\rho$-distance between $C_1, C_2$ to be $\displaystyle \mbox{dist}_\rho(C_1, C_2) = \inf_{\gamma \in \Gamma(C_1, C_2)} \int_\gamma \rho$.
Definition: The $\alpha$-modulus between $C_1, C_2$ is
$\mbox{Mod}_\alpha(C_1, C_2) = \inf \{ \int_M \rho^\alpha \ | \ \mbox{dist}_\rho(C_1, C_2) \geq 1 \}$,
OK… I know this is a lot of seemingly random definitions to digest, so let’s pause a little bit: given two continua in our favorite $\mathbb{R}^n$, which of course has Hausdorff dimension $n$, what’s the $n$-modulus between them?
This is equivalent to asking for a density function for scaling the metric so that the total $n$-dimensional volume of $\mathbb{R}^n$ is as small as possible, while the length of any curve connecting $C_1, \ C_2$ is at least $1$.
So intuitively we want to put large density between the sets wherever they are close together. Since we are integrating the $n$-th power for volume (suppose $n>1$; since our sets are path connected, their dimension is at least 1), we would want the density as ‘spread out’ as possible while keeping the arc-length property. Hence one observation is that this modulus depends on the distance between the closest points of the sets and on their diameters.
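To make this concrete, here is the classical worked example (not from the talk; it is the standard computation for a round annulus): take $C_1 = \partial B(0,a)$ and $C_2 = \partial B(0,b)$ in $\mathbb{R}^n$ with $a < b$. The extremal density is radial, $\rho(x) = (|x| \log(b/a))^{-1}$ on the annulus and $0$ elsewhere, and one gets

$\mbox{Mod}_n(C_1, C_2) = \omega_{n-1} \left( \log \frac{b}{a} \right)^{1-n}$

where $\omega_{n-1}$ is the surface area of the unit $(n-1)$-sphere. In particular for $n=2$ this is $2\pi / \log(b/a)$: the modulus blows up as the continua approach each other and decays as they separate, which is exactly the behavior the Loewner condition below quantifies.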
The relative distance between $C_1, C_2$ is $\displaystyle \Delta (C_1, C_2) = \frac{\inf \{ d(p_1, p_2) \ | \ p_1 \in C_1, \ p_2 \in C_2 \} }{ \min \{ \mbox{Diam}(C_1), \mbox{Diam}(C_2) \} }$
We say $M$ is $\alpha$-Loewner if the $\alpha$ modulus between any two continua is controlled above and below by their relative distance, i.e. there exists increasing functions $\phi, \psi: [0, \infty) \rightarrow [0, \infty)$ s.t. for all $C_1, C_2$,
$\phi(\Delta(C_1, C_2)) \leq \mbox{Mod}_\alpha(C_1, C_2) \leq \psi(\Delta(C_1, C_2))$
Those spaces are, in some sense, regular with respect to their metric and measure.
Theorem: If $\partial_\infty G$ is Ahlfors 2-regular and 2-Loewner, and homeomorphic to $\mathbb{S}^2$, then $G$ acts discretely and cocompactly on $\mathbb{H}^3$ by isometries.
Most of the material appeared in the talk can be found in their paper.
There are many other talks I found very interesting, especially those of Kenneth Bromberg, Mario Bonk and Peter Jones. Unfortunately I had to miss Curt McMullen, Yair Minsky and Shishikura…
Musings of a hobbyist
## A c64 game in several steps (lots of 'em)
Welcome!
Today's development tools are leaps and bounds beyond what we could imagine 20 years ago. I've always had a soft spot for the C64 after all these years. So I sat down and tried to start assembly programming on a C64.
Today I'll start with a sort of tutorial on how to write a C64 game. I have prepared 36 steps for now, with probably a few more planned.
I'll start out very small but there will be bigger steps later on. The code is supposed to be heavily commented but is probably not clear for everyone. I'll be happy to answer questions regarding the code. The code is written for the ACME cross compiler, which allows you to compile the code on any major OS.
Step #1 is a simple base for a game. It provides a Basic start (10 SYS 2064), sets up the VIC relocation and shows a simple synchronized game loop.
To show the loop running the border color is flashed and the top left char is rotating throughout all characters.
The not too eye-popping result looks like this:
Find here the source code and binary for use in an emulator of your choice (I recommend WinVICE):
step1.zip
Next Step 1b
## First step explained in detail
As threatened, the first step detailed. I reckon that the first step is overwhelming if this is your first voyage into C64 game programming. There are quite a few assumptions made about the reader's knowledge, and the later steps won't get exactly easier.
A note about the !zone macro: ACME allows for global and local labels. A local label starts with a . and is only visible inside a zone. This allows for easier reuse of common names like loop or retry.
This snippet tells the ACME cross compiler to assemble the result into the file "jmain.prg" with the cbm (Commodore Business Machines) type. This basically boils down to the well known .prg format, which contains a word with the loading address at the start, followed by the assembled bytes.

```
;compile to this filename
!to "jmain.prg",cbm
```

The next snippet just defines a constant. I try to use them throughout so you can understand where I'm putting bytes at. The value 52224 is the address of the screen buffer, where 25 lines of 40 characters each are stored continuously. This is not the default memory location for the screen; a part of this base code relocates the screen.

```
;define constants here
;address of the screen buffer
SCREEN_CHAR = 52224
```

Now a very interesting piece which took me longer to work out than it should have. A C64 has two types of files, Basic files and machine code files. A Basic file can be started by RUN, a machine code file just contains the code and usually must be jumped at with the SYS command. Any half decent game will provide a proper Basic kick start that jumps directly at the machine code.
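The layout of such a Basic kick start can be cross-checked with a small Python sketch. This is a hypothetical helper, not part of the tutorial; it assembles the tokenized line `10 SYS 2064` exactly as described below (link word, line number, SYS token $9e, space, ASCII digits, terminating zero byte, and a zero word ending the program):

```python
# Build the tokenized BASIC line "10 SYS <address>" as loaded at $0801.
# $9e is the SYS token; the address is written out as ASCII digits.

def basic_stub(sys_address: int, line_number: int = 10) -> bytes:
    text = str(sys_address).encode("ascii")
    # SYS token, space, digits, zero byte ending the line
    body = bytes([0x9E, 0x20]) + text + b"\x00"
    # link word: address of the (non-existent) next Basic line
    next_line = 0x0801 + 2 + 2 + len(body)
    return (next_line.to_bytes(2, "little")
            + line_number.to_bytes(2, "little")
            + body
            + b"\x00\x00")  # zero word: end of the BASIC program

stub = basic_stub(2064)
# the tutorial's !byte list, minus the two extra (harmless) trailing zeros
expected = bytes([0x0C, 0x08, 0x0A, 0x00, 0x9E, 0x20,
                  0x32, 0x30, 0x36, 0x34, 0x00, 0x00, 0x00])
```

Note how the link word comes out as $080C (2060), which matches the first two bytes of the hand-written `!byte` line.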
To allow for this we set the file start address to $801 (2049), the default Basic start. The file content starts out with the tokenized bytes of a simple Basic line calling SYS for us. The line is built from a word containing the address of the next Basic line, followed by a word with the line number (10 in our sample). After that comes the token for the SYS command ($9e), followed by a space ($20) and the ASCII representation of the target address (2064 in our sample). After that there is one zero byte marking the end of the line. The next zero word represents the end of the Basic file. I've got some extra zero bytes which are actually wrong but also don't really hurt.

```
;this creates a basic start
*=$801

;SYS 2064
!byte $0C,$08,$0A,$00,$9E,$20,$32,$30,$36,$34,$00,$00,$00,$00,$00
```

The next snippet disables any visible sprites and relocates the VIC's memory bank (resulting in a relocated screen buffer and charset address).

```
;init sprite registers
;no visible sprites
        lda #0
        sta VIC_SPRITE_ENABLE

;set charset
        lda #$3c
        sta VIC_MEMORY_CONTROL

;VIC bank
        lda CIA_PRA
        and #$fc
        sta CIA_PRA
```

This piece is the main game loop. It's rather easy: we increase the border color (resulting in flashing), increase the top left character on the screen, wait for the vertical blank (not exactly, but to the same effect) and rerun the loop.

```
;the main game loop
GameLoop
        ;border flashing
        inc VIC_BORDER_COLOR

        ;top left char
        inc SCREEN_CHAR

        jsr WaitFrame
        jmp GameLoop
```

This snippet is quite interesting. The C64 allows you to read the raster line that is currently being redrawn. The code checks for a certain raster position at the bottom of the screen to sync the game to the computer's display speed. In detail we're first waiting for the raster line to NOT be the position we want. Once we are on any line but the wanted one, we really wait for our raster line to appear. This avoids the problem when the routine is called too fast in succession and we end up on the same raster line.
```
!zone WaitFrame
;wait for the raster to reach line $f8
;this is keeping our timing stable

;are we on line $F8 already? if so, wait for the next full screen
;prevents mistimings if called too fast
WaitFrame
        lda $d012
        cmp #$F8
        beq WaitFrame

;wait for the raster to reach line $f8
;(should be closer to the start of this line this way)
.WaitStep2
        lda $d012
        cmp #$F8
        bne .WaitStep2
        rts
```
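The double-wait logic is subtle enough to be worth a sketch. Here is a hedged Python model (not part of the tutorial) with a fake raster counter standing in for reads of $d012; the line count and advance-per-read are made up, only the two-phase structure matters:

```python
# Model of the WaitFrame double-wait: phase 1 skips past the target line
# if we are already on it, phase 2 waits for it to come around again.

TARGET = 0xF8          # raster line $F8, as in the tutorial
LINES = 263            # assumed lines per frame; exact value is irrelevant

def make_raster(start: int):
    """Fake $d012: each read returns the current line, then advances."""
    state = {"line": start}
    def raster() -> int:
        value = state["line"]
        state["line"] = (value + 1) % LINES
        return value
    return raster

def wait_frame(raster) -> int:
    line = raster()
    while line == TARGET:      # beq WaitFrame: already on $F8? let it pass
        line = raster()
    while line != TARGET:      # bne .WaitStep2: wait for $F8 to arrive
        line = raster()
    return line

wait_frame(make_raster(TARGET))  # called "too fast": still waits a full frame
```

Without phase 1, two calls within the same raster line would both return immediately and the loop would run twice in one frame.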
step1.zip
Previous Step 1 Next Step 2
## A c64 game - Step 2
And onwards we stumble!
In the first part we prepared everything for the VIC, now we set up our modified charset. Note that I used a selfmade tool (similar to CharPad), which is included (Windows binary). The file J.CHR contains the charset and is included into the source as binary. The memory layout of the game expects the modified charset at $f000. Since the C64 can't load files to locations beyond $C000 we have to copy the charset to the target memory at $f000. To be able to properly write at those addresses we need to switch off the ROM overlay. The current step should display "HELLO". The rest of the screen depends on the current memory setup of your emulator/C64.

First code piece we add is the copy routine. Interrupts are blocked because we turn off the Kernal ROM. If we didn't, the IRQ code would jump into the middle of uninitialised RAM, likely resulting in a crash. The RAM/ROM layout is influenced by memory address $1.

```
;----------------------
;copy charset to target
;----------------------
;block interrupts
;since we turn ROMs off this would result in crashes if we did not
        sei

;save old configuration
        lda $1
        sta PARAM1

;only RAM
;to copy under the IO rom
        lda #%00110000
        sta $1

;take source address from CHARSET
        LDA #<CHARSET
        STA ZEROPAGE_POINTER_1
        LDA #>CHARSET
        STA ZEROPAGE_POINTER_1 + 1

;now copy
        jsr CopyCharSet

;restore ROMs
        lda PARAM1
        sta $1
        cli
```

The actual copy routine. Note that we only copy 254 characters. The last two characters are omitted so we don't overwrite the default IRQ vectors residing at $fffb. Since we deal with an 8 bit machine there is an extra loop taking care of the high bytes of our addresses. At the end of the copy routine we include the binary charset data.
```
!zone CopyCharSet
CopyCharSet
        ;set target address ($F000)
        lda #$00
        sta ZEROPAGE_POINTER_2
        lda #$F0
        sta ZEROPAGE_POINTER_2 + 1

        ldx #$00
        ldy #$00
        lda #0
        sta PARAM2

.NextLine
        lda (ZEROPAGE_POINTER_1),Y
        sta (ZEROPAGE_POINTER_2),Y
        inx
        iny
        cpx #$8
        bne .NextLine
        cpy #$00
        bne .PageBoundaryNotReached

        ;we reached the next 256 bytes, inc high byte
        inc ZEROPAGE_POINTER_1 + 1
        inc ZEROPAGE_POINTER_2 + 1

.PageBoundaryNotReached
        ;only copy 254 chars to keep irq vectors intact
        inc PARAM2
        lda PARAM2
        cmp #254
        beq .CopyCharsetDone
        ldx #$00
        jmp .NextLine

.CopyCharsetDone
        rts

CHARSET
        !binary "j.chr"
```

To display HELLO on the screen we simply poke the character codes into the screen buffer and set the character colors to white.

```
;test charset
        lda #'H'
        sta SCREEN_CHAR
        lda #'E'
        sta SCREEN_CHAR + 1
        lda #'L'
        sta SCREEN_CHAR + 2
        sta SCREEN_CHAR + 3
        lda #'O'
        sta SCREEN_CHAR + 4

        lda #1
        sta SCREEN_COLOR
        sta SCREEN_COLOR + 1
        sta SCREEN_COLOR + 2
        sta SCREEN_COLOR + 3
        sta SCREEN_COLOR + 4
```
Clarifications:
The charset of the C64 uses 8 bytes per character. This totals 256 characters × 8 bytes = 2048 bytes. A custom character set can be positioned almost anywhere in RAM (on 2048-byte boundaries).
In hires text mode every bit corresponds to a pixel. In multicolor text mode pixels double in width, so two bits make up one pixel. In multicolor mode two colors are shared by all multicolor characters, one is the background color, and one is the current char color.
The memory layout looks like this (adapted from www.c64-wiki.de; the ROM regions have RAM underneath that becomes visible when the ROMs are switched off via address $1):

```
$E000-$FFFF (57344-65535)  KERNAL ROM  (POKE writes to RAM underneath)
$D000-$DFFF (53248-57343)  I/O registers / CHAR ROM
$C000-$CFFF (49152-53247)  free RAM (read and write)
$A000-$BFFF (40960-49151)  BASIC ROM   (POKE writes to RAM underneath)
$0800-$9FFF ( 2048-40959)  BASIC RAM
$0400-$07FF ( 1024- 2047)  default screen address
$0000-$03FF (    0- 1023)  zeropage and enhanced zeropage
```
step2.zip
Previous Step 1b Next Step 3
## A C64 game - Step 3
Quite similar to step 2, this time we set up sprites. Note that I used another selfmade tool (similar to SpritePad). The file J.SPR contains the sprites and is included into the source as binary. The sprites are located "under" the I/O ROM at $D000 onwards. And another note: since I'm lazy I built the sprite copy loop to always copy packages of 4 sprites (1 sprite comes as 63 + 1 bytes, so 4 make nice 256 byte packages). The current step should display the known charset garbage and "HELLO" as before, but also show an elegantly handicrafted sprite.

Right below the charset copy routine we add the copy call:

```
;take source address from SPRITES
        lda #<SPRITES
        sta ZEROPAGE_POINTER_1
        lda #>SPRITES
        sta ZEROPAGE_POINTER_1 + 1

        jsr CopySprites
```

The sprite copy routine is quite similar to the charset copy but a tad shorter due to the 256 byte packaging:

```
;------------------------------------------------------------
;copies sprites from ZEROPAGE_POINTER_1 to ZEROPAGE_POINTER_2
;sprites are copied in numbers of four
;------------------------------------------------------------
!zone CopySprites
CopySprites
        ldy #$00
        ldx #$00

        lda #00
        sta ZEROPAGE_POINTER_2
        lda #$d0
        sta ZEROPAGE_POINTER_2 + 1

;4 sprites per loop
.SpriteLoop
        lda (ZEROPAGE_POINTER_1),y
        sta (ZEROPAGE_POINTER_2),y
        iny
        bne .SpriteLoop

        inx
        inc ZEROPAGE_POINTER_1 + 1
        inc ZEROPAGE_POINTER_2 + 1
        cpx #NUMBER_OF_SPRITES_DIV_4
        bne .SpriteLoop
        rts
```

In front of GameLoop we put the sprite display code. The sprite is positioned at coordinates 100,100, the sprite pointer is set to the correct image and the sprite is enabled.

```
;set sprite 1 pos
        lda #100
        sta VIC_SPRITE_X_POS
        sta VIC_SPRITE_Y_POS

;set sprite image
        lda #SPRITE_PLAYER
        sta SPRITE_POINTER_BASE

;enable sprite 1
        lda #1
        sta VIC_SPRITE_ENABLE
```

step3.zip
Previous Step 2 Next Step 4
## A C64 game - Step 4
Now we take a bigger step: Moving the sprite with the joystick. Since we want to make a real game we also allow to move the sprite over to the right side (in other words we'll take care of the extended x bit).
For clarification: The C64 has 8 hardware sprites. That's 8 objects of size 24 x 21 pixels, which can be placed anywhere on screen. The coordinates are stored in memory-mapped registers. However, since the X resolution (320 pixels) is higher than what one byte can hold, every sprite's 9th X bit is stored in another, shared memory location (which makes it highly annoying to work with).
Sprite coordinates are set in X, Y pairs via the memory locations 53248 (=X sprite 0), 53249 (=Y sprite 0), 53250 (=X sprite 1), etc. The extended sprite bits are stored in 53248 + 16.
Since I don't plan to allow sprites to go off screen in the game later there is no defined behaviour if you move the sprite off screen too far. It'll simply bounce back in once the coordinate wraps around.
The joystick ports can be checked via the memory locations 56320 (Port 2) or 56321 (Port 1). The lower 5 bits are cleared(!) if up, down, left, right or fire is pressed.
This step shows:
-Joystick control
-Sprite extended x bit
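The extended-X-bit bookkeeping described above can be sketched in Python. This is a hypothetical helper, not from the tutorial; `mem` is just a dict standing in for the VIC registers, using the register addresses given in the text (53248 + 2n for sprite n's X byte, 53248 + 16 for the shared 9th bits):

```python
# Set sprite n's X coordinate, low byte plus the shared extended-X register.

VIC_BASE = 53248
VIC_X_EXTEND = VIC_BASE + 16   # one bit per sprite

def set_sprite_x(mem: dict, n: int, x: int) -> None:
    mem[VIC_BASE + 2 * n] = x & 0xFF             # low 8 bits
    if x & 0x100:                                # 9th bit needed?
        mem[VIC_X_EXTEND] = mem.get(VIC_X_EXTEND, 0) | (1 << n)
    else:
        mem[VIC_X_EXTEND] = mem.get(VIC_X_EXTEND, 0) & ~(1 << n)

mem = {}
set_sprite_x(mem, 0, 320)   # low byte 64, extend bit for sprite 0 set
```

The annoyance the tutorial mentions is visible here: every X update on the far right of the screen is a read-modify-write on a register shared by all 8 sprites.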
Inside the GameLoop we add a call to the player's control function:

```
        jsr PlayerControl
```

PlayerControl itself checks the joystick port (II) and calls the proper direction move routines. Note that the move routines themselves simply set the object index to 0 (for the player) and call a generic sprite move routine.
```
;------------------------------------------------------------
;check joystick (player control)
;------------------------------------------------------------
!zone PlayerControl
PlayerControl
        lda #$2
        bit $dc00
        bne .NotDownPressed
        jsr PlayerMoveDown
.NotDownPressed
        lda #$1
        bit $dc00
        bne .NotUpPressed
        jsr PlayerMoveUp
.NotUpPressed
        lda #$4
        bit $dc00
        bne .NotLeftPressed
        jsr PlayerMoveLeft
.NotLeftPressed
        lda #$8
        bit $dc00
        bne .NotRightPressed
        jsr PlayerMoveRight
.NotRightPressed
        rts

PlayerMoveLeft
        ldx #0
        jsr MoveSpriteLeft
        rts

PlayerMoveRight
        ldx #0
        jsr MoveSpriteRight
        rts

PlayerMoveUp
        ldx #0
        jsr MoveSpriteUp
        rts

PlayerMoveDown
        ldx #0
        jsr MoveSpriteDown
        rts
```

The sprite move routines are rather simple: update the position counter variables and set the actual sprite registers. A bit more complicated are the X move functions. If X reaches the wraparound, the extended X bit (the 9th bit) is looked up in a table and then added/removed.
```
;------------------------------------------------------------
;Move Sprite Left
;expect x as sprite index (0 to 7)
;------------------------------------------------------------
!zone MoveSpriteLeft
MoveSpriteLeft
        dec SPRITE_POS_X,x
        bpl .NoChangeInExtendedFlag

        lda BIT_TABLE,x
        eor #$ff
        and SPRITE_POS_X_EXTEND
        sta SPRITE_POS_X_EXTEND
        sta VIC_SPRITE_X_EXTEND

.NoChangeInExtendedFlag
        txa
        asl
        tay
        lda SPRITE_POS_X,x
        sta VIC_SPRITE_X_POS,y
        rts

;------------------------------------------------------------
;Move Sprite Right
;expect x as sprite index (0 to 7)
;------------------------------------------------------------
!zone MoveSpriteRight
MoveSpriteRight
        inc SPRITE_POS_X,x
        lda SPRITE_POS_X,x
        bne .NoChangeInExtendedFlag

        lda BIT_TABLE,x
        ora SPRITE_POS_X_EXTEND
        sta SPRITE_POS_X_EXTEND
        sta VIC_SPRITE_X_EXTEND

.NoChangeInExtendedFlag
        txa
        asl
        tay
        lda SPRITE_POS_X,x
        sta VIC_SPRITE_X_POS,y
        rts

;------------------------------------------------------------
;Move Sprite Up
;expect x as sprite index (0 to 7)
;------------------------------------------------------------
!zone MoveSpriteUp
MoveSpriteUp
        dec SPRITE_POS_Y,x
        txa
        asl
        tay
        lda SPRITE_POS_Y,x
        sta 53249,y
        rts

;------------------------------------------------------------
;Move Sprite Down
;expect x as sprite index (0 to 7)
;------------------------------------------------------------
!zone MoveSpriteDown
MoveSpriteDown
        inc SPRITE_POS_Y,x
        txa
        asl
        tay
        lda SPRITE_POS_Y,x
        sta 53249,y
        rts
```

step4.zip

Previous Step 3 Next Step 5

## A C64 game - Step 5

Our next big step: Obviously we want some play screen, and more obviously, we are not going to store full screens (we've got only 64Kb RAM after all). Therefore there's a level build routine that allows us to build a screen from various building elements. For now we'll start out with vertical and horizontal lines. Since we always have a level border at the screen edges we'll keep the border as a second screen data block (LEVEL_BORDER_DATA).
To allow faster screen building we also have tables with the precalculated char offset of every screen line (SCREEN_LINE_OFFSET_TABLE_LO and SCREEN_LINE_OFFSET_TABLE_HI). Note that with the C64's mnemonics the best way to get stuff done is tables, tables, tables.

The BuildScreen subroutine clears the play screen area, builds the level data and then the border. The level data is a collection of primitives which are worked through until LD_END is hit.

The first addition is the call to the build routine:

```
;setup level
        lda #0
        sta LEVEL_NR
        jsr BuildScreen
```

The BuildScreen routine sets up a level completely. It starts out with clearing the current play area (not the full screen). Then LEVEL_NR is used to look up the location of the level data in the table SCREEN_DATA_TABLE. .BuildLevel is jumped to to actually work through the level primitives and put them on screen. Then we use LEVEL_BORDER_DATA as a second level to display a border on the screen's edges. .BuildLevel uses Y as index into the data. Depending on the first byte different routines are called (.LineH, .LineV, .LevelComplete). Since I think levels may be complex and use more than 256 bytes, we add Y to the level data pointers so we can start out with Y=0 for the next primitive.
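The primitive-list idea itself is easier to see in a high-level sketch than in assembly. Here is a hedged Python model (not from the tutorial; the token values and the 6-byte record layout `type, x, y, length, char, color` mirror the description, the color byte is read but ignored in this sketch):

```python
# Interpret a flat list of level primitives into a 40x25 character screen.

LD_END, LD_LINE_H, LD_LINE_V = 0, 1, 2   # assumed token values

def build_screen(data, width=40, height=25):
    screen = [[32] * width for _ in range(height)]   # 32 = blank char
    i = 0
    while data[i] != LD_END:
        kind, x, y, length, char, color = data[i:i + 6]
        dx, dy = (1, 0) if kind == LD_LINE_H else (0, 1)
        for step in range(length):
            screen[y + dy * step][x + dx * step] = char
        i += 6   # advance to the next primitive record
    return screen

level = [LD_LINE_H, 5, 10, 4, 128, 1,    # 4 blocking chars starting at (5,10)
         LD_LINE_V, 5, 11, 3, 129, 1,    # 3 chars going down from (5,11)
         LD_END]
screen = build_screen(level)
```

A whole room compresses to a handful of bytes this way, which is the point of not storing full screens.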
```
;------------------------------------------------------------
;BuildScreen
;creates a screen from level data
;------------------------------------------------------------
!zone BuildScreen
BuildScreen
        lda #0
        ldy #6
        jsr ClearPlayScreen

;get pointer to real level data from table
        ldx LEVEL_NR
        lda SCREEN_DATA_TABLE,x
        sta ZEROPAGE_POINTER_1
        lda SCREEN_DATA_TABLE + 1,x
        sta ZEROPAGE_POINTER_1 + 1
        jsr .BuildLevel

;get pointer to real level data from table
        lda #<LEVEL_BORDER_DATA
        sta ZEROPAGE_POINTER_1
        lda #>LEVEL_BORDER_DATA
        sta ZEROPAGE_POINTER_1 + 1
        jsr .BuildLevel
        rts

.BuildLevel
;work through data
        ldy #255
.LevelDataLoop
        iny
        lda (ZEROPAGE_POINTER_1),y
        cmp #LD_END
        beq .LevelComplete
        cmp #LD_LINE_H
        beq .LineH
        cmp #LD_LINE_V
        beq .LineV

.LevelComplete
        rts

.NextLevelData
        pla

;adjust pointers so we are able to access more
;than 256 bytes of level data
        clc
        adc #1
        adc ZEROPAGE_POINTER_1
        sta ZEROPAGE_POINTER_1
        lda ZEROPAGE_POINTER_1 + 1
        adc #0
        sta ZEROPAGE_POINTER_1 + 1
        ldy #255
        jmp .LevelDataLoop
```
The primitive display routines .LineH and .LineV are rather straightforward: read the parameters, and put the character and color values in place.

```
.LineH
        ;X pos
        iny
        lda (ZEROPAGE_POINTER_1),y
        sta PARAM1

        ;Y pos
        iny
        lda (ZEROPAGE_POINTER_1),y
        sta PARAM2

        ;width
        iny
        lda (ZEROPAGE_POINTER_1),y
        sta PARAM3

        ;char
        iny
        lda (ZEROPAGE_POINTER_1),y
        sta PARAM4

        ;color
        iny
        lda (ZEROPAGE_POINTER_1),y
        sta PARAM5

;store target pointers to screen and color ram
        ldx PARAM2
        lda SCREEN_LINE_OFFSET_TABLE_LO,x
        sta ZEROPAGE_POINTER_2
        sta ZEROPAGE_POINTER_3
        lda SCREEN_LINE_OFFSET_TABLE_HI,x
        sta ZEROPAGE_POINTER_2 + 1
        clc
        adc #( ( SCREEN_COLOR - SCREEN_CHAR ) & 0xff00 ) >> 8
        sta ZEROPAGE_POINTER_3 + 1

        tya
        pha

        ldy PARAM1
.NextChar
        lda PARAM4
        sta (ZEROPAGE_POINTER_2),y
        lda PARAM5
        sta (ZEROPAGE_POINTER_3),y
        iny
        dec PARAM3
        bne .NextChar
        jmp .NextLevelData

.LineV
        ;X pos
        iny
        lda (ZEROPAGE_POINTER_1),y
        sta PARAM1

        ;Y pos
        iny
        lda (ZEROPAGE_POINTER_1),y
        sta PARAM2

        ;height
        iny
        lda (ZEROPAGE_POINTER_1),y
        sta PARAM3

        ;char
        iny
        lda (ZEROPAGE_POINTER_1),y
        sta PARAM4

        ;color
        iny
        lda (ZEROPAGE_POINTER_1),y
        sta PARAM5

;store target pointers to screen and color ram
        ldx PARAM2
        lda SCREEN_LINE_OFFSET_TABLE_LO,x
        sta ZEROPAGE_POINTER_2
        sta ZEROPAGE_POINTER_3
        lda SCREEN_LINE_OFFSET_TABLE_HI,x
        sta ZEROPAGE_POINTER_2 + 1
        clc
        adc #( ( SCREEN_COLOR - SCREEN_CHAR ) & 0xff00 ) >> 8
        sta ZEROPAGE_POINTER_3 + 1

        tya
        pha

        ldy PARAM1
.NextCharV
        lda PARAM4
        sta (ZEROPAGE_POINTER_2),y
        lda PARAM5
        sta (ZEROPAGE_POINTER_3),y

        ;adjust pointer
        lda ZEROPAGE_POINTER_2
        clc
        adc #40
        sta ZEROPAGE_POINTER_2
        sta ZEROPAGE_POINTER_3
        lda ZEROPAGE_POINTER_2 + 1
        adc #0
        sta ZEROPAGE_POINTER_2 + 1
        clc
        adc #( ( SCREEN_COLOR - SCREEN_CHAR ) & 0xff00 ) >> 8
        sta ZEROPAGE_POINTER_3 + 1

        dec PARAM3
        bne .NextCharV
        jmp .NextLevelData
```

step5.zip

Previous Step 4 Next Step 6

## A C64 Game - Step 6

And onwards we go: Obviously we don't want the player to move through walls. In this step we check the chars in the player's way to see if they are blocking. To make this easier we store, for every sprite, the character position and the character delta position (0 to 7) for x and y (SPRITE_CHAR_POS_X, SPRITE_CHAR_POS_X_DELTA).
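The pixel-to-character bookkeeping is simple enough to state in two lines of Python. A hedged sketch (hypothetical helpers, not tutorial code; 8 pixels per character cell, blocking threshold 128 as in the tutorial's IsCharBlocking):

```python
# Map a sprite pixel coordinate to its character cell plus an in-cell
# delta (0..7), and test a screen char for blocking.

def char_pos(pixel: int):
    return pixel // 8, pixel % 8     # (char cell, delta inside the cell)

def is_char_blocking(char: int) -> bool:
    return char >= 128               # chars 128..255 are walls

cell, delta = char_pos(37)           # pixel 37 sits in cell 4, 5 pixels in
```

Movement is cheap while `delta` is non-zero; only when it hits a cell boundary do the neighbor chars need to be fetched and tested.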
If the sprite is not at a character brink the move is allowed; if it hits the brink, we check the characters at the target. For this step any character with index 128 or above is considered blocking, anything below is free to move through. The collision code assumes that the collision box of a sprite is one char wide and two chars high.

This step shows:

-calculating character positions while moving about
-checking the position for blocking chars
-calculating the required sprite position from character pos (for starting a sprite at a specific place)

Since the code is basically the same for all four directions I'll only go into details on one of them:

```
;------------------------------------------------------------
;PlayerMoveLeft
;------------------------------------------------------------
!zone PlayerMoveLeft
PlayerMoveLeft
        ldx #0

;check if we are on the brink of a character
        lda SPRITE_CHAR_POS_X_DELTA
        beq .CheckCanMoveLeft

;no, we are not
.CanMoveLeft
        dec SPRITE_CHAR_POS_X_DELTA
        jsr MoveSpriteLeft
        rts

.CheckCanMoveLeft
        lda SPRITE_CHAR_POS_Y_DELTA
        beq .NoThirdCharCheckNeeded

;find the character in the screen buffer
        ldy SPRITE_CHAR_POS_Y
        lda SCREEN_LINE_OFFSET_TABLE_LO,y
        sta ZEROPAGE_POINTER_1
        lda SCREEN_LINE_OFFSET_TABLE_HI,y
        sta ZEROPAGE_POINTER_1 + 1
        lda SPRITE_CHAR_POS_X
        clc
        adc #39   ;39 equals one line down (40 chars) and one to the left (-1)
        tay
        lda (ZEROPAGE_POINTER_1),y
        jsr IsCharBlocking
        bne .BlockedLeft

.NoThirdCharCheckNeeded
        ldy SPRITE_CHAR_POS_Y
        dey
        lda SCREEN_LINE_OFFSET_TABLE_LO,y
        sta ZEROPAGE_POINTER_1
        lda SCREEN_LINE_OFFSET_TABLE_HI,y
        sta ZEROPAGE_POINTER_1 + 1
        ldy SPRITE_CHAR_POS_X
        dey
        lda (ZEROPAGE_POINTER_1),y
        jsr IsCharBlocking
        bne .BlockedLeft

        tya
        clc
        adc #40
        tay
        lda (ZEROPAGE_POINTER_1),y
        jsr IsCharBlocking
        bne .BlockedLeft

        lda #8
        sta SPRITE_CHAR_POS_X_DELTA
        dec SPRITE_CHAR_POS_X
        jmp .CanMoveLeft

.BlockedLeft
        rts
```

The subroutine IsCharBlocking is rather primitive; as described, it only checks whether the character index is 128 or above:
```
;------------------------------------------------------------
;IsCharBlocking
;checks if a char is blocking
;A contains the character
;returns 1 for blocking, 0 for not blocking
;------------------------------------------------------------
!zone IsCharBlocking
IsCharBlocking
        cmp #128
        bpl .Blocking
        lda #0
        rts

.Blocking
        lda #1
        rts
```

step6.zip

Previous Step 5 Next Step 7

## A C64 game - Step 7

Now it's starting to resemble a game. Loosely. In this step we add gravity and jumping. The player will fall if there is no blocking char below. On joystick up the player jumps in a curve. Both fall speed and jump speed are non linear and based on tables.

This step shows:

-gravity (accelerating)
-jumping (following a delta y curve)

Most prominent additions are the jump and fall tables. These hold the deltas we use to make the movement look not linear but somewhat naturalistic:

```
PLAYER_JUMP_POS
        !byte 0

PLAYER_JUMP_TABLE
        !byte 8,7,5,3,2,1,1,1,0,0

PLAYER_FALL_POS
        !byte 0

FALL_SPEED_TABLE
        !byte 1,1,2,2,3,3,3,3,3,3
```

A jump is only possible if the player is not falling. Once the player has jumped, PLAYER_JUMP_POS is increased every frame and the player is moved upwards by entry-of-jump-table pixels. If the player is blocked moving upwards the jump is aborted:

```
.PlayerIsJumping
        inc PLAYER_JUMP_POS
        lda PLAYER_JUMP_POS
        cmp #JUMP_TABLE_SIZE
        bne .JumpOn

        lda #0
        sta PLAYER_JUMP_POS
        jmp .JumpComplete

.JumpOn
        ldx PLAYER_JUMP_POS
        lda PLAYER_JUMP_TABLE,x
        beq .JumpComplete
        sta PARAM5

.JumpContinue
        jsr PlayerMoveUp
        beq .JumpBlocked
        dec PARAM5
        bne .JumpContinue
        jmp .JumpComplete

.JumpBlocked
        lda #0
        sta PLAYER_JUMP_POS
        jmp .JumpStopped
```

To check for falling, an attempt is made to move the player down one pixel. If he is blocked he is standing on solid ground. If he can fall, the fall counter is increased every frame, up to the max number of entries in the fall table. While the player is falling he is moved down by entry-of-fall-table pixels.
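The table-driven movement is easy to sanity-check in Python. A hedged sketch (hypothetical helper, not tutorial code) that replays the jump table, with the same values as above; on the C64 screen, up means a smaller y:

```python
# Replay the jump delta table frame by frame and record the player's y.
# The tapering deltas are what give the arc its natural look.

PLAYER_JUMP_TABLE = [8, 7, 5, 3, 2, 1, 1, 1, 0, 0]
FALL_SPEED_TABLE = [1, 1, 2, 2, 3, 3, 3, 3, 3, 3]

def simulate_jump(y: int):
    heights = []
    for delta in PLAYER_JUMP_TABLE:
        y -= delta              # move up by this frame's table entry
        heights.append(y)
    return heights

arc = simulate_jump(100)
```

Summing the table shows the total rise is fixed at 28 pixels, and the capped fall table gives a terminal velocity of 3 pixels per frame: crude, but it reads as gravity on screen.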
```
.PlayerFell
        ldx PLAYER_FALL_POS
        lda FALL_SPEED_TABLE,x
        beq .FallComplete
        sta PARAM5

.FallLoop
        dec PARAM5
        beq .FallComplete
        jsr PlayerMoveDown
        jmp .FallLoop

.FallComplete
        lda PLAYER_FALL_POS
        cmp #( FALL_TABLE_SIZE - 1 )
        beq .FallSpeedAtMax
        inc PLAYER_FALL_POS
.FallSpeedAtMax
```

step7.zip

Previous Step 6 Next Step 8

## A C64 Game - Final Step

And thus this "create a game" series ends... We fix the last few bugs (the score was not properly reset on replay, the music was stuck on saving the scores). Now the game is complete and can be enjoyed as it was meant to be!

Oh joy, Smila (the graphician) actually touched up the game even more for a retail release. It has been out for a few weeks now, but if you didn't know, go here: Psytronik (for digital download, tape or disk) or to RGCD (cartridge with nifty packaging and stickers). Or simply marvel at all the nice games that are still being made for older computers. There you can actually buy a severely enhanced version (more two player game modes, better graphics, gameplay enhancements) as either digital download for emulators, or as a real tape, disk and finally as cartridge!

Thank you for your encouragements throughout, and keep on coding.

step100.zip

Previous Step

## A C64 game - Step 13

And yet again a bigger step. Of course we need lots of goodies from the killed enemies: we add items. Items are displayed as 2x2 blocks of characters. There's a new list of possible items with locations (ITEM_ACTIVE, etc.). The list also stores the original background behind the items. To get an item spawned, walk beside one of the enemies (look in its direction) and keep fire pressed until it dies. Note that the player cannot collect the items yet (that's up for the next step).

Inside the player subroutine FireShot we add another subroutine call after killing an enemy:

```
.EnemyKilled
        jsr RemoveObject
        jsr SpawnItem
```

SpawnItem itself resembles the AddObject routine. First we loop over the active item table (ITEM_ACTIVE) to find a free slot.
Once found we randomly choose the item type (for now there are two types). The ugly part below stores the original character and color at the item position and puts the item's char and color in their place. Remember the item is sized 2x2 chars, so we need to store 8 bytes overall. However, to keep the code comfortable, we actually use 8 tables. This allows us to work only with the item index instead of manually maintaining a second index; there are only two index registers after all.

```
;------------------------------------------------------------
;spawns an item at char position from object x
;X = object index
;------------------------------------------------------------
!zone SpawnItem
SpawnItem
;find free item slot
        ldy #0
.CheckNextItemSlot
        lda ITEM_ACTIVE,y
        cmp #ITEM_NONE
        beq .FreeSlotFound
        iny
        cpy #ITEM_COUNT
        bne .CheckNextItemSlot
        rts

.FreeSlotFound
        jsr GenerateRandomNumber
        and #$1
        sta ITEM_ACTIVE,y
        lda SPRITE_CHAR_POS_X,x
        sta ITEM_POS_X,y
        lda SPRITE_CHAR_POS_Y,x
        sta ITEM_POS_Y,y
        sty PARAM1

;find address in screen buffer...
        tay
        lda SCREEN_LINE_OFFSET_TABLE_LO,y
        sta ZEROPAGE_POINTER_1
        sta ZEROPAGE_POINTER_2
        lda SCREEN_LINE_OFFSET_TABLE_HI,y
        sta ZEROPAGE_POINTER_1 + 1

;...and for the color buffer
        clc
        adc #( ( SCREEN_COLOR - SCREEN_CHAR ) & 0xff00 ) >> 8
        sta ZEROPAGE_POINTER_2 + 1

        ldy SPRITE_CHAR_POS_X,x
        ldx PARAM1

;store old background and put item
;we do not take overlapping items into account yet!
```
lda (ZEROPAGE_POINTER_1),y sta ITEM_BACK_CHAR_UL,x lda (ZEROPAGE_POINTER_2),y sta ITEM_BACK_COLOR_UL,x lda ITEM_CHAR_UL,x sta (ZEROPAGE_POINTER_1),y lda ITEM_COLOR_UL,x sta (ZEROPAGE_POINTER_2),y iny lda (ZEROPAGE_POINTER_1),y sta ITEM_BACK_CHAR_UR,x lda (ZEROPAGE_POINTER_2),y sta ITEM_BACK_COLOR_UR,x lda ITEM_CHAR_UR,x sta (ZEROPAGE_POINTER_1),y lda ITEM_COLOR_UR,x sta (ZEROPAGE_POINTER_2),y tya clc adc #39 tay lda (ZEROPAGE_POINTER_1),y sta ITEM_BACK_CHAR_LL,x lda (ZEROPAGE_POINTER_2),y sta ITEM_BACK_COLOR_LL,x lda ITEM_CHAR_LL,x sta (ZEROPAGE_POINTER_1),y lda ITEM_COLOR_LL,x sta (ZEROPAGE_POINTER_2),y iny lda (ZEROPAGE_POINTER_1),y sta ITEM_BACK_CHAR_LR,x lda (ZEROPAGE_POINTER_2),y sta ITEM_BACK_COLOR_LR,x lda ITEM_CHAR_LR,x sta (ZEROPAGE_POINTER_1),y lda ITEM_COLOR_LR,x sta (ZEROPAGE_POINTER_2),y rts
step13.zip
Previous Step Next Step
## A C64 game - Step 8
Of course a game isn't a game without some challenge. Therefore we need enemies. Since we have some neat little level build code why not use it for enemies as well?
We add a new level primitive type LD_OBJECT which adds objects (= sprites). We use it for both player and enemies. A new table SPRITE_ACTIVE is added to see if a sprite is used (and which type).
One central function to this is FindEmptySpriteSlot. It iterates over the sprite active table and looks for a free slot to use. If there is a free slot we set the object active, apply the object startup values and use the previously created CalcSpritePosFromCharPos to place the sprite.
Note that we don't plan to use more than 8 objects, so we can directly map objects to hardware sprites.
Find an empty slot:

```
;------------------------------------------------------------
;Looks for an empty sprite slot, returns in X
;#1 in A when empty slot found, #0 when full
;------------------------------------------------------------
!zone FindEmptySpriteSlot
FindEmptySpriteSlot
          ldx #0
.CheckSlot
          lda SPRITE_ACTIVE,x
          beq .FoundSlot
          inx
          cpx #8
          bne .CheckSlot
          lda #0
          rts

.FoundSlot
          lda #1
          rts
```
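The slot search is a plain linear scan over the active table. As a cross-check outside the emulator, here is a hypothetical Python sketch of the same logic (the function name and list representation are invented for illustration, not from the original source):

```python
# Python sketch of the FindEmptySpriteSlot logic: scan the
# SPRITE_ACTIVE table for the first zero ("inactive") entry.
SPRITE_SLOT_COUNT = 8  # the game maps objects 1:1 to the 8 hardware sprites

def find_empty_sprite_slot(sprite_active):
    """Return the index of the first free slot, or None when all are used."""
    for slot in range(SPRITE_SLOT_COUNT):
        if sprite_active[slot] == 0:
            return slot
    return None
```

The assembly version returns the flag in A and the slot in X; the sketch folds both into a single return value.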
How we add an object during level buildup:
```
.Object
          ;X pos
          iny
          lda (ZEROPAGE_POINTER_1),y
          sta PARAM1
          ;Y pos
          iny
          lda (ZEROPAGE_POINTER_1),y
          sta PARAM2
          ;type
          iny
          lda (ZEROPAGE_POINTER_1),y
          sta PARAM3

          ;store y for later
          tya
          pha

          ;add object to sprite array
          jsr FindEmptySpriteSlot
          beq .NoFreeSlot
          lda PARAM3
          sta SPRITE_ACTIVE,x

          ;PARAM1 and PARAM2 hold x,y already
          jsr CalcSpritePosFromCharPos

          ;enable sprite
          lda BIT_TABLE,x
          ora VIC_SPRITE_ENABLE
          sta VIC_SPRITE_ENABLE

          lda #SPRITE_PLAYER
          sta SPRITE_POINTER_BASE,x

.NoFreeSlot
          jmp .NextLevelData
```
step8.zip
Previous Step 7 Next Step 9
## A C64 game - Step 9
What are enemies if they just sit still and don't move at all? Therefore we now add the subroutine ObjectControl. ObjectControl loops through all objects (even the player) and jumps to the behaviour function depending on the object type. This means the behaviour is tied to the object type. We provide a table with function pointers to every object type's behaviour code (including the player's).
ObjectControl takes the object type as index into the table and jumps to the target address. For now we have two enemy types, dumb moving up/down or left/right. For moving we reuse the previously created functions we already use for the player, namely ObjectMoveLeft/ObjectMoveRight etc.
Loop over all active objects and jump to their behaviour code. Note that we apply a nasty trick here: since jsr doesn't allow indirect jumps, we manually push the return address on the stack and then do an indirect jmp. This allows the behaviour code to return with a plain rts.
```
;------------------------------------------------------------
;Enemy Behaviour
;------------------------------------------------------------
!zone ObjectControl
ObjectControl
          ldx #0
.ObjectLoop
          ldy SPRITE_ACTIVE,x
          beq .NextObject

          ;enemy is active
          dey
          lda ENEMY_BEHAVIOUR_TABLE_LO,y
          sta ZEROPAGE_POINTER_2
          lda ENEMY_BEHAVIOUR_TABLE_HI,y
          sta ZEROPAGE_POINTER_2 + 1

          ;set up return address for rts
          lda #>( .NextObject - 1 )
          pha
          lda #<( .NextObject - 1 )
          pha
          jmp (ZEROPAGE_POINTER_2)

.NextObject
          inx
          cpx #8
          bne .ObjectLoop
          rts
```

The main game loop has changed: the call to PlayerControl is removed and ObjectControl is called instead:

```
;------------------------------------------------------------
;the main game loop
;------------------------------------------------------------
GameLoop
          jsr WaitFrame
          jsr ObjectControl
          jmp GameLoop
```

The behaviour table is built from the addresses of the behaviour code. We actually use two tables, one for the low bytes and one for the high bytes; this way we don't have to mess with the index. The < and > operators return the low and high byte of a 16-bit value.
```
ENEMY_BEHAVIOUR_TABLE_LO
          !byte <PlayerControl
          !byte <BehaviourDumbEnemyLR
          !byte <BehaviourDumbEnemyUD

ENEMY_BEHAVIOUR_TABLE_HI
          !byte >PlayerControl
          !byte >BehaviourDumbEnemyLR
          !byte >BehaviourDumbEnemyUD
```
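In higher-level terms the split LO/HI tables are just one function table indexed by object type. A hypothetical Python sketch of the dispatch (the behaviour bodies here are invented placeholders, not the game's real logic):

```python
# Dispatch sketch: SPRITE_ACTIVE stores type + 1, with 0 meaning
# "inactive" -- which is why the assembly does a dey before indexing.
def player_control(obj):
    pass                      # placeholder: real code reads the joystick

def dumb_enemy_lr(obj):
    obj["x"] += obj["dx"]     # dumb left/right mover

def dumb_enemy_ud(obj):
    obj["y"] += obj["dy"]     # dumb up/down mover

BEHAVIOUR_TABLE = [player_control, dumb_enemy_lr, dumb_enemy_ud]

def object_control(objects):
    """One frame: run the behaviour of every active object."""
    for obj in objects:
        if obj["active"] == 0:
            continue
        BEHAVIOUR_TABLE[obj["active"] - 1](obj)
```

The 6502 has no indirect subroutine call, hence the pushed-return-address trick; in Python the table entries are simply callable.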
step9.zip
Previous Step 8 Next Step 10
## A C64 game - Step 10
So you found out the enemies couldn't hurt you? Well, we're working towards that goal in this step: we add collision checks. Since I'm not completely sure about later changes we are NOT relying on the VIC's hardware collision checks but roll our own. Remember the object size constraints from step #6? We apply those to the object collision checks as well.
We add a new subroutine CheckCollisions which in turn uses IsEnemyCollidingWithPlayer. We do not check for collisions between enemies. The check is not completely exact (it behaves more like a 9 x 16 pixel box), but that's good enough. To test the function, a collision is signalled by setting the border color to white.
The routine CheckCollisions is simply added to the main game loop:

```
GameLoop
          jsr WaitFrame
          jsr ObjectControl
          jsr CheckCollisions
          jmp GameLoop
```

The function CheckCollisions just loops through the active object list and calls IsEnemyCollidingWithPlayer for every active entry:
```
;------------------------------------------------------------
;check object collisions (enemy vs. player etc.)
;------------------------------------------------------------
CheckCollisions
          ldx #1
.CollisionLoop
          lda SPRITE_ACTIVE,x
          bne .CheckObject
.NextObject
          inx
          cpx #8
          bne .CollisionLoop
          lda #0
          sta VIC_BORDER_COLOR
          rts

.CheckObject
          stx PARAM2
          jsr IsEnemyCollidingWithPlayer
          bne .PlayerCollidedWithEnemy
          ldx PARAM2
          jmp .NextObject

.PlayerCollidedWithEnemy
          lda #1
          sta VIC_BORDER_COLOR
          ;ldx #0
          ;jsr RemoveObject
          rts
```

IsEnemyCollidingWithPlayer employs a few tricks to ease the calculation.
First we do the Y coordinate check to weed out misses early. For the X coordinate: since the actual X position is 9 bits, we halve the value (halve the X coordinate and add 128 if the extended X bit is set). Now the comparison fits in one byte.

The routine returns 1 in A if a collision occurs and 0 if not.
```
;------------------------------------------------------------
;check object collision with player (object 0)
;x = enemy index
;return a = 1 when colliding, a = 0 when not
;------------------------------------------------------------
!zone IsEnemyCollidingWithPlayer

.CalculateSimpleXPos
          ;returns A with the simplified x pos (x halved, + 128 if > 256)
          ;modifies y
          lda BIT_TABLE,x
          and SPRITE_POS_X_EXTEND
          beq .NoXBit
          lda SPRITE_POS_X,x
          lsr
          clc
          adc #128
          rts
.NoXBit
          lda SPRITE_POS_X,x
          lsr
          rts

IsEnemyCollidingWithPlayer
          ;modifies X
          ;check y pos
          lda SPRITE_POS_Y,x
          sec
          sbc #( OBJECT_HEIGHT )        ;offset to bottom
          cmp SPRITE_POS_Y
          bcs .NotTouching
          clc
          adc #( OBJECT_HEIGHT + OBJECT_HEIGHT - 1 )
          cmp SPRITE_POS_Y
          bcc .NotTouching

          ;X = index in enemy table
          jsr .CalculateSimpleXPos
          sta PARAM1
          ldx #0
          jsr .CalculateSimpleXPos
          sec
          sbc #4                        ;player X start minus 12 pixels
          cmp PARAM1
          bcs .NotTouching
          adc #8
          cmp PARAM1
          bcc .NotTouching
          lda #1
          rts

.NotTouching
          lda #0
          rts
```

step10.zip
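The halved-X trick can be checked on paper. Here is a hypothetical Python sketch of the same byte math (the OBJECT_HEIGHT value and dict layout are assumptions for illustration):

```python
# Sketch of the collision test: the 9-bit sprite X coordinate is folded
# into one byte by halving it and adding 128 when the extended-X bit is
# set; afterwards a simple one-byte interval compare suffices.
OBJECT_HEIGHT = 16  # assumed full sprite height in pixels

def simple_x(x_pos, x_extended):
    """Collapse a 9-bit X coordinate into 8 bits (units of 2 pixels)."""
    return (x_pos >> 1) + (128 if x_extended else 0)

def is_colliding(player, enemy):
    """player/enemy: dicts with 'x', 'x_ext', 'y'. Mirrors the byte math."""
    # vertical check first, to weed out misses cheaply
    if not (enemy["y"] - OBJECT_HEIGHT < player["y"] <= enemy["y"] + OBJECT_HEIGHT - 1):
        return False
    px = simple_x(player["x"], player["x_ext"])
    ex = simple_x(enemy["x"], enemy["x_ext"])
    # horizontal: 4 halved units = roughly 8 real pixels of tolerance each way
    return px - 4 < ex <= px + 4
```

Because the comparison happens on halved coordinates, the effective hit box is wider than it is precise, which matches the "more like 9 x 16 pixels" remark in the post.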
Previous Step 9 Next Step 11
## A C64 game - Step 17
Another rather small step, but visually pleasing. We're enhancing the player sprite with animation and better jump abilities.
All the hard work is added to PlayerControl. On every movement we update the sprite while checking the player states like jumping, recoil, falling, etc. Suddenly things look more interesting.
It's basically updating and checking counters during different control parts. SPRITE_ANIM_DELAY is used for controlling animation speed while SPRITE_ANIM_POS is used for the animation frame.
Here are the new parts for walking left:
```
          ;animate player
          lda SPRITE_FALLING
          bne .NoAnimLNeeded
          lda PLAYER_JUMP_POS
          bne .NoAnimLNeeded

          inc SPRITE_ANIM_DELAY
          lda SPRITE_ANIM_DELAY
          cmp #8
          bne .NoAnimLNeeded
          lda #0
          sta SPRITE_ANIM_DELAY
          inc SPRITE_ANIM_POS
          lda SPRITE_ANIM_POS
          and #$3
          sta SPRITE_ANIM_POS
.NoAnimLNeeded
```

The same for right movement:

```
          ;animate player
          lda SPRITE_FALLING
          bne .NoAnimRNeeded
          lda PLAYER_JUMP_POS
          bne .NoAnimRNeeded

          inc SPRITE_ANIM_DELAY
          lda SPRITE_ANIM_DELAY
          cmp #8
          bne .NoAnimRNeeded
          lda #0
          sta SPRITE_ANIM_DELAY
          inc SPRITE_ANIM_POS
          lda SPRITE_ANIM_POS
          and #$3
          sta SPRITE_ANIM_POS
.NoAnimRNeeded
```

And all the missing animation for jumping, falling, recoil and combined states. Note that the sprites are arranged in right/left pairs, so that adding SPRITE_DIRECTION (0 = facing right, 1 = facing left) to the sprite frame results in the proper sprite.
;update player animation lda SPRITE_FALLING bne .AnimFalling lda PLAYER_JUMP_POS bne .AnimJumping ;is player shooting? lda PLAYER_SHOT_PAUSE beq .AnimNoRecoil ;recoil anim lda SPRITE_ANIM_POS asl clc adc SPRITE_DIRECTION adc #SPRITE_PLAYER_WALK_R_1 adc #8 sta SPRITE_POINTER_BASE rts .AnimNoRecoil lda SPRITE_ANIM_POS asl clc adc SPRITE_DIRECTION adc #SPRITE_PLAYER_WALK_R_1 sta SPRITE_POINTER_BASE rts .AnimFalling lda PLAYER_SHOT_PAUSE bne .AnimFallingNoRecoil lda #SPRITE_PLAYER_FALL_R clc adc SPRITE_DIRECTION sta SPRITE_POINTER_BASE rts .AnimFallingNoRecoil lda #SPRITE_PLAYER_FALL_RECOIL_R clc adc SPRITE_DIRECTION sta SPRITE_POINTER_BASE rts .AnimJumping lda PLAYER_SHOT_PAUSE bne .AnimJumpingNoRecoil lda #SPRITE_PLAYER_JUMP_R clc adc SPRITE_DIRECTION sta SPRITE_POINTER_BASE rts .AnimJumpingNoRecoil lda #SPRITE_PLAYER_JUMP_RECOIL_R clc adc SPRITE_DIRECTION sta SPRITE_POINTER_BASE rts step17.zip
Previous Step Next Step
## A C64 game - Step 11
From colliding to dying is a small step. Once the player collides with an enemy we kill him by removing the player object. A "Press Fire to Restart" message is displayed and a press on the button will revive the player object.
We add the function RemoveObject which simply removes the object from the SPRITE_ACTIVE table and disables its sprite. While we wait for the player to press the button all the rest of the game moves on.
First of all we add the getting-killed part to our old routine CheckCollisions. Nothing ground-breaking: a call to the text display function, followed by removing the object and resetting the button-released flag.
```
.PlayerCollidedWithEnemy
          ;display text
          lda #<TEXT_PRESS_FIRE
          sta ZEROPAGE_POINTER_1
          lda #>TEXT_PRESS_FIRE
          sta ZEROPAGE_POINTER_1 + 1
          lda #10
          sta PARAM1
          lda #23
          sta PARAM2
          jsr DisplayText

          ldx #0
          stx BUTTON_PRESSED
          stx BUTTON_RELEASED
          jsr RemoveObject
          rts
```
A new call is added to the main game loop which controls behaviour when the player is dead:

```
GameLoop
          jsr WaitFrame
          jsr DeadControl
          jsr ObjectControl
          jsr CheckCollisions
          jmp GameLoop
```

Surprisingly easy. We check if the player is really dead; if he isn't, bail out. Then we check for the joystick button being pressed, but only allow the restart if the button has been released before. If all that happened, we simply force the player object back into life (for now with hard-coded values).
```
!zone DeadControl
DeadControl
          lda SPRITE_ACTIVE
          beq .PlayerIsDead
          rts

.PlayerIsDead
          lda #$10
          bit $dc00
          bne .ButtonNotPressed

          ;button pushed
          lda BUTTON_RELEASED
          bne .Restart
          rts

.ButtonNotPressed
          lda #1
          sta BUTTON_RELEASED
          rts

.Restart
          lda #5
          sta PARAM1
          lda #4
          sta PARAM2
          ;type
          lda #TYPE_PLAYER
          sta PARAM3

          ldx #0
          lda PARAM3
          sta SPRITE_ACTIVE,x

          ;PARAM1 and PARAM2 hold x,y already
          jsr CalcSpritePosFromCharPos

          ;enable sprite
          lda BIT_TABLE,x
          ora VIC_SPRITE_ENABLE
          sta VIC_SPRITE_ENABLE

          ;initialise player values
          lda #SPRITE_PLAYER
          sta SPRITE_POINTER_BASE,x

          ;look right per default
          lda #0
          sta SPRITE_DIRECTION,x
          rts
```
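The button-released flag is a simple edge trigger: holding fire through the death must not instantly restart the game. A hypothetical Python sketch of that idea (state layout invented for illustration; the real code also resets the flag on death in CheckCollisions):

```python
# Sketch of DeadControl's restart gate: the fire button only revives the
# player after it has been seen *released* at least once since dying.
def dead_control(state, fire_pressed):
    """state: dict with 'alive' and 'button_released' flags."""
    if state["alive"]:
        return
    if not fire_pressed:
        state["button_released"] = True   # remember the release
    elif state["button_released"]:
        state["alive"] = True             # revive the player
        state["button_released"] = False
```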
step11.zip
Previous Step 10 Next Step
## A C64 game - Step 12
One of the more complex steps. And also one I someday need to heavily optimize. The player can now shoot an enemy.
The central function for this is FireShot. We don't use a bullet but an instant shot. However, walls should block the shot as well. This means we need to take the current player direction and position and work our way to the left/right until we hit an enemy or a wall.
Since there's no direct collision involved we take the character position of the player, increase or decrease the x pos and compare it against every living enemy's position. Rinse and repeat until done.
I've given the enemies 5 HP, and the player has a shot delay of 10 frames. Therefore it takes a while for the enemy to disappear (best tested with the box on top). If the player is purple the shot delay is active.
```
          lda PLAYER_SHOT_PAUSE
          bne .FirePauseActive
          lda #1
          sta VIC_SPRITE_COLOR

          lda #$10
          bit $dc00
          bne .NotFirePushed
          jsr FireShot
          jmp .FireDone

.FirePauseActive
          dec PLAYER_SHOT_PAUSE
.FireDone
.NotFirePushed
```
This simply checks for PLAYER_SHOT_PAUSE. If it is higher than 0 the player is still pausing. If so the counter is decreased and the fire function skipped. If the counter is zero, we check the fire button and if pressed call the FireShot routine.
The FireShot routine is not that complicated, however it's taking its processing time. First set the fire pause to 10 frames. Mark the player as shooting by changing his color.
Now the hard part. There is no visible bullet. So we take the current player position, increase/decrease X and check for a blocking char or a hittable enemy. If the bullet is blocked, done. If an enemy is hit, decrease its health by one point. Once the health is down to zero the enemy is removed.
```
!zone FireShot
FireShot
          ;frame delay until next shot
          lda #10
          sta PLAYER_SHOT_PAUSE

          ;mark player as shooting
          lda #4
          sta VIC_SPRITE_COLOR

          ldy SPRITE_CHAR_POS_Y
          dey
          lda SCREEN_LINE_OFFSET_TABLE_LO,y
          sta ZEROPAGE_POINTER_1
          lda SCREEN_LINE_OFFSET_TABLE_HI,y
          sta ZEROPAGE_POINTER_1 + 1

          ldy SPRITE_CHAR_POS_X
.ShotContinue
          lda SPRITE_DIRECTION
          beq .ShootRight

          ;shooting left
          dey
          lda (ZEROPAGE_POINTER_1),y
          jsr IsCharBlocking
          bne .ShotDone
          jmp .CheckHitEnemy

.ShootRight
          iny
          lda (ZEROPAGE_POINTER_1),y
          jsr IsCharBlocking
          bne .ShotDone

.CheckHitEnemy
          ;hit an enemy?
          ldx #1
.CheckEnemy
          stx PARAM2
          lda SPRITE_ACTIVE,x
          beq .CheckNextEnemy
          tax
          lda IS_TYPE_ENEMY,x
          beq .CheckNextEnemy

          ;sprite pos matches on x?
          ldx PARAM2
          sty PARAM1
          lda SPRITE_CHAR_POS_X,x
          cmp PARAM1
          bne .CheckNextEnemy

          ;sprite pos matches on y?
          lda SPRITE_CHAR_POS_Y,x
          cmp SPRITE_CHAR_POS_Y
          beq .EnemyHit

          ;sprite pos matches on y + 1?
          clc
          adc #1
          cmp SPRITE_CHAR_POS_Y
          beq .EnemyHit

          ;sprite pos matches on y - 1?
          sec
          sbc #2
          cmp SPRITE_CHAR_POS_Y
          bne .CheckNextEnemy

.EnemyHit
          ;enemy hit!
          dec SPRITE_HP,x
          lda SPRITE_HP,x
          beq .EnemyKilled
          jmp .ShotDone

.EnemyKilled
          jsr RemoveObject
          jmp .ShotDone

.CheckNextEnemy
          ldx PARAM2
          inx
          cpx #8
          bne .CheckEnemy
          jmp .ShotContinue

.ShotDone
          rts
```

step12.zip
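The instant-shot scan is essentially a one-dimensional ray march along a screen row. A hypothetical Python sketch of that core idea (the row/dict representation and return values are invented for illustration):

```python
# Sketch of FireShot's scan: starting at the player's character position,
# step left or right one char at a time until a blocking char or an
# enemy is hit -- there is no visible bullet at all.
def fire_shot(row, start_x, direction, enemies, is_blocking):
    """row: list of chars; direction: +1 right, -1 left.
    enemies: dict mapping char x -> hit points.
    Returns ('wall', x), ('hit', x) or ('miss', None)."""
    x = start_x
    while 0 <= x + direction < len(row):
        x += direction
        if is_blocking(row[x]):
            return ("wall", x)          # wall blocks the shot
        if x in enemies:
            enemies[x] -= 1             # one point of damage per shot
            return ("hit", x)
    return ("miss", None)
```

With the enemies at 5 HP and a 10-frame shot pause, it indeed takes a while for an enemy to disappear, matching the post.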
Previous Step Next Step
## A C64 game - Step 14
And onwards we go. Picking up items showed a new problem. When an item is picked up we want the background behind restored (plus the item characters should not cut holes into the play field, allowing the player to fall through floors).
In the end I decided to have a second back buffer screen that contains the original play screen. Now every time an item is removed the character blocks are copied from the back buffer. Also, the back buffer is now used for collision detection. I could not avoid having to redraw the still existing item images in case the removed item was overlapping.
Effectively we double the work during level building. We start out with the new "buffers":
```
;address of the screen back buffer
SCREEN_BACK_CHAR  = $C800

;address of the color back buffer
SCREEN_BACK_COLOR = $C400
```
After calling the BuildScreen subroutine we copy the screen and color RAM to the backup buffers. Note the loop counter of 230: the play field is only 40x23 characters, so only 4 * 230 = 920 bytes are needed per buffer.

```
          ;copy level data to back buffer
          ldx #$00
.ClearLoop
          lda SCREEN_CHAR,x
          sta SCREEN_BACK_CHAR,x
          lda SCREEN_CHAR + 230,x
          sta SCREEN_BACK_CHAR + 230,x
          lda SCREEN_CHAR + 460,x
          sta SCREEN_BACK_CHAR + 460,x
          lda SCREEN_CHAR + 690,x
          sta SCREEN_BACK_CHAR + 690,x
          inx
          cpx #230
          bne .ClearLoop

          ldx #$00
.ColorLoop
          lda SCREEN_COLOR,x
          sta SCREEN_BACK_COLOR,x
          lda SCREEN_COLOR + 230,x
          sta SCREEN_BACK_COLOR + 230,x
          lda SCREEN_COLOR + 460,x
          sta SCREEN_BACK_COLOR + 460,x
          lda SCREEN_COLOR + 690,x
          sta SCREEN_BACK_COLOR + 690,x
          inx
          cpx #230
          bne .ColorLoop
```
The item removal function is thus modified to simply copy the character and color values back from the backup buffer:
;------------------------------------------------------------ ;remove item image from screen ;Y = item index ;------------------------------------------------------------ !zone RemoveItemImage RemoveItemImage sty PARAM2 ;set up pointers lda ITEM_POS_Y,y tay lda SCREEN_LINE_OFFSET_TABLE_LO,y sta ZEROPAGE_POINTER_1 sta ZEROPAGE_POINTER_2 sta ZEROPAGE_POINTER_3 sta ZEROPAGE_POINTER_4 lda SCREEN_LINE_OFFSET_TABLE_HI,y sta ZEROPAGE_POINTER_1 + 1 clc adc #( ( SCREEN_COLOR - SCREEN_CHAR ) & 0xff00 ) >> 8 sta ZEROPAGE_POINTER_2 + 1 sec sbc #( ( SCREEN_COLOR - SCREEN_BACK_CHAR ) & 0xff00 ) >> 8 sta ZEROPAGE_POINTER_3 + 1 sec sbc #( ( SCREEN_BACK_CHAR - SCREEN_BACK_COLOR ) & 0xff00 ) >> 8 sta ZEROPAGE_POINTER_4 + 1 ldx PARAM2 ldy ITEM_POS_X,x ;... and copying lda (ZEROPAGE_POINTER_4),y sta (ZEROPAGE_POINTER_2),y lda (ZEROPAGE_POINTER_3),y sta (ZEROPAGE_POINTER_1),y iny lda (ZEROPAGE_POINTER_4),y sta (ZEROPAGE_POINTER_2),y lda (ZEROPAGE_POINTER_3),y sta (ZEROPAGE_POINTER_1),y tya clc adc #39 tay lda (ZEROPAGE_POINTER_4),y sta (ZEROPAGE_POINTER_2),y lda (ZEROPAGE_POINTER_3),y sta (ZEROPAGE_POINTER_1),y iny lda (ZEROPAGE_POINTER_4),y sta (ZEROPAGE_POINTER_2),y lda (ZEROPAGE_POINTER_3),y sta (ZEROPAGE_POINTER_1),y ;repaint other items to avoid broken overlapped items ldx #0 .RepaintLoop lda ITEM_ACTIVE,x cmp #ITEM_NONE beq .RepaintNextItem txa pha jsr PutItemImage pla tax .RepaintNextItem inx cpx #ITEM_COUNT bne .RepaintLoop ldy PARAM2 rts
step14.zip
Previous Step Next Step
## A C64 game - Step 18
This time we add some enemy animation and a path based movement enemy type. The movement path is stored in a table of delta X and delta Y values. Values with the highest bit set are treated as negative.
The animation of the bat is also stored in a table (it's a simple ping pong loop).
Every object gets an animation delay (SPRITE_ANIM_DELAY), an animation position (SPRITE_ANIM_POS) and a movement position counter (SPRITE_MOVE_POS).
Remember, adding a new type means just adding the new constant and entries to the startup value tables.
If you wonder about the flickering white border on the bottom half: It's an easy way to see how long the actual per frame code runs. You'll notice more complex code taking quite a bit more time.
Here's a detailed look at the path code. It's actually pretty straightforward: read the next byte, check whether the high bit is set, and use the result to move either left or right. Rinse and repeat for Y.
;------------------------------------------------------------ ;move in flat 8 ;------------------------------------------------------------ !zone BehaviourBat8 BehaviourBat8 ;do not update animation too fast lda DELAYED_GENERIC_COUNTER and #$3 bne .NoAnimUpdate inc SPRITE_ANIM_POS,x lda SPRITE_ANIM_POS,x and #$3 sta SPRITE_ANIM_POS,x tay lda BAT_ANIMATION,y sta SPRITE_POINTER_BASE,x .NoAnimUpdate inc SPRITE_MOVE_POS,x lda SPRITE_MOVE_POS,x and #31 sta SPRITE_MOVE_POS,x ;process next path pos tay lda PATH_8_DX,y beq .NoXMoveNeeded sta PARAM1 and #$80 beq .MoveRight ;move left lda PARAM1 and #$7f sta PARAM1 .MoveLeft jsr MoveSpriteLeft dec PARAM1 bne .MoveLeft jmp .XMoveDone .MoveRight jsr MoveSpriteRight dec PARAM1 bne .MoveRight .NoXMoveNeeded .XMoveDone ldy SPRITE_MOVE_POS,x lda PATH_8_DY,y beq .NoYMoveNeeded sta PARAM1 and #$80 beq .MoveDown ;move up lda PARAM1 and #$7f sta PARAM1 .MoveUp jsr MoveSpriteUp dec PARAM1 bne .MoveUp rts .MoveDown jsr MoveSpriteDown dec PARAM1 bne .MoveDown .NoYMoveNeeded rts The tables themselves are handmade. For the planned path we just need to make sure we end up where we started: PATH_8_DX !byte $86 !byte$86 !byte $85 !byte$84 !byte $83 !byte$82 !byte $81 !byte 0 !byte 0 !byte 1 !byte 2 !byte 3 !byte 4 !byte 5 !byte 6 !byte 6 !byte 6 !byte 6 !byte 5 !byte 4 !byte 3 !byte 2 !byte 1 !byte 0 !byte 0 !byte$81 !byte $82 !byte$83 !byte $84 !byte$85 !byte $86 !byte$86 PATH_8_DY !byte 0 !byte 1 !byte 2 !byte 3 !byte 4 !byte 5 !byte 6 !byte 6 !byte 6 !byte 6 !byte 5 !byte 4 !byte 3 !byte 2 !byte 1 !byte 0 !byte 0 !byte $81 !byte$82 !byte $83 !byte$84 !byte $85 !byte$86 !byte $86 !byte$86 !byte $86 !byte$85 !byte $84 !byte$83 !byte $82 !byte$81 !byte 0 step18.zip
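The sign convention (high bit = negative) and the requirement that a looping path's deltas sum to zero can be checked outside the emulator. A hypothetical Python sketch, using the PATH_8_DX values from the post:

```python
# Decode a path delta byte: the high bit ($80) flags a negative move,
# the low 7 bits carry the magnitude.
def decode_delta(byte):
    return -(byte & 0x7F) if byte & 0x80 else byte

# PATH_8_DX from the post; for a closed loop the decoded deltas
# must sum to 0, otherwise the bat would drift over time.
PATH_8_DX = [0x86, 0x86, 0x85, 0x84, 0x83, 0x82, 0x81, 0,
             0, 1, 2, 3, 4, 5, 6, 6,
             6, 6, 5, 4, 3, 2, 1, 0,
             0, 0x81, 0x82, 0x83, 0x84, 0x85, 0x86, 0x86]
```

The same zero-sum property holds for PATH_8_DY, which is what "make sure we end up where we started" means in practice.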
Previous Step Next Step
## A C64 Game - Step 99
And of course lots of little bugs were found and fixed
-Live number display was off on a new level.
-Beams would sometimes not be removed on the final boss
-Disable screen output when saving scores (IRQs go nuts if using kernal save routine)
-Cleaned up "extro" text
Have fun!
step99.zip
Previous Step Next Step
## A C64 game - Step 16
Now for a rather small addition which nevertheless feels like a bigger step: the score/lives/level display.
We already have a text display function, so we add a new default text for the display with initial 00000 values. Note that the score doesn't fit into a byte easily. We only update the numbers on the screen; we do not store the score in another location.
This makes it quite easy to update. For every step we start at the rightmost digit and increase it. If it passes '9', we set it back to '0' and repeat the step on the char to the left. For retro's sake we don't start at the rightmost score digit but the second rightmost (making increase steps always multiples of 10). If you look closely at a lot of older games you'll see that their rightmost score digit never changes (Bubble Bobble, etc.).
Small text entry:
Small text entry:

```
TEXT_DISPLAY
          !text " SCORE: 000000 LIVES: 03 LEVEL: 00 *"
```

The score increase bit:
```
;------------------------------------------------------------
;increases score by A
;note that the score is only shown on screen,
;not held in a variable
;------------------------------------------------------------
!zone IncreaseScore
IncreaseScore
          sta PARAM1
          stx PARAM2
          sty PARAM3

.IncreaseBy1
          ldx #4
.IncreaseDigit
          inc SCREEN_CHAR + ( 23 * 40 + 8 ),x
          lda SCREEN_CHAR + ( 23 * 40 + 8 ),x
          cmp #58
          bne .IncreaseBy1Done

          ;looped digit, increase next
          lda #48
          sta SCREEN_CHAR + ( 23 * 40 + 8 ),x
          dex
          ;TODO - this might overflow
          jmp .IncreaseDigit

.IncreaseBy1Done
          dec PARAM1
          bne .IncreaseBy1

          ;increase complete, restore x,y
          ldx PARAM2
          ldy PARAM3
          rts
```

Another neat effect is the display of the level number and lives. Due to the hard-coded screen positions I've made two specialized functions instead of a generic one.
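The digit-carry idea in IncreaseScore is easy to mirror in a higher-level sketch. A hypothetical Python version (the assembly works on screen character codes and starts at the tens digit; this sketch increments by ones on a plain char list for clarity):

```python
# Sketch of the score update: the score lives only in screen characters,
# so adding points means incrementing digit chars right-to-left with a
# manual carry ('9' wraps back to '0' and carries into the next digit).
def increase_score(digits, amount):
    """digits: list of single chars, most significant first."""
    for _ in range(amount):
        i = len(digits) - 1
        while i >= 0:
            if digits[i] == '9':
                digits[i] = '0'       # carry into the digit to the left
                i -= 1
            else:
                digits[i] = chr(ord(digits[i]) + 1)
                break
    return digits
```

Like the assembly (see its TODO), this sketch does not handle overflow past the leftmost digit.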
Interesting anecdote:
When I first had to display a decimal number I was stumped: there is no div instruction available. You actually have to divide yourself (subtract the divisor and increase a count until done). That's what the call to DivideBy10 does.
;------------------------------------------------------------ ;displays level number ;------------------------------------------------------------ !zone DisplayLevelNumber DisplayLevelNumber lda LEVEL_NR clc adc #1 jsr DivideBy10 pha ;10 digit tya clc adc #48 sta SCREEN_CHAR + ( 23 * 40 + 37 ) pla clc adc #48 sta SCREEN_CHAR + ( 23 * 40 + 38 ) rts ;------------------------------------------------------------ ;displays live number ;------------------------------------------------------------ !zone DisplayLiveNumber DisplayLiveNumber lda PLAYER_LIVES jsr DivideBy10 pha ;10 digit tya clc adc #48 sta SCREEN_CHAR + ( 23 * 40 + 24 ) pla clc adc #48 sta SCREEN_CHAR + ( 23 * 40 + 25 ) rts ;------------------------------------------------------------ ;divides A by 10 ;returns remainder in A ;returns result in Y ;------------------------------------------------------------ !zone DivideBy10 DivideBy10 sec ldy #$FF .divloop iny sbc #10 bcs .divloop adc #10 rts step16.zip Previous Step Next Step ## A C64 game - Step 15 Now we start with a few game essentials: Progressing to the next level. Not too complicated. We keep a counter of enemies alive (NUMBER_ENEMIES_ALIVE) which is initially set to 0. Since we already have a lookup table IS_TYPE_ENEMY we simply add a check inside AddObject. If the new object is an enemy, increase the counter: ;adjust enemy counter ldx PARAM3 lda IS_TYPE_ENEMY,x beq .NoEnemy inc NUMBER_ENEMIES_ALIVE .NoEnemy The other spot where this comes in is when we kill an enemy. Inside our FireShot routine we add: .EnemyKilled ldy SPRITE_ACTIVE,x lda IS_TYPE_ENEMY,y beq .NoEnemy dec NUMBER_ENEMIES_ALIVE .NoEnemy jsr RemoveObject For the level change we add a new control routine (GameFlowControl) in the main game loop. Once the enemy count reaches 0 we increase a level done delay so we don't immediately jump onwards. Once reached, disable all sprites, build next level and continue. For now there's two simple levels with the last looping back to the first. 
```
;------------------------------------------------------------
;the main game loop
;------------------------------------------------------------
GameLoop
          jsr WaitFrame
          jsr GameFlowControl
          jsr DeadControl
          jsr ObjectControl
          jsr CheckCollisions
          jmp GameLoop

;------------------------------------------------------------
;controls the game flow
;------------------------------------------------------------
!zone GameFlowControl
GameFlowControl
          inc DELAYED_GENERIC_COUNTER
          lda DELAYED_GENERIC_COUNTER
          cmp #8
          bne .NoTimedActionYet
          lda #0
          sta DELAYED_GENERIC_COUNTER

          ;level done delay
          lda NUMBER_ENEMIES_ALIVE
          bne .NotDoneYet
          inc LEVEL_DONE_DELAY
          lda LEVEL_DONE_DELAY
          cmp #20
          beq .GoToNextLevel
          inc VIC_BORDER_COLOR

.NotDoneYet
.NoTimedActionYet
          rts

.GoToNextLevel
          lda #0
          sta VIC_SPRITE_ENABLE
          inc LEVEL_NR
          jsr BuildScreen
          jsr CopyLevelToBackBuffer
          rts
```

step15.zip

Previous Step Next Step

## A C64 Game - Step 45

In this version not much is added to the code. However, an external level editor (a Windows executable) was added, which helps a lot in churning out pretty levels faster.

The most prominent addition are level elements. Before, the level was built mostly of simple primitives (a line of one character, etc.). With the editor, so-called Elements are added. These consist of a variable-sized character and color block. Elements can be arranged as single objects, lines or areas. This helps a lot in reusing bigger level parts and keeping memory usage down.

The elements are stored in several tables: a major lookup table that points to an element's character and color tables, and two lookup tables holding an element's width and height. The editor tries to fold element tables into each other to save memory. For example, if there's a big brick sized 4x2 characters and you have two smaller elements showing the left and right halves of the brick, the element data is reused.

Note that the element area code is not implemented in this step.
Moral of the story: As soon as you see you're going ahead with a project having an easy to use editor is quite important. It aids you in faster content creation, faster testing and overall usually prettier results. Nothing crushes productivity better than annoying tools (or manual boring work without tools). ;------------------------------------------------------------ ;draws a level element ;PARAM1 = X ;PARAM2 = Y ;PARAM3 = TYPE ;returns element width in PARAM4 ;returns element height in PARAM5 ;------------------------------------------------------------ !zone DrawLevelElement DrawLevelElement ldy PARAM3 lda SNELEMENT_TABLE_LO,y sta .LoadCode + 1 lda SNELEMENT_TABLE_HI,y sta .LoadCode + 2 lda SNELEMENT_COLOR_TABLE_LO,y sta .LoadCodeColor + 1 lda SNELEMENT_COLOR_TABLE_HI,y sta .LoadCodeColor + 2 lda SNELEMENT_WIDTH_TABLE,y sta PARAM4 lda SNELEMENT_HEIGHT_TABLE,y sta PARAM5 sta PARAM6 ldy PARAM2 lda SCREEN_LINE_OFFSET_TABLE_LO,y clc adc PARAM1 sta .StoreCode + 1 sta .StoreCodeColor + 1 sta ZEROPAGE_POINTER_4 lda SCREEN_LINE_OFFSET_TABLE_HI,y adc #0 sta .StoreCode + 2 adc #( ( >SCREEN_COLOR ) - ( >SCREEN_CHAR ) ) sta .StoreCodeColor + 2 .NextRow ldx #0 ;display a row .Row .LoadCode lda$8000,x .StoreCode sta $8000,x .LoadCodeColor lda$8000,x .StoreCodeColor sta \$8000,x inx cpx PARAM4 bne .Row ;eine zeile nach unten dec PARAM6 beq .ElementDone ;should be faster? 
lda .LoadCode + 1 clc adc PARAM4 sta .LoadCode + 1 lda .LoadCode + 2 adc #0 sta .LoadCode + 2 lda .LoadCodeColor + 1 clc adc PARAM4 sta .LoadCodeColor + 1 lda .LoadCodeColor + 2 adc #0 sta .LoadCodeColor + 2 lda .StoreCode + 1 clc adc #40 sta .StoreCode + 1 lda .StoreCode + 2 adc #0 sta .StoreCode + 2 lda .StoreCodeColor + 1 clc adc #40 sta .StoreCodeColor + 1 lda .StoreCodeColor + 2 adc #0 sta .StoreCodeColor + 2 jmp .NextRow .ElementDone rts !zone LevelElement LevelElement LevelElementArea ; !byte LD_ELEMENT,0,0,EL_BLUE_BRICK_4x3 ;X pos iny lda (ZEROPAGE_POINTER_1),y sta PARAM1 ;Y pos iny lda (ZEROPAGE_POINTER_1),y sta PARAM2 ;type iny lda (ZEROPAGE_POINTER_1),y sta PARAM3 ;store y for later tya pha jsr DrawLevelElement jmp NextLevelData
The element line primitives are very similar, they just loop over the element draw routine: !zone LevelElementH LevelElementH ; !byte LD_ELEMENT_LINE_H,x,y,width,element ;X pos iny lda (ZEROPAGE_POINTER_1),y sta PARAM1 ;Y pos iny lda (ZEROPAGE_POINTER_1),y sta PARAM2 ;x count iny lda (ZEROPAGE_POINTER_1),y sta PARAM7 ;type iny lda (ZEROPAGE_POINTER_1),y sta PARAM3 ;store y for later tya pha .NextElement jsr DrawLevelElement dec PARAM7 beq .Done lda PARAM1 clc adc PARAM4 sta PARAM1 jmp .NextElement .Done jmp NextLevelData !zone LevelElementV LevelElementV ; !byte LD_ELEMENT_LINE_V,x,y,num,element ;X pos iny lda (ZEROPAGE_POINTER_1),y sta PARAM1 ;Y pos iny lda (ZEROPAGE_POINTER_1),y sta PARAM2 ;y count iny lda (ZEROPAGE_POINTER_1),y sta PARAM7 ;type iny lda (ZEROPAGE_POINTER_1),y sta PARAM3 ;store y for later tya pha .NextElement jsr DrawLevelElement dec PARAM7 beq .Done lda PARAM2 clc adc PARAM5 sta PARAM2 jmp .NextElement .Done jmp NextLevelData
The editor exports the level structure to a separate file, this is then included in the main file via the !source macro. step45.zip Previous Step Next Step
## A C64 Game - Step 69
And yet again, the next bunch of stages, this time a deserty area.
Have fun! step69.zip
Previous Step Next Step
## A C64 Game - Step 50
In this step the powerups for range increase and decrease reload delay are made permanent. They won't fade with time.
You can also collect up to five extras to reach the maximum. And the powerup stays even when you get killed. (Don't you hate it when you die in Bubble Bobble and you're slow again).
Previously we had a flag that stored the time left of a faster reload. The max times are now kept in a table, and a faster reload step value is stored instead.
The speed table RELOAD_SPEED_MAX and the other counters for this update:

```
PLAYER_RELOAD_SPEED
          !byte 0

RELOAD_SPEED
          !byte 1,1,1,1,1

RELOAD_SPEED_MAX
          !byte 40,35,30,25,20
```
Initialising on game restart:

```
          lda #0
          sta PLAYER_RELOAD_SPEED
```
While standing still the reload speed value is now used to count down the time:

```
          ldy PLAYER_RELOAD_SPEED
          lda PLAYER_STAND_STILL_TIME
          clc
          adc RELOAD_SPEED,y
          cmp RELOAD_SPEED_MAX,y
          bcs .ReloadTimeDone
          sta PLAYER_STAND_STILL_TIME
          jmp .HandleFire

.ReloadTimeDone
          lda #0
          sta PLAYER_STAND_STILL_TIME
```
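The table-driven timing is easy to sanity-check: the upgrade level indexes per-level step and maximum tables, so higher levels need fewer frames. A hypothetical Python sketch (function name and return shape invented for illustration):

```python
# Sketch of the reload timing: the counter advances by RELOAD_SPEED[level]
# each frame and the reload completes when RELOAD_SPEED_MAX[level] is hit.
RELOAD_SPEED     = [1, 1, 1, 1, 1]
RELOAD_SPEED_MAX = [40, 35, 30, 25, 20]

def reload_tick(level, counter):
    """Advance the reload counter; return (new_counter, reload_done)."""
    counter += RELOAD_SPEED[level]
    if counter >= RELOAD_SPEED_MAX[level]:
        return 0, True
    return counter, False
```

With these values, level 0 completes a reload in 40 frames and the maximum level in 20, so collecting extras halves the wait.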
When picking up the increase-reload-speed extra the counter needs to be updated:

```
          lda PLAYER_RELOAD_SPEED
          cmp #4
          beq .SpeedHighestAlready
          inc PLAYER_RELOAD_SPEED
.SpeedHighestAlready
```
Making the force range increase permanent is even easier: we simply remove all instances where it was reset to the start value on respawning or starting the next level. step50.zip
Previous Step Next Step
## A C64 Game - Step 48
This update doesn't really add anything mentionable code wise, however there's a full chapter of 10 stages.
Have fun!
step48.zip
http://tex.stackexchange.com/users/9524/fluffy?tab=activity&sort=comments
## fluffy

Reputation: 238
- **Sep 9** — *typing with latex*: It would be helpful if you provided the error. Also, you seem to be missing a number of \s and \$s, and you've misspelled documentclass and a4paper.
- **Aug 20** — *Using a custom TrueType font in spans of text*: @LeoLiu I'm just using pdfTeX because of familiarity and not bothering to learn about the newer things. :) I'll look into them now, thanks.
- **Aug 20** — *Using a custom TrueType font in spans of text*: @LeoLiu I'm not a TeX beginner - this just isn't something I've had to do before (and it seems like nobody ever does this). Does that linked-to article also apply to TeXLive on OSX? I'm not a fan of mucking about with packaged installation files. There's a reason I don't run Slackware anymore. :)
- **Aug 20** — *Using a custom TrueType font in spans of text*: @karlhoeller Unfortunately, no, the stuff there didn't help. It's not actually selecting the font I provided - it's just complaining about a missing T5xxx.fd file, and that t.se page doesn't explain anything about generating them.
- **Aug 20** — *Using a custom TrueType font in spans of text*: @karlhoeller That looks like it might be helpful, thanks. Why didn't that come up in my extensive searching?
- **Jul 25** — *Separating plural possessive apostrophe from closing double-quote*: Thanks. For some reason the linked-to one didn't come up in my search - I figured someone would have asked it before but I had a hell of a time finding it.
- **Jan 24** — *How can I explain the meaning of LaTeX to my grandma?*: Just because LaTeX was designed for technical documents doesn't mean that's all it's good for. I laid out a comic book in LaTeX. (Because scripting makes things easier.)
- **Nov 13** — *Change background colour for entire document*: Somewhat off topic, but the best way to change the background color of an entire document is to print it on colored paper. :)
- **Feb 13** — *How can I introduce a non-technical person to LaTeX?*: +1 here as well. I used LyX to write my various academic papers back when I was still an academic and it certainly saved me a lot of grief.
- **Dec 17** — *The effect of the anonymous letter*: Man, that kerning is TERRIBLE.
- **Nov 28** — *How to replace chapter boilerplate with full-page image?*: Okay, good to know. I guess I should have just tried it straight away. Sorry to be such a bother.
- **Nov 28** — *How to replace chapter boilerplate with full-page image?*: So, I finally tried switching over to this method (since it seems a bit cleaner), but I couldn't figure out how to use \AddToShipoutPicture or \includegraphics or whatever instead of \includepdf (since my images are in .png format, not .pdf).
- **Nov 26** — *How to replace chapter boilerplate with full-page image?*: Oh, so it is, thanks. Sorry about that. I think I just got overwhelmed by the verbosity of the first two approaches that I must have overlooked the third.
- **Nov 25** — *How to replace chapter boilerplate with full-page image?*: @egreg Yeah, the last \picturechapter macro is pretty decent. +1 for that. When I commented on this answer only the first example code was here. I definitely like \picturechapter a lot better than the other ones.
- **Nov 23** — *How to replace chapter boilerplate with full-page image?*: That certainly seems to be the proper TeX-y way to do it: overwrought, overly complicated, and requires a Makefile to update things correctly. ;)
- **Nov 23** — *How to replace chapter boilerplate with full-page image?*: For now just doing \def\chapterimage{chapter-1.png} \chapter{Chapter Title} seems to work. Thanks!
- **Nov 23** — *How to replace chapter boilerplate with full-page image?*: Thanks! How do I define \chapterimage? Just doing \chapter[chapterimage=foo.png]{ChapterTitle} didn't work (I got a macro error, "Undefined control sequence \chapterimage"). Also I'm including my individual chapters in separate .tex files, if that makes a difference.
http://www.transtutors.com/questions/financial-accounting-theory-352352.htm
# Financial Accounting Theory
Hi,
I need this assignment done. I believe you have already done it for some previous students. Please don't give me a copy of that, as Turnitin will catch it.
Attachments:
## Solutions:
Financial Accounting Theory
“Think Before You Spend”

"Think before you spend" is a critical but often-overlooked question, even for the things you like most. Spending your earnings on beneficial products that add lasting value is a sound approach; to avoid wasting our hard-earned money, we should think twice before spending on unnecessary things or products.

This applies especially to some of the product categories covered in "The Rough Guide to Ethical Shopping", where the first question should always be: "Is this really needed?" The product categories are:

Beverages:

Cooling our throats by walking around with a can of soft drink or hard drink in hand is not good sense from a financial-management perspective.

- As per positive theory, beverages undoubtedly increase revenues by around 22% to 28% in the hotel and restaurant industries, with a 15% to 35% increase in sales for fine dining and restaurants.
- As per normative, privately economic-based theory, the beverage markets are expected to expand at a CAGR of 4.2% to 4.3%.
- As per public interest and economic market group theory, Americans consume 13 billion gallons of soda, which increases disease and mortality rates in the country.

Smoking:

Every packet of cigarettes carries the note "Cigarette smoking is injurious to health", so why adopt a habit that costs us not only in wealth but in health too?

- As a result for positive theory, according to a research survey by Wells Fargo Securities, average sales rates for merchandise are 45% to 50%, with gross margins of 20% to 30% in past years.
- As per normative and private economic theory, current annual revenue is $400-500.
- If we consider public theory under the standards, the mortality rate in America is 23.5 percent in men and 17.9 percent in women, and it is rising day by day.
- This gives high credibility to the market managers, with sales of nearly $1B.
https://gharpedia.com/plastic-settlement-cracks/
## Plastic Settlement Cracks: All you Wanted to Know
Plastic settlement cracks appear in freshly placed cement concrete in reinforced structures. They occur at the surface before the concrete has set, when there is a relatively high amount of bleeding combined with some form of obstruction (e.g. reinforcement bars) to the downward sedimentation of the solids.
Courtesy - Gujarat Ambuja Cements Ltd.
Fresh concrete, when placed in deep formwork such as that of a column or wall, has a tendency to settle or subside. If this settlement is restrained by an obstruction such as steel bars or large aggregates, short horizontal cracks form. These obstructions also break the back of the concrete above them, forming voids beneath them. The cracking caused by this restrained subsidence is termed plastic settlement cracking.
### Typical Types of Plastic Settlement Cracks:
If the settlement of the solids in the concrete can occur freely and without restraint, there will be a reduction in the volume or depth of the concrete cast, but no settlement cracking will occur. Where the settlement is restrained by steel bars or large aggregates, plastic settlement cracks form.
Some typical plastic settlement cracks are illustrated in the figures below:
Reinforcement near the top of a concrete section can cause plastic settlement cracking. If the formwork is relatively narrow, the concrete may arch or wedge itself across the top of the narrow space and develop cracks.
Plastic settlement cracks can also be caused by pronounced changes of section, such as the cracking below a flared column head.
The changes in section of trough and waffle floor slabs cause more settlement in the webs than in the comparatively thin flanges, resulting in cracking.
If the sub-base or other material against which concrete is being placed is very absorbent like, for example, dry soil or absorbent formwork, an exaggerated type of plastic settlement cracking is likely to occur. In such cases the cracks will usually follow the reinforcement pattern and run parallel to each other.
### Major Causes of Plastic Settlement Cracks:
• Reduction in the volume of the cement-water system due to bleeding and segregation
• Internal restrainment due to reinforcement steel or large size aggregate
• External restrainment due to relatively narrow formwork
• Pronounced change of concrete cross section
• Absorbent sub base or formwork surface
• Flared column heads, troughs and waffles in floor slabs
• Bulging or settlement of formwork and supporting system
The above factors can singly or collectively cause cracking of concrete.
Summary of Plastic Settlement Cracks in Concrete:
### 01. Time of Appearance
• 10 to 180 minutes
### 02. Types/Locations
• Over formwork tie bolts, or over reinforcement near the top of the section.
• In narrow column and walls due to obstruction to sedimentation by resulting arching action of concrete due to a narrow passage.
• At the change of depth of the section.
• Arching on top of columns
### 03. Primary Causes
• Non-cohesive concrete mix
• Excess bleeding
• Over vibration
### 04. Secondary Causes
• Rapid Early Drying Conditions
(a) High ambient temperatures/Hot Sun
(b) Low Humidity
(c) High Wind velocity
### 05. Preventive Measures
• Increase the fines content in the fine aggregate and the cohesiveness of the concrete mix
• Avoid using gap graded materials
• Use air entrainment
• Do re-vibration for proper compaction
http://www.freemathhelp.com/algebra-formulas.html
# Common Algebra Formulas
Here are some of the most commonly used formulas in algebra. If you have one you'd like to be added, or find an error, please contact me.
## Laws of Exponents
• $$a^ma^n=a^{m+n}$$
• $$(a*b)^m=a^mb^m$$
• $$(a^m)^n=a^{mn}$$
• $$a^{\frac{m}{n}}=\sqrt[n]{a^m}$$
• $$a^0=1$$
• $$\frac{a^m}{a^n}=a^{m-n}$$
• $$a^{-m}=\frac{1}{a^m}$$
## Quadratic Formula

For an equation of the form $$ax^2+bx+c=0$$, you can solve for x using the Quadratic Formula:
$$x = \frac{-b \pm \sqrt{b^2-4ac}}{2a}$$
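As a quick sanity check, the formula is easy to evaluate in code (plain Python; real-root case only):

```python
import math

def solve_quadratic(a, b, c):
    """Real roots of ax^2 + bx + c = 0 via the quadratic formula."""
    disc = b * b - 4 * a * c   # the discriminant b^2 - 4ac
    if disc < 0:
        return ()              # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve_quadratic(1, -5, 6))   # x^2 - 5x + 6 = (x - 2)(x - 3), prints (3.0, 2.0)
```

When the discriminant is zero the two returned roots coincide, matching the repeated root of a perfect square trinomial.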
## Binomial Theorem
• $$(a+b)^1= a + b$$
• $$(a+b)^2=a^2+2ab+b^2$$
• $$(a+b)^3=a^3+3a^2b+3ab^2+b^3$$
• $$(a+b)^4=a^4+4a^3b+6a^2b^2+4ab^3+b^4$$
## Difference of Squares
• $$a^2-b^2=(a-b)(a+b)$$
## Rules of Zero
• $$\frac{0}{x} = 0\text{, where }x \neq 0.$$
• $$a^0=1$$
• $$0^a=0\text{, where }a>0.$$
• $$a*0 = 0$$
• $$\frac{a}{0}\text{ is undefined (you can't do it)}$$
http://openstudy.com/updates/5132858de4b0034bc1d78100
## A community for students. Sign up today
Here's the question you clicked on:
55 members online
• 0 viewing
## anonymous 3 years ago The capacitance of a parallel plate capacitor $C=\frac{Q}{V}$ $dV=-\int Edr$ $E=$ Delete Cancel Submit
• This Question is Closed
1. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
|dw:1362265815092:dw|
2. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
|dw:1362265934285:dw| should I use gauss's law to find the electric field?
3. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
yes .. use the gauss law. consider a single plate first.
4. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
$\oint E dA=\frac{Q}{\epsilon_0}$ |dw:1362266179361:dw| $Q=\sigma A$ $EL^2=\frac{Q}{\epsilon_0}=\frac{\sigma L^2}{\epsilon_0}$ $E=\frac{\sigma}{\epsilon_0}$
5. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
since we have two plates do we use $$2\sigma$$?
6. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
yeah yeah ... that;s the way ... let me add few things ... |dw:1362266439292:dw|
7. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
how should the flux lines go? fit it geometrically
8. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
|dw:1362266583983:dw|
9. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
yep!! Now what is flux??
10. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
$\sigma/\epsilon_0$? or do I need to consider the cylinder?
11. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
|dw:1362266689639:dw| yes ... it's okay even if you consider square or rectangle.
12. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
I used L^2 for area
13. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
everything is okay ... the answer is interesting ... If you have infinite charged sheet ... not matter how far you fly, the electric field will never decrease.
14. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
looks like you can use that on gravity too ...
15. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
should I try putting in values? Let's say 10cm=L and the plate seperation is 1mm. wait, so the electric field does not depend on the dimensions? would I need to substitute for sigma and epsilon to find the electric field if I'm only given geometric dimensions?
16. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
No ... calculate the charge density.
17. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
$\sigma=q/A= \frac{1.6\times10^{-19}}{.1^2}=1.6E-17 C/m^2$ $E=\frac {\sigma}{}$
18. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
looks like you are going in wrong direction ... can you state the original question
19. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
I think we're on the right track...once we have E we can solve for V and so on....should I leave E in variable form for now.
20. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
?
21. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
but what are we integration over....the distance between the plates?
22. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
Q <-- this should the be the total charge on the surface ... and A <-- this is the total area of the surface.
23. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
is this different from the gaussian surface? do I need different variable to use gauss's law?
24. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
yeah it looks like I haven't quite grasped the concept of Gauss's law
25. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
should we use the cylinder?
26. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
hmm ... what is your original question?
27. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
I have to find the capacitance of the parallel plate capacitor.
28. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
what information are given initially?
29. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
the plate is square and has length 10cm....the plates are separated by 1mm
30. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
ow ..
31. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
$dV=\int\frac{\sigma}{\epsilon_0}dA$
32. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
is that correct?
33. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
$V=\frac{2\sigma A}{\epsilon_0}$
34. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
Woops!! error
35. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
|dw:1362268232059:dw|
36. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
but would that still be $V=\frac{\sigma A}{\epsilon_0}$
37. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
|dw:1362268629160:dw|
38. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
Let me try ...One moment....
39. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
|dw:1362268884843:dw|
40. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
$\oint EdA=\frac Q{\epsilon_0}$ $E\pi r^2=\frac{Q}{\epsilon_0}$ $\rho=\frac Q {\pi r^2L}$ $E\pi r^2=\frac{\rho \pi r^2 L}{\epsilon_0}$
41. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
so the charge density is that of the cylinder?
42. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
$E=\frac{\rho L}{\epsilon_0}$
43. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
I completely missed the point didn't I?
44. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
use sigma for surface charge density.
45. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
|dw:1362269453040:dw|
46. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
oh so the flux is only though the circle... $$\phi$$ is of that surface....because we're looking for the flux through the plate, and we're using a cylinder to model that....I think I got it now
47. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
yes?
48. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
$\phi=\oint E\cdot dr=\frac{Q}{\epsilon_0}$
49. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
No ... we are not integrating over the circle.
50. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
closed integral from a to b?
51. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
|dw:1362269839865:dw|
52. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
to understand Gauss law ... you should understand flux. Find the total flux .. this case |dw:1362269948919:dw|
53. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
is this a circular surface of charge +q ?
54. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
no ... not circular ... spherical!!
55. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
the flux would be |dw:1362270059079:dw| oh ok...I guess we'll work with a sphere instead...Let's see...
56. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
consider ... the charge in enclosed by sphere.
57. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
|dw:1362270254679:dw| in all directions out of the sphere...the Electric field is perpendicular to the surface everywhere
58. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
|dw:1362270435029:dw|
59. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
yes ...
60. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
Yay!!! sigh...
61. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
what is the total flux?
62. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
zero?
63. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
the sum of all of the electric field lines?
64. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
zero
65. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
66. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
No ...
67. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
sum of all the Electric field lines $\phi=\oint E dr=\frac{Q}{\epsilon_0}$
68. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
volume?
69. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
You should understand what is flux first ... Flux is like ... |dw:1362270773063:dw|
70. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
the amount of water that flows through it is A times $$v$$
71. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
$\frac{meter^2}{1}\times\frac{meters}{second}$ that doesn't seem right
72. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
yeah .. that's correct. It's easy to visualize water ... |dw:1362271042142:dw|
73. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
yep
74. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
|dw:1362271173946:dw|
75. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
so the flux through the area would be the volume of the sphere times the electric field
76. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
I tried to make your circle look more like a sphere :P
77. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
78. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
oh surface area of the sphere!
79. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
yeah ... what is the total flux?
80. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
because the flux is through the surface of the sphere :D The total flux is the sum of all of (the surface areas multiplied by the electric field)
81. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
i meant surface area...not areas
82. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
|dw:1362271421314:dw|
83. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
what's s? surface area?
84. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
ds...?
85. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
|dw:1362271575577:dw|
86. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
|dw:1362271615713:dw|
87. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
what's that?
88. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
sphere.
89. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
oh are you saying that $\int ds=s$ yeah I believe that but is s ...oh i see sphere
90. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
we're integrating with respect to the sphere, correct?
91. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
the surface area of the sphere
92. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
yes ... but electric Field is constant .. since it's sphere.
93. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
yep
94. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
what would you limits be?
95. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
and we are integrating surface ...not distance. lol ... the limits ... don't worry
96. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
so $\oint \textrm{never has limits?}$
97. experimentX
• 3 years ago
Best Response
You've already chosen the best response.
1
|dw:1362271874243:dw|
98. anonymous
• 3 years ago
Best Response
You've already chosen the best response.
0
yeah I get that E is constant everywhere over the sphere
99. anonymous
surface*
100. experimentX
|dw:1362271927389:dw|
101. anonymous
oh because were integrating $\oint ds=s \;\;\;\textrm{we don't need to worry about the limits}$
102. anonymous
since the answer is just s
103. experimentX
|dw:1362272111497:dw|
104. anonymous
yep , makes sense....
105. anonymous
I finally understand flux.... :)
106. experimentX
Not complete yet!!
107. anonymous
|dw:1362272196631:dw|
108. experimentX
|dw:1362272206784:dw|
109. anonymous
i'm not gonna look
110. anonymous
not gonna look
111. anonymous
not gonna look
112. anonymous
ok fine lol
113. anonymous
keep going...i'm sorry
114. experimentX
Now ... |dw:1362272365512:dw|
115. experimentX
this is called conservation of flux.
116. experimentX
|dw:1362272482209:dw|
117. experimentX
|dw:1362272583908:dw|
118. experimentX
|dw:1362272690179:dw|
119. anonymous
through both spheres? yes
120. experimentX
This is why gravity ... and the electrostatic force follow inverse-square laws. Any shape ... but it must be closed, and the charge must be inside it.
121. anonymous
interesting! yeah $$E\propto \frac 1{r^2}$$ $F_g\propto \frac 1{r^2}$
122. anonymous
or is it $F_e\propto \frac 1{r^2}$ and $F_g\propto \frac 1{r^2}$
123. experimentX
just think that ... the charge is the mouth of the pipe in that water example ... and the outer envelope is the other hose the water comes out through. It is independent of the shape of the pipe.
124. experimentX
|dw:1362273084528:dw|
125. experimentX
|dw:1362273139951:dw|
126. anonymous
oh|dw:1362273163066:dw| flux is still the same
127. experimentX
yes ... the flux is still the same.
128. experimentX
|dw:1362273240830:dw|
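The Gauss's-law argument sketched in this thread — the field magnitude is constant over a concentric sphere, so the closed-surface integral reduces to E times the surface area, and the flux comes out the same at every radius — can be checked numerically. This is an illustrative sketch, not part of the original discussion; the charge value is arbitrary.

```python
from math import pi

# Illustrative sketch (not from the original thread): for a point charge q,
# the field magnitude E is constant over a concentric sphere, so the closed
# surface integral reduces to E times the surface area.  The resulting flux
# is q / eps0 regardless of the sphere's radius ("conservation of flux").

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def flux_through_sphere(q, r):
    """Flux of E through a sphere of radius r centered on a point charge q."""
    E = q / (4 * pi * EPS0 * r**2)  # inverse-square field magnitude
    return E * (4 * pi * r**2)      # E constant on the surface => integral = E * S

q = 1e-6  # 1 microcoulomb (arbitrary example value)
f1 = flux_through_sphere(q, 0.1)
f2 = flux_through_sphere(q, 10.0)
assert abs(f1 - f2) < 1e-6 * f1    # same flux through both spheres
print(f1, q / EPS0)                # both equal q / eps0
```

The inverse-square factor in E exactly cancels the r² growth of the surface area, which is the point made above about why such forces follow inverse-square laws.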
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 1.0000077486038208, "perplexity": 15915.975086271434}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-50/segments/1480698540839.46/warc/CC-MAIN-20161202170900-00319-ip-10-31-129-80.ec2.internal.warc.gz"}
|
https://time.ekitan.com/train/TimeStation/661-16_D2.shtml
|
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9999314546585083, "perplexity": 10428.521263353883}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912206677.94/warc/CC-MAIN-20190326220507-20190327002507-00528.warc.gz"}
|
https://mathoverflow.net/questions/93140/associative-yang-baxter-on-ug
|
# associative Yang-Baxter on U(g)
Consider $\mathfrak{g}$ a finite-dimensional Lie algebra over the field $\textbf{k}$. If $A$ is an associative algebra, we are searching for functions from $\mathbb{C}\times \mathbb{C}$ to $A\otimes A$ such that: $$r^{12}(-u',v)r^{13}(u+u',v+v')-r^{23}(u+u',v')r^{12}(u,v)+r^{13}(u,v+v')r^{23}(u',v')=0$$ for $u,u',v,v'\in\mathbb{C}$. This is known as the associative Yang-Baxter equation with spectral parameters. Has the set of solutions been unravelled when $A=U(\mathfrak{g})$ is the universal enveloping algebra of $\mathfrak{g}$? In fact, I am searching for solutions which satisfy the following unitarity condition: $$r^{12}(x,y)=-r^{21}(-x,-y)$$
• What is the relation between $A$ and $g$? – Bruce Westbury Apr 4 '12 at 16:21
• I reformulated a bit the question accordingly to your comment. I hope bob would agree with the changes I made. – DamienC Apr 5 '12 at 7:43
• bob: If you register an account, it will be easier to edit your own question. – S. Carnahan Apr 6 '12 at 2:28
I don't know about full classification results for $U(\mathfrak{g})$ in general (this is probably out of reach), but there are a bunch of very interesting constructions (together with partial classification results) when $A=M_n(\textbf{k})$ and/or when $\mathfrak{g}$ is a semi-simple Lie algebra:
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8248997926712036, "perplexity": 279.0270765532111}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655924908.55/warc/CC-MAIN-20200711064158-20200711094158-00255.warc.gz"}
|
http://www.koreascience.or.kr/article/JAKO201020065114457.page
|
# Application of Response Surface Method as an Experimental Design to Optimize Coagulation Tests
Trinh, Thuy Khanh;Kang, Lim-Seok
• Accepted : 2010.02.22
• Published : 2010.06.30
#### Abstract
In this study, the response surface method and experimental design were applied as an alternative to conventional methods for the optimization of coagulation tests. A central composite design, with 4 axial points, 4 factorial points and 5 replicates at the center point, was used to build a model for predicting and optimizing the coagulation process. Mathematical model equations were derived by computer simulation programming with a least squares method using the Minitab 15 software. In these equations, the removal efficiencies of turbidity and total organic carbon (TOC) were expressed as second-order functions of two factors, alum dose and coagulation pH. Statistical checks (ANOVA table, $R^2$ and $R^2_{adj}$ value, model lack of fit test, and p value) indicated that the model was adequate for representing the experimental data. The p values showed that the quadratic effects of alum dose and coagulation pH were highly significant. In other words, these two factors had an important impact on the turbidity and TOC of treated water. To gain a better understanding of the two variables for optimal coagulation performance, the model was presented as both 3-D response surface and 2-D contour graphs. As a compromise for the simultaneous removal of the maximum amounts of 92.5% turbidity and 39.5% TOC, the optimum conditions were found with 44 mg/L alum at pH 7.6. The predicted response from the model showed close agreement with the experimental data ($R^2$ values of 90.63% and 91.43% for turbidity removal and TOC removal, respectively), which demonstrates the effectiveness of this approach in achieving good predictions, while minimizing the number of experiments required.
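A second-order response-surface fit of the kind described above can be sketched with ordinary least squares. This is a minimal illustration on synthetic data — the coefficients, factor ranges, and noise level below are invented, not the study's measurements, which were fitted in Minitab.

```python
import numpy as np

# Hedged sketch (synthetic data, invented coefficients): fit a second-order
# response surface y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2,
# the model form used in RSM for, e.g., removal efficiency vs. coded alum
# dose (x1) and coded pH (x2), then locate the stationary point.

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)                 # coded alum dose
x2 = rng.uniform(-1, 1, 30)                 # coded pH
true = [90.0, 2.0, -1.5, -4.0, -3.0, 0.8]   # invented "true" coefficients
y = (true[0] + true[1]*x1 + true[2]*x2 + true[3]*x1**2
     + true[4]*x2**2 + true[5]*x1*x2 + rng.normal(0, 0.1, 30))

# design matrix for the quadratic model, solved by least squares
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# stationary point (candidate optimum): solve the gradient = 0 system
b = beta[1:3]
B = np.array([[2*beta[3], beta[5]], [beta[5], 2*beta[4]]])
x_opt = np.linalg.solve(B, -b)
print(np.round(beta, 2), np.round(x_opt, 3))
```

The stationary-point calculation is the algebraic counterpart of reading the optimum off the 3-D surface or 2-D contour plots mentioned in the abstract.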
#### Keywords
Coagulation tests;Drinking water treatment;Experimental design;Optimizing coagulation;Response surface methodology
#### References
1. Clearsby JL, Dharmarajah AH, Sindt GL, Baumann ER. Design and operation guidelines for optimization of the high-rate filtration process: plant survey results. Denver, CO: AWWA Research Foundation; 1989.
2. Pernitsky DJ. Coagulation 101. Alberta Water and Wastewater Operators Association (AWWOA) Annual Seminar; 2004; Alberta, Canada.
3. Mason RL, Gunst RF, Hess JL. Statistical design and analysis of experiments with applications to engineering and science. 2nd ed. New York: John Wiley and Sons; 2003.
4. Khuri AI, Cornell JA. Responses surfaces: design and analyses. 2nd ed. New York: Marcel Dekker; 1996.
5. Khuri AI. An overview of the use of generalized linear models in response surface methodology. Nonlinear Anal. 2001;47:2023-2034. https://doi.org/10.1016/S0362-546X(01)00330-3
6. Clesceri LS, Greenberg AE, Eaton AD. Standard methods for the examination of water and waste water. 20th ed. Washington, DC: American Public Health Association, American Water Works Association, Water Environmental Federation; 1998.
7. Montgomery DC. Design and analysis of experiments. 5th ed. New York: John Wiley & Sons; 2001. p. 427-510.
8. Box GEP, Hunter JS. Multi-factor experimental designs for exploring response surfaces. Ann. Math. Statist. 1957;28:195-241. https://doi.org/10.1214/aoms/1177707047
9. NIST/SEMATECH e-handbook of statistical method [Inter net]. Available from: http://www.itl.nist.gov/div898/handbook.
10. Haber A, Runyun RP. General statistics. 3rd ed. Reading, MA: Addision-Wesley; 1977.
11. Faust SD, Aly OM. Removal of particulate matter by coagulation. In: Faust SD, Aly OM, eds. Chemistry of water treatment, 2nd ed. Florida: CRC Press; 1998. p. 217-270.
12. Kim SH. Enhanced coagulation: determination of controlling criteria and an effect on turbidity removal. Environ. Eng. Res. 2005;10:105-111. https://doi.org/10.4491/eer.2005.10.3.105
#### Cited by
1. Analysis of Siloxane Adsorption Characteristics Using Response Surface Methodology vol.17, pp.2, 2012, https://doi.org/10.4491/eer.2012.17.2.117
2. Optimisation of beef tenderisation treated with bromelain using response surface methodology (RSM) vol.04, pp.05, 2013, https://doi.org/10.4236/as.2013.45B013
3. Optimization of Ultrasonic Extraction of Phenolic Antioxidants from Green Tea Using Response Surface Methodology vol.18, pp.11, 2013, https://doi.org/10.3390/molecules181113530
4. UV-Initiated Polymerization of Cationic Polyacrylamide: Synthesis, Characterization, and Sludge Dewatering Performance vol.2013, pp.1537-744X, 2013, https://doi.org/10.1155/2013/937937
5. Response surface methodology approach to optimize coagulation-flocculation process using composite coagulants vol.30, pp.3, 2013, https://doi.org/10.1007/s11814-012-0169-y
6. Effects of carrageenan and jackfruit puree on the texture of goat's milk Dadih using response surface methodology vol.66, pp.3, 2013, https://doi.org/10.1111/1471-0307.12053
7. Statistical Design for Formulation Optimization of Hydrocortisone Butyrate-Loaded PLGA Nanoparticles vol.15, pp.3, 2014, https://doi.org/10.1208/s12249-014-0072-4
8. Arsenic Removal from Natural Groundwater by Electrocoagulation Using Response Surface Methodology vol.2014, pp.2090-9071, 2014, https://doi.org/10.1155/2014/857625
9. Arsenic Removal from Water by Sugarcane Bagasse: An Application of Response Surface Methodology (RSM) vol.225, pp.7, 2014, https://doi.org/10.1007/s11270-014-2028-4
10. Employing Response Surface Methodology for the Optimization of Ultrasound Assisted Extraction of Lutein and β-Carotene from Spinach vol.20, pp.4, 2015, https://doi.org/10.3390/molecules20046611
11. Preparation, characterization and kinetic behavior of supported copper oxide catalysts on almond shell-based activated carbon for oxidation of toluene in air vol.22, pp.1, 2015, https://doi.org/10.1007/s10934-014-9877-5
12. Simultaneous extraction, optimization, and analysis of flavonoids and polyphenols from peach and pumpkin extracts using a TLC-densitometric method vol.9, pp.1, 2015, https://doi.org/10.1186/s13065-015-0113-4
13. Optimization of high hardness perforated steel armor plates using finite element and response surface methods vol.24, pp.7, 2017, https://doi.org/10.1080/15376494.2016.1196771
14. and Alginate Using the Box–Behnken Response Surface Methodology vol.56, pp.12, 2017, https://doi.org/10.1021/acs.iecr.6b04765
15. Species-specific interaction of trihalomethane (THM) precursors in a scaled-up distribution network using response surface methodology (RSM) vol.39, pp.3, 2018, https://doi.org/10.1080/09593330.2017.1301564
16. Extraction Optimization of Polyphenols from Waste Kiwi Fruit Seeds (Actinidia chinensis Planch.) and Evaluation of Its Antioxidant and Anti-Inflammatory Properties vol.21, pp.7, 2016, https://doi.org/10.3390/molecules21070832
17. Experimental investigation of design parameters for laboratory scale Pelton wheel turbine using RSM vol.204, pp.2261-236X, 2018, https://doi.org/10.1051/matecconf/201820407019
18. Preparation and Characterization of Polyacrylonitrile/Aluminum Oxide Nanofiber Adsorbent Modified with 2-Amino-3-Methyl-1-Hexanethiol for the Adsorption of Thorium (IV) Ion from Aqueous Solution vol.144, pp.10, 2018, https://doi.org/10.1061/(ASCE)EE.1943-7870.0001446
19. ) kernel coagulant pp.1563-5201, 2018, https://doi.org/10.1080/00986445.2018.1483351
20. Optimization of Photooxidative Removal of Phenazopyridine from Water vol.92, pp.5, 2018, https://doi.org/10.1134/S0036024418050266
#### Acknowledgement
Supported by : Pukyong National University
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6366440057754517, "perplexity": 22244.079864638297}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578721468.57/warc/CC-MAIN-20190425134058-20190425160058-00372.warc.gz"}
|
https://www.aimsciences.org/journal/1930-8337/2020/14/1
|
# American Institute of Mathematical Sciences
ISSN:
1930-8337
eISSN:
1930-8345
All Issues
## Inverse Problems & Imaging
February 2020 , Volume 14 , Issue 1
2020, 14(1): 1-26 doi: 10.3934/ipi.2019061
Abstract:
We study the integral transform over a general family of broken rays in $\mathbb{R}^2$. One example of the broken rays is the family of rays reflected from a curved boundary once. There is a natural notion of conjugate points for broken rays. If there are conjugate points, we show that the singularities conormal to the broken rays cannot be recovered from local data and therefore artifacts arise in the reconstruction. As for global data, more singularities might be recoverable. We apply these conclusions to two examples, the V-line transform and the parallel ray transform. In each example, a detailed discussion of the local and global recovery of singularities is given and we perform numerical experiments to illustrate the results.
2020, 14(1): 27-52 doi: 10.3934/ipi.2019062
Abstract:
Existing reconstruction methods for single photon emission computed tomography (SPECT) are most based on discrete models, leading to low accuracy in reconstruction. Reconstruction methods based on integral equation models (IEMs) with a higher order piecewise polynomial discretization on the pixel grid for SEPCT imaging were recently proposed to overcome the accuracy deficiency of the discrete models. Discretization of IEMs based on the pixel grid leads to a system of a large dimension, which may require higher computational costs to solve. We develop a SPECT reconstruction method which employs an IEM of the SPECT data acquisition process and discretizes it on a content-adaptive unstructured grid (CAUG) with the total variation (TV) regularization aiming at reducing computational costs of the integral equation method. Specifically, we design a CAUG of the image domain for the discretization of the IEM, and propose a TV regularization defined on the CAUG for the resulting ill-posed problem. We then apply a preconditioned fixed-point proximity algorithm to solve the resulting non-smooth optimization problem, and provide convergence analysis of the algorithm. Numerical experiments are presented to demonstrate the superiority of the proposed method over the competing methods in terms of suppressing noise, preserving edges and reducing computational costs.
2020, 14(1): 53-75 doi: 10.3934/ipi.2019063
Abstract:
In this article we study the inverse problem of determining the convection term and the time-dependent density coefficient appearing in the convection-diffusion equation. We prove the unique determination of these coefficients from the knowledge of solution measured on a subset of the boundary.
2020, 14(1): 77-96 doi: 10.3934/ipi.2019064
Abstract:
Poisson noise is an important type of electronic noise that is present in a variety of photon-limited imaging systems. Different from the Gaussian noise, Poisson noise depends on the image intensity, which makes image restoration very challenging. Moreover, complex geometry of images desires a regularization that is capable of preserving piecewise smoothness. In this paper, we propose a Poisson denoising model based on the fractional-order total variation (FOTV). The existence and uniqueness of a solution to the model are established. To solve the problem efficiently, we propose three numerical algorithms based on the Chambolle-Pock primal-dual method, a forward-backward splitting scheme, and the alternating direction method of multipliers (ADMM), each with guaranteed convergence. Various experimental results are provided to demonstrate the effectiveness and efficiency of our proposed methods over the state-of-the-art in Poisson denoising.
2020, 14(1): 97-115 doi: 10.3934/ipi.2019065
Abstract:
This paper addresses the problem of the electro-communication for weakly electric fish. In particular we aim at sheding light on how the fish circumvent the jamming issue for both electro-communication and active electro-sensing. Our main result is a real-time tracking algorithm, which provides a new approach to the communication problem. It finds a natural application in robotics, where efficient communication strategies are needed to be implemented by bio-inspired underwater robots.
2020, 14(1): 117-132 doi: 10.3934/ipi.2019066
Abstract:
Bayesian inference methods have been widely applied in inverse problems due to the ability of uncertainty characterization of the estimation. The prior distribution of the unknown plays an essential role in the Bayesian inference, and a good prior distribution can significantly improve the inference results. In this paper, we propose a hybrid prior distribution on combining the nonlocal total variation regularization (NLTV) and the Gaussian distribution, namely NLTG prior. The advantage of this hybrid prior is two-fold. The proposed prior models both texture and geometric structures present in images through the NLTV. The Gaussian reference measure also provides a flexibility of incorporating structure information from a reference image. Some theoretical properties are established for the hybrid prior. We apply the proposed prior to limited tomography reconstruction problem that is difficult due to severe data missing. Both maximum a posteriori and conditional mean estimates are computed through two efficient methods and the numerical experiments validate the advantages and feasibility of the proposed NLTG prior.
2020, 14(1): 133-152 doi: 10.3934/ipi.2019067
Abstract:
We analyze the Factorization method to reconstruct the geometry of a local defect in a periodic absorbing layer using almost only incident plane waves at a fixed frequency. A crucial part of our analysis relies on the consideration of the range of a carefully designed far field operator, which characterizes the geometry of the defect. We further provide some validating numerical results in a two dimensional setting.
2020, 14(1): 153-169 doi: 10.3934/ipi.2019068
Abstract:
The Sturm-Liouville pencil is studied with arbitrary entire functions of the spectral parameter, contained in one of the boundary conditions. We solve the inverse problem, that consists in recovering the pencil coefficients from a part of the spectrum satisfying some conditions. Our main results are 1) uniqueness, 2) constructive solution, 3) local solvability and stability of the inverse problem. Our method is based on the reduction to the Sturm-Liouville problem without the spectral parameter in the boundary conditions. We use a special vector-functional Riesz-basis for that reduction.
2018 Impact Factor: 1.469
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8211681246757507, "perplexity": 502.11251707254024}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347389309.17/warc/CC-MAIN-20200525161346-20200525191346-00140.warc.gz"}
|
https://www.sawaal.com/probability-questions-and-answers/a-bag-contains-8-red-and-4-green-balls-find-the-probability-that-two-balls-are-red-and-one-ball-is-g_9582
|
Q:
# A bag contains 8 red and 4 green balls. Find the probability that two balls are red and one ball is green when three balls are drawn at random.
A) 56/99 B) 112/495 C) 78/495 D) None of these
Explanation:
$\therefore P(E) = \frac{112}{495}$
Q:
In a purse there are 30 coins, twenty one-rupee and remaining 50-paise coins. Eleven coins are picked simultaneously at random and are placed in a box. If a coin is now picked from the box, find the probability of it being a rupee coin?
A) 4/7 B) 2/3 C) 1/2 D) 5/6
Explanation:
Total coins 30
In that,
1 rupee coins 20
50 paise coins 10
By symmetry, each of the 30 coins is equally likely to end up as the coin finally picked from the box.
Required probability that it is a one-rupee coin = 20/30 = 2/3.
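The symmetry argument can be verified exactly by averaging over every possible composition of the box (an illustrative check, not part of the original solution):

```python
from fractions import Fraction
from math import comb

# Exact check: condition on k = number of rupee coins among the 11 placed
# in the box (hypergeometric), then draw one coin from the box.  The
# average over all box compositions reproduces the direct answer 20/30.

R, F, N, K = 20, 10, 30, 11   # rupee coins, 50-paise coins, total, box size
p = sum(Fraction(comb(R, k) * comb(F, K - k), comb(N, K)) * Fraction(k, K)
        for k in range(max(0, K - F), min(R, K) + 1))
print(p)   # 2/3
```

The sum collapses to E[k]/K = (K·R/N)/K = R/N, which is exactly the symmetry argument.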
Q:
In a box, there are 9 blue, 6 white and some black stones. A stone is randomly selected and the probability that the stone is black is ¼. Find the total number of stones in the box?
A) 15 B) 18 C) 20 D) 24
Explanation:
We know that, Total probability = 1
Given probability of black stones = 1/4
=> Probability of blue and white stones = 1 - 1/4 = 3/4
But, given blue + white stones = 9 + 6 = 15
Hence,
3/4 ----- 15
1 ----- ?
=> 15 x 4/3 = 20.
Hence, total number of stones in the box = 20.
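The proportion arithmetic above can be written as a short exact check (illustrative sketch):

```python
from fractions import Fraction

# If black stones have probability 1/4, the 9 + 6 = 15 non-black stones
# account for the remaining 3/4 of the stones, fixing the total.

non_black = 9 + 6
total = non_black / Fraction(3, 4)   # 15 * 4/3
black = total - non_black
print(total, black)                  # 20, 5
```

As a sanity check, 5 black stones out of 20 indeed gives the stated probability 1/4.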
Q:
What is the probability of an impossible event?
A) 0 B) -1 C) 0.1 D) 1
Explanation:
The probability of an impossible event is 0.
The event is known ahead of time to be not possible, therefore by definition in mathematics, the probability is defined to be 0 which means it can never happen.
The probability of a certain event is 1.
Q:
In a box, there are four marbles of white color and five marbles of black color. Two marbles are chosen randomly. What is the probability that both are of the same color?
A) 2/9 B) 5/9 C) 4/9 D) 0
Explanation:
Number of white marbles = 4
Number of Black marbles = 5
Total number of marbles = 9
Number of ways, two marbles picked randomly = 9C2
Now, the required probability of picked marbles are to be of same color = 4C2/9C2 + 5C2/9C2
= 1/6 + 5/18
= 4/9.
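The same-color computation can be verified directly with binomial coefficients (illustrative sketch):

```python
from fractions import Fraction
from math import comb

# P(both marbles same color) with 4 white and 5 black marbles,
# drawing 2 without replacement: (4C2 + 5C2) / 9C2.

p = Fraction(comb(4, 2) + comb(5, 2), comb(9, 2))
print(p)   # 4/9
```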
Q:
A bag contains 3 red balls, 5 yellow balls and 7 pink balls. If one ball is drawn at random from the bag, what is the probability that it is either pink or red?
A) 2/3 B) 1/8 C) 3/8 D) 3/4
Explanation:
Given number of balls = 3 + 5 + 7 = 15
One ball is drawn randomly = 15C1
probability that it is either pink or red = (3 + 7)/15 = 10/15 = 2/3
Q:
Two letters are randomly chosen from the word TIME. Find the probability that the letters are T and M?
A) 1/4 B) 1/6 C) 1/8 D) 4
Explanation:
Required probability is given by P(E) = 1/4C2 = 1/6, since exactly one of the 4C2 = 6 equally likely pairs of letters is {T, M}.
Q:
14 persons are seated around a circular table. Find the probability that 3 particular persons always seated together.
A) 11/379 B) 21/628 C) 24/625 D) 26/247
Explanation:
Total no of ways = (14 – 1)! = 13!
Number of favorable ways = (12 – 1)! = 11!
So, required probability = $\frac{11!\times 3!}{13!} = \frac{3!}{12\times 13} = \frac{1}{26} \approx \frac{24}{625}$
Q:
Two dice are rolled simultaneously. Find the probability of getting the sum of numbers on the on the two faces divisible by 3 or 4?
A) 3/7 B) 7/11 C) 5/9 D) 6/13
Explanation:
Here n(S) = 6 x 6 = 36
E={(1,2),(1,5),(2,1),(2,4),(3,3),(3,6),(4,2),(4,5),(5,1),(5,4),(6,3) ,(6,6),(1,3),(2,2),(2,6),(3,1),(3,5), (4,4),(5,3),(6,2)}
=> n(E)=20
Required Probability n(P) = n(E)/n(S) = 20/36 = 5/9.
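The enumeration above can be checked by brute force over all 36 ordered outcomes (illustrative sketch):

```python
from fractions import Fraction

# Brute-force count of outcomes where the sum of two dice is
# divisible by 3 or by 4.

favorable = [(a, b) for a in range(1, 7) for b in range(1, 7)
             if (a + b) % 3 == 0 or (a + b) % 4 == 0]
p = Fraction(len(favorable), 36)
print(len(favorable), p)   # 20 5/9
```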
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 4, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8884996175765991, "perplexity": 1046.7542882768219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-30/segments/1531676591831.57/warc/CC-MAIN-20180720193850-20180720213850-00115.warc.gz"}
|
http://soft-matter.seas.harvard.edu/index.php?title=Percolation_Model_for_Slow_Dynamics_in_Glass-Forming_Materials&diff=next&oldid=15935
|
Percolation Model for Slow Dynamics in Glass-Forming Materials
Entry: Chia Wei Hsu, AP 225, Fall 2010
G. Lois, J. Blawzdziewicz, and C. S. O'Hern, "Percolation Model for Slow Dynamics in Glass-Forming Materials", Phys. Rev. Lett. 102, 015702 (2009).
Summary
In this work, the authors propose an alternate approach to understand the glass transition. Instead of looking at the real space, they focus on the configuration space of the system. There are mobility regions in the configuration space, and the percolation of these regions corresponds to a glass-to-liquid transition. With a mean-field description of such percolation, they show that the stretched-exponential response functions typical of glassy systems can be explained.
Background
Glassy systems exhibit several unique properties. During a glass transition, the structural relaxation time increases by several orders of magnitude. Also, the structural correlations display an anomalous stretched-exponential time decay: $\exp[-(t/\tau_{\alpha})^{\beta}]$, where $\beta$ is called the stretching exponent, and $\tau_{\alpha}$ is called the $\alpha$-relaxation time. Although the stretched-exponential relaxation is universal among glass-forming materials, $\beta$ and $\tau_{\alpha}$ are not. They depend on temperature and density, and they vary from one material to another.
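A short numerical sketch comparing the stretched exponential $\exp[-(t/\tau_{\alpha})^{\beta}]$ with a simple exponential; the parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

# Illustrative sketch: stretched-exponential relaxation C(t) = exp[-(t/tau)^beta].
# For beta < 1 the decay is slower than a simple exponential at long times,
# which is the anomalous behavior described above.

t = np.logspace(-2, 2, 200)
tau = 1.0
simple = np.exp(-t / tau)              # beta = 1 (ordinary exponential)
stretched = np.exp(-(t / tau) ** 0.5)  # beta = 0.5 (illustrative glassy value)

# both equal e^{-1} at t = tau; well beyond tau, the stretched form decays slower
i = np.argmin(np.abs(t - 10 * tau))
assert stretched[i] > simple[i]
print(stretched[i], simple[i])
```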
Basic Idea
The basic idea is that, instead of focusing on the heterogeneous dynamics and percolation in real space (the traditional approach), the authors focus on the configuration space and its connection to the anomalous dynamics.
For complete relaxation, the system must be able to diffuse over the whole configuration space. That is, there has to be a path that percolates the configuration space. Thus, we can think of a "percolation transition in the configuration space," which corresponds to the glass transition in real space.
Hard spheres
Fig 1. Schematic of allowed regions in configuration space for hard spheres. (a) $\phi = \phi_J$, only jammed states (discrete points in the config space) are allowed. (b) $\phi < \phi_J$, motion occurs in closed mobility domains surrounding jammed states. (c) Even smaller $\phi$, bridges between mobility domains occur. (d) Even smaller $\phi$, percolation occurs (shaded yellow).
Hard spheres interact with infinite repulsion upon contact. At small volume fraction $\phi$, hard spheres behave like a fluid. As $\phi$ increases to $\phi_J \approx 0.64$, the system becomes collectively jammed. In such a state no motion can occur, because any particle displacement would lead to overlap. Therefore, only a discrete set of points in the configuration space is allowed (fig 1a).
At slightly lower $\phi$, particles can move around a little, so the discrete points become mobility domains in the configuration space (fig 1b). Further decrease of $\phi$ leads to connections between these mobility domains (fig 1c), and eventually to percolation among them (fig 1d) at $\phi=\phi_P$.
Denote by $\Pi$ the volume fraction of mobility domains in the configuration space. Percolation occurs at a critical value $\Pi_P$. When $\Pi > \Pi_P$, the system can explore the whole configuration space, and it is a metastable liquid. When $\Pi < \Pi_P$, the system can only diffuse within finite regions of the configuration space, and it is a glass. The distance it can explore is set by the percolation correlation length $\xi$, which diverges at $\Pi_P$.
The authors then describe the percolation with mean-field theory. Following several known scaling laws, they show that the stretching exponent varies with time and satisfies $1/3 \leq \beta \leq 1$, agreeing with experimental observations.
With a few more assumptions, they further predict the scaling of the $\alpha$-relaxation time to be
$q^2 \tau_{\alpha} \propto \left\{ \begin{array}{ll} \exp\left(\frac{A \phi_J}{\phi_J-\phi}\right)(\phi_P-\phi)^{-2} & \textrm{for } \phi_P-\phi \ll \phi_J-\phi_P\\ \exp\left(\frac{B \phi_J}{\phi_J-\phi}\right) & \textrm{for } \phi_P-\phi \gg \phi_J-\phi_P \end{array} \right.$,
where $q$ is the scattering wave number, and $A$, $B$ are positive constants.
Finite energy barriers
In systems with a finite energy barrier, bonds can form. In this case, the configuration space can be decomposed into basins of attraction surrounding each local energy minimum. At short times the system is confined to a basin, whereas at long times it can hop from one basin to another.
Again using mean-field arguments, the authors arrive at analogous expressions for $\beta$ and $\tau_{\alpha}$.
Soft matter discussion
The glass transition is one of the largest outstanding questions in soft matter. The approach proposed by these authors is dramatically different from the traditional standpoint, yet explains the observed phenomena equally well. It may shed new light on the problem and hopefully lead to new predictions and a better understanding of glass-forming materials.
https://meta.mathoverflow.net/tags/faq/hot
# Tag Info
## Hot answers tagged faq
36 votes
### Best of MathOverflow, or papers inspired by MathOverflow
This is an old and self-indulgent story; but it was such a charmingly unexpected bonus from my early use of MathOverflow, that I think it deserves to be recorded somewhere (my apologies for its length!...
20 votes
### Best of MathOverflow, or papers inspired by MathOverflow
This paper (details below) by Zhen Lin Low and Aaron Mazel-Gee cites not just MO but: This collaboration would not have happened without the ‘Homotopy Theory’ chat room on MathOverflow. arXiv....
19 votes
### Best of MathOverflow, or papers inspired by MathOverflow
My MO question Conjugation of group extensions was answered by YCor. As a result, we wrote a joint note Conjugate complex homogeneous spaces with non-isomorphic fundamental groups published in C. R. ...
17 votes
### How to write a good MathOverflow question?
If you ask a question you're asking a favor (the eventual award of MO-points does not change this fact). This means that you should give the public something in return. The formulation of the question ...
17 votes
### Editing etiquette
Most of the time, the edits that I see on MO are respectfully and tactfully performed and small in scope, and gratefully received by the post's author as improvements. Occasionally though I see flare-...
17 votes
Accepted
### Can I ask a question on MathOverflow and also on another site?
Cross-posting is discouraged in general, because it can lead to duplication of effort by people answering on different sites. However, it is appropriate under some circumstances. Most of the advice ...
16 votes
### Best of MathOverflow, or papers inspired by MathOverflow
The question whether there is a non surjective bounded linear operator on $\ell_\infty$ that has dense range was answered in this paper by Amir Bahman Nasseri, Gideon Schechtman, Tomasz Tkocz, and me. ...
15 votes
### Best of MathOverflow, or papers inspired by MathOverflow
The analog of the famous law of iterated logarithm for maximum eigenvalue of a random Gaussian matrix was asked here. Zeitouni's MO-answer was expanded (after significant effort) to a full answer for ...
15 votes
### Best of MathOverflow, or papers inspired by MathOverflow
As acknowledged in my note Explicit additive decomposition of norms on $\mathbb{R}^2$, it was sparked by answers by Noam D. Elkies and Suvrit Sra on MathOverflow Absolute value inequality for complex ...
13 votes
### Best of MathOverflow, or papers inspired by MathOverflow
A nice question by Michael Hardy, How many rearrangements must fail to alter the value of a sum before you conclude that none do?, led to a recent 6-author collaboration, 5 or 6 of whom are MO patrons ...
12 votes
### Best of MathOverflow, or papers inspired by MathOverflow
This MO question was the starting point for a joint work with Tao Mei where we study radial multipliers on the von Neumann algebras of hyperbolic groups. The paper is entitled Complete boundedness of ...
12 votes
### Best of MathOverflow, or papers inspired by MathOverflow
Keith Kearnes, together with co-authors Emil Kiss and Ágnes Szendrei, recently published a solution to Varieties where every algebra is free in this arxiv preprint. They prove a result under an ...
11 votes
### Best of MathOverflow, or papers inspired by MathOverflow
Not sure if my recent paper "Equivalence: an attempt at a history of the idea" qualifies as one of the "best of Mathoverflow or papers inspired by Mathoverflow". But I am sure Mathoverflow was a force ...
11 votes
### Best of MathOverflow, or papers inspired by MathOverflow
The question, "How hard is reconstructing a permutation from its differences sequence?" posed by Mohammad Al-Turkistany, was answered by Marzio De Biasi, who then wrote a paper, "Permutation ...
11 votes
### Best of MathOverflow, or papers inspired by MathOverflow
According to Christian Stump, his paper "On a New Collection of Words in the Catalan Family" (Journal of Integer Sequences, vol. 17 (2014), article 14.7.1) is a long version of his answers to a ...
11 votes
### Best of MathOverflow, or papers inspired by MathOverflow
Hannah Cairns's proof of Perron's theorem (discussed in this MathOverflow question) has been published in The American Mathematical Monthly as Perron’s Theorem in an Hour.
11 votes
### How to write a good MathOverflow question?
MO seems to be a place where a differential geometer may ask about general topological manifolds, a topologist about algebra, etc. (and everybody would love to ask about logic if they were not afraid :...
10 votes
### Best of MathOverflow, or papers inspired by MathOverflow
This paper, Roman Karasev, Jan Kynčl, Pavel Paták, Zuzana Safernová, and Martin Tancer. "Bounds for Pach's selection theorem and for the minimum solid angle in a simplex." arXiv:1403.8147 (2014). ...
10 votes
### Best of MathOverflow, or papers inspired by MathOverflow
An unpublished open problem posed by Adam Chalcraft, Does every polyomino tile $\mathbb R^n$ for some $n$?, received considerable attention when I posted it here on MO. (Of all the questions that I ...
10 votes
### Best of MathOverflow, or papers inspired by MathOverflow
Mohammad Ghomi answered the question Shortest closed curve to inspect a sphere, in a paper, Shortest closed curve to inspect a sphere, posted to the arXiv (https://arxiv.org/abs/2010.15204), whose PDF ...
10 votes
Accepted
### Best way to post graphics to MO
Put a line of the form ![Text to be shown if the picture is unavailable][1] at the place in your post where the graphics should appear, put the graphics online ...
9 votes
### Best of MathOverflow, or papers inspired by MathOverflow
In 2013 John Pardon solved the Hilbert-Smith conjecture for group actions on 3-manifolds. Lemma 2.17 of the paper was based on the answer to this mathoverflow question. I was quite surprised to ...
9 votes
### Best of MathOverflow, or papers inspired by MathOverflow
Julien Marché's question "Homology generated by lifts of simple curves" was the first appearance in print of a folklore question (I first was asked it back when I was a postdoc). As I discuss in my ...
9 votes
### Best of MathOverflow, or papers inspired by MathOverflow
The MO question, "Shortest closed curve to inspect a sphere," was cited as the "initial stimulus" for the paper Mohammad Ghomi, "The length, width,and inradius of space curves," (PDF download.) ...
9 votes
### Best of MathOverflow, or papers inspired by MathOverflow
Stefan Kiefer and Björn Wachter just published a paper, "Stability and Complexity of Minimising Probabilistic Automata" (arXiv link), which acknowledges the MO question convex polyhedron in the unit ...
8 votes
### Best of MathOverflow, or papers inspired by MathOverflow
The answer to the question Length of Hirzebruch continued fractions was published as a short note On continued fractions of equal length .
8 votes
### Best of MathOverflow, or papers inspired by MathOverflow
The full answer to the question Decidability of diophantine equation in a theory by rainmaker in the case of Robinson’s arithmetic was written up in my paper Division by zero, Archive for Mathematical ...
8 votes
### Best of MathOverflow, or papers inspired by MathOverflow
This doesn't quite fit the mold of the other postings, but Matt Parker (Numberphile and StandUpMaths) made a YouTube video that mentions MathOverflow several times, and particularly highlights the ...
https://onehossshay.wordpress.com/
## Cleaning a Bicycle Chain
I ran across this link today and thought to myself, “Duh. Why don’t I do it this way?”
Cleaning a bike chain is hands-down the dirtiest task of maintaining a bike. Some bike shops will drop dirty chains into an ultrasonic cleaner to get them nice and spiffy, but DIY'ers' options are not so great. Shops sell special chain-cleaning tools that are expensive, prone to breaking, and don't work particularly well. They tend to spray dirty cleaning solution all over your bike and gears, and cleaning the chain cleaner's brushes is a task in itself.
Normally I would take off a bike chain (super easy with a SRAM PowerLink) and soak it in a disposable tub, but that doesn't get the difficult grease. This solution drops the chain into a large-mouth bottle, where you can seal it and simulate an ultrasonic cleaner by just shaking the hell out of it. Flushing and repeating with fresh cleaning solution will get your bike chain nice and clean.
Categories: Bikes
## Grbl: Proofing precision and accuracy
After building the DIY Sherline CNC, I had asked myself the question: “How do I proof the CNC/grbl to make sure that everything machines correctly and to spec?” I posed this question to my friend Machinist Mike, and he quickly responded: “Machine a diamond-circle-square test block!” and awesomely generated a g-code program for me to immediately use.
The diamond-circle-square test block is an old-school method of machining a set of known shapes and dimensions to gauge the accuracy and precision of a CNC mill. The diamond gauges how straight the CNC tracks diagonal cuts; the circle gauges the circularity of the cuts; and the square gauges the primary axes and perpendicularity. The test also shows the effects of backlash, of an out-of-square machine, and of rough feeds (through the surface finishes), among other things. Today there are newer and better methods to gauge the accuracy and precision of a CNC machine, as the link shows, but for DIY'ers the diamond-circle-square test is plenty robust.
The exact sizes and depths of the shapes do not particularly matter, as long as the user knows what they are. I would recommend creating a test block about as large as the parts you intend to machine; a large shape will tell you if you have problems in a certain area of travel. For me, the Sherline has about 5″ of Y-axis travel, and the test block was sized to be about 2″x2″ with shape depths of 0.1″.
Also, Mike had noted that when machining on a mill without rigid anti-backlash ball screws, one should always conventional cut, as opposed to climb cut (unless doing a finishing pass). This is because the cutting forces in a conventional cut always push against the leadscrews in the direction of cutting travel, whereas in climb cutting the cutting forces push away from the leadscrews in the direction of cutting, causing the axis motion to rattle across the length of the backlash. This can affect precision and surface finish, as well as prematurely wear your leadscrews.
Anyhow, a couple of weekends ago, Machinist Mike and I ran the g-code program with the new changes to my grbl fork. We ensured the Sherline anti-backlash nuts were nice and tight with a backlash of less than 0.001″. (The higher torque of my DIY CNC build allowed me to tighten up the anti-backlash nuts more than I would have with the OEM CNC.) We ran the program without any problems from start to finish.
I’m very happy to report that the surface finishes were excellent, the circularity and perpendicularity were also excellent, and all measured dimensions were within 0.001″ of the design dimensions, meaning that grbl and the DIY Sherline CNC are good to go!
Categories: Arduino, CNC, Machining
## Improving Grbl: Cornering Algorithm
Greasing the axle and pumping up the tires… on the metaphoric grbl wheel, that is. I have spent some time really understanding the grbl source code these past few weeks, getting knee-deep into its intricacies. The more I looked at it, the more impressed I became with Simen’s coding efficiency, which shows what the little Arduino can do. In short, it can do a lot and still has plenty of room to grow. While grbl is still in development, I decided to take some initiative to help out and solve some problems remaining in the code, i.e. strange cornering behavior, full arc support with acceleration planning, and intermittent strange deceleration issues. As a researcher, I’m pretty much bound to document just about everything, even if it’s pretty insignificant. So here goes part one.
The Cornering Algorithm: G-code motions are piecewise linear, and at each junction between linear segments the toolpath immediately turns and continues onto the next segment. The problem is that real-world CNC machines can’t make these immediate, instantaneous changes in direction. Stepper motors have only a finite amount of torque, and the high inertial forces required to make quick direction changes can cause them to lose steps. This matters because stepper-motor-based CNCs are open-loop: there is no feedback on what the motors are actually doing. A motor driver just knows that it received a step pulse and tries to move the motor. If a step is missed, the controller (grbl) has no idea, will lose its true position, and will ruin the part.
Simen had spent some time working on the problem of how to optimally solve for maximum junction speeds so as not to exceed the maximum allowable acceleration of the machine. His approach is based on the Euclidean norm of the entry and exit velocity vectors at the junction, limiting the maximum instantaneous change in velocity there: $\min (v_\text{limit}, \lVert v_\text{exit} - v_\text{entry} \rVert)$. This is a good approximation for most applications, but only if the $v_\text{limit}$ parameter is set correctly for the machine; it is not robust for all situations and all machines. In some cases it leads to strange behavior, like choppy and slow movement through tight curves. In technical terms, Simen’s solution is a linear fit to a nonlinear acceleration problem, but it is quite computationally efficient.
A more robust solution to this problem needs to be both accurate for all ranges of motion and just as computationally efficient. After some thought, here’s what I came up with. First, assume that at a junction of two line segments we only consider centripetal acceleration, to simplify things; in other words, assume the tangential acceleration is zero and the speed through the junction is constant. At the junction, place a circle with radius $R$ such that both lines are tangent to it. The circular segment joining the lines represents the path of constant centripetal acceleration: $v_\text{junction} = \sqrt{a_\text{max} R}$, where the maximum junction velocity $v_\text{junction}$ is the fastest speed the CNC can travel through the junction without exceeding its maximum allowable acceleration $a_\text{max}$.
This circular segment creates a virtual deviation from the path, $\delta$, defined as the distance from the junction to the edge of the circular segment. The $\delta$ parameter is a user setting, which indirectly sets the radius of the circle and hence limits the junction velocity through the centripetal acceleration. Think of it as widening a race track: if a race car is driving on a track only as wide as the car, it has to slow almost to a complete stop to turn corners. Widen the track a bit, and the car can start using the track to carry speed into the turn. The wider it is, the faster it can go through the corner.
An efficient computation of the circle radius goes as follows. Working through the geometry in terms of the known variables gives $\sin(\theta/2) = R/(R+\delta)$. Rearranging for the circle radius $R$: $R=\delta\frac{\sin(\theta/2)}{1-\sin(\theta/2)}$. The angle $\theta$ between the line segments is given by the dot product equation $\cos(\theta) = \frac{ v_\text{exit} \cdot v_\text{entry}}{\lVert v_\text{entry} \rVert \lVert v_\text{exit} \rVert}$. Solving for the circle radius would seem to require two expensive trig functions, acos() and sin(), but these can be completely removed with the half-angle identity $\sin(\theta/2) =\pm\sqrt{\frac{1-\cos(\theta)}{2}}$, which for our application is always positive.
Now just plug these equations into the centripetal acceleration equation above. You’ll see that there are only two sqrt() computations per junction and absolutely no trig sin() or acos(). To find the absolute maximum velocity through the junction, just apply $\min(v_\text{junction}, \min( v_\text{entry}, v_\text{exit}))$, which guarantees the junction speed never exceeds either the entry or the exit speed.
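Putting the pieces together, here is a minimal Python sketch of the junction-speed computation (an illustration of the algorithm described above, not grbl's actual C source; note that I take $\theta$ as the interior corner angle, so the dot product of the two unit travel directions picks up a minus sign and a straight-through junction gives $\theta = \pi$):

```python
import math

def max_junction_speed(entry_unit, exit_unit, v_entry, v_exit,
                       delta, a_max):
    """Maximum speed through a junction between two line segments.

    entry_unit/exit_unit: unit direction-of-travel vectors of the two
    segments; v_entry/v_exit: planned speeds on those segments;
    delta: virtual path-deviation setting; a_max: acceleration limit.
    """
    # Interior corner angle theta: with both vectors taken along the
    # direction of travel, cos(theta) = -(entry . exit), so a straight
    # junction gives theta = pi and a full reversal gives theta = 0.
    cos_theta = -sum(e * x for e, x in zip(entry_unit, exit_unit))
    # Half-angle identity -- no acos() or sin() calls needed:
    sin_half = math.sqrt(max(0.0, (1.0 - cos_theta) / 2.0))
    if sin_half >= 1.0:                    # straight through: no limit
        v_junction = float('inf')
    else:
        radius = delta * sin_half / (1.0 - sin_half)  # R = d*s/(1-s)
        v_junction = math.sqrt(a_max * radius)        # v = sqrt(a*R)
    # Never exceed the entry or exit speed itself.
    return min(v_junction, v_entry, v_exit)
```

With delta = 0.01″ and a_max = 100 in/s², for example, a 90° corner limits the junction to roughly 1.55 in/s, a full reversal forces a stop, and a straight junction passes the nominal feed speed through unchanged.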
This approach is both computationally efficient and accounts for the junction nonlinearities: how sharp the angle is between line segments, and how fast the CNC can travel through each junction. For right angles or reversals, the computed maximum junction speed is at or near zero. For nearly straight junctions, the computed maximum junction speed is at or near the nominal feedrate, which means the machine can fly through the junction without exceeding the acceleration limits. This has been successfully tested on my machine and by several very helpful users, who report that they can crank up the feed speeds of their machines and run much more smoothly through complex paths. Lastly, keep in mind this method uses a virtual path deviation only to robustly compute the junction speed; it does not introduce an actual path deviation in your CNC machine.
These improvements along with others, such as increased planner efficiency, a fast arc generation algorithm allowing arc acceleration planning, improved acceleration and deceleration performance, etc, are posted on my fork of grbl. And there is more to come.
Categories: Arduino, CNC, Machining
## Grbl: A Simple Python Interface
For people who are more Python-inclined, here’s a stripped-down, simple streaming interface to grbl from your computer. The script sends single-line g-code blocks to grbl and waits for an acknowledgement. Grbl sends an acknowledgement only when it has finished processing the block and has room in the buffer. So, on startup, you should see it fly through the first 16 or so commands quickly (on 328p Arduinos) as the buffer fills, and then steadily stream more commands as grbl completes blocks. I can only give setup details for Macs, but this script should work on just about any operating system with an up-to-date version of Python, following the same general steps.
For Macs, there are only a few things you will need to do to get up and running. Python, by default, is already installed, but you will need the PySerial library to interface with the Arduino. To install, simply run the Terminal.app and type sudo easy_install pyserial at the prompt.
You will also need to find the device path for the Arduino connected to your USB port. At the Terminal.app prompt, just type ‘/dev/tty.usbmodem’ and hit tab once or twice. This should bring up a short list of device ports for your Arduino; it will likely just come up with one. Replace the complete path in the script, along with the filename of your g-code. (Note: Arduino paths can change if you plug into a different port later.)
To run, you can either use ‘python stream.py’ (or whatever you will call this Python script) or set the permissions to execute the script by filename.
That’s it!
#!/usr/bin/env python
"""\
Simple g-code streaming script for grbl
"""
import serial
import time
# Open grbl serial port
s = serial.Serial('/dev/tty.usbmodem0000', 9600)
# Open g-code file
f = open('somefile.gcode', 'r')
# Wake up grbl
s.write(b"\r\n\r\n")
time.sleep(2)   # Wait for grbl to initialize
s.flushInput()  # Flush startup text in serial input
# Stream g-code to grbl
for line in f:
    l = line.strip()  # Strip all EOL characters for streaming
    print('Sending: ' + l, end='')
    s.write((l + '\n').encode())  # Send g-code block to grbl
    grbl_out = s.readline()       # Wait for grbl response with carriage return
    print(' : ' + grbl_out.decode().strip())
# Wait here until grbl is finished to close serial port and file.
input("  Press <Enter> to exit and disable grbl.")
# Close file and serial port
f.close()
s.close()
NOTE: For more advanced use, you can take advantage of the 128-character serial buffer on the Arduino and design the script to keep careful track of how many characters you have sent and how many have been processed, counting down with each ‘ok’ response. If you ever run into data-starving problems (say, by using a wickedly slow computer like my resurrected 550MHz PowerBook G4), this creates another buffer layer and gives grbl immediate access to g-code commands without having to wait on any serial processes. So far it has worked very well for me when sending long series of very short line segments.
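A rough sketch of that character-counting idea (my own illustration, not part of the posted script; it assumes a 128-byte receive buffer and one 'ok'/'error' line back per block):

```python
RX_BUFFER_SIZE = 128  # assumed size of the Arduino serial RX buffer

def stream_counted(port, lines):
    """Stream g-code while tracking bytes resident in grbl's RX buffer."""
    pending = []                     # byte counts of unacknowledged blocks
    for line in lines:
        block = line.strip() + '\n'
        # Wait for acknowledgements until the new block fits in the buffer.
        while sum(pending) + len(block) > RX_BUFFER_SIZE:
            port.readline()          # one 'ok'/'error' frees one block
            pending.pop(0)
        port.write(block.encode())
        pending.append(len(block))
    while pending:                   # drain the remaining acknowledgements
        port.readline()
        pending.pop(0)
```

The point of the design is that several blocks sit in the Arduino's own buffer at once, so grbl never has to wait on the host between short segments.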
Categories: Arduino, CNC, Machining
## Grbl: How it works and other thoughts…
While grbl is a fantastically simple and economical g-code interpreter and CNC stepper motor controller for the Arduino, there doesn’t seem to be much information on how an average or new Arduino user should interface with it or how it works internally. So, here’s a shot at filling that gap.
(As of this writing, grbl versions are master 0.6b and edge 0.7b)
The first thing people should know about grbl is that it is designed to be simple and barebones. It is not a complete solution for all CNC milling; rather, it seems intended as a starting point for anyone building a 3-axis cartesian-type mill, router, laser cutter, 3d printer, etc. It started as a stripped-down, general-use port of the Reprap Arduino GCode Interpreter, which is geared toward 3d printing only.
Grbl works primarily through the Arduino serial port and needs a constant stream of g-code commands sent from a computer or some other means. It accepts and processes single g-code blocks followed by a carriage return, ignoring g-code comments and block-delete characters. It returns an ‘ok’ or ‘error:X’ message when it has processed the block and is ready for more information. A simple Ruby script for g-code streaming is supplied in the code base for reference. Python scripts also work very well for streaming g-code to grbl, but ultimately it’s up to the user how to interface with it.
To get an idea of how grbl works internally: there are essentially two programs running concurrently. The main program reads g-code commands from the serial port, parses them, passes them through an acceleration/feedrate planner, and finally places each event into a ring buffer (max 5 blocks on 168 and 16 blocks on 328p Arduinos). The other program is interrupt-driven and works in the background: it controls the stepper motors, sending step pulses and direction bits to the stepper driver pins, and sequentially processes the ring-buffer events, FIFO-style, until the buffer is empty.
For the most part, the main program will continually accept new g-code blocks as quickly as they can be processed, as long as there is room in the buffer. If the buffer is full, grbl will not send a response until the interrupt program has finished an event and cleared it from the buffer. For streaming, this means the user interface should always wait for an ‘ok’ or ‘error’ response from grbl before sending a new g-code block. Also, the data stream should be steady and uninterrupted to minimize the chance of ‘data starving’ grbl, i.e. emptying the buffer, which will cause unintended hiccups in the CNC movements.
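This producer/consumer behavior can be modeled with a toy sketch (illustrative only; the names and sizes mirror the description above, not grbl's C internals):

```python
from collections import deque

RING_BUFFER_BLOCKS = 16   # 328p Arduinos (5 on the 168)

class ToyGrbl:
    """Toy model of grbl's two cooperating loops."""

    def __init__(self):
        self.ring = deque()

    def receive_block(self, block):
        """Main program: acknowledge only once the block is buffered."""
        if len(self.ring) >= RING_BUFFER_BLOCKS:
            return None               # no response yet; sender must wait
        self.ring.append(block)
        return 'ok'

    def stepper_interrupt(self):
        """Background program: consume buffered events FIFO-style."""
        return self.ring.popleft() if self.ring else None
```

The key protocol property is visible here: a full ring buffer simply withholds the 'ok', which is why a streaming script that waits for acknowledgements never overruns grbl.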
As for external interfaces, there are only XYZ limit switches. Other features, such as pause/halt, variable speed reductions for proofing, homing cycles, and real-time jogging or a manual interface, are currently unsupported or in development. It should be noted that some of these features are left for the user to add, mainly to stay with the vision of simplicity and portability. Canned cycles and tool radius compensation/offsets are not supported, but these could be handled by an external preprocessor that has yet to be written. Also, there is currently no protocol for querying or broadcasting the current status of grbl, such as the state of the ring buffer, the distance to go on the current block, or the current position.
For all the cool stuff grbl can do, there are also still some bugs being ironed out. G02/03 arcs are not supported by the acceleration planner and intentionally force the ring buffer to empty, causing short motion hiccups while the main program processes a new g-code block to refill the ring buffer, and weird accelerations coming into and out of an arc. There has been a lot of development here recently, though, and this should be ironed out soon. The same can be said of the acceleration planner itself, in terms of improving speed and robustness. G04 dwelling forces the ring buffer to empty as well, but doesn’t pose much of an issue beyond having to refill an empty buffer upon resuming.
Regardless of its minor ‘issues’, grbl has a lot of potential and provides a wonderful, economical introduction to the world of CNC for a large potential audience of makers and DIYers. The possibilities for other applications, such as expanding to 6-axis hexapods or robotics, are also very exciting. Anyhow, for about 95% of the things any home user would want to do with grbl, it works as-is. For the other 5%, like precision machining or production, there is still a ways to go, but it’s getting there.
UPDATE: Here’s a list of currently supported g-code commands and unsupported commands from grbl gcode.c
• G0, G1 – Seek and linear motion with acceleration planning
• G2, G3 – CW and CCW arc motions with no acceleration planning
• G4 – Dwell (Up to 6 seconds, for now)
• G17, G18, G19 – Plane select
• G20, G21 – Inches mode enable and disable
• G53, G90, G91 – Absolute mode override, enable, and disable
• G80 – Motion mode cancel
• G92 – Coordinate offset
• G93, G94 – Inverse feedrate mode enable and disable
• M3, M4 – Spindle direction
• (TBD) M0, M1, M2, M30, M60 – Program pause and completed
• (TBD) M5 – Spindle speed
• (TBD) G28, G30 – Go home
• Intentionally not supported: Canned cycles, Tool radius compensation, ABC-axes, Multiple coordinate systems/home locations, Evaluations of expressions, Variables, Probing, Override Control, Non-modal G-codes (G10,G28,G30,G92,G92.1,G92.2,G92.3), Coolant (M7,M8,M9), Overrides (M48,M49), Coordinate system selection, and path mode control.
UPDATE 2: I have recently been working on, and have posted, solutions to many of these issues in my grbl fork, such as junction speed handling, increased planner efficiency, 'un'limited dwell time, G02/03 arcs now enabled with acceleration planning, and other fundamental bug fixes. Also, with Alden Hart of grblshield fame, we are looking into developing more features for grbl and its variants, such as communication protocols for overrides, status reports, e-stops, tool offsets (which can be handled indirectly with the G92 coordinate offset feature), jogging, backlash handling, etc. We are also looking into future development paths with the newly announced Arduino Due with a 96MHz ARM processor. I will post more details as I find some more time.
Categories: Arduino, CNC
## Designing a DIY CNC (for a Sherline Mill)
According to Machinist Mike, learning how your new machine responds to cutting metal is critical, because you can easily damage the mill, tool, or part if you take it beyond its capabilities, like this overly-excited guy who probably ruined his very expensive mill spindle assembly by taking too-aggressive cuts and using end mills in a drill chuck (a big no-no because of high lateral forces). Mike recommended that I use my mill manually for a while, especially since I am relatively new to the act of cutting metal to make parts, rather than designing them. Learning your machine also gives you an intuitive feel, letting you hear or see trouble before it happens, which can save your machine or your part during your CNC runs. After picking up my mill a few months back, I have been spending my time getting to know it by machining multi-use jigs, toe clamps, a tapping block, a tooling plate, and other handy tools that I will need in the future.
With the end goal of building a CNC mill, I opted for the CNC-ready version of the Sherline 5400 mill, which comes pre-installed with NEMA 23 stepper motor mounts, an X-Y axis leadscrew oiler, a more robust Z-axis leadscrew, adjustable "zero" handwheels, preload bearings to remove end-play, anti-backlash features, and shaft motor couplers for all three axes. At only $250 more, and given the design quality, it was hard to argue against getting it. More time making, less time building. The only minor issue is that the CNC-ready mill doesn't include anything to mount the handwheels to the recessed shaft couplers, since Sherline assumes you will immediately install a dual-shaft NEMA 23 stepper motor with the handwheels mounted on the back end. This is easy to bootstrap. All you need for each handwheel is a short section of 0.25″D steel rod, long enough to span the recess, with filed-down flats for the set screws, and a 1″ wide flat aluminum bar with 3 holes to span the diagonals of the stepper motor mounts, with the middle hole providing some stability for the shaft and handwheel. Everything needed is available in a typical hardware store and can be made with basic hand tools.

One of the many nice things about Sherline mills is that they have a large selection of pre-built parts and mechanisms designed for their machines, including a fully capable CNC setup, complete with technical details. According to their website, their CNC builds use an Allegro SLA7044M unipolar stepper driver with a 24V/4A power supply. Their motors are NEMA 23 with a 1/4″ diameter dual shaft, 3.2V/2A, a 120 oz-in torque rating, 1.8deg steps (200 steps/rev), and 250 g-cm^2 rotor inertia (important for acceleration). Their maximum feedrate is 22 ipm, which may sound slow to some people, but most metal milling operations should occur below 6 ipm, and the maximum travel in any direction is 9″. These are the minimum design parameters I will be basing my build on.
In choosing a stepper motor, the most significant force a CNC motor contends with is inertia: from the motor rotor, leadscrews, mill table, etc. High inertial forces occur when starting, stopping, or changing direction during an operation, which also governs how well your mill can machine tight curves due to centripetal acceleration. If your motor does not supply enough torque to overcome the inertial forces, the steppers will likely skip steps and lose track of their position, since the open-loop controller does not have any feedback to correct for this. In other words, high torque at cutting speeds (<6 ipm) is good.

Other factors to consider: Higher stepper driver voltage increases high-speed motor torque and the maximum speed, but does not affect low-speed or holding torque. Higher driver current increases low-speed and holding torque, but does not affect high-speed torque or the usable maximum speed. And bipolar coil windings in series double low-speed torque but halve the maximum speed compared to when the coils are connected in parallel. So, to get the most out of a stepper motor for a CNC application, use a bipolar stepper motor with coils wired in series, driven at the maximum rated current and highest allowable voltage, with low rotor inertia.

If you are designing a DIY mill that is not a Sherline, or do not have access to technical information from a successful build, I would recommend first computing the inertia (rotational and translational) of each axis of your machine and the torque required for your desired acceleration per axis. Since motor torque drops as speed increases, you will also need to determine the nominal torque due to friction in your machine to find the required torque at the maximum desired feedrate per axis. This should give you a baseline for what motor you will need. From this, I ended up selecting a 2.2V/1.5A, 185 oz-in bipolar stepper motor from Keling Inc for $27.95 each.
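As a rough illustration of this sizing exercise, here is a back-of-the-envelope calculation in Python. It is only a sketch: the reflected load inertia, acceleration target, and friction torque below are hypothetical placeholder numbers, not measurements from any actual machine.

```python
def required_torque_ozin(rotor_inertia_gcm2, load_inertia_gcm2,
                         accel_rad_s2, friction_ozin):
    """Estimate motor torque needed for a target angular acceleration.

    tau = J * alpha + friction. Inertias are in g*cm^2
    (1 g*cm^2 = 1e-7 kg*m^2); the result is converted to oz-in
    using 1 N*m = 141.612 oz-in.
    """
    j_kgm2 = (rotor_inertia_gcm2 + load_inertia_gcm2) * 1e-7
    tau_nm = j_kgm2 * accel_rad_s2
    return tau_nm * 141.612 + friction_ozin

# Hypothetical example: Sherline's quoted 250 g*cm^2 rotor inertia, a
# guessed 150 g*cm^2 reflected load, a 500 rad/s^2 acceleration target,
# and 20 oz-in of friction.
tau = required_torque_ozin(250.0, 150.0, 500.0, 20.0)
print(round(tau, 1))  # comfortably under a 185 oz-in motor rating
```

Repeating a calculation like this per axis, with measured inertias and friction, gives the baseline torque figure described above.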
They also supply another slightly smaller and lighter 6.1V/1.7A, 156 oz-in stepper that would work as well, but it did not have quite as nice a torque vs. speed curve. Although the mass and rotor inertia of these stepper motors are slightly higher (0.6 vs 0.7 kg) than those used in the Sherline CNCs, the torque is significantly greater, especially considering they are driven as bipolar rather than unipolar. They should provide slightly better acceleration response and less chance of skipping a step, but mainly they should drive the CNC at much faster maximum feedrates. A design trade that's worth it in my mind.
In choosing a bipolar stepper driver, there are many options, but for reasons in a previous post, I opted for Pololu's A4983/A4988 bipolar stepper motor drivers. I chose these primarily for their low cost ($12.95 each), high efficiency, simplicity, and high 2A maximum current per coil. (Sparkfun's EasyDrivers do not supply enough current at 0.75A per coil.) With a simple circuit design, these should be able to interface via parallel port to EMC2 or any other CNC interface, as well as to an Arduino with grbl.

In choosing a power supply, the supply voltage should be just under the maximum voltage the drivers allow, leaving a little buffer for back EMF. (Sorry, ATX PC power supplies will not work here!) The Pololu stepper drivers have a maximum motor voltage of 32V and require heatsinks for currents over 1A per coil, but you don't need much of one if you drive the motors at the highest voltage possible. These drivers recycle the energy already present in the motor coils and tend to be more efficient at higher voltages, since the higher voltage forces the energy out of the motor windings faster, generating less waste heat. Personally, I saw a huge difference in temperature between 24V and 30V. At 24V, the heatsinked drivers overheated within a minute at 1.5A per coil and tripped the internal thermal protection; at 30V, the driver heatsinks were hot, not searing, and didn't require a fan. I ended up purchasing a KL-150-24 24V/6.3A switching power supply from Keling for $39.95. These have a potentiometer to adjust the output DC voltage from the rated 24V by about +/- 6V, which provided me about 30V and probably up to 5A. Surprisingly, the Pololu stepper drivers are so efficient that when all three are active in full-step mode (all windings energized), the current drawn from the power supply is no more than 1A. As of yet, I have not exceeded the capabilities of the power supply, even under cutting load.
With everything selected, next came the build, which is very straightforward. I first built everything on a breadboard to make sure the circuit was good. Just follow the wiring diagrams for the Pololu stepper drivers; here are a few things I came across that should be noted. Try to use decoupling capacitors to ensure a clean source of power for both the logic and motor power. Logic ground should be shared with all other logic grounds, including your controller's. The motor power grounds should be star grounded at the motor power terminal. Logic and motor ground are already shared internally in each Pololu stepper driver board and do not need an additional external connection between the two. (Fairly sure on this. It keeps the two grounds independent of each other, and adding another ground at the star ground would likely cause a ground loop.) Finally, the Pololu A4983 stepper drivers need a pull-down resistor on the MS1 pin to operate correctly. Their other three stepper drivers have internal pull-down resistors for that pin. Not really sure why.
After the breadboard testing, I picked up a small protoboard from Sparkfun and built the circuit. I highly recommend getting a good plated protoboard. It will save you some soldering headaches for little cost. Also, I created some DIY jumpers with female headers and breadboard wire to easily change the microstep size for each driver and make the sleep and reset pins readily available for future mods. If you choose to use grbl, note that the pictured Arduino uses grbl edge version 0.7b, not the current master 0.6b. This is due to a change between versions in how the stepper enable pin is held high/low, for compatibility with both the Pololu drivers and the grblshield. To get grbl edge version 0.7b, hit the link, download and compile the source code, and flash it to your Arduino, all according to Simen's instructions.
In testing the DIY CNC system, everything went exactly as planned, except for one thing. Steppers are driven with square wave pulses and create higher and higher audible frequencies when driven faster and faster. At high enough frequencies, they can excite the structural vibration modes of your mill, causing everything to shake and rattle. To minimize any resonance effects, you need to select a motor with low mass, as stated in a previous post. But the one thing I didn't account for is the vibration modes of the internal rotor of the motor. With the Keling steppers I selected, if I were to 1/4 or 1/8 microstep at feedrates above 15 ipm, the motor's internal rotor would begin to resonate and stall, even though it was nowhere near its maximum theoretical speed. In 1/2 microstep or full-step mode, the rotor wouldn't resonate and would run up to the motor's maximum feedrate of 30-35 ipm (this corresponds to the torque curve when running at 5kHz step pulses). So, I'm basically forced to run in half-stepping mode, which is just a bit louder in operation, but it's plenty precise at 0.000125″ per step and still 50% faster than the Sherline CNC system. In hindsight, I would have selected a better quality stepper motor with a stiffer, more robust casing to remove the rotor resonance problem, but there was no way to tell whether this would be a problem until I purchased and tested the motors anyhow.
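The step resolution quoted above is easy to verify from the leadscrew pitch and motor step count. A quick sketch of the arithmetic:

```python
def inches_per_step(tpi=20, steps_per_rev=200, microsteps=1):
    """Linear travel per step pulse for a leadscrew-driven axis.

    One revolution advances 1/tpi inches and takes
    steps_per_rev * microsteps pulses.
    """
    return 1.0 / (tpi * steps_per_rev * microsteps)

# Half-stepping a 200 step/rev motor on a 20 TPI Sherline leadscrew:
print(inches_per_step(microsteps=2))  # 0.000125 inches per step
```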
So what's next? At the moment, the DIY CNC system is streaming g-code commands to the grbl Arduino through the USB serial port via a Python script and working as designed. I had intended to look into creating a headless system with joystick control and an LCD readout, but grbl is unfortunately still somewhat beta and isn't designed for easily adapting to an external interface. Meaning, grbl does not currently have a way to get real-time feedback or issue real-time commands through its serial port interface. It also doesn't yet have a way to compensate for backlash internally, or some other useful features, such as variable feedrate, pause/reset, status reports, etc. Although it's possible to modify grbl to do these things, it still may not be the best solution for my mill, considering there is always EMC2. But I do really like the idea of being able to write my own Python scripts to have complete control of the mill. Anyhow, I'm still looking into it and, depending on where it's headed, considering helping to develop grbl, as it has a lot of promise, especially for other applications.
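For reference, a minimal version of such a streaming script might look like the following. This is a sketch, not the actual script mentioned above: it assumes the pyserial package, that grbl answers each g-code line with an "ok" or an error response, and a placeholder serial port name.

```python
def clean_line(raw):
    """Strip anything after a ';' comment plus surrounding whitespace."""
    return raw.split(";")[0].strip()

def stream_gcode(path, port="/dev/ttyUSB0", baud=9600):
    """Send a g-code file one line at a time, waiting for grbl's reply."""
    import serial  # pyserial; imported here so clean_line needs no hardware
    with serial.Serial(port, baud, timeout=10) as conn, open(path) as gcode:
        for raw in gcode:
            line = clean_line(raw)
            if not line:
                continue  # skip blank lines and pure comments
            conn.write((line + "\n").encode())
            reply = conn.readline().decode().strip()
            if not reply.startswith("ok"):
                raise RuntimeError(f"grbl rejected {line!r}: {reply}")
```

This simple send-and-wait approach leaves grbl's block buffer mostly empty between lines; a more aggressive streamer would keep the buffer full to avoid the hiccups described earlier.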
With this all said and done, this was a fun project and very cheap. The cost of a Sherline CNC driver-and-stepper-only system is $805 ($1825 with a computer running EMC2). With the cost of the motors (~$90), drivers (~$40), Arduino grbl controller (~$30), power supply ($40), and misc build hardware ($50), the total cost of the DIY build was roughly $250. Even though there is some more to do, like deciding on an enclosure and more proofing of the system, the DIY approach is, compared to the Sherline CNC, about a third of the cost, with 50% higher low-end torque and a 50% higher maximum feedrate, and it's completely modular, maintainable, and cheap and easy to fix. Well worth the time, I say.
Categories: Arduino, CNC, Machining
## Grbl: Why reinvent the wheel?
Recently, I took a stab at writing a stepper motor driver for an Arduino, mainly to see what it would entail to get precise control of all three axes of the CNC and do other tasks as well. I have seen a lot of short example Arduino programs online that do this, but usually only for one motor. To my surprise (or not), creating three asynchronous pulse trains for the stepper motors, while simultaneously making the necessary computations for each block of g-code, was a bit more difficult to do, mainly due to the processing limitations of the Arduino. This is especially a problem if you want to do anything else on top of this, like an LCD display, joystick manual control, etc.
Creating precisely timed, asynchronous pulse trains requires interrupt programming, where you commandeer one or both of the PWM timers in the Arduino to use as an internal clock, independent of the main clock, to time the pulses. Interrupt programming basically counts down from a user-specified time and, upon reaching zero, pauses the main program, runs the interrupt code (which in this case is a single stepper pulse), then resumes the main program. But along with being more difficult to program, the main drawback is that the more interrupts you have, the slower the main code runs.
For my Sherline DIY CNC, the frequency of the interrupt pulse trains can cause a problem. Suppose we look at the worst-case scenario and set the maximum feedrate to 20 inch/min for all three axes. With a 20 threads/inch leadscrew and a 200 steps/rev stepper motor in full-step mode, each stepper requires a pulse rate of 20*20*200/60 = 1333.3 steps/sec. Not too bad. But if we'd like to microstep the motors at 1/4 steps, each pulse rate goes up to 4 pulses/step * 1333.3 steps/sec = 5333.3 pulses/sec. That's 5.3 kHz… for one motor. All together, the Arduino needs to supply an asynchronous pulse train to all three motors at a rate of roughly 16 kHz, or one pulse every 63 microseconds.
So, now the question is how much processing time is left to run the main program, which loads, interprets, and executes the next g-code block along with other tasks. In short, not much. Allegro-based stepper motor drivers require a minimum 1 microsecond pulse rise and fall to register that a step has occurred. So, let's set a relatively conservative total pulse time of 15 microseconds to ensure the pulse is received, and give the interrupt routine a 5 microsecond overhead. This leaves about 43 microseconds between pulses for the main program to run. Given that the Arduino runs at 16MHz (16 CPU cycles per microsecond), the main program will only get through very short segments of execution at a time and will effectively run at no more than 68% of normal speed. If you decide to microstep at 1/8 step, things go from bad to worse, and the main program will run at no more than 37.5% speed (under the same assumptions). Additionally, this assumes that internal interrupt handling at runtime is 100% efficient, which is likely not the case.
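The arithmetic in the last two paragraphs can be condensed into a few lines of Python, using the same assumptions (15 µs total pulse time, 5 µs interrupt overhead):

```python
def step_rate_hz(feed_ipm, tpi=20, steps_per_rev=200, microsteps=1):
    """Step pulses per second for one axis at a given feedrate."""
    return feed_ipm * tpi * steps_per_rev * microsteps / 60.0

def main_program_fraction(total_rate_hz, pulse_us=15.0, overhead_us=5.0):
    """Fraction of CPU time left for the main program between interrupts."""
    period_us = 1e6 / total_rate_hz
    return (period_us - pulse_us - overhead_us) / period_us

one_axis = step_rate_hz(20, microsteps=4)  # 1/4 microstepping at 20 ipm
all_axes = 3 * one_axis                    # ~16 kHz across three axes
print(round(one_axis, 1), round(main_program_fraction(all_axes), 2))
# prints: 5333.3 0.68
```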
Although this is only a worst-case scenario, it shows how the Arduino has the potential to struggle to multi-task with high-frequency pulse trains and other desired tasks in certain situations. If the Arduino can't keep up, what will likely happen is stalling during CNC operation between g-code blocks, motor jitter that can lead to skipped steps due to rapid stops and accelerations, or the Arduino itself crashing and not finishing the job, where you then have to figure out how to restart your g-code from the middle of the program while likely having lost your reference point. This is not good for the part or the tool, let alone the finish of a machined part (a stall causes the cutting tool to dwell, leaving unsightly tool marks).
So, where does this leave me? Sure, it's very possible to write code that will do everything that I'd like an Arduino to do. Sure, if the steppers jitter, it might not be that bad. BUT, just looking at this interrupt issue makes me cringe at the thought of how much time programming and testing and programming and debugging and programming and etc., etc. will be involved in making sure the code is solid and dependable. Plus, given the great possibility that the Arduino compiler doesn't optimize well, it will likely have to be written in a traditional programming language, as in C++. But, luckily, somebody has already done it. Thank you, Simen!
Grbl (github, blog), written by Simen S.S. of Norway, is an open-source g-code interpreter for driving stepper motors on standard 328 Arduinos. It's written in C to create a highly optimized, efficient, stable controller capable of independently driving 3 stepper motors at up to 30k jitter-free step pulses per second (according to the site). It's well-tested, handles acceleration and deceleration, queues up to 20 g-code blocks, and even includes a planner to anticipate and execute efficient movements.
Although the code itself is beautifully written and well commented, there isn’t much information yet on how it works and how to interface with it for those of you that can’t read C code. In my next post, I’ll take a shot at it for the rest of us out there.
For me, grbl is simple and nearly the perfect solution, but it can be difficult to integrate into my CNC system if I want to modify the code for my own personal changes, i.e. headless control, manual control via joystick, adjusting feedrates, an LCD display with read-outs of position, a read-write SD card with g-code, etc. Meaning that for every new release, the new code must be re-modified for each of my personal changes and then thoroughly re-proofed to ensure it did not hurt the performance of the motor pulse trains. Not to mention, this must all be written in C as well, since Simen's code cannot be efficiently translated to the Arduino language. Rather than deal with this, I like Ed's two-controller solution (Ed's Life Daily), where he uses one 'slave' grbl Arduino dedicated to g-code and motor control and another 'master' Arduino for reading an SD card and streaming the g-code commands to the 'slave'. This keeps grbl and the user interface independent of each other and resistant to any new features that Simen decides to add in the future. But rather than just streaming code, you could program the 'master' Arduino in the simpler Arduino environment, using the standard libraries, to perform the majority of tasks that you would want headlessly, which, IMO, is an order of magnitude easier to program and maintain.
As the post title states, why reinvent the wheel?
UPDATE: First, let me say that the conclusions of this post need some clarification. A little over a year since I wrote this post, much has happened, such as becoming a main developer for Grbl, and I have learned quite a lot. When this was written, I came from a background of interpreted programming languages, like Matlab and Python, where programs run much like scripts, direct and linear, with little to no multitasking. The same could be said of the Arduino IDE in the way it's designed to work easily for beginning users. The point is that, even though an Arduino and other microcontrollers can easily generate high-frequency stepper pulses, a fully functioning and reliable CNC controller requires advanced algorithms and techniques to handle everything else, such as the incoming serial data stream, parsing the data, planning the accelerations (which you have to do with the open-loop control of the steppers), and any other features like real-time status reports and control switches. If the controller either can't keep a steady stream of stepper pulses or can't keep up with feeding the steppers the correct motions, the steppers will likely lose steps (hence position) and crash into something. One of Grbl's many tricks to solve the problem described in this post is to combine all of the different axes' stepper pulses with a Bresenham line algorithm into one stepper pulse 'tick', rather than three different ones, and to manage memory and data flow very efficiently. As part of Grbl, we're constantly pushing the limits of what the little Arduino can do and adding more features all the time. If you'd like to learn more, contribute, or would like to see a new feature added to Grbl, please visit us at Github.
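The Bresenham trick mentioned above can be illustrated with a short sketch. This is a simplified model of the concept, not Grbl's actual C implementation: each loop iteration stands in for one interrupt 'tick', and the shorter axes pulse on a subset of the dominant axis's ticks.

```python
def bresenham_ticks(dx, dy, dz):
    """Yield one (step_x, step_y, step_z) tuple per interrupt tick.

    All three axes share a single tick stream; an axis steps whenever
    its error counter rolls over, spreading its steps evenly across the
    dominant axis's steps (the classic Bresenham line algorithm).
    """
    steps = [abs(dx), abs(dy), abs(dz)]
    dominant = max(steps)
    counters = [dominant // 2] * 3  # start half-full to center the error
    for _ in range(dominant):
        tick = []
        for axis in range(3):
            counters[axis] += steps[axis]
            if counters[axis] >= dominant:
                counters[axis] -= dominant
                tick.append(True)   # pulse this axis on this tick
            else:
                tick.append(False)
        yield tuple(tick)

ticks = list(bresenham_ticks(10, 5, 2))
print(len(ticks), sum(t[1] for t in ticks), sum(t[2] for t in ticks))
# prints: 10 5 2 -- X steps every tick, Y on 5 of them, Z on 2
```

The payoff is that only one timer interrupt is needed, at the dominant axis's rate, instead of three independent pulse trains.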
Categories: Arduino, CNC
## AccessDenied's LaTeX Tutorial

Note: This is NOT a Question! I am just joining the cool new trend of having a tutorial for something, and it gives me a good excuse to make a question. This Tutorial is for LaTeX formatting (mainly without Equation Editor). There are three sections in total, and I'll post them individually, along with the introduction, for a total of four posts. Enjoy!
$$\color{gold}{\star \ \star} \quad \large{ \mathbb{\text{Doing That } \LaTeX \text{Thing}} } % Because knowing is half the battle! % % I hope you enjoy the tutorial as much as I did writing it. It's been fun! % % - Written by: AccessDenied, master of using an excessive amount of work in things %$$ LaTeX is a very powerful and versatile tool for writing Mathematics. The Equation Editor gives you only a small glimpse into the true power of the language. My goal is to give you the power to write this LaTeX on your own and tap into that potential! Prepare to delve into the wonderful world of LaTeX! $$\color{orange}{\cdot} \quad \small{ \text{Experience will best supplement this tutorial. Try things out!} }$$
$$\color{goldenrod}{\star} \quad \mathbb{\LaTeX \text{ Syntax; Formatting, Commands, and Environments}}$$ $$\color{orange}{\cdot} \quad \text{Adding LaTeX to your Posts}$$ Creating fine posts that will coexist with the LaTeX content requires that you know how to declare it in the posts correctly and effectively. There are two ways to declare it: block content and in-line content (sometimes referred to as "displayed" and "text," respectively). $$\cdot \quad$$ Block content is inserted into the post as its own stand-alone element. The LaTeX is put on its own line. This type of LaTeX is applied by wrapping the code between the delimiters: $$\backslash [ \cdots \backslash ]$$. $$\cdot \quad$$ In-line content is inserted into the post with respect to its surroundings. This allows LaTeX to be inserted directly into paragraph explanations. This type is applied by using the delimiters: $$\backslash ( \cdots \backslash )$$. Which method you use is largely based on your purposes. Just remember that in-line is treated a tad differently from block content in that, it is made smaller and more compact than the block content by nature. This often affects placement of things like the stuff usually under the limit $$\lim$$ where the in-line method places it to the side, as so: $$\lim_{x \to 4}$$. This is fixed using a command (If you're interested, it is called $$\textbf{\limits}$$ -- I won't go over it here). Another note (it is a minor point), in-line text will not wrap around the text-space unless it is preceded by something. Block content will always wrap around, however. One of the staples for LaTeX is the backslash $$\backslash$$ symbol. It comes in on the declaration and the commands. It is also used to start a new line (double-backslash). That should help conserving the need for many individual LaTeX spaces. Okay, we got that out of the way! So, what can we do? 
You can type some basic equations in the LaTeX for the pretty, italicized font, but you'll quickly notice the limitations -- no spaces and no fancy font things... where is the fancy?! Well, we're getting there.
$$\color{orange}{\cdot} \quad \text{Commands -- Adding in that functionality}$$ Commands are the masters of the functionality you really want in the posts. They're very simple, too. The basic syntax of a command is as follows: $$\quad \boxed{ \quad \color{green}{\star} \quad \backslash \underline {\textit{name}} \{ \textit{argument}_1 \} \{ \textit{argument}_2 \} \ ... \qquad }$$ It is most common for the command to have from zero to two arguments. These arguments are usually either switches for how the command works or the target of the command. There are many, many commands for LaTeX, more than I can even discuss honestly. We'll touch on the important / awesome ones, though. I leave it to you to search for the rest as you please. $$% Just AccessDenied adding some random comments to make sure you can't reuse it! %$$ The first commands we should learn is for inserting text into the LaTeX correctly. You could try writing a message like "I love LaTeX!" directly, but it doesn't turn out right, as I shall present: $$\quad \boxed{ \quad \color{green}{\Rightarrow} \quad \text{I love LaTeX!} \qquad }$$ $$\quad \boxed{ \quad \color{lime}{\Leftarrow} \quad I love LaTeX! \qquad }$$ I'll use this kind of representation a lot for this tutorial. Just internalize that this is "input =>" "output <=" and you'll be good. On a more relevant note, however, we can see there is a serious lack of spacing. It's nowhere near as pretty as just typing it out... well, let's try out a command. $$\textbf{\text} \{ ... \}$$ Well, it sounds aptly named. Let's try it out. $$\quad \boxed{ \quad \color{green}{\Rightarrow} \quad \text{\text{I love LaTeX!}} \qquad }$$ $$\quad \boxed{ \quad \color{lime}{\Leftarrow} \quad \text{I love LaTeX!} \qquad }$$ We have our spaces in and everything! Just for novelty, let's also use the \LaTeX command, which adds a nice label for the LaTeX name. 
$$\quad \boxed{ \quad \color{green}{\Rightarrow} \quad \text{\text{I love } \LaTeX !} \qquad }$$ $$\quad \boxed{ \quad \color{lime}{\Leftarrow} \quad \text{I love } \LaTeX !} \qquad$$ Notice that some commands are written with specific capitalized letters. Command names are case-sensitive, so be wary that some functions will not work without addressing that capital letter. Also notice that we have an extra space after "love." That space carries even outside of the command. Just keep those things in mind! The text command creates an environment apart from the Math space, which omits the spaces. However, spacing is still possible even without the text command. In fact, there are a lot of commands for spacing in the Math space. I'll just list them here. $\quad \begin{array}{|c|c|} \hline \textbf{Command} & \textbf{Effect} \\ \hline \backslash & \text{ Single space equivalent } \\ \backslash quad & \text{ Creates large horizontal space } \\ \backslash qquad & \text{ Two \quad } \\ \backslash , & \text{ Equals 3/18 a \quad } \\ \backslash : & \text{ Equals 4/18 a \quad } \\ \backslash ; & \text{ Equals 5/18 a \quad } \\ \backslash ! & \text{ Equals -3/18 a \quad } \\ \hline \end{array}$ What? An antispace!? Well, I guess antispaces may have uses somewhere. I never particularly found one, though. I guess if you wanted to create a 1/18 a \quad space or even -36/18... oh nevermind, not important! There are also the Mathematical functions. You could type them normally, but they're usually better oriented with the command. $\quad \begin{array}{|c|c|c|} \hline \text{\frac} & \text{_} & \text{^} \\ \hline \text{\sqrt} & & \\ \hline \text{\sin} & \text{arcsin} & \text{sinh} \\ \hline \text{\log} & \text{\ln} & \text{\exp} \\ \hline \text{\lim} & \text{\int} & \\ \hline \text{\sum} & \text{\prod} & \\ \hline \text{\mod} & \text{\inf} & \text{\sup} \\ \hline \end{array}$ That frac function is probably one of the big secrets of the Equation Editor and LaTeX here. 
The syntax of the fraction function is \frac{numerator}{denominator}. In addition, the root's index actually is placed into a weird place in square brackets (\sqrt[index] {content}) The majority of the others do come up on Equation Editor, however, and should be familiar if you've played with it before. $$% AccessDenied, adding in some special comments here and there %$$ Lastly, I want to go over some modifying commands. This should wrap up the (useful) commands section. These commands are for changing the size and style of your writing in LaTeX. $\quad \begin{array}{|c|c|} \hline \textbf{Name} & \textbf{Explained} \\ \hline \text{rm} & \mathrm{Roman} \\ \text{it} & \mathit{Italicized} \\ \text{bf} & \mathbf{Bold} \\ \text{frak} & \mathfrak{Fraktur} \\ \text{cal} & \mathcal{Calligraphy} \\ \text{sf} & \mathsf{\text{Sans-serif}} \\ \hline \text{tiny} & \text{Tiny font} \\ \text{scriptsize} & \text{Size of a sub/super script} \\ \text{small} & \text{Small font} \\ \text{normalsize} & \text{Normal} \\ \text{large} & \text{Larger font} \\ \text{Large} & \text{Even larger font} \\ \text{LARGE} & \text{LARGE font} \\ \hline \text{color} & \color{green}{\text{Makes colors!}} \\ \hline \end{array}$ The first set of commands can come in a few forms, although some are exclusive to one form. All of the above forms may be stand-alone. These will not have arguments and apply to all the content in the space. The first three may be used appended to \text (i.e. \textrm{}, \textbf{}) for the same effect as if you used the stand-alone version and then \text{}, and only apply to a target-argument. All of these are also applicable at the end of a function "\math__" and will also apply to a target-argument. The sizes can either be left stand-alone to apply to their respective line, or be fed a target-argument for what should be made bigger. There are even larger sizes, but I find them overkill. The names are intuitive, though. Anything as "huge" as LARGE is absurd. 
The color command takes a color argument in either hexadecimal or nominal format (\color{#FF0000}{text} and \color{red}{text} do the same thing). As a test of our newfound skills, let's try to make a (considerably fancy) display for the slope-intercept form of lines! We would expect this to fail miserably, typing it directly into LaTeX: $$\quad \boxed{ \quad \color{green}{\Rightarrow} \quad \text{Slope-intercept form: y = mx + b} \qquad }$$ $$\quad \boxed{ \quad \color{lime}{\Leftarrow} \quad Slope-intercept form: y = mx + b \qquad }$$ And it does. Now, let's surround the "Slope-intercept form: " in a \text{} command. $$\quad \boxed{ \quad \color{green}{\Rightarrow} \quad \text{\text{Slope-intercept form: } y = mx + b} \qquad }$$ $$\quad \boxed{ \quad \color{lime}{\Leftarrow} \quad \text{Slope-intercept form: } y = mx + b \qquad }$$ There we go! Finally, we'll make it bold by replacing \text with \textbf and add a \quad between the text and formula for horizontal spacing. $$\quad \boxed{ \quad \color{green}{\Rightarrow} \quad \text{\textbf{Slope-intercept form: } \quad y = mx + b} \qquad }$$ $$\quad \boxed{ \quad \color{lime}{\Leftarrow} \quad \textbf{Slope-intercept form: } \quad y = mx + b \qquad }$$ We're done! Looks a lot nicer than just writing it out, which would take way too little time! Well, let's end this section off and move onto the final part: Environments!
• 2 years ago
4. AccessDenied Group Title
Best Response
You've already chosen the best response.
54
$$\color{orange}{\cdot} \quad \text{Eco-friendly / Safe Environments}$$ So, there's just one more thing to cover, called "environments." The name may sound a little funny, but I assure you that these are anything but lame! Environments allow special conditions to be applied to the LaTeX space. There are two very useful ones. The first one is called "align." The align environment allows the use of a special operator "&" that forces each line to line up at those points. An example for this environment could be a solution to the equation "2x + 3 = 9." \quad \boxed{ \begin{align} \quad & \color{green}{\Rightarrow}\quad \text{\begin{align}} \qquad \\ & \color{green}{\Rightarrow} \quad \ \text{ 2x + 3 &= 9} \qquad \\ & \color{green}{\Rightarrow} \quad \ \text{ 2x &= 6} \qquad \\ & \color{green}{\Rightarrow} \quad \ \text{ x &= 3} \qquad \\ & \color{green}{\Rightarrow} \quad \text{\end{align}} \qquad \end{align} } \quad \boxed{ \begin{align} \quad & \color{lime}{\Leftarrow} \quad & 2x + 3 &= 9 \qquad \\ & \color{lime}{\Leftarrow} & \quad 2x &= 6 \qquad \\ & \color{lime}{\Leftarrow} & \quad x &= 3 \qquad \end{align} } Notice that format. It starts with "\begin{align}" and ends with "\end{align}." This is how environments will always be declared. It is mostly the same with the array environment: instead of "align", you just have "array." However, the array environment takes an extra parameter after the "\begin{array}" part. We have to declare the alignment of each element in the table, and we may optionally specify vertical separators. The environment also supports a command called $$\textbf{\hline}$$ to make horizontal lines. The possible alignment characters are "l", "c", and "r." They stand for "left," "center," and "right" alignment, respectively. So, to write the array header for a two-column array with all elements aligned to the right and no vertical bars, we'd use "\begin{array}{rr} ... \end{array}."
Similarly, if we wanted to add a vertical bar between the first and second item of each row, we'd use "\begin{array}{r|r}." I will not create another example of arrays, for the sake of making sure the post isn't going to explode. Instead, I want to give you the plaintext version of my tutorial, which already includes a few uses of arrays, so that you may see how it was all written as well! Enjoy! This will conclude the tutorial! I hope you have learned a thing or two about writing LaTeX. Just remember that your creativity will drive the effectiveness of what you do with this. Personally, I thought the diagram of a rectangle I made with arrays was fairly nifty. $$% Disclaimer: I love you all very much for checking out the guide %$$ Plaintext Source; Tutorial: http://pastebin.com/raw.php?i=v8GUrZi5
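Since the post above skips an array example for length (and the revision notes further down even mention wanting one), here is a minimal sketch of my own using the exact header syntax described: two right-aligned columns separated by a vertical bar, with \hline rules:

```latex
\begin{array}{r|r}
\hline
x  & x^2 \\ \hline
2  & 4   \\
10 & 100 \\ \hline
\end{array}
```

Changing {r|r} to, say, {l c} would left-align the first column, center the second, and drop the bar.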
• 2 years ago
5. zepp Group Title
AccessDenied had a lot of issues when writing the tutorial.. but he did it! *Applause* ;D
• 2 years ago
6. AccessDenied Group Title
two revisions and a weird error concerning environments going after LaTeX
• 2 years ago
7. alexwee123 Group Title
wall of text didn't read :D
• 2 years ago
8. Hero Group Title
I scanned through it in about a minute. Good info.
• 2 years ago
9. asnaseer Group Title
Great stuff AccessDenied. There is a $$\LaTeX$$ practising group where you could post this as well: http://openstudy.com/study?login#/groups/LaTeX%20Practicing!%20%3A)
• 2 years ago
10. Hero Group Title
They should feature these tutorials on the main page along with creating a special group. All of the tutorials so far have been pretty impressive.
• 2 years ago
11. AccessDenied Group Title
Thanks. I don't think I made any more mistakes after revising again, but I hope that if I did, people can catch them. The biggest deal in making it was just writing the LaTeX in LaTeX. There were a lot of weird issues that would come up doing that. :P There was also a weird thing that happened when I tried to type the environment declarations after an independent LaTeX string earlier in the post, where the environments would become LaTeX delimiters outside of the regular ones. D:
• 2 years ago
12. KingGeorge Group Title
Very impressive. And very helpful as well. Excellent job.
• 2 years ago
13. lgbasallote Group Title
just when i thought ishy and myin were the best in latex you proved me wrong
• 2 years ago
14. KingGeorge Group Title
I enjoy your comments in the plaintext source.
• 2 years ago
15. Hero Group Title
I'm nitpicking, but you should have stated at the beginning of your tutorial that right arrow means input, and left arrow means output. @lgbasallote, there are several people who are really good at Latex, not just those you have specified.
• 2 years ago
16. Hero Group Title
Belay my last
• 2 years ago
17. lgbasallote Group Title
fine..hero is good in latex too...i know you wanted to be mentioned :P haha jkjk
• 2 years ago
18. Hero Group Title
I didn't say it was me :P
• 2 years ago
19. AccessDenied Group Title
Oh, it's actually just that if there are closed LaTeX strings anywhere in the post, \begin{} and \end{} become delimiters for their own LaTeX environments. Yeah, that might be a good idea for the future. Most ideas came as the thing progressed, those arrowed boxes being one of them. :D
• 2 years ago
20. AccessDenied Group Title
What! Fine then. Anytime.
• 2 years ago
21. lgbasallote Group Title
i hope with these the number of ambiguous questions would drop
• 2 years ago
22. AccessDenied Group Title
Notes for improvements in possible future revisions:
* Fix mis-spacing in title (L 1)
* Fix qquad position (L 58) (should precede closing bracket)
* Add in the diagram I referred to (L 130)
* Add introductory note for used notations
* C: possibly more elaboration in the environments section
* C: could probably add an example for arrays, it wasn't too bad to post as predicted
* C: possibly more explanation of the line break, it wasn't emphasized very well
* Add sources or helpful sites to check out:
--> http://en.wikibooks.org/wiki/LaTeX/Mathematics (originally taught me a lot of the possible commands, also found some interesting commands recently)
--> http://insti.physics.sunysb.edu/latex-help/ltx-176.html (text size, usable fonts)
• 2 years ago
23. AccessDenied Group Title
I'm bumping in hopes that somebody does catch any mistakes; it would help in making a revision where I don't miss something. D:
• 2 years ago
24. AccessDenied Group Title
I'm assuming that most of the things to tweak have been plucked out now, so I can probably just adjust and post in the other section as mentioned earlier. D:
• 2 years ago
25. TheViper Group Title
$\Huge{\color{red}{\mathbb{LaTex}}}$
• 2 years ago
26. TheViper Group Title
wow!
• 2 years ago
27. TheViper Group Title
I didn't know that I could do that in the Equation Editor!
• 2 years ago
28. TheViper Group Title
Now, I too love $\Huge{\mathbb{\LaTeX}}$
• 2 years ago
29. TheViper Group Title
So much interesting!!
• 2 years ago
30. TheViper Group Title
I can't tell in words how interesting LaTeX is!
• 2 years ago
31. KingGeorge Group Title
not to be rude, but it would be nice if you were to take your practicing to the $$\LaTeX$$ practice group at openstudy.com/study#/groups/LaTeX instead of repeatedly posting here.
• 2 years ago
32. TheViper Group Title
$\Huge{\color{red}{\rightarrow \boxed{\mathbb{\text{I loveLaTeX}}}}}$
• 2 years ago
33. TheViper Group Title
$\Huge{\color{orange}{\star \star}{\text{{Very Excellent Job}}}}$
• 2 years ago
34. TheViper Group Title
So useful! Thanx so much!
• 2 years ago
35. TheViper Group Title
I can't thank u in my words! I m so happy to know it! & thanx @ParthKohli for telling me this $\Huge{\color{red}{\star}{Tutorial}}$
• 2 years ago
36. sami-21 Group Title
$\Huge \color{blue}{nice}\color{red} {tutorial}$ !
• one year ago
37. AccessDenied Group Title
$$\large \frak{\text{Thanks, glad you learned from it!}}$$ :) I should start on a second version, so I can add more / fix some of those things.
• one year ago
38. mathslover Group Title
$\huge{\frak{Hey, nice}\mathsf{Tutorial}\mathbb{Great!!!!}}$
• one year ago
39. ghazi Group Title
how you guys do it?
• one year ago
40. mathslover Group Title
@ghazi prefer this group : this will help you http://openstudy.com/study#/groups/LaTeX%20Practicing!%20%3A)
• one year ago
41. TheViper Group Title
Wait, if u want to know how they do it, just right click on the LaTeX, then click "Show Math As", then click "TeX Commands" @ghazi :)
• one year ago
42. TheViper Group Title
like
• one year ago
##### 1 Attachment
43. TheViper Group Title
After clicking on "TeX Commands" a box will appear; then copy everything in the box & paste it in the equation editor @ghazi :)
• one year ago
44. TheViper Group Title
@ghazi did that help??
• one year ago
45. ghazi Group Title
@TheViper that was really helpful...thanks dude....i am grateful to this ....i shall make use of it...
• one year ago
46. TheViper Group Title
$$\Huge{\color{green}{\ddot{\smile}}}$$
• one year ago
47. TheViper Group Title
wait @ghazi :)
• one year ago
48. ghazi Group Title
wow !!
• one year ago
49. TheViper Group Title
If you want to write anything in a 'Code Block', you have to write it between these :- |dw:1347024632753:dw| Like this :- http://assets.openstudy.com/updates/attachments/503dfd2ce4b0074824ff598c-theviper-1346930814196-untitled.png It will be shown as :- GHAZI
• one year ago
50. ghazi Group Title
cool....let me try this
• one year ago
51. TheViper Group Title
try :)
• one year ago
52. ghazi Group Title
|dw:1346936212442:dw|
• one year ago
53. ghazi Group Title
it didn't work :(
• one year ago
54. TheViper Group Title
sorry, u should not type it in the drawing :)
• one year ago
55. TheViper Group Title
work as in the attachment :)
• one year ago
56. TheViper Group Title
• one year ago
57. TheViper Group Title
try now :)
• one year ago
58. ghazi Group Title
okay wait a sec
• one year ago
59. TheViper Group Title
fast
• one year ago
60. TheViper Group Title
• one year ago
##### 1 Attachment
61. ghazi Group Title
like this ,,,,,,,,,,, Type here ,,,, Viper
• one year ago
62. ghazi Group Title
damn i am too stupid at this
• one year ago
63. ghazi Group Title
by the way thanks for your time dude :)
• one year ago
64. TheViper Group Title
hey @ghazi it's not an apostrophe, it's this
• one year ago
##### 1 Attachment
65. TheViper Group Title
& u must type it only 3 times like this :-
• one year ago
##### 1 Attachment
66. ghazi Group Title
@TheViper thanks man
• one year ago
67. TheViper Group Title
ok try here:)
• one year ago
68. sauravshakya Group Title
$$\checkmark$$
• one year ago
69. MathSofiya Group Title
Thanks soo much @AccessDenied :D
• one year ago
70. jiteshmeghwal9 Group Title
$\huge{\color{red}{\frak{So \space \text{great} \space tutorial \space}\mathbb{Thanx !!!!!!}}}$ @AccessDenied :)
• one year ago
71. blackops2luvr Group Title
$$\bbox [ 15pt, #000033 ,border: 15px solid #aa8866 ] {\Huge\sf\color{white} {\ hi \ there \ :D }}$$
• 4 months ago
#### Sample records for conventional disease-modifying anti-rheumatic
1. Distribution of Podoplanin in Synovial Tissues in Rheumatoid Arthritis Patients Using Biologic or Conventional Disease-Modifying Anti-Rheumatic Drugs.
Science.gov (United States)
Takakubo, Yuya; Oki, Hiroharu; Naganuma, Yasushi; Saski, Kan; Sasaki, Akiko; Tamaki, Yasunobu; Suran, Yang; Konta, Tsuneo; Takagi, Michiaki
2017-01-01
Podoplanin (PDPN) mediates tumor cell migration and invasion, which phenomena might also play a role in severe rheumatoid arthritis (RA). Therefore, the precise cellular distribution of PDPN and it's relationships with inflammation was studied in RA treated with biologic disease-modifying anti-rheumatic drugs (DMARD) or conventional DMARDs (cDMARD). PDPN+ cells were immunostained by NZ-1 mAb, and scored (3+; >50%/ area, 2+; 20%- 50%, 1+; 5%-20%, 0: <5%) in synovial tissues from RA treated with biologic DMARDs (BIO, n=20) or cDMARD (n=20) for comparison with osteoarthritis (OA, n=5), followed by cell grading of inflammation and cell-typing. Inflammatory synovitis score was 1.4 in both BIO and cDMARD, compared to only 0.2 in OA. PDPN+ cells were found in the lining layer (BIO 1.6, cDMARD 1.3, OA 0.2) and lymphoid aggregates (BIO 0.6, cDMRD 0.7, OA 0.2), and correlated with RA-inflammation in BIO- and cDMARD-groups in both area (r=0.7/0.9, r=0.6/0.7, respectively p<0.05). PDPN was expressed in CD68+ type A macrophage-like and 5B5+ type B fibroblast-like cells in the lining layer, and in IL- 17+ cells in lymphoid aggregates in RA. PDPN was markedly increased in the immunologically inflamed RA synovitis, which was surgically treated due to BIO- and cDMARD-resistant RA. PDPN may have potential of a new marker of residual arthritis in local joints for inflammation-associated severe RA. Copyright© Bentham Science Publishers; For any queries, please email at [email protected].
2. Radiographic outcome in Hispanic early rheumatoid arthritis patients treated with conventional disease modifying anti-rheumatic drugs
International Nuclear Information System (INIS)
Contreras-Yanez, Irazu; Rull-Gabayet, Marina; Vazquez-LaMadrid, Jorge; Pascual-Ramos, Virginia
2011-01-01
Objectives: To determine rates of incident erosive disease in early rheumatoid arthritis patients, to identify baseline predictors and to evaluate erosion's impact on patient-reported outcomes. Methods: 82 patients with ≤12 months of disease duration, ≥3 years of follow-up and conventional treatment were included. Consecutive evaluations assessed swollen and tender joint counts, treatment and comorbidity, acute reactant-phase determinations and patient-reported outcomes. Digitized radiographs of the hands and feet were obtained at baseline and yearly thereafter. RA was defined as erosive when at least one unequivocal cortical bone defect was detected. Descriptive statistics and Cox regression analysis were performed. Results: At baseline, 71 of the patients were Female Sign , population median (range) age was of 38.7 (16-78.2) years, 58 patients had antibodies and all the patients had active disease and substantial disability. Follow-up cohort was of 299.3 person-years. At last follow-up (49 ± 13.8 months), 28 patients developed erosions. Erosion's location was the feet, in 12 patients. Incident rates of erosive disease at one, two, three and four years were of 8.1, 12.8, 13.8 and 5.6 per 100 person-years, respectively. Higher C-reactive protein (HR: 1.20, 95%CI: 1.04-1.4, p = 0.01) and positive antibodies (HR: 5.09, 95%CI: 1.08-23.86, p = 0.04) were baseline predictors of incident erosive disease. Erosions had minor impact on patient-reported outcomes. Conclusion: Rheumatoid arthritis patients with antibodies and higher C reactive protein at baseline are at risk for incident erosions which appear most frequently at the feet. Up to 1/3 patients conventionally treated develop incident erosions, which minimally impact function.
3. Radiographic outcome in Hispanic early rheumatoid arthritis patients treated with conventional disease modifying anti-rheumatic drugs
Energy Technology Data Exchange (ETDEWEB)
Contreras-Yanez, Irazu, E-mail: [email protected] [Department of Immunology and Rheumatology, Instituto Nacional de Ciencias Medicas y Nutricion Salvador Zubiran, Vasco de Quiroga 15, Seccion XVI, C.P. 14000, Tlalpan, Mexico, D.F. (Mexico); Rull-Gabayet, Marina, E-mail: [email protected] [Department of Immunology and Rheumatology, Instituto Nacional de Ciencias Medicas y Nutricion Salvador Zubiran, Vasco de Quiroga 15, Seccion XVI, C.P. 14000, Tlalpan, Mexico, D.F. (Mexico); Vazquez-LaMadrid, Jorge, E-mail: [email protected] [Department of Radiology, Instituto Nacional de Ciencias Medicas y Nutricion Salvador Zubiran, Vasco de Quiroga 15, Seccion XVI, C.P. 14000, Tlalpan, Mexico, D.F. (Mexico); Pascual-Ramos, Virginia, E-mail: [email protected] [Department of Immunology and Rheumatology, Instituto Nacional de Ciencias Medicas y Nutricion Salvador Zubiran, Vasco de Quiroga 15, Seccion XVI, C.P. 14000, Tlalpan, Mexico, D.F. (Mexico)
2011-08-15
Objectives: To determine rates of incident erosive disease in early rheumatoid arthritis patients, to identify baseline predictors and to evaluate erosion's impact on patient-reported outcomes. Methods: 82 patients with {<=}12 months of disease duration, {>=}3 years of follow-up and conventional treatment were included. Consecutive evaluations assessed swollen and tender joint counts, treatment and comorbidity, acute reactant-phase determinations and patient-reported outcomes. Digitized radiographs of the hands and feet were obtained at baseline and yearly thereafter. RA was defined as erosive when at least one unequivocal cortical bone defect was detected. Descriptive statistics and Cox regression analysis were performed. Results: At baseline, 71 of the patients were Female Sign , population median (range) age was of 38.7 (16-78.2) years, 58 patients had antibodies and all the patients had active disease and substantial disability. Follow-up cohort was of 299.3 person-years. At last follow-up (49 {+-} 13.8 months), 28 patients developed erosions. Erosion's location was the feet, in 12 patients. Incident rates of erosive disease at one, two, three and four years were of 8.1, 12.8, 13.8 and 5.6 per 100 person-years, respectively. Higher C-reactive protein (HR: 1.20, 95%CI: 1.04-1.4, p = 0.01) and positive antibodies (HR: 5.09, 95%CI: 1.08-23.86, p = 0.04) were baseline predictors of incident erosive disease. Erosions had minor impact on patient-reported outcomes. Conclusion: Rheumatoid arthritis patients with antibodies and higher C reactive protein at baseline are at risk for incident erosions which appear most frequently at the feet. Up to 1/3 patients conventionally treated develop incident erosions, which minimally impact function.
4. Tumour necrosis factor inhibitors versus combination intensive therapy with conventional disease modifying anti-rheumatic drugs in established rheumatoid arthritis: TACIT non-inferiority randomised controlled trial.
Science.gov (United States)
Scott, David L; Ibrahim, Fowzia; Farewell, Vern; O'Keeffe, Aidan G; Walker, David; Kelly, Clive; Birrell, Fraser; Chakravarty, Kuntal; Maddison, Peter; Heslin, Margaret; Patel, Anita; Kingsley, Gabrielle H
2015-03-13
To determine whether intensive combinations of synthetic disease modifying drugs can achieve similar clinical benefits at lower costs to high cost biologics such as tumour necrosis factor inhibitors in patients with active rheumatoid arthritis resistant to initial methotrexate and other synthetic disease modifying drugs. Open label pragmatic randomised multicentre two arm non-inferiority trial over 12 months. 24 rheumatology clinics in England. Patients with rheumatoid arthritis who were eligible for treatment with tumour necrosis factor inhibitors according to current English guidance were randomised to either the tumour necrosis factor inhibitor strategy or the combined disease modifying drug strategy. Biologic strategy: start tumour necrosis factor inhibitor; second biologic in six month for non-responders. Alternative strategy: start combination of disease modifying drugs; start tumour necrosis factor inhibitors after six months in non-responders. reduction in disability at 12 months measured with patient recorded heath assessment questionnaire (range 0.00-3.00) with a 0.22 non-inferiority margin for combination treatment versus the biologic strategy. quality of life, joint damage, disease activity, adverse events, and costs. Intention to treat analysis used multiple imputation methods for missing data. 432 patients were screened: 107 were randomised to tumour necrosis factor inhibitors and 101 started taking; 107 were randomised to the combined drug strategy and 104 started taking the drugs. Initial assessments were similar; 16 patients were lost to follow-up (seven with the tumour necrosis factor inhibitor strategy, nine with the combined drug strategy); 42 discontinued the intervention but were followed-up (19 and 23, respectively). The primary outcome showed mean falls in scores on the health assessment questionnaire of -0.30 with the tumour necrosis factor inhibitor strategy and -0.45 with the alternative combined drug strategy. The difference between
5. Patients' considerations in the decision-making process of initiating disease-modifying anti-rheumatic drugs
NARCIS (Netherlands)
Nota, Ingrid; Drossaert, Constance H.C.; Taal, Erik; van de Laar, Mart A F J
2015-01-01
Objectives To explore what considerations patients have when deciding about disease-modifying anti-rheumatic drugs (DMARDs) and what information patients need to participate in the decision-making process. Methods In-depth face-to-face interviews were conducted with 32 inflammatory arthritis
6. Real-life experience of using conventional disease-modifying anti-rheumatic drugs (DMARDs) in psoriatic arthritis (PsA). Retrospective analysis of the efficacy of methotrexate, sulfasalazine, and leflunomide in PsA in comparison to spondyloarthritides other than PsA and literature review of the use of conventional DMARDs in PsA
Science.gov (United States)
Roussou, Euthalia; Bouraoui, Aicha
2017-01-01
Objective With the aim of assessing the response to treatment with conventional disease-modifying anti-rheumatic drugs (DMARDs) used in patients with psoriatic arthritis (PsA), data on methotrexate, sulfasalazine (SSZ), and leflunomide were analyzed from baseline and subsequent follow-up (FU) questionnaires completed by patients with either PsA or other spondyloarthritides (SpAs). Material and Methods A single-center real-life retrospective analysis was performed by obtaining clinical data via questionnaires administered before and after treatment. The indices used were erythrocyte sedimentation rate (ESR), C-reactive protein (CRP) level, Bath Ankylosing Spondylitis Disease Activity Index (BASDAI), Bath Ankylosing Spondylitis Function Index (BASFI), wellbeing (WB), and treatment effect (TxE). The indices measured at baseline were compared with those measured on one occasion in a FU visit at least 1 year later. Results A total of 73 patients, 51 with PsA (mean age 49.8±12.8 years; male-to-female ratio [M:F]=18:33) and 22 with other SpAs (mean age 50.6±16 years; M:F=2:20), were studied. BASDAI, BASFI, and WB displayed consistent improvements during FU assessments in both PsA patients and controls in comparison to baseline values. SSZ exhibited better efficacy as confirmed by TxE in both PsA patients and controls. ESR and CRP displayed no differences in either the PsA or the SpA group between the cases before and after treatment. Conclusion Real-life retrospective analysis of three DMARDs used in PsA (and SpAs other than PsA) demonstrated that all three DMARDs that were used brought about improvements in BASDAI, BASFI, TxE, and WB. However, the greatest improvements at FU were seen with SSZ use in both PsA and control cohorts. PMID:28293446
7. A comparison of discontinuation rates of tofacitinib and biologic disease-modifying anti-rheumatic drugs in rheumatoid arthritis: a systematic review and Bayesian network meta-analysis.
Science.gov (United States)
Park, Sun-Kyeong; Lee, Min-Young; Jang, Eun-Jin; Kim, Hye-Lin; Ha, Dong-Mun; Lee, Eui-Kyung
2017-01-01
The purpose of this study was to compare the discontinuation rates of tofacitinib and biologics (tumour necrosis factor inhibitors (TNFi), abatacept, rituximab, and tocilizumab) in rheumatoid arthritis (RA) patients considering inadequate responses (IRs) to previous treatment(s). Randomised controlled trials of tofacitinib and biologics - reporting at least one total discontinuation, discontinuation due to lack of efficacy (LOE), and discontinuation due to adverse events (AEs) - were identified through systematic review. The analyses were conducted for patients with IRs to conventional synthetic disease-modifying anti-rheumatic drugs (cDMARDs) and for patients with biologics-IR, separately. Bayesian network meta-analysis was used to estimate rate ratio (RR) of a biologic relative to tofacitinib with 95% credible interval (CrI), and probability of RR being tofacitinib and biologics in the cDMARDs-IR group. In the biologics-IR group, however, TNFi (RR 0.17, 95% CrI 0.01-3.61, P[RRtofacitinib did. Despite the difference, discontinuation cases owing to LOE and AEs revealed that tofacitinib was comparable to the biologics. The comparability of discontinuation rate between tofacitinib and biologics was different based on previous treatments and discontinuation reasons: LOE, AEs, and total (due to other reasons). Therefore, those factors need to be considered to decide the optimal treatment strategy.
8. Biologics or tofacitinib for rheumatoid arthritis in incomplete responders to methotrexate or other traditional disease-modifying anti-rheumatic drugs
DEFF Research Database (Denmark)
Singh, Jasvinder A; Hossain, Alomgir; Tanjong Ghogomu, Elizabeth
2016-01-01
, tocilizumab) and small molecule tofacitinib, versus comparator (MTX, DMARD, placebo (PL), or a combination) in adults with rheumatoid arthritis who have failed to respond to methotrexate (MTX) or other disease-modifying anti-rheumatic drugs (DMARDs), i.e., MTX/DMARD incomplete responders (MTX.......78)) were similarly inconclusive and downgraded to low quality for both imprecision and indirectness.Main results text shows the results for tofacitinib and differences between medications. AUTHORS' CONCLUSIONS: Based primarily on RCTs of 6 months' to 12 months' duration, there is moderate quality evidence...
9. Adverse drug reactions associated with the use of disease-modifying anti-rheumatic drugs in patients with rheumatoid arthritis
Directory of Open Access Journals (Sweden)
2014-12-01
10. Clinical utility of therapeutic drug monitoring in biological disease modifying anti-rheumatic drug treatment of rheumatic disorders: a systematic narrative review.
Science.gov (United States)
Van Herwaarden, Noortje; Van Den Bemt, Bart J F; Wientjes, Maike H M; Kramers, Cornelis; Den Broeder, Alfons A
2017-08-01
Biological disease-modifying anti-rheumatic drugs (bDMARDs) have improved the treatment outcomes of inflammatory rheumatic diseases, including rheumatoid arthritis and the spondyloarthropathies. Inter-individual variation exists in (maintenance of) response to bDMARDs. Therapeutic drug monitoring (TDM) of bDMARDs could potentially help in optimizing treatment for the individual patient. Areas covered: Evidence for the clinical utility of TDM in bDMARD treatment is reviewed. Different clinical scenarios are discussed, including: prediction of response after start of treatment, prediction of response to a next bDMARD in case of treatment failure of the first, prediction of successful dose reduction or discontinuation in case of low disease activity, prediction of response to dose escalation in case of active disease, and prediction of response to a bDMARD in case of a flare in disease activity. Expert opinion: The limited available evidence often does not report important outcomes for diagnostic studies, such as sensitivity and specificity. In most clinically relevant scenarios, the predictive value of serum (anti-)drug levels is absent; therefore, the use of TDM of bDMARDs cannot be advocated. Well-designed prospective studies should be done to further investigate the promising scenarios to determine the place of TDM in clinical practice.
11. Disease-modifying anti-rheumatic drug use in pregnant women with rheumatic diseases: a systematic review of the risk of congenital malformations.
Science.gov (United States)
Baldwin, Corisande; Avina-Zubieta, Antonio; Rai, Sharan K; Carruthers, Erin; De Vera, Mary A
2016-01-01
Despite the high incidence of rheumatic diseases during the reproductive years, little is known about the impact of disease-modifying anti-rheumatic drug (DMARD) use during pregnancy. Our objective was to systematically review and appraise evidence in women with rheumatic disease on the use of traditional and biologic DMARDs during pregnancy and the risk of congenital malformation outcomes. We conducted a systematic search of MEDLINE, EMBASE, and INTERNATIONAL PHARMACEUTICAL ABSTRACTS databases. Inclusion criteria were: 1) study sample including women with rheumatic disease; 2) use of traditional and/or biologic DMARDs during pregnancy; and 3) congenital malformation outcome(s) reported. We extracted information on study design, data source, number of exposed pregnancies, type of DMARD, number of live births, and number of congenital malformations. Altogether, we included 79 studies; the majority were based on designs that did not involve a comparison group, including 26 case reports, 17 case series, 20 cross-sectional studies, and 4 surveys. Studies that had a comparator group included 1 case control, 10 cohort studies, and 1 controlled trial. Hydroxychloroquine and azathioprine represent the most studied traditional DMARD exposures and, among biologics, most of the reports were on infliximab and etanercept. This is the first systematic review on the use of both traditional and biologic DMARDs during pregnancy among women with rheumatic diseases and congenital malformation outcomes, with a focus on study design and quality. Findings confirm the limited number of studies, as well as the need to improve study designs.
12. Long-Term Outcomes in Puerto Ricans with Rheumatoid Arthritis (RA) Receiving Early Treatment with Disease-Modifying Anti-Rheumatic Drugs using the American College of Rheumatology Definition of Early RA.
Science.gov (United States)
Varela-Rosario, Noemí; Arroyo-Ávila, Mariangelí; Fred-Jiménez, Ruth M; Díaz-Correa, Leyda M; Pérez-Ríos, Naydi; Rodríguez, Noelia; Ríos, Grissel; Vilá, Luis M
2017-01-01
Early treatment of rheumatoid arthritis (RA) results in better long-term outcomes. However, the optimal therapeutic window has not been clearly established. To determine the clinical outcome of Puerto Ricans with RA receiving early treatment with conventional and/or biologic disease-modifying anti-rheumatic drugs (DMARDs) based on the American College of Rheumatology (ACR) definition of early RA, a cross-sectional study was performed in a cohort of Puerto Ricans with RA. Demographic features, clinical manifestations, disease activity, functional status, and pharmacotherapy were determined. Early treatment was defined as the initiation of DMARDs (conventional and/or biologic) less than 6 months from the onset of symptoms attributable to RA. Patients who received early (<6 months) treatment were compared with those who did not. The mean disease duration was 14.9 years and 337 (87.0%) patients were women. One hundred and twenty-one (31.3%) patients received early treatment. In the multivariate analysis adjusted for age and sex, early treatment was associated with better functional status, lower probability of joint deformities, intra-articular injections and joint replacement surgeries, and lower scores in the physician's assessments of global health, functional impairment and physical damage of patients. Using the ACR definition of early RA, this group of patients treated with DMARDs within 6 months of disease onset had better long-term outcomes, with less physical damage and functional impairment.
13. Quantity and economic value of unused oral anti-cancer and biological disease-modifying anti-rheumatic drugs among outpatient pharmacy patients who discontinue therapy.
Science.gov (United States)
Bekker, C L; Melis, E J; Egberts, A C G; Bouvy, M L; Gardarsdottir, H; van den Bemt, B J F
2018-03-24
14. Biologic disease-modifying anti-rheumatic drugs and the risk of non-vertebral osteoporotic fractures in patients with rheumatoid arthritis aged 50 years and over.
Science.gov (United States)
Roussy, J-P; Bessette, L; Bernatsky, S; Rahme, E; Lachaine, J
2013-09-01
Prevention of bone mineral density loss in rheumatoid arthritis (RA) has been associated with use of biologic disease-modifying anti-rheumatic drugs (DMARDs). However, in this study, we could not demonstrate a reduction in the risk of non-vertebral fractures. Additional research is required to clarify the impact of biologic DMARDs on fracture risk in RA. Small studies have suggested biologic DMARDs preserve bone mineral density at 6-12 months. Our objective was to determine the association between biologic DMARD use and the risk of non-vertebral osteoporotic fractures in RA subjects aged ≥50 years. A nested case-control study was conducted using Quebec physician billing and hospital discharge data. RA subjects were identified from International Classification of Disease-9/10 codes in billing and hospitalisation data and followed from cohort entry until the earliest of non-vertebral osteoporotic fracture, death, or end of study period. Controls were matched to cases (4:1 ratio) on age, sex, and date of cohort entry. Biologic DMARD exposure was defined as being on treatment for ≥180 days pre-fracture (index). Conditional logistic regression was used, adjusting for indicators of RA severity, comorbidity, drugs influencing fracture risk, and measures of health care utilisation. Over the study period, 1,515 cases were identified (6,023 controls). The most frequent fracture site was hip/femur (42.3%). In total, 172 subjects (49 cases and 123 controls) were exposed to biologic DMARDs. The median duration of exposure was 735 (interquartile range (IQR), 564) and 645 (IQR, 903) days in cases and controls, respectively. We were unable to demonstrate an association between biologic DMARDs and fracture risk (odds ratio, 1.03; 95% confidence interval, 0.42-2.53). RA duration significantly increased the fracture risk. Despite the positive impact of biologic DMARDs on bone remodelling observed in small studies, we were unable to demonstrate a reduction in the risk of non-vertebral fractures.
15. Providing patients with information about disease-modifying anti-rheumatic drugs: Individually or in groups? A pilot randomized controlled trial comparing adherence and satisfaction.
Science.gov (United States)
Homer, Dawn; Nightingale, Peter; Jobanputra, Paresh
2009-06-01
16. Safety and effectiveness of tacrolimus add-on therapy for rheumatoid arthritis patients without an adequate response to biological disease-modifying anti-rheumatic drugs (DMARDs): Post-marketing surveillance in Japan.
Science.gov (United States)
Takeuchi, Tsutomu; Ishida, Kota; Shiraki, Katsuhisa; Yoshiyasu, Takashi
2018-01-01
Post-marketing surveillance (PMS) was conducted to assess the safety and effectiveness of tacrolimus (TAC) add-on therapy for patients with rheumatoid arthritis (RA) and an inadequate response to biological disease-modifying anti-rheumatic drugs (DMARDs). Patients with RA from 180 medical sites across Japan were registered centrally with an electronic investigation system. The observational period was 24 weeks from the first day of TAC administration concomitantly with biological DMARDs. Safety and effectiveness populations included 624 and 566 patients, respectively. Patients were predominantly female (81.1%), with a mean age of 61.9 years. Overall, 125 adverse drug reactions (ADRs) occurred in 94 patients (15.1%), and 15 serious ADRs occurred in 11 patients (1.8%). These incidences were lower compared with previously reported incidences after TAC treatment in PMS, and all of the observed ADRs were already known. A statistically significant improvement was observed in the primary effectiveness variable of Simplified Disease Activity Index after TAC treatment; 62.7% of patients achieved remission or low disease activity at week 24. TAC is well tolerated and effective when used as an add-on to biological DMARDs in Japanese patients with RA who do not achieve an adequate response to biological DMARDs in a real-world clinical setting.
17. Long term effectiveness of RA-1, a standardized Ayurvedic medicine as a monotherapy and in combination with disease modifying anti-rheumatic drugs in the treatment of rheumatoid arthritis.
Science.gov (United States)
Chopra, Arvind; Saluja, Manjit; Kianifard, Toktam; Chitre, Deepa; Venugopalan, Anuradha
2018-03-08
Data on the long-term use of Ayurvedic drugs are sparse. They may prove useful if combined with modern medicine in certain clinical situations (integrative medicine). We present the results of a long-term observational study of RA-1 (an Ayurvedic drug) used in the treatment of rheumatoid arthritis (RA). On completion of a 16-week randomized controlled study, 165 consenting volunteer patients were enrolled into a three-year open-label phase (OLP) study. Patients were symptomatic, with persistent active disease, and naïve for disease-modifying anti-rheumatic drugs (DMARDs). 57 patients were on fixed low-dose prednisone. Patients were examined every 10-14 weeks in a routine rheumatology practice using standard care norms. They continued RA-1 (Artrex ™, 2 tablets twice daily) throughout the study period and were generally advised to lead a healthy lifestyle. Based on clinical judgment, the rheumatologist added DMARDs and/or steroids (modified if already in use) for patients with inadequate response; chloroquine and/or methotrexate were commonly used. Treatment response was assessed using American College of Rheumatology (ACR) efficacy measures and the ACR 20% improvement index. Standard updated statistical software (SAS and SPSS) was used; significant at p < 0.05. […] © Ayurveda Foundation. Published by Elsevier B.V. All rights reserved.
18. Increased baseline RUNX2, caspase 3 and p21 gene expressions in the peripheral blood of disease-modifying anti-rheumatic drug-naïve rheumatoid arthritis patients are associated with improved clinical response to methotrexate therapy.
Science.gov (United States)
Tchetina, Elena V; Demidova, Natalia V; Markova, Galina A; Taskina, Elena A; Glukhova, Svetlana I; Karateev, Dmitry E
2017-10-01
To investigate the potential of the baseline gene expression in the whole blood of disease-modifying anti-rheumatic drug-naïve rheumatoid arthritis (RA) patients for predicting the response to methotrexate (MTX) treatment. Twenty-six control subjects and 40 RA patients were examined. Clinical, immunological and radiographic parameters were assessed before and after 24 months of follow-up. The gene expressions in the whole blood were measured using real-time reverse transcription polymerase chain reaction. The protein concentrations in peripheral blood mononuclear cells were quantified using enzyme-linked immunosorbent assay. Receiver operating characteristic curve analyses were used to suggest thresholds that were associated with the prediction of the response. Decreases in the disease activity at the end of the study were accompanied by significant increases in joint space narrowing score (JSN). Positive correlations between the expressions of the Unc-51-like kinase 1 (ULK1) and matrix metalloproteinase 9 (MMP-9) genes with the level of C-reactive protein and MMP-9 expression with Disease Activity Score of 28 joints (DAS28) and swollen joint count were noted at baseline. The baseline tumor necrosis factor (TNF)α gene expression was positively correlated with JSN at the end of the follow-up, whereas p21, caspase 3, and runt-related transcription factor (RUNX)2 were correlated with the ΔDAS28 values. Our results suggest that the expressions of MMP-9 and ULK1 might be associated with disease activity. Increased baseline gene expressions of RUNX2, p21 and caspase 3 in the peripheral blood might predict better responses to MTX therapy. © 2017 Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
19. Tofacitinib with conventional synthetic disease-modifying antirheumatic drugs in Chinese patients with rheumatoid arthritis: Patient-reported outcomes from a Phase 3 randomized controlled trial.
Science.gov (United States)
Li, Zhanguo; An, Yuan; Su, Houheng; Li, Xiangpei; Xu, Jianhua; Zheng, Yi; Li, Guiye; Kwok, Kenneth; Wang, Lisy; Wu, Qizhe
2018-02-01
Tofacitinib is an oral Janus kinase inhibitor for the treatment of rheumatoid arthritis (RA). We assess the effect of tofacitinib + conventional synthetic disease-modifying anti-rheumatic drugs (csDMARDs) on patient-reported outcomes in Chinese patients with RA and inadequate response to DMARDs. This analysis of data from the Phase 3 study ORAL Sync included Chinese patients randomized 4 : 4 : 1 : 1 to receive tofacitinib 5 mg twice daily, tofacitinib 10 mg twice daily, placebo→tofacitinib 5 mg twice daily, or placebo→tofacitinib 10 mg twice daily, with csDMARDs. Placebo non-responders switched to tofacitinib at 3 months; the remaining placebo patients switched at 6 months. Least squares mean changes from baseline were reported for Health Assessment Questionnaire-Disability Index (HAQ-DI), patient assessment of arthritis pain (Pain), patient global assessment of disease activity (PtGA), physician global assessment of disease activity (PGA), Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F) scores, Short Form 36 (SF-36), and Work Limitations Questionnaire (WLQ), using a mixed-effects model for repeated measures. Overall, 216 patients were included (tofacitinib 5 mg twice daily, n = 86; tofacitinib 10 mg twice daily, n = 86; placebo→tofacitinib 5 mg twice daily, n = 22; placebo→tofacitinib 10 mg twice daily, n = 22). At month 3, tofacitinib elicited significant improvements in HAQ-DI, Pain, PtGA, PGA and SF-36 Physical Component Summary scores. Improvements were generally maintained through 12 months. Tofacitinib 5 and 10 mg twice daily + csDMARDs resulted in improvements in health-related quality of life, physical function and Pain through 12 months in Chinese patients with RA. © 2018 The Authors. International Journal of Rheumatic Diseases published by Asia Pacific League of Associations for Rheumatology and John Wiley & Sons Australia, Ltd.
20. Aggressive combination therapy with intra-articular glucocorticoid injections and conventional disease-modifying anti-rheumatic drugs in early rheumatoid arthritis: second-year clinical and radiographic results from the CIMESTRA study
DEFF Research Database (Denmark)
Hetland, M.L.; Stengaard-Pedersen, K.; Junker, P.
2008-01-01
…intra-articular corticosteroid. This paper presents the results of the second year of the randomised, controlled double-blind CIMESTRA (Ciclosporine, Methotrexate, Steroid in RA) study. METHODS: 160 patients with early RA (duration <6 months) …
1. A short history of anti-rheumatic therapy - VI. Rheumatoid arthritis drugs
Directory of Open Access Journals (Sweden)
G. Pasero
2011-09-01
The treatment of rheumatoid arthritis traditionally includes symptomatic drugs, showing a prompt action on pain and inflammation but without any influence on disease progression, and other drugs that can modify the disease course and occasionally induce clinical remission (DMARDs, or disease-modifying anti-rheumatic drugs). This review describes the historical steps that led to the use of the main DMARDs in rheumatoid arthritis, such as gold salts, sulphasalazine, chloroquine and hydroxychloroquine, D-penicillamine, and other immunoactive drugs, including methotrexate, azathioprine, cyclosporin and leflunomide. The historical evolution of the use of these drugs is then discussed, including the strategy of progressive (“therapeutic pyramid”) or more aggressive treatment through the simultaneous use of two or more DMARDs (“combination therapy”).
2. [Anti-rheumatic therapy in patients with rheumatoid arthritis undergoing hemodialysis].
Science.gov (United States)
Akiyama, Yuji
2011-01-01
The number of hemodialysis (HD) patients has been increasing recently. Some rheumatoid arthritis (RA) patients need HD, though the proportion is not high. At present, such patients are mostly treated with corticosteroids and/or nonsteroidal anti-inflammatory drugs alone, even if they have high disease activity that would require disease-modifying anti-rheumatic drug (DMARD) therapy, partly because the safety of DMARDs in RA patients with end-stage renal disease has not been confirmed. Their joint destruction would be inevitable and would lead to impaired activities of daily living. As there are no guidelines for the use of DMARDs in HD patients, here I review previous reports on DMARD treatment, including biologics, for patients with RA undergoing HD.
3. Anti-Rheumatic Potential of Pakistani Medicinal Plants: A Review
International Nuclear Information System (INIS)
Kamal, M.; Adnan, M.; Murad, W.; Tariq, A.; Bibi, H.; Rahman, H.; Shinwari, Z. K.
2016-01-01
The present review aimed to provide comprehensive documentation of plants used as anti-rheumatic ethnomedicines in Pakistan and to suggest future recommendations. Data on anti-rheumatic plants were collected from published scientific papers, reports and theses using online search engines such as Google Scholar, PubMed and Science Direct. Five distinct zones in the country were classified on the basis of geography, humidity and rainfall. We used the Sorenson similarity index for plants and their parts used between different zones. A total of 137 anti-rheumatic plant species representing 55 families and 104 genera are used in Pakistan. Herbs (87 plants) were the primary source of anti-rheumatic medicinal plants, while leaves (22% of plant species) were the most frequently used part in the preparation of ethnomedicinal recipes. The highest number of medicinal plant species (52) was found in Zone A, which has high mountains and a cold climate and where the prevalence of rheumatism is more common. Solanum surattense was found to be of highest conservation concern, as it is used against rheumatism in 13 different areas. Results of the Sorenson index revealed a similarity of plants and their parts used between different zones. In conclusion, geography and climate play an important role in rheumatic disease. Pakistan has a number of anti-rheumatic plants that are used by local populations through traditional knowledge. Moreover, inter-zonal similarities among plants and their parts used indicate higher pharmacological potency of these medicinal plants. Further, the review also provides insight into the conservation status of the reported plants. (author)
4. Disease-modifying anti-rheumatic drugs til behandling af ankyloserende spondylitis
DEFF Research Database (Denmark)
2009-01-01
Ankylosing spondylitis (AS) is an inflammatory disorder affecting the axial skeleton, peripheral joints, entheses and extra-articular sites. Patients with early disease, a higher level of erythrocyte sedimentation rate and/or peripheral arthritis might benefit from sulfasalazine. Otherwise...
5. The Impact of Conventional and Biological Disease Modifying Antirheumatic Drugs on Bone Biology. Rheumatoid Arthritis as a Case Study.
Science.gov (United States)
Barreira, Sofia Carvalho; Fonseca, João Eurico
2016-08-01
The bone and the immune system have a very tight interaction. Systemic immune-mediated inflammatory diseases, such as rheumatoid arthritis (RA), induce bone loss, leading to a twofold increase in osteoporosis and an increase of fragility fracture risk of 1.35-2.13 times. This review focuses on the effects of conventional and biological disease modifying antirheumatic drugs (DMARDs) on bone biology, in the context of systemic inflammation, with a focus on RA. Published evidence supports a decrease in osteoclastic activity induced by DMARDs, which leads to positive effects on bone mineral density (BMD). It is unknown if this effect could be translated into fracture risk reduction. The combination with antiosteoclastic drugs can have an additional benefit.
6. short history of anti-rheumatic therapy. IV. Corticosteroids
Directory of Open Access Journals (Sweden)
P. Marson
2011-06-01
In 1948 a corticosteroid compound was administered for the first time to a patient affected by rheumatoid arthritis by Philip Showalter Hench, a rheumatologist at the Mayo Clinic in Rochester, Minnesota (USA). Since 1929 he had been investigating the role of adrenal gland-derived substances in rheumatoid arthritis. For the discovery of cortisone and its applications in anti-rheumatic therapy, Hench, along with Edward Calvin Kendall and Tadeusz Reichstein, won the 1950 Nobel Prize for Medicine. In this review we summarize the main stages that led to the identification of the so-called compound E, which was used by Hench. We also consider the subsequent development of steroid therapy in rheumatic diseases, through the introduction of new molecules with fewer mineralocorticoid effects, such as prednisone and, more recently, deflazacort.
7. Kynurenic acid content in anti-rheumatic herbs.
Science.gov (United States)
Zgrajka, Wojciech; Turska, Monika; Rajtar, Grażyna; Majdan, Maria; Parada-Turska, Jolanta
2013-01-01
The use of herbal medicines is common among people living in rural areas and increasingly popular in urbanized countries. Kynurenic acid (KYNA) is a metabolite of kynurenine possessing anti-inflammatory, anti-oxidative and pain-relieving properties. Previous data indicated that the content of KYNA in the synovial fluid of patients with rheumatoid arthritis is lower than in patients with osteoarthritis. Rheumatoid arthritis is a chronic, systemic inflammatory disorder affecting about 1% of the world's population. The aim of the present study was to investigate the content of KYNA in 11 herbal preparations used in rheumatic diseases. The following herbs were studied: bean pericarp, birch leaf, dandelion root, elder flower, horsetail herb, nettle leaf, peppermint leaf and willow bark. An anti-rheumatic mixture of the herbs Reumatefix and Reumaflos tea were also investigated. The herbs were prepared according to the producers' directions. In addition, the herbal supplement Devil's Claw, containing root of Harpagophytum, was used. KYNA content was measured using the high-performance liquid chromatography method, and KYNA was detected fluorometrically. KYNA was found in all studied herbal preparations. The highest content of KYNA was found in peppermint, nettle, birch leaf and the horsetail herb. The lowest content of KYNA was found in willow bark, dandelion root and in the extract from the root of Harpagophytum. These findings indicate that the use of herbal preparations containing a high level of KYNA can be considered as a supplementary measure in rheumatoid arthritis therapy, as well as in rheumatic disease prevention.
8. A short history of anti-rheumatic therapy. II. Aspirin
Directory of Open Access Journals (Sweden)
P. Marson
2011-06-01
The discovery of aspirin, an antipyretic, anti-inflammatory and analgesic drug, undoubtedly represents a milestone in the history of medical therapy. Since ancient times the derivatives of willow (Salix alba) were used to treat a variety of fevers and pain syndromes, although the first report dates back to 1763, when the English Reverend Edward Stone described the effect of an extract of willow bark in treating malaria. In the XIX century many apothecaries and chemists, including the Italians Raffaele Piria and Cesare Bertagnini, developed the biological processes of extraction and chemical synthesis of salicylates, and then analyzed their therapeutic properties and pharmacokinetic and pharmacodynamic characteristics. In 1899 the Bayer Company, where Felix Hoffmann, Heinrich Dreser and Arthur Eichengrün worked, registered acetyl-salicylic acid under the name “Aspirin”. In the XX century, besides the correct applications of aspirin in anti-rheumatic therapy being defined, Lawrence L. Craven identified the property of this drug as an anti-platelet agent, thus opening the way for more widespread use in cardiovascular diseases.
9. Efficacy and safety of tofacitinib following inadequate response to conventional synthetic or biological disease-modifying antirheumatic drugs.
Science.gov (United States)
Charles-Schoeman, Christina; Burmester, Gerd; Nash, Peter; Zerbini, Cristiano A F; Soma, Koshika; Kwok, Kenneth; Hendrikx, Thijs; Bananis, Eustratios; Fleischmann, Roy
2016-07-01
Biological disease-modifying antirheumatic drugs (bDMARDs) have shown diminished clinical response following an inadequate response (IR) to ≥1 previous bDMARD. Here, tofacitinib was compared with placebo in patients with an IR to conventional synthetic DMARDs (csDMARDs; bDMARD-naive) and in patients with an IR to bDMARDs (bDMARD-IR). Data were taken from phase II and phase III studies of tofacitinib in patients with rheumatoid arthritis (RA). Patients received tofacitinib 5 or 10 mg twice daily, or placebo, as monotherapy or with background methotrexate or other csDMARDs. Efficacy endpoints and incidence rates of adverse events (AEs) of special interest were assessed. 2812 bDMARD-naive and 705 bDMARD-IR patients were analysed. Baseline demographics and disease characteristics were generally similar between treatment groups within subpopulations. Across subpopulations, improvements in efficacy parameters at month 3 were generally significantly greater for both tofacitinib doses versus placebo. Clinical response was numerically greater with bDMARD-naive versus bDMARD-IR patients (overlapping 95% CIs). Rates of safety events of special interest were generally similar between tofacitinib doses and subpopulations; however, patients receiving glucocorticoids had more serious AEs, discontinuations due to AEs, serious infection events and herpes zoster. Numerically greater clinical responses and incidence rates of AEs of special interest were generally reported for tofacitinib 10 mg twice daily versus tofacitinib 5 mg twice daily (overlapping 95% CIs). Tofacitinib demonstrated efficacy in both bDMARD-naive and bDMARD-IR patients with RA. Clinical response to tofacitinib was generally numerically greater in bDMARD-naive than bDMARD-IR patients. The safety profile appeared similar between subpopulations. (NCT00413660, NCT00550446, NCT00603512, NCT00687193, NCT00960440, NCT00847613, NCT00814307, NCT00856544, NCT00853385). Published by the BMJ Publishing Group Limited
10. The effect of newer anti-rheumatic drugs on osteogenic cell proliferation: an in-vitro study
Directory of Open Access Journals (Sweden)
Laing Patrick
2009-05-01
Abstract Background Disease-modifying anti-rheumatic drugs (DMARDs) may interfere with bone healing. Previous studies give conflicting advice regarding discontinuation of these drugs in the peri-operative setting. No consensus exists in current practice, especially with the newer DMARDs such as leflunomide, etanercept, and infliximab. The aim of this study was to assess the in-vitro effect of these drugs, alone and in relevant clinical combinations, on osteoblast activity. Methods Osteoblasts were cultured from femoral heads obtained from five young, otherwise healthy patients undergoing total hip replacement. The cells were cultured using techniques that have been previously described. A full factorial design was used to set up the experiment on samples obtained from the five donors. Normal therapeutic concentrations of the various DMARDs were added, alone and in combination, to the media. Cell proliferation was estimated after two weeks by a spectrophotometric technique using the Roche Cell Proliferation Kit. Multilevel regression analysis was used to estimate which drugs or combinations of drugs significantly affected cell proliferation. Results Infliximab and leflunomide had an overall significant inhibitory effect (p < 0.05). Conclusion Our study indicates that in-vitro osteoblast proliferation can be inhibited by the presence of certain DMARDs. Combinations of drugs had an influence and could negate the action of a drug on osteoblast proliferation. The response to drugs may be donor-dependent.
11. Radiographic assessment of disease progression in rheumatoid arthritis patients undergoing early disease-modifying anti-rheumatic drug treatment
International Nuclear Information System (INIS)
Wick, M.C.
2002-04-01
Rheumatoid arthritis (RA) is a common systemic disease predominantly involving the joints. Since the pathogenesis, etiology and pathophysiological mechanisms of RA have only been partially elucidated, a definitive therapy has not been established. Precise diagnosis and follow-up therapy requires objective quantification, and radiological analyses are considered to be the most appropriate method. The aim of this study was to retrospectively determine the time-dependent progression of joint damage in patients with pharmacologically-treated RA, and to determine which therapeutic agents demonstrate the highest efficacy. Outpatient records, laboratory values, therapy schemes and radiographs from hands and feet of 150 RA patients were collected, analyzed and statistically evaluated. Radiographs were quantified using the Larsen score and supportively using the 'RheumaCoach-Rheumatology' computer software. Our observations reveal that radiologically-detectable damage is most pronounced during the first year of disease, while mitigated and generally progressing linearly thereafter. Overall Larsen scores linearly increased from year 0 to 10 (r=0.853), during which the mean Larsen score increased 7.93 ± 0.76 per year. During the first year, RA progression was similar regardless of the medication administered (gold-compounds, AU; chloroquine, CQ; methotrexate, MTX; sulfasalazine SSZ). While MTX and CQ treatment showed no difference when examined as mean 5-year increment of Larsen score, AU and SSZ showed up to 3 fold higher RA progression compared with MTX. The Larsen score in year 1 did not correlate with that of years 2 to 5. In contrast, Larsen scores in year 2 were linearly related to each of the subsequent 3 years. Despite similar ESR values in various medication groups, cumulative ESR correlated with RA progression, and its reduction with therapeutic efficacy. 
In conclusion, this study found that (i) early DMARD-treated RA progressed more rapidly during the first year than in subsequent years; (ii) retrospective assessment using the original Larsen score showed a linear increase during the first ten years after diagnosis; (iii) therapy with methotrexate and chloroquine yielded equal results, and both were superior to sulfasalazine or oral gold compounds; (iv) cumulative ESR was effective in evaluating RA progression; and (v) radiographs of the hands and feet were most useful for predicting RA progression when assessed with the original Larsen score at baseline and after one and two years. (author)
12. Tofacitinib in Combination With Conventional Disease-Modifying Antirheumatic Drugs in Patients With Active Rheumatoid Arthritis: Patient-Reported Outcomes From a Phase III Randomized Controlled Trial.
Science.gov (United States)
Strand, Vibeke; Kremer, Joel M; Gruben, David; Krishnaswami, Sriram; Zwillich, Samuel H; Wallenstein, Gene V
2017-04-01
Tofacitinib is an oral Janus kinase inhibitor for the treatment of rheumatoid arthritis (RA). We compared patient-reported outcomes (PROs) in patients with RA treated with tofacitinib or placebo in combination with conventional disease-modifying antirheumatic drugs (DMARDs). In a 12-month, phase III randomized controlled trial (ORAL Sync), patients (n = 795) with active RA and previous inadequate response to therapy with ≥1 conventional or biologic DMARD were randomized 4:4:1:1 to tofacitinib 5 mg twice daily (BID), tofacitinib 10 mg BID, placebo advanced to 5 mg BID, or placebo to 10 mg BID, in combination with stable background DMARD therapy. PROs included patient global assessment of arthritis (PtGA), patient assessment of arthritis pain (Pain), physical function (Health Assessment Questionnaire disability index [HAQ DI]), health-related quality of life (Short Form 36 health survey [SF-36]), fatigue (Functional Assessment of Chronic Illness Therapy-Fatigue [FACIT-F]), and sleep (Medical Outcomes Study Sleep [MOS Sleep]). At month 3, statistically significant improvements from baseline versus placebo were reported in PtGA, Pain, HAQ DI, all 8 SF-36 domains, FACIT-F, and MOS Sleep with tofacitinib 10 mg BID, and in PtGA, Pain, HAQ DI, 7 SF-36 domains, FACIT-F, and MOS Sleep with tofacitinib 5 mg BID. Improvements were sustained to month 12. Significantly more tofacitinib-treated patients reported improvements of greater than or equal to the minimum clinically important differences at month 3 versus placebo in all PROs, except the SF-36 role-emotional domain (significant for tofacitinib 10 mg BID). Patients with active RA treated with tofacitinib combined with background conventional DMARD therapy reported sustained, significant, and clinically meaningful improvements in PROs versus placebo. © 2016, The Authors. Arthritis Care & Research published by Wiley Periodicals, Inc. on behalf of American College of Rheumatology.
13. A comparative study of renal dysfunction in patients with inflammatory arthropathies: strong association with cardiovascular diseases and not with anti-rheumatic therapies, inflammatory markers or duration of arthritis.
LENUS (Irish Health Repository)
2012-02-01
AIMS: The aim of this study was to investigate the prevalence of chronic kidney disease (CKD) among comparable patients with rheumatoid arthritis (RA) and seronegative inflammatory arthritis, and to explore any predictive factors for renal impairment. METHODS: Consecutive patients with peripheral joint disease (oligo and polyarthritis) were recruited from our inflammatory arthritis clinics. We divided patients in two groups: RA group and seronegative inflammatory arthritis group. The cohort consisted of 183 patients (RA = 107, seronegative arthritis = 76 [psoriatic arthritis = 69, undifferentiated oligoarthritis = 7]). Estimated glomerular filtration rate (eGFR) was calculated using the established Modification of Diet in Renal Disease equation. Demographic details, disease-specific characteristics, anti-rheumatic drugs and the presence of cardiovascular diseases were recorded. RESULTS: In total, 17.48% (n = 32) of the cohort had CKD. There was no statistically significant variation between the two groups as regards baseline demographics, disease characteristics, use of anti-rheumatic drugs and the presence of individual cardiovascular diseases. We found that eGFR and the presence of CKD were similar among these groups. Among patients with CKD, 72% had undiagnosed CKD. No association of statistical significance was noted between CKD and the use of corticosteroids, disease-modifying antirheumatic drugs and anti-tumor necrosis factor agents. The association of cardiovascular diseases with CKD remained significant after adjusting for confounders (age, gender, duration of arthritis, high C-reactive protein, use of anti-rheumatic drugs). CONCLUSIONS: Patients with inflammatory arthritis are more prone to have CKD. This could have serious implications, as the majority of rheumatology patients use non-steroidal anti-inflammatory drugs and different immunosuppressives, such as methotrexate. No association of kidney dysfunction was noted with inflammatory disease
14. Efficacy of conventional synthetic disease-modifying antirheumatic drugs, glucocorticoids and tofacitinib: a systematic literature review informing the 2013 update of the EULAR recommendations for management of rheumatoid arthritis
NARCIS (Netherlands)
Gaujoux-Viala, Cécile; Nam, Jackie; Ramiro, Sofia; Landewé, Robert; Buch, Maya H.; Smolen, Josef S.; Gossec, Laure
2014-01-01
To update a previous systematic review assessing the efficacy of conventional synthetic disease-modifying antirheumatic drugs (csDMARDs) in rheumatoid arthritis (RA). Two systematic reviews of the literature using PubMed, Embase and the Cochrane library were performed from 2009 until January 2013 to
15. Efficacy of glucocorticoids, conventional and targeted synthetic disease-modifying antirheumatic drugs: a systematic literature review informing the 2016 update of the EULAR recommendations for the management of rheumatoid arthritis
NARCIS (Netherlands)
Chatzidionysiou, Katerina; Emamikia, Sharzad; Nam, Jackie; Ramiro, Sofia; Smolen, Josef; van der Heijde, Désirée; Dougados, Maxime; Bijlsma, Johannes; Burmester, Gerd; Scholte, Marieke; van Vollenhoven, Ronald; Landewé, Robert
2017-01-01
To perform a systematic literature review (SLR) informing the 2016 update of the recommendations for the management of rheumatoid arthritis (RA). An SLR for the period between 2013 and 2016 was undertaken to assess the efficacy of glucocorticoids (GCs), conventional synthetic disease-modifying
16. Simultaneous Response in Several Domains in Patients with Psoriatic Disease Treated with Etanercept as Monotherapy or in Combination with Conventional Synthetic Disease-modifying Antirheumatic Drugs.
Science.gov (United States)
Behrens, Frank; Meier, Lothar; Prinz, Jörg C; Jobst, Jürgen; Lippe, Ralph; Löschmann, Peter-Andreas; Lorenz, Hanns-Martin
2018-04-01
To evaluate patients with psoriatic arthritis (PsA) receiving etanercept (ETN) monotherapy or ETN plus conventional synthetic disease-modifying antirheumatic drugs (csDMARD) to determine the proportion achieving a clinically meaningful response in arthritis, psoriasis, and quality of life simultaneously. A prospective, multicenter, 52-week observational study in patients with active PsA evaluated treatment with ETN in clinical practice (ClinicalTrials.gov: NCT00293722). This analysis assessed simultaneous achievement of 3 treatment targets: low disease activity (LDA) based on 28-joint count Disease Activity Score (DAS28); body surface area (BSA) involvement ≤ 3%; and a score > 45 on the Medical Outcomes Study Short Form-12 (SF-12) physical component summary. Of 579 patients, 380 received ETN monotherapy and 199 received combination ETN plus csDMARD. At 52 weeks, data for all 3 disease domains were available for 251 patients receiving monotherapy and 151 receiving combination therapy. In the monotherapy and combination therapy groups, 61 (24.3%) and 37 (24.5%) patients, respectively, achieved all 3 treatment targets simultaneously. A significantly greater proportion of patients receiving monotherapy versus combination therapy achieved SF-12 > 45 (43.0% vs 31.8%; p < 0.05) and DAS28 LDA (72.5% vs 62.3%; p < 0.05). Conversely, BSA ≤ 3% was reached by a significantly greater proportion receiving combination therapy (75.5% vs 56.6%; p < 0.001). However, baseline BSA involvement was higher for the monotherapy group. While nearly half the patients achieved arthritis and psoriasis treatment targets simultaneously and one-fourth reached all 3 treatment targets, combining ETN and csDMARD did not substantially improve clinical response compared with ETN monotherapy in this real-world PsA patient population.
17. Baricitinib in Patients with Rheumatoid Arthritis and an Inadequate Response to Conventional Disease-Modifying Antirheumatic Drugs in United States and Rest of World: A Subset Analysis.
Science.gov (United States)
Wells, Alvin F; Greenwald, Maria; Bradley, John D; Alam, Jahangir; Arora, Vipin; Kartman, Cynthia E
2018-06-01
This article evaluates the efficacy and safety of baricitinib 4 mg versus placebo in United States including Puerto Rico (US) and rest of the world (ROW) subpopulations using data pooled from RA-BEAM and RA-BUILD, which enrolled patients with moderate-to-severe adult-onset rheumatoid arthritis (RA). In RA-BEAM, patients with an inadequate response (IR) to methotrexate, at least one X-ray erosion, and high sensitivity C-reactive protein (hsCRP) ≥ 6 mg/L were randomized to placebo or orally administered baricitinib 4 mg daily or subcutaneously administered adalimumab 40 mg every other week. In RA-BUILD, patients with an IR to at least one conventional synthetic disease-modifying antirheumatic drug (csDMARD) and with hsCRP ≥ 3.6 mg/L were randomized to placebo or baricitinib 2 or 4 mg daily. Patients in both trials were biologic naive. In this post hoc analysis, data from both studies were pooled (714 baricitinib 4 mg-treated, 716 placebo-treated patients). Overall, 188 US and 1242 ROW patients were included. Subgroups differed in baseline characteristics including race, weight, age, time since RA diagnosis, current corticosteroid use, and previous csDMARD use. At weeks 12 and 24, baricitinib-treated patients had larger responses compared to placebo-treated patients for multiple efficacy outcomes: American College of Rheumatology 20/50/70 response, low disease activity, remission, Disease Activity Score 28-C-reactive protein, and Health Assessment Questionnaire-Disability Index. Overall, similar efficacy was observed in US and ROW subgroups with no notable safety differences between subgroups at weeks 12 or 24. Baricitinib 4 mg was efficacious compared to placebo in US and ROW subpopulations. Safety was similar between subgroups. Eli Lilly & Company and Incyte Corporation. ClinicalTrials.gov identifiers, NCT01721057; NCT01710358.
18. The Impact of Low-Dose Disease-modifying Anti-rheumatics Drugs (DMARDs) on Bone Mineral Density of Premenopausal Women in Early Rheumatoid Arthritis.
Science.gov (United States)
Rexhepi, Sylejman; Rexhepi, Mjellma; Sahatçiu-Meka, Vjollca; Mahmutaj, Vigan; Boshnjaku, Shkumbin
2016-04-01
Rheumatoid arthritis (RA) is a chronic inflammatory disease characterized by symmetrical polyarthritis and multisystemic involvement. The aim of this study was to assess the impact of low-dose methotrexate on bone mineral density (BMD) in patients with early rheumatoid arthritis (RA). This was a retrospective study involving 60 female patients with early-onset RA diagnosed according to the American Rheumatism Association criteria (ACR/EULAR 2010). The patients were divided into two groups: group I comprised thirty patients treated with 7.5 mg/weekly methotrexate (MTX), while group II included thirty patients treated with 2 g/daily sulfasalazine (SSZ). Disease activity was measured by a combination of erythrocyte sedimentation rate (ESR) and the Disease Activity Score (DAS-28). Bone mineral density of the lumbar spine (L2-4) and femoral neck was measured by dual-energy X-ray absorptiometry (DEXA) (Stratos 800). In this study, we found no negative effect on BMD in RA patients treated with low-dose MTX in comparison to patients treated with SSZ. No significant difference was observed in BMD of the lumbar spine, femoral neck or trochanter between MTX and SSZ patients, either in the pretreatment phase or after 12 months of treatment, and there was no significant change in the biochemical parameters of either group. Based on the results of our study, low-dose methotrexate has no negative effect on BMD in premenopausal RA patients. We believe that these results might provide new insights and that further longitudinal studies with larger groups of premenopausal RA patients are required.
19. Analysis of the data on pregnancy and lactation provided by patient information leaflets of anti-rheumatic drugs in Argentina.
Science.gov (United States)
Sabando, Miguel Ormaza; Saavedra, Maira Arias; Sequeira, Gabriel; Kerzberg, Eduardo
2018-04-01
To analyse the level of consistency and updating of the information on pregnancy and lactation provided by patient information leaflets (PILs) of the antirheumatic drugs approved in Argentina. Inconsistencies between the 2016 EULAR Task Force recommendations on the use of anti-rheumatic drugs during pregnancy and lactation and the information provided by PILs of the same drugs approved in Argentina were analysed along with inconsistencies within the PILs of different registered trademarks of these drugs. Eighty-eight PILs of 32 drugs were analysed. Out of the 88 PILs, 50% presented information inconsistencies as to pregnancy. Medications comprised in this group were: hydroxychloroquine, sulfasalazine, azathioprine, tacrolimus, cyclosporine, NSAIDs (during the first two trimesters), celecoxib, some glucocorticoids, colchicine, and some anti-TNF drugs (etanercept, adalimumab and infliximab) during part of the pregnancy. As for lactation, 56% had information inconsistencies. Medications encompassed in this group were: hydroxychloroquine, chloroquine, sulfasalazine, azathioprine, tacrolimus, cyclosporine, NSAIDs, celecoxib, meprednisone, prednisone, colchicine, and anti-TNF drugs. Out of 17 drugs that had more than one registered trademark, information inconsistencies on pregnancy were found in the PILs of sulfasalazine, diclofenac, ibuprofen and methylprednisolone. Concerning lactation, inconsistencies were present in the PILs of hydroxychloroquine, sulfasalazine, diclofenac, ibuprofen, meprednisone, and colchicine. At least half of the PILs of anti-rheumatic drugs analysed in this study had information inconsistencies on pregnancy and lactation. This is a serious state of affairs because the consensual decision-making process between patient and professional may be compromised, which, in turn, may give rise to medical-legal issues.
20. MS Disease-Modifying Medications
Science.gov (United States)
... disease-modifying therapies Approval: 2014 US; 2014 CAN Pregnancy Category C (see footnote, page 11) Rash, headache, fever, nasal congestion, nausea, urinary tract infection, fatigue, insomnia, upper respiratory tract infection, herpes viral ...
1. Rapid screening of non-steroidal anti-inflammatory drugs illegally added in anti-rheumatic herbal supplements and herbal remedies by portable ion mobility spectrometry.
Science.gov (United States)
Li, Mengjiao; Ma, Haiyan; Gao, Jinglin; Zhang, Lina; Wang, Xinyu; Liu, Di; Bian, Jing; Jiang, Ye
2017-10-25
In this work, for the first time, a high-performance ion mobility spectrometry with electrospray ionization (ESI-HPIMS) method has been employed as a rapid screening tool for the detection of acetaminophen, ibuprofen, naproxen, diclofenac sodium and indomethacin illegally added to anti-rheumatic herbal supplements and herbal remedies. Samples were dissolved and filtered through a 0.45 μm microporous membrane, and the filtrate was directly injected into the high-performance ion mobility spectrometer for analysis. Using this approach, the screening of illegal additions can be accomplished in as little as two to three minutes with no pretreatment required. The proposed method provided a LOD of 0.06-0.33 μg mL-1, as well as a good separation of the five NSAIDs. The precision of the method was 0.1-0.4% (repeatability, n=6) and 0.9-3.3% (reproducibility, n=3). The proposed method appeared to be simple, rapid and highly specific, and thus could be effective for the in-situ screening of NSAIDs in anti-rheumatic herbal supplements and herbal remedies. Copyright © 2017 Elsevier B.V. All rights reserved.
2. Use of anti-rheumatic drugs during pregnancy [O uso de drogas anti-reumáticas na gravidez]
Directory of Open Access Journals (Sweden)
Roger A. Levy
2005-06-01
...being studied for the prevention of the complete heart block of neonatal lupus syndrome. The use of prednisone and prednisolone is limited to the lowest effective dose; they do not reach the fetal circulation but can induce the well-known maternal side effects. Azathioprine and cyclosporine are used, when formally indicated, without apparent fetal risk. Methotrexate and leflunomide must be avoided at all costs, and treatment should be discontinued three months before attempting conception. All therapeutic decisions in pregnant patients must be individualized, with the risks and benefits weighed. The prescription of anti-rheumatic drugs in fertile patients should take into account the current knowledge about their effects on conception, pregnancy and lactation. Judicious advice and pregnancy planning are ideal when possible. With the incorporation of new substances and the constant appearance of recent data in the literature, this subject has to be continuously updated. The FDA risk factor rating is sometimes contradictory to our practice, in part because results from animal studies may not be directly applicable to humans. Biologic response modifiers seem to be safely used during pregnancy, since they are large molecules that are not capable of crossing the placenta. Non-steroidal anti-inflammatory drugs, including specific COX-2 inhibitors, may impair implantation of the ovum but can be used once pregnancy is under way; they should be avoided after 32 weeks, when they are associated with fetal complications. COX-2 inhibitors must be avoided due to the risk of renal malformation. Low-dose aspirin can be used safely during pregnancy. Low molecular weight heparins are preferred, since unfractionated heparins carry an increased risk of inducing thrombocytopenia and bleeding. Hydroxychloroquine is used, and in fact recommended, in lupus pregnancy, with benefit to patients and no fetal risk. Warfarin is teratogenic if given between the 6th and 9th gestational
3. Randomised controlled trial of tumour necrosis factor inhibitors against combination intensive therapy with conventional disease-modifying antirheumatic drugs in established rheumatoid arthritis: the TACIT trial and associated systematic reviews.
Science.gov (United States)
Scott, David L; Ibrahim, Fowzia; Farewell, Vern; O'Keeffe, Aidan G; Ma, Margaret; Walker, David; Heslin, Margaret; Patel, Anita; Kingsley, Gabrielle
2014-10-01
Rheumatoid arthritis (RA) is initially treated with methotrexate and other disease-modifying antirheumatic drugs (DMARDs). Active RA patients who fail such treatments can receive tumour necrosis factor inhibitors (TNFis), which are effective but expensive. We assessed whether or not combination DMARDs (cDMARDs) give equivalent clinical benefits at lower costs in RA patients eligible for TNFis. An open-label, 12-month, pragmatic, randomised, multicentre, two-arm trial [Tumour necrosis factor inhibitors Against Combination Intensive Therapy (TACIT)] compared these treatment strategies. We then systematically reviewed all comparable published trials. The TACIT trial involved 24 English rheumatology clinics. Active RA patients eligible for TNFis. The TACIT trial compared cDMARDs with TNFis plus methotrexate or another DMARD; 6-month non-responders received (a) TNFis if in the cDMARD group; and (b) a second TNFi if in the TNFi group. The Health Assessment Questionnaire (HAQ) was the primary outcome measure. The European Quality of Life-5 Dimensions (EQ-5D), joint damage, Disease Activity Score for 28 Joints (DAS28), withdrawals and adverse effects were secondary outcome measures. Economic evaluation linked costs, HAQ changes and quality-adjusted life-years (QALYs). In total, 432 patients were screened; 104 started on cDMARDs and 101 started on TNFis. The initial demographic and disease assessments were similar between the groups. In total, 16 patients were lost to follow-up (nine in the cDMARD group, seven in the TNFi group) and 42 discontinued their intervention but were followed up (23 in the cDMARD group and 19 in the TNFi group). Intention-to-treat analysis with multiple imputation methods used for missing data showed greater 12-month HAQ score reductions with initial cDMARDs than with initial TNFis [adjusted linear regression coefficient 0.15, 95% confidence interval (CI) -0.003 to 0.31; p = 0.046]. Increases in 12-month EQ-5D scores were greater with initial c
4. Effect of sanhuangwuji powder, anti-rheumatic drugs, and ginger-partitioned acupoint stimulation on the treatment of rheumatoid arthritis with peptic ulcer: a randomized controlled study.
Science.gov (United States)
Liu, Defang; Guo, Mingyang; Hu, Yonghe; Liu, Taihua; Yan, Jiao; Luo, Yong; Yun, Mingdong; Yang, Min; Zhang, Jun; Guo, Linglin
2015-06-01
To observe the efficacy and safety of oral sanhuangwuji powder, anti-rheumatic drugs (ARDs), and ginger-partitioned acupoint stimulation at zusanli (ST 36) in the treatment of rheumatoid arthritis (RA) complicated by peptic ulcer. This prospective randomized controlled study included 180 eligible inpatients and outpatients randomly assigned to an ARD treatment (n = 60), ginger-partitioned stimulation (n = 60), or combination treatment (n = 60) group. Patients assigned to the ARD group were given oral celecoxib, methotrexate, and esomeprazole. Patients assigned to the ginger-partitioned stimulation group were given ginger-partitioned acupoint stimulation at zusanli (ST 36) in addition to the ARDs. Patients in the combination treatment group were given oral sanhuangwuji powder, ginger-partitioned acupoint stimulation at zusanli (ST 36), and ARDs. All patients were followed up for 2 months to evaluate clinical effects and safety. The study was registered in the World Health Organization database at the General Hospital of Chengdu Military Area Command Chinese People's Liberation Army (ChiCTR-TCC12002824). The combination treatment group had significantly greater improvements in RA symptoms, laboratory outcomes, and gastrointestinal symptom scores compared with the other groups, including the ginger-partitioned stimulation group (χ2 = 6.171). Combined ginger-partitioned acupoint stimulation at zusanli (ST 36), oral sanhuangwuji powder, and ARDs had a better clinical effect for RA complicated by peptic ulcer, compared with ARD treatment alone or in combination with ginger-partitioned acupoint stimulation.
5. Disease-modifying drugs in Alzheimer's disease
Directory of Open Access Journals (Sweden)
Ghezzi L
2013-12-01
Laura Ghezzi, Elio Scarpini, Daniela Galimberti Neurology Unit, Department of Pathophysiology and Transplantation, University of Milan, Fondazione Cà Granda, IRCCS Ospedale Maggiore Policlinico, Milan, Italy Abstract: Alzheimer's disease (AD) is an age-dependent neurodegenerative disorder and the most common cause of dementia. The early stages of AD are characterized by short-term memory loss. Once the disease progresses, patients experience difficulties in sense of direction, oral communication, calculation, ability to learn, and cognitive thinking. The median duration of the disease is 10 years. The pathology is characterized by deposition of amyloid beta peptide (so-called senile plaques) and tau protein in the form of neurofibrillary tangles. Currently, two classes of drugs are licensed by the European Medicines Agency for the treatment of AD, i.e., acetylcholinesterase inhibitors for mild to moderate AD, and memantine, an N-methyl-D-aspartate receptor antagonist, for moderate and severe AD. Treatment with acetylcholinesterase inhibitors or memantine aims at slowing progression and controlling symptoms, whereas drugs under development are intended to modify the pathologic steps leading to AD. Herein, we review the clinical features, pharmacologic properties, and cost-effectiveness of the available acetylcholinesterase inhibitors and memantine, and focus on disease-modifying drugs aiming to interfere with the amyloid beta peptide, including vaccination and passive immunization, and with tau deposition. Keywords: Alzheimer's disease, acetylcholinesterase inhibitors, memantine, disease-modifying drugs, diagnosis, treatment
6. Classifying PML risk with disease modifying therapies.
Science.gov (United States)
Berger, Joseph R
2017-02-01
To catalogue the risk of PML with the currently available disease modifying therapies (DMTs) for multiple sclerosis (MS). All DMTs perturb the immune system in some fashion. Natalizumab, a highly effective DMT, has been associated with a significant risk of PML. Fingolimod and dimethyl fumarate have also been unquestionably associated with a risk of PML in the MS population. Concerns about PML risk with other DMTs have arisen due to their mechanism of action and pharmacological parallel to other agents with known PML risk. A method of contextualizing PML risk for DMTs is warranted. Classification of PML risk was predicated on three criteria: 1) whether the underlying condition being treated predisposes to PML in the absence of the drug; 2) the latency from initiation of the drug to the development of PML; and 3) the frequency with which PML is observed. Among the DMTs, natalizumab occupies a place of its own with respect to PML risk. Significantly lesser degrees of risk exist for fingolimod and dimethyl fumarate. Whether PML will be observed with other DMTs in use for MS, such as rituximab, teriflunomide, and alemtuzumab, remains uncertain. A logical classification for stratifying DMT PML risk is important for both the physician and patient in contextualizing risk/benefit ratios. As additional experience accumulates regarding PML and the DMTs, this early effort will undoubtedly require revisiting. Copyright © 2016 Elsevier B.V. All rights reserved.
7. Tofacitinib with conventional synthetic disease‐modifying antirheumatic drugs in Chinese patients with rheumatoid arthritis: Patient‐reported outcomes from a Phase 3 randomized controlled trial
OpenAIRE
Li, Zhanguo; An, Yuan; Su, Houheng; Li, Xiangpei; Xu, Jianhua; Zheng, Yi; Li, Guiye; Kwok, Kenneth; Wang, Lisy; Wu, Qizhe
2018-01-01
Abstract Aim Tofacitinib is an oral Janus kinase inhibitor for the treatment of rheumatoid arthritis (RA). We assess the effect of tofacitinib + conventional synthetic disease‐modifying anti rheumatic drugs (csDMARDs) on patient‐reported outcomes in Chinese patients with RA and inadequate response to DMARDs. Methods This analysis of data from the Phase 3 study ORAL Sync included Chinese patients randomized 4 : 4 : 1 : 1 to receive tofacitinib 5 mg twice daily, tofacitinib 10 mg twice daily, p...
8. Barriers and facilitators to disease-modifying antirheumatic drug use in patients with inflammatory rheumatic diseases: a qualitative theory-based study.
Science.gov (United States)
Voshaar, Marieke; Vriezekolk, Johanna; van Dulmen, Sandra; van den Bemt, Bart; van de Laar, Mart
2016-10-21
9. Infections and treatment of patients with rheumatic diseases
DEFF Research Database (Denmark)
Atzeni, F; Bendtzen, K; Bobbio-Pallavicini, F
2008-01-01
, and for the shortest possible time should therefore greatly reduce the risk of infections. Infection is a major co-morbidity in rheumatoid arthritis (RA), and conventional disease-modifying anti-rheumatic drugs (DMARDs) can increase the risk of their occurrence, including tuberculosis. TNF-alpha plays a key role...
10. 77 FR 59930 - Clinical Development Programs for Disease-Modifying Agents for Peripheral Neuropathy; Public...
Science.gov (United States)
2012-10-01
...] Clinical Development Programs for Disease-Modifying Agents for Peripheral Neuropathy; Public Workshop... to the clinical development of disease-modifying agents for the treatment of peripheral neuropathy... disease-modifying products for the management of peripheral neuropathy. Date and Time: The public workshop...
11. Assessing the effectiveness of synthetic and biologic disease-modifying antirheumatic drugs in psoriatic arthritis – a systematic review
Directory of Open Access Journals (Sweden)
Kingsley GH
2015-05-01
positive benefit. For biologics, TNF inhibitors already licensed for use were effective and similar benefits were seen with newer agents including ustekinumab, secukinumab, brodalumab, and abatacept, although the latter did not impact on skin problems. Important limitations of the systematic review included, first, the fact that for many agents there were few data and, second, that much of the recent data for newer biologics were only available in abstract form. Conclusion: Conventional disease-modifying agents, with the possible exception of leflunomide, do not show clear evidence of disease-modifying effects in psoriatic arthritis, though a newer synthetic disease-modifying agent, apremilast, appears more effective. Biologic agents appear more beneficial, although more evidence is required for newer agents. This review suggests that it may be necessary to review existing national and international management guidelines for psoriatic arthritis. Keywords: psoriatic arthritis, disease-modifying antirheumatic drugs, biologics
12. Established and novel disease-modifying treatments in multiple sclerosis.
Science.gov (United States)
Cross, A H; Naismith, R T
2014-04-01
Multiple sclerosis (MS) is a presumed autoimmune disorder of the central nervous system, resulting in inflammatory demyelination and axonal and neuronal injury. New diagnostic criteria that incorporate magnetic resonance imaging have resulted in earlier and more accurate diagnosis of MS. Several immunomodulatory and immunosuppressive therapeutic agents are available for relapsing forms of MS, which allow individualized treatment based upon the benefits and risks. Disease-modifying therapies introduced in the 1990s, the beta-interferons and glatiramer acetate, have an established track record of efficacy and safety, although they require administration via injection. More recently, monoclonal antibodies have been engineered to act through specific mechanisms such as blocking alpha-4 integrin interactions (natalizumab) or lysing cells bearing specific markers, for example CD52 (alemtuzumab) or CD20 (ocrelizumab and ofatumumab). These agents can be highly efficacious, but sometimes have serious potential complications (natalizumab is associated with progressive multifocal leukoencephalopathy; alemtuzumab is associated with the development of new autoimmune disorders). Three new oral therapies (fingolimod, teriflunomide and dimethyl fumarate, approved for MS treatment from 2010 onwards) provide efficacy, tolerability and convenience; however, as yet, there are no long-term postmarketing efficacy and safety data in a general MS population. Because of this lack of long-term data, in some cases, therapy is currently initiated with the older, safer injectable medications, but patients are monitored closely with the plan to switch therapies if there is any indication of a suboptimal response or intolerance or lack of adherence to the initial therapy. For patients with MS who present with highly inflammatory and potentially aggressive disease, the benefit-to-risk ratio may support initiating therapy using a drug with greater potential efficacy despite greater risks (e
13. Discontinuing disease-modifying therapy in progressive multiple sclerosis: can we stop what we have started?
LENUS (Irish Health Repository)
Lonergan, Roisin
2012-02-01
Disease-modifying therapy is ineffective in disabled patients (Expanded Disability Status Scale [EDSS] > 6.5) with secondary progressive multiple sclerosis (MS) without relapses, or in primary progressive MS. Many patients with secondary progressive MS who initially had relapsing MS continue to use disease-modifying therapies. The enormous associated costs are a burden to health services. Regular assessment is recommended to guide discontinuation of disease-modifying therapies when no longer beneficial, but this is unavailable to many patients, particularly in rural areas. The objectives of this study are as follows: 1. To observe use of disease-modifying therapies in patients with progressive multiple sclerosis and EDSS > 6.5. 2. To examine approaches used by a group of international MS experts for stopping disease-modifying therapies in patients with secondary progressive MS without relapses. During an epidemiological study in three regions of Ireland (southeast Dublin city, and Wexford and Donegal Counties), we recorded details of disease-modifying therapies in patients with progressive MS and EDSS > 6.5. An e-questionnaire was sent to 26 neurologists with expert knowledge of MS, asking them to share their approach to stopping disease-modifying therapies in patients with secondary progressive MS. Three hundred and thirty-six patients were studied: 88 from southeast Dublin, 99 from Wexford and 149 from Donegal. Forty-four had EDSS > 6.5: 12 were still using disease-modifying therapies. Of the surveyed neurologists, 15 made efforts to stop disease-modifying therapies in progressive multiple sclerosis, but most did not insist. A significant proportion (12 of 44 patients with progressive MS and EDSS > 6.5) was considered to be receiving therapy without benefit. Eleven of the 12 were from rural counties, reflecting poorer access to neurology services. The costs of disease-modifying therapies in this group (>170,000 euro yearly) could be re-directed towards development
14. [Therapeutic Concepts for Treatment of Patients with Non-infectious Uveitis: Biologic Disease-Modifying Antirheumatic Drugs].
Science.gov (United States)
Walscheid, Karoline; Pleyer, Uwe; Heiligenhaus, Arnd
2018-04-12
Biologic disease-modifying antirheumatic drugs (bDMARDs) can be highly efficient in the treatment of various non-infectious uveitis entities. Currently, the TNF-α inhibitor Adalimumab is the only in-label therapeutic option, whereas all other bDMARDs need to be given as off-label therapy. bDMARDs are indicated in diseases refractory to conventional synthetic DMARD therapy and/or systemic steroids, or in patients in whom treatment with those is not possible due to side effects. Therapeutic mechanisms currently employed are cytokine-specific (interferons, inhibition of TNF-α or of interleukin [IL]-1, IL-6 or IL-17 signalling), inhibit T cell costimulation (CTLA-4 fusion protein), or act via depletion of B cells (anti-CD20). All bDMARDs need to be administered parenterally, and therapy is initiated by the treating internal specialist only after interdisciplinary coordination of all treating subspecialties and after exclusion of contraindications. Regular clinical and laboratory monitoring is mandatory for all patients while under bDMARD therapy. Georg Thieme Verlag KG Stuttgart · New York.
15. Biologics for rheumatoid arthritis: an overview of Cochrane reviews
DEFF Research Database (Denmark)
Singh, Jasvinder A; Christensen, Robin; Wells, George A
2010-01-01
The biologic disease-modifying anti-rheumatic drugs (DMARDs) are very effective in treating rheumatoid arthritis (RA); however, there is a lack of head-to-head comparison studies.
16. Proposal for a new nomenclature of disease-modifying antirheumatic drugs
NARCIS (Netherlands)
Smolen, Josef S.; van der Heijde, Desiree; Machold, Klaus P.; Aletaha, Daniel; Landewe, Robert
2014-01-01
In light of the recent emergence of new therapeutics for rheumatoid arthritis, such as kinase inhibitors and biosimilars, a new nomenclature for disease-modifying antirheumatic drugs (DMARDs), which are currently often classified as synthetic (or chemical) DMARDs (sDMARDs) and biological DMARDs
17. Disease-modifying antirheumatic drugs in pregnancy - Current status and implications for the future
NARCIS (Netherlands)
Vroom, Fokaline; de Walle, Hermien E. K.; van de Laar, Mart A. J. F.; Brouwers, Jacobus R. B. J.; de Jong-van den Berg, Lolkje T. W.
2006-01-01
Drug use during pregnancy is sometimes unavoidable, especially in chronic inflammatory diseases such as rheumatoid arthritis (RA). The use of disease-modifying antirheumatic drugs (DMARDs) often starts in the early stage of RA; therefore, women of reproductive age are at risk for exposure to a DMARD
18. Factors associated with influenza and pneumococcal vaccine uptake among rheumatoid arthritis patients in Denmark invited to participate in a pneumococcal vaccine trial (Immunovax_RA)
DEFF Research Database (Denmark)
Nguyen, MTT; Lindegaard, H.; Hendricks, O.
2017-01-01
the survey during scheduled follow-up visits. The questionnaire included questions concerning previous influenza and pneumococcal vaccine uptake, attitudes about vaccination, and socio-demographic factors. Factors associated with recalled vaccine uptake were assessed by multivariate logistic regression. Results: A total of 192 RA patients completed the survey, 134 (70%) of whom were women and 90 (47%) were aged ≥ 65 years. Sixty-seven patients (35%) received conventional disease-modifying anti-rheumatic drugs (cDMARDs) and 125 (65%) combination therapy with biological disease-modifying anti…
19. A short history of anti-rheumatic therapy - V. Analgesics
Directory of Open Access Journals (Sweden)
P. Marson
2011-06-01
The pharmacological treatment of pain has very ancient origins, when plant-derived products were used, including mandrake extracts and opium, a dried latex obtained from Papaver somniferum. In the XVI and XVII centuries opium came into the preparation of two compounds widely used for pain relief: laudanum and Dover's powder. The analgesic properties of extracts of willow bark were then recognized and later, in the second half of the XIX century, experimental studies on chemically synthesized analgesics were planned, thus promoting the marketing of some derivatives of para-amino-phenol and pyrazole, the predecessors of paracetamol and metamizol. In the XX century, nonsteroidal anti-inflammatory drugs were synthesized, such as phenylbutazone, which was initially considered primarily a pain medication. The introduction to the market of centrally acting analgesics such as tramadol, sometimes used in the treatment of rheumatic pain, is quite recent.
20. Critical appraisal of the guidelines for the management of ankylosing spondylitis: disease-modifying antirheumatic drugs.
Science.gov (United States)
Soriano, Enrique R; Clegg, Daniel O; Lisse, Jeffrey R
2012-05-01
Surprisingly, little data are available for the use of disease-modifying antirheumatic drugs in ankylosing spondylitis. Sulfasalazine has been the best studied. Efficacy data for individual agents (including pamidronate) and combinations of agents are detailed in this review. Intriguingly, these agents continue to be used with some frequency, even in the absence of efficacy data. To answer these questions, additional systematic studies of these agents in ankylosing spondylitis are needed and will likely need to be done by interested collaborative groups such as SPARTAN.
1. New FDA-Approved Disease-Modifying Therapies for Multiple Sclerosis.
Science.gov (United States)
English, Clayton; Aloi, Joseph J
2015-04-01
Interferon injectables and glatiramer acetate have served as the primary disease-modifying treatments for multiple sclerosis (MS) since their introduction in the 1990s and are first-line treatments for relapsing-remitting forms of MS (RRMS). Many new drug therapies were launched since early 2010, expanding the drug treatment options considerably in a disease state that once had a limited treatment portfolio. The purpose of this review is to critically evaluate the safety profile and efficacy data of disease-modifying agents for MS approved by the US Food and Drug Administration (FDA) from 2010 to the present and provide cost and available pharmacoeconomic data about each new treatment. Peer-reviewed clinical trials, pharmacoeconomic studies, and relevant pharmacokinetic/pharmacologic studies were identified from MEDLINE (January 2000-December 2014) by using the search terms multiple sclerosis, fingolimod, teriflunomide, alemtuzumab, dimethyl fumarate, pegylated interferon, peginterferon beta-1a, glatiramer 3 times weekly, and pharmacoeconomics. Citations from available articles were also reviewed for additional references. The databases publically available at www.clinicaltrials.gov and www.fda.gov were searched for unpublished studies or studies currently in progress. A total of 5 new agents and 1 new dosage formulation were approved by the FDA for the treatment of RRMS since 2010. Peginterferon beta-1a and high-dose glatiramer acetate represent 2 new effective injectable options for MS that reduce burden of administration seen with traditional interferon and low-dose glatiramer acetate. Fingolimod, teriflunomide, and dimethyl fumarate represent new oral agents available for MS, and their efficacy in reducing annualized relapse rates is 48% to 55%, 22% to 36.3%, and 44% to 53%, respectively, compared with placebo. Alemtuzumab is a biologic given over a 2-year span that reduced annualized relapse rates by 55% in treatment-naive patients and by 49% in patients
2. Infections in patients with multiple sclerosis: Implications for disease-modifying therapy.
Science.gov (United States)
Celius, E G
2017-11-01
Patients with multiple sclerosis have an increased risk of infections compared to the general population. The increased risk has been described for decades and is not attributable to the use of disease-modifying drugs alone, but is also secondary to the disability itself. The introduction of more potent immunomodulatory drugs may pose an additional challenge and, depending on the mechanism of action, a treatment-induced increased risk of bacterial, viral, fungal or parasitic infections is observed. The choice of treatment in the individual patient with infections and multiple sclerosis must be guided by the drug's specific mechanism of action, the drug-specific risk of infection and comorbidities. Increased monitoring and follow-up through treatment registries is warranted to increase our understanding and thereby improve management. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
3. [The trend of developing new disease-modifying drugs in Alzheimer's disease].
Science.gov (United States)
Arai, Hiroyuki; Furukawa, Katsutoshi; Tomita, Naoki; Ishiki, Aiko; Okamura, Nobuyuki; Kudo, Yukitsuka
2016-03-01
Development of symptomatic treatment of Alzheimer's disease by cholinesterase inhibitors like donepezil was successful. However, it is a disappointment that development of disease-modifying drugs, such as anti-amyloid drugs based on the amyloid-cascade theory, has been interrupted or unsuccessful. Therefore, we have to be more cautious regarding inclusion criteria for clinical trials of new drugs. We agree that potentially curative drugs should be started before symptoms begin, as a preemptive therapy or prevention trial. The concept of personalized medicine is also important when ApoE4-related amyloid-reducing therapy is considered. Unfortunately, Japanese-ADNI has suffered a setback since 2014. However, the Ministry of Health, Labour and Welfare gave a final remark that there was nothing wrong in the data-managing process in the J-ADNI data center. We should pay more attention to worldwide challenges of speeding up new drug development.
4. SQ grass sublingual allergy immunotherapy tablet for disease-modifying treatment of grass pollen allergic rhinoconjunctivitis
DEFF Research Database (Denmark)
Dahl, Ronald; Roberts, Graham; de Blic, Jacques
2016-01-01
BACKGROUND: Allergy immunotherapy is a treatment option for allergic rhinoconjunctivitis (ARC). It is unique compared with pharmacotherapy in that it modifies the immunologic pathways that elicit an allergic response. The SQ Timothy grass sublingual immunotherapy (SLIT) tablet is approved in North America and throughout Europe for the treatment of adults and children (≥5 years old) with grass pollen-induced ARC. OBJECTIVE: The clinical evidence for the use of SQ grass SLIT-tablet as a disease-modifying treatment for grass pollen ARC is discussed in this review. METHODS: The review included the suitability of SQ grass SLIT-tablet for patients with clinically relevant symptoms to multiple Pooideae grass species, single-season efficacy, safety, adherence, coseasonal initiation, and cost-effectiveness. The data from the long-term SQ grass SLIT-tablet clinical trial that evaluated a clinical effect 2...
5. Patients’ satisfaction with and views about treatment with disease-modifying drugs in multiple sclerosis
Directory of Open Access Journals (Sweden)
Caroline Vieira Spessotto
2016-08-01
Objective: The treatment of multiple sclerosis (MS) with disease-modifying drugs (DMDs) is evolving and new drugs are reaching the market. Efficacy and safety aspects of the drugs are crucial, but the patients' satisfaction with the treatment must be taken into consideration. Methods: Individual interviews with patients with MS regarding their satisfaction and points of view on the treatment with DMDs. Results: One hundred and twenty-eight patients attending specialized MS units in five different cities were interviewed. Over 80% of patients were very satisfied with the drugs in use regarding convenience and perceived benefits. The only aspect scoring lesser values was tolerability. Conclusion: Parameters for improving treatment in MS must include efficacy, safety, and patient satisfaction with the given DMD.
6. Developing Disease-Modifying Treatments in Alzheimer's Disease - A Perspective from Roche and Genentech.
Science.gov (United States)
Doody, R
2017-01-01
Alzheimer's disease (AD) is a chronic neurodegenerative disease for which no preventative or disease-modifying treatments currently exist. Pathological hallmarks include amyloid plaques and neurofibrillary tangles composed of hyper-phosphorylated tau protein. Evidence suggests that both pathologies are self-propagating once established. However, the lag time between neuropathological changes in the brain and the onset of even subtle clinical symptomatology means that patients are often diagnosed late when pathology, and neurodegeneration secondary to these changes, may have been established for several years. Complex pathological pathways associated with susceptibility to AD and changes that occur downstream of the neuropathologic process further contribute to the challenging endeavour of developing novel disease-modifying therapy. Recognising this complexity, effective management of AD must include reliable screening and early diagnosis in combination with effective therapeutic management of the pathological processes. Roche and Genentech are committed to addressing these unmet needs through developing a comprehensive portfolio of diagnostics and novel therapies. Beginning with the most scientifically supported targets, this approach includes two targeted amyloid-β monoclonal antibody therapies, crenezumab and gantenerumab, and an anti-tau monoclonal antibody, RO7105705, as well as a robust biomarker platform to aid in the early identification of people at risk or in the early stages of AD. Identification and implementation of diagnostic tools will support the enrolment of patients into clinical trials; furthermore, these tools should also support evaluation of the clinical efficacy and safety profile of the novel therapeutic agents tested in these trials. This review discusses the therapeutic agents currently under clinical development.
African Journals Online (AJOL)
psoriatic arthritis (PsA), and inflammatory bowel disease (IBD), as … The biologic disease-modifying anti-rheumatic drugs (DMARDs) are classified according to … (an integrin receptor antagonist) may also be effective in Crohn's disease and …
8. Adherence to Disease Modifying Drugs among Patients with Multiple Sclerosis in Germany: A Retrospective Cohort Study.
Directory of Open Access Journals (Sweden)
Kerstin Hansen
Long-term therapies such as disease-modifying therapy for multiple sclerosis (MS) demand high levels of medication adherence in order to reach acceptable outcomes. The objective of this study was to describe adherence to four disease-modifying drugs (DMDs) among statutorily insured patients within two years following treatment initiation. These drugs were interferon beta-1a i.m. (Avonex), interferon beta-1a s.c. (Rebif), interferon beta-1b s.c. (Betaferon) and glatiramer acetate s.c. (Copaxone). This retrospective cohort study used pharmacy claims data from the data warehouse of the German Institute for Drug Use Evaluation (DAPI) from 2001 through 2009. New or renewed DMD prescriptions in the years 2002 to 2006 were identified, and adherence was estimated during 730 days of follow-up by analyzing the medication possession ratio (MPR) as a proxy for compliance, and persistence, defined as the number of days from initiation of DMD therapy until discontinuation or interruption. A total of 52,516 medication profiles or therapy cycles (11,891 Avonex, 14,060 Betaferon, 12,353 Copaxone and 14,212 Rebif) from 50,057 patients were included in the analysis. Among the 4 cohorts, no clinically relevant differences were found in available covariates. The medication possession ratio (MPR) measured overall compliance, which was 39.9% with a threshold of MPR ≥ 0.8. There were small differences in the proportion of therapy cycles during which a patient was compliant for the following medications: Avonex (42.8%), Betaferon (40.6%), Rebif (39.2%), and Copaxone (37%). Overall persistence was 32.3% at the end of the 24-month observation period, i.e. during only one third of all included therapy cycles patients did not discontinue or interrupt DMD therapy. There were also small differences in the proportion of therapy cycles during which a patient was persistent, as follows: Avonex (34.2%), Betaferon (33.4%), Rebif (31.7%) and Copaxone (29.8%). Two years after initiating MS-modifying therapy, only
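The medication possession ratio and persistence measures used in the adherence study above can be sketched in code. This is an illustrative sketch only: the function names, the 60-day refill-gap threshold and the example fill pattern are assumptions for illustration, not details taken from the study itself.

```python
# Illustrative sketch: computing a medication possession ratio (MPR) and a
# simple persistence flag from pharmacy-claims-style data. The 60-day gap
# threshold and the example fill schedule are assumed, not from the study.

def medication_possession_ratio(days_supplied, observation_days):
    """MPR = days of medication supplied / days in the observation window,
    capped at 1.0 (oversupply does not count as more than full coverage)."""
    return min(days_supplied / observation_days, 1.0)

def is_persistent(fill_days, days_per_fill, observation_days, max_gap=60):
    """Persistent if no refill gap exceeds max_gap days and coverage
    (plus the allowed gap) reaches the end of the observation window."""
    fills = sorted(fill_days)
    for prev, nxt in zip(fills, fills[1:]):
        if nxt - (prev + days_per_fill) > max_gap:
            return False  # therapy interrupted mid-window
    return fills[-1] + days_per_fill + max_gap >= observation_days

# Example: twelve monthly fills, then discontinuation, in a 730-day window
fills = [30 * i for i in range(12)]  # fill days 0, 30, ..., 330
print(round(medication_possession_ratio(12 * 30, 730), 2))  # 0.49
print(is_persistent(fills, 30, 730))  # False: coverage stops around day 360
```

With a compliance threshold of MPR ≥ 0.8, as in the study, this hypothetical patient would be classed as non-compliant and non-persistent, matching the kind of dichotomisation the abstract describes.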
9. A new proposal for randomized start design to investigate disease-modifying therapies for Alzheimer disease.
Science.gov (United States)
Zhang, Richard Y; Leon, Andrew C; Chuang-Stein, Christy; Romano, Steven J
2011-02-01
10. Baseline predictors of persistence to first disease-modifying treatment in multiple sclerosis.
Science.gov (United States)
Zettl, U K; Schreiber, H; Bauer-Steinhusen, U; Glaser, T; Hechenbichler, K; Hecker, M
2017-08-01
Patients with multiple sclerosis (MS) require lifelong therapy. However, success of disease-modifying therapies is dependent on patients' persistence and adherence to treatment schedules. In the setting of a large multicenter observational study, we aimed at assessing multiple parameters for their predictive power with respect to discontinuation of therapy. We analyzed 13 parameters to predict discontinuation of interferon beta-1b treatment during a 2-year follow-up period based on data from 395 patients with MS who were treatment-naïve at study onset. Besides clinical characteristics, patient-related psychosocial outcomes were assessed as well. Among patients without clinically relevant fatigue, males showed a higher persistence rate than females (80.3% vs 64.7%). Clinically relevant fatigue scores decreased the persistence rate in men and especially in women (71.4% and 51.2%). Besides gender and fatigue, univariable and multivariable analyses revealed further factors associated with interferon beta-1b therapy discontinuation, namely lower quality of life, depressiveness, and higher relapse rate before therapy initiation, while higher education, living without a partner, and higher age improved persistence. Patients with higher grades of fatigue and depressiveness are at higher risk to prematurely discontinue MS treatment; especially, women suffering from fatigue have an increased discontinuation rate. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
11. Disease-modifying treatments for early and advanced multiple sclerosis: a new treatment paradigm.
Science.gov (United States)
Giovannoni, Gavin
2018-06-01
The treatment of multiple sclerosis is evolving rapidly, with 11 classes of disease-modifying therapies (DMTs). This article provides an overview of a new classification system for DMTs and a treatment paradigm for using these DMTs effectively and safely. It summarizes research into more active approaches to early and effective treatment of multiple sclerosis, with defined treatment targets of no evident disease activity (NEDA). New insights are discussed that are allowing the field to begin to tackle more advanced multiple sclerosis, including people with multiple sclerosis using wheelchairs. However, the need to modify expectations of what can be achieved in more advanced multiple sclerosis is discussed; in particular, the focus on neuronal systems with reserve capacity, for example, upper limb, bulbar and visual function. The review describes a new, more active way of managing multiple sclerosis and concludes with a call to action in solving the problem of slow adoption of innovations and the global problem of untreated, or undertreated, multiple sclerosis.
12. Fertility, pregnancy and childbirth in patients with multiple sclerosis: impact of disease-modifying drugs.
Science.gov (United States)
Amato, Maria Pia; Portaccio, Emilio
2015-03-01
In recent decades, pregnancy-related issues in multiple sclerosis (MS) have received growing interest. MS is more frequent in women than in men and typically starts during child-bearing age. An increasing number of disease-modifying drugs (DMDs) for the treatment of MS are becoming available. Gathering information on their influences on pregnancy-related issues is of crucial importance for the counselling of MS patients. As for the immunomodulatory drugs (interferons and glatiramer acetate), accumulating evidence points to the relative safety of pregnancy exposure in terms of maternal and foetal outcomes. In case of higher clinical disease activity before pregnancy, these drugs could be continued until conception. As for the 'newer' drugs (fingolimod, natalizumab, teriflunomide, dimethyl fumarate and alemtuzumab), the information is more limited. Whereas fingolimod and teriflunomide are likely associated with an increased risk of foetal malformations, the effects of natalizumab, dimethyl fumarate and alemtuzumab still need to be ascertained. This article provides a review of the available information on the use of DMDs during pregnancy, with a specific focus on fertility, foetal development, delivery and breast-feeding.
13. Biologic and oral disease-modifying antirheumatic drug monotherapy in rheumatoid arthritis
Science.gov (United States)
Emery, Paul; Sebba, Anthony; Huizinga, Tom W J
2013-01-01
Clinical evidence demonstrates coadministration of tumour necrosis factor inhibitor (TNFi) agents and methotrexate (MTX) is more efficacious than administration of TNFi agents alone in patients with rheumatoid arthritis, leading to the perception that coadministration of MTX with all biologic agents or oral disease-modifying antirheumatic drugs is necessary for maximum efficacy. Real-life registry data reveal approximately one-third of patients taking biologic agents use them as monotherapy. Additionally, an analysis of healthcare claims data showed that when MTX was prescribed in conjunction with a biologic agent, as many as 58% of patients did not collect the MTX prescription. Given this discrepancy between perception and real life, we conducted a review of the peer-reviewed literature and rheumatology medical congress abstracts to determine whether data support biologic monotherapy as a treatment option for patients with rheumatoid arthritis. Our analysis suggests only for tocilizumab is there evidence that the efficacy of biologic monotherapy is comparable with combination therapy with MTX. PMID:23918035
14. Adherence to synthetic disease-modifying Antirheumatic Drugs in Rheumatoid Arthritis: Results of the OBSERVAR Study.
Science.gov (United States)
Juan Mas, Antonio; Castañeda, Santos; Cantero Santamaría, José I; Baquero, José L; Del Toro Santos, Francisco J
2017-12-27
15. The Neuroprotective Disease-Modifying Potential of Psychotropics in Parkinson's Disease
Directory of Open Access Journals (Sweden)
Edward C. Lauterbach
2012-01-01
Neuroprotective treatments in Parkinson's disease (PD) have remained elusive. Psychotropics are commonly prescribed in PD without regard to their pathobiological effects. The authors investigated the effects of psychotropics on pathobiological proteins, proteasomal activity, mitochondrial functions, apoptosis, neuroinflammation, trophic factors, stem cells, and neurogenesis. Only findings replicated in at least 2 studies were considered for these actions. Additionally, PD-related gene transcription, animal model, and human neuroprotective clinical trial data were reviewed. Results indicate that, from a PD pathobiology perspective, the safest drugs (i.e., drugs least likely to promote cellular neurodegenerative mechanisms, balanced against their likelihood of promoting neuroprotective mechanisms) include pramipexole, valproate, lithium, desipramine, escitalopram, and dextromethorphan. Fluoxetine favorably affects transcription of multiple genes (e.g., MAPT, GBA, CCDC62, HIP1R), although it and desipramine reduced MPTP mouse survival. Haloperidol is best avoided. The most promising neuroprotective investigative priorities will involve disease-modifying trials of the safest agents alone or in combination to capture salutary effects on H3 histone deacetylase, gene transcription, glycogen synthase kinase-3, α-synuclein, reactive oxygen species (ROS), reactive nitrogen species (RNS), apoptosis, inflammation, and trophic factors including GDNF and BDNF.
16. Huperzine A: Is it an Effective Disease-Modifying Drug for Alzheimer's Disease?
Science.gov (United States)
Qian, Zhong Ming; Ke, Ya
2014-01-01
Alzheimer's disease (AD) is a progressive neurodegenerative disorder for which there is no cure. Huperzine A (HupA) is a natural inhibitor of acetylcholinesterase (AChE) derived from the Chinese folk medicine Huperzia serrata (Qian Ceng Ta). It is a licensed anti-AD drug in China and is available as a nutraceutical in the US. A growing body of evidence has demonstrated that HupA has multifaceted pharmacological effects. In addition to the symptomatic, cognitive-enhancing effect via inhibition of AChE, a number of recent studies have reported that this drug has "non-cholinergic" effects on AD. Most important among these is the protective effect of HupA on neurons against amyloid beta-induced oxidative injury and mitochondrial dysfunction as well as via the up-regulation of nerve growth factor and antagonizing N-methyl-d-aspartate receptors. The most recent discovery that HupA may reduce brain iron accumulation lends further support to the argument that HupA could serve as a potential disease-modifying agent for AD and also other neurodegenerative disorders by significantly slowing down the course of neuronal death.
18. Huperzine A: is it an effective disease-modifying drug for Alzheimer’s disease?
Directory of Open Access Journals (Sweden)
Zhong Ming eQian
2014-08-01
Alzheimer's disease (AD) is a progressive neurodegenerative disorder for which there is no cure. Huperzine A (HupA) is a natural inhibitor of acetylcholinesterase (AChE) derived from the Chinese folk medicine Huperzia serrata (Qian Ceng Ta). It is a licensed anti-AD drug in China and is available as a nutraceutical in the US. A growing body of evidence has demonstrated that HupA has multifaceted pharmacological effects. In addition to the symptomatic, cognitive-enhancing effect via inhibition of AChE, a number of recent studies have reported that this drug has non-cholinergic effects on AD. Most important among these is the protective effect of HupA on neurons against amyloid beta-induced oxidative injury and mitochondrial dysfunction, as well as via the up-regulation of nerve growth factor and antagonizing N-methyl-D-aspartate receptors. The most recent discovery that HupA may reduce brain iron accumulation lends further support to the argument that HupA could serve as a potential disease-modifying agent for AD and also other neurodegenerative disorders by significantly slowing down the course of neuronal death.
19. Perspectives and experiences of Dutch multiple sclerosis patients and multiple sclerosis-specialized neurologists on injectable disease-modifying treatment
NARCIS (Netherlands)
Visser, Leo H.; Heerings, Marco A.; Jongen, Peter J.; van der Hiele, Karin
2016-01-01
Background: The adherence to treatment with injectable disease-modifying drugs (DMDs) in multiple sclerosis (MS) may benefit from adequate information provision and management of expectations. The communication between patients and physicians is very important in this respect. The current study
20. Health-Related Quality of Life in Patients with Multiple Sclerosis : Impact of Disease-Modifying Drugs
NARCIS (Netherlands)
Jongen, Peter Joseph
Multiple sclerosis (MS) has a profound impact on health-related quality of life (HRQoL), a comprehensive subjective measure of the patient's health status. Assessment of HRQoL informs on the potential advantages and disadvantages of disease-modifying drugs (DMDs) beyond their effects on
1. Pamidronate infusion improved two cases of intractable seronegative rheumatoid arthritis
Directory of Open Access Journals (Sweden)
Mansour Salesi
2011-01-01
Pamidronate is a bisphosphonate derivative that can inhibit bone resorption by acting on osteoclasts and can increase bone density in spite of treatment with steroids. This drug also has an anti-inflammatory effect by increasing apoptosis of monocytes. Five to ten percent of rheumatoid arthritis patients are seronegative and may be resistant to conventional disease-modifying anti-rheumatic drugs (DMARDs). Intravenous (IV) pamidronate can be effective in disease control in seronegative rheumatoid arthritis. We report two cases of seronegative, drug-resistant rheumatoid arthritis that responded favorably to pamidronate.
2. Monitoring patients with rheumatoid arthritis in routine care
DEFF Research Database (Denmark)
Hetland, Merete Lund; Jensen, Dorte Vendelbo; Krogh, Niels Steen
2014-01-01
OBJECTIVES: Advances in aggressive use of conventional synthetic disease-modifying anti-rheumatic drugs (csDMARDs) as well as biological DMARDs (bDMARDs) have improved the treatment armamentarium for rheumatologists, and modern treatment principles include a treat-to-target (T2T) strategy. However......, little is known about the feasibility of a T2T strategy in patients with rheumatoid arthritis (RA) treated in routine care. The aim of the present study was to (i) present the annual number of patients included in DANBIO between 2006 and 2013 and their disease characteristics and (ii) estimate coverage...
3. Infections and treatment of patients with rheumatic diseases / Tumor necrosis factor-alpha binding capacity and anti-infliximab antibodies measured by fluid-phase radioimmunoassays as predictors of clinical efficacy of infliximab in Crohn's disease
DEFF Research Database (Denmark)
Atzeni, F.; Bendtzen, K.; Bobbio-Pallavicini, F.
2008-01-01
/inflammatory conditions, and current therapies have the aim of providing adequate (low) compensatory doses, the timing of GC administration, such as during the nocturnal turning-on phase of tumour necrosis factor (TNF) secretion, can be extremely important. The use of the lowest possible GC dose, at night......, and for the shortest possible time should therefore greatly reduce the risk of infections. Infection is a major co-morbidity in rheumatoid arthritis (RA), and conventional disease-modifying anti-rheumatic drugs (DMARDs) can increase the risk of their occurrence, including tuberculosis. TNF-alpha plays a key role...
4. Dumping convention
International Nuclear Information System (INIS)
Roche, P.
1992-01-01
Sea dumping of radioactive waste has, since 1983, been precluded under a moratorium established by the London Dumping Convention. Pressure from the nuclear industry to allow ocean dumping of nuclear waste is reported in this article. (author)
5. Treatment patterns in disease-modifying therapy for patients with multiple sclerosis in the United States.
Science.gov (United States)
Bonafede, Machaon M; Johnson, Barbara H; Wenten, Madé; Watson, Crystal
2013-10-01
Patients with multiple sclerosis (MS) whose disease activity is inadequately controlled with a platform therapy (interferon beta or glatiramer acetate [GA]) may switch to another platform therapy or escalate therapy to natalizumab or fingolimod, which were approved in the US in 2006 and 2010, respectively. The objective of this study was to describe treatment patterns in patients with MS in the United States who were followed for 2 years after initiating a disease-modifying therapy (DMT). A retrospective observational cohort study was conducted to examine treatment patterns of initial DMT use (on initial therapy for 2 years with and without gaps of ≥ 60 days, medication switching, and discontinuation) among patients with MS who initiated a platform therapy (interferon-β or glatiramer acetate) or natalizumab between January 1, 2007, and September 30, 2009; the first DMT claim was the index. Eligible patients were identified in the MarketScan Commercial and Medicare Supplemental databases based on continuous enrollment for 6 months before (preindex period) and 24 months after their index date, with a diagnosis of MS and no claim for a previous DMT in the 6-month preindex period. Demographics at index and clinical characteristics during the preindex period were also analyzed. A total of 6181 MS patients were included, with 5735 (92.8%) starting on platform therapy. Natalizumab initiators were more likely to stay on index therapy (32.3% vs 16.9%), less likely to have treatment gaps of ≥ 60 days (44.8% vs 55.3%), less likely to discontinue treatment (13.9% vs 19.1%, P = 0.007), and took longer to switch (400.9 days vs 330.7 days) than platform initiators in the 2 years after treatment initiation. Switching between platform therapies is common despite evidence that MS patients on platform therapy may benefit from switching to natalizumab. © 2013 Elsevier HS Journals, Inc. All rights reserved.
6. Management of coccidioidomycosis in patients receiving biologic response modifiers or disease-modifying antirheumatic drugs.
Science.gov (United States)
Taroumian, Sara; Knowles, Susan L; Lisse, Jeffrey R; Yanes, James; Ampel, Neil M; Vaz, Austin; Galgiani, John N; Hoover, Susan E
2012-12-01
Coccidioidomycosis (valley fever) is an endemic fungal infection of the American Southwest, an area with a large population of patients with rheumatic diseases. There are currently no guidelines for management of patients who develop coccidioidomycosis while under treatment with biologic response modifiers (BRMs) or disease-modifying antirheumatic drugs (DMARDs). We conducted a retrospective study of how both concurrent diseases were managed and the patient outcomes at 2 centers in Tucson, Arizona. A retrospective chart review identified patients who developed coccidioidomycosis during treatment with DMARDs or BRMs. Patients were seen at least once in a university-affiliated or Veterans Affairs outpatient rheumatology clinic in Tucson, Arizona, between 2007 and 2009. Forty-four patients were identified. Rheumatologic treatment included a BRM alone (n = 11), a DMARD alone (n = 8), or combination therapy (n = 25). Manifestations of coccidioidomycosis included pulmonary infection (n = 29), disseminated disease (n = 9), and asymptomatic positive coccidioidal serologies (n = 6). After the diagnosis of coccidioidomycosis, 26 patients had BRMs and DMARDs stopped, 8 patients had BRMs stopped but DMARD therapy continued, and 10 patients had no change in their immunosuppressive therapy. Forty-one patients had antifungal therapy initiated for 1 month or longer. Followup data were available for 38 patients. BRM and/or DMARD therapy was continued or resumed in 33 patients, only 16 of whom continued concurrent antifungal therapy. None of the patients have had subsequent dissemination or complications of coccidioidomycosis. Re-treating rheumatic disease patients with a BRM and/or a DMARD after coccidioidomycosis appears to be safe in some patients. We propose a management strategy based on coccidioidomycosis disease activity. Copyright © 2012 by the American College of Rheumatology.
7. Therapeutic compliance of first line disease-modifying therapies in patients with multiple sclerosis. COMPLIANCE Study.
Science.gov (United States)
Saiz, A; Mora, S; Blanco, J
2015-05-01
8. Environmental enrichment imparts disease-modifying and transgenerational effects on genetically-determined epilepsy and anxiety.
Science.gov (United States)
Dezsi, Gabi; Ozturk, Ezgi; Salzberg, Michael R; Morris, Margaret; O'Brien, Terence J; Jones, Nigel C
2016-09-01
The absence epilepsies are presumed to be caused by genetic factors, but the influence of environmental exposures on epilepsy development and severity, and whether this influence is transmitted to subsequent generations, is not well known. We assessed the effects of environmental enrichment on epilepsy and anxiety outcomes in multiple generations of GAERS - a genetic rat model of absence epilepsy that manifests comorbid elevated anxiety-like behaviour. GAERS were exposed to environmental enrichment or standard housing beginning either prior to, or after epilepsy onset, and underwent EEG recordings and anxiety testing. Then, we exposed male GAERS to early enrichment or standard housing and generated F1 progeny, which also underwent EEG recordings. Hippocampal CRH mRNA expression and DNA methylation were assessed using RT-PCR and pyrosequencing, respectively. Early environmental enrichment delayed the onset of epilepsy in GAERS, and resulted in fewer seizures in adulthood, compared with standard housed GAERS. Enrichment also reduced the frequency of seizures when initiated in adulthood. Anxiety levels were reduced by enrichment, and these anti-epileptogenic and anxiolytic effects were heritable into the next generation. We also found reduced expression of CRH mRNA in GAERS exposed to enrichment, but this was not due to changes in DNA methylation. Environmental enrichment produces disease-modifying effects on genetically determined absence epilepsy and anxiety, and these beneficial effects are transferable to the subsequent generation. Reduced CRH expression was associated with these phenotypic improvements. Environmental stimulation holds promise as a naturalistic therapy for genetically determined epilepsy which may benefit subsequent generations. Copyright © 2016 Elsevier Inc. All rights reserved.
9. ANALYSIS OF DISEASE MODIFYING DRUGS ADMINISTRATION FREQUENCY AND CAUSES OF THEIR WITHDRAWAL IN RHEUMATOID ARTHRITIS
Directory of Open Access Journals (Sweden)
E V Pavlova
2000-01-01
Full Text Available Aim of study: To assess the frequency of practical application of different basic drugs in rheumatoid arthritis (RA). Material and methods: The study was conducted on the basis of a patient questionnaire and analysis of case histories by randomized sampling among 103 consecutive pts (M:F = 13:90) with definite RA (ARA, 1987) in the rheumatologic department of Clinical Hospital No. 1 in Ekaterinburg. 74% of pts under study demonstrated systemic manifestations: anemia (in 47 pts), lymphadenopathy (in 34), rheumatoid nodules (in 15), Sjogren's syndrome (in 4), nephropathy (in 4), and vascular disturbances including Raynaud's phenomenon and capillaritis (1 pt each). Results: In the course of disease, basic therapy was prescribed to 88 out of 103 (85.4%) pts, and one and the same patient could take different basic drugs. Aminoquinoline drugs prevailed, followed by immunodepressants and gold preparations. More rarely pts had sulfasalazine, cuprenil and wobenzym. In general, in 133 out of 184 cases of prescribing basic drugs, the drugs were cancelled. The reasons for cancellation were predominantly the absence of the drug in pharmaceutical stores (in 48 cases, on average after 8 months of taking the drug), then insufficient efficacy (44 cases, on average after 1.3 years). In 18 cases pts themselves stopped treatment, on average after 3.5 months of drug taking. Conclusion: In the majority of cases of basic drug cancellation in RA, the cause is the drug's absence on sale, especially on free-of-charge prescription. Cases of self-cancellation of the drug demonstrate the need to explain to pts the necessity of long-term taking of disease-modifying drugs.
10. Patient perspectives on switching disease-modifying therapies in the NARCOMS registry.
Science.gov (United States)
Salter, Amber R; Marrie, Ruth Ann; Agashivala, Neetu; Belletti, Daniel A; Kim, Edward; Cutter, Gary R; Cofield, Stacey S; Tyry, Tuula
2014-01-01
The evolving landscape of disease-modifying therapies (DMTs) for multiple sclerosis raises important questions about why patients change DMTs. Physicians and patients could benefit from a better understanding of the reasons for switching therapy. To investigate the reasons patients switch DMTs and identify characteristics associated with the decision to switch. The North American Research Committee on Multiple Sclerosis (NARCOMS) Registry conducted a supplemental survey among registry participants responding to the 2011 update survey. The supplemental survey investigated reasons for switching DMT, origin of the discussion of DMT change, and which factors influenced the decision. Chi-square tests, Fisher's exact tests, and logistic regression were used for the analyses. Of the 691 eligible candidates, 308 responded and met the inclusion criteria (relapsing disease course, switched DMT after September 2010). The responders were 83.4% female, on average 52 years old, with a median (interquartile range) Patient-Determined Disease Steps score of 4 (2-5). The most recent prior therapy included first-line injectables (74.5%), infusions (18.1%), an oral DMT (3.4%), and other DMTs (4.0%). The discussion to switch DMT was initiated almost equally by physicians and participants. The primary reason for choosing the new DMT was based most frequently on physician's recommendation (24.5%) and patient perception of efficacy (13.7%). Participants frequently initiated the discussion regarding changing DMT, although physician recommendations regarding the specific therapy were still weighed highly. Long-term follow-up of these participants will provide valuable information on their disease trajectory, satisfaction with, and effectiveness of their new medication.
11. Disease-modifying effect of atipamezole in a model of post-traumatic epilepsy.
Science.gov (United States)
Nissinen, Jari; Andrade, Pedro; Natunen, Teemu; Hiltunen, Mikko; Malm, Tarja; Kanninen, Katja; Soares, Joana I; Shatillo, Olena; Sallinen, Jukka; Ndode-Ekane, Xavier Ekolle; Pitkänen, Asla
2017-10-01
Treatment of TBI remains a major unmet medical need, with 2.5 million new cases of traumatic brain injury (TBI) each year in Europe and 1.5 million in the USA. This single-center proof-of-concept preclinical study tested the hypothesis that pharmacologic neurostimulation with proconvulsants, either atipamezole, a selective α2-adrenoceptor antagonist, or the cannabinoid receptor 1 antagonist SR141716A, as monotherapy would improve functional recovery after TBI. A total of 404 adult Sprague-Dawley male rats were randomized into two groups: sham-injured or lateral fluid-percussion-induced TBI. The rats were treated with atipamezole (started at 30 min or 7 d after TBI) or SR141716A (2 min or 30 min post-TBI) for up to 9 wk. Total follow-up time was 14 wk after treatment initiation. Outcome measures included motor (composite neuroscore, beam-walking) and cognitive performance (Morris water-maze), seizure susceptibility, spontaneous seizures, and cortical and hippocampal pathology. All injured rats exhibited similar impairment in the neuroscore and beam-walking tests at 2 d post-TBI. Atipamezole treatment initiated at either 30 min or 7 d post-TBI and continued for 9 wk via subcutaneous osmotic minipumps improved performance in both the neuroscore and beam-walking tests, but not in the Morris water-maze spatial learning and memory test. Atipamezole treatment initiated at 7 d post-TBI also reduced seizure susceptibility in the pentylenetetrazol test 14 wk after treatment initiation, although it did not prevent the development of epilepsy. SR141716A administered as a single dose at 2 min post-TBI or initiated at 30 min post-TBI and continued for 9 wk had no recovery-enhancing or antiepileptogenic effects. Mechanistic studies to assess the α2-adrenoceptor subtype specificity of the disease-modifying effects of atipamezole revealed that genetic ablation of α2A-noradrenergic receptor function in Adra2A mice carrying an N79P point mutation had antiepileptogenic effects
12. Patient perspectives on switching disease-modifying therapies in the NARCOMS registry
Directory of Open Access Journals (Sweden)
Salter AR
2014-07-01
Full Text Available Amber R Salter,1 Ruth Ann Marrie,2,3 Neetu Agashivala,4 Daniel A Belletti,4 Edward Kim,4 Gary R Cutter,1 Stacey S Cofield,1 Tuula Tyry5; 1Department of Biostatistics, University of Alabama at Birmingham, Birmingham, AL, USA; 2Department of Internal Medicine, 3Department of Community Health Sciences, University of Manitoba, Winnipeg, MB, Canada; 4Novartis Pharmaceutical Corporation, East Hanover, NJ, USA; 5Division of Neurology, St. Joseph’s Hospital and Medical Center, Phoenix, AZ, USA. Introduction: The evolving landscape of disease-modifying therapies (DMTs) for multiple sclerosis raises important questions about why patients change DMTs. Physicians and patients could benefit from a better understanding of the reasons for switching therapy. Purpose: To investigate the reasons patients switch DMTs and identify characteristics associated with the decision to switch. Method: The North American Research Committee on Multiple Sclerosis (NARCOMS) Registry conducted a supplemental survey among registry participants responding to the 2011 update survey. The supplemental survey investigated reasons for switching DMT, origin of the discussion of DMT change, and which factors influenced the decision. Chi-square tests, Fisher’s exact tests, and logistic regression were used for the analyses. Results: Of the 691 eligible candidates, 308 responded and met the inclusion criteria (relapsing disease course, switched DMT after September 2010). The responders were 83.4% female, on average 52 years old, with a median (interquartile range) Patient-Determined Disease Steps score of 4 (2–5). The most recent prior therapy included first-line injectables (74.5%), infusions (18.1%), an oral DMT (3.4%), and other DMTs (4.0%). The discussion to switch DMT was initiated almost equally by physicians and participants. The primary reason for choosing the new DMT was based most frequently on physician’s recommendation (24.5%) and patient perception of efficacy (13.7%). Conclusion
13. Disease modifying and antiangiogenic activity of 2-Methoxyestradiol in a murine model of rheumatoid arthritis
Directory of Open Access Journals (Sweden)
Moore Elizabeth G
2009-05-01
study parameters and prevented neovascularization into the joint. Examination of gene expression in dissected hind limbs from mice treated for 5 or 14 days with 2ME2 showed inhibition of inflammatory cytokine message for IL-1β, TNF-α, IL-6 and IL-17, as well as the angiogenic cytokines VEGF and FGF-2. Conclusion: These data demonstrate that in the CAIA mouse model of RA, 2ME2 has disease-modifying activity that is at least partially attributable to the inhibition of neovascular development. Further, the data suggest new mechanistic points of intervention for 2ME2 in RA, specifically inhibition of inflammatory mediators and osteoclast activity.
14. Cognitive enhancers (Nootropics). Part 3: drugs interacting with targets other than receptors or enzymes. Disease-modifying drugs. Update 2014.
Science.gov (United States)
Froestl, Wolfgang; Pfeifer, Andrea; Muhs, Andreas
2014-01-01
Scientists working in the field of Alzheimer's disease and, in particular, cognitive enhancers, are very productive. The review "Drugs interacting with Targets other than Receptors or Enzymes. Disease-modifying Drugs" was accepted in October 2012. In the last 20 months, new targets for the potential treatment of Alzheimer's disease were identified. Enormous progress was realized in the pharmacological characterization of natural products with cognitive enhancing properties. This review covers the evolution of research in this field through May 2014.
15. Towards the concept of disease-modifier in post-stroke or vascular cognitive impairment: a consensus report.
Science.gov (United States)
Bordet, Régis; Ihl, Ralf; Korczyn, Amos D; Lanza, Giuseppe; Jansa, Jelka; Hoerr, Robert; Guekht, Alla
2017-05-24
Vascular cognitive impairment (VCI) is a complex spectrum encompassing post-stroke cognitive impairment (PSCI) and small vessel disease-related cognitive impairment. Despite the growing health, social, and economic burden of VCI, to date, no specific treatment is available, prompting the introduction of the concept of a disease modifier. Within this clinical spectrum, VCI and PSCI remain progressive conditions in which, as in neurodegenerative diseases, progression of both vascular and degenerative lesions accounts for cognitive decline. Disease-modifying strategies should integrate both pharmacological and non-pharmacological multimodal approaches, with pleiotropic effects targeting (1) endothelial and blood-brain barrier dysfunction; (2) neuronal death and axonal loss; (3) cerebral plasticity and compensatory mechanisms; and (4) degeneration-related protein misfolding. Moreover, pharmacological and non-pharmacological treatment in PSCI or VCI requires valid study designs that clearly address basic methodological issues, such as the instruments that should be used to measure eventual changes, the biomarker-based stratification of participants to be investigated, and statistical tests, as well as the inclusion and exclusion criteria that should be applied. A consensus emerged to propose the development of a disease-modifying strategy in VCI and PSCI based on pleiotropic pharmacological and non-pharmacological approaches.
16. The effect of comedication with conventional synthetic disease modifying antirheumatic drugs on TNF inhibitor drug survival in patients with ankylosing spondylitis and undifferentiated spondyloarthritis
DEFF Research Database (Denmark)
Lie, Elisabeth; Kristensen, Lars Erik; Forsblad-d'Elia, Helena
2015-01-01
on patients with a clinical diagnosis of AS or uSpA starting treatment with adalimumab, etanercept or infliximab as their first TNFi during 2003-2010 were retrieved from the Swedish national biologics register and linked to national population based registers. Five-year drug survival was analysed by Cox......, and the associations were retained when adjusting for erythrocyte sedimentation rate, C-reactive protein, patient global, swollen joints, uveitis, psoriasis and inflammatory bowel disease. CONCLUSIONS: In this large register study of patients with AS and uSpA, use of csDMARD comedication was associated with better 5...
17. Efficacy and safety of Tofacitinib in patients with active rheumatoid arthritis resistant to conventional therapy: Preliminary results of an open-label clinical trial
Directory of Open Access Journals (Sweden)
E. L. Luchikhina
2016-01-01
Full Text Available Despite the advances in the therapy of rheumatoid arthritis (RA) associated with the use of biological anti-rheumatic drugs, the problem of effective treatment of RA is still not solved. Inclusion of new methods in treatment strategies is very important, in particular the so-called «small molecules», i.e. synthetic compounds acting on intracellular signaling pathways, such as Tofacitinib (TOFA), approved for use in rheumatologic practice. Objective: to evaluate the efficacy and safety of therapy with TOFA in combination with synthetic disease-modifying anti-rheumatic drugs (s-DMARDs), primarily methotrexate (MTX), in patients with active RA in real clinical practice. Subjects and methods. This ongoing open-label trial is a part of the scientific program «Russian Investigation of Methotrexate and Biologics in Early Active Inflammatory Arthritis» (REMARCA) that explores the possibility of adapting the «treat-to-target» strategy in real practice in Russia. The study included RA patients with moderate to high disease activity despite treatment with MTX or other DMARDs. A total of 41 patients with RA were included (8 males, 33 females; mean age 52.6±14.2 years, disease duration 47.2±49.7 months, 82.9% RF+ and 80.5% anti-CCP+, DAS28-ESR 5.45±0.95, SDAI 30.2±12.2). All the patients had previously received s-DMARDs; 12 (29.3%) patients had also received biological DMARDs (1 to 4 biologics). Oral TOFA 5 mg in combination with MTX or leflunomide was administered twice daily to 40 and 1 patients, respectively, with the possibility of increasing the dose up to 10 mg BID. To date, 37 and 12 patients have received TOFA for 3 and 6 months, respectively. Results. TOFA was used as a second-line drug (after s-DMARDs failure) in 29 (70.7%) patients and as a third-line drug (after s-DMARDs and biologics failure) in 12 (29.3%) patients. The dose was escalated to 10 mg BID in 13 (31.2%) patients, on average 11.2±1.7 weeks after treatment initiation. TOFA was not
18. Delayed wound healing and postoperative surgical site infections in patients with rheumatoid arthritis treated with or without biological disease-modifying antirheumatic drugs.
Science.gov (United States)
Tada, Masahiro; Inui, Kentaro; Sugioka, Yuko; Mamoto, Kenji; Okano, Tadashi; Kinoshita, Takuya; Hidaka, Noriaki; Koike, Tatsuya
2016-06-01
Biological disease-modifying antirheumatic drugs (bDMARDs) have become more popular for treating rheumatoid arthritis (RA). Whether or not bDMARDs increase the postoperative risk of surgical site infection (SSI) has remained controversial. We aimed to clarify the effects of bDMARDs on the outcomes of elective orthopedic surgery. We used multivariate logistic regression analysis to analyze risk factors for SSI and delayed wound healing among 227 patients with RA (mean age, 65.0 years; disease duration, 16.9 years) after 332 elective orthopedic surgeries. We also attempted to evaluate the effects of individual medications on infection. Rates of bDMARD and conventional synthetic DMARD (csDMARD) administration were 30.4 and 91.0 %, respectively. Risk factors for SSI were advanced age (odds ratio [OR], 1.11; P = 0.045), prolonged surgery (OR, 1.02; P = 0.03), and preoperative white blood cell count >10,000/μL (OR, 3.66; P = 0.003). Those for delayed wound healing were advanced age (OR, 1.16; P = 0.001), prolonged surgery (OR, 1.02; P = 0.007), preoperative white blood cell count >10,000/μL (OR, 4.56; P = 0.02), and foot surgery (OR, 6.60; P = 0.001). Risk factors for SSI and medications did not significantly differ. No DMARDs were risk factors for any outcome examined. Biological DMARDs were not risk factors for postoperative SSI. Foot surgery was a risk factor for delayed wound healing.
19. Efficacy of VX-509 (decernotinib) in combination with a disease-modifying antirheumatic drug in patients with rheumatoid arthritis
DEFF Research Database (Denmark)
Genovese, Mark C.; Yang, Fang; Østergaard, Mikkel
2016-01-01
Objective To assess early effects on joint structures of VX-509 in combination with stable disease-modifying antirheumatic drug (DMARD) therapy using MRI in adults with rheumatoid arthritis (RA). Methods This phase II, placebo-controlled, double-blind, dose-ranging study randomised patients with RA......), and the RA MRI scoring (RAMRIS) system. Results ACR20 response at week 12 was 63.6%, 60.0% and 60.0% in the VX-509 100-mg, 200-mg and 300-mg groups, respectively, compared with 25.0% in the placebo group. DAS28-CRP scores decreased in a dose-dependent manner with increasing VX-509 doses. Decreases in RAMRIS...... to a DMARD alone. MRI responses were detected at week 12. Treatment was generally well tolerated. Trial registration number NCT01754935; results....
20. A novel approach to delayed-start analyses for demonstrating disease-modifying effects in Alzheimer's disease.
Directory of Open Access Journals (Sweden)
Hong Liu-Seifert
Full Text Available One method for demonstrating disease modification is a delayed-start design, consisting of a placebo-controlled period followed by a delayed-start period wherein all patients receive active treatment. To address methodological issues in previous delayed-start approaches, we propose a new method that is robust across conditions of drug effect, discontinuation rates, and missing data mechanisms. We propose a modeling approach and test procedure to test the hypothesis of noninferiority, comparing the treatment difference at the end of the delayed-start period with that at the end of the placebo-controlled period. We conducted simulations to identify the optimal noninferiority testing procedure to ensure the method was robust across scenarios and assumptions, and to evaluate the appropriate modeling approach for analyzing the delayed-start period. We then applied this methodology to Phase 3 solanezumab clinical trial data for mild Alzheimer's disease patients. Simulation results showed a testing procedure using a proportional noninferiority margin was robust for detecting disease-modifying effects; conditions of high and moderate discontinuations; and with various missing data mechanisms. Using all data from all randomized patients in a single model over both the placebo-controlled and delayed-start study periods demonstrated good statistical performance. In analysis of solanezumab data using this methodology, the noninferiority criterion was met, indicating the treatment difference at the end of the placebo-controlled studies was preserved at the end of the delayed-start period within a pre-defined margin. The proposed noninferiority method for delayed-start analysis controls Type I error rate well and addresses many challenges posed by previous approaches. Delayed-start studies employing the proposed analysis approach could be used to provide evidence of a disease-modifying effect. This method has been communicated with FDA and has been
1. Cerivastatin Nano-Liposome as a Potential Disease Modifying Approach for the Treatment of Pulmonary Arterial Hypertension.
Science.gov (United States)
Lee, Young; Pai, S Balakrishna; Bellamkonda, Ravi V; Thompson, David H; Singh, Jaipal
2018-04-25
In this study, we have investigated nano-liposomes as an approach to tailor the pharmacology of cerivastatin as a disease-modifying drug for pulmonary arterial hypertension (PAH). Cerivastatin-encapsulated liposomes with an average diameter of 98±27 nm were generated by a thin-film and freeze-thaw process. The nano-liposomes demonstrated sustained drug release kinetics in vitro and inhibited proliferation of pulmonary artery smooth muscle cells with significantly less cellular cytotoxicity compared to free cerivastatin. When delivered by inhalation to a rat model of monocrotaline-induced PAH, cerivastatin significantly reduced pulmonary artery pressure from 55.13±9.82 mmHg to 35.56±6.59 mmHg (P < 0.001) and diminished pulmonary artery wall thickening. Echocardiography showed that cerivastatin significantly reduced right ventricle thickening (0.34±0.02 cm monocrotaline vs. 0.26±0.02 cm cerivastatin; P < 0.001) and increased pulmonary artery acceleration time (13.98±1.14 ms monocrotaline vs. 21.07±2.80 ms cerivastatin; P < 0.001). Nano-liposomal cerivastatin was equally effective or slightly better than cerivastatin in reducing pulmonary artery pressure (67.06±13.64 mmHg monocrotaline; 46.31±7.64 mmHg cerivastatin vs. 37.32±9.50 mmHg liposomal cerivastatin) and improving parameters of right ventricular function as measured by increasing pulmonary artery acceleration time (24.68±3.92 ms monocrotaline; 32.59±6.10 ms cerivastatin vs. 34.96±7.51 ms liposomal cerivastatin). More importantly, the rate and magnitude of toxic cerivastatin lactone metabolite generation from the intratracheally administered nano-liposomes were significantly lower compared to intravenously administered free cerivastatin. These studies show that nano-liposome encapsulation improved the in vitro and in vivo pharmacological and safety profile of cerivastatin and may represent a safer approach as a disease-modifying therapy for PAH. The American Society for Pharmacology and Experimental
2. Clinical Trials for Disease-Modifying Therapies in Alzheimer's Disease: A Primer, Lessons Learned, and a Blueprint for the Future.
Science.gov (United States)
Cummings, Jeffrey; Ritter, Aaron; Zhong, Kate
2018-03-16
Alzheimer's disease (AD) has no currently approved disease-modifying therapies (DMTs), and treatments to prevent, delay the onset, or slow the progression are urgently needed. A therapy producing a 5-year delay in onset, if available by 2025, would decrease the total number of patients with AD by 50% in 2050. To meet the definition of a DMT, an agent must produce an enduring change in the course of AD; clinical trials of DMTs have the goal of demonstrating this effect. AD drug discovery entails target identification followed by high-throughput screening and lead optimization of drug-like compounds. Once an optimized agent is available and has been assessed for efficacy and toxicity in animals, it progresses through Phase I testing with healthy volunteers, Phase II learning trials to establish proof-of-mechanism and dose, and Phase III confirmatory trials to demonstrate efficacy and safety in larger populations. Phase III is followed by Food and Drug Administration review and, if appropriate, market access. Trial populations include cognitively normal at-risk participants in prevention trials, mildly impaired participants with biomarker evidence of AD in prodromal AD trials, and subjects with cognitive and functional impairment in AD dementia trials. Biomarkers are critical in trials of DMTs, assisting in participant characterization and diagnosis, target engagement and proof-of-pharmacology, demonstration of disease modification, and monitoring of side effects. Clinical trial designs include randomized, parallel group; delayed start; staggered withdrawal; and adaptive. Lessons learned from completed trials inform future trials and increase the likelihood of success.
3. Needle-free and microneedle drug delivery in children: a case for disease-modifying antirheumatic drugs (DMARDs).
Science.gov (United States)
Shah, Utpal U; Roberts, Matthew; Orlu Gul, Mine; Tuleu, Catherine; Beresford, Michael W
2011-09-15
Parenteral routes of drug administration have poor acceptability and tolerability in children. Advances in transdermal drug delivery provide a potential alternative for improving drug administration in this patient group. Issues with parenteral delivery in children are highlighted and thus illustrate the scope for the application of needle-free and microneedle technologies. This mini-review discusses the opportunities and challenges for providing disease-modifying antirheumatic drugs (DMARDs) currently prescribed to paediatric rheumatology patients using such technologies. The aim is to raise further awareness of the need for age-appropriate formulations and drug delivery systems and stimulate exploration of these options for DMARDs, and in particular, rapidly emerging biologics on the market. The ability of needle-free and microneedle technologies to deliver monoclonal antibodies and fusion proteins still remains largely untested. Such an understanding is crucial for future drug design opportunities. The bioavailability, safety and tolerance of delivering biologics into the viable epidermis also need to be studied. Copyright © 2011 Elsevier B.V. All rights reserved.
4. Analysis of recent failures of disease modifying therapies in Alzheimer's disease suggesting a new methodology for future studies.
Science.gov (United States)
Amanatkar, Hamid Reza; Papagiannopoulos, Bill; Grossberg, George Thomas
2017-01-01
Pharmaceutical companies and the NIH have invested heavily in a variety of potential disease-modifying therapies for Alzheimer's disease (AD), but unfortunately all double-blind placebo-controlled Phase III studies of these drugs have failed to show statistically significant results supporting their clinical efficacy on cognitive measures. These negative results are surprising, as most of these medications have the capability to impact the biomarkers that are associated with progression of Alzheimer's disease. Areas covered: This contradiction prompted us to review all study phases of Intravenous Immunoglobulin (IVIG), Bapineuzumab, Solanezumab, Avagacestat and Dimebolin to shed more light on these recent failures. We critically analyzed these studies, recommending seven lessons from these failures which should not be overlooked. Expert commentary: We suggest a new methodology for future treatment research in Alzheimer's disease considering early intervention with more focus on cognitive decline as a screening tool, more sophisticated exclusion criteria with more reliance on biomarkers, stratification of subjects based on the rate of cognitive decline aiming for less heterogeneity, and a longer study duration with periodic assessment of cognition and activities of daily living during the study and also after a washout period.
5. Sequencing of disease-modifying therapies for relapsing-remitting multiple sclerosis: a theoretical approach to optimizing treatment.
Science.gov (United States)
Grand'Maison, Francois; Yeung, Michael; Morrow, Sarah A; Lee, Liesly; Emond, Francois; Ward, Brian J; Laneuville, Pierre; Schecter, Robyn
2018-04-18
Multiple sclerosis (MS) is a chronic disease which usually begins in young adulthood and is a lifelong condition. Individuals with MS experience physical and cognitive disability resulting from inflammation and demyelination in the central nervous system. Over the past decade, several disease-modifying therapies (DMTs) have been approved for the management of relapsing-remitting MS (RRMS), which is the most prevalent phenotype. The chronic nature of the disease and the multiple treatment options make benefit-risk-based sequencing of therapy essential to ensure optimal care. The efficacy and short- and long-term risks of treatment differ for each DMT due to their different mechanism of action on the immune system. While transitioning between DMTs, in addition to immune system effects, factors such as age, disease duration and severity, disability status, monitoring requirements, preference for the route of administration, and family planning play an important role. Determining a treatment strategy is therefore challenging as it requires careful consideration of the differences in efficacy, safety and tolerability, while at the same time minimizing risks of immune modulation. In this review, we discuss a sequencing approach for treating RRMS, with importance given to the long-term risks and individual preference when devising a treatment plan. Evidence-based strategies to counter breakthrough disease are also addressed.
6. Pregnancy and the Use of Disease-Modifying Therapies in Patients with Multiple Sclerosis: Benefits versus Risks
Directory of Open Access Journals (Sweden)
Raed Alroughani
2016-01-01
The burden of multiple sclerosis (MS) in women of childbearing potential is increasing, with peak incidence around the age of 30 years, increasing incidence and prevalence, and a growing female:male ratio. Guidelines recommend early use of disease-modifying therapies (DMTs), which are contraindicated, or recommended only with considerable caution, during pregnancy/breastfeeding. Many physicians are reluctant to prescribe them for a woman who is pregnant or planning to become pregnant. Interferons are not absolutely contraindicated during pregnancy, since interferon-β appears to lack serious adverse effects in pregnancy, despite a warning in its labelling concerning risk of spontaneous abortion. Glatiramer acetate, natalizumab, and alemtuzumab also may not induce adverse pregnancy outcomes, although natalizumab may induce haematologic abnormalities in newborns. An accelerated elimination procedure is needed for teriflunomide if pregnancy occurs on treatment or if pregnancy is planned. Current evidence supports the contraindication for fingolimod during pregnancy; data on other DMTs remain limited. Increased relapse rates following withdrawal of some DMTs in pregnancy are concerning and require further research. The postpartum period brings increased risk of disease reactivation that needs to be carefully addressed through effective communication between treating physicians and mothers intending to breastfeed. We address the potential for use of the first- and second-line DMTs in pregnancy and lactation.
7. Cognitive enhancers (nootropics). Part 3: drugs interacting with targets other than receptors or enzymes. disease-modifying drugs.
Science.gov (United States)
Froestl, Wolfgang; Pfeifer, Andrea; Muhs, Andreas
2013-01-01
Cognitive enhancers (nootropics) are drugs to treat cognition deficits in patients suffering from Alzheimer's disease, schizophrenia, stroke, attention deficit hyperactivity disorder, or aging. Cognition refers to a capacity for information processing, applying knowledge, and changing preferences. It involves memory, attention, executive functions, perception, language, and psychomotor functions. The term nootropics was coined in 1972 when memory-enhancing properties of piracetam were observed in clinical trials. In the meantime, hundreds of drugs have been evaluated in clinical trials or in preclinical experiments. To classify the compounds, a concept is proposed assigning drugs to 19 categories according to their mechanism(s) of action, in particular drugs interacting with receptors, enzymes, ion channels, nerve growth factors, re-uptake transporters, antioxidants, metal chelators, and disease-modifying drugs, meaning small molecules, vaccines, and monoclonal antibodies interacting with amyloid-β and tau. Drugs whose mechanism of action is not known are classified either according to structure, e.g., peptides, or according to origin, e.g., natural products. The review covers the evolution of research in this field over the last 25 years.
8. Long-term changes in ADAS-cog: what is clinically relevant for disease modifying trials in Alzheimer?
Science.gov (United States)
Vellas, B; Andrieu, S; Cantet, C; Dartigues, J F; Gauthier, S
2007-01-01
9. Oral disease-modifying therapies for multiple sclerosis in the Middle Eastern and North African (MENA) region: an overview.
Science.gov (United States)
Deleu, Dirk; Mesraoua, Boulenouar; Canibaño, Beatriz; Melikyan, Gayane; Al Hail, Hassan; El-Sheikh, Lubna; Ali, Musab; Al Hussein, Hassan; Ibrahim, Faiza; Hanssens, Yolande
2018-06-18
The introduction of new disease-modifying therapies (DMTs) for relapsing-remitting multiple sclerosis (RRMS) has considerably transformed the landscape of therapeutic opportunities for this chronic disabling disease. Unlike injectable drugs, oral DMTs promote patient satisfaction and increase therapeutic adherence. This article reviews the salient features of the mode of action, efficacy, safety, and tolerability profile of approved oral DMTs in RRMS, and reviews their place in clinical algorithms in the Middle East and North Africa (MENA) region. A systematic review was conducted using a comprehensive search of MEDLINE, PubMed, Cochrane Database of Systematic Reviews (period January 1, 1995-January 31, 2018). Additional searches of the American Academy of Neurology and European Committee for Treatment and Research in Multiple Sclerosis abstracts from 2012-2017 were performed, in addition to searches of the Food and Drug Administration and European Medicines Agency websites, to obtain relevant safety information on these DMTs. Four oral DMTs: fingolimod, teriflunomide, dimethyl fumarate, and cladribine have been approved by the regulatory agencies. Based on the number needed to treat (NNT), the potential role of these DMTs in the management of active and highly active or rapidly evolving RRMS is assessed. Finally, the place of the oral DMTs in clinical algorithms in the MENA region is reviewed.
10. The impact of adjusted work conditions and disease-modifying drugs on work ability in multiple sclerosis.
Science.gov (United States)
Wickström, Anne; Fagerström, Maria; Wickström, Lucas; Granåsen, Gabriel; Dahle, Charlotte; Vrethem, Magnus; Sundström, Peter
2017-07-01
Multiple sclerosis (MS) is a neurological disorder that causes significantly reduced ability to work, and the Expanded Disability Status Scale (EDSS) is one of the main predictors for reduced work ability. To investigate how work requirements, flexible work conditions and disease-modifying drugs (DMDs) influence the work ability in relation to different EDSS grades in two MS populations. Work ability was studied in two MS populations: one in the southern and one in the northern part of Sweden, both demographically similar. In the latter population, more active work-promoting interventions have been practised and second-generation DMDs have been widely used from the onset of disease for several years. The proportion of MS patients who participated in the workforce or studied was significantly higher in the northern compared with the southern population ( p work conditions and were able to work more hours per week. Higher EDSS was associated with lower reduction in number of worked hours per week in the northern population ( p = 0.042). Our data indicated that treatment strategy and adjusted work conditions have impact on work ability in MS.
11. Antidepressant Drug Treatment in Association with Multiple Sclerosis Disease-Modifying Therapy: Using Explorys in the MS Population.
Science.gov (United States)
Mirsky, Matthew M; Marrie, Ruth Ann; Rae-Grant, Alexander
2016-01-01
Background: The Explorys Enterprise Performance Management (EPM) database contains de-identified clinical data for 50 million patients. Multiple sclerosis (MS) disease-modifying therapies (DMTs), specifically interferon beta (IFNβ) treatments, may potentiate depression. Conflicting data have emerged, and a large-scale claims-based study by Patten et al. did not support such an association. This study compares the results of Patten et al. with those using the EPM database. Methods: "Power searches" were built to test the relationship between antidepressant drug use and DMT in the MS population. Searches were built to produce a cohort of individuals diagnosed as having MS in the past 3 years taking a specific DMT who were then given any antidepressant drug. The antidepressant drug therapy prevalence was tested in the MS population on the following DMTs: IFNβ-1a, IFNβ-1b, combined IFNβ, glatiramer acetate, natalizumab, fingolimod, and dimethyl fumarate. Results: In patients with MS, the rate of antidepressant drug use in those receiving DMTs was 40.60% to 44.57%. The rate of antidepressant drug use for combined IFNβ DMTs was 41.61% (males: 31.25%-39.62%; females: 43.10%-47.33%). Antidepressant drug use peaked in the group aged 45 to 54 years for five of six DMTs. Conclusions: We found no association between IFNβ treatment and antidepressant drug use in the MS population compared with other DMTs. The EPM database has been validated against the Patten et al. data for future use in the MS population.
12. Anakinra as first-line disease-modifying therapy in systemic juvenile idiopathic arthritis: report of forty-six patients from an international multicenter series
NARCIS (Netherlands)
Nigrovic, Peter A.; Mannion, Melissa; Prince, Femke H. M.; Zeft, Andrew; Rabinovich, C. Egla; van Rossum, Marion A. J.; Cortis, Elisabetta; Pardeo, Manuela; Miettunen, Paivi M.; Janow, Ginger; Birmingham, James; Eggebeen, Aaron; Janssen, Erin; Shulman, Andrew I.; Son, Mary Beth; Hong, Sandy; Jones, Karla; Ilowite, Norman T.; Cron, Randy Q.; Higgins, Gloria C.
2011-01-01
To examine the safety and efficacy of the interleukin-1 (IL-1) receptor antagonist anakinra as first-line therapy for systemic juvenile idiopathic arthritis (JIA). Patients with systemic JIA receiving anakinra as part of initial disease-modifying antirheumatic drug (DMARD) therapy were identified
13. Early cost-utility analysis of general and cerebrospinal fluid-specific Alzheimer's disease biomarkers for hypothetical disease-modifying treatment decision in mild cognitive impairment
NARCIS (Netherlands)
Handels, Ron L. H.; Joore, Manuela A.; Tran-Duy, An; Wimo, Anders; Wolfs, Claire A. G.; Verhey, Frans R. J.; Severens, Johan L.
Introduction: The study aimed to determine the room for improvement of a perfect cerebrospinal fluid (CSF) biomarker and the societal incremental net monetary benefit of CSF in subjects with mild cognitive impairment (MCI) assuming a hypothetical disease-modifying Alzheimer's disease (AD) treatment.
14. Efficacy of biological disease-modifying antirheumatic drugs: a systematic literature review informing the 2013 update of the EULAR recommendations for the management of rheumatoid arthritis
NARCIS (Netherlands)
Nam, Jackie L.; Ramiro, Sofia; Gaujoux-Viala, Cecile; Takase, Kaoru; Leon-Garcia, Mario; Emery, Paul; Gossec, Laure; Landewe, Robert; Smolen, Josef S.; Buch, Maya H.
2014-01-01
To update the evidence for the efficacy of biological disease-modifying antirheumatic drugs (bDMARD) in patients with rheumatoid arthritis (RA) to inform the European League Against Rheumatism(EULAR) Task Force treatment recommendations. Medline, Embase and Cochrane databases were searched for
15. [Acupuncture Therapy versus Disease-modifying Antirheumatic Drugs for the Treatment of Ankylosing Spondylitis--a Meta-analysis].
Science.gov (United States)
Lv, Zheng-tao; Zhou, Xiang; Chen, An-min
2015-01-01
We conducted a meta-analysis evaluating the efficacy and safety of acupuncture compared to disease-modifying antirheumatic drugs in patients with ankylosing spondylitis. Four databases, including Pubmed, EMBASE, Cochrane Library, and ISI Web of Science, were searched in December 2014, with reference sections also taken into account. Randomized controlled trials that aimed to assess the efficacy of acupuncture therapy were identified. The inclusion criteria for the outcome measurements were the clinical effect, ESR, occipital-wall test, chest expansion, CRP and finger-ground distance. Finally, six studies met these inclusion criteria. Two reviewers screened each article independently and were blinded to the findings of each other. We analyzed data from 6 RCTs involving 541 participants. Acupuncture therapy could further improve the clinical effect (OR = 3.01; 95% CI, 1.48-6.13; P = 0.002) and reduce ESR level (SMD = -0.77; 95% CI, -1.46 to -0.08; P = 0.03) compared to DMARDs; a combination of acupuncture and DMARDs could further improve clinical effect (OR = 3.20, 95% CI, 1.36-7.54; P = 0.008), occipital-wall distance (SMD = -0.84; 95% CI, -1.37 to -0.31; P = 0.002), chest expansion (SMD = 0.38; 95% CI, 0.16-0.60; P = 0.0009), and finger-ground distance (SMD = -0.48; 95% CI, -0.87 to -0.09; P = 0.02) as compared to DMARDs treatment alone. Our findings support that acupuncture therapy could be an option to relieve symptoms associated with AS. These results should be interpreted cautiously due to the generally poor methodological qualities of the included trials. © 2015 S. Karger GmbH, Freiburg.
16. Biomarker-driven phenotyping in Parkinson disease: a translational missing link in disease-modifying clinical trials
Science.gov (United States)
Espay, Alberto J.; Schwarzschild, Michael A.; Tanner, Caroline M.; Fernandez, Hubert H; Simon, David K.; Leverenz, James B.; Merola, Aristide; Chen-Plotkin, Alice; Brundin, Patrik; Kauffman, Marcelo A.; Erro, Roberto; Kieburtz, Karl; Woo, Daniel; Macklin, Eric A.; Standaert, David G.; Lang, Anthony E.
2016-01-01
Past clinical trials of putative neuroprotective therapies have targeted Parkinson disease (PD) as a single pathogenic disease entity. From an Oslerian clinico-pathologic perspective, the wide complexity of PD converges into Lewy bodies and justifies a reductionist approach to PD: a single-mechanism therapy can affect most of those sharing the classic pathologic hallmark. From a systems-biology perspective, PD is a group of disorders that, while related by sharing the feature of nigral dopamine-neuron degeneration, exhibit unique genetic, biological and molecular abnormalities, which probably respond differentially to a given therapeutic approach, particularly for strategies aimed at neuroprotection. Under this model, only biomarker-defined, homogenous subtypes of PD are likely to respond optimally to therapies proven to affect the biological processes within each subtype. Therefore, we suggest that precision medicine applied to PD requires a reevaluation of the biomarker-discovery effort. This effort is currently centered on correlating biological measures to clinical features of PD and on identifying factors that predict whether various prodromal states will convert into the classical movement disorder. We suggest, instead, that subtyping of PD requires the reverse view, where abnormal biological signals (i.e., biomarkers) rather than clinical definitions are used to define disease phenotypes. Successful development of disease-modifying strategies will depend on how relevant the specific biological processes addressed by an intervention are to the pathogenetic mechanisms in the subgroup of targeted patients. This precision-medicine approach will likely yield smaller but well-defined subsets of PD amenable to successful neuroprotection. PMID:28233927
17. Clinically isolated syndrome. Prognostic markers for conversion to multiple sclerosis and initiation of disease-modifying therapy
International Nuclear Information System (INIS)
Kohriyama, Tatsuo
2011-01-01
Eighty-five percent of patients with multiple sclerosis (MS) initially present with a single demyelinating event, referred to as a clinically isolated syndrome (CIS) of the optic nerves, brainstem, or spinal cord. Following the onset of CIS, 38 to 68% of patients develop clinically definite MS (CDMS). Clinically silent brain lesions are seen on MRI in 50 to 80% of patients with CIS at first clinical presentation and 56 to 88% of CIS patients with abnormal MRI are at high risk of conversion to CDMS. Axonal damage, that is considered to underlie the development of persistent disability in MS, occurs in the CIS stage. Treatment with disease-modifying therapies (DMTs), that might prevent axonal damage and result in slowing the progression of disability, should be initiated early during the disease course. Clinical trials demonstrated that early treatment of CIS patients with the standard dose of interferon beta (IFNβ) significantly reduced the risk of progression to CDMS by 44 to 50%. After 5 years of follow-up, the results of the IFNβ treatment extension studies confirmed that the risk of conversion to CDMS was significantly reduced by 35 to 37% in patients receiving early treatment compared to that in those receiving delayed treatment. However, not every patient with CIS will progress to CDMS; the IFNβ treatment is appropriately indicated for CIS patients who are diagnosed with MS by McDonald diagnostic criteria based on MRI findings of dissemination in space and time and are at high risk for conversion to CDMS. Development of more reliable prognostic markers will enable DMTs to be targeted for those who are most likely to benefit. (author)
18. Disease-modifying effect of anthraquinone prodrug with boswellic acid on collagenase-induced osteoarthritis in Wistar rats.
Science.gov (United States)
Dhaneshwar, Suneela; Dipmala, Patil; Abhay, Harsulkar; Prashant, Bhondave
2013-08-01
Diacerein and its active metabolite rhein are promising disease modifying agents for osteoarthritis (OA). Boswellic acid is an active ingredient of Gugglu; a herbal medicine commonly administered in osteoarthritis. Both of them possess excellent anti-inflammatory and anti-arthritic activities. It was thought interesting to conjugate rhein and boswellic acid into a mutual prodrug (DSRB) and evaluate its efficacy on collagenase-induced osteoarthritis in rats wherein the conjugate, rhein, boswellic acid and their physical mixture, were tested based on various parameters. Oral administration of 3.85 mg of rhein, 12.36 mg of boswellic acid and 15.73 mg of DSRB which would release equimolar amounts of rhein and boswellic acid, exhibited significant restoration in rat body weight as compared to the untreated arthritic control group. Increase in knee diameter (mm), due to edema was observed in group injected with collagenase, which reduced significantly with the treatment of conjugate. The hematological parameters (Hb, RBC, WBC and ESR) and biochemical parameters (CRP, SALP, SGOT and SGPT) in the osteoarthritic rats were significantly brought back to normal values on treatment with conjugate. It also showed better anti-ulcer activity than rhein. Further the histopathological studies revealed significant anti-arthritic activity of conjugate when compared with the arthritic control group. In conclusion, the conjugate at the specified dose level of 15.73 mg/kg, p. o. (BID) showed reduction in knee diameter and it could significantly normalize the hematological and biochemical abnormalities in collagenase-induced osteoarthritis in rats. Further the histopathological studies confirmed the additive anti-arthritic effect of DSRB as compared to plain rhein.
19. The Effect of Disease-Modifying Drugs on Brain Atrophy in Relapsing-Remitting Multiple Sclerosis: A Meta-Analysis.
Directory of Open Access Journals (Sweden)
Pierre Branger
The quantification of brain atrophy in relapsing-remitting multiple sclerosis (RRMS) may serve as a marker of disease progression and treatment response. We compared the association between first-line (FL) or second-line (SL) disease-modifying drugs (DMDs) and brain volume changes over time in RRMS. We reviewed clinical trials in RRMS between January 1, 1995 and June 1, 2014 that assessed the effect of DMDs and reported data on brain atrophy in Medline, Embase, the Cochrane database and meeting abstracts. First, we designed a meta-analysis to directly compare the percentage brain volume change (PBVC) between FLDMDs and SLDMDs at 24 months. Second, we conducted an observational and longitudinal linear regression analysis of a 48-month follow-up period. Sensitivity analyses considering PBVC between 12 and 48 months were also performed. Among the 272 studies identified, 117 were analyzed and 35 (18,140 patients) were included in the analysis. Based on the meta-analysis, atrophy was greater for the use of an FLDMD than that of an SLDMD at 24 months (primary endpoint mean difference, -0.86; 95% confidence interval: -1.57 to -0.15; P = 0.02). Based on the linear regression analysis, the annual PBVC significantly differed between SLDMDs and placebo (-0.27%/y and -0.50%/y, respectively; P = 0.046) but not between FLDMDs (-0.33%/y) and placebo (P = 0.11) or between FLDMDs and SLDMDs (P = 0.49). Based on sensitivity analysis, the annual PBVC was reduced for SLDMDs compared with placebo (-0.14%/y and -0.56%/y, respectively; P < 0.001) and FLDMDs (-0.46%/y, P < 0.005), but no difference was detected between FLDMDs and placebo (P = 0.12). SLDMDs were associated with a reduced PBVC slope over time in RRMS, regardless of the period considered. These results provide new insights into the mechanisms underlying atrophy progression in RRMS.
20. Real-World Adherence and Persistence to Oral Disease-Modifying Therapies in Multiple Sclerosis Patients Over 1 Year.
Science.gov (United States)
Johnson, Kristen M; Zhou, Huanxue; Lin, Feng; Ko, John J; Herrera, Vivian
2017-08-01
Disease-modifying therapies (DMTs) are indicated to reduce relapse rates and slow disease progression for relapsing-remitting multiple sclerosis (MS) patients when taken as prescribed. Nonadherence or non-persistence in the real-world setting can lead to greater risk for negative clinical outcomes. Although previous research has demonstrated greater adherence and persistence to oral DMTs compared with injectable DMTs, comparisons among oral DMTs are lacking. To compare adherence, persistence, and time to discontinuation among MS patients newly prescribed the oral DMTs fingolimod, dimethyl fumarate, or teriflunomide. This retrospective study used MarketScan Commercial and Medicare Supplemental claims databases. MS patients with ≥ 1 claim for specified DMTs from April 1, 2013, to June 30, 2013, were identified. The index drug was defined as the first oral DMT within this period. To capture patients newly initiating index DMTs, patients could not have a claim for their index drugs in the previous 12 months. Baseline characteristics were described for patients in each treatment cohort. Adherence, as measured by medication possession ratio (MPR) and proportion of days covered (PDC); persistence (30-day gap allowed); and time to discontinuation over a 12-month follow-up period were compared across treatment cohorts. Adjusted logistic regression models were used to examine adherence, and Cox regression models estimated risk of discontinuation. 1,498 patients newly initiated oral DMTs and met study inclusion criteria: fingolimod (n = 185), dimethyl fumarate (n = 1,160), and teriflunomide (n = 143). Patients were similar across most baseline characteristics, including region, relapse history, and health care resource utilization. Statistically significant differences were observed across the treatment cohorts for age, gender, previous injectable/infused DMT use, and comorbidities. Adherence and time to discontinuation were adjusted for age, gender, region, previous oral
1. Identification and Prioritization of Important Attributes of Disease-Modifying Drugs in Decision Making among Patients with Multiple Sclerosis: A Nominal Group Technique and Best-Worst Scaling.
Science.gov (United States)
Kremer, Ingrid E H; Evers, Silvia M A A; Jongen, Peter J; van der Weijden, Trudy; van de Kolk, Ilona; Hiligsmann, Mickaël
2016-01-01
Understanding the preferences of patients with multiple sclerosis (MS) for disease-modifying drugs and involving these patients in clinical decision making can improve the concordance between medical decisions and patient values and may, subsequently, improve adherence to disease-modifying drugs. This study aims first to identify which characteristics-or attributes-of disease-modifying drugs influence patients´ decisions about these treatments and second to quantify the attributes' relative importance among patients. First, three focus groups of relapsing-remitting MS patients were formed to compile a preliminary list of attributes using a nominal group technique. Based on this qualitative research, a survey with several choice tasks (best-worst scaling) was developed to prioritize attributes, asking a larger patient group to choose the most and least important attributes. The attributes' mean relative importance scores (RIS) were calculated. Nineteen patients reported 34 attributes during the focus groups and 185 patients evaluated the importance of the attributes in the survey. The effect on disease progression received the highest RIS (RIS = 9.64, 95% confidence interval: [9.48-9.81]), followed by quality of life (RIS = 9.21 [9.00-9.42]), relapse rate (RIS = 7.76 [7.39-8.13]), severity of side effects (RIS = 7.63 [7.33-7.94]) and relapse severity (RIS = 7.39 [7.06-7.73]). Subgroup analyses showed heterogeneity in preference of patients. For example, side effect-related attributes were statistically more important for patients who had no experience in using disease-modifying drugs compared to experienced patients (p decision making would be needed and requires eliciting individual preferences.
2. A short history of anti-rheumatic therapy - VII. Biological agents
Directory of Open Access Journals (Sweden)
B. Gatto
2011-11-01
The introduction of biological agents has been a major turning-point in the treatment of rheumatic diseases, particularly in rheumatoid arthritis. This review describes the principal milestones that have led, through the knowledge of the structure and functions of nucleic acids, to the development of production techniques of the three major families of biological agents: proteins, monoclonal antibodies and fusion proteins. A brief history is also traced of the cytokines most involved in the pathogenesis of inflammatory rheumatic diseases (IL-1 and TNF) and the steps which have led to the use of the main biological drugs in rheumatology: anakinra, infliximab, adalimumab, etanercept and rituximab.
3. Osteoporosis in Rheumatic Diseases: Anti-rheumatic Drugs and the Skeleton.
Science.gov (United States)
Dubrovsky, Alanna M; Lim, Mie Jin; Lane, Nancy E
2018-05-01
Osteoporosis in rheumatic diseases is a very well-known complication. Systemic inflammation results in both generalized and localized bone loss and erosions. Recently, increased knowledge of inflammatory process in rheumatic diseases has resulted in the development of potent inhibitors of the cytokines, the biologic DMARDs. These treatments reduce systemic inflammation and have some effect on the generalized and localized bone loss. Progression of bone erosion was slowed by TNF, IL-6 and IL-1 inhibitors, a JAK inhibitor, a CTLA4 agonist, and rituximab. Effects on bone mineral density varied between the biological DMARDs. Medications that are approved for the treatment of osteoporosis have been evaluated to prevent bone loss in rheumatic disease patients, including denosumab, cathepsin K, bisphosphonates, anti-sclerostin antibodies and parathyroid hormone (hPTH 1-34), and have some efficacy in both the prevention of systemic bone loss and reducing localized bone erosions. This article reviews the effects of biologic DMARDs on bone mass and erosions in patients with rheumatic diseases and trials of anti-osteoporotic medications in animal models and patients with rheumatic diseases.
4. A short history of anti-rheumatic therapy. III. Non steroidal anti-inflammatory drugs
Directory of Open Access Journals (Sweden)
P. Marson
2011-06-01
The chemical advances of the 20th century led to the synthesis of non-steroidal anti-inflammatory drugs (NSAIDs), beginning with phenylbutazone and indomethacin and continuing with other new drugs, including ibuprofen, diclofenac, naproxen, piroxicam and, more recently, the highly selective COX-2 inhibitors (coxibs). This progress derived from the discovery of the mechanism of action of these drugs: the inhibition of prostaglandin synthesis by the cyclooxygenase enzyme system, according to the experimental contributions of John R. Vane.
5. Chemical Weapons Convention
National Research Council Canada - National Science Library
1997-01-01
On April 29, 1997, the Convention on the Prohibition of the Development, Production, Stockpiling, and Use of Chemical Weapons and on Their Destruction, known as the Chemical Weapons Convention (CWC...
6. The Hague Judgments Convention
DEFF Research Database (Denmark)
Nielsen, Peter Arnt
2011-01-01
The Hague Judgments Convention of 2005 is the first global convention on international jurisdiction and recognition and enforcement of judgments in civil and commercial matters. The author explains the political and legal background of the Convention, its content and certain crucial issues during...
7. Tocilizumab in patients with active rheumatoid arthritis and inadequate response to disease-modifying antirheumatic drugs or tumor necrosis factor inhibitors: subanalysis of Spanish results of an open-label study close to clinical practice.
Science.gov (United States)
Álvaro-Gracia, José M; Fernández-Nebro, Antonio; García-López, Alicia; Guzmán, Manuel; Blanco, Francisco J; Navarro, Francisco J; Bustabad, Sagrario; Armendáriz, Yolanda; Román-Ivorra, José A
2014-01-01
8. Autoimmune-autoinflammatory rheumatoid arthritis overlaps: a rare but potentially important subgroup of diseases.
Science.gov (United States)
Savic, Sinisa; Mistry, Anoop; Wilson, Anthony G; Barcenas-Morales, Gabriela; Doffinger, Rainer; Emery, Paul; McGonagle, Dennis
2017-01-01
At the population level, rheumatoid arthritis (RA) is generally viewed as autoimmune in nature, with a small subgroup of cases having a palindromic form or a systemic autoinflammatory disorder (SAID) phenotype. Herein, we describe resistant cases of classical autoantibody-associated RA that had clinical, genetic and therapeutic responses indicative of coexistent autoinflammatory disease. Five patients with clinically overlapping features between RA and SAID, including polysynovitis and autoantibody/shared epitope positivity, and who had abrupt severe self-limiting attacks including fevers and serositis, are described. Mutations or single nucleotide polymorphisms in recognised autoinflammatory pathways were evident. Generally, these cases responded poorly to conventional disease-modifying anti-rheumatic drug (DMARD) treatment, with some excellent responses to colchicine or interleukin-1 pathway blockade. A subgroup of RA cases have a mixed autoimmune-autoinflammatory phenotype and genotype, with therapeutic implications.
9. Type 1 Diabetes TrialNet: A Multifaceted Approach to Bringing Disease-Modifying Therapy to Clinical Use in Type 1 Diabetes.
Science.gov (United States)
Bingley, Polly J; Wherrett, Diane K; Shultz, Ann; Rafkin, Lisa E; Atkinson, Mark A; Greenbaum, Carla J
2018-04-01
What will it take to bring disease-modifying therapy to clinical use in type 1 diabetes? Coordinated efforts of investigators involved in discovery, translational, and clinical research operating in partnership with funders and industry and in sync with regulatory agencies are needed. This Perspective describes one such effort, Type 1 Diabetes TrialNet, a National Institutes of Health-funded and JDRF-supported international clinical trials network that emerged from the Diabetes Prevention Trial-Type 1 (DPT-1). Through longitudinal natural history studies, as well as trials before and after clinical onset of disease combined with mechanistic and ancillary investigations to enhance scientific understanding and translation to clinical use, TrialNet is working to bring disease-modifying therapies to individuals with type 1 diabetes. Moreover, TrialNet uses its expertise and experience in clinical studies to increase efficiencies in the conduct of trials and to reduce the burden of participation on individuals and families. Herein, we highlight key contributions made by TrialNet toward a revised understanding of the natural history of disease and approaches to alter disease course and outline the consortium's plans for the future. © 2018 by the American Diabetes Association.
10. Design, Synthesis, and Biological Evaluation of 2-(Benzylamino-2-Hydroxyalkyl)Isoindoline-1,3-Dione Derivatives as Potential Disease-Modifying Multifunctional Anti-Alzheimer Agents
Directory of Open Access Journals (Sweden)
Dawid Panek
2018-02-01
The complex nature of Alzheimer's disease calls for multidirectional treatment. Consequently, the search for multi-target-directed ligands may lead to potential drug candidates. The aim of the present study is to seek multifunctional compounds with expected activity against disease-modifying and symptomatic targets. A series of 15 drug-like, variously substituted derivatives of 2-(benzylamino-2-hydroxyalkyl)isoindoline-1,3-diones was designed by modification of cholinesterase inhibitors toward β-secretase inhibition. All target compounds have been synthesized and tested against eel acetylcholinesterase (eeAChE), equine serum butyrylcholinesterase (eqBuChE), human β-secretase (hBACE-1), and β-amyloid (Aβ) aggregation. The most promising compound, 12 (2-(5-(benzylamino)-4-hydroxypentyl)isoindoline-1,3-dione), displayed inhibitory potency against eeAChE (IC50 = 3.33 μM), hBACE-1 (43.7% at 50 μM), and Aβ aggregation (24.9% at 10 μM). Molecular modeling studies have revealed possible interactions of compound 12 with the active sites of both enzymes, acetylcholinesterase and β-secretase. In conclusion, modifications of acetylcholinesterase inhibitors led to the discovery of a multipotent anti-Alzheimer agent with moderate and balanced potency, capable of inhibiting acetylcholinesterase, a symptomatic target, and the disease-modifying targets β-secretase and Aβ aggregation.
11. Systematic review and network meta-analysis of combination and monotherapy treatments in disease-modifying antirheumatic drug-experienced patients with rheumatoid arthritis: analysis of American College of Rheumatology criteria scores 20, 50, and 70
Science.gov (United States)
Orme, Michelle E; MacGilchrist, Katherine S; Mitchell, Stephen; Spurden, Dean; Bird, Alex
2012-01-01
Background Biologic disease-modifying antirheumatic drugs (bDMARDs) extend the treatment choices for rheumatoid arthritis patients with suboptimal response or intolerance to conventional DMARDs. The objective of this systematic review and meta-analysis was to compare the relative efficacy of EU-licensed bDMARD combination therapy or monotherapy for patients intolerant of or contraindicated to continued methotrexate. Methods Comprehensive, structured literature searches were conducted in Medline, Embase, and the Cochrane Library, as well as hand-searching of conference proceedings and reference lists. Phase II or III randomized controlled trials reporting American College of Rheumatology (ACR) criteria scores of 20, 50, and 70 between 12 and 30 weeks’ follow-up and enrolling adult patients meeting ACR classification criteria for rheumatoid arthritis previously treated with and with an inadequate response to conventional DMARDs were eligible. To estimate the relative efficacy of treatments whilst preserving the randomized comparisons within each trial, a Bayesian network meta-analysis was conducted in WinBUGS using fixed and random-effects, logit-link models fitted to the binomial ACR 20/50/70 trial data. Results The systematic review identified 10,625 citations, and after a review of 2450 full-text papers, there were 29 and 14 eligible studies for the combination and monotherapy meta-analyses, respectively. In the combination analysis, all licensed bDMARD combinations had significantly higher odds of ACR 20/50/70 compared to DMARDs alone, except for the rituximab comparison, which did not reach significance for the ACR 70 outcome (based on the 95% credible interval). The etanercept combination was significantly better than the tumor necrosis factor-α inhibitors adalimumab and infliximab in improving ACR 20/50/70 outcomes, with no significant differences between the etanercept combination and certolizumab pegol or tocilizumab. Licensed-dose etanercept, adalimumab
12. Rheumatoid arthritis in the United Arab Emirates
NARCIS (Netherlands)
Badsha, Humeira; Kong, Kok Ooi; Tak, Paul P.
2008-01-01
Studies have shown that patients with rheumatoid arthritis (RA) in the Middle East have delayed diagnosis and low disease-modifying anti-rheumatic drug (DMARD) utilization. We describe the characteristics and treatments of consecutive RA patients presenting to a new musculoskeletal clinic in Dubai,
13. Efficacy and safety of biological and targeted-synthetic DMARDs: a systematic literature review informing the 2016 update of the ASAS/EULAR recommendations for the management of axial spondyloarthritis
NARCIS (Netherlands)
Sepriano, Alexandre; Regel, Andrea; van der Heijde, Désirée; Braun, Jürgen; Baraliakos, Xenofon; Landewé, Robert; van den Bosch, Filip; Falzon, Louise; Ramiro, Sofia
2017-01-01
To update the evidence for the efficacy and safety of biological (b) and targeted-synthetic (ts) disease-modifying anti-rheumatic drugs (DMARDs) in patients with axial spondyloarthritis (axSpA) to inform the 2016 update of the Assessment of SpondyloArthritis international Society/European League
14. Effectiveness of a group-based intervention to change medication beliefs and improve medication adherence in patients with rheumatoid arthritis: a randomized controlled trial.
NARCIS (Netherlands)
Zwikker, H.E.; Ende, C.H. van den; Lankveld, W.G. van; Broeder, A.A. den; Hoogen, F.H. van den; Mosselaar, B. van de; Dulmen, S. van; Bemt, B.J. van den
2014-01-01
Objective: To assess the effect of a group-based intervention on the balance between necessity beliefs and concern beliefs about medication and on medication non-adherence in patients with rheumatoid arthritis (RA). Methods: Non-adherent RA patients using disease-modifying anti-rheumatic drugs
15. Convention on nuclear safety
International Nuclear Information System (INIS)
1994-01-01
The Convention on Nuclear Safety was adopted on 17 June 1994 by a Diplomatic Conference convened by the International Atomic Energy Agency at its Headquarters from 14 to 17 June 1994. The Convention will enter into force on the ninetieth day after the date of deposit with the Depositary (the Agency's Director General) of the twenty-second instrument of ratification, acceptance or approval, including the instruments of seventeen States each having at least one nuclear installation which has achieved criticality in a reactor core. The text of the Convention as adopted is reproduced in the Annex hereto for the information of all Member States
16. Minamata Convention on Mercury
Science.gov (United States)
On November 6, 2013, the United States signed the Minamata Convention on Mercury, a new multilateral environmental agreement that addresses specific human activities contributing to widespread mercury pollution.
17. Climate change convention
International Nuclear Information System (INIS)
Russell, D.
1992-01-01
Principles that guide Canada's Green Plan with respect to global warming are outlined. These include respect for nature, meeting environmental goals in an economically beneficial manner, efficient use of resources, shared responsibilities, federal leadership, and informed decision making. The policy side of the international Framework Convention on Climate Change is then discussed and related to the Green Plan. The Convention has been signed by 154 nations and has the long-term objective of stabilizing anthropogenic greenhouse gas concentrations in the atmosphere at levels that prevent dangerous interference with the climate system. Some of the Convention's commitments toward achieving that objective are only applicable to the developed countries. Five general areas of commitment are emissions reductions, assistance to developing countries, reporting requirements, scientific and socioeconomic research, and education. The most controversial area is that of limiting emissions. The Convention has strong measures for public accountability and is open to future revisions. Canada's Green Plan represents one country's response to the Convention commitments, including a national goal to stabilize greenhouse gas emissions at the 1990 level by the year 2000
18. Tritium and OSPAR convention
International Nuclear Information System (INIS)
2009-01-01
The missions and the organisation of the OSPAR Convention on the protection of the North-East Atlantic marine environment are given. The OSPAR strategy for radioactive substances is stated. The results of the work programme of the Radioactive Substances Committee are described, and the consensus reached by the contracting parties on the appropriate arrangements for this radionuclide is presented. (authors)
19. Revised C++ coding conventions
CERN Document Server
Callot, O
2001-01-01
This document replaces the note LHCb 98-049 by Pavel Binko. After a few years of practice, some simplification and clarification of the rules was needed. As many more people now have some experience in writing C++ code, their opinion was also taken into account to arrive at a commonly agreed set of conventions
20. Global climate convention
International Nuclear Information System (INIS)
Simonis, U.E.
1991-01-01
The effort to negotiate a global convention on climate change is one of mankind's great endeavours - and a challenge to economists and development planners. The inherent linkages between climate and the habitability of the earth are increasingly well recognized, and a convention could help to ensure that conserving the environment and developing the economy go hand in hand in the future. Due to growing environmental concern, the United Nations General Assembly has set into motion an international negotiating process for a framework convention on climate change. One of the major tasks in these negotiations is how to share the duties of reducing climate-relevant gases, particularly carbon dioxide (CO2), between the industrial and the developing countries. The results and proposals could be among the most far-reaching ever for socio-economic development, indeed for global security and survival itself. While the negotiations will be about climate and protection of the atmosphere, they will bear on fundamental global changes in energy policies, forestry, transport and technology, and on development pathways with low greenhouse gas emissions. Some of these aspects of a climate convention, particularly the distributional options and their consequences for North-South relations, are addressed in this chapter. (orig.)
1. Tofacitinib 5 mg Twice Daily in Patients with Rheumatoid Arthritis and Inadequate Response to Disease-Modifying Antirheumatic Drugs: A Comprehensive Review of Phase 3 Efficacy and Safety.
Science.gov (United States)
Bird, Paul; Bensen, William; El-Zorkany, Bassel; Kaine, Jeffrey; Manapat-Reyes, Bernadette Heizel; Pascual-Ramos, Virginia; Witcombe, David; Soma, Koshika; Zhang, Richard; Thirunavukkarasu, Krishan
2018-05-24
2. An evaluation of adherence in patients with multiple sclerosis newly initiating treatment with a self-injectable or an oral disease-modifying drug
Directory of Open Access Journals (Sweden)
Munsell M
2016-12-01
3. Conventions and Institutional Logics
DEFF Research Database (Denmark)
Westenholz, Ann
Two theoretical approaches – Conventions and Institutional Logics – are brought together and the similarities and differences between the two are explored. It is not the intention to combine the approaches, but I would like to open both ‘boxes’ and make them available to each other with the purpose of creating a space for dialog. Both approaches were developed in the mid-1980s as a reaction to rational-choice economic theory and collectivistic sociological theory. These two theories were oversimplifying social life as being founded either in actor-micro level analyses or in structure-macro level analyses. The theoretical quest of both Conventions and Institutional Logics has been to understand the increasing indeterminacy, uncertainty and ambiguity in people’s lives, where a sense of reality, of value, of morals, of feelings is not fixed. Both approaches have created new theoretical insights...
OpenAIRE
Anggianto, Rio M; Rate, Johannes Van
2013-01-01
The Manado Convention Center project is essentially a venue or facility for communication between two parties, applying various methods of direct face-to-face communication, whether from an individual to a group, from group to group, or from a group to the public. In the present era this has become a need that is considered important. The city of Manado often hosts conferences with a relatively large number of participants, since their scope extends to foreign countries....
5. The conventional quark picture
International Nuclear Information System (INIS)
Dalitz, R.H.
1976-01-01
For baryons, mesons and deep inelastic phenomena, the ideas and the problems of the conventional quark picture are pointed out. All observed baryons fit into three SU(3) multiplets which cluster into larger SU(6) multiplets. No mesons are known which have quantum numbers inconsistent with belonging to an SU(3) nonet or octet. The deep inelastic phenomena are described in terms of six structure functions of the proton. (BJ)
6. A qualitative study assessing patient perspectives in the process of decision-making on disease modifying therapies (DMT's) in multiple sclerosis (MS).
Science.gov (United States)
Ceuninck van Capelle, Archibald de; Meide, Hanneke van der; Vosman, Frans J H; Visser, Leo H
2017-01-01
Physicians commonly advise patients to begin disease modifying therapies (DMT's) shortly after the establishment of a diagnosis of multiple sclerosis (MS) to prevent further relapses and disease progression. However, little is known about what going through the diagnosis of MS and making decisions on DMT's in early MS means for patients. Objective: To explore the patient perspective on using DMT's for MS. Methods: Ten participants with a recent (approach. The analysis revealed the following themes: (1) Constant confrontation with the disease, (2) Managing inevitable decline, (3) Hope of delaying the progression of the disease, and (4) The importance of social support. The themes show that patients associate the recommendation to begin DMT's (especially injectable DMT's) with views about their bodies as well as their hopes for the future. Both considering and adhering to treatment are experienced by patients not only as matters of individual and rational deliberation, but also as activities lived within a web of relationships with relatives and friends. From the patient perspective, the use of DMT's is not a purely rational and individual experience. More attention to the use of DMT's as relational and lived phenomena will improve the understanding of the process of decision-making on DMT's in MS.
7. Influence of Anti-TNF and Disease Modifying Antirheumatic Drugs Therapy on Pulmonary Forced Vital Capacity Associated to Ankylosing Spondylitis: A 2-Year Follow-Up Observational Study
Directory of Open Access Journals (Sweden)
Alberto Daniel Rocha-Muñoz
2015-01-01
Objective. To evaluate the effect of anti-TNF agents plus synthetic disease modifying antirheumatic drugs (DMARDs) versus DMARDs alone for ankylosing spondylitis (AS) with reduced pulmonary function vital capacity (FVC%). Methods. In an observational study, we included AS patients who had FVC% <80% at baseline. Twenty patients were taking DMARDs and 16 received anti-TNF + DMARDs. Outcome measures: changes in FVC%, BASDAI, BASFI, 6-minute walk test (6MWT), Borg scale after 6MWT, and St. George's Respiratory Questionnaire at 24 months. Results. Both the DMARDs and anti-TNF + DMARDs groups had similar baseline values of FVC%. Significant improvement in FVC% was achieved with anti-TNF + DMARDs at 24 months when compared to DMARDs alone (P=0.04). Similarly, patients in the anti-TNF + DMARDs group had greater improvement in BASDAI, BASFI, Borg scale, and 6MWT when compared to DMARDs alone. After 2 years of follow-up, 14/16 (87.5%) in the anti-TNF + DMARDs group achieved the primary outcome, FVC% ≥80%, compared with 11/20 (55%) in the DMARDs group (P=0.04). Conclusions. Patients on anti-TNF + DMARDs had a greater improvement in FVC% and cardiopulmonary scales at 24 months compared with DMARDs alone. This preliminary study supports the fact that anti-TNF agents may offer additional benefits compared to DMARDs in patients with AS who have reduced FVC%.
8. Strategic interaction and conventions
Directory of Open Access Journals (Sweden)
Espinosa, María Paz
2012-03-01
The scope of the paper is to review the literature that employs coordination games to study social norms and conventions from the viewpoints of game theory and cognitive psychology. We claim that these two alternative approaches are in fact complementary, as they provide different insights to explain how people converge to a unique system of self-fulfilling expectations in the presence of multiple, equally viable, conventions. While game theory explains the emergence of conventions relying on efficiency and risk considerations, the psychological view is more concerned with frame and labeling effects. The interaction between these alternative (and, sometimes, competing) effects leads to the result that coordination failures may well occur and, even when coordination takes place, there is no guarantee that the convention eventually established will be the most efficient.
International Nuclear Information System (INIS)
Wenz, W.; Buitrago-Tellez, C.; Blum, U.; Hauenstein, K.H.; Gufler, H.; Meyer, E.; Ruediger, K.
1992-01-01
The diagnostic value of a digitization system for analogue films based on a charge-coupled-device (CCD) scanner with adjustable resolution of 2.5 or 5 lp/mm was assessed. Some 110 skeletal radiographs, 50 contrast studies, including 25 of patients with Crohn's disease, and 70 abdominal plain films before and after successful lithotripsy for renal stones were digitized. Receiver operating characteristic (ROC) studies showed improved detection of cortical and trabecular defects with contrast-optimized digitized films. Edge enhancement algorithms yielded no additional information. Inflammatory lesions of Crohn's disease were detected equally well by conventional films and digitized images. A statistically significant improvement (p
10. Conventional RF system design
International Nuclear Information System (INIS)
Puglisi, M.
1994-01-01
The design of a conventional RF system is always complex and must fit the needs of the particular machine for which it is planned. It follows that many different design criteria should be considered and analyzed, thus exceeding the narrow limits of a lecture. For this reason only the fundamental components of an RF system, including the generators, are considered in this short seminar. The most common formulas are simply presented in the text, while their derivations are shown in the appendices to facilitate, if desired, a more advanced level of understanding. (orig.)
11. Conventional magnets. Pt. 1
International Nuclear Information System (INIS)
Marks, N.
1994-01-01
The design and construction of conventional, steel-cored, direct-current magnets are discussed. Laplace's equation and the associated cylindrical harmonic solutions in two dimensions are established. The equations are used to define the ideal pole shapes and required excitation for dipole, quadrupole and sextupole magnets. Standard magnet geometries are then considered and criteria determining the coil design are presented. The use of codes for predicting flux density distributions and the iterative techniques used for pole face design are then discussed. This includes a description of the use of two-dimensional codes to generate suitable magnet end geometries. Finally, standard constructional techniques for cores and coils are described. (orig.)
12. Comparison of preferences of healthcare professionals and MS patients for attributes of disease-modifying drugs: A best-worst scaling.
Science.gov (United States)
Kremer, Ingrid E H; Evers, Silvia M A A; Jongen, Peter J; Hiligsmann, Mickaël
2018-02-01
The choice between disease-modifying drugs (DMDs) for the treatment of multiple sclerosis (MS) becomes more often a shared decision between the patient and the neurologist and MS nurse. This study aimed to assess which DMD attributes are most important for the healthcare professionals in selecting a DMD for a patient. Subsequently, within this perspective, the neurologists' and nurses' perspectives were compared. Lastly, the healthcare professionals' perspective was compared with the patients' perspective to detect any differences that may need attention in the communication about DMDs. A best-worst scaling (BWS) was conducted among 27 neurologists and 33 MS nurses treating patients with MS to determine the importance of 27 DMD attributes. These attributes were identified through three focus groups with MS patients in a previous study (N=19). Relative importance scores (RISs) were estimated for each attribute. Multivariable linear regression analyses were used to compare the different perspectives. According to the neurologists and nurses, safety of the DMD was the most important DMD attribute in the treatment decision, closely followed by effect on disability progression, quality of life and relapse rate. Patients with MS agreed with the importance of the last three attributes, but valued safety significantly lower (b=-2.59, P<.001). This study suggests that, overall, neurologists and nurses regard the same DMD attributes as important as MS patients with the notable exception of safety. This study provides valuable information for the development of interventions to support shared decision making and highlights which attributes of DMDs may need additional attention. © 2017 The Authors Health Expectations Published by John Wiley & Sons Ltd.
13. The effect of disease modifying therapies on brain atrophy in patients with relapsing-remitting multiple sclerosis: a systematic review and meta-analysis.
Directory of Open Access Journals (Sweden)
Georgios Tsivgoulis
The aim of the present meta-analysis was to evaluate the effect of disease-modifying drugs (DMD) on brain atrophy in patients with relapsing-remitting multiple sclerosis (RRMS) using available randomized-controlled trial (RCT) data. We conducted a systematic review and meta-analysis according to PRISMA guidelines of all available RCTs of patients with RRMS that reported data on brain volume measurements during the study period. We identified 4 eligible studies, including a total of 1819 RRMS patients (71% women, mean age 36.5 years, mean baseline EDSS score 2.4). The mean percentage change in brain volume was found to be significantly lower in the DMD versus placebo subgroup (standardized mean difference: -0.19; 95% CI: -0.27 to -0.11; p<0.001). We detected no evidence of heterogeneity between estimates (I2 = 30%, p = 0.19) nor publication bias in the funnel plots. Sensitivity analyses stratifying studies according to brain atrophy neuroimaging protocol disclosed no evidence of heterogeneity (p = 0.16). In meta-regression analyses, the percentage change in brain volume was found to be inversely related to the duration of the observation period in both the DMD (meta-regression slope = -0.03; 95% CI: -0.04 to -0.02; p<0.001) and placebo subgroups (meta-regression slope = -0.05; 95% CI: -0.06 to -0.04; p<0.001). However, the rate of percentage brain volume loss over time was greater in the placebo than in the DMD subgroup (p = 0.017, ANCOVA). DMD appear to be effective in attenuating brain atrophy in comparison to placebo, and their benefit in delaying the rate of brain volume loss increases linearly with longer treatment duration.
14. The prevalence of injection-site reactions with disease-modifying therapies and their effect on adherence in patients with multiple sclerosis: an observational study
Directory of Open Access Journals (Sweden)
Beer Karsten
2011-11-01
Background: Interferon beta (IFNβ) and glatiramer acetate (GA) are administered by subcutaneous (SC) or intramuscular (IM) injection. Patients with multiple sclerosis (MS) often report injection-site reactions (ISRs) as a reason for noncompliance or for switching therapies. The aim of this study was to compare the proportion of patients on different formulations of IFNβ or GA who experienced ISRs and who switched or discontinued therapy because of ISRs. Methods: The Swiss MS Skin Project was an observational multicenter study. Patients with MS or clinically isolated syndrome who were on the same therapy for at least 2 years were enrolled. A skin examination was conducted at the first study visit and 1 year later. Results: The 412 patients enrolled were on 1 of 4 disease-modifying therapies for at least 2 years: IM IFNβ-1a (n = 82), SC IFNβ-1b (n = 123), SC IFNβ-1a (n = 184), or SC GA (n = 23). At first evaluation, ISRs were reported by fewer patients on IM IFNβ-1a (13.4%) than on SC IFNβ-1b (57.7%; P P P = not significant [NS]). No patient on IM IFNβ-1a missed a dose in the previous 4 weeks because of ISRs, compared with 5.7% of patients on SC IFNβ-1b (P = 0.044), 7.1% of patients on SC IFNβ-1a (P = 0.011), and 4.3% of patients on SC GA (P = NS). Primary reasons for discontinuing or switching therapy were ISRs or lack of efficacy. Similar patterns were observed at 1 year. Conclusions: Patients on IM IFNβ-1a had fewer ISRs and were less likely to switch therapies than patients on other therapies. This study may have implications in selecting initial therapy or, for patients considering switching or discontinuing therapy because of ISRs, selecting an alternative option.
15. The Cost-effectiveness of Sequences of Biological Disease-modifying Antirheumatic Drug Treatment in England for Patients with Rheumatoid Arthritis Who Can Tolerate Methotrexate.
Science.gov (United States)
Stevenson, Matt D; Wailoo, Allan J; Tosh, Jonathan C; Hernandez-Alava, Monica; Gibson, Laura A; Stevens, John W; Archer, Rachel J; Simpson, Emma L; Hock, Emma S; Young, Adam; Scott, David L
2017-07-01
To ascertain whether strategies of treatment with a biological disease-modifying antirheumatic drug (bDMARD) are cost-effective in an English setting. Results are presented for patients with moderate to severe rheumatoid arthritis (RA) and those with severe RA. An economic model to assess the cost-effectiveness of 7 bDMARDs was developed. A systematic literature review and network metaanalysis was undertaken to establish relative clinical effectiveness. The results were used to populate the model, together with estimates of Health Assessment Questionnaire (HAQ) score following European League Against Rheumatism response; annual costs and utility per HAQ band; the trajectory of HAQ for patients taking bDMARDs; and the trajectory of HAQ for patients using nonbiologic therapy (NBT). Results were presented as those associated with the strategy of median cost-effectiveness. Supplementary analyses assessed the change in cost-effectiveness when only patients with the most severe prognoses on NBT were provided with bDMARD treatment. The cost per quality-adjusted life-year (QALY) values were compared with reported thresholds from the UK National Institute for Health and Care Excellence of £20,000 to £30,000 (US$24,700 to US$37,000). In the primary analyses, the cost per QALY of a bDMARD strategy was £41,600 for patients with severe RA and £51,100 for those with moderate to severe RA. Under the supplementary analyses, the cost per QALY fell to £25,300 for those with severe RA and to £28,500 for those with moderate to severe RA. The cost-effectiveness of bDMARDs in RA in England is questionable and only meets currently accepted levels in subsets of patients with the worst prognoses.
16. Efficacy and safety of golimumab as add-on therapy to disease-modifying antirheumatic drugs in rheumatoid arthritis: results of the GO-MORE study in Spain.
Science.gov (United States)
Alonso, Alberto; González, Carlos M; Ballina, Javier; García Vivar, María L; Gómez-Reino, Juan J; Marenco, Jose Luis; Fernández-Nebro, Antonio; Ordás, Carmen; Cea-Calvo, Luis; Arteaga, María J; Sanmartí, Raimon
2015-01-01
17. ESD and the Rio Conventions
Science.gov (United States)
Sarabhai, Kartikeya V.; Ravindranath, Shailaja; Schwarz, Rixa; Vyas, Purvi
2012-01-01
Chapter 36 of Agenda 21, a key document of the 1992 Earth Summit, emphasised reorienting education towards sustainable development. While two of the Rio conventions, the Convention on Biological Diversity (CBD) and the United Nations Framework Convention on Climate Change (UNFCCC), developed communication, education and public awareness (CEPA)…
18. Retrospective US database analysis of persistence with glatiramer acetate vs. available disease-modifying therapies for multiple sclerosis: 2001-2010.
Science.gov (United States)
Oleen-Burkey, MerriKay; Cyhaniuk, Anissa; Swallow, Eric
2014-01-14
Long-term persistence to treatment for chronic disease is difficult for patients to achieve, regardless of the disease or medication being used. The objective of this investigation was to examine treatment persistence with glatiramer acetate (GA) relative to available disease-modifying therapies (DMT) for multiple sclerosis (MS) over 12-, 24- and 36-month periods. Data from Clinformatics™ for DataMart affiliated with OptumInsight was used to identify patients using DMT between 2001 and 2010. Patients with 12, 24, and 36 months of follow-up were included. Persistence was defined as continuous use of the same DMT for the duration of follow-up regardless of treatment gaps. Regimen changes, including re-initiation of therapy following gaps of 15 days or more, switching therapy, and DMT discontinuation, were investigated. Descriptive statistics were used to summarize the results. Cohorts of GA users with 12 months (n = 12,144), 24 months (n = 7,386) and 36 months (n = 4,693) of follow-up were identified. Persistence rates with GA were 80% for all time periods; discontinuation rates declined over time while switching increased modestly. In contrast, the full DMT-treated cohorts showed persistence rates of 68.3% at 12 months (n = 35,312), 53.9% at 24 months (n = 21,927), and 70.1% at 36 months (n = 14,343). As with these full DMT-treated cohorts, the proportion of GA users remaining on their initial therapy without a gap of 15 days or more decreased with length of follow-up. However, the proportion of GA users with a gap in treatment who re-initiated GA increased over time (64.4% at 12 months, 75.1% at 24 months, and 80.1% at 36 months), while those in the full DMT-treated cohorts re-initiated therapy at rates of only 50-60%. Persistence rates for GA were 80% for the 12-, 24- and 36-month time periods, in contrast with the full DMT-treated cohorts, whose persistence rates never exceeded 70.0%. Although there were more gaps in therapy of 15 days or more with all DMT over time
19. Systematic review and meta-analysis of serious infections with tofacitinib and biologic disease-modifying antirheumatic drug treatment in rheumatoid arthritis clinical trials.
Science.gov (United States)
Strand, Vibeke; Ahadieh, Sima; French, Jonathan; Geier, Jamie; Krishnaswami, Sriram; Menon, Sujatha; Checchio, Tina; Tensfeldt, Thomas G; Hoffman, Elaine; Riese, Richard; Boy, Mary; Gómez-Reino, Juan J
2015-12-15
were 2.21 (0.60, 8.14) and 2.02 (0.56, 7.28), respectively. Risk differences (95% CIs) versus placebo for tofacitinib 5 and 10 mg BID were 0.38% (-0.24%, 0.99%) and 0.40% (-0.22%, 1.02%), respectively. In interventional studies, the risk of serious infections with tofacitinib is comparable to published rates for biologic disease-modifying antirheumatic drugs in patients with moderate to severely active RA.
20. Application of the Aarhus Convention
Directory of Open Access Journals (Sweden)
Tubić Bojan
2011-01-01
Full Text Available The Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters (Aarhus Convention) was adopted in 1998 and entered into force three years later. It envisages three elements for strengthening democratic procedures in decision-making: access to information, public participation and access to justice. At the first meeting of the Member States the Aarhus Convention Compliance Committee was founded. The European Union is a party to the Convention and has implemented its provisions in its legal order. After the Convention entered into force, several Directives regulating these issues in the EU were enacted. The Republic of Serbia ratified the Convention in 2009 and is currently in the process of implementing it by involving private subjects in decision-making on environmental issues.
1. Understanding the conventional arms trade
Science.gov (United States)
Stohl, Rachel
2017-11-01
The global conventional arms trade is worth tens of billions of dollars every year and is engaged in by every country in the world. Yet, it is often difficult to control the legal trade in conventional arms and there is a thriving illicit market, willing to arm unscrupulous regimes and nefarious non-state actors. This chapter examines the international conventional arms trade, the range of tools that have been used to control it, and challenges to these international regimes.
2. Comparison of Conventional and Semi-Conventional Management ...
African Journals Online (AJOL)
Comparison of Conventional and Semi-Conventional Management Systems on the Performance and Carcass Yield of Broiler Chickens. Vol 20, No 1 (2018).
3. Towards a Theory of Convention
DEFF Research Database (Denmark)
Hansen, Pelle Guldborg
2006-01-01
Some thirty years ago Lewis published his Convention: A philosophical Study (Lewis 1969). Besides exciting the logical community by providing the seminal analysis work on common knowledge, it also laid the foundations for the formal approach to the study of social conventions by means of game the...
4. Revision of the Paris Convention and the Brussels Supplementary Convention
International Nuclear Information System (INIS)
Busekist, Otto von.
1977-01-01
The Paris Convention and the Brussels Supplementary Convention have in substance remained unchanged since their adoption in 1960 and 1963, respectively. During that period, nuclear industry and technology have developed considerably while the financial and monetary bases of the Conventions have been shattered. The amounts of liability and compensation have been eroded by inflation, and the gold-based unit of account in which these amounts are expressed has lost its original meaning after the abolition of the official gold price. The question of revising the Conventions, in particular of raising those amounts and of replacing the unit of account, is therefore being studied by the Group of Governmental Experts on Third party Liability in the Field of Nuclear Energy of the OECD Nuclear Energy Agency. (auth.) [fr
5. The nuclear liability conventions revised
International Nuclear Information System (INIS)
Reyners, P.
2004-01-01
The signature on 12 February 2004 of the Protocols amending respectively the 1960 Paris Convention and the 1963 Brussels Supplementary Convention was the second step of the process of modernisation of the international nuclear liability regime after the adoption in September 1997 of a Protocol revising the 1963 Vienna Convention and of a new Convention on Supplementary Compensation for Nuclear Damage. The common objective of the new instruments is to provide more funds to compensate a larger number of potential victims in respect of a broader range of damage. Another goal of the revision exercise was to maintain the compatibility between the Paris and Vienna based systems, a commitment enshrined in the 1988 Joint Protocol, as well as to ascertain that Paris/Brussels countries could also become a Party to the Convention on Supplementary Compensation. However, while generally consistent vis a vis the Joint Protocol, the provisions of the Paris and Vienna Conventions, as revised, differ on some significant aspects. Another remaining issue is whether the improved international nuclear liability regime will succeed in attracting in the future a larger number of countries, particularly outside Europe, and will so become truly universal. Therefore, the need for international co-operation to address these issues, to facilitate the adoption of new implementing legislation and to ensure that this special regime keeps abreast of economic and technological developments, is in no way diminished after the revision of the Conventions.(author)
6. EFFECTS OF SYNTHETIC DISEASE-MODIFYING ANTIRHEUMATIC DRUGS, BIOLOGICAL AGENTS, AND PSYCHOPHARMACOTHERAPY ON THE MENTAL DISORDERS IN PATIENTS WITH RHEUMATOID ARTHRITIS
Directory of Open Access Journals (Sweden)
A. A. Abramkin
2017-01-01
Full Text Available Mental disorders (MDs) of the anxiety-depressive spectrum (ADS) and cognitive impairment (CI) are characteristic of the majority of patients with rheumatoid arthritis (RA); however, the effects of disease-modifying antirheumatic drugs (DMARDs), biological agents (BAs), and their combinations with psychopharmacological drugs (PPDs) on these abnormalities have been insufficiently studied. Objective: to investigate trends in the incidence of MDs in RA patients receiving different treatment regimens. Subjects and methods. The investigation included 128 RA patients (13% men and 87% women) who fulfilled the 1987 American College of Rheumatology criteria; their mean age was 47.4±0.9 years; the median duration of RA was 96 [48; 228] months. RA activity was found to be high, moderate, and low in 48, 56, and 24 patients, respectively. DAS28 averaged 5.34±0.17. 80% of the patients received DMARDs. MDs were diagnosed based on ICD-10 coding, by using a semi-structured interview and scales, such as the Hospital Anxiety and Depression Scale, the Hamilton Anxiety Scale, and the Montgomery-Asberg Depression Rating Scale. Clinical and psychological procedures were used to diagnose CI. At the study inclusion stage, ADS disorders were detected in 123 (96.1%) patients; CI was found in 88 (68.7%). Forty-one (32.1%) patients were diagnosed with major depression (an obvious or moderate depressive episode), 53 (41.4%) patients had minor depression (a mild depressive episode and dysthymia), and 29 (22.6%) had anxiety disorders (ADs; adjustment disorders with anxiety symptoms, as well as generalized anxiety disorder). The dynamics of MDs was estimated in 112 (87.5%) of the 128 patients and in 83 (64.8%) at one- and five-year follow-ups, respectively. The following groups were identified according to the performed therapy: 1) synthetic DMARDs (n = 39); 2) synthetic DMARDs + PPDs (n = 43); 3) BAs + DMARDs (n = 32); 4) BAs + DMARDs + PPDs (n = 9). Results and discussion. In Group 1, the
7. Unexpected exacerbations following initiation of disease-modifying drugs in neuromyelitis optica spectrum disorder: Which factor is responsible, anti-aquaporin 4 antibodies, B cells, Th1 cells, Th2 cells, Th17 cells, or others?
Science.gov (United States)
Kira, Jun-Ichi
2017-08-01
Some disease-modifying drugs for multiple sclerosis, which mainly act on T cells, are ineffective for neuromyelitis optica spectrum disorder and induce unexpected relapses. These include interferon beta, glatiramer acetate, fingolimod, natalizumab, and alemtuzumab. The cases reported here suggest that dimethyl fumarate, which reduces the number of Th1 and Th17 cells and induces IL-4-producing Th2 cells, is also unsuitable for neuromyelitis optica spectrum disorder, irrespective of anti-aquaporin 4 IgG serostatus. Although oral dimethyl fumarate with manageable adverse effects is easy to initiate in the early course of multiple sclerosis, special attention should be paid for atypical demyelinating cases.
8. Ofatumumab, a human anti-CD20 monoclonal antibody, for treatment of rheumatoid arthritis with an inadequate response to one or more disease-modifying antirheumatic drugs: results of a randomized, double-blind, placebo-controlled, phase I/II study
DEFF Research Database (Denmark)
Østergaard, Mikkel; Baslund, Bo; Rigby, William
2010-01-01
To investigate the safety and efficacy of ofatumumab, a novel human anti-CD20 monoclonal antibody (mAb), in patients with active rheumatoid arthritis (RA) whose disease did not respond to ≥1 disease-modifying antirheumatic drug....
9. The evolution of development conventions
Directory of Open Access Journals (Sweden)
Fabio Stefano Erber
2012-04-01
Full Text Available This paper presents a conceptual view on development and its translation into development policies. It argues that society's perception of development is structured by conventions, which provide a view of the past, present and future and, at the same time, allow a certain hierarchy of problems and solutions to such problems. The prevalence of a specific convention depends on the international conditions faced by the society and on the distribution of economic and political power within that society. Therefore, in complex societies there is always a struggle for hegemony between competing development conventions.
10. Evolutionary Games and Social Conventions
DEFF Research Database (Denmark)
Hansen, Pelle Guldborg
2007-01-01
Some thirty years ago Lewis published his Convention: A Philosophical Study (Lewis, 2002). This laid the foundation for a game-theoretic approach to social conventions, but became more famously known for its seminal analysis of common knowledge; the concept receiving its canonical analysis … in Aumann (1976) and which, together with the assumptions of perfect rationality, came to be defining of classical game theory. However, classical game theory is currently undergoing severe crisis as a tool for exploring social phenomena; a crisis emerging from the problem of equilibrium selection around … well-defined metaphors of individual learning and social imitation processes, from which a revised theory of convention may be erected (see Sugden 2004, Binmore 1993 and Young 1998). This paper makes a general argument in support of the evolutionary turn in the theory of convention by a progressive exposition of its …
11. The efficacy and tolerability of leflunomide (Arava®) in therapy for psoriatic arthritis
Directory of Open Access Journals (Sweden)
2013-01-01
Full Text Available The paper gives data on differentiated disease-modifying anti-rheumatic therapy for psoriatic arthritis (PsA). When performing the therapy, account must be taken of the presence and magnitude of the major manifestations of this disease: the pattern of arthritis and spondylosis, the number of inflamed entheses, the number of swollen fingers or toes, the pattern of psoriasis in terms of its extent and stage, the presence and magnitude of systemic manifestations and the functional state of involved organs. There are data on the biological activity of leflunomide, its effect on the main manifestations of PsA with an analysis of its efficacy and tolerability, as well as the results of a comparative investigation of disease-modifying anti-rheumatic drugs used for the therapy of this disease.
12. Coexisting ankylosing spondylitis and rheumatoid arthritis: a case report with literature review.
Science.gov (United States)
Guo, Ying-Ying; Yang, Li-Li; Cui, Hua-Dong; Zhao, Shuai; Zhang, Ning
2011-10-01
A 30-year-old female patient with coexisting ankylosing spondylitis and rheumatoid arthritis was diagnosed and treated. The human leukocyte antigen (HLA)-B27 is a predisposing factor of ankylosing spondylitis and HLA-DR4 is a predisposing factor of rheumatoid arthritis. This patient was HLA-B27 and HLA-DR4 positive, and ankylosing spondylitis manifested before rheumatoid arthritis. After disease modifying anti-rheumatic drugs successfully arrested ankylosing spondylitis activity the patient conceived and delivered a healthy baby. One year later, she developed peripheral polyarthritis and was diagnosed with rheumatoid arthritis. We hypothesized that pregnancy may be one of the environmental factors that can activate rheumatoid arthritis, and that disease modifying anti-rheumatic drugs play an important role in keeping the disease under control.
13. Paris convention - Decisions, recommendations, interpretations
International Nuclear Information System (INIS)
1990-01-01
This booklet is published in a single edition in English and French. It contains decisions, recommendations and interpretations concerning the 1960 Paris Convention on Third Party Liability in the Field of Nuclear Energy adopted by the OECD Steering Committee and the OECD Council. All the instruments are set out according to the Article of the Convention to which they relate and explanatory notes are added where necessary [fr
14. Use of etanercept in a patient with rheumatoid arthritis on hemodialysis.
Science.gov (United States)
Sugioka, Yuko; Inui, Kentaro; Koike, Tatsuya
2008-01-01
Disease-modifying anti-rheumatic drugs (DMARDs) are typically used for the therapy of rheumatoid arthritis (RA), but most have some nephrotoxicity. In several clinical studies, etanercept had fewer adverse effects on renal function than other DMARDs. We report the case of a 64-year-old woman with RA and renal insufficiency on hemodialysis treated using etanercept therapy. This case suggests that etanercept therapy might be effective in the short term for such patients.
15. Drug: D07478 [KEGG MEDICUS
Lifescience Database Archive (English)
Full Text Available D07478 Drug Aurotioprol; Allochrysine (TN) ... C3H6AuO4S2.Na ... Anti-inflammatory ... DG01985 Disease modifying anti-rheumatic drugs (DMARDs) ... DG01912 Gold preparations ... ATC code: M01CB05 ... Gold preparation ... CAS: 27279-43-2 PubChem: 96024436 NIKKAJI: J35.087G
16. Changing clinical patterns in rheumatoid arthritis management over two decades: Sequential observational studies
OpenAIRE
Mian, Aneela N; Ibrahim, Fowzia; Scott, Ian C; Bahadur, Sardar; Filkova, Maria; Pollard, Louise; Steer, Sophia; Kingsley, Gabrielle H; Scott, David L; Galloway, James
2016-01-01
BACKGROUND: Rheumatoid arthritis (RA) treatment paradigms have shifted over the last two decades. There has been increasing emphasis on combination disease modifying anti-rheumatic drug (DMARD) therapy, newer biologic therapies have become available and there is a greater focus on achieving remission. We have evaluated the impact of treatment changes on disease activity scores for 28 joints (DAS28) and disability measured by the health assessment questionnaire scores (HAQ). METHODS: Four cross...
17. Improving healthcare consumer effectiveness: An Animated, Self-serve, Web-based Research Tool (ANSWER) for people with early rheumatoid arthritis
OpenAIRE
Li, Linda C; Adam, Paul; Townsend, Anne F; Stacey, Dawn; Lacaille, Diane; Cox, Susan; McGowan, Jessie; Tugwell, Peter; Sinclair, Gerri; Ho, Kendall; Backman, Catherine L
2009-01-01
Abstract Background People with rheumatoid arthritis (RA) should use DMARDs (disease-modifying anti-rheumatic drugs) within the first three months of symptoms in order to prevent irreversible joint damage. However, recent studies report the delay in DMARD use ranges from 6.5 months to 11.5 months in Canada. While most health service delivery interventions are designed to improve the family physician's ability to refer to a rheumatologist and prescribe treatments, relatively little has been do...
18. Novel versus conventional antipsychotic drugs.
Science.gov (United States)
Love, R C
1996-01-01
Novel antipsychotic agents differ from conventional ones in several key characteristics, including effectiveness, adverse reactions, and receptor-binding profile. Most of the newer agents have an affinity for the serotonin 5HT2 receptor that is at least 10 times greater than that for the dopamine D2 receptor. This increased affinity for the serotonin receptor may be responsible for another distinguishing characteristic of novel antipsychotic agents--decreased frequency of extrapyramidal side effects. These side effects, which include pseudoparkinsonism, acute dystonias, and akathisia, frequently are the reason for noncompliance with conventional drug therapy. The newer drugs are often effective in patients resistant to treatment with conventional agents. They also appear to reduce the negative symptoms of schizophrenia in many patients.
19. The prospect of conventional disarmament
International Nuclear Information System (INIS)
1989-01-01
The prospect of conventional disarmament in Europe holds out great consequences not only for the continent but also for the entire world. The arms race, in both its nuclear and conventional aspects, has been the single most important destabilizing factor in international relations since 1945. Though initially born of the ideological division of Europe and the consequent quest for strategic military superiority, it soon developed a technological momentum of its own, becoming more the cause than the effect of the distrust in the relationship of the two alliances. The issue of conventional weapons was raised for negotiations side by side with that of nuclear weapons when the United Nations took up the question of disarmament in 1946. Due, however, to the unforeseen and most dangerous advance in nuclear weaponry, the fear engendered shifted all attention at the multilateral level to nuclear weapons. Except in Europe, where the Mutual and Balanced Force Reduction Talks in Central Europe were initiated, conventional weapons disarmament did not attract multilateral attention again until the First Special Session of the United Nations General Assembly Devoted to Disarmament in 1978. The Final Document of the Special Session did accord highest priority to negotiations on nuclear weapons. However, it also affirmed that side by side with negotiations on nuclear weapons, the limitation and gradual reduction of armed forces and conventional weapons should be resolutely pursued within the framework of general and complete disarmament. States with the largest military arsenals, it was stated, had a special responsibility in pursuing conventional armaments reduction. Underscoring the central role of Europe further, the Final Document postulated that the achievement of a more stable situation at a lower level of military potential would contribute toward the strengthening of security in Europe and constitute a significant step toward international peace and security
20. Conventional imaging in paediatric uroradiology
International Nuclear Information System (INIS)
Riccabona, M.; Lindbichler, F.; Sinzig, M.
2002-01-01
Objective: To briefly describe basic conventional imaging in paediatric uroradiology. Method: The state of the art performance of standard imaging techniques (intravenous urography (IVU), voiding cystourethrography (VCU), and ultrasound (US)) is described, with emphasis on technical aspects, indications, and patient preparation such as adequate hydration. Only basic applications as used in routine clinical work are included. Result and conclusion: Conventional imaging methods are irreplaceable. They cover the majority of daily clinical routine queries, with consecutive indication of more sophisticated modalities in those patients who need additional imaging for establishing the final diagnosis or outlining therapeutic options
2. The European Convention on bioethics.
Science.gov (United States)
Byk, C
1993-03-01
Benefiting from a widely recognised experience of the field of bioethics, the Council of Europe which represents all the democratic countries of Europe, has embarked on the ambitious task of drafting a European Convention on bioethics. The purpose of this text is to set out fundamental values, such as respect for human dignity, free informed consent and non-commercialisation of the human body. In addition to this task, protocols will provide specific standards for the different fields concerned with the application of biomedical sciences. The convention and the first two protocols (human experiments and organ transplants) are due to be ready for signature by mid 1994.
3. Conventional and unconventional political participation
International Nuclear Information System (INIS)
Opp, K.D.
1985-01-01
A non-recursive model is proposed and empirically tested with data on opponents of nuclear power. In explaining conventional and unconventional participation, the theory of collective action is applied and modified in two respects: the perceived influence on the elimination of collective evils is taken into account, and the selective incentives considered are non-material ones. These modifications proved to be valid: the collective good variables and non-material incentives were important determinants of the two forms of participation. Another result was that there is a reciprocal causal relationship between conventional and unconventional participation. (orig./PW) [de
4. Grounding Damage to Conventional Vessels
DEFF Research Database (Denmark)
Lützen, Marie; Simonsen, Bo Cerup
2003-01-01
The present paper is concerned with rational design of conventional vessels with regard to bottom damage generated in grounding accidents. The aim of the work described here is to improve the design basis, primarily through analysis of new statistical data for grounding damage. The current regula...
5. Conventional and Non-Conventional Yeasts in Beer Production
Directory of Open Access Journals (Sweden)
Angela Capece
2018-06-01
Full Text Available The quality of beer relies on the activity of fermenting yeasts, not only for their good fermentation yield-efficiency, but also for their influence on beer aroma, since most of the aromatic compounds are intermediate metabolites and by-products of yeast metabolism. Beer production is a traditional process in which Saccharomyces is the sole microbial component, and any deviation is considered a flaw. Nowadays, however, the brewing sector is faced with an increasing demand for innovative products, and the use of uncharacterized autochthonous starter cultures, spontaneous fermentation, or non-Saccharomyces starters is spreading, leading to the production of distinctive and unusual products. Attempts to obtain products with more complex sensory characteristics have led to prospecting for non-conventional yeasts, i.e., non-Saccharomyces yeasts. These are generally characterized by low fermentation yields and are more sensitive to ethanol stress, but they provide a distinctive aroma and flavor. Furthermore, non-conventional yeasts can be used for the production of low-alcohol/non-alcoholic and light beers. This review aims to present the main findings about the role of traditional and non-conventional yeasts in brewing, demonstrating the wide choice of available yeasts, which represents a new biotechnological approach with which to target the characteristics of beer and to produce different or even totally new beer styles.
6. The marked and rapid therapeutic effect of tofacitinib in combination with subcutaneous methotrexate in a rheumatoid arthritis patient with poor prognostic factors who is resistant to standard disease-modifying antirheumatic drugs and biologicals: A clinical case
Directory of Open Access Journals (Sweden)
N. V. Demidova
2016-01-01
Full Text Available Today, it is generally accepted that it is necessary to achieve clinical remission in rheumatoid arthritis (RA) or, as a minimum, low disease activity. The paper describes a clinical case of a female patient diagnosed with RA who was observed to have inefficiency of standard disease-modifying antirheumatic therapy with methotrexate 25 mg/week, secondary inefficiency of tumor necrosis factor-α inhibitors (adalimumab), and inefficiency/poor tolerance of the interleukin-6 receptor antagonist tocilizumab. This determined the need to use tofacitinib (TOFA), a drug with another mechanism of action. TOFA is the first agent from a new group of immunomodulatory and anti-inflammatory drugs, the intracellular kinase inhibitors. Disease remission could be achieved during therapy with TOFA, which enables one to consider this synthetic drug as a therapy option that potentially competes with therapy with biologicals.
7. Quasisymmetry equations for conventional stellarators
International Nuclear Information System (INIS)
Pustovitov, V.D.
1994-11-01
The general quasisymmetry condition, which demands the independence of B² of one of the angular Boozer coordinates, is reduced to two equations containing only the geometrical characteristics and the helical field of a stellarator. The analysis is performed for conventional stellarators with a planar circular axis using the standard stellarator expansion. As a basis, the invariant quasisymmetry condition is used. The quasisymmetry equations for stellarators are obtained from this condition, also in invariant form. Simplified analogs of these equations are given for the case when the averaged magnetic surfaces are shifted circular tori. It is shown that the quasisymmetry condition can, in principle, be satisfied in a conventional stellarator by a proper choice of two satellite harmonics of the helical field in addition to the main harmonic. Besides, there appears a restriction on the shift of the magnetic surfaces. Thus, in general, the problem is closely related to that of a self-consistent description of a configuration. (author)
8. PROJECT: RECOMMENDATIONS ON TREATMENT OF RHEUMATOID ARTHRITIS DEVELOPED BY ALL-RUSSIAN PUBLIC ORGANIZATION «ASSOCIATION OF RHEUMATOLOGISTS OF RUSSIA» – 2014 (PART 1
Directory of Open Access Journals (Sweden)
E. L. Nasonov
2015-01-01
Full Text Available Authors report new recommendations of the All-Russian Public Organization «Association of Rheumatologists of Russia» (ARR) on treatment of rheumatoid arthritis (RA), which adapt the contemporary concept accepted in the respective field of pharmacotherapy known as «Treat to Target». According to it, the main objective of RA pharmacotherapy is remission (or low disease activity). To achieve it, disease modifying anti-rheumatic drugs (DMARDs) should be administered to all RA patients as early as possible, with efficacy monitoring and therapy correction according to the disease activity. Special attention has been paid to the use of methotrexate (MTX) as «the gold standard» of RA pharmacotherapy and the key component of the «Treat to Target» strategy. Early MTX administration (including subcutaneous injections) should become an obligatory component of RA treatment at all stages of the disease. If MTX is not efficient or not well tolerated (including the subcutaneous form of the drug) as monotherapy or combined with conventional DMARDs, biological agents should be used. Those include TNFα inhibitors, an antagonist of the interleukin-6 receptor (Tocilizumab), anti-B-cell drugs (Rituximab) and agents blocking T-cell activation (Abatacept). Tofacitinib therapy (a JAK inhibitor) is indicated in patients who are resistant to conventional DMARDs and biologics. All biologics and Tofacitinib are more effective in combination with MTX (or other DMARDs).
10. Dilution Confusion: Conventions for Defining a Dilution
Science.gov (United States)
Fishel, Laurence A.
2010-01-01
Two conventions for preparing dilutions are used in clinical laboratories. The first convention defines an "a:b" dilution as "a" volumes of solution A plus "b" volumes of solution B. The second convention defines an "a:b" dilution as "a" volumes of solution A diluted into a final volume of "b". Use of the incorrect dilution convention could affect…
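The arithmetic gap between the two conventions described above is easy to see in code. A minimal sketch (function names are illustrative, not from the article):

```python
def dilution_factor_additive(a, b):
    """Convention 1: 'a:b' means a volumes of solution A plus b volumes
    of diluent B, so the final volume is a + b."""
    return a / (a + b)

def dilution_factor_final_volume(a, b):
    """Convention 2: 'a:b' means a volumes of solution A diluted *to* a
    final volume of b."""
    return a / b

# The same "1:10" dilution yields different concentrations by convention:
c1 = dilution_factor_additive(1, 10)      # 1/11, roughly a 9.1% solution
c2 = dilution_factor_final_volume(1, 10)  # 1/10, a 10% solution
```

The roughly 10% relative discrepancy between the two readings of "1:10" is exactly the kind of error the abstract warns about.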
11. Apocryphal Angels in Nun Convents
Directory of Open Access Journals (Sweden)
Mario Ávila Vivar
2018-01-01
Full Text Available The preponderance of studies about viceregal angelic series, and the widespread belief that the representation of apocryphal angels is a specific peculiarity of viceregal angelology, have created such a close relation between it and the apocryphal angels that they are even considered as synonymous. However, both the texts and the presence of these angels in the Spanish convents of the XVII century evidence that the apocryphal angels appeared, and were represented, in Spain long before they were in its American viceroyalties. Therefore, it is here that their origins and their meaning should be sought.
12. Diverticular Disease: Reconsidering Conventional Wisdom
Science.gov (United States)
Peery, Anne F.; Sandler, Robert S.
2013-01-01
Colonic diverticula are common in developed countries and complications of colonic diverticulosis are responsible for a significant burden of disease. Several recent publications have called into question long held beliefs about diverticular disease. Contrary to conventional wisdom, studies have not shown that a high fiber diet protects against asymptomatic diverticulosis. The risk of developing diverticulitis among individuals with diverticulosis is lower than the 10–25% commonly quoted, and may be as low as 1% over 11 years. Nuts and seeds do not increase the risk of diverticulitis or diverticular bleeding. It is unclear whether diverticulosis, absent diverticulitis or overt colitis, is responsible for chronic gastrointestinal symptoms or worse quality of life. The role of antibiotics in acute diverticulitis has been challenged by a large randomized trial that showed no benefit in selected patients. The decision to perform elective surgery should be made on a case-by-case basis and not routinely after a second episode of diverticulitis, when there has been a complication, or in young people. A colonoscopy should be performed to exclude colon cancer after an attack of acute diverticulitis but may not alter outcomes among individuals who have had a colonoscopy prior to the attack. Given these surprising findings, it is time to reconsider conventional wisdom about diverticular disease. PMID:23669306
13. Implementing the chemical weapons convention
Energy Technology Data Exchange (ETDEWEB)
Kellman, B.; Tanzman, E. A.
1999-12-07
In 1993, as the CWC ratification process was beginning, concerns arose that the complexity of integrating the CWC with national law could cause each nation to implement the Convention without regard to what other nations were doing, thereby causing inconsistencies among States as to how the CWC would be carried out. As a result, the author's colleagues and the author prepared the Manual for National Implementation of the Chemical Weapons Convention and presented it to each national delegation at the December 1993 meeting of the Preparatory Commission in The Hague. During its preparation, the Committee of CWC Legal Experts, a group of distinguished international jurists, law professors, legally-trained diplomats, government officials, and Parliamentarians from every region of the world, including Central Europe, reviewed the Manual. In February 1998, they finished the second edition of the Manual in order to update it in light of developments since the CWC entered into force on 29 April 1997. The Manual tries to increase understanding of the Convention by identifying its obligations and suggesting methods of meeting them. Education about CWC obligations and available alternatives to comply with these requirements can facilitate national responses that are consistent among States Parties. Thus, the Manual offers options that can strengthen international realization of the Convention's goals if States Parties act compatibly in implementing them. Equally important, it is intended to build confidence that the legal issues raised by the Convention are finite and addressable. They are now nearing completion of an internet version of this document so that interested persons can access it electronically and can view the full text of all of the national implementing legislation it cites. The internet address, or URL, for the internet version of the Manual is http://www.cwc.ard.gov. This paper draws from the Manual. It comparatively addresses approximately thirty
15. Conventional power sources for colliders
International Nuclear Information System (INIS)
Allen, M.A.
1987-07-01
At SLAC we are developing high peak-power klystrons to explore the limits of use of conventional power sources in future linear colliders. In an experimental tube we have achieved 150 MW at 1 μsec pulse width at 2856 MHz. In production tubes for SLAC Linear Collider (SLC) we routinely achieve 67 MW at 3.5 μsec pulse width and 180 pps. Over 200 of the klystrons are in routine operation in SLC. An experimental klystron at 8.568 GHz is presently under construction with a design objective of 30 MW at 1 μsec. A program is starting on the relativistic klystron whose performance will be analyzed in the exploration of the limits of klystrons at very short pulse widths
16. Laparoscopic splenectomy using conventional instruments
Directory of Open Access Journals (Sweden)
Dalvi A
2005-01-01
Full Text Available INTRODUCTION: Laparoscopic splenectomy (LS is an accepted procedure for elective splenectomy. Advancement in technology has extended the possibility of LS in massive splenomegaly [Choy et al., J Laparoendosc Adv Surg Tech A 14(4), 197-200 (2004)], trauma [Ren et al., Surg Endosc 15(3), 324 (2001); Mostafa et al., Surg Laparosc Endosc Percutan Tech 12(4), 283-286 (2002)], and cirrhosis with portal hypertension [Hashizume et al., Hepatogastroenterology 49(45), 847-852 (2002)]. In a developing country, these advanced gadgets may not always be available. We performed LS using conventional and reusable instruments in a public teaching hospital without the use of advanced technology. The technique of LS and the outcome in these patients are reported. MATERIALS AND METHODS: Patients undergoing LS for various hematological disorders from 1998 to 2004 were included. Electrocoagulation, clips, and intracorporeal knotting were the techniques used for tackling short-gastric vessels and the splenic pedicle. The specimen was delivered through a Pfannensteil incision. RESULTS: A total of 26 patients underwent LS. Twenty-two (85%) of the patients had a spleen size of more than 500 g (average weight 942.55 g). Mean operative time was 214 min (45-390 min). The conversion rate was 11.5% (n = 3). Average duration of stay was 5.65 days (3-30 days). An accessory spleen was detected and successfully removed in two patients. One patient developed a subphrenic abscess. There was no mortality. There was no recurrence of hematological disease. CONCLUSION: Laparoscopic splenectomy using conventional equipment and instruments is safe and effective. Advanced technology has a definite advantage but is not a deterrent to the practice of LS.
17. Paris Convention on third party liability in the field of nuclear energy and Brussels Convention Supplementary to the Paris Convention
International Nuclear Information System (INIS)
1989-01-01
This new bilingual (English and French) edition of the 1960 Paris Convention and 1963 Brussels Supplementary Convention incorporates the provisions of the Protocols which amended each of them on two occasions, in 1964 and 1982. The Expose des motifs to the Paris Convention, as revised in 1982, is also included in this publication. (NEA) [fr
18. The disease modifying osteoarthritis drug (DMOAD)
DEFF Research Database (Denmark)
Qvist, Per; Bay-Jensen, Anne-Christine; Christiansen, Claus
2008-01-01
and with DMOADs in particular, and we advance the need for a new development paradigm for DMOADs. Two central elements in this paradigm are a stronger focus on the biology of the joint and the application of new and more sensitive biomarkers allowing redesign of clinical trials in osteoarthritis....
19. The expert meeting dedicated to the discussion of results of a local open-label multicenter observational study of the efficiency and safety of tofacitinib in patients with active rheumatoid arthritis with the inefficiency of disease-modifying antirheumatic drugs and to the elaboration of recommendations for the use of tofacitinib in the therapy of rheumatoid arthritis
Directory of Open Access Journals (Sweden)
2016-01-01
Full Text Available The expert meeting dedicated to the discussion of results of a local open-label multicenter observational study of the efficiency and safety of tofacitinib in patients with active rheumatoid arthritis with the inefficiency of disease-modifying antirheumatic drugs and to the elaboration of recommendations for the use of tofacitinib in the therapy of rheumatoid arthritis.
20. Conventional and advanced liquid biofuels
Directory of Open Access Journals (Sweden)
2016-01-01
Full Text Available Energy security and independence, increases and fluctuations of the oil price, fossil fuel resource depletion, and global climate change are some of the greatest challenges facing societies today and in incoming decades. Sustainable economic and industrial growth of every country, and the world in general, requires safe and renewable resources of energy. It has been expected that re-arrangement of economies towards biofuels would mitigate, at least partially, problems arising from fossil fuel consumption and create more sustainable development. Of the renewable energy sources, bioenergy draws major and particular development endeavors, primarily due to the extensive availability of biomass, the already-existing biomass production technologies and infrastructure, and biomass being the sole feedstock for liquid fuels. The evolution of biofuels is classified into four generations (from 1st to 4th) in accordance with the feedstock origin; if the technologies of feedstock processing are taken into account, then there are two classes of biofuels - conventional and advanced. The conventional biofuels, also known as the 1st generation biofuels, are those currently produced in large quantities using well known, commercially-practiced technologies. The major feedstocks for these biofuels are cereals or oleaginous plants, used also in food or feed production. Thus, the viability of the 1st generation biofuels is questionable due to the conflict with food supply and high feedstock cost. This limitation favoured the search for non-edible biomass for the production of the advanced biofuels. In a general and comparative way, this paper discusses various definitions of biomass and the classification of biofuels, and gives a brief overview of the biomass conversion routes to liquid biofuels depending on the main constituents of the biomass. Liquid biofuels covered by this paper are those compatible with existing infrastructure for gasoline and diesel and ready to be used in
1. [Investigation of the clinical usefullness of leukocytapheresis on rheumatoid arthritis resistant to or failed with the other treatments].
Science.gov (United States)
Sawada, Jin; Kimoto, Osamu; Suzuki, Daisuke; Shimoyama, Kumiko; Ogawa, Noriyoshi
2009-12-01
To examine the therapeutic effect of leukocytapheresis (LCAP) for rheumatoid arthritis (RA) resistant to various treatments. Thirteen patients with RA (mean age: 60.8+/-11.4; male:female = 5:8) were enrolled: (1) those resistant to disease-modifying anti-rheumatic drugs (DMARDs) and biologics, or (2) those who failed with those medicines because of side effects or complications. We performed LCAP once a week for a total of five sessions, with a throughput of about 0.1 L/kg. Before and after LCAP, we evaluated the effect of LCAP therapy. The DAS28 (CRP) score was 5.70+/-1.12 before LCAP, 4.57+/-1.19 (P<0.05) just after the final LCAP, and 4.83+/-1.35 (P<0.05) about 4 weeks after LCAP. The DAS28 score decreased in all patients after LCAP. No serious adverse events were observed except temporary anemia. LCAP therapy may be useful and safe for patients with RA resistant to conventional medication. Which patients show a good clinical response to LCAP needs to be clarified.
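The DAS28 (CRP) scores reported above combine 28-joint tender and swollen counts, CRP, and the patient's global health assessment. A sketch using the commonly published coefficients (treat them as assumptions to be checked against the primary DAS reference, not as the study's own computation):

```python
import math

def das28_crp(tender28, swollen28, crp_mg_l, global_health_0_100):
    """DAS28-CRP as commonly published: weighted square roots of the
    28-joint counts, log-transformed CRP (mg/L), and a 0-100 global
    health score, plus a constant offset."""
    return (0.56 * math.sqrt(tender28)
            + 0.28 * math.sqrt(swollen28)
            + 0.36 * math.log(crp_mg_l + 1)
            + 0.014 * global_health_0_100
            + 0.96)

# Illustrative (hypothetical) patient: 4 tender joints, 2 swollen joints,
# CRP 10 mg/L, global health 50/100 -> moderate disease activity (~4.0).
score = das28_crp(4, 2, 10, 50)
```

A drop such as the reported 5.70 to 4.57 thus reflects joint improvements damped by the square-root and log transforms, which is why even modest score changes can be clinically meaningful.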
2. Can cardiovascular magnetic resonance prompt early cardiovascular/rheumatic treatment in autoimmune rheumatic diseases? Current practice and future perspectives.
Science.gov (United States)
Mavrogeni, Sophie I; Sfikakis, Petros P; Dimitroulas, Theodoros; Koutsogeorgopoulou, Loukia; Katsifis, Gikas; Markousis-Mavrogenis, George; Kolovou, Genovefa; Kitas, George D
2018-06-01
Life expectancy in autoimmune rheumatic diseases (ARDs) remains lower compared to the general population, due to various comorbidities. Cardiovascular disease (CVD) represents the main contributor to premature mortality. Conventional and biologic disease-modifying antirheumatic drugs (DMARDs) have considerably improved long-term outcomes in ARDs not only by suppressing systemic inflammation but also by lowering CVD burden. Regarding atherosclerotic disease prevention, EULAR has recommended tight disease control accompanied by regular assessment of traditional CVD risk factors and lifestyle changes. However, this approach, although rational and evidence-based, does not account for important issues such as myocardial inflammation and the long asymptomatic period that usually precedes clinical manifestations of CVD in ARDs before or after the diagnosis of systemic disease. Cardiovascular magnetic resonance (CMR) can offer reliable, reproducible and operator-independent information regarding myocardial inflammation, ischemia and fibrosis. Some studies suggest a role for CMR in the risk stratification of ARDs and demonstrate that oedema/fibrosis visualisation with CMR may have the potential to inform cardiac and rheumatic treatment modification in ARDs with or without abnormal routine cardiac evaluation. In this review, we discuss how CMR findings could influence anti-rheumatic treatment decisions targeting optimal control of both systemic and myocardial inflammation irrespective of clinical manifestations of cardiac disease. CMR can provide a different approach that is very promising for risk stratification and treatment modification; however, further studies are needed before the inclusion of CMR in the routine evaluation and treatment of patients with ARDs.
3. A practical approach to vaccination of patients with autoimmune inflammatory rheumatic diseases in Australia.
Science.gov (United States)
Wong, Peter K K; Bagga, Hanish; Barrett, Claire; Hanrahan, Paddy; Johnson, Doug; Katrib, Amel; Leder, Karin; Marabani, Mona; Pentony, Peta; Riordan, John; White, Ray; Young, Laurel
2017-05-01
Autoimmune inflammatory rheumatic diseases (AIIRD), such as rheumatoid arthritis, psoriatic arthritis and ankylosing spondylitis are often complicated by infection, which results in significant morbidity and mortality. The increased risk of infection is probably due to a combination of immunosuppressive effects of the AIIRD, comorbidities and the use of immunosuppressive conventional synthetic disease-modifying anti-rheumatic drugs (DMARDs) and more recently, targeted synthetic DMARDs and biologic DMARDs that block specific pro-inflammatory enzymes, cytokines or cell types. The use of these various DMARDs has revolutionised the treatment of AIIRD. This has led to a marked improvement in quality of life for AIIRD patients, who often now travel for prolonged periods. Many infections are preventable with vaccination. However, as protective immune responses induced by vaccination may be impaired by immunosuppression, where possible, vaccination may need to be performed prior to initiation of immunosuppression. Vaccination status should also be reviewed when planning overseas travel. Limited data regarding vaccine efficacy in patients with AIIRD make prescriptive guidelines difficult. However, a vaccination history should be part of the initial work-up in all AIIRD patients. Those caring for AIIRD patients should regularly consider vaccination to prevent infection within the practicalities of routine clinical practice. © 2017 Royal Australasian College of Physicians.
4. Ultrasound-detected bone erosion is a relapse risk factor after discontinuation of biologic disease-modifying antirheumatic drugs in patients with rheumatoid arthritis whose ultrasound power Doppler synovitis activity and clinical disease activity are well controlled.
Science.gov (United States)
Kawashiri, Shin-Ya; Fujikawa, Keita; Nishino, Ayako; Okada, Akitomo; Aramaki, Toshiyuki; Shimizu, Toshimasa; Umeda, Masataka; Fukui, Shoichi; Suzuki, Takahisa; Koga, Tomohiro; Iwamoto, Naoki; Ichinose, Kunihiro; Tamai, Mami; Mizokami, Akinari; Nakamura, Hideki; Origuchi, Tomoki; Ueki, Yukitaka; Aoyagi, Kiyoshi; Maeda, Takahiro; Kawakami, Atsushi
2017-05-25
In the present study, we explored the risk factors for relapse after discontinuation of biologic disease-modifying antirheumatic drug (bDMARD) therapy in patients with rheumatoid arthritis (RA) whose ultrasound power Doppler (PD) synovitis activity and clinical disease activity were well controlled. In this observational study in clinical practice, the inclusion criteria were based on ultrasound disease activity and clinical disease activity, set as low or remission (Disease Activity Score in 28 joints based on erythrocyte sedimentation rate). Ultrasound was performed in 22 joints of the bilateral hands at discontinuation to evaluate synovitis severity and the presence of bone erosion. Patients with a maximum PD score ≤1 in each joint were enrolled. Forty patients with RA were consecutively recruited (November 2010-March 2015) and discontinued bDMARD therapy. Variables at the initiation and discontinuation of bDMARD therapy that were predictive of relapse during the 12 months after discontinuation were assessed. The median patient age was 54.5 years, and the median disease duration was 3.5 years. Nineteen (47.5%) patients relapsed during the 12 months after the discontinuation of bDMARD therapy. Logistic regression analysis revealed that only the presence of bone erosion detected by ultrasound at discontinuation was predictive of relapse (OR 8.35, 95% CI 1.78-53.2, p = 0.006). No clinical characteristics or serologic biomarkers were significantly different between the relapse and nonrelapse patients. The ultrasound synovitis scores did not differ significantly between the groups. Our findings are the first evidence that ultrasound bone erosion may be a relapse risk factor after the discontinuation of bDMARD therapy in patients with RA whose PD synovitis activity and clinical disease activity are well controlled.
5. Efficacy and safety of tofacitinib for active rheumatoid arthritis with an inadequate response to methotrexate or disease-modifying antirheumatic drugs: a meta-analysis of randomized controlled trials
Science.gov (United States)
Song, Gwan Gyu; Bae, Sang-Cheol
2014-01-01
Background/Aims The aim of this study was to assess the efficacy and safety of tofacitinib (5 and 10 mg twice daily) in patients with active rheumatoid arthritis (RA). Methods A systematic review of randomized controlled trials (RCTs) that examined the efficacy and safety of tofacitinib in patients with active RA was performed using the Medline, Embase, and Cochrane Controlled Trials Register databases as well as manual searches. Results Five RCTs, including three phase-II and two phase-III trials involving 1,590 patients, met the inclusion criteria. The three phase-II RCTs included 452 patients with RA (144 patients randomized to 5 mg of tofacitinib twice daily, 156 patients randomized to 10 mg of tofacitinib twice daily, and 152 patients randomized to placebo) who were included in this meta-analysis. The American College of Rheumatology 20% response rate was significantly higher in the tofacitinib 5- and 10-mg groups than in the control group (relative risk [RR], 2.445; 95% confidence interval [CI], 1.229 to 4.861; p = 0.011; and RR, 2.597; 95% CI, 1.514 to 4.455; p = 0.001, respectively). The safety outcomes did not differ between the tofacitinib 5- and 10-mg groups and placebo groups with the exception of infection in the tofacitinib 10-mg group (RR, 2.133; 95% CI, 1.268 to 3.590; p = 0.004). The results of two phase-III trials (1,123 patients) confirmed the findings in the phase-II studies. Conclusions Tofacitinib at dosages of 5 and 10 mg twice daily was found to be effective in patients with active RA that inadequately responded to methotrexate or disease-modifying antirheumatic drugs, and showed a manageable safety profile. PMID:25228842
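The relative risks with 95% confidence intervals quoted above follow the standard log-transform construction used in meta-analysis. A sketch with hypothetical 2x2 counts (not the trial data):

```python
import math

def relative_risk_ci(events_tx, n_tx, events_ctl, n_ctl, z=1.96):
    """Relative risk of an event in the treatment vs. control arm, with a
    95% CI built on the log scale (the usual large-sample approximation)."""
    rr = (events_tx / n_tx) / (events_ctl / n_ctl)
    # Standard error of ln(RR) for independent binomial arms.
    se = math.sqrt(1 / events_tx - 1 / n_tx + 1 / events_ctl - 1 / n_ctl)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Hypothetical arms: 50/100 responders on drug vs. 25/100 on placebo.
rr, lo, hi = relative_risk_ci(50, 100, 25, 100)  # RR = 2.0
```

An RR whose CI excludes 1.0, as in the tofacitinib response rates above, is what the abstract's p-values reflect.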
6. 15 CFR 742.18 - Chemical Weapons Convention (CWC or Convention).
Science.gov (United States)
2010-01-01
... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false Chemical Weapons Convention (CWC or... REGULATIONS CONTROL POLICY-CCL BASED CONTROLS § 742.18 Chemical Weapons Convention (CWC or Convention). States... Use of Chemical Weapons and on Their Destruction, also known as the Chemical Weapons Convention (CWC...
7. Human rights and conventionality control in Mexico
Directory of Open Access Journals (Sweden)
Azul América Aguiar-Aguilar
2014-12-01
Full Text Available The protection of human rights in Mexico has, de jure, undergone an important change in recent years, given a new judicial interpretation delivered by the National Supreme Court of Justice that allows the use of conventionality control; that is, it allows federal and state judges to verify the conformity of domestic laws with those established in the Inter-American Convention of Human Rights. To what extent are domestic actors protecting human rights using this new legal tool called conventionality control? In this article I explore by whom and how conventionality control is being used in Mexico. Using N-Vivo software, I reviewed concluded decisions delivered by intermediate-level courts (Collegiate Circuit Courts) in three Mexican states. The evidence indicates that conventionality control is a very useful tool, especially for defenders, who appear in sentences claiming compliance with the commitments Mexico acquired when it ratified the Convention.
8. Merchant shipping (Safety Convention) Act 1977
International Nuclear Information System (INIS)
1977-01-01
When this Act comes into force, it will enable the United Kingdom to ratify and to give effect to the 1974 International Convention for the Safety of Life at Sea (the SOLAS Convention) which replaces the SOLAS Convention of 1960. Under the Act, the Secretary of State may make such rules as he considers appropriate regarding ships provided with nuclear power plants in accordance with Chapter VIII of the Annex to the 1974 Convention and to Recommendations attached to it, dealing with nuclear ships, and insofar as those provisions have not been implemented by the Merchant Shipping Acts 1894 to 1974. (NEA) [fr
9. Computer Understanding of Conventional Metaphoric Language
National Research Council Canada - National Science Library
Martin, James H
1990-01-01
.... This approach asserts that the interpretation of conventional metaphoric language should proceed through the direct application of specific knowledge about the metaphors in the language. MIDAS...
10. The climate change convention and human health.
Science.gov (United States)
Rowbotham, E J
1995-01-01
The United Nations Framework Convention on Climate Change, signed at Rio in June 1992, is intended to minimize climate change and its impact. Much of its text is ambiguous and it is not specifically directed to health considerations. It is, however, recognized that adverse effects of climate change on health are a concern of humankind, and health is an integral part of the Convention. The Convention includes commitments by the developed countries to reduce emissions of greenhouse gases and to increase public awareness of these commitments. The significance of the Convention in these respects is discussed critically and future developments considered.
11. The protocol amending the 1963 Vienna Convention
International Nuclear Information System (INIS)
Lamm, V.
2006-01-01
Technically, the Vienna Convention was revised by the adoption of the protocol to amend the instrument, and according to Article 19 of the protocol 'A State which is Party to this Protocol but not to the 1963 Vienna Convention shall be bound by the provisions of that Convention as amended by this Protocol in relation to other States Parties hereto, and failing an expression of a different intention by that State at the time of deposit of an instrument referred to in Article 20 shall be bound by the provisions of the 1963 Vienna Convention in relation to States which are only Parties thereto'. This solution has created a special situation, because after the entry into force of the protocol 'two' Vienna Conventions will be living together, or operating in practice: notably, the Convention's original text of 1963 and its new version as amended by the protocol. After the protocol has come into force, a state may only accede to the amended version, but in the inter se relations of the States Party to the 'old' Vienna Convention the provisions of that convention will remain in force until such time as they have acceded to the new protocol. This rather complicated situation is nevertheless understandable and is fully in accord with Article 40 of the 1969 Vienna Convention on the Law of Treaties, which provides for the amendment of multilateral treaties. In 1989 the negotiations on the revision of the Vienna Convention had begun with the aim of strengthening the existing nuclear liability regime and of improving the situation of potential victims of nuclear accidents. The Protocol to Amend the Vienna Convention serves those purposes; it also reflects a good compromise, since it is the outcome of a negotiation process in which experts from both nuclear and non-nuclear states, from Contracting Parties and non-Contracting Parties, were very active. That affords some assurance that the compromise solution reached is acceptable to all States participating in the adoption of
12. 7 CFR 58.316 - Conventional churns.
Science.gov (United States)
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Conventional churns. 58.316 Section 58.316 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards....316 Conventional churns. Churns shall be constructed of aluminum, stainless steel or equally corrosion...
13. Influence of Kosher (Shechita) and conventional slaughter ...
African Journals Online (AJOL)
Influence of Kosher (Shechita) and conventional slaughter techniques on shear force, drip and cooking loss of beef. ... South African Journal of Animal Science ... force values for meat samples from cattle slaughtered by the Kosher method compared to those from cattle slaughtered by the conventional slaughter method.
14. Comparison of community managed projects and conventional ...
African Journals Online (AJOL)
Comparison of community managed projects and conventional approaches in rural water supply of Ethiopia. ... African Journal of Environmental Science and Technology ... This study aimed to compare Community Managed Projects (CMP) approach with the conventional approaches (Non-CMP) in the case of Ethiopia.
15. Suction v. conventional curettage in incomplete abortion
African Journals Online (AJOL)
Suction v. conventional curettage in incomplete abortion. A randomised controlled trial. D. A. A. VERKUYL, C. A. CROWTHER. Abstract: This randomised controlled trial of 357 patients who had had an incomplete abortion compared suction curettage with conventional curettage for evacuation of the uterus. The 179 patients ...
16. numerical assessment of conventional regulation effectiveness
African Journals Online (AJOL)
Benkoussas B, Djedjig R, and Vauquelin O
2016-05-01
May 1, 2016 ... The effectiveness of an underground smoke control system mainly depends on the fire safety engineering that is ... In the same context, this work aims firstly at investigating the effectiveness of conventional regulation applied to ... (Fig. 4: station smoke behavior for conventional ventilation regulation; Fig. 5a).
17. AECT Convention, Orlando, Florida 2008 Report
Science.gov (United States)
Vega, Eddie
2009-01-01
This article presents several reports that highlight the events at the 2008 Association for Educational Communications and Technology (AECT) International Convention in Orlando, Florida. At the annual convention this year, the Multimedia Production Division goal was to continue to share information about the latest tools in multimedia production,…
18. The effect of tofacitinib on function and quality of life indicators in patients with rheumatoid arthritis resistant to synthetic and biological disease-modifying antirheumatic drugs in real clinical practice: Results of a multicenter observational study
Directory of Open Access Journals (Sweden)
D. E. Karateev
2017-01-01
Full Text Available Tofacitinib (TOFA), a representative of a new class of targeted synthetic disease-modifying antirheumatic drugs (s-DMARDs), is a promising drug for treating rheumatoid arthritis (RA) and other immune inflammatory diseases. Objective: to evaluate the efficiency and safety of therapy with TOFA in combination with methotrexate (MTX) and other s-DMARDs in real clinical practice in patients with active RA and previous ineffective therapy. Patients and methods. A 6-month Russian multicenter study of function and quality of life enrolled 101 patients with resistant RA: 18 men and 83 women; mean age, 51.03±11.28 years; mean disease duration, 105.4±81.43 months; rheumatoid factor-positive individuals (89.1%); and anticyclic citrullinated peptide antibody-positive ones (74.7%). 93 (92.1%) of these patients completed a 24-week study. TOFA was used both as a second-line drug (after failure of therapy with s-DMARDs) (n=74) and as a third-line drug (after failure of therapy with s-DMARDs and biological agents (BAs)) (n=74). The tools RAPID3, HAQ, and EQ-5D were used to determine disease outcomes from a patient's assessment. Results. All three tools demonstrated significant positive changes at 3–6 months following therapy initiation. RAPID3 scores for the status of a patient achieving low disease activity or remission coincided with the mean DAS28-ESR and SDAI scores in 60% and 68% of cases, respectively. The achievement rates of the minimally clinically significant improvement (ΔHAQ≥0.22) and functional remission (HAQ≤0.5) at 6 months of TOFA therapy were 79.6% and 30.1%, respectively. The mean change in EQ-5D scores over 6 months was -0.162±0.21. There were no significant differences between the groups of patients who used TOFA as a second- or third-line agent in the majority of indicators, except EQ-5D scores at 6 months. Conclusions. The results of our multicenter study using considerable Russian material confirmed the pronounced positive effect of TOFA used
19. Auranofin, an Anti-Rheumatic Gold Compound, Modulates Apoptosis by Elevating the Intracellular Calcium Concentration ([Ca{sup 2+}]{sub i}) in MCF-7 Breast Cancer Cells
Energy Technology Data Exchange (ETDEWEB)
Varghese, Elizabeth; Büsselberg, Dietrich, E-mail: [email protected] [Weil Cornell Medical College in Qatar, Qatar Foundation-Education City, P.O. Box 24144 Doha (Qatar)
2014-11-06
Auranofin, a transition metal complex, is used for the treatment of rheumatoid arthritis but is also an effective anti-cancer drug. We investigated the effects of Auranofin in inducing cell death by apoptosis and whether these changes are correlated with changes of the intracellular calcium concentration ([Ca{sup 2+}]{sub i}) in breast cancer cells (MCF-7). Cytotoxicity of Auranofin was evaluated using the MTS assay and the Trypan blue dye exclusion method. With the fluorescent dyes SR-FLICA and 7-AAD, apoptotic death and necrotic death were differentiated by flow cytometry. A concentration-dependent decrease in viability occurred and cells were shifted to the apoptotic phase. Intracellular calcium ([Ca{sup 2+}]{sub i}) was recorded using fluorescence microscopy and a calcium-sensitive dye (Fluo-4 AM), with a strong negative correlation (r = −0.713) to viability. The pharmacological modulators 2-APB (50 μM), Nimodipine (10 μM), Caffeine (10 mM) and SKF 96365 (20 μM) were used to modify calcium entry and release. Auranofin induced a sustained increase of [Ca{sup 2+}]{sub i} in a concentration- and time-dependent manner. The use of different blockers of calcium channels did not reveal the source of the rise of [Ca{sup 2+}]{sub i}. Overall, elevation of [Ca{sup 2+}]{sub i} by Auranofin might be crucial for triggering Ca{sup 2+}-dependent apoptotic pathways. Therefore, in anti-cancer therapy, modulating [Ca{sup 2+}]{sub i} should be considered a crucial factor for the induction of cell death in cancer cells.
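The correlation reported above (r = −0.713) is a standard Pearson coefficient between [Ca{sup 2+}]{sub i} and viability. A minimal sketch of that calculation; the calcium and viability readings below are entirely made up for illustration, since the study's raw data are not given here:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

calcium = [1.0, 2.0, 3.0, 4.0]        # hypothetical [Ca2+]i readings
viability = [90.0, 70.0, 55.0, 30.0]  # hypothetical % viable cells
r = pearson_r(calcium, viability)     # strongly negative, close to -1
```

A value near −1, as here, indicates that viability falls almost linearly as intracellular calcium rises.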
20. Efficacy and Safety of Vaccination in Pediatric Patients with Systemic Inflammatory Rheumatic Diseases: a systematic review of the literature
Directory of Open Access Journals (Sweden)
Sandra Sousa
2017-01-01
Full Text Available Introduction: Children and adolescents with systemic rheumatic diseases have an increased risk of infections. Although some infections are vaccine-preventable, immunization among patients with juvenile rheumatic diseases is suboptimal, partly due to doubts that still persist regarding its efficacy and safety in this patient population. Objectives: To review the available evidence regarding the immunological response and the safety of vaccination in children and adolescents with systemic inflammatory rheumatic diseases (SIRD). Methods: A systematic review of the current literature until December 2014 using MEDLINE, EMBASE and abstracts from the American College of Rheumatology and European League Against Rheumatism congresses (2011-2014), complemented by hand search, was performed. Eligible studies were identified, and efficacy (seroprotection and/or seroconversion) and safety (reactions to vaccine and relapse of rheumatic disease) outcomes were extracted and summarized according to the type of vaccine. Results: Twenty-eight articles concerning vaccination in pediatric patients with SIRDs were found, which included almost 2100 children and adolescents, comprising nearly all standard vaccinations of the recommended immunization schedule. Children with SIRDs generally achieved seroprotection and seroconversion; nevertheless, the antibody levels were often lower when compared with healthy children. Glucocorticoids and conventional disease-modifying anti-rheumatic drugs do not seem to significantly hamper the immune responses, whereas TNF inhibitors may reduce antibody production, particularly in response to pneumococcal conjugate, influenza, meningococcal C and hepatitis A vaccines. There were no serious adverse events, nor evidence of a relevant worsening of the underlying rheumatic disease. Concerning live attenuated vaccines, the evidence is scarce, but no episodes of overt disease were reported, even in patients under biological therapy
1. Efficacy and Safety of Vaccination in Pediatric Patients with Systemic Inflammatory Rheumatic Diseases: a systematic review of the literature.
Science.gov (United States)
Sousa, Sandra; Duarte, Ana Catarina; Cordeiro, Inês; Ferreira, Joana; Gonçalves, Maria João; Meirinhos, Tiago; Rocha, Teresa Martins; Romão, Vasco C; Santos, Maria José
2017-01-01
Children and adolescents with systemic rheumatic diseases have an increased risk of infections. Although some infections are vaccine-preventable, immunization among patients with juvenile rheumatic diseases is suboptimal, partly due to some doubts that still persist regarding its efficacy and safety in this patient population. To review the available evidence regarding the immunological response and the safety of vaccination in children and adolescents with systemic inflammatory rheumatic diseases (SIRD). A systematic review of the current literature until December 2014 using MEDLINE, EMBASE and abstracts from the American College of Rheumatology and European League Against Rheumatism congresses (2011-2014), complemented by hand search, was performed. Eligible studies were identified, and efficacy (seroprotection and/or seroconversion) and safety (reactions to vaccine and relapse of rheumatic disease) outcomes were extracted and summarized according to the type of vaccine. Twenty-eight articles concerning vaccination in pediatric patients with SIRDs were found, which included almost 2100 children and adolescents, comprising nearly all standard vaccinations of the recommended immunization schedule. Children with SIRDs generally achieved seroprotection and seroconversion; nevertheless, the antibody levels were often lower when compared with healthy children. Glucocorticoids and conventional disease-modifying anti-rheumatic drugs do not seem to significantly hamper the immune responses, whereas TNF inhibitors may reduce antibody production, particularly in response to pneumococcal conjugate, influenza, meningococcal C and hepatitis A vaccines. There were no serious adverse events, nor evidence of a relevant worsening of the underlying rheumatic disease. Concerning live attenuated vaccines, the evidence is scarce, but no episodes of overt disease were reported, even in patients under biological therapy. Existing literature demonstrates that vaccines are generally well
2. Digital vs. conventional implant impressions: efficiency outcomes.
Science.gov (United States)
Lee, Sang J; Gallucci, German O
2013-01-01
The aim of this pilot study was to evaluate the efficiency, difficulty and operator's preference of a digital impression compared with a conventional impression for single implant restorations. Thirty HSDM second-year dental students performed conventional and digital implant impressions on a customized model presenting a single implant. The outcome of the impressions was evaluated against acceptance criteria and the need for a retake/rescan was decided. The efficiency of both impression techniques was evaluated by measuring the preparation, working, and retake/scan time (m/s) and the number of retakes/rescans. Participants' perception of the level of difficulty for both impressions was assessed with a visual analogue scale (VAS) questionnaire. Multiple questionnaires were obtained to assess the participants' perception of preference, effectiveness and proficiency. Mean total treatment time was 24:42 m/s for conventional and 12:29 m/s for digital impressions (P impressions (P impression (P impression technique and 30.63 (±17.57) for digital impression technique (P = 0.006). Sixty percent of the participants preferred the digital impression, 7% the conventional impression technique and 33% preferred either technique. Digital impressions proved a more efficient technique than conventional impressions. Longer preparation, working, and retake time were needed to complete an acceptable conventional impression. Difficulty was rated lower for the digital impression than for the conventional one when performed by inexperienced second-year dental students. © 2012 John Wiley & Sons A/S.
3. Convention on supplementary compensation for nuclear damage
International Nuclear Information System (INIS)
Chinese Nuclear Society, Beijing; U.S. Nuclear Energy Institute
2000-01-01
The Contracting Parties recognize the importance of the measures provided in the Vienna Convention on Civil Liability for Nuclear Damage and the Paris Convention on Third Party Liability in the Field of Nuclear Energy, as well as in national legislation on compensation for nuclear damage consistent with the principles of these conventions. The Contracting Parties desire to establish a worldwide liability regime to supplement and enhance these measures with a view to increasing the amount of compensation for nuclear damage, and to encourage regional and global co-operation to promote a higher level of nuclear safety in accordance with the principles of international partnership and solidarity
4. National report of Brazil. Nuclear Safety Convention
International Nuclear Information System (INIS)
1998-09-01
This document represents the national report prepared in fulfillment of the Brazilian obligations under the Convention on Nuclear Safety. In chapter 2 some details are given about the existing nuclear installations. Chapter 3 provides details about the legislation and regulations, including the regulatory framework and the regulatory body. Chapter 4 covers general safety considerations as described in articles 10 to 16 of the Convention. Chapter 5 addresses the safety of the installations during siting, design, construction and operation. Chapter 6 describes planned activities to further enhance nuclear safety. Chapter 7 presents the final remarks related to the degree of compliance with the Convention obligations
5. French Economics of Convention and Economic Sociology
DEFF Research Database (Denmark)
Jagd, Søren
The French Economics of Convention tradition has developed to be an influential research tradition situated in the area between economics and sociology. The aim of the paper is to explore some of the themes that may be common to economics of conventions and economic sociology by looking more closely into three recent texts from the economics of convention tradition discussing, in slightly different ways, differences and similarities between economics of convention and economic sociology. It is argued that André Orléan’s point that a common aim could be to ‘denaturalise’ the institutional foundation of markets and of money may be an occasion for economic sociology to focus even more on elaborating on the institutional void created by traditional economic theory. A second point is that economic sociology could benefit from the perspective of a plurality of forms of coordination involved...
6. Numerical assessment of conventional regulation effectiveness for ...
African Journals Online (AJOL)
... depends on fire safety engineering that is provided, and which is generally established using smoke spread field and temperature distribution predictions. ... conventional regulation; ventilation strategies; smoke temperature; smoke barriers ...
7. Convention on supplementary compensation for nuclear damage
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-07-22
The document reproduces the text of the Convention on Supplementary Compensation for Nuclear Damage which was adopted on 12 September 1997 by a Diplomatic Conference held between 8-12 September 1997 in Vienna
8. Foster parenting, human imprinting and conventional handling ...
African Journals Online (AJOL)
p2492989
Foster parenting, human imprinting and conventional handling affects survival and early .... bird may subsequently direct its sexual attention to those humans on whom it was imprinted (Bubier et al., ..... The mind through chicks' eyes: memory,.
9. Convention on supplementary compensation for nuclear damage
International Nuclear Information System (INIS)
1998-01-01
The document reproduces the text of the Convention on Supplementary Compensation for Nuclear Damage which was adopted on 12 September 1997 by a Diplomatic Conference held between 8-12 September 1997 in Vienna
10. Medan Convention & Exhibition Center (Arsitektur Ekspresionisme)
OpenAIRE
Iskandar, Nurul Auni
2015-01-01
Medan is the third largest city in Indonesia, currently under development and a city with many activities. The city of Medan offers high investment opportunities for a convention venue because of its strategic position in Southeast Asia, supported by its facilities and the potential for tourism in North Sumatra; the city has strong potential for the MICE industry (Meeting, Incentive, Conference, Exhibition). The construction of Medan Convention & Exhibition Cente...
11. Technical Efficiency Performance of Conventional Banks
OpenAIRE
Endri, Endri
2012-01-01
This study aims to measure the technical efficiency performance of conventional commercial banks during the period 2008-2009 using the non-parametric method of Data Envelopment Analysis (DEA). Test results showed that, taken together, conventional commercial banks did not achieve optimal performance during 2008-2009: their level of technical efficiency remained below 100 percent. More appalling, the efficiency of national banks experienced a decline of 73.5...
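DEA, as cited in the abstract above, scores each decision-making unit (here, a bank) against the others by solving a linear program per unit. A sketch of the standard input-oriented CCR model using SciPy's `linprog`; the two-bank input/output data at the bottom are invented for illustration and are not taken from the study:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency scores.

    X: (n_dmus, n_inputs) input matrix, Y: (n_dmus, n_outputs) output matrix.
    For each DMU o, minimise theta s.t. X.T @ lam <= theta * x_o,
    Y.T @ lam >= y_o, lam >= 0. Efficient DMUs get theta = 1.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # decision variables: [theta, lambda_1 .. lambda_n]
        c = np.zeros(1 + n)
        c[0] = 1.0                                   # minimise theta
        A_in = np.hstack([-X[o].reshape(-1, 1), X.T])   # inputs <= theta*x_o
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])     # outputs >= y_o
        res = linprog(c,
                      A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.concatenate([np.zeros(m), -Y[o]]),
                      bounds=[(0, None)] * (1 + n),
                      method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Invented toy data: bank 0 produces the same output with half the input
X = np.array([[1.0], [2.0]])   # e.g. operating cost
Y = np.array([[1.0], [1.0]])   # e.g. loans issued
scores = dea_ccr_input(X, Y)   # bank 0 efficient (1.0), bank 1 scores 0.5
```

A score below 1 means the bank could, in principle, produce its outputs with proportionally fewer inputs, which mirrors the "below 100 percent" finding quoted above.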
12. Association between use of disease-modifying antirheumatic drugs and diabetes in patients with ankylosing spondylitis, rheumatoid arthritis, or psoriasis/psoriatic arthritis: a nationwide, population-based cohort study of 84,989 patients
Directory of Open Access Journals (Sweden)
Chen HH
2017-05-01
Full Text Available Hsin-Hua Chen,1–7 Der-Yuan Chen,1–6 Chi-Chen Lin,1,2 Yi-Ming Chen,1–4 Kuo-Lung Lai,3,4 Ching-Heng Lin1 1Department of Medical Research, Taichung Veterans General Hospital, 2Institute of Biomedical Science and Rong Hsing Research Center for Translational Medicine, Chung-Hsing University, Taichung, 3School of Medicine, National Yang-Ming University, Taipei, 4Division of Allergy, Immunology and Rheumatology, Department of Internal Medicine, Taichung Veterans General Hospital, 5School of Medicine, Chung-Shan Medical University, 6Department of Medical Education, Taichung Veterans General Hospital, Taichung, 7Institute of Public Health and Community Medicine Research Center, National Yang-Ming University, Taipei, Taiwan. Purpose: The aim of this study is to investigate the association between the use of disease-modifying antirheumatic drugs (DMARDs) and diabetes mellitus (DM) in patients with ankylosing spondylitis (AS), rheumatoid arthritis (RA), or psoriasis/psoriatic arthritis (PS/PSA). Patients and methods: This retrospective cohort study used a nationwide, population-based administrative database to enroll 84,989 cases with AS, RA, or PS/PSA who initiated treatment with anti-tumor necrosis factor (anti-TNF) drugs or nonbiologic DMARDs. Multivariable analysis was used to estimate the effect of different therapies on the risk of DM. Results: The incidence rates of DM per 1,000 person-years were 8.3 for users of anti-TNF drugs, 13.3 for users of cyclosporine (CSA), 8.4 for users of hydroxychloroquine (HCQ), and 8.1 for users of other nonbiologic DMARDs. Compared with the users of nonbiologic DMARDs, the multivariate-adjusted hazard ratios (aHRs) for DM were significantly lower for those who used anti-TNF drugs with HCQ (aHR: 0.49, 95% confidence interval [CI]: 0.36–0.66) and those who used HCQ alone (aHR: 0.70, 95% CI: 0.63–0.78), but not for those who used anti-TNFs without HCQ (aHR: 1.23, 95% CI: 0.94–1.60) or CSA (aHR: 1.14, 95% CI: 0.77–1
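The incidence rates quoted above follow the usual person-time arithmetic: events divided by person-years of exposure, scaled to 1,000. A minimal illustration; the event count and person-years below are hypothetical, chosen only to reproduce a rate of 8.3:

```python
def incidence_per_1000_py(events, person_years):
    """Incidence rate per 1,000 person-years of follow-up."""
    return 1000.0 * events / person_years

# Hypothetical: 83 incident diabetes cases over 10,000 person-years
rate = incidence_per_1000_py(83, 10_000)  # -> 8.3
```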
13. Use of tofacitinib in real clinical practice to treat patients with rheumatoid arthritis resistant to synthetic and biological disease-modifying antirheumatic drugs: Results of a multicenter observational study
Directory of Open Access Journals (Sweden)
D. E. Karateev
2016-01-01
Full Text Available Tofacitinib (TOFA), a member of a new class of targeted synthetic disease-modifying antirheumatic drugs (DMARDs), is a promising medication for the treatment of rheumatoid arthritis (RA) and other immunoinflammatory diseases. The paper describes the Russian experience with TOFA used to treat severe RA. Patients and methods. 101 RA patients (18 men and 83 women; mean age, 51.03±11.28 years; mean disease duration, 105.4±81.43 months) who were positive for rheumatoid factor (89.1%) and anti-cyclic citrullinated peptide antibodies (74.7%) and resistant to therapy with synthetic DMARDs (sDMARDs) (80.2%) and biological agents (19.8%) were given TOFA at a dose of 5 mg twice daily, which could be doubled if necessary. TOFA was used alone (n=9) or in combination with methotrexate (MT) (n=75) or other sDMARDs (n=17). The achievement of low disease activity (LDA) and clinical remission at 3 and 6 months of treatment by DAS28-ESR, SDAI, and CDAI scores, and the indices of safety and tolerability, were assessed. Results. A total of 93 (92.1%) of the 101 patients completed a 24-week period of the investigation. 8 (7.9%) patients prematurely discontinued TOFA after an average of 2.75±0.71 months. At the end of the study, the patients achieved the primary endpoint (LDA including remission) in terms of DAS28-ESR ≤3.2 (34.7%), SDAI ≤11 (47.5%), and CDAI ≤10 (48.5%), and the secondary endpoints (clinical remission) in terms of DAS28-ESR ≤2.6 (17.8%), SDAI ≤3.3 (8.9%), and CDAI ≤2.8 (6.9%). When TOFA was combined with MT, the discontinuation rate for the former was significantly lower (2.7%) than when TOFA was used in combination with other sDMARDs (29.4%) or alone (11.1%); p<0.01. At 3 and 6 months of follow-up, LDA was achieved more frequently when TOFA was combined with MT than when other treatment regimens were used. Fatal outcomes and serious adverse events (AEs), as well as AEs previously undescribed in the literature, were not seen during a follow-up within
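The endpoints above are composite disease-activity scores with published formulas. As an illustration, a sketch of the DAS28-ESR calculation (standard published coefficients) classified against the cut-offs quoted in the abstract (remission ≤2.6, low disease activity ≤3.2); the joint counts, ESR and patient global values in the example are invented:

```python
import math

def das28_esr(tjc28, sjc28, esr, patient_global):
    """DAS28-ESR: 28-joint tender/swollen counts, ESR in mm/h,
    patient global assessment on a 0-100 mm VAS."""
    return (0.56 * math.sqrt(tjc28) + 0.28 * math.sqrt(sjc28)
            + 0.70 * math.log(esr) + 0.014 * patient_global)

def classify(score):
    # Cut-offs as quoted in the abstract above
    if score <= 2.6:
        return "remission"
    if score <= 3.2:
        return "low disease activity"
    return "active disease"

score = das28_esr(2, 1, 10.0, 20.0)   # hypothetical patient
state = classify(score)               # -> "low disease activity"
```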
14. Transfrontier nuclear civil liability without international conventions
International Nuclear Information System (INIS)
Dogauchi, M.
1992-01-01
Japan is not a contracting party to any international convention in the field of nuclear civil liability, and neither are other East Asian countries that have or will soon have nuclear plants. Therefore, the ordinary rules of private international law will play an important role in dealing with transfrontier nuclear civil liability. Above all, the problems of judicial jurisdiction and governing law are crucial points. With regard to the relations between the above countries and the countries whose legal systems fall within the framework of the Paris or Vienna Conventions, the geographical scopes of these conventions are to be considered. There are two different parts in the international civil liability conventions: uniform civil liability law and mutual funds. As to the first, it is important that, even without the conventions, the basic structure of the nuclear civil liability laws in non-member countries is almost the same as that of members. In any event, considering that the establishment of a single international regime to cover all countries will hardly be possible, the legal consequences under private international law will be explored. (author)
15. Compact Ignition Tokamak conventional facilities optimization
International Nuclear Information System (INIS)
Commander, J.C.; Spang, N.W.
1987-01-01
A high-field ignition machine with liquid-nitrogen-cooled copper coils, designated the Compact Ignition Tokamak (CIT), is proposed for the next phase of the United States magnetically confined fusion program. A team of national laboratory, university, and industrial participants completed the conceptual design for the CIT machine, support systems and conventional facilities. Following conceptual design, optimization studies were conducted with the goal of improving machine performance, support systems design, and conventional facilities configuration. This paper deals primarily with the conceptual design configuration of the CIT conventional facilities, the changes that evolved during optimization studies, and the revised changes resulting from functional and operational requirements (F and ORs). The CIT conventional facilities conceptual design is based on two premises: (1) satisfaction of the F and ORs developed in the CIT building and utilities requirements document, and (2) the assumption that the CIT project will be sited at the Princeton Plasma Physics Laboratory (PPPL) in order that maximum utilization can be made of existing Tokamak Fusion Test Reactor (TFTR) buildings and utilities. The optimization studies required reevaluation of the F and ORs and a second look at TFTR buildings and utilities. Some of the high-cost-impact optimization studies are discussed, including the evaluation criteria for a change from the conceptual design baseline configuration. The revised conventional facilities configurations are described and the estimated cost impact is summarized
16. Conventions and workflows for using Situs
International Nuclear Information System (INIS)
Wriggers, Willy
2012-01-01
Recent developments of the Situs software suite for multi-scale modeling are reviewed. Typical workflows and conventions encountered during processing of biophysical data from electron microscopy, tomography or small-angle X-ray scattering are described. Situs is a modular program package for the multi-scale modeling of atomic resolution structures and low-resolution biophysical data from electron microscopy, tomography or small-angle X-ray scattering. This article provides an overview of recent developments in the Situs package, with an emphasis on workflows and conventions that are important for practical applications. The modular design of the programs facilitates scripting in the bash shell that allows specific programs to be combined in creative ways that go beyond the original intent of the developers. Several scripting-enabled functionalities, such as flexible transformations of data type, the use of symmetry constraints or the creation of two-dimensional projection images, are described. The processing of low-resolution biophysical maps in such workflows follows not only first principles but often relies on implicit conventions. Situs conventions related to map formats, resolution, correlation functions and feature detection are reviewed and summarized. The compatibility of the Situs workflow with CCP4 conventions and programs is discussed
17. The protocol amending the 1963 Vienna Convention
International Nuclear Information System (INIS)
Lamm, V.
1998-01-01
In the first stage of the revision process, the only goal was to amend certain provisions of the Vienna Convention. Later, in what might be called the second stage, the question was seriously raised of establishing a new supplementary convention by which additional funds were to be provided by the international community of States. Most experts felt that the nuclear liability regime of the Vienna Convention, as amended, would really serve the interests of potential victims of nuclear incidents only if it were supported by an international supplementary fund providing additional compensation for nuclear damage to that provided by the operator. Thus, the Standing Committee started to consider the establishment, under the Vienna Convention, of a mechanism for mobilizing additional funds for compensation of nuclear damage. During the negotiations it was deemed necessary to establish a separate treaty for such a supplementary fund, and indeed, efforts were undertaken to draw up such an instrument concurrently with the revision of the Vienna Convention. (K.A.)
18. Economic Sociology and Economics of Convention
DEFF Research Database (Denmark)
Jagd, Søren
This paper is part of a larger exploration of the French Economics of Convention tradition. The aim of the paper is to explore potential themes of common interest to economic sociology and Economics of Conventions. The paper is in two parts. First, I summarise the main theoretical features of EC ... the institutional framework of social action. Second, I explore two issues raised by economics of conventions that may be particularly important to consider for economic sociology. The first issue is the explicit exploration of the consequences of a plurality of forms of justification suggested by Luc Boltanski and Laurent Thévenot in ‘économie de la grandeur’. This perspective has already been taken up in economic sociology in David Stark’s notion of a ‘Sociology of Worth’. The second issue, recently suggested by André Orléan, is the need to denaturalise economic theory and economic action to demonstrate the social...
19. Economics of Convention and New Economic Sociology
DEFF Research Database (Denmark)
Jagd, Søren
2007-01-01
The aim of the article is to explore potential common themes in economic sociology and economics of conventions. The article explores two issues raised by economics of conventions that may be of particular importance to economic sociology. First, the explicit exploration of the consequences of a plurality of forms of justification, as elaborated in économie de la grandeur. This perspective was recently taken up in economic sociology by David Stark's introduction of the notion ‘sociology of worth'. The second issue, recently suggested by André Orléan, is the need to denaturalize economic theory and economic action to demonstrate the socially constructed nature of economic action. It is argued that these two issues demonstrate that a fruitful dialogue is indeed possible between economic sociology and economics of convention and should be encouraged.
20. Prerequisites for a nuclear weapons convention
International Nuclear Information System (INIS)
Liebert, W.
1999-01-01
A Nuclear Weapons Convention (NWC) would prohibit the research, development, production, testing, stockpiling, transfer, use and threat of use of nuclear weapons and would serve their total elimination. In this fashion it follows the model laid out by the biological and chemical weapons conventions. The NWC would encompass a few other treaties and, while replacing them, should learn from their experiences. The Nuclear Weapons Convention should at some point in the future replace the Non-Proliferation Treaty (NPT) and so resolve its contradictions and shortcomings. The main objectives of an NWC would be: reduction of the nuclear arsenals of the 'five' nuclear weapons powers down to zero within a set of fixed periods of time; elimination of stockpiles of weapons-usable materials and, where existent, nuclear warheads in de-facto nuclear weapon and threshold states; and providing assurance that all states will retain their non-nuclear status forever
1. HMB-45 reactivity in conventional uterine leiomyosarcomas.
Science.gov (United States)
Simpson, Karen W; Albores-Saavedra, Jorge
2007-01-01
We studied the human melanoma black-45 (HMB-45) reactivity in 25 uterine leiomyosarcomas including 23 conventional and 2 myxoid variants. Eleven tumors were poorly differentiated, and 14 were well to moderately differentiated. Nine uterine leiomyosarcomas labeled with HMB-45 in 10% or less of the tumor cells. Six were poorly differentiated and 3 were well differentiated. Our study indicates that 36% of conventional leiomyosarcomas focally express HMB-45. HMB-45 reactivity was more common in the poorly differentiated than in the well-differentiated group of leiomyosarcomas. In light of our findings and of those recently reported in the literature, we believe that the term PEComa should not be used for uterine leiomyosarcomas with clear cells or for conventional leiomyosarcomas that stain positively with HMB-45.
2. Communicating novel and conventional scientific metaphors
DEFF Research Database (Denmark)
Knudsen, Sanne
2005-01-01
Metaphors are more popular than ever in the study of scientific reasoning and culture because of their innovative and generative powers. It is assumed that novel scientific metaphors become more clear and well-defined as they become more established and conventional within the relevant discourses. But we still need empirical studies of the career of metaphors in scientific discourse and of the communicative strategies identifying a given metaphor as either novel or conventional. This paper presents a case study of the discursive development of the metaphor of "the genetic code", from the introduction of the metaphor until it was established as an entire network of interrelated conventional metaphors. Not only do the strategies in communicating the metaphor change as the metaphor becomes more established within the discourse, but the genres in which the metaphor is developed and interpreted...
3. Digital hilar tomography. Comparison with conventional technique
International Nuclear Information System (INIS)
Schaefer, C.B.; Braunschweig, R.; Teufl, F.; Kaiser, W.; Claussen, C.D.
1993-01-01
The aim of the following study was to compare conventional hilar tomography and digital hilar tomography. 20 patients were examined both with conventional and digital hilar tomography using the same tomographic technique and the identical exposure dose. All patients underwent computed tomography of the chest as a gold standard. The digital technique, especially the edge-enhanced image version, showed superior image quality. ROC analysis by 4 readers found equal diagnostic performance without any statistical difference. Digital hilar tomography shows a superior and constant image quality and lowers the rate of re-exposure. Therefore, digital hilar tomography is the preferable method. (orig.) [de
4. Archaeology and the World Heritage Convention
Directory of Open Access Journals (Sweden)
Henry Cleere
2003-10-01
Full Text Available International efforts to designate outstanding examples of the world's cultural and natural heritage began after the Second World War. The World Heritage Convention was signed at the General Conference of UNESCO in 1972 and the first cultural sites were selected in 1978. Now over 600 have been inscribed on the World Heritage List. The author, who is an honorary visiting professor at the Institute, acted as an advisor to the World Heritage Committee from 1992 to 2002 and here describes how the Convention came into being and discusses the representation of archaeological sites on the List.
5. Convention on nuclear safety. Final act
International Nuclear Information System (INIS)
1994-01-01
The Diplomatic Conference, which was convened by the International Atomic Energy Agency at its Headquarters from 14 to 17 June 1994, adopted the Convention on Nuclear Safety reproduced in document INFCIRC/449 and the Final Act of the Conference. The text of the Final Act of the Conference, including an annexed document entitled "Some clarification with respect to procedural and financial arrangements, national reports, and the conduct of review meetings, envisaged in the Convention on Nuclear Safety", is reproduced in the Attachment hereto for the information of all Member States
6. Control of non-conventional synchronous motors
CERN Document Server
Louis, Jean-Paul
2013-01-01
Classical synchronous motors are the most effective device to drive industrial production systems and robots with precision and rapidity. However, numerous applications require efficient controls in non-conventional situations. Firstly, this is the case with synchronous motors supplied by thyristor line-commutated inverters, or with synchronous motors with faults on one or several phases. Secondly, many drive systems use non-conventional motors such as polyphase (more than three phases) synchronous motors, synchronous motors with double excitation, permanent magnet linear synchronous motors,
7. Comparative Effectiveness of Conventional Rote Learning and ...
African Journals Online (AJOL)
This study investigated the relative effectiveness of Mnemonics technique (MNIT) and conventional rote learning technique (CRL) on the teaching-learning of physical features (Geography). A pre-test and post-test control group design was adopted for the study. A sample of ninety SS I students was randomly selected out of ...
8. Conflict and convention in dynamic networks.
Science.gov (United States)
Foley, Michael; Forber, Patrick; Smead, Rory; Riedl, Christoph
2018-03-01
An important way to resolve games of conflict (snowdrift, hawk-dove, chicken) involves adopting a convention: a correlated equilibrium that avoids any conflict between aggressive strategies. Dynamic networks allow individuals to resolve conflict via their network connections rather than changing their strategy. Exploring how behavioural strategies coevolve with social networks reveals new dynamics that can help explain the origins and robustness of conventions. Here, we model the emergence of conventions as correlated equilibria in dynamic networks. Our results show that networks have the tendency to break the symmetry between the two conventional solutions in a strongly biased way. Rather than the correlated equilibrium associated with ownership norms (play aggressive at home, not away), we usually see the opposite host-guest norm (play aggressive away, not at home) evolve on dynamic networks, a phenomenon common to human interaction. We also show that learning to avoid conflict can produce realistic network structures in a way different than preferential attachment models. © 2017 The Author(s).
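The role-based conventions described in the abstract above can be illustrated with a minimal hawk-dove sketch. The payoff values and the two norm definitions are illustrative assumptions, not taken from the paper; the point is that both the ownership norm and the host-guest norm avoid costly hawk-hawk conflict and yield the same average payoff, which is why the symmetry-breaking between them that the authors observe is nontrivial.

```python
# Minimal hawk-dove sketch: a convention as a correlated equilibrium.
# Payoff values (V = resource value, C = cost of conflict) are illustrative
# assumptions, not taken from the paper above.
V, C = 2.0, 3.0  # C > V makes mutual aggression costly

# Row player's payoff for (row_strategy, col_strategy)
payoff = {
    ("hawk", "hawk"): (V - C) / 2,
    ("hawk", "dove"): V,
    ("dove", "hawk"): 0.0,
    ("dove", "dove"): V / 2,
}

def convention_payoff(norm):
    """Average payoff per player when both follow a role-based convention.

    norm maps a role ("home" or "away") to a strategy; each player is
    'home' in half of their interactions, so we average over the two roles.
    """
    home_strat, away_strat = norm["home"], norm["away"]
    as_home = payoff[(home_strat, away_strat)]
    as_away = payoff[(away_strat, home_strat)]
    return (as_home + as_away) / 2

ownership = {"home": "hawk", "away": "dove"}   # aggressive at home
host_guest = {"home": "dove", "away": "hawk"}  # aggressive away

print(convention_payoff(ownership))   # 1.0
print(convention_payoff(host_guest))  # 1.0
```

Either convention gives each player 1.0 on average, strictly better than the hawk-hawk payoff of -0.5, so a population can settle on either; which one a dynamic network selects is the paper's question.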
9. Analysis of the London dumping convention
International Nuclear Information System (INIS)
Nauke, M.K.
1983-05-01
This report gives an in-depth review of the provisions of the London Dumping Convention and of its origins in the context of the international legal framework for controlling all aspects of marine pollution. Particular attention is paid to the provisions concerning radioactive waste. (NEA) [fr
10. The Burning Plasma Experiment conventional facilities
International Nuclear Information System (INIS)
Commander, J.C.
1991-01-01
The Burning Plasma Experiment (BPX) is phased to start construction of conventional facilities in July 1994, in conjunction with the conclusion of the Tokamak Fusion Test Reactor (TFTR) project. This paper deals with the conceptual design of the BPX Conventional Facilities, for which Functional and Operational Requirements (F&ORs) were developed. Existing TFTR buildings and utilities will be adapted and used to satisfy the BPX Project F&ORs to the maximum extent possible. However, new conventional facilities will be required to support the BPX project. These facilities include: the BPX building; site improvements and utilities; the Field Coil Power Conversion (FCPC) building; the TFTR modifications; the Motor Generator (MG) building; the Liquid Nitrogen (LN2) building; and the associated Instrumentation and Control (I&C) systems. The BPX building will provide for safe and efficient shielding, housing, operation, handling, maintenance and decontamination of the BPX and its support systems. Site improvements and utilities will feature a utility tunnel which will provide space for utility services, including pulse power duct banks and liquid nitrogen coolant lines. The FCPC building will house eight additional power supplies for the Toroidal Field (TF) coils. The MG building will house two MG sets larger than the existing TFTR MG sets. This paper also addresses the conventional facility cost estimating methodology and the rationale for the construction schedule developed. 6 figs., 1 tab
11. Electric and Conventional Vehicle Driving Patterns
DEFF Research Database (Denmark)
Krogh, Benjamin Bjerre; Andersen, Ove; Torp, Kristian
2014-01-01
The electric vehicle (EV) is an interesting vehicle type that can reduce the dependence on fossil fuels, e.g., by using electricity from wind turbines. A significant disadvantage of EVs is a very limited range, typically less than 200 km. This paper compares EVs to conventional vehicles (CVs...
12. Teaching effectiveness and students' performance in conventional ...
African Journals Online (AJOL)
There has been a proliferation of coaching centres in Lagos State. These run side-by-side conventional schools offering general education. Stakeholders in the education industry have raised questions on the relevance of these coaching centres particularly in terms of students' academic performance, teaching ...
13. Fracture healing: direct magnification versus conventional radiography
International Nuclear Information System (INIS)
Link, T.M.; Kessler, T.; Lange, T.; Overbeck, J.; Fiebich, M.; Peters, P.E.
1994-01-01
14. James Madison and the Constitutional Convention.
Science.gov (United States)
Scanlon, Thomas M.
1987-01-01
Part 1 of this three-part article traces James Madison's life and focuses primarily on those events that prepared him for leadership in the U.S. Constitutional Convention of 1787. It describes his early love of learning, education, and public service efforts. Part 2 chronicles Madison's devotion to study and preparation prior to the Constitutional…
15. The role of regional pollution conventions
International Nuclear Information System (INIS)
Haywar, P.
1989-01-01
Within the last 12 years a number of regional pollution conventions and action plans have been negotiated to protect the world's seas from pollution. This paper traces the development of this activity and points out the specific role of regional, as opposed to global, pollution conventions. Chief among the functions of regional conventions is the specific legal framework they provide for a particular geographical region. They also provide a forum for neighboring states to develop a coherent policy for a particular regional sea, as well as being the means of establishing regional control over potentially polluting activities. Regional agreements also constitute a suitable framework for monitoring the input of pollutants to the marine environment and assessing their effects. In addition, they provide a forum for the exchange of scientific and technical information and for developing cooperation between states. The paper concludes by summarizing the most important functions of a regional convention and suggesting that, with increasing industrialization and pollution stress, there will continue to be a need for action to be taken at the regional level
16. Non conventional energy sources and energy conservation
International Nuclear Information System (INIS)
Bueno M, F.
1995-01-01
Geographically speaking, Mexico is in an enviable position. Sun, water, biomass and geothermal fields, the main non-conventional energy sources with commercial applications, are present and in some cases plentiful in the national territory; in addition there is coastal tidal power, which is in the research stage in several countries. Non-conventional energy sources are an alternative which allows us to reduce the consumption of hydrocarbons or any other type of primary energy; they are not by themselves choices for energy conservation, but energy replacements. At the beginning of this year, CONAE created the Direction of Non-conventional Energy Sources, whose main objective is to promote and drive programs aimed at the application of systems based on renewable energy sources. The research centers represent a technological and consultative support for the CONAE. They have an infrastructure developed along several years of continuous work. Non-conventional energy sources will become a reality once their cost is equal to or lower than the cost of the traditional generating systems. CONAE (National Commission for Energy Conservation). (Author)
17. Comparison of membrane bioreactor technology and conventional ...
African Journals Online (AJOL)
The purpose of this paper was to review the use of membrane bioreactor technology as an alternative for treating the discharged effluent from a bleached kraft mill by comparing and contrasting membrane bioreactors with conventional activated sludge systems for wastewater treatment. There are many water shortage ...
18. The Conventional and Unconventional about Disability Conventions: A Reflective Analysis of United Nations Convention on the Rights of Persons with Disabilities
Science.gov (United States)
Umeasiegbu, Veronica I.; Bishop, Malachy; Mpofu, Elias
2013-01-01
This article presents an analysis of the United Nations Convention on the Rights of Persons with Disabilities (CRPD) in relation to prior United Nations conventions on disability and U.S. disability policy law with a view to identifying the conventional and also the incremental advances of the CRPD. Previous United Nations conventions related to…
19. International antiterrorist conventions concerning the safety of air transport
Directory of Open Access Journals (Sweden)
Jacek BARCIK
2008-01-01
Full Text Available This article presents the international law regulations concerning the safety of civilian air transport. The history of air terrorism and of the international antiterrorist conventions is described in detail, covering the Chicago Convention, the Tokyo Convention, the Hague Convention and the Montreal Convention.
20. Risk of malignancy in patients with rheumatic disorders
Directory of Open Access Journals (Sweden)
Wong Victor Tak-lung
2016-12-01
Full Text Available Patients with autoimmune rheumatic diseases including rheumatoid arthritis (RA), systemic lupus erythematosus (SLE), Sjogren's syndrome (SS), and inflammatory myositis are at increased risk of developing malignancies. Treatment of these conditions, including disease-modifying anti-rheumatic drugs (DMARDs) and biologic therapies, is also associated with increased risk of malignancies. Cancer adds to the disease burden in these patients, affecting their quality of life and life expectancy. The decision in choosing immunosuppressive agents in these rheumatic diseases should take into account the disease severity, expectation for disease control, comorbidities, as well as the side effects including risks of cancer.
1. The unappreciated slowness of conventional tourism
Directory of Open Access Journals (Sweden)
G.R. Larsen
2016-05-01
Full Text Available Most tourists are not consciously engaging in ‘slow travel’, but a number of travel behaviours displayed by conventional tourists can be interpreted as slow travel behaviour. Based on Danish tourists’ engagement with the distances they travel across to reach their holiday destination, this paper explores unintended slow travel behaviours displayed by these tourists. None of the tourists participating in this research were consciously doing ‘slow travel’, and yet some of their most valued holiday memories are linked to slow travel behaviours. Based on the analysis of these unintended slow travel behaviours, this paper will discuss the potential this insight might hold for promotion of slow travel. If unappreciated and unintentional slow travel behaviours could be utilised in the deliberate effort of encouraging more people to travel slow, ‘slow travel’ will be in a better position to become integrated into conventional travel behaviour.
2. Air pollution: UNCED convention on climate change
International Nuclear Information System (INIS)
Pieri, M.
1992-01-01
In addition to United Nations papers delineating the Organization's convention on climate change and strategies concerning the protection of the earth's atmosphere, this booklet presents four papers expressing the views of Italian and American strategists. The central theme is the establishment of current global air pollution trends, the determination of suitable air pollution limits, and the preparation of feasible socio-economic strategies to allow industrialized and developing countries to work together effectively to achieve the proposed global air quality goals
3. Non-conventional mesons at PANDA
International Nuclear Information System (INIS)
Giacosa, Francesco
2015-01-01
Non-conventional mesons, such as glueballs and tetraquarks, will be in the focus of the PANDA experiment at the FAIR facility. In this lecture we recall the basic properties of QCD and describe some features of unconventional states. We focus on the search of the not-yet discovered glueballs and the use of the extended Linear Sigma Model for this purpose, and on the already discovered but not-yet understood X, Y, Z states. (paper)
4. Kuala Namu Convention And Exhibition Centre
OpenAIRE
Gustriana, Trisna
2017-01-01
Aerotropolis area development is expected to accommodate the growth of business and commercial activity, and this is the chance for the designer to take advantage of the situation and condition of the land as well as possible. A revolutionary change that is nevertheless able to embrace all stakeholders is thus the solution needed to develop the Aerotropolis. Kuala Namu's Convention and Exhibition Center is expected to be a solution for regional development of Kuala Namu a...
5. The Aarhus Convention: A new regional convention on citizens' environmental rights
International Nuclear Information System (INIS)
Wates, J.
2000-01-01
The UN ECE Convention on Access to Information, Public Participation in Decision-making and Access to Justice in Environmental Matters had been adopted at Aarhus, Denmark, at the Fourth Ministerial Conference in the 'Environment for Europe' process, and signed by thirty-five countries and the European Community. This paper summarises the main features of the Convention and briefly discusses its relevance to radioactive waste management issues. It then describes some of the activities currently being undertaken under the auspices of the Convention. (author)
6. A complementary conventional analysis for channelized reservoirs
International Nuclear Information System (INIS)
Escobar Freddy Humberto; Montealegre M, Matilde
2007-01-01
Many well pressure data coming from long and narrow reservoirs, which result from either fluvial deposition or faulting, cannot be completely interpreted by conventional analysis since some flow regimes are not yet conventionally recognized in the oil literature. This narrow geometry allows for the simultaneous development of two linear flow regimes coming from each of the lateral sides of the system towards the well. This has been called the dual linear flow regime. If the well is off-centered with regard to the two lateral boundaries, then one of the linear flow regimes vanishes and two possibilities can be presented. Firstly, if the closer lateral boundary is closed to flow, the unique linear flow persists along the longer lateral boundary. This has been called single linear flow. Following this, either steady or pseudo-steady state will develop. Secondly, if a constant-pressure closer lateral boundary is dealt with, then parabolic flow develops along the longer lateral boundary. Steady state has to develop once the disturbance reaches the farther boundary. This study presents new equations for conventional analysis for the dual linear, linear and parabolic flow regimes recently introduced to the oil literature. The equations were validated by applying them to field and simulated examples
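The linear flow regimes discussed in the abstract above have a standard log-log diagnostic signature: during linear flow the pressure drop grows as the square root of time, so its log-log slope is 1/2. The sketch below illustrates that signature with an assumed proportionality constant; it is not the paper's method, only the textbook behaviour the paper's conventional-analysis equations build on.

```python
# Log-log diagnostic for a linear flow regime: if dp = c * sqrt(t), the
# slope of log(dp) versus log(t) is 1/2. The constant c is an illustrative
# assumption, not a value from the paper above.
import math

def delta_p_linear(t, c=5.0):
    """Illustrative linear-flow pressure response: dp = c * sqrt(t)."""
    return c * math.sqrt(t)

def loglog_slope(f, t1, t2):
    """Slope of log(f) versus log(t) between two times."""
    return (math.log(f(t2)) - math.log(f(t1))) / (math.log(t2) - math.log(t1))

print(round(loglog_slope(delta_p_linear, 1.0, 100.0), 3))  # 0.5
```

In practice the same half-slope check is applied to the pressure derivative to distinguish linear flow from, e.g., radial flow (slope 0) before the regime-specific equations are used.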
7. Fuzzy logic control to be conventional method
Energy Technology Data Exchange (ETDEWEB)
Eker, Ilyas [University of Gaziantep, Gaziantep (Turkey). Department of Electrical and Electronic Engineering; Torun, Yunis [University of Gaziantep, Gaziantep (Turkey). Technical Vocational School of Higher Education
2006-03-01
Increasing demands for flexibility and fast reactions in modern process operation and production methods result in nonlinear system behaviour of partly unknown systems, and this necessitates application of alternative control methods to meet the demands. Fuzzy logic (FL) control can play an important role because knowledge based design rules can easily be implemented in systems with unknown structure, and it is going to be a conventional control method since the control design strategy is simple and practical and is based on linguistic information. Computational complexity is not a limitation any more because the computing power of computers has been significantly improved even for high speed industrial applications. This makes FL control an important alternative method to the conventional PID control method for use in nonlinear industrial systems. This paper presents a practical implementation of the FL control to an electrical drive system. Such drive systems used in industry are composed of masses moving under the action of position and velocity dependent forces. These forces exhibit nonlinear behaviour. For a multi-mass drive system, the nonlinearities, like Coulomb friction and dead zone, significantly influence the operation of the systems. The proposed FL control configuration is based on speed error and change of speed error. The feasibility and effectiveness of the control method are experimentally demonstrated. The results obtained from conventional FL control, fuzzy PID and adaptive FL control are compared with traditional PID control for the dynamic responses of the closed loop drive system. (author)
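The control configuration the abstract describes, fuzzy inference over the speed error and its change, can be sketched minimally as below. The membership shapes, rule table and normalised universe are illustrative assumptions, not the authors' design; the sketch uses Sugeno-style (weighted-average) inference for brevity.

```python
# Minimal sketch of a fuzzy logic controller driven by the speed error e
# and its change de. Membership functions, rule table and scaling are
# illustrative assumptions, not the design from the paper above.

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Linguistic terms (Negative, Zero, Positive) over a normalised universe [-1, 1]
terms = {"N": (-1.5, -1.0, 0.0), "Z": (-1.0, 0.0, 1.0), "P": (0.0, 1.0, 1.5)}

# Rule table: (error term, change-of-error term) -> crisp output level
rules = {
    ("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
    ("Z", "N"): -0.5, ("Z", "Z"): 0.0,  ("Z", "P"): 0.5,
    ("P", "N"): 0.0,  ("P", "Z"): 0.5,  ("P", "P"): 1.0,
}

def fuzzy_control(e, de):
    """Sugeno-style inference: weighted average of rule outputs."""
    num = den = 0.0
    for (te, tde), out in rules.items():
        w = min(tri(e, *terms[te]), tri(de, *terms[tde]))  # rule firing strength
        num += w * out
        den += w
    return num / den if den else 0.0

print(fuzzy_control(0.0, 0.0))  # 0.0 (no error -> no corrective action)
print(fuzzy_control(1.0, 0.0))  # 0.5 (large positive error -> positive action)
```

Because the rules are linguistic ("if error is positive and its change is zero, increase the control action"), the design needs no plant model, which is the practicality argument the paper makes against tuning a conventional PID for a nonlinear multi-mass drive.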
8. Fuzzy logic control to be conventional method
International Nuclear Information System (INIS)
Eker, Ilyas; Torun, Yunis
2006-01-01
Increasing demands for flexibility and fast reactions in modern process operation and production methods result in nonlinear system behaviour of partly unknown systems, and this necessitates application of alternative control methods to meet the demands. Fuzzy logic (FL) control can play an important role because knowledge based design rules can easily be implemented in systems with unknown structure, and it is going to be a conventional control method since the control design strategy is simple and practical and is based on linguistic information. Computational complexity is not a limitation any more because the computing power of computers has been significantly improved even for high speed industrial applications. This makes FL control an important alternative method to the conventional PID control method for use in nonlinear industrial systems. This paper presents a practical implementation of the FL control to an electrical drive system. Such drive systems used in industry are composed of masses moving under the action of position and velocity dependent forces. These forces exhibit nonlinear behaviour. For a multi-mass drive system, the nonlinearities, like Coulomb friction and dead zone, significantly influence the operation of the systems. The proposed FL control configuration is based on speed error and change of speed error. The feasibility and effectiveness of the control method are experimentally demonstrated. The results obtained from conventional FL control, fuzzy PID and adaptive FL control are compared with traditional PID control for the dynamic responses of the closed loop drive system
9. PUBLIC POLICY VIOLATION UNDER NEW YORK CONVENTION
Directory of Open Access Journals (Sweden)
Michelle Ayu Chinta Kristy
2013-04-01
Full Text Available The increasing number of the use of arbitration in Asia has highlighted the significant influence of the recognition and enforcement of arbitral awards. The New York Convention currently is the most widely accepted convention to which the courts refer when recognizing and enforcing foreign arbitral awards. This article firstly provides a comparative study of the courts' interpretation of public policy as mentioned under Article V(2)(b) of the New York Convention between Indonesia, whose law is not arbitration-friendly, and China, whose law is arbitration-friendly. Subsequently, it discusses whether uniformity in interpreting and reserving public policy is required or not. [Translated from the Indonesian parallel abstract:] The increasing use of arbitration institutions in Asia has raised the significance of the recognition and enforcement of foreign arbitral awards. The New York Convention has now become the widely accepted convention to which courts refer in the recognition and enforcement of foreign arbitral awards. This article first presents a comparative study of the courts' interpretation of the use of public policy as set out in Article V(2)(b) of the New York Convention between Indonesia, whose law does not support, and China, whose law supports, the recognition and enforcement of foreign arbitral awards. Whether or not uniformity among countries in interpreting and applying public policy is required is addressed in the subsequent discussion.
10. Effectiveness of the Convention on Nuclear Safety
International Nuclear Information System (INIS)
Schwarz, G.
2016-01-01
The Convention on Nuclear Safety (CNS) has been established after the Chernobyl accident with the primary objective of achieving and maintaining a high level of nuclear safety worldwide, through the enhancement of national measures and international cooperation. The CNS is an incentive convention. It defines the basic safety standard which shall be met by the Contracting Parties. The verification of compliance is based on a self-assessment by the Countries and a Peer Review by the other Contracting Parties. As of July 2015, there are 78 Contracting Parties. Among the Contracting Parties of the Convention are all countries operating nuclear power plants except the Islamic Republic of Iran and Taiwan, all countries constructing nuclear power plants, all countries having nuclear power plants in long term shutdown and all countries having signed contracts for the construction of nuclear power plants. The National Reports under the CNS therefore cover almost all nuclear power plants of the world. The peer review of reports, questions and answers that are exchanged in connection with the Review Meetings provided a unique overview of nuclear safety provisions and issues in countries planning or operating nuclear power plants. This is especially important for neighbouring countries to those operating nuclear power plants.
11. Metal ion implantation: Conventional versus immersion
International Nuclear Information System (INIS)
Brown, I.G.; Anders, A.; Anders, S.; Dickinson, M.R.; MacGill, R.A.
1994-01-01
Vacuum-arc-produced metal plasma can be used as the ion feedstock material in an ion source for doing conventional metal ion implantation, or as the immersing plasma for doing plasma immersion ion implantation. The basic plasma production method is the same in both cases; it is simple and efficient and can be used with a wide range of metals. Vacuum arc ion sources of different kinds have been developed by the authors and others and their suitability as a metal ion implantation tool has been well established. Metal plasma immersion surface processing is an emerging tool whose characteristics and applications are the subject of present research. There are a number of differences between the two techniques, both in the procedures used and in the modified surfaces created. For example, the condensibility of metal plasma results in thin film formation and subsequent energetic implantation is thus done through the deposited layer; in the usual scenario, this recoil implantation and the intermixing it produces is a feature of metal plasma immersion but not of conventional energetic ion implantation. Metal plasma immersion is more suited (but not limited) to higher doses (>10^17 cm^-2) and lower energies (E_i < tens of keV) than the usual ranges of conventional metal ion implantation. These and other differences provide these vacuum-arc-based surface modification tools with a versatility that enhances the overall technological attractiveness of both
12. Measures to implement the Chemical Weapons Convention
Energy Technology Data Exchange (ETDEWEB)
Tanzman, E.; Kellman, B.
1999-11-05
This seminar is another excellent opportunity for those involved in preventing chemical weapons production and use to learn from each other about how the Chemical Weapons Convention (CWC) can become a foundation of arms control in Africa and around the world. The author is grateful to the staff of the Organization for the Prohibition of Chemical Weapons (OPCW) for inviting him to address this distinguished seminar. The views expressed in this paper are those of the authors alone, and do not represent the position of the government of the US nor of any other institution. In 1993, as the process of CWC ratification was beginning, concerns arose that the complexity of integrating the treaty with national law would cause each nation to implement the Convention without regard to what other nations were doing, thereby causing inconsistencies among States Parties in how the Convention would be carried out. As a result the Manual for National Implementation of the Chemical Weapons Convention was prepared and presented to each national delegation at the December 1993 meeting of the Preparatory Commission in The Hague. During its preparation, the Manual was reviewed by the Committee of Legal Experts on National Implementation of the Chemical Weapons Convention, a group of distinguished international jurists, law professors, legally-trained diplomats, government officials, and Parliamentarians from every region of the world, including Africa. In February 1998, the second edition of the Manual was published in order to update it in light of developments since the CWC entered into force on 29 April 1997. The second edition clarified the national implementation options to reflect post-entry-into-force thinking, added extensive references to national implementing measures that had been enacted by various States Parties, and included a prototype national implementing statute developed by the authors to provide a starting point for those whose national implementing
13. Measures to implement the Chemical Weapons Convention
International Nuclear Information System (INIS)
Tanzman, E.; Kellman, B.
1999-01-01
This seminar is another excellent opportunity for those involved in preventing chemical weapons production and use to learn from each other about how the Chemical Weapons Convention (CWC) can become a foundation of arms control in Africa and around the world. The author is grateful to the staff of the Organization for the Prohibition of Chemical Weapons (OPCW) for inviting him to address this distinguished seminar. The views expressed in this paper are those of the authors alone, and do not represent the position of the government of the US nor of any other institution. In 1993, as the process of CWC ratification was beginning, concerns arose that the complexity of integrating the treaty with national law would cause each nation to implement the Convention without regard to what other nations were doing, thereby causing inconsistencies among States Parties in how the Convention would be carried out. As a result the Manual for National Implementation of the Chemical Weapons Convention was prepared and presented to each national delegation at the December 1993 meeting of the Preparatory Commission in The Hague. During its preparation, the Manual was reviewed by the Committee of Legal Experts on National Implementation of the Chemical Weapons Convention, a group of distinguished international jurists, law professors, legally-trained diplomats, government officials, and Parliamentarians from every region of the world, including Africa. In February 1998, the second edition of the Manual was published in order to update it in light of developments since the CWC entered into force on 29 April 1997. The second edition clarified the national implementation options to reflect post-entry-into-force thinking, added extensive references to national implementing measures that had been enacted by various States Parties, and included a prototype national implementing statute developed by the authors to provide a starting point for those whose national implementing
14. The sustainability transition. Beyond conventional development
Energy Technology Data Exchange (ETDEWEB)
1996-10-01
This paper synthesizes findings of the first phase in SEI's PoleStar Project - a project aimed at developing long-term strategies and policies for sustainable development. Taking a global and long-range perspective, the paper aims to describe a theoretical framework for addressing sustainability, to identify emerging issues and outline directions for future action. The paper begins by setting today's development and environmental challenges in historical context, and describing the scenario method for envisioning and evaluating alternative futures, and identifying propitious areas for policy and action. It next summarizes a detailed scenario based on conventional development assumptions, and discusses the implications of this scenario for demographic and economic patterns, energy and water resources, land resources and agriculture, and pollution loads and the environment to the year 2050. The conventional scenario relies in part on the sectorally-oriented work discussed in Papers 3 through 6 of the PoleStar Project report series, and makes use of the PoleStar System, software designed for integrated resource, environment and socio-economic accounting and scenario analysis (described in Paper 2). The paper then examines the critical risks to social, resource and environmental systems lying ahead on the conventional development path. Finally, the paper surveys the requirements for sustainability across a number of policy dimensions, and raises key questions for the future. The PoleStar Project is proceeding to examine a range of alternative development scenarios, in the context of the work of the regionally-diverse Global Scenario Group, convened by SEI. The hope remains to offer wise counsel for a transition to an equitable, humane and sustainable future for the global community. 144 refs, 30 figs, 9 tabs
15. The sustainability transition. Beyond conventional development
International Nuclear Information System (INIS)
1996-01-01
This paper synthesizes findings of the first phase in SEI's PoleStar Project - a project aimed at developing long-term strategies and policies for sustainable development. Taking a global and long-range perspective, the paper aims to describe a theoretical framework for addressing sustainability, to identify emerging issues and outline directions for future action. The paper begins by setting today's development and environmental challenges in historical context, and describing the scenario method for envisioning and evaluating alternative futures, and identifying propitious areas for policy and action. It next summarizes a detailed scenario based on conventional development assumptions, and discusses the implications of this scenario for demographic and economic patterns, energy and water resources, land resources and agriculture, and pollution loads and the environment to the year 2050. The conventional scenario relies in part on the sectorally-oriented work discussed in Papers 3 through 6 of the PoleStar Project report series, and makes use of the PoleStar System, software designed for integrated resource, environment and socio-economic accounting and scenario analysis (described in Paper 2). The paper then examines the critical risks to social, resource and environmental systems lying ahead on the conventional development path. Finally, the paper surveys the requirements for sustainability across a number of policy dimensions, and raises key questions for the future. The PoleStar Project is proceeding to examine a range of alternative development scenarios, in the context of the work of the regionally-diverse Global Scenario Group, convened by SEI. The hope remains to offer wise counsel for a transition to an equitable, humane and sustainable future for the global community. 144 refs, 30 figs, 9 tabs
16. Alternative Fuels Data Center: Conventional Natural Gas Production
Science.gov (United States)
17. Einstein Synchronisation and the Conventionality of Simultaneity
OpenAIRE
2006-01-01
Despite its broad-ranging title, the paper settles for the related issue of whether the Special Theory of Relativity (STR) necessarily advocates the demise of an ontological difference between past and future events, between past and future in general. In the jargon of H. Stein: are we forced, within the framework of the STR, to choose only between ‘solipsism’ and ‘determinism’ exclusively? A special emphasis is placed on the role that the conventionality of simultaneity plays in the STR with rega...
18. Standardizing Naming Conventions in Radiation Oncology
Energy Technology Data Exchange (ETDEWEB)
2012-07-15
Purpose: The aim of this study was to report on the development of a standardized target and organ-at-risk naming convention for use in radiation therapy and to present the nomenclature for structure naming for interinstitutional data sharing, clinical trial repositories, integrated multi-institutional collaborative databases, and quality control centers. This taxonomy should also enable improved plan benchmarking between clinical institutions and vendors and facilitation of automated treatment plan quality control. Materials and Methods: The Advanced Technology Consortium, Washington University in St. Louis, Radiation Therapy Oncology Group, Dutch Radiation Oncology Society, and the Clinical Trials RT QA Harmonization Group collaborated in creating this new naming convention. The International Commission on Radiation Units and Measurements guidelines have been used to create standardized nomenclature for target volumes (clinical target volume, internal target volume, planning target volume, etc.), organs at risk, and planning organ-at-risk volumes in radiation therapy. The nomenclature also includes rules for specifying laterality and margins for various structures. The naming rules distinguish tumor and nodal planning target volumes, with correspondence to their respective tumor/nodal clinical target volumes. It also provides rules for basic structure naming, as well as an option for more detailed names. Names of nonstandard structures used mainly for plan optimization or evaluation (rings, islands of dose avoidance, islands where additional dose is needed [dose painting]) are identified separately. Results: In addition to its use in 16 ongoing Radiation Therapy Oncology Group advanced technology clinical trial protocols and several new European Organization for Research and Treatment of Cancer protocols, a pilot version of this naming convention has been evaluated using patient data sets with varying treatment sites. All structures in these data sets were
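Rule-based nomenclatures like the one described above lend themselves to automated checking, which is exactly the plan-quality-control use case the abstract mentions. The following is a minimal, hypothetical sketch of such a validator; the specific target prefixes, laterality suffixes, and dose field in the patterns are illustrative assumptions, not the published standard:

```python
import re

# Hypothetical patterns in the spirit of the standardized nomenclature
# described above; the exact fields are assumptions, not the real standard.
# Targets: GTV/CTV/ITV/PTV, optional tumor/nodal marker, optional dose field,
# e.g. "PTVp_7000". Organs at risk: CamelCase name, optional laterality
# suffix, optional planning-risk-volume margin, e.g. "Parotid_L",
# "SpinalCord_PRV05".
TARGET = re.compile(r"^(GTV|CTV|ITV|PTV)(p|n)?(_\d{4})?$")
OAR = re.compile(r"^[A-Z][A-Za-z]+(_[LR])?(_PRV\d{2})?$")

def is_valid_structure_name(name: str) -> bool:
    """Return True if the name matches one of the assumed naming patterns."""
    return bool(TARGET.match(name) or OAR.match(name))
```

A QC pipeline could run such a check over every structure name in an incoming plan and flag nonconforming names for manual review before the plan enters a shared database.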
19. 9th Structural Engineering Convention 2014
CERN Document Server
2015-01-01
The book presents research papers contributed by academicians, researchers, and practicing structural engineers from India and abroad at the recently held Structural Engineering Convention (SEC) 2014 at Indian Institute of Technology Delhi during 22 – 24 December 2014. The book is divided into three volumes and encompasses multidisciplinary areas within structural engineering, such as earthquake engineering and structural dynamics, structural mechanics, finite element methods, structural vibration control, advanced cementitious and composite materials, bridge engineering, and soil-structure interaction. Advances in Structural Engineering is a useful reference material for the structural engineering fraternity, including undergraduate and postgraduate students, academicians, researchers and practicing engineers.
20. National Convention on Family Life Education.
Science.gov (United States)
1973-12-01
This secretarial report gives brief comments on some discussion of topics at the National Convention on Family Life Education. Discussion included: 1) legalized prostitution as a means to reduce venereal disease; 2) promotion of family life education by government and civic groups; 3) more authority for the Population Council; 4) more liberal abortion legislation than previously; 5) statutory notification of venereal disease by medical practitioners; 6) compensatory measures for working women with young children; and 7) the need for modernization of legislation pertaining to child health, adoption, paternity, the Persons Act, infant life preservation, drugs, age of consent, and the age of minority.
1. Standardizing Naming Conventions in Radiation Oncology
International Nuclear Information System (INIS)
Santanam, Lakshmi; Hurkmans, Coen; Mutic, Sasa; Vliet-Vroegindeweij, Corine van; Brame, Scott; Straube, William; Galvin, James; Tripuraneni, Prabhakar; Michalski, Jeff; Bosch, Walter
2012-01-01
Purpose: The aim of this study was to report on the development of a standardized target and organ-at-risk naming convention for use in radiation therapy and to present the nomenclature for structure naming for interinstitutional data sharing, clinical trial repositories, integrated multi-institutional collaborative databases, and quality control centers. This taxonomy should also enable improved plan benchmarking between clinical institutions and vendors and facilitation of automated treatment plan quality control. Materials and Methods: The Advanced Technology Consortium, Washington University in St. Louis, Radiation Therapy Oncology Group, Dutch Radiation Oncology Society, and the Clinical Trials RT QA Harmonization Group collaborated in creating this new naming convention. The International Commission on Radiation Units and Measurements guidelines have been used to create standardized nomenclature for target volumes (clinical target volume, internal target volume, planning target volume, etc.), organs at risk, and planning organ-at-risk volumes in radiation therapy. The nomenclature also includes rules for specifying laterality and margins for various structures. The naming rules distinguish tumor and nodal planning target volumes, with correspondence to their respective tumor/nodal clinical target volumes. It also provides rules for basic structure naming, as well as an option for more detailed names. Names of nonstandard structures used mainly for plan optimization or evaluation (rings, islands of dose avoidance, islands where additional dose is needed [dose painting]) are identified separately. Results: In addition to its use in 16 ongoing Radiation Therapy Oncology Group advanced technology clinical trial protocols and several new European Organization for Research and Treatment of Cancer protocols, a pilot version of this naming convention has been evaluated using patient data sets with varying treatment sites. All structures in these data sets were
2. Unconventional applications of conventional intrusion detection sensors
International Nuclear Information System (INIS)
Williams, J.D.; Matter, J.C.
1983-01-01
A number of conventional intrusion detection sensors exist for the detection of persons entering buildings, moving within a given volume, and crossing a perimeter isolation zone. Unconventional applications of some of these sensors have recently been investigated. Some of the applications discussed include detection on the edges and tops of buildings, detection in storm sewers, detection on steam and other types of large pipes, and detection of unauthorized movement within secure enclosures. The enclosures can be used around complicated control valves, electrical control panels, emergency generators, etc.
3. Standardizing naming conventions in radiation oncology.
Science.gov (United States)
Santanam, Lakshmi; Hurkmans, Coen; Mutic, Sasa; van Vliet-Vroegindeweij, Corine; Brame, Scott; Straube, William; Galvin, James; Tripuraneni, Prabhakar; Michalski, Jeff; Bosch, Walter
2012-07-15
The aim of this study was to report on the development of a standardized target and organ-at-risk naming convention for use in radiation therapy and to present the nomenclature for structure naming for interinstitutional data sharing, clinical trial repositories, integrated multi-institutional collaborative databases, and quality control centers. This taxonomy should also enable improved plan benchmarking between clinical institutions and vendors and facilitation of automated treatment plan quality control. The Advanced Technology Consortium, Washington University in St. Louis, Radiation Therapy Oncology Group, Dutch Radiation Oncology Society, and the Clinical Trials RT QA Harmonization Group collaborated in creating this new naming convention. The International Commission on Radiation Units and Measurements guidelines have been used to create standardized nomenclature for target volumes (clinical target volume, internal target volume, planning target volume, etc.), organs at risk, and planning organ-at-risk volumes in radiation therapy. The nomenclature also includes rules for specifying laterality and margins for various structures. The naming rules distinguish tumor and nodal planning target volumes, with correspondence to their respective tumor/nodal clinical target volumes. It also provides rules for basic structure naming, as well as an option for more detailed names. Names of nonstandard structures used mainly for plan optimization or evaluation (rings, islands of dose avoidance, islands where additional dose is needed [dose painting]) are identified separately. In addition to its use in 16 ongoing Radiation Therapy Oncology Group advanced technology clinical trial protocols and several new European Organization for Research and Treatment of Cancer protocols, a pilot version of this naming convention has been evaluated using patient data sets with varying treatment sites. All structures in these data sets were satisfactorily identified using this
4. RF torch discharge combined with conventional burner
International Nuclear Information System (INIS)
Janca, J.; Tesar, C.
1996-01-01
The design of the combined flame-rf-plasma reactor and an experimental examination of this reactor are presented. Emission spectroscopy methods were used to determine the temperature in different parts of the combined burner plasma. The temperatures measured in the conventional burner reach a maximum of 1900 K, but in the burner with the superimposed rf discharge the neutral gas temperature increases substantially, up to 2600 K, and the plasma volume also increases substantially. Consequently, the residence time of reactants in the reaction zone increases
5. Conventional myelography - evaluation of risk and benefit
International Nuclear Information System (INIS)
Hentschel, F.
1989-01-01
While the benefit and methodic risk of conventional myelography (KMG) are known, a radiation risk of 0.04 to 0.9 annual radiation-induced cancers can be estimated for all inhabitants of the GDR, depending on the investigated region and the technique used. An optimized technique can reduce the radiation burden to 50 or 25%. With comparable values of benefit and radiation risk, spinal CT and KMG are not contradictory but complementary investigations. Alternative methods (MRT, US) must be discussed not from the standpoint of radiation burden, but according to their availability and their methodic limitations. (author)
Directory of Open Access Journals (Sweden)
2014-06-01
Full Text Available Arthritides are acute or chronic inflammations of one or more joints. The most common types of arthritis are osteoarthritis and rheumatoid arthritis, but there are more than 100 different forms. A correct and early diagnosis is extremely important for the prevention of eventual structural and functional disability of the affected joint. Imaging findings, especially those of advanced-level imaging, play a major role in diagnosing and in monitoring the progression of arthritis or its response to therapy. The objective of this review is to discuss the findings of conventional and advanced radiological imaging of the most common arthritides and to present a simplified approach for their radiological evaluation.
7. Adding intelligence to conventional industrial robots
International Nuclear Information System (INIS)
Harrigan, R.W.
1993-01-01
Remote systems are needed to accomplish many tasks, such as the cleanup of waste sites, in which the exposure of personnel to radiation, chemical, explosive, and other hazardous constituents is unacceptable. In addition, hazardous operations which in the past have been completed by technicians are under increased scrutiny due to the high costs and low productivity associated with providing protective clothing and environments. Traditional remote operations have, unfortunately, proven to have very low productivity when compared with unencumbered human operators. However, recent advances in integrating sensors and computing into the control of conventional remotely operated industrial equipment have shown great promise for providing systems capable of solving difficult problems
8. Guidelines regarding National Reports under the Convention on Nuclear Safety
International Nuclear Information System (INIS)
2013-01-01
These Guidelines, established by the Contracting Parties pursuant to Article 22 of the Convention on Nuclear Safety (hereinafter called the Convention), are intended to be read in conjunction with the text of the Convention. Their purpose is to provide guidance to the Contracting Parties regarding material that may be useful to include in the National Reports required under Article 5 of the Convention and thereby to facilitate the most efficient review of implementation by the Contracting Parties of their obligations under the Convention.
9. Indonesia Interest in International Labor Organization (ILO) Convention No.189
OpenAIRE
2014-01-01
This research aims to analyze Indonesia's interest in ILO Convention No. 189 on Decent Work for Domestic Workers. Indonesia has a massive number of domestic workers, a consequence of the low quality of education. Indonesia therefore agreed to the creation of ILO Convention No. 189 to protect those of its citizens who work as domestic workers. However, although the agreement creating ILO Convention No. 189 dates from 2011, Indonesia had not ratified the Convention as of 2013. If Indonesia has ratified this convention prev...
10. Implementation of the Aarhus convention - A survey
Directory of Open Access Journals (Sweden)
Marina Malis Sazdovska
2016-11-01
Full Text Available Legislation on a global and regional level in the field of environmental protection is characterized by the adoption of international conventions and agreements that attempt to regulate this matter legally. As an extremely important area, which exceeds the boundaries of the nation-state, and as a global environmental problem, the issues of environmental protection are a major concern to international organizations. This is directly linked to reducing the jurisdiction of states and the transfer of competences to international organizations and institutions in order to solve the problems globally. In order to overcome the problems regarding the implementation of international documents, the creation of certain policies by international organizations and institutions is required to promote the idea of environmental protection as a basic norm of the global world. Taking into account the recommendations of the Brundtland Commission, humanity has a moral obligation to preserve natural resources for future generations. The main objective of this article is the presentation of research on the implementation of the Aarhus Convention and the proposal of measures for the creation of ideas and policies on improving access to information in the field. The research was done with students from the Faculty of Security who accessed information in environmental matters.
11. Future directions conventional oil supply, Western Canada
International Nuclear Information System (INIS)
Campbell, G.R.; Hayward, J.
1997-01-01
The history of the Canadian oil industry was briefly sketched and the future outlook for crude oil and natural gas liquids in western Canada was forecast.
The historical review encompassed some of the significant events in the history of the Canadian oil industry, including the Leduc discovery in 1947, the Swan Hills discovery in 1957, the start of commercial production from the Athabasca oil sands in 1967, the discovery of the Hibernia oilfield offshore Newfoundland in 1979, and the onset of the use of horizontal production wells in western Canada in 1987. The resource base, supply costs, and the technology that is being developed to reduce costs and to improve recovery were reviewed. Future oil prices were predicted, taking into account the costs associated with technological developments. It was suggested that the character of the industry is undergoing a change from an industry dominated by conventional supply to a mixed industry, with increasing volumes of heavy oil, primary bitumen, synthetic oil and frontier supply replacing 'conventional' light crude oil. Projections into the future are subject to uncertainty on the supply as well as on the demand side. The potential impact of technology can significantly affect demand, and technological developments can yield additional supplies which exceed current expectations. 10 figs
12. Non-conventional energy and propulsion methods
International Nuclear Information System (INIS)
Valone, T.
1991-01-01
From the disaster of the Space Shuttle Challenger to the Kuwaiti oil well fires, we are reminded constantly of our dependence on dangerous, combustible fuels for energy and propulsion. Over the past ten years, there has been a considerable production of new and exciting inventions which defy conventional analysis. The term non-conventional was coined in 1980 by a Canadian engineer to designate a separate technical discipline for this type of endeavor. Since then, several conferences have been devoted solely to these inventions.
Integrity Research Corp., an affiliate of the Institute, has made an effort to investigate each viable product, develop business plans for several to facilitate development and marketing, and in some cases, assign an engineering student intern to building a working prototype. Each inventor discussed in this presentation has produced a unique device for free energy generation or highly efficient force production. Included in this paper is also a short summary for non-specialists explaining the physics of free energy generation, along with a working definition. The main topics of discussion include: space power, inertial propulsion, kinetobaric force, magnetic motors, thermal fluctuations, over-unity heat pumps, ambient-temperature superconductivity and the nuclear battery
13. The Chemical Weapons Convention -- Legal issues
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-08-01
The Chemical Weapons Convention (CWC) offers a unique challenge to the US system of constitutional law. Its promise of eliminating what is the most purely genocidal type of weapon from the world's arsenals, as well as of destroying the facilities for producing these weapons, brings with it a set of novel legal issues. The reservations about the CWC expressed by US business people are rooted in concern about safeguarding confidential business information and protecting the constitutional right to privacy. The chief worry is that international verification inspectors will misuse their power to enter commercial property and that trade secrets or other private information will be compromised as a result. It has been charged that the Convention is probably unconstitutional. The author categorically disagrees with that view and is aware of no scholarly writing that supports it. The purpose of this presentation is to show that CWC verification activities can be implemented in the US consistently with the traditional constitutional regard for commercial and individual privacy.
First, he very briefly reviews the types of verification inspections that the CWC permits, as well as some of its specific privacy protections. Second, he explains how the Fourth Amendment right to privacy works in the context of CWC verification inspections. Finally, he reviews how verification inspections can be integrated into these constitutional requirements in the US through a federal implementing statute.
14. Technological advancements revitalize conventional oil sector
International Nuclear Information System (INIS)
Thomson, L.
2000-01-01
Maturing reserves in the Western Canada Sedimentary Basin are resulting in a gradual shift of focus from huge new discoveries and wildcat gushers to developing new technologies for exploration and enhanced recovery techniques of production, keeping costs down and reducing environmental impacts, as a means of keeping conventional oil plays a viable force in the oil and gas industry. The value in refocusing efforts towards technology development is given added weight by a recent announcement by the Petroleum Communication Foundation, which stated that in addition to the oil sands and offshore oil and gas developments, one of the country's largest undeveloped oil resources is the 70 per cent of discovered crude oil in western Canadian pools that cannot be recovered by current conventional production techniques. Therefore, development of new technologies to exploit these currently unrecoverable resources is a matter of high priority. To remain competitive, the new techniques must also lower the cost of recovering oil from these sources, given that the cost of oil production in Canada is already higher than that in most other competing countries
15. Accuracy of Digital vs. Conventional Implant Impressions
Science.gov (United States)
Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.
2015-01-01
The accuracy of digital impressions greatly influences the clinical viability in implant restorations.
The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported into inspection software. The datasets were aligned to the reference dataset by a repeated best-fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423
16. The EU Arbitration Convention : An evaluating assessment of the governance and functioning of the EU Arbitration Convention
NARCIS (Netherlands)
Pit, Harm Mark
2017-01-01
The EU Arbitration Convention. An evaluating assessment of the governance and functioning of the EU Arbitration Convention. Summary for non-experts: The EU Arbitration Convention is a convention between EU Member States to eliminate double taxation arising from – for tax purposes – transfer pricing
17. Reframing less conventional speech to disrupt conventions of "compulsory fluency": A conversation analysis approach
Directory of Open Access Journals (Sweden)
Camille Duque
2018-05-01
Full Text Available Our purpose is to illuminate compliances with, and resistances to, what we are calling "compulsory fluency", which we define as conventions for what constitutes competent speech. We achieve our purpose through a study of day-to-day communication between a woman with less conventional speech and her support-providing family members and friends. Drawing from McRuer's (2006) compulsory ablebodiedness and Kafer's (2013) compulsory able-mindedness, we use "compulsory fluency" to refer to a form of articulation that is standardized and idealized and imposed on all speakers, including those whose speech is less conventional. We see compulsory fluency as central to North American conceptions of personhood, which are tied to the individual's ability to speak for one's self (Brueggemann, 2005). In this paper, we trace some North American principles for linguistic competence to outline widely held ideals of receptive and expressive language use, namely, conventions for how language should be understood and expressed. Using Critical Disability Studies (Goodley, 2013; McRuer, 2006) together with a feminist framework of relational autonomy (Nedelsky, 1989), our goal is to focus on experiences of people with less conventional speech and draw attention to power in communication as it flows in idiosyncratic and intersubjective fashion (Mackenzie & Stoljar, 2000; Westlund, 2009). In other words, we use a critical disability and feminist framing to call attention to less conventional forms of communication competence and, in this process, we challenge assumptions about what constitutes competent speech.
As part of a larger qualitative study, we conduct a conversation analysis informed by Rapley and Antaki (1996) to examine day-to-day verbal, vocal and non-verbal communications of a young woman who self-identifies as "having autism" - pseudonym Addison - in interaction with her support-providing family members and friends. We illustrate a multitude of Addison's compliances with
18. Supersymmetry Parameter Analysis : SPA Convention and Project
CERN Document Server
Aguilar-Saavedra, J A; Allanach, Benjamin C; Arnowitt, R; Baer, H A; Bagger, J A; Balázs, C; Barger, V; Barnett, M; Bartl, Alfred; Battaglia, M; Bechtle, P; Belyaev, A; Berger, E L; Blair, G; Boos, E; Bélanger, G; Carena, M S; Choi, S Y; Deppisch, F; Desch, Klaus; Djouadi, A; Dutta, B; Dutta, S; Díaz, M A; Eberl, H; Ellis, Jonathan Richard; Erler, Jens; Fraas, H; Freitas, A; Fritzsche, T; Godbole, Rohini M; Gounaris, George J; Guasch, J; Gunion, J F; Haba, N; Haber, Howard E; Hagiwara, K; Han, L; Han, T; He, H J; Heinemeyer, S; Hesselbach, S; Hidaka, K; Hinchliffe, Ian; Hirsch, M; Hohenwarter-Sodek, K; Hollik, W; Hou, W S; Hurth, Tobias; Jack, I; Jiang, Y; Jones, D R T; Kalinowski, Jan; Kamon, T; Kane, G; Kang, S K; Kernreiter, T; Kilian, W; Kim, C S; King, S F; Kittel, O; Klasen, M; Kneur, J L; Kovarik, K; Kraml, Sabine; Krämer, M; Lafaye, R; Langacker, P; Logan, H E; Ma, W G; Majerotto, Walter; Martyn, H U; Matchev, K; Miller, D J; Mondragon, M; Moortgat-Pick, G; Moretti, S; Mori, T; Moultaka, G; Muanza, S; Mukhopadhyaya, B; Mühlleitner, M M; Nauenberg, U; Nojiri, M M; Nomura, D; Nowak, H; Okada, N; Olive, Keith A; Oller, W; Peskin, M; Plehn, T; Polesello, G; Porod, Werner; Quevedo, Fernando; Rainwater, D L; Reuter, J; Richardson, P; Rolbiecki, K; de Roeck, A; Weber, Ch.
2006-01-01
High-precision analyses of supersymmetry parameters aim at reconstructing the fundamental supersymmetric theory and its breaking mechanism.
A well-defined theoretical framework is needed when higher-order corrections are included. We propose such a scheme, Supersymmetry Parameter Analysis (SPA), based on a consistent set of conventions and input parameters. A repository for computer programs is provided which connect parameters in different schemes and relate the Lagrangian parameters to physical observables at LHC and high energy e+e- linear collider experiments, i.e., masses, mixings, decay widths and production cross sections for supersymmetric particles. In addition, programs for calculating high-precision low energy observables, the density of cold dark matter (CDM) in the universe as well as the cross sections for CDM search experiments are included. The SPA scheme still requires extended efforts on both the theoretical and experimental side before data can be evaluated in the future at the level of the d...
19. For a convention for nuclear weapon elimination
International Nuclear Information System (INIS)
2008-03-01
This document contains two texts linked with the project of an international convention for the elimination of nuclear weapons (the text of this project has been sent to the UN General Secretary and is part of an international campaign to abolish nuclear weapons, ICAN). These two texts are contributions presented in London at the Global Summit for a Nuclear Weapon-free World. The first one calls into question the deterrence principle and the idea of a nuclear weapon-based security. It calls for different forms of action to promote a nuclear weapon-free world. The second text stresses the role and the responsibility of states with nuclear weapons in nuclear disarmament and in the reinforcement of the nuclear non-proliferation treaty (NPT)
20. Muzzle shunt augmentation of conventional railguns
International Nuclear Information System (INIS)
Parker, J.V.
1991-01-01
This paper reports on augmentation, which is a technique for reducing the armature current and hence the armature power dissipation in a plasma armature railgun. In spite of the advantages, no large augmented railguns have been built, primarily due to the mechanical and electrical complexity introduced by the extra conductors required. It is possible to achieve some of the benefits of augmentation in a conventional railgun by diverting a fraction φ of the input current through a shunt path at the muzzle of the railgun. In particular, the relation between force and armature current is the same as that obtained in an n-turn, series-connected augmented railgun with n = 1/(1 - φ). The price of this simplification is a reduction in electrical efficiency and some additional complexity in the external electrical system
1. Conventional radiology: fixed installations in medical environment
International Nuclear Information System (INIS)
2010-01-01
This document presents the different procedures, the different types of specific hazards, the analysis of risks, their assessment and the preventive methods with regard to radioprotection in the case of fixed conventional radiology equipment in medical environment. It indicates and describes the concerned personnel, the course of procedures, the hazards, the identification of the risk associated with ionizing radiation, the risk assessment and the determination of exposure levels (definition of regulated areas, personnel categories), the strategy aimed at controlling the risk (risk reduction, technical measures concerning the installation or the personnel, teaching and information, prevention, incident), the different measures of medical monitoring, the assessment of risk control, and other risks. An appendix proposes an example of workstation assessment
2. Rhegmatogenous retinal detachment and conventional surgical treatment.
Science.gov (United States)
Golubovic, M
2013-01-01
The aim of the paper was to present the efficacy and indications for application of conventional surgical treatment of retinal detachment by using external implants, that is, application of encircling band and buckle. This study comprised patients from the University Eye Clinic in Skopje. A total of 33 patients were diagnosed and surgically treated in the period between May 2010 and August 2011. Conventional surgery was applied in a smaller number of patients whose changes of the vitreous body were manifested by detachment of the posterior hyaloid membrane, syneresis, with appearance of a small number of pigment cells in the vitreous body and synchysis, and the very retina was with fresh detachment without folds or epiretinal changes (that is, PVR grade A). There were a larger number of patients with more distinct proliferative changes of the vitreous body and of the retina, grades PVR B to C1-C2, who also underwent the same surgical approach. Routine ophthalmologic examinations were performed, including: determination of visual acuity by Snellen's optotypes, determination of eye pressure with Schiotz's tonometer, examination of the anterior segment on biomicroscopy, indirect biomicroscopy of the posterior eye segment (vitreous body and retina), examination on biomicroscopy with Goldmann prism, and B-scan echography of the eyes before and after surgical treatment. Conventional treatment was used by external application of a buckle or application of a buckle and encircling band. In case of one break, a radial buckle was applied, and in case of multiple breaks in one quadrant a limbus-parallel buckle was applied. Besides the buckle, an encircling band was applied in patients with total or subtotal retinal detachment with already present distinct changes in the vitreous body (PVR B or C1-C2) and degenerative changes in the vitreous body. Breaks were closed with cryopexy.
The results obtained have shown that male gender was predominant and that the disease was manifested in younger male adults

3. Technologies for the future : conventional recovery enhancement
Energy Technology Data Exchange (ETDEWEB)
Isaacs, E. [Alberta Energy Research Inst., Edmonton, AB (Canada)
2005-07-01
This conference presentation examined Alberta's oil production and water use; global finding and development costs across continents; and current trends for conventional oil. The presentation examined opportunities for testing new technologies for enhanced oil recovery (EOR) and provided several tables of data on EOR production in the United States. The evolution of United States EOR production and the number of EOR projects in Canada were also addressed. The presentation also discussed where EOR goes from here as well as the different EOR mechanisms to alter phase behaviour and to alter relative flow. It also discussed chemical methods and major challenges for chemical EOR and examined EOR technologies needing a major push in the Western Canada Sedimentary Basin. Lessons learned from the Joffre site regarding carbon dioxide miscible flood were revealed, along with how coal gasification produces substitute natural gas and carbon dioxide for EOR. Suggestions for research and technology and enhanced water management were included. tabs., figs.

4. Limitation and reduction of conventional arms
International Nuclear Information System (INIS)
Chervov, N.
1989-01-01
We are living at a time when war between East and West---not only nuclear but also conventional war---is totally senseless. It cannot solve any problem---political, economic, or other. From the military point of view, war between East and West is madness. Calculations show that after 20 days of conventional warfare Europe could become another Hiroshima. Therefore we must work out forms of long-term cooperation.
Before it is too late, we must radically reduce our military potentials and rethink our military doctrines. The reduction by 500,000 men is for the USSR no simple solution. But that step may become a model for further actions by East and West. The West's proposal that armed forces should be reduced to the level of 95 percent of NATO's armed forces is not a solution. Both sides---the Warsaw Treaty Organization and NATO---must be deprived of the capacity to launch a sudden attack; they must be deprived of their attack potential. The USSR initiative shows the true way toward that goal. What is happening in connection with our decision is not always correctly interpreted in the West, and so I should like to draw attention to some distinctive features of the Soviet armed forces reductions and, first of all, their scale (equivalent to the Bundeswehr of the Federal Republic of Germany). With respect to Europe, Soviet troops are to be reduced in the German Democratic Republic, Czechoslovakia, Hungary, Poland, and the European part of the Soviet Union---a total of 240,000 men, 10,000 tanks, 9,500 artillery systems, and 800 combat aircraft

5. Innovation and the Development Convention in Brazil
Directory of Open Access Journals (Sweden)
Fabio Stefano Erber
2004-01-01
means to achieving fast and stable economic growth. Nonetheless, the degree of endogenous technical innovation in Brazil remains very low. This paper explores the conjecture that the latter result is a consequence of the hegemonic view of development. The first section presents some quantitative and qualitative data to support our assertion about the innovativeness of the Brazilian economy. The second section argues that the “view of development” may be profitably treated as a “convention”, a set of beliefs shared by decision-makers and used to identify the main issues which a development strategy has to tackle and the appropriate means to address such issues.
A development convention contains also a “negative” agenda — issues and solutions which should be avoided. The same section then analyses the development convention which was hegemonic from the nineties to the date of the paper (2002) and the implications of its positive and negative agendas for technological development, assuming such convention had worked as its supporters supposed it would. It argues that the theoretical results are consistent with the facts described in the first section. The last section comments on the actual working of the development convention, arguing that it stressed the main technological features present in the “pure form” of the convention, and concludes with a brief discussion of the role of innovation in a new development convention which seemed to be arising at that time.

6. Analysis of non-melanoma skin cancer across the tofacitinib rheumatoid arthritis clinical programme.
Science.gov (United States)
Curtis, Jeffrey R; Lee, Eun Bong; Martin, George; Mariette, Xavier; Terry, Ketti K; Chen, Yan; Geier, Jamie; Andrews, John; Kaur, Mandeep; Fan, Haiyun; Nduaka, Chudy I
2017-01-01
Tofacitinib is an oral Janus kinase inhibitor for the treatment of rheumatoid arthritis (RA). We evaluated the incidence of non-melanoma skin cancer (NMSC) across the tofacitinib RA development programme. NMSC events (through August 2013) were identified in patients receiving tofacitinib in two Phase (P)1, eight P2, six P3 and two long-term extension (LTE) studies. In P123 studies, tofacitinib was administered at various doses (1-30 mg twice daily [BID], 20 mg once daily), as monotherapy or with conventional synthetic disease-modifying anti-rheumatic drugs, mainly methotrexate. In LTE studies, patients from qualifying P123 studies received tofacitinib 5 or 10 mg BID. Crude incidence rates (IRs; patients with events/100 patient-years) for first NMSC event were evaluated across doses and over time.
In the overall population, comprising data from 18 studies (15,103 patient-years), 83 of 6092 tofacitinib-treated patients had NMSC events. The IR for NMSC (0.55 [95% confidence interval, 0.45-0.69] overall population) was stable up to 84 months of observation. IRs for tofacitinib 5 and 10 mg BID in combined P123 trials were 0.61 (0.34-1.10) and 0.47 (0.24-0.90), respectively. Corresponding IRs for LTE studies were 0.41 (0.26-0.66) and 0.79 (0.60-1.05). The IR for NMSC across the tofacitinib RA clinical development programme was low and remained stable over time. The IR for NMSC in LTE studies was numerically but not significantly higher with tofacitinib 10 versus 5 mg BID; an inverse dose relationship was observed in P123 trials. Longer follow-up is required to confirm these results.

7. Cost-effectiveness of sequenced treatment of rheumatoid arthritis with targeted immune modulators.
Science.gov (United States)
Jansen, Jeroen P; Incerti, Devin; Mutebi, Alex; Peneva, Desi; MacEwan, Joanna P; Stolshek, Bradley; Kaur, Primal; Gharaibeh, Mahdi; Strand, Vibeke
2017-07-01
To determine the cost-effectiveness of treatment sequences of biologic disease-modifying anti-rheumatic drugs or Janus kinase/STAT pathway inhibitors (collectively referred to as bDMARDs) vs conventional DMARDs (cDMARDs) from the US societal perspective for treatment of patients with moderately to severely active rheumatoid arthritis (RA) with inadequate responses to cDMARDs. An individual patient simulation model was developed that assesses the impact of treatments on disease based on clinical trial data and real-world evidence. Treatment strategies included sequences starting with etanercept, adalimumab, certolizumab, or abatacept. Each of these treatment strategies was compared with cDMARDs. Incremental cost, incremental quality-adjusted life-years (QALYs), and incremental cost-effectiveness ratios (ICERs) were calculated for each treatment sequence relative to cDMARDs.
The cost-effectiveness of each strategy was determined using a US willingness-to-pay (WTP) threshold of $150,000/QALY. For the base-case scenario, bDMARD treatment sequences were associated with greater treatment benefit (i.e. more QALYs), lower lost productivity costs, and greater treatment-related costs than cDMARDs. The expected ICERs for bDMARD sequences ranged from ~$126,000 to $140,000 per QALY gained, which is below the US-specific WTP. Alternative scenarios examining the effects of homogeneous patients, dose increases, increased costs of hospitalization for severely physically impaired patients, and a lower baseline Health Assessment Questionnaire (HAQ) Disability Index score resulted in similar ICERs. bDMARD treatment sequences are cost-effective from a US societal perspective.
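The ICER logic in the abstract above (incremental cost divided by incremental QALYs gained, compared against a willingness-to-pay threshold) can be sketched as follows. The WTP threshold comes from the abstract; the cost and QALY inputs are hypothetical round numbers for illustration, not values from the study:

```python
# Sketch of an incremental cost-effectiveness ratio (ICER) check.
# Inputs below are hypothetical; the study's own simulation model is far richer.

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """ICER = incremental cost per incremental QALY gained vs. the reference."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

WTP = 150_000  # US willingness-to-pay threshold, $/QALY (from the abstract)

# Hypothetical bDMARD sequence vs. cDMARD comparator
ratio = icer(cost_new=400_000, qaly_new=12.0, cost_ref=140_000, qaly_ref=10.0)
print(ratio)        # 130000.0 -> $130,000 per QALY gained
print(ratio <= WTP) # True -> cost-effective at this threshold
```

A strategy is deemed cost-effective when its ICER falls below the chosen WTP threshold, which is exactly the comparison the abstract reports for the $126,000-$140,000 range.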
8. Disease activity in and quality of life of patients with psoriatic arthritis mutilans: the Nordic PAM Study.
Science.gov (United States)
Lindqvist, U; Gudbjornsson, B; Iversen, L; Laasonen, L; Ejstrup, L; Ternowitz, T; Ståhle, M
2017-11-01
To describe the social status and health-related quality of life of patients with psoriatic arthritis mutilans (PAM) in the Nordic countries. Patients with at least one mutilated joint confirmed by radiology were studied. Disease activity involving joints and skin, physician-assessed disease activity, and patient's education and work status were recorded. Data from the 36-item Short Form Health Survey, Health Assessment Questionnaire and Dermatology Life Quality Index questionnaire were gathered and correlated with disease duration, pain, and general well-being (visual analogue scale). The controls were 58 Swedish patients with long-standing psoriatic arthritis sine PAM. Sixty-seven patients were included. Patients with PAM had a protracted disease history (33 ± 14 years) and disease onset at a relatively early age (30 ± 12 years). Overall inflammatory activity at inclusion was mild to moderate. The mean number of mutilated joints was 8.2 and gross deformity was found in 16% of patients. Forty per cent were treated with biological and 32% with conventional synthetic disease-modifying anti-rheumatic drugs. Forty-two per cent had retired early or were on sick leave. Impaired functional capacity with little or no ability to perform self-care or everyday tasks was reported by 21% of the patients. Patients between 45 and 60 years of age reported the most impaired quality of life in comparison to the control group. PAM seriously affects social functioning. Whether early recognition of PAM and new forms of therapy can improve disease outcome and quality of life remains to be studied.
9. The impact of targeted Rheumatoid Arthritis pharmacological treatment on mental health: A systematic review and network meta-analysis.
Science.gov (United States)
Matcham, Faith; Galloway, James; Hotopf, Matthew; Roberts, Emmert; Scott, Ian C; Steer, Sophia; Norton, Sam
2018-06-06
10. Biomass energy conversion: conventional and advanced technologies
Energy Technology Data Exchange (ETDEWEB)
Young, B C; Hauserman, W B [Energy and Environmental Research Center, University of North Dakota, Grand Forks, ND (United States)
1995-12-01
Increasing interest in biomass energy conversion in recent years has focused attention on enhancing the efficiency of technologies converting biomass fuels into heat and power, their capital and operating costs and their environmental emissions. Conventional combustion systems, such as fixed-bed or grate units and entrainment units, deliver lower efficiencies (<25%) than modern coal-fired combustors (30-35%). The gasification of biomass will improve energy conversion efficiency and yield products useful for heat and power generation and chemical synthesis. Advanced biomass gasification technologies using pressurized fluidized-bed systems, including those incorporating hot-gas clean-up for feeding gas turbines or fuel cells, are being demonstrated. However, many biomass gasification processes are derivatives of coal gasification technologies and do not exploit the unique properties of biomass. This paper examines some existing and upcoming technologies for converting biomass into electric power or heat. Small-scale 1-30 MWe units are emphasized, but brief reference is made to larger and smaller systems, including those that burn coal-biomass mixtures and gasifiers that feed pilot-fuelled diesel engines. Promising advanced systems, such as a biomass integrated gasifier/gas turbine (BIG/GT) with combined-cycle operation and a biomass gasifier coupled to a fuel cell, giving cycle efficiencies approaching 50%, are also described. These advanced gasifiers, typically fluid-bed designs, may be pressurized and can use a wide variety of biomass materials to generate electricity, process steam and chemical products such as methanol. Low-cost, disposable catalysts are becoming available for hot-gas clean-up (enhanced gas composition) for turbine and fuel cell systems. The advantages, limitations and relative costs of various biomass gasifier systems are briefly discussed. The paper identifies the best known biomass power projects and includes some information on proposed and
11. Biomass energy conversion: conventional and advanced technologies
International Nuclear Information System (INIS)
Young, B.C.; Hauserman, W.B.
1995-01-01
Increasing interest in biomass energy conversion in recent years has focused attention on enhancing the efficiency of technologies converting biomass fuels into heat and power, their capital and operating costs and their environmental emissions. Conventional combustion systems, such as fixed-bed or grate units and entrainment units, deliver lower efficiencies (<25%) than modern coal-fired combustors (30-35%). The gasification of biomass will improve energy conversion efficiency and yield products useful for heat and power generation and chemical synthesis. Advanced biomass gasification technologies using pressurized fluidized-bed systems, including those incorporating hot-gas clean-up for feeding gas turbines or fuel cells, are being demonstrated. However, many biomass gasification processes are derivatives of coal gasification technologies and do not exploit the unique properties of biomass. This paper examines some existing and upcoming technologies for converting biomass into electric power or heat. Small-scale 1-30 MWe units are emphasized, but brief reference is made to larger and smaller systems, including those that burn coal-biomass mixtures and gasifiers that feed pilot-fuelled diesel engines. Promising advanced systems, such as a biomass integrated gasifier/gas turbine (BIG/GT) with combined-cycle operation and a biomass gasifier coupled to a fuel cell, giving cycle efficiencies approaching 50%, are also described. These advanced gasifiers, typically fluid-bed designs, may be pressurized and can use a wide variety of biomass materials to generate electricity, process steam and chemical products such as methanol. Low-cost, disposable catalysts are becoming available for hot-gas clean-up (enhanced gas composition) for turbine and fuel cell systems. The advantages, limitations and relative costs of various biomass gasifier systems are briefly discussed. The paper identifies the best known biomass power projects and includes some information on proposed and
12. NESTA Revolutionizing Teacher's Experiences at NSTA Conventions
Science.gov (United States)
Ireton, F.
2002-05-01
National Science Teachers Association (NSTA) conventions are traditionally composed of short workshops, half or full day workshops, and lectures on science teaching or education research. Occasional science lectures such as the AGU lecture offer science content information. The National Earth Science Teachers Association (NESTA) will join the National Association of Geoscience Teachers (NAGT), American Geophysical Union (AGU), and the American Geological Institute (AGI) to bring teachers a suite of exciting and informative events at the NSTA 2002 convention. Events begin with a guided learning field trip to Mission Trails Regional Park and Torrey Pines State Reserve where Earth and space science teachers experience a model of constructivist learning techniques. Most field trips are a "show and tell" experience, designed to transmit knowledge from the field trip leader to the field trip participants. In the "guided learning" environment, the leader serves as a facilitator, asking questions, guiding participants to discover concepts for themselves. Participants examine selected processes and features that constitute a constructivist experience in which knowledge acquired at any given location builds on knowledge brought to the site. Employing this strategy involves covering less breadth but greater depth, modeling the concept of "less is more." On Thursday NESTA will host two Share-a-thons. These are not what a person would think of as a traditional workshop where a presenter makes a presentation and then the participants work on an activity. They could be called the flea market of teaching ideas. Tables are set around the perimeter of a room where the presenters are stationed. Teachers move from table to table picking up information and watching short demonstrations. The Earth and Space Science Resource Day on Friday will focus on teachers' needs. Starting with breakfast, teachers will hear from Soames Summerhays, Naturalist and President of Summerhays Films, about how he
13. The peritoneal fibrinolytic response to conventional and laparoscopic colonic surgery
NARCIS (Netherlands)
Brokelman, Walter; Holmdahl, Lena; Falk, Peter; Klinkenbijl, Jean; Reijnen, Michel
2009-01-01
Laparoscopic surgery is considered to induce less peritoneal trauma than conventional surgery. The peritoneal plasmin system is important in the processes of peritoneal healing and adhesion formation. The present study assessed the peritoneal fibrinolytic response to laparoscopic and conventional
14. Transportation management and security during the 2004 Democratic National Convention
Science.gov (United States)
2005-01-05
The transportation operations plan for the 2004 Democratic National Convention (DNC) in Boston, Massachusetts, was not a typical transportation plan driven by goals such as mobility and air quality. The DNC was the first national political convention...
15. Uncertainty, Conventions and Co-ordination in the Business Enterprise
DEFF Research Database (Denmark)
Jagd, Søren
The paper presents the basic propositions of convention theory with special consideration to the analysis of uncertainty, the role of institutions and conventions, and the implications this perspective has for the analysis of the business enterprise.
16. Guidelines regarding National Reports under the Convention on Nuclear Safety
International Nuclear Information System (INIS)
2011-01-01
These guidelines, established by the Contracting Parties pursuant to Article 22 of the Convention on Nuclear Safety (hereinafter called the Convention), are intended to be read in conjunction with the text of the Convention. Their purpose is to provide guidance to the Contracting Parties regarding material that it may be useful to include in the National Reports required under Article 5 and thereby to facilitate the most efficient review of implementation by the Contracting Parties of their obligations under the Convention
17. Guidelines regarding National Reports under the Convention on Nuclear Safety
International Nuclear Information System (INIS)
2011-01-01
These guidelines, established by the Contracting Parties pursuant to Article 22 of the Convention on Nuclear Safety (hereinafter called the Convention), are intended to be read in conjunction with the text of the Convention. Their purpose is to provide guidance to the Contracting Parties regarding material that it may be useful to include in the National Reports required under Article 5 and thereby to facilitate the most efficient review of implementation by the Contracting Parties of their obligations under the Convention
18. Protecting Bone Health in Pediatric Rheumatic Diseases: Pharmacological Considerations.
Science.gov (United States)
Zhang, Yujuan; Milojevic, Diana
2017-06-01
Bone health in children with rheumatic conditions may be compromised due to several factors related to the inflammatory disease state, delayed puberty, altered life style, including decreased physical activities, sun avoidance, suboptimal calcium and vitamin D intake, and medical treatments, mainly glucocorticoids and possibly some disease-modifying anti-rheumatic drugs. Low bone density or even fragility fractures could be asymptomatic; therefore, children with diseases of high inflammatory load, such as systemic onset juvenile idiopathic arthritis, juvenile dermatomyositis, systemic lupus erythematosus, and those requiring chronic glucocorticoids may benefit from routine screening of bone health. Most commonly used assessment tools are laboratory testing including serum 25-OH-vitamin D measurement and bone mineral density measurement by a variety of methods, dual-energy X-ray absorptiometry as the most widely used. Early disease control, use of steroid-sparing medications such as disease-modifying anti-rheumatic drugs and biologics, supplemental vitamin D and calcium, and promotion of weight-bearing physical activities can help optimize bone health. Additional treatment options for osteoporosis such as bisphosphonates are still controversial in children with chronic rheumatic diseases, especially those with decreased bone density without fragility fractures. This article reviews common risk factors leading to compromised bone health in children with chronic rheumatic diseases and discusses the general approach to prevention and treatment of bone fragility.
19. Conventional - Frontier and east coast supply
International Nuclear Information System (INIS)
Morrell, G.R.
1998-01-01
An assessment of frontier basins in Canada with proven potential for petroleum resources was provided. A prediction of which frontier basin will become a major supplier of conventional light oil was made by examining where companies are investing in frontier exploration today. Frontier land values for five active frontier areas were discussed. These included the Grand Banks of Newfoundland, Nova Scotia Offshore, Western Newfoundland, the southern Northwest Territories and the Central Mackenzie Valley. The focus of this presentation was on three of these regions which are actually producing: Newfoundland's Grand Banks, offshore Nova Scotia and the Mackenzie Valley. Activities in each of these areas were reviewed. The Canada-Newfoundland Offshore Petroleum Board has listed Hibernia's reserves at 666 million barrels. The Sable Offshore Energy Project on the continental shelf offshore Nova Scotia proposes to develop 5.4 tcf of gas plus 75 million barrels of NGLs over a project life of 14 years. In the Mackenzie Valley there are at least three petroleum systems, including the 235 million barrel pool at Norman Wells. 8 refs., 1 tab., 3 figs
20. Retrieval of buried waste using conventional equipment
International Nuclear Information System (INIS)
Valentich, D.J.
1994-01-01
A field test was conducted to determine the effectiveness of using conventional-type construction equipment for the retrieval of buried transuranic (TRU) waste. A cold test pit (nonhazardous and nonradioactive, 841 m³ in volume) was constructed with boxes and drums filled with simulated waste materials, such as metal, plastic, wood, concrete, and sludge. Large objects, including truck beds, vessels, vaults, pipes, and beams, were also placed in the pit. These materials were intended to simulate the type of waste found in existing TRU buried waste pits and trenches. A series of commercially available equipment items, such as excavators and tracked loaders outfitted with different end effectors, were used to remove the simulated waste. Work was performed from both the abovegrade and belowgrade positions. During the demonstration, a number of observations, measurements, and analyses were performed to determine which equipment was the most effective in removing the waste. The retrieval rates for the various excavation techniques were recorded. The inherent dust control capabilities of the excavation methods used were also observed
1. Challenging convention: symbolic interactionism and grounded theory.
Science.gov (United States)
Newman, Barbara
2008-01-01
Not very much is written in the literature about decisions made by researchers and the justifications on method as a result of a particular clinical problem, together with an appropriate and congruent theoretical perspective, particularly for Glaserian grounded theory. I contend the utilisation of symbolic interactionism as a theoretical perspective to inform and guide the evolving research process and analysis of data when using classic or Glaserian grounded theory (GT) method, is not always appropriate. Within this article I offer an analysis of the key issues to be addressed when contemplating the use of Glaserian GT and the utilisation of an appropriate theoretical perspective, rather than accepting convention of symbolic interactionism (SI). The analysis became imperative in a study I conducted that sought to explore the concerns, adaptive behaviours, psychosocial processes and relevant interactions over a 12-month period, among newly diagnosed persons with end stage renal disease, dependent on haemodialysis in the home environment for survival. The reality of perception was central to the end product in the study. Human ethics approval was granted by six committees within New South Wales Health Department and one from a university.
2. Non-conventional fuel tax credit
International Nuclear Information System (INIS)
Soeoet, P.M.
1988-01-01
Coal-seam methane, along with certain other non-conventional fuels, is eligible for a tax credit. This production tax credit allowed coal-seam methane producers to receive $0.7526 per million Btu of gas sold during 1986. In 1987, this credit rose to $0.78 per million Btu. The tax credit is a very significant element of the economic analysis of current coal-seam methane projects. In today's spot market, gas prices are around $1.50 per million Btu. Allowing for costs of production, the gas producer will net more income from the tax credit than from the sale of the gas. The Crude Oil Windfall Profit Tax Act of 1980 is the source of this tax credit. There were some minor changes made by subsequent legislation, but most of the tax credit has remained intact. Wells must be drilled by 1990 to qualify for the tax credit, but the production from such wells is eligible for the tax credit until 2001. Projections have been made, showing that the tax credit should increase to $0.91 per million Btu for production in 1990 and $1.34 per million Btu in 2000. Variables which may decrease the tax credit from these projections are dramatically lower oil prices or general economic price deflation
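The abstract's claim that the producer nets more from the credit than from the gas sale follows from simple per-unit arithmetic: the credit is a fixed amount per million Btu, while sale income is the spot price minus the production cost. A minimal sketch, where the credit and spot price come from the abstract but the production cost is a hypothetical figure:

```python
# Per-million-Btu economics of the 1987 non-conventional fuel tax credit.
# Credit and spot price are from the abstract; the production cost is a
# hypothetical assumption for illustration only.

credit = 0.78            # $/MMBtu tax credit (1987)
spot_price = 1.50        # $/MMBtu spot gas price
production_cost = 0.90   # $/MMBtu -- hypothetical, not from the abstract

net_from_sale = round(spot_price - production_cost, 2)
print(net_from_sale)           # 0.6
print(credit > net_from_sale)  # True: credit income exceeds net sale income
```

Under this assumed cost, the producer clears $0.60/MMBtu on the sale but $0.78/MMBtu from the credit, illustrating the abstract's point; with a lower production cost the comparison could reverse.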
3. The Convention on the Recognition and Enforcement of Foreign ...
African Journals Online (AJOL)
The Convention on the Recognition and Enforcement of Foreign Arbitral Awards, often referred to as the New York Convention, has established itself as a regulatory and enforcement instrument which is crucial to international trade. This is evident from the fact that more than 150 countries have so far ratified the convention.
4. Trends, Fashions, Patterns, Norms, Conventions...and Hypertext Too.
Science.gov (United States)
Amitay, Einat
2001-01-01
Outlines the theory behind the formation of language conventions, then reveals conventions evolving in the community of people writing hypertext on the Web. Demonstrates how these conventions can be used to augment and shift the meaning of already published hypertexts. Describes the system called InCommonSense, which reuses particular hypertext…
5. Comparison of single-port and conventional laparoscopic abdominoperineal resection
DEFF Research Database (Denmark)
Nerup, Nikolaj; Rosenstock, Steffen; Bulut, Orhan
2018-01-01
with conventional laparoscopy and 12 with SP surgery. RESULTS: Patients' characteristics were in general comparable, but patients in the conventional laparoscopy-group had a significantly higher American Society of Anesthesiologists-score. The operative time was slightly shorter in the conventional laparoscopy...
6. Assessment of undiscovered conventional oil and gas resources of Thailand
Science.gov (United States)
Schenk, Chris
2011-01-01
The U.S. Geological Survey estimated mean volumes of 1.6 billion barrels of undiscovered conventional oil and 17 trillion cubic feet of undiscovered conventional natural gas in three geologic provinces of Thailand using a geology-based methodology. Most of the undiscovered conventional oil and gas resource is estimated to be in the area known as offshore Thai Basin province.
7. 30 CFR 75.206 - Conventional roof support.
Science.gov (United States)
2010-07-01
... HEALTH MANDATORY SAFETY STANDARDS-UNDERGROUND COAL MINES Roof Support § 75.206 Conventional roof support. (a) Except in anthracite mines using non-mechanized mining systems, when conventional roof support... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Conventional roof support. 75.206 Section 75...
8. Minamata Convention on Mercury. Reporting obligations of the Parties to the Convention and the sources of data existing in Poland
Directory of Open Access Journals (Sweden)
Strzelecka-Jastrząb Ewa
2018-01-01
More than 60 years after a mass poisoning of residents of the Japanese city of Minamata by seafood contaminated with mercury, the Minamata Convention on Mercury came into force on August 16, 2017. To date, the Convention has been signed by 128 states (signatories of the Convention) and ratified by 83 states (Parties to the Convention). The Convention imposes a number of obligations on the Parties, including a reporting obligation. The paper analyses the reporting obligations of the Parties to the Convention that apply after its entry into force, pursuant to the provisions contained therein. In addition, the existing sources of quantitative data on mercury in Poland are characterized.
9. Conventional treatment planning optimization using simulated annealing
International Nuclear Information System (INIS)
Morrill, S.M.; Langer, M.; Lane, R.G.
1995-01-01
Purpose: Simulated annealing (SA) allows for the implementation of realistic biological and clinical cost functions into treatment plan optimization. However, a drawback to the clinical implementation of SA optimization is that large numbers of beams appear in the final solution, some with insignificant weights, preventing the delivery of these optimized plans using conventional (limited to a few coplanar beams) radiation therapy. A preliminary study suggested two promising algorithms for restricting the number of beam weights. The purpose of this investigation was to compare these two algorithms using our current SA algorithm, with the aim of producing an algorithm that allows clinically useful radiation therapy treatment planning optimization. Method: Our current SA algorithm, Variable Stepsize Generalized Simulated Annealing (VSGSA), was modified with two algorithms to restrict the number of beam weights in the final solution. The first selected combinations of a fixed number of beams from the complete solution space at each iterative step of the optimization process. The second reduced the allowed number of beams by a factor of two at periodic steps during the optimization process until only the specified number of beams remained. Results of optimization of beam weights and angles using these algorithms were compared using a standard cadre of abdominal cases. The solution space was defined as a set of 36 custom-shaped open and wedge-filtered fields at 10 deg. increments with a constant target volume margin of 1.2 cm. For each case a clinically accepted cost function, the minimum tumor dose, was maximized subject to a set of normal tissue binary dose-volume constraints. For this study, the optimized plan was restricted to four (4) fields suitable for delivery with conventional therapy equipment. Results: The table gives the mean value of the minimum target dose obtained for each algorithm, averaged over 5 different runs, and the comparable manual treatment
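The second beam-restriction strategy described above (halving the allowed beam set at periodic steps until only the specified number remain) can be sketched roughly as follows. This is a toy illustration, not the authors' VSGSA code: the cost function, the linear cooling schedule, and the rule of keeping the heaviest-weighted beams are all assumptions.

```python
import math
import random

def anneal_with_beam_reduction(cost, n_beams, n_final=4, steps=4000, t0=1.0):
    """Toy simulated annealing over beam weights that periodically halves
    the set of allowed beams until only n_final remain (cf. algorithm 2)."""
    allowed = list(range(n_beams))
    w = [1.0 / n_beams] * n_beams            # start with uniform weights
    for step in range(1, steps + 1):
        t = t0 * (1.0 - step / steps)        # simple linear cooling
        # Periodic reduction: drop the lightest half of the allowed beams.
        if step % (steps // 5) == 0 and len(allowed) > n_final:
            allowed.sort(key=lambda i: -w[i])
            keep = max(n_final, len(allowed) // 2)
            for i in allowed[keep:]:
                w[i] = 0.0                   # excluded beams get zero weight
            allowed = allowed[:keep]
        # Perturb one allowed beam weight; accept by the Metropolis rule.
        cand = list(w)
        i = random.choice(allowed)
        cand[i] = max(0.0, cand[i] + random.gauss(0.0, 0.05))
        dc = cost(cand) - cost(w)
        if dc < 0 or (t > 0 and random.random() < math.exp(-dc / t)):
            w = cand
    return w
```

In the paper the objective was to maximize the minimum tumor dose under dose-volume constraints; here a generic cost to be minimized stands in for that.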
10. Conventions for reporting and displaying overflight observations
International Nuclear Information System (INIS)
McFarland, B.; Murphy, J.; Simecek-Beatty, D.
1993-01-01
During the critical initial phases of an oil spill response, as observations and reports come in from different agencies and companies, descriptions and representations can vary widely. These apparently conflicting reports can cause unnecessary confusion, wasting valuable time and resources. As the number of "experts" and the amount of "necessary" information multiply, the potential for information overload also increases. Important information that needs to be presented can be lost in the flood of information that is available. For many years the National Oceanic and Atmospheric Administration (NOAA), in support of the US Coast Guard, has coordinated scientific input concerning the tracking and prediction of the transport of oil spilled in the marine environment. This role frequently involves recording visual or remote sensing observations from multiple platforms and observers, and displaying the information in a clear format, which needs to be rapidly available and unambiguous. Simple graphic products help identify conflicting views of information and allow responders to quickly build a "graphic consensus" of the situation. To this end the authors have developed in-house guidelines for presentation of crucial response information. Because correctly designed graphics can clearly and rapidly transmit large amounts of information, these guidelines focus on the graphic presentation of information. Some of these same conventions and criteria are being applied in evaluating and developing information acquisition and display tools. This poster presentation includes examples of the hardware and software used by Genwest and NOAA for the rapid display of response information
11. Biodegradable and compostable alternatives to conventional plastics
Science.gov (United States)
Song, J. H.; Murphy, R. J.; Narayan, R.; Davies, G. B. H.
2009-01-01
12. Joint implementation in the climate change convention
International Nuclear Information System (INIS)
Merkus, H.; Heintz, R.
1994-01-01
The United Nations Climate Change Convention offers developed countries the possibility to realize a part of their obligations elsewhere via financing of emission reduction activities. This so-called joint implementation (JI) enlarges the effectiveness of the international climate change policy. It is assumed that the marginal costs of the emission reductions differ between countries. The application of JI has benefits, but also bears risks. With regard to potential nett benefits and costs/risks it is of interest to distinguish between micro-effects (on a project level), macro-effects (on a national level) and global effects, focusing on JI between OECD countries and developing countries. Five comments on this tripartite are made: (1) it is important to gain insight in the JI potential; (2) JI can contribute to projects of a high development priority; (3) there is a chance that parties, involved in JI, claim more reductions (credits) than take place in reality (double counting); (4) there exists the risk of a delay of technological progress; and (5) the danger exists that JI causes a minor stimulus for developing countries to accept emission reduction obligations. A functional JI-system demands criteria that limits the risks and optimize the benefits. The main criteria concern verification of realized emission reductions; an acceptable balance between measures in one's own country and JI, for which three mechanisms are briefly discussed (partial credit entry, funds, and separated targets); and additionality of JI-financing. Side criteria are the monitoring, verification and control of the possession of the credits. Finally attention is paid to the role of the government in JI and further developments and chances for JI. 7 refs
13. Gravitational collapse of conventional polytropic cylinder
Science.gov (United States)
Lou, Yu-Qing; Hu, Xu-Yao
2017-07-01
In reference to general polytropic and conventional polytropic hydrodynamic cylinders of infinite length with axial uniformity and axisymmetry under self-gravity, the dynamic evolution of the central collapsing mass string in the free-fall dynamic accretion phase is re-examined in detail. We compare the central mass accretion rate and the envelope mass infall rate at small radii. Among others, we correct mistakes and typos of Kawachi & Hanawa (KH hereafter) and in particular prove, by analytical analyses and numerical tests, that their key asymptotic free-fall solution involving the polytropic index γ in the two power exponents is erroneous. The correct free-fall asymptotic solutions at sufficiently small r̂ (the dimensionless independent self-similar variable) scale as ∼ −|ln r̂|^{1/2}, in contrast to KH's ∼ −|ln r̂|^{(2−γ)/2}, for the reduced bulk radial flow velocity, and as ∼ r̂^{−1}|ln r̂|^{−1/2}, in contrast to KH's ∼ r̂^{−1}|ln r̂|^{−(2−γ)/2}, for the reduced mass density. We offer consistent scenarios for numerical simulation code testing and theoretical study on dynamic filamentary structure formation and evolution as well as pertinent stability properties. Due to unavoidable Jeans instabilities along the cylinder, such collapsing massive filaments or strings can further break up into clumps and segments of various lengths, as well as clumps embedded within segments, and evolve into chains of gravitationally collapsed objects (such as gaseous planets, brown dwarfs, protostars, white dwarfs, neutron stars, black holes in a wide mass range, globular clusters, dwarf spheroidals, galaxies, galaxy clusters and even larger mass reservoirs, etc.) in various astrophysical and cosmological contexts, as articulated by Lou & Hu recently. As an example, we present a model scheme for comparing with observations of molecular filaments forming protostars, brown dwarfs, gaseous planets and so forth.
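Written out in display form, the contrast between the corrected scalings and KH's is the following (the symbols v̂ and ρ̂ for the reduced radial flow velocity and reduced mass density are our shorthand, not necessarily the paper's notation):

```latex
% Corrected asymptotic free-fall solutions as \hat{r} \to 0^{+}
\hat{v}(\hat{r}) \sim -\,\bigl|\ln \hat{r}\bigr|^{1/2},
\qquad
\hat{\rho}(\hat{r}) \sim \hat{r}^{-1}\,\bigl|\ln \hat{r}\bigr|^{-1/2},
% versus the erroneous KH scalings carrying the polytropic index \gamma:
\hat{v}_{\mathrm{KH}}(\hat{r}) \sim -\,\bigl|\ln \hat{r}\bigr|^{(2-\gamma)/2},
\qquad
\hat{\rho}_{\mathrm{KH}}(\hat{r}) \sim \hat{r}^{-1}\,\bigl|\ln \hat{r}\bigr|^{-(2-\gamma)/2}.
```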
15. The semantics of Chemical Markup Language (CML): dictionaries and conventions
Science.gov (United States)
2011-01-01
The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs. PMID:21999509
16. The semantics of Chemical Markup Language (CML): dictionaries and conventions.
Science.gov (United States)
Murray-Rust, Peter; Townsend, Joe A; Adams, Sam E; Phadungsukanan, Weerapong; Thomas, Jens
2011-10-14
The semantic architecture of CML consists of conventions, dictionaries and units. The conventions conform to a top-level specification and each convention can constrain compliant documents through machine-processing (validation). Dictionaries conform to a dictionary specification which also imposes machine validation on the dictionaries. Each dictionary can also be used to validate data in a CML document, and provide human-readable descriptions. An additional set of conventions and dictionaries are used to support scientific units. All conventions, dictionaries and dictionary elements are identifiable and addressable through unique URIs.
17. Nuclear liability: Joint protocol relating to the application of the Vienna Convention and the Paris Convention, 1988
International Nuclear Information System (INIS)
1989-10-01
The Joint Protocol Relating to the Application of the Vienna Convention and the Paris Convention was adopted by the Conference on the Relationship between the Paris Convention and the Vienna Convention, which met in Vienna, at the Headquarters of the International Atomic Energy Agency on 21 September 1988. The Joint Protocol establishes a link between the Paris Convention on Third Party Liability in the Field of Nuclear Energy of 1960 and the Vienna Convention on Civil Liability for Nuclear Damage of 1963. The Joint Protocol will extend to the States adhering to it the coverage of the two Conventions. It will also resolve potential conflicts of law, which could result from the simultaneous application of the two Conventions to the same nuclear accident. The Conference on the Relationship between the Paris Convention and the Vienna Convention was jointly organized by the International Atomic Energy Agency and the OECD Nuclear Energy Agency. This publication contains the text of the Final Act of the Conference in the six authentic languages, the Joint Protocol Relating to the Application of the Vienna Convention and the Paris Convention, also in the six authentic languages and an explanatory note, prepared by the IAEA and NEA Secretariats, providing background information on the content of the Joint Protocol
18. The economics of developing non-conventional reserves
International Nuclear Information System (INIS)
Kuuskraa, V.A.
1997-01-01
A fact-based perspective on the economics of non-conventional natural gas reserves such as coalbed methane, gas shales and tight gas was presented. Traditionally, tax credits stimulate the development of non-conventional gas. Although tax credits for non-conventional gas development stopped at the end of 1992, because of improved technologies, improved finding rates and well productivities, non-conventional reserves continue to play a major role in the U.S. gas drilling development. Non-conventional reserves account for three out of five gas wells drilled in the U.S. The non-conventional gas industry competes directly with the conventional natural gas industry. This paper examined how well non-conventional gas compares to the current replacement costs of conventional natural gas. Investment costs for a non-conventional operation were studied, illustrated by an overview of the costs and economics of non-conventional reserves in the San Juan coal basin, the Piceance tight gas basin and the Michigan and Ft. Worth gas shales basins. 9 tabs
19. The dependence of Islamic and conventional stocks: A copula approach
Science.gov (United States)
Razak, Ruzanna Ab; Ismail, Noriszura
2015-09-01
Recent studies have found that Islamic stocks are dependent on conventional stocks and appear to be more risky. In Asia, particularly in Islamic countries, research on dependence involving Islamic and non-Islamic stock markets is limited. The objective of this study is to investigate the dependence between the Financial Times Stock Exchange Hijrah Shariah index and conventional stocks (the EMAS and KLCI indices). Using the copula approach and a time series model for each marginal distribution function, the copula parameters were estimated. The elliptical copula was selected to represent the dependence structure of each pairing of the Islamic stock and conventional stock. Specifically, the Islamic versus conventional stock pairings (Shariah-EMAS and Shariah-KLCI) had lower dependence than the conventional versus conventional pairing (EMAS-KLCI). These findings suggest that the occurrence of shocks in a conventional stock will not have a strong impact on the Islamic stock.
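The estimation pipeline described above (marginal models first, then an elliptical copula on the uniforms) can be sketched as follows. This is a minimal illustration of the Gaussian-copula case using rank-based pseudo-observations; the function name and the synthetic data are ours, and the study's time-series filtering of each margin is omitted.

```python
import numpy as np
from scipy import stats

def gaussian_copula_corr(x, y):
    """Rank-based estimate of the Gaussian (elliptical) copula correlation:
    map each margin to pseudo-uniforms, then to normal scores, and
    correlate the scores."""
    u = stats.rankdata(x) / (len(x) + 1)     # pseudo-observations in (0, 1)
    v = stats.rankdata(y) / (len(y) + 1)
    z_u, z_v = stats.norm.ppf(u), stats.norm.ppf(v)
    return float(np.corrcoef(z_u, z_v)[0, 1])

# Synthetic "index return" pairs with different dependence strengths.
rng = np.random.default_rng(0)
cov_weak = [[1.0, 0.4], [0.4, 1.0]]    # e.g. Islamic vs. conventional
cov_strong = [[1.0, 0.9], [0.9, 1.0]]  # e.g. conventional vs. conventional
a = rng.multivariate_normal([0, 0], cov_weak, size=3000)
b = rng.multivariate_normal([0, 0], cov_strong, size=3000)
print(gaussian_copula_corr(a[:, 0], a[:, 1]))  # near 0.4
print(gaussian_copula_corr(b[:, 0], b[:, 1]))  # near 0.9
```

Because the estimator works on ranks, it is invariant to the (possibly heavy-tailed) marginal distributions, which is exactly why copula methods separate marginal modelling from dependence modelling.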
20. Unbiased quantitative testing of conventional orthodontic beliefs.
Science.gov (United States)
Baumrind, S
1998-03-01
This study used a preexisting database to test, in hypothesis form, the appropriateness of some common orthodontic beliefs concerning upper first molar displacement and changes in facial morphology associated with conventional full bonded/banded treatment in growing subjects. In an initial pass, the author used data from a stratified random sample of 48 subjects drawn retrospectively from the practice of a single, experienced orthodontist. This sample consisted of 4 subgroups of 12 subjects each: Class I nonextraction, Class I extraction, Class II nonextraction, and Class II extraction. The findings indicate that, relative to the facial profile, chin point did not, on average, displace anteriorly during treatment, either overall or in any subgroup. Relative to the facial profile, Point A became significantly less prominent during treatment, both overall and in each subgroup. The best estimate of the mean displacement of the upper molar cusp relative to superimposition on Anterior Cranial Base was in the mesial direction in each of the four subgroups. In only one extraction subject out of 24 did the cusp appear to be displaced distally. Mesial molar cusp displacement was significantly greater in the Class II extraction subgroup than in the Class II nonextraction subgroup. Relative to superimposition on anatomical "best fit" of maxillary structures, the findings for molar cusp displacement were similar, but even more dramatic. Mean mesial migration was highly significant in both the Class II nonextraction and Class II extraction subgroups. In no subject in the entire sample was distal displacement noted relative to this superimposition. Mean increase in anterior Total Face Height was significantly greater in the Class II extraction subgroup than in the Class II nonextraction subgroup. (This finding was contrary to the author's original expectation.) The generalizability of the findings from the initial pass to other treated growing subjects was then assessed by
1. Distance Education at Conventional Universities in Germany
Directory of Open Access Journals (Sweden)
Hans-Henning Kappel
2002-01-01
Germany's educational system has undergone a series of transformations during the last 40 years. In recent years, marked increases in enrolment have occurred. In response, admission requirements have been relaxed and new universities have been established. Academic distance education in the former Federal Republic of Germany (West Germany) was ushered in by the educational radio broadcasts around the end of the 1960s. Aside from the formation of the FernUniversität (Open University) in West Germany in 1975, there were significant developments in distance education occurring at the major universities in the German Democratic Republic (East Germany). After German reunification in 1990, the new unitary state launched programs to advance the development of distance education at conventional universities. Germany's campus-based universities (Präsenzuniversitäten) created various entities, including central units and consortia of universities, to design and market distance education programs. Hybridisation provides the necessary prerequisites for dual mode delivery, such as basic and continuing education programs, as well as for the combination of distance and campus-based education (Präsenzstudium). Hybridisation has also opened the door for the creation of new programs. Following an initial phase in which distance education research is expected to centralise, a trend towards decentralisation is likely to follow. The German Association for Distance Education (AG-F) offers a viable research network in distance education. Two dual mode case studies are also surveyed: the Master of Arts degree offered by the University of Koblenz-Landau, with Library Science as the second major, and the University of Kaiserslautern, where basic education will continue to be captured within the domain of the Präsenzstudium, or campus-based education. The area in which distance education is flourishing most is within the field of academic continuing
2. Safeguards for a nuclear weapon convention
International Nuclear Information System (INIS)
Fischer, D.
1999-01-01
A Nuclear Disarmament Treaty (NDT) presupposes a fundamental commitment by all parties to its final objective and hence requires a high and sustained level of confidence amongst all states concerned. The appropriate format for an NDT would probably be a multilateral treaty open to all states. The treaty must necessarily include the five nuclear weapon states, and a procedure would have to be found for securing the ratification of the threshold states without conferring upon them the status of nuclear weapon states. While the IAEA may well be able to carry out the safeguards tasks required by an NDT, it would probably be necessary to establish a new international organization to verify the elimination of all nuclear weapons. The experience of UNSCOM and the IAEA in Iraq, and of the IAEA in the DPRK, has shown how difficult the verification of international obligations is in the absence of a commitment to disarm, while the experience of the INF and START treaties, and of the IAEA in South Africa, has shown how much simpler it is when the parties concerned are fully committed to the process. Verifying and safeguarding an NDT would be largely an extrapolation of activities already carried out by the nuclear weapon states under the INF and START treaties and by the IAEA in the routine application of safeguards, as well as in its less routine work in Iraq, South Africa and the DPRK. Both the verification and safeguarding tasks would be made very much easier if it were possible to bring down to a few hundred the number of nuclear warheads remaining in the hands of any avowed nuclear weapon state, and to conclude a cutoff convention. Experience is needed to show whether the additional safeguards authority accorded to the IAEA by 'programme 93+2' will enable it to effectively safeguard the facilities that would be decommissioned as a result of an NDT and those that would remain in operation to satisfy civilian needs. Subject to this rider and on condition that the IAEA
3. Update on the use of steroids in rheumatoid arthritis.
Science.gov (United States)
García-Magallón, Blanca; Silva-Fernández, Lucía; Andreu-Sánchez, José Luis
2013-01-01
Corticosteroids are a mainstay in the therapy of rheumatoid arthritis (RA). In recent years, a number of high-quality controlled clinical trials have shown their effect as a disease-modifying anti-rheumatic drug (DMARD) and a favourable safety profile in recent-onset RA. Despite this, they are more frequently used as bridge therapy while other DMARDs initiate their action than as true disease-modifying agents. Low-dose corticosteroid use during the first two years of disease slows radiologic damage and reduces the need of biologic therapy aimed at reaching a state of clinical remission in recent-onset RA. Thus, their systematic use in this clinical scenario should be considered. Copyright © 2013 Elsevier España, S.L. All rights reserved.
4. The framework convention on climate change a convention for sustainable energy development
Energy Technology Data Exchange (ETDEWEB)
Hassing, P.; Mendis, M.S.; Menezes, L.M.; Gowen, M.M.
1996-12-31
In 1992, over 165 countries signed the United Nations Framework Convention on Climate Change (FCCC). These countries have implicitly agreed to alter their anthropogenic activities that increase the emissions of greenhouse gases (GHGs) into the atmosphere and deplete the natural sinks for these same greenhouse gases. The energy sector is the major source of the primary anthropogenic GHGs, notably carbon dioxide and methane. The Organization for Economic Co-operation and Development (OECD) countries presently account for the major share of GHG emissions from the energy sector. However, the developing countries are also rapidly increasing their contribution to global GHG emissions as a result of their growing consumption of fossil-based energy. Implementation of this global climate change convention, if seriously undertaken by the signatory countries, will necessitate changes in the energy mix and production processes in both the OECD and developing countries. International actions also will be needed to put the world on a sustainable energy path. By adoption of the FCCC, representatives of the world's populations have indicated their desire to move toward such a path. The Conference of Parties to the Convention has just concluded its second meeting, at which the Parties endorsed a U.S. proposal that legally binding and enforceable emissions targets be adopted. It is clearly evident that the FCCC, as presently operating, cannot achieve the objective of stabilizing GHG concentrations in the atmosphere unless it adopts a major protocol to significantly reduce anthropogenic GHG emissions. As demonstrated here, a good starting point in determining the steps the Parties to the FCCC should take in designing a protocol is to remember that the primary source of anthropogenic GHG emissions is the consumption of fossil fuels and the future growth of GHG emissions will derive primarily from the ever-increasing demand for and consumption of these fuels.
5. Roselle improvement through conventional and mutation breeding
International Nuclear Information System (INIS)
2002-01-01
Roselle (Hibiscus sabdariffa L.), from the Malvaceae family, is a relatively new crop in Malaysia. Its origin is not fully known, but the plant is believed to be from West Africa, although it is found native from India to Malaysia. The calyxes, stems and leaves are acidic and closely resemble the cranberry (Vaccinium spp.) in flavour. Anthocyanins, which are now receiving growing importance as natural food colorants, are responsible for the red to purple color of the calyx and other parts of the plant. The calyxes from the flowers are processed to produce a juice drink containing very high vitamin C (ascorbic acid), and also into jam, jelly and dried products. Interestingly, many other parts of the plant are also claimed to have various medicinal values. Presently, roselle is planted in Terengganu (175 ha in 2002) on bris soils, but its planting has spread to some parts of Kelantan, Pahang, Johor and also Sarawak. The number of roselle varieties available for planting is very limited; however, the effort carried out for roselle improvement thus far is equally limited. There has been very little serious conventional breeding attempted, although varietal evaluation has been carried out, particularly in the form of agronomic trials. Since 1999, several studies on induced mutations have been attempted at UKM. A preliminary polyploidization study was conducted to determine the effects of colchicine concentrations of 0%, 0.04%, 0.08%, 0.12% and 0.16% and soaking times of 2 and 4 hours at room temperature (30 degree C) on 2-day-old germinated seeds on morpho-agronomic traits (e.g. number of branches, internode length, leaf length, leaf width, number of flowers and days to flowering), ploidy level and pollen grain size in treated and also derived generations. Flow cytometric analyses of nuclear DNA AT content of leaf samples using LB01 lysis buffer and the DNA-specific fluorochrome DAPI (4',6-diamidino-2-phenylindole) staining were carried out using a flow cytometer at MINT, Bangi
6. Are nuclear ships environmentally safer than conventionally powered ships
International Nuclear Information System (INIS)
Bone, C.A.; Molgaard, C.A.; Helmkamp, J.C.; Golbeck, A.L.
1988-01-01
An epidemiologic analysis was conducted to determine if risk of hospitalization varied by age, ship type, or occupation between nuclear and conventional powered ship crews in the U.S. Navy. Study cohorts consisted of all male enlisted personnel who served exclusively aboard conventional or nuclear powered aircraft carriers and cruisers during the years 1975-1979; cases were those men hospitalized during this period (N = 48,242). Conventional ship personnel showed significantly elevated rates of injury and disease when compared to nuclear ship personnel. The largest relative risks by age occurred for conventional ship crewmen less than 30 years old. Seaman, logistics (supply), and healthcare personnel serving aboard conventional ships comprised the occupational groups exhibiting the highest hospitalization rate differentials. The results strongly suggest that nuclear ships provide a healthier, safer working and living environment than conventional ships
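Comparisons of this kind rest on hospitalization rate ratios between the two crews. As a hedged sketch (the counts below are invented for illustration, not the study's data), the relative risk and an approximate 95% confidence interval can be computed as:

```python
import math

def rate_ratio(cases_a, n_a, cases_b, n_b):
    """Relative risk of hospitalization for group A vs. group B, with an
    approximate 95% confidence interval computed on the log scale."""
    risk_a, risk_b = cases_a / n_a, cases_b / n_b
    rr = risk_a / risk_b
    # Standard error of log(RR) for two independent binomial proportions.
    se = math.sqrt((1 - risk_a) / cases_a + (1 - risk_b) / cases_b)
    lo, hi = rr * math.exp(-1.96 * se), rr * math.exp(1.96 * se)
    return rr, lo, hi

# Hypothetical counts: conventional-ship crew vs. nuclear-ship crew.
rr, lo, hi = rate_ratio(300, 10_000, 200, 10_000)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")  # RR = 1.50
```

A confidence interval lying entirely above 1 would correspond to the "significantly elevated rates" reported for conventional-ship personnel.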
7. International nuclear liability conventions: status and possible changes
International Nuclear Information System (INIS)
Reyners, Patrick.
1978-01-01
The table of ratifications and accessions annexed to this paper shows that despite the considerable progress achieved these past years and the entry into force of the Vienna Convention, the number of Contracting Parties to the Nuclear Civil Liability Conventions remains insufficient. The adaptation of the first of these Conventions, the Paris Convention, as well as its Brussels Supplementary Convention, to the technical and economic developments which have taken place since their adoption should provide the means for encouraging their implementation at international level. The main amendments which are envisaged are replacement of the present unit of account by the Special Drawing Right, the increase of the amounts of liability and compensation and, finally, the technical scope of the Paris Convention. (NEA)
8. Beyond the conventional: meeting the challenges of landscape governance within the European Landscape Convention?
Science.gov (United States)
Scott, Alister
2011-10-01
Academics and policy makers seeking to deconstruct landscape face major challenges conceptually, methodologically and institutionally. The meaning(s), identity(ies) and management of landscape are controversial and contested. The European Landscape Convention provides an opportunity for action and change set within new governance agendas addressing interdisciplinarity and spatial planning. This paper critically reviews the complex web of conceptual and methodological frameworks that characterise landscape planning and management and then focuses on emerging landscape governance in Scotland within a mixed method approach involving policy analyses, semi-structured interviews and best practice case studies. Using Dower's (2008) criteria from the Articles of the European Landscape Convention, the results show that whilst some progress has been made in landscape policy and practice, largely through the actions of key individuals and champions, there are significant institutional hurdles and resource limitations to overcome. The need to mainstream positive landscape outcomes requires a significant culture change where a one-size-fits-all approach does not work. Copyright © 2011 Elsevier Ltd. All rights reserved.
9. Effects of feeding high protein or conventional canola meal on dry cured and conventionally cured bacon.
Science.gov (United States)
Little, K L; Bohrer, B M; Stein, H H; Boler, D D
2015-05-01
Objectives were to compare belly, bacon processing, bacon slice, and sensory characteristics from pigs fed high protein canola meal (CM-HP) or conventional canola meal (CM-CV). Soybean meal was replaced with 0 (control), 33, 66, or 100% of both types of canola meal. Left side bellies from 70 carcasses were randomly assigned to conventional or dry cure treatment and matching right side bellies were assigned the opposite treatment. Secondary objectives were to test the existence of bilateral symmetry on fresh belly characteristics and fatty acid profiles of right and left side bellies originating from the same carcass. Bellies from pigs fed CM-HP were slightly lighter and thinner than bellies from pigs fed CM-CV, yet bacon processing, bacon slice, and sensory characteristics were unaffected by dietary treatment and did not differ from the control. Furthermore, testing the existence of bilateral symmetry on fresh belly characteristics revealed that bellies originating from the right side of the carcasses were slightly (P≤0.05) wider, thicker, heavier and firmer than bellies from the left side of the carcass. Copyright © 2015 Elsevier Ltd. All rights reserved.
10. Guidelines regarding National Reports under the Convention on Nuclear Safety
International Nuclear Information System (INIS)
1999-01-01
These guidelines, established by the Contracting Parties pursuant to Article 22 of the Convention, are intended to be read in conjunction with the text of the Convention. Their purpose is to provide guidance to the Contracting Parties regarding material which it may be useful to include in the national reports required by Article 5 and thereby to facilitate the most efficient review of implementation by the Contracting Parties of their obligations under the Convention
13. Ambiguities and conventions in the perception of visual art.
Science.gov (United States)
Mamassian, Pascal
2008-09-01
Visual perception is ambiguous, and the visual arts play with these ambiguities. While perceptual ambiguities are resolved with prior constraints, artistic ambiguities are resolved by conventions. Is there a relationship between priors and conventions? This review surveys recent work related to these ambiguities in composition, spatial scale, illumination and color, three-dimensional layout, shape, and movement. While most conventions seem to have their roots in perceptual constraints, those conventions that differ from priors may help us appreciate how visual arts differ from everyday perception.
14. Convention on Contracts for the International Sale of Goods (CISG)
DEFF Research Database (Denmark)
Lookofsky, Joseph
Also sometimes referred to as the Vienna Sales Convention, the Convention on Contracts for the International Sale of Goods (CISG) regulates the rights of buyers and sellers in international sales. The Convention, which first entered into effect in 1988, is the first sales law treaty to win...... with international sales contracts and sales contract disputes will obtain an excellent overview of the Convention, as well as valuable information as to all its 101 Articles, comprising key topic areas such as the following: • Determining when the CISG applies; • Freedom of contract under Article 6...
15. Convention on Contracts for the International Sale of Goods (CISG)
DEFF Research Database (Denmark)
Lookofsky, Joseph
Also sometimes referred to as the Vienna Sales Convention, the Convention on Contracts for the International Sale of Goods (CISG) regulates the rights of buyers and sellers in international sales. The Convention, which first entered into effect in 1988, is the first sales law treaty to win....... With this monograph as their guide, lawyers and scholars who deal with international sales contracts and sales contract disputes will obtain an excellent overview of the Convention, as well as valuable information as to all its 101 Articles, comprising key topic areas such as the following: • Determining when...
16. The convention on environmental impact assessment in a transboundary context
International Nuclear Information System (INIS)
Schrage, W.
2000-01-01
The ECE Convention on Environmental Impact Assessment in a Transboundary Context (EIA Convention) is the first multilateral treaty to specify the procedural rights and duties of Parties with regard to transboundary impacts of proposed activities and to provide procedures, in a transboundary context, for the consideration of environmental impacts in decision-making. The EIA Convention, elaborated under the auspices of the United Nations Economic Commission for Europe (ECE), was adopted at Espoo, Finland, in February 1991. Obligations stipulated, and measures and procedures provided for in this Convention are described. (author)
17. Conventional breeding strategies to enhance the sustainability of ...
African Journals Online (AJOL)
Conventional breeding strategies to enhance the sustainability of Musa biodiversity conservation for endemic cultivars. M Pillay, R Ssebuliba, J Hartman, D Vuylsteke, D Talengera, W Tushemereirwe ...
18. Military Technology and Conventional Weapons Export Controls: The Wassenaar Arrangement
National Research Council Canada - National Science Library
Grimmett, Richard F
2006-01-01
This report provides background on the Wassenaar Arrangement, which was formally established in July 1996 as a multilateral arrangement aimed at controlling exports of conventional weapons and related...
19. Gold Finger: Metal Jewellery as a Disease Modifying Antirheumatic Therapy!
Directory of Open Access Journals (Sweden)
T. Hlaing
2009-01-01
Polyarticular psoriatic arthritis is a chronic, progressive and disabling auto-immune disease often affecting the small joints of the hands in a symmetrical fashion. The disease can progress rapidly, causing joint swelling and damaging cartilage and bone around the joints, resulting in severe deformities. We report a very unusual case of a 49-year-old woman who presented with polyarticular psoriatic arthritis affecting all proximal interphalangeal (PIP) joints of both hands except the left ring finger PIP joint. On clinical examination there was no evidence of arthritis in the left ring finger PIP joint. We confirmed the paucity of joint damage in the PIP joint of the left ring finger using modern imaging modalities such as musculoskeletal ultrasound and MRI scanning of the small joints of the hands. All other PIP joints in both hands demonstrated advanced degrees of joint damage secondary to chronic psoriatic inflammatory arthritis. We postulated that wearing a gold wedding ring has helped protect the PIP joint of the left ring finger from the damaging effect of inflammatory arthritis. The possible mechanisms by which metal jewellery (a gold ring) confers protection to adjacent joints are discussed.
20. An overview of the biological disease modifying drugs available for ...
African Journals Online (AJOL)
transcription factors AP-1 and NF-κB to the cell nucleus, as shown in Figure ... of naïve T cells requires the CD80/86 costimulatory molecules on .... arthritis: abridged Cochrane systematic review and network meta-analysis. Br ... N. Engl J Med.
1. Transposition into swiss law of the Paris convention and the Brussels supplementary convention, as amended
International Nuclear Information System (INIS)
Tami, R.; Daina, S.
2004-01-01
Apart from the considerable increase in the amounts of cover, two basic factors lie behind the Swiss government's decision to propose shortly to parliament a draft revised L.R.C.N. (federal act on nuclear third party liability). These are, firstly, that the revised Paris/Brussels system still incorporates the principle of the limited liability of the operator of a nuclear installation but now contains a minimum liability amount (liability threshold) and no longer a maximum amount (liability ceiling), and secondly, that the States parties are allowed to provide in their national legislation for the unlimited liability of operators. One of the aims of ratifying the revised conventions is to enable most victims to obtain fair compensation on an egalitarian basis for damage caused by a nuclear incident, and also to join an international system for compensating nuclear damage based on solidarity between states, most of them nuclear. (N.C.)
2. Defect Detectability Improvement for Conventional Friction Stir Welds
Science.gov (United States)
Hill, Chris
2013-01-01
This research was conducted to evaluate the effects of defect detectability via phased array ultrasound technology in conventional friction stir welds by comparing conventionally prepped post weld surfaces to a machined surface finish. A machined surface is hypothesized to improve defect detectability and increase material strength.
3. 7 CFR 7.10 - Conduct of county convention.
Science.gov (United States)
2010-01-01
... 7 Agriculture 1 2010-01-01 2010-01-01 false Conduct of county convention. 7.10 Section 7.10 Agriculture Office of the Secretary of Agriculture SELECTION AND FUNCTIONS OF AGRICULTURAL STABILIZATION AND... other purpose. (e) The county committee shall give advance public notice of the county convention which...
4. Islamic vs. conventional banks : Business models, efficiency and stability
NARCIS (Netherlands)
Beck, T.H.L.; Demirgüc-Kunt, A.; Merrouche, O.
2013-01-01
How different are Islamic banks from conventional banks? Does the recent crisis justify a closer look at the Sharia-compliant business model for banking? When comparing conventional and Islamic banks, controlling for time-variant country-fixed effects, we find few significant differences in business
5. Interactive Translation Prediction versus Conventional Post-editing in Practice
DEFF Research Database (Denmark)
Sanchis-Trilles, German; Alabau, Vicent; Buck, Christian
2014-01-01
We conducted a field trial in computer-assisted professional translation to compare Interactive Translation Prediction (ITP) against conventional post-editing (PE) of machine translation (MT) output. In contrast to the conventional PE set-up, where an MT system first produces a static translatio...
6. Convention on nuclear safety. Rules of procedure and financial rules
International Nuclear Information System (INIS)
1999-01-01
The document is the first revision of the Rules of Procedure and Financial Rules that apply mutatis mutandis to any meetings of the Contracting Parties to the Convention on Nuclear Safety (INFCIRC/573), convened in accordance with Chapter 3 of the Convention
7. Moral, Conventional, and Personal Rules: The Perspective of Foster Youth
Science.gov (United States)
Mullins, David; Tisak, Marie S.
2006-01-01
Forty-five foster youth (9-13 year olds and 14-17 year olds) were asked to evaluate moral, conventional, and personal rules and violations by providing judgments and reasons. The results suggest that foster youths' judgments distinguished between the moral, conventional, and personal domains. However, in providing reasons to support their judgments…
8. Reichenbach and the conventionality of distant simultaneity in perspective
NARCIS (Netherlands)
Dieks, D.G.B.J.
2008-01-01
We take another look at Reichenbach's 1920 conversion to conventionalism, with a special eye to the background of his 'conventionality of distant simultaneity' thesis. We argue that elements of Reichenbach's earlier neo-Kantianism can still be discerned in his later work and, related to this, that his
9. 26 CFR 521.103 - Scope of the convention.
Science.gov (United States)
2010-04-01
... convention, to be accomplished on a reciprocal basis, are to avoid double taxation upon major items of income... looking to the avoidance of double taxation and fiscal evasion. (b) The specific classes of income from... UNDER TAX CONVENTIONS DENMARK General Income Tax Taxation of Nonresident Aliens Who Are Residents of...
10. Stakeholder involvement in international conventions governing civil nuclear activities
International Nuclear Information System (INIS)
Emmerechts, Sam
2017-01-01
Mr Emmerechts explained that international conventions have varying positions on stakeholders and their involvement depending upon the intent of the legislator and the field they cover, ranging from a narrow to a broad interpretation. He addressed stakeholder involvement in two other international conventions governing civil nuclear activities, namely the Convention on Nuclear Safety, and the Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management (the Joint Convention), both concluded under the auspices of the International Atomic Energy Agency (IAEA). He noted that the Convention on Nuclear Safety remains a 'traditional' international legal instrument, focusing on governments and governmental bodies as the main stakeholders and limiting obligations regarding the involvement of the public and intergovernmental organisations to their receiving information and observing. Likewise, the Joint Convention limits obligations regarding public involvement to access to information, notably as to the siting of proposed facilities. However, he noted that in the European Union, the Directive on Nuclear Safety (2014/87/Euratom) and the Directive for the Safe Management of Spent Fuel and Radioactive Waste (2011/70/Euratom) have more advanced public participation requirements in nuclear decision making. Mr Emmerechts explained that the substantial differences between nuclear legislation and the Aarhus and Espoo Conventions with regards to public involvement requirements could partly be explained by the technicality of nuclear information and by issues related to nuclear security
11. Imaging in hematology. Part 1: Ultrasonography and conventional radiology
International Nuclear Information System (INIS)
Zhechev, Y.
2003-01-01
Applications of conventional ultrasonography techniques (B-mode or real time) in oncohematology are presented. The newer adaptations (in particular colour Doppler) provide incremental advantages that support their inclusion in the imaging techniques available to modern hematology. Conventional radiologic studies include chest and bone X-ray, gastrointestinal contrast examination and bipedal lymphangiography
12. Conventional and anomalous quantum Rabi oscillations in graphene
International Nuclear Information System (INIS)
Khan, Enamullah; Kumar, Vipin; Kumar, Upendra; Setlur, Girish S.
2014-01-01
We study the nonlinear response of graphene in the presence of a quantum field in two different regimes. Far from resonance, using our new technique, the asymptotic rotating wave approximation (ARWA), we find that the matter-field interaction leads to slow oscillations like the conventional Rabi oscillations observed in conventional semiconductors using the well-known rotating wave approximation (RWA). The Rabi frequency obtained in both the regimes
13. Knowing linguistic conventions | Robinson | South African Journal of ...
African Journals Online (AJOL)
These are three standard accounts of the epistemic status of linguistic conventions, which all play into the first camp: (1) knowledge by intuition, (2) inferential a priori knowledge and (3) a posteriori knowledge. I give reasons why these accounts should be rejected. I then argue that linguistic conventions, if conceived of as ...
14. Convention on nuclear safety. Rules of procedure and financial rules
International Nuclear Information System (INIS)
1998-01-01
The document presents the Rules of Procedure and Financial Rules that apply mutatis mutandis to any meeting of the Contracting Parties to the Convention on Nuclear Safety (INFCIRC/449) convened in accordance with Chapter 3 of the Convention. It includes four parts: General provisions, Preparatory process for review meetings, Review meetings, and Amendment and interpretation of rules
15. Convention on Nuclear Safety. Rules of procedure and financial rules
International Nuclear Information System (INIS)
2002-01-01
The document is the second revision of the Rules of Procedure and Financial Rules that apply mutatis mutandis to any meetings of the Contracting Parties to the Convention on Nuclear Safety (INFCIRC/573), convened in accordance with Chapter 3 of the Convention
16. Functional MRI of Conventional and Anomalous Metaphors in Mandarin Chinese
Science.gov (United States)
Ahrens, Kathleen; Liu, Ho-Ling; Lee, Chia-Ying; Gong, Shu-Ping; Fang, Shin-Yi; Hsu, Yuan-Yu
2007-01-01
This study looks at whether conventional and anomalous metaphors are processed in different locations in the brain while being read when compared with a literal condition in Mandarin Chinese. We find that conventional metaphors differ from the literal condition with a slight amount of increased activation in the right inferior temporal gyrus. In…
17. 19 CFR 114.2 - Customs Conventions and Agreements.
Science.gov (United States)
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Customs Conventions and Agreements. 114.2 Section 114.2 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY; DEPARTMENT OF THE TREASURY CARNETS General Provisions § 114.2 Customs Conventions and Agreements. The...
18. National nuclear safety report 1998. Convention on nuclear safety
International Nuclear Information System (INIS)
1998-01-01
The Argentine Republic subscribed to the Convention on Nuclear Safety, approved by a Diplomatic Conference in Vienna, Austria, on June 17th, 1994. According to the provisions of Article 5 of the Convention, each Contracting Party shall submit for examination a National Nuclear Safety Report on the measures adopted to comply with the corresponding obligations. This Report describes the actions that the Argentine Republic has been carrying out since the beginning of its nuclear activities, showing that it complies with the obligations derived from the Convention, in accordance with the provisions of its Article 4. The analysis of compliance with such obligations is based on the legislation in force, the applicable regulatory standards and procedures, the issued licenses, and other regulatory decisions. The corresponding information is described in the analysis of each of the Convention Articles constituting this Report. The present National Report has been produced in order to comply with Article 5 of the Convention on Nuclear Safety, and has been prepared as far as possible following the Guidelines Regarding National Reports under the Convention on Nuclear Safety, approved at the Preparatory Meeting of the Contracting Parties, held in Vienna in April 1997. This means that the Report has been ordered according to the Articles of the Convention on Nuclear Safety and the contents indicated in the guidelines. The information contained in the articles that are part of the Report shows the compliance of the Argentine Republic, as a contracting party to the Convention, with the obligations assumed
19. Comparative evaluation of organic and conventional farming on ...
African Journals Online (AJOL)
Five samples of organic fruits with seal certification, organic fruits without seal certification and conventional fruits were acquired from supermarkets and farm in Rio de Janeiro, Brazil. Organic lime and orange showed higher mean values of acidity, being 4.5 and 34.8% higher, when compared to conventional fruit, ...
20. Revision of the Paris and Brussels Conventions of Nuclear Liability
International Nuclear Information System (INIS)
Reyners, P.
2002-01-01
The Contracting Parties to the 1960 Paris Convention on Third Party Liability in the Field of Nuclear Energy and to the 1963 Brussels Convention Supplementary to the Paris Convention, have concluded this Spring four years of negotiation on the revision of these instruments. This exercise was itself started as a logical consequence of the adoption in 1997 of a revised Vienna Convention on Civil Liability for Nuclear Damage and of a Convention on Supplementary Compensation for Nuclear Damage. The Contracting Parties have concluded that the existing regime established by these Conventions remains viable and sound but that it also warrants improvements to ensure that greater financial security will be available to compensate a potentially larger number of victims in respect of a broader range of nuclear damage. A number of more technical amendments have also been agreed, in particular to ensure compatibility with other existing Conventions in this field. When the revised Paris and Brussels Conventions come into force, the total amount of funds available for compensation, provided by the liable nuclear operator and by the States concerned, will be 1.5 billion euros. (author)
1. The Vienna Conventions on Early Notification and Assistance
International Nuclear Information System (INIS)
Cameron, P.
1988-01-01
Following the Chernobyl accident, the IAEA established and opened for signature on 26th September 1986 two Conventions, on Early Notification of a Nuclear Accident and on Assistance in Case of a Nuclear Accident or Radiological Emergency respectively. This chapter describes the Conventions and their origins (NEA)
2. Comparison of landscape features in organic and conventional farming systems.
NARCIS (Netherlands)
Mansvelt, van J.D.; Stobbelaar, D.J.; Hendriks, K.
1998-01-01
Four organic (biodynamic) farms coupled with conventional farms from their neighbourhood in The Netherlands, Germany and Sweden, and 3 organic farms and 4 conventional farms from the West Friesean region in The Netherlands were evaluated to compare their impact on landscape diversity. Materials used
International Nuclear Information System (INIS)
Krug, B.; Harnischmacher, U.; Krahe, T.; Fischbach, R.; Altenburg, A.; Krings, F.
1995-01-01
In 326 patients, abdominal contrast radiographs obtained with digital luminescence radiography (DLR) were compared with conventional screen-film radiographs. The digital exposure dose was 50% of the conventional dose. In DLR, two different types of postprocessed images were obtained from each data set. A display with low spatial frequency enhancement, filtered to look like a conventional radiograph, was compared to a display with high spatial frequency enhancement. Conventional and DLR images were evaluated randomly and separately by 4 radiologists by means of a questionnaire. DLR proved to be diagnostically equivalent to the conventional technique, with the exception of a slightly diminished visibility of the mucosal pattern. High spatial frequency enhancement did not provide additional diagnostic information and should be dispensed with in abdominal examinations. (orig.)
4. INTEREST RATES AND CURRENCIES EFFECTS ON ISLAMIC AND CONVENTIONAL BONDS
Directory of Open Access Journals (Sweden)
Ghazali Syamni
2011-09-01
Bond markets have not been well developed in emerging countries. Realizing their important role, especially after the 1997 crisis and the development of Islamic economics, emerging countries have started to develop such markets. This research examines the effect of interest rates and currencies on Islamic and conventional bonds in Bursa Malaysia. The analysis on Islamic bonds shows that interest rates and currencies do not influence Islamic bonds, which supports the prohibition of interest in Islam. The analysis on conventional bonds finds evidence that both interest rates and currencies affect conventional bonds. It also finds evidence of a negative association between interest rates and conventional bonds. Keywords: interest rate, currency, conventional bond, Islamic bond. JEL classification numbers: G11, G12, G15
5. Environmental impact of non-conventional energy sources
International Nuclear Information System (INIS)
Abbasi, S.A.; Abbasi, Naseema; Nipaney, P.C.; Ramasamy, E.V.
1995-01-01
Whereas global attention has always been focused on the adverse environmental impacts of conventional energy sources, only a few studies have been conducted on the clean-environment image of the non-conventional energy sources, particularly the renewable ones. The question of whether the non-conventional sources are really as benign as they are made out to be is addressed in the present paper against the background of a classical paradigm developed by Lovin, which postulated the hard (malignant) and soft (benign) energy concepts in the first place. It then assesses the likely environmental impacts of several major non-conventional energy sources and comes up with the note of caution that in many cases the adverse impacts may not be insubstantial; indeed, in some cases they can be as strongly negative as the impacts of the conventional energy sources. (author). 31 refs
6. Profit-based conventional resource scheduling with renewable energy penetration
Science.gov (United States)
Reddy, K. Srikanth; Panwar, Lokesh Kumar; Kumar, Rajesh; Panigrahi, B. K.
2017-08-01
Technological breakthroughs in renewable energy technologies (RETs) have enabled them to attain grid parity, thereby making them potential contenders to existing conventional resources. To examine the market participation of RETs, this paper formulates a scheduling problem accommodating energy market participation of wind and solar independent power producers (IPPs), treating both conventional resources and RETs as identical entities. Furthermore, constraints pertaining to penetration and curtailment of RETs are restructured. Additionally, an appropriate objective function for the profit incurred by conventional-resource IPPs through reserve market participation, as a function of renewable energy curtailment, is also proposed. The proposed concept is simulated on a test system comprising 10 conventional generation units in conjunction with solar photovoltaic (SPV) and wind energy generators (WEG). The simulation results indicate that renewable energy integration and its curtailment limits influence the market participation and scheduling strategies of conventional resources in both energy and reserve markets. Furthermore, load and reliability parameters are also affected.
7. Amendment of APPRE for Ratification of the International Conventions
International Nuclear Information System (INIS)
Yoo, Ho Sik; Kwak, Sung Woo; Chang, Sung Soon; Seo, Hyung Min; Lee, Jeong Hoon; Lee, Jeong Ho
2010-01-01
Both the international community and the IAEA have been making efforts to strengthen the global regime on nuclear security. As a result of these efforts, two conventions regarding nuclear security were issued by the UN and the IAEA: the International Convention for the Suppression of Acts of Nuclear Terrorism (NTC) and the Amendment to the Convention on the Physical Protection of Nuclear Material (CPPNMNF). The NTC entered into force in 2007, but the CPPNMNF has still not been enacted. In the work plan released after the 2010 Nuclear Security Summit, which was held in Washington, D.C., these conventions were mentioned as important tools against nuclear terrorism. The purpose of these conventions is to prevent malicious acts against radioactive materials and nuclear facilities. The conventions also specify strong penal provisions. Many countries that had ratified these conventions had to revise or change their domestic acts or laws in order to conform to these new international regimes. The ROK signed these two conventions in 2005; however, it has not yet ratified them. The government plans to ratify them before the 2012 Nuclear Security Summit, which will be held in the ROK. Each article in the conventions should be reviewed thoroughly in terms of its effects on the domestic legal and institutional systems. The penal provisions in particular should be carefully scrutinized, since their effects are considerable. In this paper, we compared the penal provisions in the conventions with the ROK's laws and selected the provisions that are not specified in the ROK's legal system. Proposed articles for amendment of the APPRE are also suggested
8. Tailings dams from the perspective of conventional dam engineering
International Nuclear Information System (INIS)
Szymanski, M.B.
1999-01-01
A guideline intended for conventional dams, such as hydroelectric, water supply, flood control, or irrigation dams, is sometimes used for evaluating the safety of a tailings dam. Differences between tailings dams and conventional dams are often substantial and, as such, should not be overlooked when applying the techniques or safety requirements of conventional dam engineering to tailings dams. Having a dam safety evaluation program developed specifically for tailings dams is essential, if only to reduce the chance of potential errors or omissions that might occur when relying on conventional dam engineering practice. This is not to deny the merits of using the Canadian Dam Safety Association (CDSA) Guidelines and similar conventional dam guidelines for evaluating the safety of tailings dams. Rather, it is intended as a warning, and as a rationale underlying a basic requirement of tailings dam engineering: specific experience in tailings dams is essential when applying conventional dam engineering practice. A discussion is included that focuses on the more remarkable tailings dam safety practices; it is not addressed to tailings dams that are so significantly different that the use of conventional dam engineering practice would not be appropriate. The CDSA Guidelines were recently revised to include tailings dams, but incorporating tailings dams into the 1999 revision of the CDSA Guidelines is a first step only - further revision is necessary with respect to tailings dams. 11 refs., 2 tabs
9. The 1968 Brussels convention and liability for nuclear damage
International Nuclear Information System (INIS)
Sands, Ph.; Galizzi, P.
2000-01-01
The legal regime governing civil liability for transboundary nuclear damage is expressly addressed by two instruments adopted in the 1960s: the 1960 Paris Convention on Third Party Liability in the Field of Nuclear Energy and the 1963 Vienna Convention on Civil Liability for Nuclear Damage. These establish particular rules governing the jurisdiction of national courts and other matters, including channelling of liability to nuclear operators, definitions of nuclear damage, the applicable standard of care, and limitations on liability. Another instrument - the 1968 Brussels Convention on Jurisdiction and the Enforcement of Judgements in Civil and Commercial Matters (hereinafter referred to as 'the Brussels Convention') - which is not often mentioned in the nuclear context will nevertheless also be applicable in certain cases. It is premised upon different rules as to forum and applicable law, and presents an alternate vision of the appropriate arrangements governing civil liability for nuclear damage. In this paper we consider the relative merits and demerits of the Brussels Convention from the perspective of non-nuclear states which might suffer damage as a result of a nuclear accident in another state. We conclude that in the context of the applicability of the Brussels Convention the dedicated nuclear liability conventions present few attractions to non-nuclear states in Europe. We focus in particular on issues relating to jurisdiction and applicable law, and do so by reference to a hypothetical accident in the United Kingdom which has transboundary effects in Ireland. (author)
10. Comparing the profitability of organic and conventional broiler production
Directory of Open Access Journals (Sweden)
F Cobanoglu
2014-03-01
Recently, organic broiler chicken production has received more attention worldwide. This study carried out an economic analysis to compare the profitability of organic versus conventional growing systems per unit of broiler meat production. To achieve this goal, 400 slow-growing broiler chickens (Hubbard Red-JA) were reared in an organic production system, and the same number of fast-growing birds (Ross-308) in a conventional system. The profitability was deduced with an economic analysis that compared total costs and net income. Results showed that organic broiler meat can cost from 70% to 86% more with respect to variable and fixed costs when compared with conventional production. The main reasons for the higher cost of organic broiler meat were feed, labor, certification, and outdoor area maintenance. The proportion of fixed costs in total costs was 1.54% in the conventional system and 7.48% in the organic system. The net income per kg of chicken meat in the organic system was €0.75, which is 180% higher than that of chicken meat grown in a conventional system (€0.27); however, the price of organic broiler meat sold in the present study was twice as high as that obtained for conventional broilers. In conclusion, organic broiler meat production was more profitable than conventional rearing.
11. Comparing the profitability of organic and conventional broiler production
Directory of Open Access Journals (Sweden)
F Cobanoglu
2014-12-01
Organic broiler chicken production has recently received more attention worldwide. This study carried out an economic analysis to compare the profitability of organic versus conventional growing systems per unit of broiler meat production. In this study, 400 slow-growing broilers (Hubbard Red-JA) were reared in an organic production system and the same number of fast-growing broilers (Ross-308) were reared in a conventional system. Profitability was deduced from an economic analysis that compared total costs and net income. Results showed that organic broiler meat can cost from 70% to 86% more with respect to variable and fixed costs when compared with conventional production. The main reasons for the higher cost of organic broiler meat were feed, labor, certification, and outdoor area maintenance. The proportion of fixed costs in total costs was 1.54% in the conventional system and 7.48% in the organic system. The net income per kg of chicken meat in the organic system was €0.75, which is 180% higher compared with the conventional system (€0.27); however, organic broiler meat was sold at twice as high a price as the conventional one. In conclusion, organic broiler meat production was more economical than conventional rearing.
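The "180% higher" figure quoted in the broiler-profitability records above is a relative increase over the conventional baseline, not a ratio of the two incomes. A minimal sketch of the arithmetic, using the €0.75 and €0.27 per-kg values from the abstracts (the helper name is ours, not the authors'):

```python
def percent_increase(new: float, base: float) -> float:
    """Relative increase of `new` over `base`, expressed in percent."""
    return (new - base) / base * 100.0

organic_income = 0.75        # EUR per kg, organic system (from the abstract)
conventional_income = 0.27   # EUR per kg, conventional system (from the abstract)

increase = percent_increase(organic_income, conventional_income)
print(round(increase))  # 178, which the abstract rounds up to "180% higher"
```

Note that the exact value is (0.75 − 0.27)/0.27 ≈ 177.8%, so the abstracts' "180%" is a rounded figure, while the separately quoted "twice as high" refers to the sale price, not the net income.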
12. Reversing the conventional leather processing sequence for cleaner leather production.
Science.gov (United States)
Saravanabhavan, Subramani; Thanikaivelan, Palanisamy; Rao, Jonnalagadda Raghava; Nair, Balachandran Unni; Ramasami, Thirumalachari
2006-02-01
Conventional leather processing generally involves a combination of single- and multi-step processes that employ, as well as expel, various biological, inorganic, and organic materials. It involves nearly 14-15 steps and discharges a huge amount of pollutants, primarily because conventional leather processing follows a "do-undo" process logic. In this study, the conventional leather processing steps have been reversed to overcome the problems associated with the conventional method. The charges of the skin matrix and of the chemicals, and the pH profiles of the process, have been judiciously used for reversing the process steps. This reversed process eventually avoids several acidification and basification/neutralization steps used in conventional leather processing. The developed process has been validated through various analyses such as chromium content, shrinkage temperature, softness measurements, scanning electron microscopy, and physical testing of the leathers. Further, the performance of the leathers is shown to be on par with conventionally processed leathers through bulk property evaluation. The process achieves a significant reduction in COD and TS, by 53 and 79%, respectively. Water consumption and discharge are reduced by 65 and 64%, respectively. The process also benefits from significant reductions in chemicals, time, power, and cost compared to the conventional process.
13. Substitutes or Complements? Diagnosis and Treatment with non-Conventional and Conventional Medicine
Directory of Open Access Journals (Sweden)
Aida Isabel Tavares
2015-04-01
Full Text Available Background Portugal has a strong tradition of conventional western healthcare, so it provides a natural case study for the relationship between Complementary/Alternative Medicine (CAM) and Western Medicine (WM). This work aims to test the relationship between CAM and WM use in the diagnosis and treatment stages and to estimate the determinants of CAM choice. Methods The fourth Portuguese National Health Survey is employed to estimate two single probit models and obtain the correlation between the consumption of CAM and WM medicines in the diagnosis and treatment stages. Results Firstly, in both the diagnosis and the treatment stage, CAM and WM are seen to be complementary choices for individuals. Secondly, self-medication also shows complementarity with the choice of CAM treatment. Thirdly, education has a non-linear relationship with the choice of CAM. Finally, working status, age, smoking and chronic disease are determinant factors in the decision to use CAM. Conclusion The results of this work are relevant to health policy-makers and to insurance companies. Patients need freedom of choice and, for the sake of safety and efficacy of treatment, WM and CAM healthcare ought to be provided in a joint and integrated health system.
14. IAEA Director General welcomes landmark convention to combat nuclear terrorism
International Nuclear Information System (INIS)
2005-01-01
15. The climate change convention: What role can business play?
International Nuclear Information System (INIS)
Zammit Cutajar, M.
1994-01-01
The development, content, and some implications of the United Nations Framework Convention on Climate Change are treated briefly. The Climate Change Convention commits those developed countries which have ratified it to limit their emissions of greenhouse gases. While this Convention is an agreement among Governments, and is not directly binding on companies, individuals or organizations, business people and others need to understand it and be prepared for the national initiatives that will follow its ratification. New opportunities are being created for energy-efficient firms, and for new technologies and products. (author)
16. Patient radiation dose in conventional and xerographic cephalography
International Nuclear Information System (INIS)
Copley, R.L.; Glaze, S.A.; Bushong, S.C.; West, D.C.
1979-01-01
A comparison of the radiation doses for xeroradiographic and conventional film-screen cephalography was made. Alderson tissue-equivalent phantoms were used for patient simulation. An optimum technique in terms of patient dose and image quality indicated that the dose for the Xerox process ranged from five to eleven times greater than that for the conventional process, for entrance and exit exposures, respectively. This dose, however, falls within an acceptable range for other dental and medical radiation doses. It is recommended that conventional cephalography be used for routine purposes and that xeroradiography be reserved for situations requiring the increased image quality that the process affords.
17. The history of double tax conventions in Croatia
Directory of Open Access Journals (Sweden)
Hrvoje Arbutina
2014-06-01
Full Text Available After a short introduction, the authors briefly describe the national experience in handling the problems of international double taxation through double tax conventions. This chapter is divided according to the stages identified in the history of double tax conventions. The authors analyse the goals of tax treaty policies in the differentiated stages, with a survey of the economic implications. Special focus is placed on inter-country influences and the impact on and of international institutions and organisations, through an examination of the influence of bilateral tax treaties on model tax conventions and vice versa. The fifth chapter presents concluding observations.
18. Conventional diagnostic imaging of the temporal bone. A historical review
International Nuclear Information System (INIS)
Canigiani, G.
1997-01-01
The Viennese Medical School played an important role in the development of radiological examinations and signs of the temporal bone with conventional X-rays. Famous pioneers include E.G. Mayer (1893-1969) and L. Psenner (1910-1986). Nowadays conventional X-rays and tomography have lost their important role in diagnostic radiology of the temporal bone, but the basic principles established in those early years of radiology are still used now. This statement is correct not only for conventional X-rays, but particularly for 'poly'-tomography in comparison with CT. (orig.) [de
19. Brazil and the UN framework convention on climate change
International Nuclear Information System (INIS)
Marques De Souza, J.A.
1996-01-01
Due to a high share (96%) of hydropower generation in its electricity production, Brazil emits relatively small amounts of CO2. It is argued that, because developed countries are responsible for some 65% of the global emissions of GHGs, they should start to reduce their greenhouse gas emissions, which also follows directly from the Framework Convention on Climate Change. After ratification of the Convention, Brazil has taken all steps to implement it and to assess its greenhouse gas emissions. Various advisory and co-ordinating bodies were installed by decree in mid 1994. (author). 1 fig., 1 tab
20. Soil Microbial Activity in Conventional and Organic Agricultural Systems
Directory of Open Access Journals (Sweden)
Romero F.V. Carneiro
2009-06-01
Full Text Available The aim of this study was to evaluate microbial activity in soils under conventional and organic agricultural management regimes. Soil samples were collected from plots under conventional management (CNV), organic management (ORG) and native vegetation (AVN). Soil microbial activity and biomass were significantly greater in ORG than in CNV. Soil bulk density decreased three years after adoption of the organic system. Soil organic carbon (SOC) was higher in ORG than in CNV. The soil under the organic agricultural system presents higher microbial activity and biomass and lower bulk density than the soil under the conventional agricultural system.
1. Conventional radiology in the bony compromise of Langerhans cells Histiocytosis
International Nuclear Information System (INIS)
Morales, Nilson; Gonzalez, Claudia Patricia; Melendez, Patricia; Terselich, Gretty
1999-01-01
We present a descriptive study of 47 patients with a pathological diagnosis of Langerhans cell histiocytosis who attended the National Cancer Institute in Bogota, Colombia. We reviewed the most frequent conventional X-ray findings.
2. REMOVAL OF URANIUM FROM DRINKING WATER BY CONVENTIONAL TREATMENT METHODS
Science.gov (United States)
The USEPA currently does not regulate uranium in drinking water but will be revising the radionuclide regulations during 1989 and will propose a maximum contaminant level for uranium. The paper presents treatment technology information on the effectiveness of conventional method...
3. Environmental impact assessment of conventional and organic milk production
NARCIS (Netherlands)
Boer, de I.J.M.
2003-01-01
Organic agriculture addresses the public demand to diminish environmental pollution of agricultural production. Until now, however, only few studies tried to determine the integrated environmental impact of conventional versus organic production using life cycle assessment (LCA). The aim of this
4. Third National Report on compliance with the Joint Convention Obligations
International Nuclear Information System (INIS)
2008-09-01
The Joint Convention on the Safety of Spent Fuel Management and the Safety of Radioactive Waste Management, hereinafter referred to as the 'Joint Convention', is the result of international discussions that followed the adoption of the Convention on Nuclear Safety, in 1994. France signed the Joint Convention at the General Conference of the International Atomic Energy Agency (IAEA) held on 29 September 1997, the very first day the Joint Convention was opened for signature. She approved it on 22 February 2000 and filed the corresponding instruments with the IAEA on 27 April 2000. The Joint Convention entered into force on 18 June 2001. For many years, France has been taking an active part in the pursuit of international actions to reinforce nuclear safety and considers the Joint Convention to be a key step in that direction. The fields covered by the Joint Convention have long been part of the French approach to nuclear safety. This report is the third one of its kind. It is published in accordance with Article 32 of the Joint Convention and presents the measures taken by France to meet each of her obligations set out in the Convention. The facilities and the radioactive materials covered by this Convention are quite diversified in nature and are controlled in France by different regulatory authorities. Above a specific threshold of radioactive content, a facility is referred to as a 'basic nuclear facility' (installation nucleaire de base - INB) and placed under the control of the Nuclear Safety Authority (Autorite de surete nucleaire - ASN). Below that threshold and provided that the facility involved is subject to the nomenclature of classified facilities for other purposes than their radioactive materials, any facility may be considered as a 'classified facility on environmental-protection grounds' (installation classee pour la protection de l'environnement - ICPE) and placed under the control of the Ministry for the Environment. Facilities that contain only
5. Gun barrel erosion - Comparison of conventional and LOVA gun propellants
NARCIS (Netherlands)
Hordijk, A.C.; Leurs, O.
2006-01-01
The research department Energetic Materials within TNO Defence, Security and Safety is involved in the development and (safety and insensitive munitions) testing of conventional (nitro cellulose based) and thermoplastic elastomer (TPE) based gun propellants. Recently our testing capabilities have
6. Diagnostic accuracy of the combined use of conventional ...
African Journals Online (AJOL)
2016-03-10
Mar 10, 2016 ... conventional sonography and sonoelastography in differentiating benign and ... Common presentation of cancer thyroid is solid solitary nodule and ..... and Itoh,9 was useful for comparing breast ultrasound and elastographic ...
7. Application of international maritime protection conventions to radioactive pollution
International Nuclear Information System (INIS)
Stein, R.M.; Walden, R.M.
1975-01-01
The application of international maritime protection conventions to radioactive pollution is discussed with particular emphasis on the 1972 London Convention on prevention of marine pollution by dumping of wastes and other matter. Under that Convention, wastes are divided into three categories according to their radioactivity. High level wastes, whose dumping is prohibited, and low level wastes which require a special dumping permit are studied on the basis of definitions established by the International Atomic Energy Agency. Mention is made of the IAEA-recommended procedures for issue of the specific dumping as well as of the exceptions provided for ships and aircraft enjoying State immunity and cases of force majeure or emergencies. Also dealt with are the other international Conventions applying to prevention of radioactive marine pollution [fr
8. China's Foreign Conventional Arms Acquisitions: Background and Analysis
National Research Council Canada - National Science Library
Kan, Shirley; Bolkcom, Christopher; O'Rourke, Ronald
2001-01-01
This CRS Report examines the major, foreign conventional weapon systems that China has acquired or has committed to acquire since 1990, with particular attention to implications for U.S. security concerns...
9. Therapeutic Cancer Vaccines in Combination with Conventional Therapy
DEFF Research Database (Denmark)
Andersen, Mads Hald; Junker, N.; Ellebaek, E.
2010-01-01
The clinical efficacy of most therapeutic vaccines against cancer has not yet met its promise. Data are emerging that strongly support the notion that combining immunotherapy with conventional therapies, for example, radiation and chemotherapy may improve efficacy. In particular combination...
10. Therapeutic cancer vaccines in combination with conventional therapy
DEFF Research Database (Denmark)
Andersen, Mads Hald; Junker, Niels; Ellebaek, Eva
2010-01-01
The clinical efficacy of most therapeutic vaccines against cancer has not yet met its promise. Data are emerging that strongly support the notion that combining immunotherapy with conventional therapies, for example, radiation and chemotherapy may improve efficacy. In particular combination...
11. [NY Convention, Ethiopia's Course of Action Ahead], Amharic
African Journals Online (AJOL)
Foreign Arbitral Awards: Advantages, Disadvantages and ... The Convention on the Recognition and Enforcement of Foreign Arbitral. Awards ..... also, on the application of the party claiming enforcement of the award, order the other party to.
12. Orthodontic intrusion : Conventional and mini-implant assisted intrusion mechanics
Directory of Open Access Journals (Sweden)
Anup Belludi
2012-01-01
intrusion has revolutionized orthodontic anchorage and biomechanics by making anchorage perfectly stable. This article addresses various conventional clinical intrusion mechanics and especially intrusion using mini-implants that have proven effective over the years for intrusion of maxillary anteriors.
13. The raw milk quality from organic and conventional agriculture
Directory of Open Access Journals (Sweden)
Juraj Čuboň
2008-01-01
Full Text Available In the experiment, the parameters of milk quality from an organic and a conventional dairy farm were analyzed. The somatic cell count was 219 × 10³ ml⁻¹ in the organic milk and 242 × 10³ ml⁻¹ in the conventional milk. It seems that the conditions of organic farming could have a positive effect on the health of the mammary gland. We found the highest somatic cell counts at the end of the year: 336 × 10³ ml⁻¹ in organic milk in December and 336 × 10³ ml⁻¹ in conventional milk in November. The total bacteria count was higher in organic milk (86 × 10³ CFU ml⁻¹) than in conventional milk (51 × 10³ CFU ml⁻¹), as was the number of coliform bacteria. The number of coliform bacteria in conventional milk was under 1000 CFU ml⁻¹ for all samples. The highest number of coliform bacteria in organic milk was reached in February (1000 CFU ml⁻¹). We found higher contents of fat (4.23 g/100 g) and protein (3.41 g/100 g) in organic milk in comparison with the conventional milk (4.11 g/100 g and 3.39 g/100 g, respectively). The higher contents of protein and fat in organic milk and the higher protein content in conventional milk were determined in December. Heat resistance was determined as the volume of 96% ethanol required to coagulate 2 ml of milk. The conventional milk had significantly lower heat resistance (1.38 ml) than the organic milk (1.86 ml). The better heat stability of organic milk and its higher Ca content (144.29 mg/100 g) correspond with the higher technological quality of organic milk.
14. Specific features of human rights guaranteed by the Aarhus Convention
Directory of Open Access Journals (Sweden)
Etinski Rodoljub
2013-01-01
Full Text Available The Aarhus Convention legally articulates the basic human needs to live in an environment adequate for human health and well-being and to engage in the protection and improvement of the environment. It recognized and protected a general human right to an adequate environment and three particular rights in environmental matters: to information, to public participation in decision-making, and to justice. The Aarhus Convention introduced an innovative approach to human rights protection in relation to transboundary issues and legal standing.
15. The history of double tax conventions in Croatia
OpenAIRE
Hrvoje Arbutina; Nataša Žunić Kovačević
2014-01-01
After a short introduction, the authors briefly describe the national experience in handling the problems of international double taxation through double tax conventions. This chapter is divided according to stages in the history of double tax conventions identified. The authors analyse the goals of tax treaty policies in differentiated stages with a survey of the economic implications. Special focus is placed on inter-country influences and the impact on and of international institutions and...
16. Impact of replacement of conventional Recloser with PulseCloser
OpenAIRE
Olgert Metko; Rajmonda Bualoti; Engjell Zeqo
2011-01-01
A conventional recloser stresses the circuit with fault current every time it recloses into a fault. After clearing a fault, a conventional recloser simply recloses the interrupters to repeatedly test for the presence of the fault. If the fault is still there, the interrupters are tripped again. Then, after a time delay, the interrupters are reclosed. During the reclosing operation of an automatic recloser, including fast reclosers, powerful transient processes occur and a significant amount of...
17. Panorama 2017 - New conventional oil and gas discoveries
International Nuclear Information System (INIS)
Hureau, Geoffroy; Vially, Roland
2017-01-01
In 2016, spending on exploration is expected to decline by approximately 35% for the second consecutive year (total exploration-production is down 24%). However, volumes discovered through conventional exploration should only fall by 25% due to several significant discoveries, particularly in Alaska and West Africa. The largest discovery in 2016 was an unconventional Permian shale basin in the United States, which could alone contain as many hydrocarbons as all of the year's conventional discoveries combined
18. 1987 Annual convention of the Austrian Physical Society
International Nuclear Information System (INIS)
1987-09-01
This is the pre-convention program of the annual convention to be held in September 1987. The divisions 1) general, 2) nuclear and particle physics, and 3) high-polymer physics have 124 contributions with abstracts, 75 of them of INIS interest. Another 22 contributions from the divisions 4) atomic, nuclear and plasma physics, 5) solid state physics and 6) vocational training are announced by titles only. (G.Q.)
19. Convention on the physical protection of nuclear material
International Nuclear Information System (INIS)
1982-01-01
The document presents the original draft for a Convention on the Physical Protection of Nuclear Material, full reports of all the discussions held by representatives of Member States at meetings called by the IAEA, texts of written comments provided by Member States and the final agreed text of the Convention, list of original signatory States and status of the list of signatory States at the date of publication
20. Digital Versus Conventional Impressions in Fixed Prosthodontics: A Review.
Science.gov (United States)
Ahlholm, Pekka; Sipilä, Kirsi; Vallittu, Pekka; Jakonen, Minna; Kotiranta, Ulla
2018-01-01
To conduct a systematic review to evaluate the evidence of possible benefits and accuracy of digital impression techniques vs. conventional impression techniques. Reports of digital impression techniques versus conventional impression techniques were systematically searched for in the following databases: Cochrane Central Register of Controlled Trials, PubMed, and Web of Science. A combination of controlled vocabulary, free-text words, and well-defined inclusion and exclusion criteria guided the search. Digital impression accuracy is at the same level as conventional impression methods in fabrication of crowns and short fixed dental prostheses (FDPs). For fabrication of implant-supported crowns and FDPs, digital impression accuracy is clinically acceptable. In full-arch impressions, conventional impression methods resulted in better accuracy compared to digital impressions. Digital impression techniques are a clinically acceptable alternative to conventional impression methods in fabrication of crowns and short FDPs. For fabrication of implant-supported crowns and FDPs, digital impression systems also result in clinically acceptable fit. Digital impression techniques are faster and can shorten the operation time. Based on this study, the conventional impression technique is still recommended for full-arch impressions. © 2016 by the American College of Prosthodontists.
1. Choledochal cyst: Comparison of MR and conventional cholangiography
International Nuclear Information System (INIS)
Kim, S.H.; Lim, J.H.; Yoon, H.-K.; Han, B.K.; Lee, S.K.; Kim, Y.I.
2000-01-01
AIMS: To assess the diagnostic value of magnetic resonance (MR) cholangiography versus conventional cholangiography in patients with choledochal cyst and to determine whether MR cholangiography can be considered an alternative to conventional cholangiography. MATERIALS AND METHODS: Thirteen patients with choledochal cyst were examined by MR cholangiography and conventional cholangiograms. Magnetic resonance cholangiography employed T2-weighted axial and coronal fast spin-echo, single and multislab single-shot fast spin-echo sequences, including source images with maximum intensity projections. The diagnostic value of MR cholangiography and conventional cholangiograms was assessed and compared using the criteria of depiction of morphology, anomalous pancreaticobiliary duct union and demonstration of complications such as stones. A four-point diagnostic scale was applied to the delineation of the ductal anatomy with the Wilcoxon signed-ranks test and McNemar's test used for statistical analysis. RESULTS: The depiction of the choledochal cyst was significantly better with MR cholangiography than with conventional cholangiography (P 0.03). The detection rate of an anomalous pancreaticobiliary duct union was not significantly different with either method (P = 0.641), nor was the detection rate of bile duct stones (P = 0.375). CONCLUSION: Magnetic resonance cholangiography provides data equivalent to or superior to those from conventional cholangiography in evaluating choledochal cyst. Magnetic resonance cholangiography is recommended as a non-invasive examination of choice for the evaluation of choledochal cyst. Kim, S.H. (2000). Clinical Radiology 55, 378-383
2. 10 CFR Appendix I to Subpart B of... - Uniform Test Method for Measuring the Energy Consumption of Conventional Ranges, Conventional...
Science.gov (United States)
2010-01-01
... PROGRAM FOR CONSUMER PRODUCTS Test Procedures Pt. 430, Subpt. B, App. I Appendix I to Subpart B of Part... between the center and the corners of the conventional gas oven on the diagonals of a horizontal plane...
3. Digital versus conventional implant impressions for edentulous patients: accuracy outcomes.
Science.gov (United States)
Papaspyridakos, Panos; Gallucci, German O; Chen, Chun-Jung; Hanssen, Stijn; Naert, Ignace; Vandenberghe, Bart
2016-04-01
To compare the accuracy of digital and conventional impression techniques for completely edentulous patients and to determine the effect of different variables on the accuracy outcomes. A stone cast of an edentulous mandible with five implants was fabricated to serve as master cast (control) for both implant- and abutment-level impressions. Digital impressions (n = 10) were taken with an intraoral optical scanner (TRIOS, 3shape, Denmark) after connecting polymer scan bodies. For the conventional polyether impressions of the master cast, a splinted and a non-splinted technique were used for implant-level and abutment-level impressions (4 cast groups, n = 10 each). Master casts and conventional impression casts were digitized with an extraoral high-resolution scanner (IScan D103i, Imetric, Courgenay, Switzerland) to obtain digital volumes. Standard tessellation language (STL) datasets from the five groups of digital and conventional impressions were superimposed with the STL dataset from the master cast to assess the 3D (global) deviations. To compare the master cast with digital and conventional impressions at the implant level, analysis of variance (ANOVA) and Scheffe's post hoc test was used, while Wilcoxon's rank-sum test was used for testing the difference between abutment-level conventional impressions. Significant 3D deviations (P impressions (P > 0.001). Digital implant impressions are as accurate as conventional implant impressions. The splinted, implant-level impression technique is more accurate than the non-splinted one for completely edentulous patients, whereas there was no difference in the accuracy at the abutment level. The implant angulation up to 15° did not affect the accuracy of implant impressions. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
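The superimposition step described in this abstract, aligning STL datasets and assessing 3D (global) deviations, reduces at its core to a distance metric between corresponding surface points. The sketch below is a hypothetical, simplified illustration with invented coordinates and an RMS metric; the study's actual pipeline works on full STL meshes with best-fit alignment:

```python
import math

# Simplified illustration of a "global 3D deviation" between a master cast and
# a superimposed impression scan: RMS distance over corresponding points.
# Coordinates (in mm) are invented for demonstration.
master = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (0.0, 10.0, 0.0)]
scan   = [(0.02, 0.0, 0.01), (10.03, 0.01, 0.0), (0.0, 9.98, 0.02)]

def rms_deviation(a, b):
    # Mean squared point-to-point distance, then square root.
    sq = [sum((p - q) ** 2 for p, q in zip(pa, pb)) for pa, pb in zip(a, b)]
    return math.sqrt(sum(sq) / len(sq))

print(f"global deviation: {rms_deviation(master, scan):.4f} mm")
```

A lower RMS value indicates a closer match between the impression and the master geometry, which is how the cast groups can be ranked.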
4. International convention for the suppression of acts of nuclear terrorism
International Nuclear Information System (INIS)
Jankowitsch-Prevor, O.
2005-01-01
The Preamble, composed of 13 paragraphs and drafted in the usual style of a General Assembly resolution, is aimed at placing the convention in a number of relevant contexts. First, the convention is linked to the issue of the maintenance of international peace and security through a reference to the purposes of the United Nations under Article 1 of the Charter. Next, it is presented as being a further step in the decisions, measures and instruments developed by the United Nations over the past ten years with the common objective of eliminating international terrorism in all its forms. Lastly, the convention is placed in its specific nuclear context through a number of references. In its third paragraph, the Preamble contains a reference to the principle recognizing 'the right of all states to develop and apply nuclear energy for peaceful purposes and their legitimate interests in the potential benefits to be derived from the peaceful application of nuclear energy'. This paragraph is identical to the first paragraph of the Preamble of the CPPNM, and the same principle is stated again in the first paragraph of the Preamble of the Amendment to the CPPNM, and constitutes a kind of general statement in favour of the peaceful use of nuclear energy and technology, without explicit reservations concerning non-proliferation, the safety and security of nuclear facilities or the management of radioactive waste. A draft amendment presented by the United States delegation in the final phase of work that suggested adding the phrase 'while recognizing that the goals of peaceful utilisation should not be used as a cover for proliferation' to the sentence cited above, was apparently not retained. Next, the Preamble mentions the 1980 Convention on the Physical Protection of Nuclear Material, and in the tenth paragraph the threat that 'acts of nuclear terrorism may result in the gravest consequences and may pose a threat to international peace and security'. Paragraph 11 of the
5. Microbial analysis of meatballs cooled with vacuum and conventional cooling.
Science.gov (United States)
Ozturk, Hande Mutlu; Ozturk, Harun Kemal; Koçar, Gunnur
2017-08-01
Vacuum cooling is a rapid evaporative cooling technique that can be used for pre-cooling of leafy vegetables, mushrooms, bakery and fishery products, sauces, cooked food, meat and particulate foods. The aim of this study was to apply the vacuum cooling and conventional cooling techniques to the cooling of meatballs and to show the effect of vacuum pressure on cooling time, temperature decrease and microbial growth rate. The results of vacuum cooling and conventional cooling (cooling in the refrigerator) were compared with each other for different temperatures. The study shows that conventional cooling was much slower than vacuum cooling. Moreover, the microbial growth rate with vacuum cooling was extremely low compared with conventional cooling. Thus, for vacuum cooling, the lowest microbial growth occurred at 0.7 kPa and the highest microbial growth was observed at 1.5 kPa. The mass loss ratio for conventional cooling and vacuum cooling was about 5 and 9%, respectively.
6. Implications of the Law of the Sea Convention
International Nuclear Information System (INIS)
Brewer, W.C. Jr.
1989-01-01
This paper reports that protection and preservation of the marine environment from wastes and toxic substances was an early concern of the Third United Nations Conference on the Law of the Sea, and the subject is extensively dealt with in the text of the Convention, adopted on 30 April 1982. The environmental provisions of the Convention are intended to serve as an umbrella treaty that states general goals, delimits the power and geographical jurisdiction of states in dealing with environmental problems, and requires states to cooperate through regional and global organizations in the development of standards. The most complex provisions of the environmental text deal with vessel discharges, reflecting the high degree of public interest in oil pollution, whereas the ocean dumping provisions rely largely on the standards of the London Dumping Convention. Pollution carried by air and rivers and wastes from seabed mining within national jurisdiction are treated briefly. The International Seabed Authority, created elsewhere in the Convention to regulate seabed mining, is granted power to regulate pollution from such mining beyond national jurisdiction. Overall, the most important contribution made by the Law of the Sea Convention to the protection of the marine environment is the obligation of states to bring national marine pollution laws up to global standards
7. The nuclear safety convention. Results for Argentine as contracting party
International Nuclear Information System (INIS)
Caruso, Gustavo
2002-01-01
A powerful mechanism for increasing safety worldwide is the development and adoption of legally binding safety conventions. Since 1986, four conventions have been ratified in the areas of nuclear, radiation and waste safety. The Nuclear Safety Convention establishes an international co-operation mechanism to maintain the safety of nuclear installations, focused on: achieving and maintaining a high level of nuclear safety worldwide through the enhancement of national measures and international co-operation including, where appropriate, safety-related technical co-operation; and establishing and maintaining effective defences in nuclear installations against potential radiological hazards in order to protect individuals, society and the environment from the harmful effects of ionizing radiation from such installations, to prevent accidents with radiological consequences, and to mitigate such consequences should they occur. Each contracting party shall take, within the framework of its national law, the legislative, regulatory and administrative measures and other steps necessary for implementing its obligations under this Convention. Moreover, each contracting party shall submit for review, prior to each review meeting, a National Report on the measures it has taken to implement each of the obligations of the Convention. The contracting parties concluded that the review process had proven to be of great value to their national nuclear safety programmes. (author)
8. A comparison of piezosurgery with conventional techniques for internal osteotomy.
Science.gov (United States)
Koçak, I; Doğan, R; Gökler, O
2017-06-01
To compare conventional osteotomy with the piezosurgery medical device, in terms of postoperative edema, ecchymosis, pain, operation time, and mucosal integrity, in rhinoplasty patients. In this prospective study, 49 rhinoplasty patients were randomly divided into two groups according to the osteotomy technique used, either conventional osteotomy or piezosurgery. For all patients, the total duration of the operation was recorded, and photographs were taken and scored for ecchymosis and edema on postoperative days 2, 4, and 7. In addition, pain level was evaluated on postoperative day 2, and mucosal integrity was assessed on day 4. All scoring and evaluation was conducted by a physician who was blinded to the osteotomy procedure. In the piezosurgery group, edema scores on postoperative day 2 and ecchymosis scores on postoperative days 2, 4, and 7 were significantly lower than in the conventional osteotomy group. Pain scores were also significantly lower in the piezosurgery group than in the conventional osteotomy group, and no mucosal damage was observed in the piezosurgery group. When total operation duration was compared, there was no significant difference between the groups (p > 0.05). Piezosurgery is a safe osteotomy method, with less edema (in the early postoperative period) and ecchymosis compared with conventional osteotomy, as well as less pain, a similar operation duration, and no mucosal damage.
9. Determinants of Liquidity Risk in Indonesian Islamic and Conventional Banks
Directory of Open Access Journals (Sweden)
2016-07-01
Full Text Available The purpose of the study is to examine the causes of liquidity risk in Islamic and conventional banks in Indonesia using the panel data regression method. The study found a significant positive relation of ROA and NPF with liquidity risk, and a significant negative relation of CAR with liquidity risk, in Indonesian conventional banks. In Islamic banks, CAR has a significant positive effect on liquidity risk, while ROA shows a significant negative effect. A possible explanation is that, given their large profits, conventional banks have more scope to allocate funds to liquidity reserves as well as to upgrading facilities (improvements in technology). When NPL is high, conventional banks will increase liquid assets as a buffer. Unlike conventional banks, Islamic banks in Indonesia might allocate capital as liquidity reserves and might allocate ROA to fixed assets, financing, or technology. The results confirm that the roles of capital and bank performance are indeed important to banking liquidity. DOI: 10.15408/aiq.v8i2.2871
10. Transport of nuclear material under the 1971 Brussels Convention
International Nuclear Information System (INIS)
Lagorce, M.
1975-01-01
The legal regime in force before entry into force of the 1971 Brussels Convention relating to civil liability for the maritime carriage of nuclear material created serious difficulties for maritime carriers, regarding both the financial risks entailed and restrictions on enjoyment of the rights granted by civil liability conventions. The 1971 Convention exonerates from liability any person likely to be held liable for nuclear damage under maritime law, provided another person is liable under the nuclear conventions or an equivalent national law. A problem remaining is that of compensation of nuclear damage to the means of transport for countries not having opted for re-inclusion of such damage in the nuclear law regime; this does not apply however to countries having ratified the Convention to date. A feature of the latter is that it establishes as extensively as possible the priority of nuclear law over maritime law. Furthermore the new regime continues to preserve efficiently the interests of victims of nuclear incidents. It is therefore to be hoped that insurers will no longer hesitate to cover international maritime carriage of nuclear material [fr
11. Alberta's conventional oil supply: How much? How long?
International Nuclear Information System (INIS)
Heath, M.
1992-01-01
To assess the future conventional crude oil supply potential in Alberta, a modelling system was designed with the capacity to determine the fraction of existing and potential reserves which could prove technically, economically and/or commercially viable over time. The reference case analysis described assumed constant real oil prices and fiscal burdens, capital and operating costs. Reserve additions from new pool discoveries were summed with reserves from existing pools to arrive at an estimate of the potential supply of established reserves in each play area. The established reserves from all plays were then totalled to provide the provincial conventional oil resource potential. Alberta's recoverable conventional crude oil reserves were shown to be declining at about 2 percent per year. However, even with declining recoverable reserves and relatively low prices, the results of the study indicated that the conventional oil industry remained a major revenue generator for the province and would continue to be so over the next 15 to 20 years. Improved operating efficiencies, cost reductions, reasonable prices and cooperation between industry and government were shown to be necessary to assure the continued viability of Alberta's conventional oil industry. figs., tabs., 11 refs
12. Conventional natural gas resources of the Western Canada Sedimentary Basin
International Nuclear Information System (INIS)
Bowers, B.
1999-01-01
The use of decline curve analysis to analyse and extrapolate the production performance of oil and gas reservoirs was discussed. This mathematical analytical tool has been a valid method for estimating the conventional crude oil resources of the Western Canada Sedimentary Basin (WCSB). However, it has failed to provide a generally acceptable estimate of the conventional natural gas resources of the WCSB. This paper proposes solutions to this problem and provides an estimate of the conventional natural gas resources of the basin by statistical analysis of the declining finding rates. Although in the past, decline curve analysis did not reflect the declining finding rates of natural gas in the WCSB, the basin is now sufficiently developed that estimates of conventional natural gas resources can be made by this analytical tool. However, the analysis must take into account the acceleration of natural gas development drilling that has occurred over the lifetime of the basin. It was concluded that ultimate resources of conventional marketable natural gas of the WCSB estimated by decline analysis amount to 230 tcf. It was suggested that further research be done to explain why the Canadian Gas Potential Committee (CGPC) estimate for Alberta differs from the decline curve analysis method. 6 refs., 35 figs
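The decline-curve extrapolation described in this record can be sketched numerically. A minimal illustration, assuming simple exponential (Arps) decline; the function names and the example numbers (chosen to match a roughly 2%/yr decline) are assumptions for the sketch, not figures from the CGPC or decline-analysis estimates:

```python
import math

def exponential_decline_rate(q_start, q_end, years):
    """Annualized decline rate D, assuming q(t) = q_start * exp(-D * t)."""
    return math.log(q_start / q_end) / years

def remaining_recoverable(q_current, d, q_abandon):
    """Cumulative production until the abandonment rate is reached:
    the integral of q(t) dt under exponential decline = (q_current - q_abandon) / D."""
    return (q_current - q_abandon) / d

# Hypothetical basin: rate fell from 100 to ~81.9 units/yr over 10 years,
# i.e. a decline rate close to 2 %/yr; abandon at 1 unit/yr.
d = exponential_decline_rate(100.0, 81.87, 10.0)
reserves = remaining_recoverable(81.87, d, 1.0)
```

The same extrapolation, applied play by play and summed, is the kind of calculation the record describes for the basin-wide resource estimate.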
13. Impact of body weight on the achievement of minimal disease activity in patients with rheumatic diseases: a systematic review and meta-analysis.
Science.gov (United States)
Lupoli, Roberta; Pizzicato, Paolo; Scalera, Antonella; Ambrosino, Pasquale; Amato, Manuela; Peluso, Rosario; Di Minno, Matteo Nicola Dario
2016-12-13
In this study, we evaluated the impact of obesity and/or overweight on the achievement of minimal disease activity (MDA) in patients with psoriatic arthritis (PsA) and patients with rheumatoid arthritis (RA) receiving an anti-rheumatic treatment. Obesity can be considered a low-grade, chronic systemic inflammatory disease and some studies suggested that obese patients with rheumatic diseases exhibit a lower rate of low disease activity achievement during treatment with anti-rheumatic drugs. A systematic search was performed in major electronic databases (PubMed, Web of Science, Scopus, Embase) to identify studies reporting MDA achievement in obese and/or overweight patients with RA or PsA and in normal-weight RA or PsA control subjects. Results were expressed as Odds Ratios (ORs) with pertinent 95% Confidence Intervals (95%CIs). We included 17 studies (10 on RA and 7 on PsA) comprising a total of 6693 patients (1562 with PsA and 5131 with RA) in the analysis. The MDA achievement rate was significantly lower in obese patients than in normal-weight subjects (OR 0.447, 95% CI 0.346-0.577, p rheumatic diseases receiving treatment with traditional or biologic disease-modifying antirheumatic drugs.
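The pooled odds ratios with 95% confidence intervals reported in this record come from standard 2×2-table arithmetic at the study level. A minimal sketch using the usual log-normal approximation; the counts below are invented for illustration and are not taken from the meta-analysis:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with an approximate 95% CI for a 2x2 table:
    a/b = MDA achievers / non-achievers among obese patients,
    c/d = MDA achievers / non-achievers among normal-weight patients."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical single study: 30/100 obese vs 50/100 normal-weight achieve MDA
or_, lo, hi = odds_ratio_ci(30, 70, 50, 50)
```

An OR below 1, as here, corresponds to the record's finding that obese patients achieve MDA less often than normal-weight patients.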
14. Convention Theory in the Anglophone Agro-food Literature
DEFF Research Database (Denmark)
Ponte, Stefano
2016-01-01
In the past two decades, convention theory has been applied in various branches of agro-food studies, providing analytical and theoretical insight for examining alternative food networks, coordination and governance in agro-food value chains, and the so-called 'quality turn' in food production and consumption. In this article, I examine convention theory applications in the Anglophone literature on agro-food studies through the review of 51 relevant contributions. I highlight how CT has helped explain different modes of organization and coordination of agro-food operations in different places, and how … (Salais and Storper, 1992; Storper and Salais, 1997); and another applying the 'orders of worth' approach of Boltanski and Thevenot (1991[2006]) and further elaborations of 'quality conventions'. After tracing broad trajectories and the significance of new developments in this literature, I highlight its …
15. World Engineer’s Convention 2011: Engineers Power the World
CERN Multimedia
Yi Ling Hwong (Knowledge Transfer) and Katarina Anthony
2011-01-01
Can the increasing global energy consumption be met without intensifying global warming? Do the necessary technical solutions exist, and is the switch to a low-carbon energy supply feasible and financially viable? These crucial questions and many others were dealt with at the 2011 World Engineer's Convention (WEC). CERN was invited to participate in the event, highlighting its significant contribution to global engineering with an exhibition space devoted to the LHC on the convention floor and a keynote speech delivered by CERN's Director-General. From 4 – 9 September 2011, more than 2000 engineers and researchers, as well as politicians and business representatives from about 100 countries, gathered at the 2011 World Engineer's Convention (WEC), held in Geneva, Switzerland, to discuss solutions for a sustainable energy future. Discussions looked at the development of engineering solutions through a variety of approaches, with ...
16. Convention on the Physical Protection of Nuclear Material
International Nuclear Information System (INIS)
1980-01-01
The Convention on the Physical Protection of Nuclear Material is composed of the text of 23 articles, annex 1 showing the levels of physical protection, and annex 2, the categorization list of nuclear material. The text consists of definitions (article 1), the scope of application (2), liability for protecting nuclear material during international transport (3 and 4), the duty of mutual cooperation (5 and 6), responsibility for criminal punishment (7 to 13), and final provisions (14 to 23). It is to be noted that nuclear material for military purposes and domestic nuclear facilities are excluded from the Convention. After a brief description of the course leading to the establishment of the Convention, the individual articles and annexes, the respective Japanese version, and an explanation based on the intergovernmental meeting discussion on the draft convention are described. (J.P.N.)
17. Advances in Neuroscience and the Biological and Toxin Weapons Convention
Science.gov (United States)
Dando, Malcolm
2011-01-01
This paper investigates the potential threat to the prohibition of the hostile misuse of the life sciences embodied in the Biological and Toxin Weapons Convention from the rapid advances in the field of neuroscience. The paper describes how the implications of advances in science and technology are considered at the Five Year Review Conferences of the Convention and how State Parties have developed their appreciations since the First Review Conference in 1980. The ongoing advances in neurosciences are then assessed and their implications for the Convention examined. It is concluded that State Parties should consider a much more regular and systematic review system for such relevant advances in science and technology when they meet at the Seventh Review Conference in late 2011, and that neuroscientists should be much more informed and engaged in these processes of protecting their work from malign misuse. PMID:21350673
18. CT and conventional radiographic techniques in interstitial pulmonary disease
International Nuclear Information System (INIS)
Leipner, N.; Schueller, H.; Uexkuell-Gueldenband, V. v.; Schlolaut, K.H.; Overlack, A.; Bonn Univ.
1988-01-01
One hundred and sixty-four patients with pulmonary fibrosis were examined by CT and by conventional radiological methods. Sixty patients had asbestosis, thirty-nine silicosis, forty sarcoidosis and twenty-five had idiopathic pulmonary fibrosis. CT is superior to conventional radiography in evaluating interstitial pulmonary changes, particularly of the pleura and the lung parenchyma. In sixty-nine patients there were some findings which could only be demonstrated by CT. In asbestosis, silicosis and sarcoidosis the CT classification of the lung parenchyma which we have suggested produces significantly better correlation with vital capacity than can be achieved from conventional chest films, according to the guidelines of the I.L.O. (orig./GDG) [de
19. Fifth national report of Brazil for the nuclear safety convention
International Nuclear Information System (INIS)
2010-01-01
This Fifth National Report is a new update covering the period 2007/2009. The document represents the national report prepared in fulfillment of the Brazilian obligations under the Convention on Nuclear Safety. Chapter 2 gives details about the existing nuclear installations. Chapter 3 provides details about the legislation and regulations, including the regulatory framework and the regulatory body. Chapter 4 covers general safety considerations as described in articles 10 to 16 of the Convention. Chapter 5 addresses the safety of the installations during siting, design, construction and operation. Chapter 6 describes planned activities to further enhance nuclear safety. Chapter 7 presents the final remarks related to the degree of compliance with the Convention obligations.
20. Ideal and conventional feedback systems for RWM suppression
International Nuclear Information System (INIS)
Pustovitov, V.D.
2002-01-01
Feedback suppression of resistive wall modes (RWM) is studied analytically using a model based on a standard cylindrical approximation. Two feedback systems are compared: an 'ideal' system, creating only the field necessary for RWM suppression, and a 'conventional' system, like that used in the DIII-D tokamak and considered as a candidate for ITER. The widespread opinion that feedback with poloidal sensors is better than that with radial sensors is discussed. It is shown that the 'conventional' feedback with radial sensors can be effective only in a limited range, while using the input signal from internal poloidal sensors allows easy fulfilment of the stability criterion. This is a property of the 'conventional' feedback; the 'ideal' feedback would stabilise RWM in both cases. (author)
2. Dural ectasia and conventional radiography in the Marfan lumbosacral spine
International Nuclear Information System (INIS)
Ahn, N.U.; Nallamshetty, L.; Ahn, U.M.; Buchowski, J.M.; Kebaish, K.M.; Sponseller, P.D.; Rose, P.S.; Garrett, E.S.
2001-01-01
Objective. To determine how well conventional radiographic findings can predict the presence of dural ectasia in Marfan patients.Design and patients. Twelve Marfan patients without dural ectasia and 21 Marfan patients with dural ectasia were included in the study. Five radiographic measurements were made of the lumbosacral spine: interpediculate distance, scalloping value, sagittal canal diameter, vertebral body width, and transverse process width.Results. The following measurements were significantly larger in patients with dural ectasia: interpediculate distances at L3-L4 levels (P 38.0 mm, sagittal diameter at S1 >18.0 mm, or scalloping value at L5 >5.5 mm.Conclusion. Dural ectasia in Marfan syndrome is commonly associated with several osseous changes that are observable on conventional radiographs of the lumbosacral spine. Conventional radiography can detect dural ectasia in patients with Marfan syndrome with a very high specificity (91.7%) but a low sensitivity (57.1%). (orig.)
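The reported specificity (91.7%) and sensitivity (57.1%) follow from simple 2×2 counts. A sketch assuming the counts implied by the cohort sizes and percentages (12 of 21 dural-ectasia patients meeting a radiographic criterion, and 11 of 12 controls not meeting it) — these counts are an inference from the reported figures, not stated in the record:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# 21 Marfan patients with dural ectasia, 12 without (counts inferred above)
sens, spec = sensitivity_specificity(tp=12, fn=9, tn=11, fp=1)
```

These counts reproduce the record's 57.1% sensitivity and 91.7% specificity.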
3. PAPNET-assisted primary screening of conventional cervical smears.
Science.gov (United States)
Cenci, M; Nagar, C; Vecchione, A
2000-01-01
The PAPNET System is the only device with neural-network-based artificial intelligence to detect and display images of abnormal cells on a monitor for interactive evaluation. We have used PAPNET effectively in rescreening of conventional cervical smears and noted its advantages and disadvantages. In this paper, we report our results from PAPNET-assisted primary screening performed on 20,154 conventional smears. The smears were classified as Negative or as Review. The Negative cases were rapidly rescreened, mainly near the coverslip edges, which are the slide areas not analyzed by automated devices because of focusing problems. The Review cases were fully reanalyzed under the optical microscope. In summary, 140 positive smears were detected: 57 cases showed changes due to HPV, 63 LSIL, 15 HSIL, and 5 carcinomas. The PAPNET System was therefore confirmed as useful in primary screening of conventional cervical samples as well as in rescreening.
4. A proposed structure for an international convention on climate change
International Nuclear Information System (INIS)
Nitze, W.A.
1991-01-01
In this chapter, the author recommends a framework convention that will stimulate policy changes without expensive emission reductions in the short term. A central task for a climate convention will be to provide the international community with a permanent mechanism for coordinating its efforts to deal with climate change. The convention should go beyond organizational structure to establish a process for updating the parties' understanding of the science and potential impacts of climate change and for building consensus on policy responses. Each party must then be required to prepare and distribute its own national plan for reducing greenhouse gas emissions and for adapting to future change while achieving its development objectives. A set of targets and timetables for the reduction of greenhouse gas emissions is presented.
International Nuclear Information System (INIS)
Noorhazleena Azaman; Khairul Anuar Mohd Salleh; Sapizah Rahim; Shaharudin Sayuti; Arshad Yassin; Abdul Razak Hamzah
2010-01-01
In industrial radiography, many criteria based on established standards must be considered when deciding to accept or reject a radiographic film. In conventional radiography, the optical density is measured with a densitometer while the film is viewed on the viewer. In computed radiography (CR), image quality must instead be evaluated and analysed from the grey values of the digital image. Many factors affect digital image quality; one factor in image processing is the grey value, which is related to contrast resolution. In this work, we performed grey value measurements on a digital radiography system and compared them with exposed films from conventional radiography. The test sample was a steel step wedge. We found that the contrast resolution is higher in computed radiography than in conventional radiography. (author)
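The grey-value comparison on a step wedge can be sketched as a per-step contrast calculation. A minimal illustration; the grey values and the Michelson-style contrast measure are assumptions for the sketch, not the procedure used in the record:

```python
def step_contrasts(grey_values):
    """Contrast between each pair of adjacent steps of a step-wedge image,
    computed from the mean grey value of each step: |g2 - g1| / (g2 + g1)."""
    return [abs(b - a) / (b + a) for a, b in zip(grey_values, grey_values[1:])]

cr_steps = [2000, 1500, 1100, 800]  # hypothetical CR mean grey values per step
contrasts = step_contrasts(cr_steps)
```

Higher per-step contrast values on the CR image than on a digitized film would correspond to the higher contrast resolution the record reports for CR.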
6. Stability of Brillouin flow in planar, conventional, and inverted magnetrons
International Nuclear Information System (INIS)
Simon, D. H.; Lau, Y. Y.; Greening, G.; Wong, P.; Gilgenbach, R. M.; Hoff, B. W.
2015-01-01
The Brillouin flow is the prevalent flow in crossed-field devices. We systematically study its stability in the conventional, planar, and inverted magnetron geometries. To investigate the intrinsic negative mass effect in Brillouin flow, we consider electrostatic modes in a nonrelativistic, smooth bore magnetron. We found that the Brillouin flow in the inverted magnetron is more unstable than that in a planar magnetron, which in turn is more unstable than that in the conventional magnetron. Thus, oscillations in the inverted magnetron may start up faster than in the conventional magnetron. This result is consistent with simulations, and with the negative mass property in the inverted magnetron configuration. Inclusion of relativistic effects and electromagnetic effects does not qualitatively change these conclusions.
7. Financing options to develop non-conventional reserves
International Nuclear Information System (INIS)
Tricoli, C.
1997-01-01
The economics of non-conventional natural gas reserves such as coalbed methane, gas shales and tight gas were discussed, with special reference to financing options to develop such reserves. Before 1992, tax credits were used to stimulate the development of non-conventional gas. The requirements for section 29 tax credits, the objectives of investors and producers, and the methods used to monetize section 29 tax credits, such as public royalty trusts, partnership structures, and up-front payment mechanisms were described. The capital gains implications of gas sales were also reviewed. It was noted that in the absence of tax credits, financing the development of non-conventional reserves must undergo the same economic scrutiny as any other oil and gas project
8. Comparing the force ripple during asynchronous and conventional stimulation.
Science.gov (United States)
Downey, Ryan J; Tate, Mark; Kawai, Hiroyuki; Dixon, Warren E
2014-10-01
Asynchronous stimulation has been shown to reduce fatigue during electrical stimulation; however, it may also exhibit a force ripple. We quantified the ripple during asynchronous and conventional single-channel transcutaneous stimulation across a range of stimulation frequencies. The ripple was measured during 5 asynchronous stimulation protocols, 2 conventional stimulation protocols, and 3 volitional contractions in 12 healthy individuals. Conventional 40 Hz and asynchronous 16 Hz stimulation were found to induce contractions that were as smooth as volitional contractions. Asynchronous 8, 10, and 12 Hz stimulation induced contractions with significant ripple. Lower stimulation frequencies can reduce fatigue; however, they may also lead to increased ripple. Future efforts should study the relationship between force ripple and the smoothness of the evoked movements in addition to the relationship between stimulation frequency and NMES-induced fatigue to elucidate an optimal stimulation frequency for asynchronous stimulation. © 2014 Wiley Periodicals, Inc.
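Force ripple of the kind quantified in this record is often expressed relative to the mean force. A minimal sketch; the peak-to-peak/mean definition and the sample traces are illustrative assumptions, not the authors' exact metric:

```python
def force_ripple(samples):
    """Ripple of a steady-state force trace as a fraction of its mean:
    (max - min) / mean. Lower values indicate a smoother contraction."""
    mean = sum(samples) / len(samples)
    return (max(samples) - min(samples)) / mean

# Hypothetical force traces (arbitrary units)
smooth = [9.9, 10.0, 10.1, 10.0, 9.95]   # e.g. a 40 Hz-like smooth contraction
rippled = [8.0, 12.0, 9.0, 11.0, 10.0]   # e.g. a low-frequency rippled contraction
```

Comparing such a metric across stimulation frequencies is one way to operationalize the record's trade-off between fatigue reduction and ripple.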
9. Convention on Nuclear Safety. Second National Report, October 2001
International Nuclear Information System (INIS)
2001-01-01
The present document is the second Spanish national report prepared in order to comply with the obligations deriving from the Convention on Nuclear Safety, done in Vienna on 20th September 1994. The Convention was signed by Spain on 15th October 1994 and ratified by way of an instrument issued by the Ministry of Foreign Affairs, signed by H. M. the King on 19th June 1995. The Convention, which entered into force on 24th October 1996 following ratification by the minimum number of countries set out in articles 20, 21 and 22, now includes 51 countries and Euratom, in addition to Spain. The first review meeting, organised in accordance with chapter 3 of the Convention, was held in Vienna in April 1999. Spain was represented by the CSN, the State organisation solely responsible for nuclear safety, both in the drawing up of the national report and in the meeting held between the parties. In accordance with article 21, the second review meeting has been scheduled for April 2002, also in Vienna. At the review meetings, the countries party to the Convention review the national reports required by article 5; Spain submitted its first national report in September 1998. The present document is an update of that first report, to be submitted by 15th October 2001, as agreed during the first review meeting. This report will be reviewed by the interested countries, which will forward their comments and questions. In April 2002, the Spanish report and the questions received will be subjected to the review process contemplated by the Convention, along with the reports submitted by the other countries.
10. National report of Brazil: nuclear safety convention - September 1998
International Nuclear Information System (INIS)
1998-09-01
This National Report was prepared by a group composed of representatives of the various Brazilian organizations with responsibilities in the field of nuclear safety, with the aim of fulfilling the obligations of the Convention on Nuclear Safety. The Report contains a description of the Brazilian policy and programme on the safety of nuclear installations, and an article-by-article description of the measures Brazil is undertaking in order to implement the obligations described in the Convention. The last chapter describes plans and future activities to further enhance the safety of nuclear installations in Brazil.
11. Use of non-conventional energy sources for power generation
International Nuclear Information System (INIS)
Umapathaiah, R.; Sharma, N.D.
1999-01-01
India, being a developing country, cannot afford to meet its power and energy demand from conventional sources alone. Power generation can be augmented by using non-conventional energy sources. Sufficient importance must be given to the recovery of energy from industrial and urban waste. Solar heating systems must be introduced in the industrial and domestic sectors. Solar photovoltaics, biogas plants, and biomass-based gasifier systems must also be given a sufficient place in the energy sector. More thrust has to be given to power generation using sugar cane, which is a perennial source.
13. Nuclear damage under the 1997 protocol: conventional thinking?
International Nuclear Information System (INIS)
Warren, G.
2000-01-01
This communication expresses the critical point of view of nuclear insurers on the international civil liability system and tackles questions about the current revision of this system. After examining the nature of nuclear risk from the insurance point of view and recalling the objective of the nuclear conventions in this context, the author expresses the opinion that the compensation mechanisms provided by these conventions may be suitable for a limited nuclear accident but would be insufficient to face the consequences of a serious nuclear accident. (N.C.)
14. U.S. Perspectives on the Joint Convention
International Nuclear Information System (INIS)
Strosnider, J.; Federline, M.; Camper, L.; Abu-Eid, R.; Gnugnoli, G.; Gorn, J.; Bubar, P.; Tonkay, D.
2006-01-01
The Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management (Joint Convention) is an international convention, under the auspices of the International Atomic Energy Agency (IAEA). It is a companion to a suite of international conventions on nuclear safety and physical security, which serve to promote a global culture for the safe use of radioactive materials. Although the U.S. was the first nation to sign the Joint Convention on September 29, 1997, the ratification process was a challenging experience for the U.S., in the face of legislative priorities dominated by concerns for national security and threats from terrorism after September 11, 2001. Notwithstanding these prevailing circumstances, the U.S. ratified the Joint Convention in 2003, just prior to the First Review Meeting of the Contracting Parties, and participated fully therein. For the United States, participation as a Contracting Party provides many benefits. These range from working with other Parties to harmonize international approaches to achieve strong and effective nuclear safety programs on a global scale, to stimulating initiatives to improve safety systems within our own domestic programs, to learning about technical innovations by other Parties that can be useful to U.S. licensees, utilities, and industry in managing safety and its associated costs in our waste management activities. The Joint Convention process also provides opportunities to identify future areas of bilateral and multilateral technical and regulatory cooperation with other Parties, as well as an opportunity for U.S. vendors and suppliers to broaden their market to include foreign clients for safety improvement equipment and services. The Joint Convention is consistent with U.S. foreign policy considerations to support, as a priority, the strengthening of the worldwide safety culture in the use of nuclear energy. Because of its many benefits, we believe it is important to take
15. World Trade Organization, ILO conventions, and workers' compensation.
Science.gov (United States)
2005-01-01
The World Trade Organization, the World Bank, and the International Monetary Fund can assist in the implementation of ILO Conventions relating to occupational safety and health in developing countries. Most countries that seek to trade globally receive permission to do so from the WTO. If the WTO required member countries to accept the core ILO Conventions relating to occupational safety and health and workers' compensation, it could accomplish something that has eluded international organizations for decades. International workers' compensation standards are seldom discussed, but may at this time be feasible. Acceptance of a minimum workers' compensation insurance system could be a requirement imposed on applicant nations by WTO member states.
16. An alternative to conventional babbitt metal-lined generator pads
Energy Technology Data Exchange (ETDEWEB)
Puuska, H. [Imatra Hydroelectric Power Plant (Finland)
1996-08-01
The generator refurbishment of the Imatra Hydroelectric Power Plant Unit 1 in Finland is described. The generator work called for installation of a new cooling system for the generator thrust bearing. The considerations leading to the decision to replace the conventional babbitt metal-lined pads with elastic metal-plastic coated thrust bearing pads, and the installation of the new pads, are outlined in the article. Results of the trial run are summarized; the end temperature of the unit was more than 20 °C lower than for units equipped with conventional babbitted bearings.
17. Analysis of Drying Process Quality in Conventional Dry-Kilns
OpenAIRE
Sedlar Tomislav; Pervan Stjepan
2010-01-01
This paper presents testing results of drying quality in a conventional dry kiln. Testing is based on a new methodology that shows how successfully the drying process is managed, by analyzing the quality of the drying process in a conventional dry kiln using a scientifically improved version of the check list in everyday practical applications. A company that specializes in lamel and classic parquet production was chosen so as to verify the new testing methodology. A total of 56 m³ of...
18. 48th Annual Convention of Computer Society of India
CERN Document Server
2014-01-01
This volume contains 85 papers presented at CSI 2013: 48th Annual Convention of Computer Society of India with the theme “ICT and Critical Infrastructure”. The convention was held during 13th –15th December 2013 at Hotel Novotel Varun Beach, Visakhapatnam and hosted by Computer Society of India, Vishakhapatnam Chapter in association with Vishakhapatnam Steel Plant, the flagship company of RINL, India. This volume contains papers mainly focused on Data Mining, Data Engineering and Image Processing, Software Engineering and Bio-Informatics, Network Security, Digital Forensics and Cyber Crime, Internet and Multimedia Applications and E-Governance Applications.
19. Comparison Of Conventional And Recycled “Green” Office Paper
Directory of Open Access Journals (Sweden)
Klemen Možina
2011-05-01
Full Text Available To confront the market need, we have to find an alternative in response to the enormous necessity and application of office paper. Therefore, one way of dealing with the problem is to replace, or at least decrease, the use of paper made entirely from primary components, mainly wood fibres (deciduous and conifer). We analysed mechanical, optical, structural and microscopic properties. Experiments were performed on three conventional and three recycled office papers available on the market. Results obtained from the measurements confirm the presumption that the mechanical and surface properties of recycled office paper can be collated, and that they differ from those of conventional office paper.
20. Discrete event simulation versus conventional system reliability analysis approaches
DEFF Research Database (Denmark)
Kozine, Igor
2010-01-01
Discrete Event Simulation (DES) environments are rapidly developing and appear to be promising tools for building reliability and risk analysis models of safety-critical systems and human operators. If properly developed, they are an alternative to the conventional human reliability analysis models … and systems analysis methods such as fault and event trees and Bayesian networks. As one part, the paper describes briefly the author's experience in applying DES models to the analysis of safety-critical systems in different domains. The other part of the paper is devoted to comparing conventional approaches …
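As a hedged illustration of what such a DES model does (the rates and horizon below are invented for the sketch, not taken from the paper), a minimal Monte Carlo simulation of one repairable component can estimate availability and be checked against the analytic steady-state value MTBF/(MTBF+MTTR):

```python
import random

def simulate_availability(mtbf, mttr, horizon, seed=0):
    """Discrete-event simulation of a single repairable component.

    Alternates exponentially distributed up and down intervals and
    returns the fraction of the horizon spent in the 'up' state.
    """
    rng = random.Random(seed)
    t, up_time = 0.0, 0.0
    while t < horizon:
        up = rng.expovariate(1.0 / mtbf)        # time to next failure
        up_time += min(up, horizon - t)         # clip at the horizon
        t += up
        if t >= horizon:
            break
        t += rng.expovariate(1.0 / mttr)        # repair duration
    return up_time / horizon

# For long horizons the estimate should approach the analytic
# steady-state availability MTBF / (MTBF + MTTR).
est = simulate_availability(mtbf=100.0, mttr=10.0, horizon=1e7)
analytic = 100.0 / 110.0
```

A fault-tree treatment of this single component would give the analytic figure directly; simulation pays off once dependencies, repair-crew queues, or operator behaviour make closed forms intractable, which is the trade-off the paper compares.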
1. Distinguishing and diagnosing contemporary and conventional features of dental erosion.
Science.gov (United States)
Bassiouny, Mohamed A
2014-01-01
The vast number and variety of erosion lesions encountered today require reconsideration of the traditional definition. Dental erosion associated with modern dietary habits can exhibit unique features that symbolize a departure from the decades-old conventional image known as tooth surface loss. The extent and diversity of contemporary erosion lesions often cause conflicting diagnoses. Specific examples of these features are presented in this article. The etiologies, genesis, course of development, and characteristics of these erosion lesions are discussed. Contemporary and conventional erosion lesions are distinguished from similar defects, such as mechanically induced wear, carious lesions, and dental fluorosis, which affect the human dentition.
2. U.S. Continuing Involvement with the Joint Convention
International Nuclear Information System (INIS)
Stewart, L.; Tonkay, D.; Regnier, E.; Schultheisz, D.; Gnugnoli, G.
2009-01-01
The Joint Convention on the Safety of Spent Fuel Management and on the Safety of Radioactive Waste Management (Joint Convention) is an international convention, under the auspices of the International Atomic Energy Agency (IAEA). It is a companion to a suite of international conventions on nuclear safety and physical security, which serve to promote a global culture for the safe use of radioactive materials. The Joint Convention is an official international treaty, and as such, there are obligations on the part of the United States. Those nations having ratified the Joint Convention are designated as 'Contracting Parties.' Nations that are not IAEA Member States may also become Contracting Parties to the Joint Convention, although none has done so. The primary obligations are threefold. The first is to prepare a national report, which addresses the national safety program in radioactive waste management, spent nuclear fuel management, and disused sealed sources. As the U.S. prepares a national report, other Contracting Parties to the Joint Convention also prepare their national reports, which leads to the second obligation on the part of the Contracting Parties. This is the obligation to review other countries' national reports. The last specific obligation is to actively participate in the triennial peer review meeting, referred to as the Review Meeting of the Contracting Parties. The U.S. ratified the Joint Convention in 2003, just prior to the First Review Meeting of the Contracting Parties, and has participated fully therein in the ensuing Review Meetings. Because of the benefits in active participation, it is important for the U.S. to maintain its leadership role in promoting its ratification in the global setting, as well as in more focused regions. Because of the important benefits associated with active participation, the U.S. has strongly supported a Regional Conference Initiative outreach program to increase membership. To launch the Initiative, the U
3. Vienna Convention and Its Revision and Convention on Supplementary Compensation for Nuclear Damage of September 12, 1997
International Nuclear Information System (INIS)
Soljan, V.
1998-01-01
After Chernobyl, the perception of a common interest in modernizing the international regime that regulates various aspects of nuclear energy has been evident among states with nuclear power plants as well as those likely to be involved in or affected by a nuclear incident. The adoption of the Protocol Amending the Vienna Convention on Civil Liability for Nuclear Damage, 1963 and of the Convention on Supplementary Compensation for Nuclear Damage in September 1997 represents an important part of what has been achieved since 1986. This article gives a brief survey of the background of the process of modernizing the international regime of liability for nuclear damage and examines the solutions contained in the provisions of the conventions. (author)
4. IAEA supports regional seas conventions and action plans
International Nuclear Information System (INIS)
2000-01-01
The document informs about the 3rd Global Meeting of Regional Seas Conventions and Action Plans held in Monaco in November 2000 at the IAEA's Marine Environmental Laboratory (IAEA-MEL). The meeting assembled a number of marine environmental experts from several UN bodies to reinforce activities to protect the marine environment
5. Conventions and nomenclature for double diffusion encoding NMR and MRI
DEFF Research Database (Denmark)
Shemesh, Noam; Jespersen, Sune N; Alexander, Daniel C
2015-01-01
… such as double diffusion encoding (DDE) NMR and MRI, may provide novel quantifiable metrics that are less easily inferred from conventional diffusion acquisitions. Despite the growing interest in the topic, the terminology for the pulse sequences, their parameters, and the metrics that can be derived from them …
6. Selections from the ABC 2011 Annual Convention, Montreal, Canada
Science.gov (United States)
Whalen, D. Joel; Andersen, Ken; Campbell, Gloria; Crenshaw, Cheri; Cross, Geoffrey A.; Grinols, Anne Bradstreet; Hildebrand, John; Newman, Amy; Ortiz, Lorelei A.; Paulson, Edward; Phillabaum, Melinda; Powell, Elizabeth A.; Sloan, Ryan
2012-01-01
The 12 Favorite Assignments featured in this article were presented at the 2011 Annual Convention of the Association for Business Communication (ABC), Montreal, Canada. A variety of learning objectives are featured: delivering bad news, handling difficult people, persuasion, reporting financial analysis, electronic media, face-to-face…
7. Tool for efficient intermodulation analysis using conventional HB packages
OpenAIRE
Vannini, G.; Filicori, F.; Traverso, P.
1999-01-01
A simple and efficient approach is proposed for the intermodulation analysis of nonlinear microwave circuits. The algorithm, which is based on a very mild assumption about the frequency response of the linear part of the circuit, allows for a reduction in computing time and memory requirements. Moreover, it can be easily implemented using any conventional tool for harmonic-balance circuit analysis.
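For context on what an intermodulation analysis targets (the tone frequencies below are invented, not taken from the paper): a two-tone test of a nonlinearity produces odd-order products at combination frequencies near the carriers, which a sketch can simply enumerate:

```python
def im_products(f1, f2, order=3):
    """Odd-order two-tone intermodulation frequencies in Hz.

    For order 3 these are 2*f1 - f2 and 2*f2 - f1; for order 5,
    3*f1 - 2*f2 and 3*f2 - 2*f1.  They fall close to the carriers,
    which is why they dominate intermodulation analysis.
    """
    m = (order + 1) // 2                      # e.g. order 3 -> m = 2
    return sorted({abs(m * f1 - (m - 1) * f2),
                   abs(m * f2 - (m - 1) * f1)})

# Example: tones at 900 MHz and 910 MHz give IM3 at 890 MHz and 920 MHz.
im3 = im_products(900e6, 910e6)
```

The harmonic-balance tools the abstract refers to solve for the spectrum at exactly these mixing frequencies; the snippet only locates them.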
8. Conventional external beam radiotherapy for central nervous system malignancies
International Nuclear Information System (INIS)
Halperin, E.C.; Burger, P.C.
1985-01-01
Fractionated external beam photon radiotherapy is an important component of the clinical management of malignant disease of the central nervous system. The practicing neurologist or neurosurgeon frequently relies on the consultative and treatment skills of a radiotherapist. This article provides a review for the nonradiotherapist of the place of conventional external beam radiotherapy in neuro-oncology. 23 references
9. The Transmission of Monetary Policy through Conventional and Islamic Banks
NARCIS (Netherlands)
Zaheer, S.; Ongena, S.; van Wijnbergen, S.J.G.
2011-01-01
We investigate the differences in banks’ responses to monetary policy shocks across bank size, liquidity, and type, i.e., conventional versus Islamic, in Pakistan between 2002:II and 2010:I. We find that following a monetary contraction, small banks with liquid balance sheets cut their lending less
10. Coherent states for oscillators of non-conventional statistics
International Nuclear Information System (INIS)
Dao Vong Duc; Nguyen Ba An
1998-12-01
In this work we consider systematically the concept of coherent states for oscillators of non-conventional statistics - parabose oscillator, infinite statistics oscillator and generalised q-deformed oscillator. The expressions for the quadrature variances and particle number distribution are derived and displayed graphically. The obtained results show drastic changes when going from one statistics to another. (author)
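As a reminder of the construction the abstract generalizes (conventions for the deformed bracket vary between papers, so this is one common choice, not necessarily the authors'), a q-deformed coherent state is the eigenstate of the deformed annihilation operator:

```latex
a\,\lvert z\rangle = z\,\lvert z\rangle ,
\qquad
\lvert z\rangle \;\propto\; \sum_{n=0}^{\infty}
  \frac{z^{n}}{\sqrt{[n]_q!}}\;\lvert n\rangle ,
\qquad
[n]_q = \frac{1-q^{n}}{1-q},
\quad
[n]_q! = [1]_q\,[2]_q \cdots [n]_q .
```

Setting q → 1 recovers the ordinary bosonic coherent state, since [n]_q → n; the drastic statistics-dependent changes the abstract mentions enter through the deformed factorial in the number distribution.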
11. Vienna convention on civil liability for nuclear damage
International Nuclear Information System (INIS)
1996-01-01
The Vienna Convention on Civil Liability for Nuclear Damage was adopted on 21 May 1963 and was opened for signature on the same day. It entered into force on 12 November 1977, i.e. three months after the date of deposit with the Director General of the fifth instrument of ratification, in accordance with Article 23
12. The Use of Hyper-Reference and Conventional Dictionaries.
Science.gov (United States)
Aust, Ronald; And Others
1993-01-01
Describes a study of 80 undergraduate foreign language learners that compared the use of a hyper-reference source incorporating an electronic dictionary and a conventional paper dictionary. Measures of consultation frequency, study time, efficiency, and comprehension are examined; bilingual and monolingual dictionary use is compared; and further…
13. Susceptibility of green and conventional building materials to microbial growth.
Science.gov (United States)
Mensah-Attipoe, J; Reponen, T; Salmela, A; Veijalainen, A-M; Pasanen, P
2015-06-01
Green building materials are becoming more popular. However, little is known about their ability to support or limit microbial growth. The growth of fungi was evaluated on five building materials: two green and two conventional building materials, with wood as a positive control. The materials were inoculated with Aspergillus versicolor, Cladosporium cladosporioides and Penicillium brevicompactum, in the absence and presence of house dust. Microbial growth was assessed at four different time points by cultivation and by determining fungal biomass using the N-acetylhexosaminidase (NAHA) enzyme assay. No clear differences were seen between green and conventional building materials in their susceptibility to support microbial growth. The presence of dust, an external source of nutrients, promoted growth of all the fungal species similarly on green and conventional materials. The results also showed a correlation coefficient ranging from 0.81 to 0.88 between NAHA activity and culturable counts. The results suggest that the growth of microbes on a material surface depends on the availability of organic matter rather than the classification of the material as green or conventional. NAHA activity and culturability correlated well, indicating that the two methods used in the experiments gave similar trends for the growth of fungi on material surfaces. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
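The agreement the abstract reports between NAHA activity and culturable counts (r = 0.81-0.88) is a plain Pearson correlation; here is a minimal sketch with invented paired measurements (none of these numbers come from the study):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# Hypothetical paired measurements: NAHA enzyme activity vs. culturable counts.
naha = [0.5, 1.1, 1.9, 2.4, 3.2, 3.9]
cfu = [2.1, 2.8, 3.5, 3.6, 4.4, 5.0]
r = pearson(naha, cfu)   # close to +1 when the two methods track each other
```

A coefficient in the 0.8-0.9 range, as reported, means the enzyme assay and cultivation rank the samples almost identically.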
14. Why doesn't conventional IVF work in the horse?
NARCIS (Netherlands)
Leemans, Bart; Gadella, Bart M; Stout, Tom Arjun Edgar; De Schauwer, Catharina; Nelis, Hilde Maria; Hoogewijs, Maarten; Van Soom, Ann
2016-01-01
In contrast to man and many other mammalian species, conventional in vitro fertilization (IVF) with horse gametes is not reliably successful. The apparent inability of stallion spermatozoa to penetrate the zona pellucida in vitro is most likely due to incomplete activation of spermatozoa
15. Construction Management for Conventional Facilities of Proton Accelerator
International Nuclear Information System (INIS)
Kim, Jun Yeon; Cho, Jang Hyung; Cho, Sung Won
2013-01-01
The Proton Engineering Frontier Project aims to build a 100 MeV, 20 mA linear proton accelerator, a national facility for NT, BT, IT and future technologies that is expected to boost national industrial competitiveness. Within this R and D effort, Construction Management is in charge of supportive works such as site selection, architecture and engineering of conventional facilities, and overall construction management. The major goals of this work are as follows: first, architecture and engineering of conventional facilities; second, construction management, supervision and inspection of the construction of conventional facilities; lastly, cooperation with the project host organization, Gyeongju city, in adjusting technically interrelated work during construction. In this research, we completed the basic, detailed, and field-change designs of the conventional facilities. Acquisition of the necessary construction and atomic licenses, radiation safety analysis, site improvement, and access road construction were successfully completed as well. We also participated in work related to the project host, as follows: project host organization and site selection, construction technical work for the project host organization and procedure management, etc. Consequently, we fulfilled all of the goals set at the beginning of this construction project and thereby contributed to installing and running PEFP's 100 MeV, 20 mA linear accelerator
16. Clearance of building structures for conventional non-nuclear reuse
International Nuclear Information System (INIS)
Buss, K.; Boehringer, S.
1998-01-01
Using the example of a fuel assembly plant, the strategy of control measurements on building surfaces that are to be conventionally reused after clearance is considered. Based on the given clearance levels, the measuring methods used are described, especially with regard to possibly covered or penetrated uranium contamination. The possibility of using in-situ γ-spectroscopy is discussed. (orig.) [de
17. The transmission of monetary policy through conventional and Islamic banks
NARCIS (Netherlands)
Zaheer, S.; Ongena, S.; van Wijnbergen, S.J.G.
2013-01-01
We investigate the differences in banks’ responses to monetary policy shocks across bank size, liquidity, and type—i.e., conventional versus Islamic—in Pakistan between 2002:Q2 and 2010:Q1. We find that following a monetary contraction, small banks with liquid balance sheets cut their lending less
18. The transmission of monetary policy through conventional and Islamic banks
NARCIS (Netherlands)
Zaheer, S.; Ongena, S.; van Wijnbergen, S.
2012-01-01
We investigate the differences in banks' responses to monetary policy shocks across bank size, liquidity, and type, i.e., conventional versus Islamic, in Pakistan between 2002:II and 2010:I. We find that following a monetary contraction, small banks with liquid balance sheets cut their lending less
19. Parosteal lipoma - a case report: conventional radiology, CT and MRI
International Nuclear Information System (INIS)
Albuquerque, Silvio Cavalcanti; Nascimento, Edilene Cristina do; Silva, Ivone Martins da
1996-01-01
The authors report a case of parosteal lipoma, a rare benign tumor associated with exostosis, in the proximal radius. The diagnostic aspects in conventional radiology, computed tomography and magnetic resonance imaging are presented, as well as a review of the medical literature about the case. (author). 7 refs., 3 figs
20. Effects Of Using Non-Conventional Feedstuffs On The Productivity ...
African Journals Online (AJOL)
This study examines the effects of privately producing layers mash by including such non-conventional feedstuffs as cassava, brewers dried grain, etc. on the productivity, cost and profit of poultry (egg) farms. Primary data was collected from three categories of farms – 12 "conventional feedstuff users" (CFU), ...
1. Isospectrality of conventional and new extended potentials, second ...
2015-11-27
Home; Journals; Pramana – Journal of Physics; Volume 73; Issue 2. Isospectrality of conventional and new extended potentials, ... Proceedings of the International Workshop/Conference on Computational Condensed Matter Physics and Materials Science (IWCCMP-2015). Posted on November 27, 2015. Guest Editors: ...
2. Assessment of Conventional Teaching Procedures: Implications for Gifted Learners
Science.gov (United States)
Alenizi, Mogbel Aid K.
2016-01-01
The present research aims to assess conventional teaching procedures in the development of the mathematical skills of students with learning difficulties. The study group was made up of all the children with academic learning disorders in KSA. The research questions have been examined using the averages and the standard deviations of the…
3. Preserving Musicality through Pictures: A Linguistic Pathway to Conventional Notation
Science.gov (United States)
Nordquist, Alice L.
2016-01-01
The natural musicality so often present in children's singing can begin to fade as the focus of a lesson shifts to the process of reading and writing conventional notation symbols. Approaching the study of music from a linguistic perspective preserves the pace and flow that is inherent in spoken language and song. SongWorks teaching practices…
4. 40. annual convention 1990 of the Austrian physical society
International Nuclear Information System (INIS)
Anon.
1990-01-01
Titles and abstracts of the 1990 Convention of the Austrian Physical Society, held 17-21 September 1990 at Salzburg, Austria, are given. The topical sections are: 1. Atomic, Molecular and Plasma Physics; 2. Solid State Physics; 3. Polymer Physics; 4. Nuclear and Particle Physics; 5. Medical Physics and Biophysics. There are altogether 193 contributions, 61 thereof of INIS interest
5. The struggle for textual conventions in a language support programme
African Journals Online (AJOL)
In this article, the writer explores the experience of a group of South African learners with regard to a language support course that aims to facilitate their struggle to master English textual conventions in discipline specific contexts. The academic context of this study was that of a nursing science degree programme where ...
6. Selections from the ABC 2012 Annual Convention, Honolulu, Hawaii
Science.gov (United States)
Whalen, D. Joel
2013-01-01
The 13 Favorite Assignments featured here were presented at the 2012 Association for Business Communication (ABC) Annual Convention, Honolulu, Hawaii. A variety of learning objectives are featured, including the following: enhancing resume's visual impact, interpersonal skills, social media, team building, web design, community service projects,…
7. 46 CFR 91.60-40 - Duration of Convention certificates.
Science.gov (United States)
2010-10-01
... VESSELS INSPECTION AND CERTIFICATION Certificates Under International Convention for Safety of Life at Sea... period of not more than 60 months. (1) A Cargo Ship Safety Construction Certificate. (2) A Cargo Ship Safety Equipment Certificate. (3) A Safety Management Certificate. (4) A Cargo Ship Safety Radio...
8. Non-Conventional Methodologies in the Synthesis of 1-Indanones
Directory of Open Access Journals (Sweden)
Manuela Oliverio
2014-04-01
Full Text Available 1-Indanones have been successfully prepared by means of three different non-conventional techniques, namely microwaves, high-intensity ultrasound and a Q-tube™ reactor. A library of differently substituted 1-indanones has been prepared via one-pot intramolecular Friedel-Crafts acylation and their efficiency and “greenness” have been compared.
9. Comparative Cost/Benefit of Alternative/Conventional Feedstuff in ...
African Journals Online (AJOL)
benefit of the use of conventional (corn/soya bean based) and alternative (less of corn and soya bean substituted with agro-allied and industrial by-products) feedstuffs. Completely randomized design was used and the experiment conducted for a ...
10. A comparison of EEG spectral entropy with conventional quantitative ...
African Journals Online (AJOL)
A comparison of EEG spectral entropy with conventional quantitative EEG at varying depths of sevoflurane anaesthesia. PR Bartel, FJ Smith, PJ Becker. Abstract. Background and Aim: Recently an electroencephalographic (EEG) spectral entropy module (M-ENTROPY) for an anaesthetic monitor has become commercially ...
11. Conventional versus virtual radiographs of the injured pelvis and acetabulum
Energy Technology Data Exchange (ETDEWEB)
Bishop, Julius A.; Rao, Allison J.; Pouliot, Michael A.; Bellino, Michael [Stanford University School of Medicine, Department of Orthopaedic Surgery, Stanford, CA (United States); Beaulieu, Christopher [Stanford University School of Medicine, Department of Radiology, Stanford, CA (United States)
2015-09-15
Evaluation of the fractured pelvis or acetabulum requires both standard radiographic evaluation as well as computed tomography (CT) imaging. The standard anterior-posterior (AP), Judet, and inlet and outlet views can now be simulated using data acquired during CT, decreasing patient discomfort, radiation exposure, and cost to the healthcare system. The purpose of this study is to compare the image quality of conventional radiographic views of the traumatized pelvis to virtual radiographs created from pelvic CT scans. Five patients with acetabular fractures and ten patients with pelvic ring injuries were identified using the orthopedic trauma database at our institution. These fractures were evaluated with both conventional radiographs as well as virtual radiographs generated from a CT scan. A web-based survey was created to query overall image quality and visibility of relevant anatomic structures. This survey was then administered to members of the Orthopaedic Trauma Association (OTA). Ninety-seven surgeons completed the acetabular fracture survey and 87 completed the pelvic fracture survey. Overall image quality was judged to be statistically superior for the virtual as compared to conventional images for acetabular fractures (3.15 vs. 2.98, p = 0.02), as well as pelvic ring injuries (2.21 vs. 1.45, p = 0.0001). Visibility ratings for each anatomic landmark were statistically superior with virtual images as well. Virtual radiographs of pelvic and acetabular fractures offer superior image quality, improved comfort, decreased radiation exposure, and a more cost-effective alternative to conventional radiographs. (orig.)
12. Antioxidant activity in selected Slovenian organic and conventional crops
Directory of Open Access Journals (Sweden)
Manca KNAP
2015-12-01
Full Text Available The demand for organically produced food is increasing. There is a widespread belief that organic food is substantially healthier and safer than conventional food. According to the literature, organic food is free of phytopharmaceutical residues and contains less nitrate and more antioxidants. The aim of the present study was to verify whether there are any differences in antioxidant activity between selected Slovenian organic and conventional crops. The DPPH (2,2-diphenyl-1-picrylhydrazyl) method was used to determine the antioxidant activity of 16 samples from organic and conventional farms. The same varieties of crops were analysed. The DPPH method was employed to measure the antioxidant activity of polar antioxidants (AAp) and the antioxidant activity of the fraction of antioxidants soluble in ethyl acetate (EA AA). Descriptive statistics and analysis of variance were used to describe differences between farming systems. Estimated differences between farming practices for the same crop were mostly not statistically significant, except for the AAp of basil and beetroot, where statistically significantly higher values were estimated for conventional crops. For the EA AA in broccoli, cucumber, rocket and cherry, statistically significantly higher values were estimated for organic production.
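DPPH assays of this kind are conventionally reported as a percent of radical scavenging relative to a control; a minimal sketch of that standard calculation (the absorbance readings below are invented, not from the study):

```python
def dpph_scavenging(a_control, a_sample):
    """Percent DPPH radical scavenging: 100 * (Ac - As) / Ac.

    a_control: absorbance (~517 nm) of the DPPH solution without extract.
    a_sample:  absorbance after reaction with the crop extract.
    """
    return 100.0 * (a_control - a_sample) / a_control

# Hypothetical readings: control 0.80, sample 0.35 -> about 56 % activity.
activity = dpph_scavenging(0.80, 0.35)
```

A higher percentage means the extract quenched more of the purple DPPH radical, i.e., stronger antioxidant activity, which is the quantity compared across the organic and conventional samples.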
13. Conventional and serological detection of Fasciolosis in ruminants ...
African Journals Online (AJOL)
The study was conducted to determine the seasonal prevalence of fasciolosis and to compare its conventional diagnosis with serological identification in ruminants slaughtered at Maiduguri abattoir, northeastern Nigeria. Nine hundred samples each of faeces and blood, that is 300 each from cattle, sheep and goats, were ...
14. Vachellia karroo leaf meal: a promising non-conventional feed ...
African Journals Online (AJOL)
Vachellia karroo leaf meal: a promising non-conventional feed resource for improving goat production in low-input farming systems of Southern Africa. ... Vachellia karroo possesses desirable fatty acid profiles, and high protein and mineral contents that can improve animal performance. Presently, the use of V. karroo for ...
NARCIS (Netherlands)
Gerards, J.H.; Glas, L.R.
2017-01-01
The numerous reforms to the Convention system of the past two decades have unquestionably had an effect on applicants’ means to access justice in the system. It is, however, open to question how these changes should be evaluated: with reference to the individual right to petition, or with reference
16. National report of Brazil on nuclear safety convention - introduction
International Nuclear Information System (INIS)
1998-01-01
This document was prepared for fulfilling the Brazilian obligations under the Convention on Nuclear Safety. Chapter 1 presents some historical aspects of the Brazilian nuclear policy, targets to be attained for increasing the nuclear energy contribution for the national production of electric energy
17. Training Needs Analysis: Weaknesses in the Conventional Approach.
Science.gov (United States)
Leat, Michael James; Lovel, Murray Jack
1997-01-01
Identification of the training and development needs of administrative support staff is not aided by conventional performance appraisal, which measures summary or comparative effectiveness. Meaningful diagnostic evaluation integrates three levels of analysis (organization, task, and individual), using behavioral expectation scales. (SK)
18. A prompt start: Implementing the framework convention on climate change
International Nuclear Information System (INIS)
Chayes, A.; Skolnikoff, E.B.; Victor, D.G.
1992-01-01
A Framework Convention on Climate Change is under active negotiation in the United Nations, with the expectation that it will be ready for signature at the Rio Conference this June. Under the most optimistic projections, a Convention will not come into force and be an effective instrument for months, probably years. In recognition of the several institutional tasks that will be of crucial importance whatever the detailed content of the Convention, a small group of high officials of international organizations involved in the negotiations was convened at the Rockefeller Foundation's Conference Center at Bellagio in January. The discussions at Bellagio on the need for a Prompt Start on these institutional tasks benefitted from earlier meetings, at Harvard in March and at Bermuda in May 1991, that the co-organizers convened to discuss these and related aspects of the negotiations on a Climate Convention. Those meetings were attended by members of the academic community, officials from the United Nations, and representatives of governments involved in the negotiations
19. National Ignition Facility system design requirements conventional facilities SDR001
International Nuclear Information System (INIS)
Hands, J.
1996-01-01
This System Design Requirements (SDR) document specifies the functions to be performed and the minimum design requirements for the National Ignition Facility (NIF) site infrastructure and conventional facilities. These consist of the physical site and buildings necessary to house the laser, target chamber, target preparation areas, optics support and ancillary functions
20. Single-incision laparoscopic surgery and conventional laparoscopic ...
African Journals Online (AJOL)
Indications for surgery included grades II-III varicocele or ipsilateral testicular hypotrophy. The SIL-V procedure was performed in 44 patients with roticulating and conventional 5 mm instruments. Testicular vessels were isolated “en bloc,” clipped and cut. Operating time, visual analogue scale and post-operative results were ...
https://discuss.codechef.com/questions/8532/god-code
# [closed] God Code.....
A code which can solve all problems of a contest... Guys, this is a very interesting code for this contest. The code is:

```c
#include <stdio.h>

int main() {
    int d, T = 1;
    while (scanf("%d", &d) != EOF) {
        printf("Case #%d: \n", T);
        T++;
    }
    return 0;
}
```

By using this code you can solve all 4 problems of this contest. Please use this, because the problem setters want only a blank output file with "Case #1:", "Case #2:", and so on. Happy coding!

asked 21 Apr '13, 01:27 by upen_jat
### The question has been closed for the following reason "Live Contest, question categorized as unfair means. @admin, delete it asap" by bugkiller 21 Apr '13, 02:00
http://www.codechef.com/CDNB2013 Go there and solve all the problems.

answered 21 Apr '13, 01:29 by upen_jat
https://neurips.cc/Conferences/2019/ScheduleMultitrack?event=13911
|
Timezone: »
Poster
Energy-Inspired Models: Learning with Sampler-Induced Distributions
John Lawson · George Tucker · Bo Dai · Rajesh Ranganath
Wed Dec 11 10:45 AM -- 12:45 PM (PST) @ East Exhibition Hall B + C #120
Energy-based models (EBMs) are powerful probabilistic models, but suffer from intractable sampling and density evaluation due to the partition function. As a result, inference in EBMs relies on approximate sampling algorithms, leading to a mismatch between the model and inference. Motivated by this, we consider the sampler-induced distribution as the model of interest and maximize the likelihood of this model. This yields a class of energy-inspired models (EIMs) that incorporate learned energy functions while still providing exact samples and tractable log-likelihood lower bounds. We describe and evaluate three instantiations of such models based on truncated rejection sampling, self-normalized importance sampling, and Hamiltonian importance sampling. These models outperform or perform comparably to the recently proposed Learned Accept/Reject Sampling algorithm and provide new insights on ranking Noise Contrastive Estimation and Contrastive Predictive Coding. Moreover, EIMs allow us to generalize a recent connection between multi-sample variational lower bounds and auxiliary variable variational inference. We show how recent variational bounds can be unified with EIMs as the variational family.
https://www.physicsforums.com/threads/mathematical-induction.95577/
# Mathematical Induction
1. Oct 19, 2005
### dglee
Show that for every natural number $$n \geq 2$$, the number $$2^{2^n} - 6$$ is a multiple of 10, using mathematical induction.
Okay, I've got no clue how to start this question. Ahhh, is there a series I can use for the $$2^{2^n}$$ term? Well, this stuff really sucks.
Last edited: Oct 19, 2005
2. Oct 19, 2005
### Tide
HINT:
$$2^{2^{n+1}} = \left( 2^{2^n}\right)^2$$
3. Oct 19, 2005
### dglee
Oh wow... that helped a lot. Now I can show that it's inductive, hmm, but how would I show it's a multiple of 10?
hmm maybe if i said 10^2 is 100 so that's a multiple of 10. $$10*n\leq2^{2^n}-6$$ if i proved that.. hmm i wonder if that would be right... $$10*(n+1)\leq2^{2^{n+1}}$$ hmm if i proved that would i have solved the question?
Wow, well, ahaha, thanks a LOT!!! YOU'RE AWESOME. That little hint helped a lot, but I've got no clue if I'm actually doing it right.
Last edited: Oct 19, 2005
4. Oct 19, 2005
For Mathematical Induction, you assume that P_{k} is true, for some k greater than or equal to 2. With this, you can immediately say that 2^(2^k) - 6 is equal to 10a, for some a, where a is a positive integer.
Can you then use this fact, and the hint provided, to prove the desired result for 2^(2^(k+1)) - 6?
5. Oct 19, 2005
### dglee
Hmm, could I say that
$$10(a+x)\leq \left(2^{2^n}\right)^2 - 6$$ where x is some positive number, and show that it is inductive to prove it's a multiple of 10?
so
$$10(a)\leq2^{2^n} - 6$$ then
$$10(a+x)\leq\left(2^{2^n}\right)^2 - 6$$
then show that
$$10(a+x)\leq2^{2^{n+1}} -6$$ ahhh, I'm confused now.. ahh
Last edited: Oct 19, 2005
6. Oct 19, 2005
### Tide
HINT: If M - 6 is a multiple of 10 then M ends in the digit 6! :)
7. Oct 19, 2005
Hmmm... Why are there so many inequalities in your working? From what I know, the only inequality to appear in your solution should be the fact that n is greater than or equal to 2, but this is just a specification, and should not appear in your proof.
8. Oct 19, 2005
Steps in Mathematical Induction
1) Let $$P_{n}$$ be the statement $$2^{2^n}-6$$ is divisible by 10, for n$$\geq$$ 2.
2) Check that the result you want to prove is valid for n=2, so $$P_{2}$$ is true.
3) Assume $$P_{k}$$ is true, for some k $$\geq$$ 2. So, $$2^{2^k}-6$$ = 10a, for some a, which is a positive integer.
4) Using this result, you must somehow prove that $$2^{2^{k+1}}-6$$ is a multiple of 10. How would you go around doing it? Look at the first hint provided and observe... What has been done to the term $$2^{2^n}$$ ? USE BOTH THE RESULT FROM STEP 3 AND THE FIRST HINT
5) Once you have proven step 4, give a conclusion. "Since $$P_{2}$$ is true, and for any k$$\geq$$2, $$P_{k}$$ is true $$\Longrightarrow$$ $$P_{k+1}$$ is true. By Mathematical Induction, $$P_{n}$$ is true for all n$$\geq$$2."
Last edited: Oct 19, 2005
9. Oct 19, 2005
### dglee
Wow thanks a lot for your help. I will try to figure this out. You helped a lot pizzasky and Tide.
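As a quick numerical sanity check on the thread's claim (not part of the original discussion; a sketch using modular arithmetic rather than the induction itself), note that $$2^{2^n} \bmod 10$$ can be computed without ever forming the huge number:

```python
# Check that 2^(2^n) - 6 is a multiple of 10 for small n >= 2.
# pow(2, e, 10) computes 2^e mod 10 without building 2^(2^n) itself.

def remainder(n: int) -> int:
    """Return (2^(2^n) - 6) mod 10."""
    return (pow(2, 2 ** n, 10) - 6) % 10

# The inductive step is visible mod 10: if 2^(2^n) ends in the digit 6,
# then its square 2^(2^(n+1)) also ends in 6, because 6 * 6 = 36.
for n in range(2, 20):
    assert remainder(n) == 0
```

This only spot-checks the statement for small n; the induction argument above is what proves it for all n.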
http://math.stackexchange.com/users/44220/brian?tab=activity
# Brian
# 8 Actions
- Feb 16: awarded Supporter.
- Oct 10: commented on "Prove by induction $\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$ for $n\ge1$": "My algebra is rusty and the intellectual leap to $\frac{(n+1)^2}{4}[n^2+4(n+1)]$ in the first equality escapes me. Can you explain further?"
- Oct 10: revised the same question (deleted 3 characters in body).
- Oct 10: awarded Editor.
- Oct 10: revised the same question (fixed formatting of equalities in second equation).
- Oct 10: commented on the same question: "@MichaelHardy Thanks for pointing out my misused arrow. The post has been edited to fix this."
- Oct 10: awarded Student.
- Oct 10: asked "Prove by induction $\sum_{i=1}^ni^3=\frac{n^2(n+1)^2}{4}$ for $n\ge1$".
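The identity asked about above is easy to spot-check numerically (a quick sketch, independent of the induction proof the question is after):

```python
# Verify sum_{i=1}^n i^3 = n^2 (n+1)^2 / 4 for small n.

def sum_of_cubes(n: int) -> int:
    return sum(i ** 3 for i in range(1, n + 1))

def closed_form(n: int) -> int:
    # n(n+1) is even, so n^2 (n+1)^2 is always divisible by 4
    # and integer division is exact here.
    return n * n * (n + 1) ** 2 // 4

for n in range(1, 100):
    assert sum_of_cubes(n) == closed_form(n)
```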
https://www.physicsforums.com/threads/two-calc-problems.112282/
# Two calc problems
#### cscott
PV = C (Boyle's Law)
At a certain instant, the volume is 480 cm^3, the pressure is 160 kPa, and the pressure is increasing at a rate of 15 kPa/min. At what rate is the volume decreasing at this instant?
----
Find the equations of both lines that pass through the point (2, 3) and are tangent to the parabola y = x^2 + x.
#### benorin
Homework Helper
$$PV=C\Rightarrow\frac{d}{dt}(PV)=\frac{d}{dt}C\Rightarrow \frac{dP}{dt}V+P\frac{dV}{dt}=0$$
then plug-in the known values of P,V, and dP/dt to solve for dV/dt.
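Plugging the given values into benorin's relation (a quick numerical sketch; the variable names are just for illustration):

```python
# Boyle's law PV = C, differentiated in time:
#   dP/dt * V + P * dV/dt = 0   =>   dV/dt = -(V / P) * dP/dt
V = 480.0       # volume, cm^3
P = 160.0       # pressure, kPa
dP_dt = 15.0    # kPa/min, increasing

dV_dt = -(V / P) * dP_dt
print(dV_dt)    # -45.0: the volume is decreasing at 45 cm^3/min
```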
#### HallsofIvy
Any line through (2, 3) can be written as y = m(x - 2) + 3 for some m.
If (x, y) is a point where that line intersects the parabola y = x^2 + x, then we must have m(x - 2) + 3 = x^2 + x. If, in addition, the line is tangent to the parabola there, we must have
m = 2x + 1. Solve those two equations for x and m.
I've edited this: before I had m = 2x - 1. Obviously, the derivative of x^2 + x is 2x + 1, not 2x - 1.
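HallsofIvy's two equations can be solved explicitly: substituting m = 2x + 1 into m(x - 2) + 3 = x^2 + x gives x^2 - 4x + 1 = 0, so x = 2 ± √3 and m = 5 ± 2√3. A small check (a sketch, not part of the original thread):

```python
import math

# Tangent lines to y = x^2 + x through the point (2, 3).
# Substituting m = 2x + 1 into m(x - 2) + 3 = x^2 + x gives x^2 - 4x + 1 = 0.
xs = [2 + math.sqrt(3), 2 - math.sqrt(3)]
for x in xs:
    m = 2 * x + 1                     # slope of the tangent at the touch point
    # The line through (2, 3) with slope m meets the parabola at (x, x^2 + x):
    assert abs(m * (x - 2) + 3 - (x ** 2 + x)) < 1e-9

slopes = sorted(2 * x + 1 for x in xs)  # 5 - 2*sqrt(3) and 5 + 2*sqrt(3)
```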
Last edited by a moderator:
#### cscott
Thank you.
"Two calc problems"
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving
https://planetmath.org/limitfunctionofsequence
# limit function of sequence
###### Theorem 1.
Let $f_{1},\,f_{2},\,\ldots$ be a sequence of real functions all defined in the interval $[a,\,b]$. This function sequence converges uniformly to the limit function $f$ on the interval $[a,\,b]$ if and only if
$\lim_{n\to\infty}\sup\{|f_{n}(x)-f(x)| : a\leqq x\leqq b\}=0.$
If all functions $f_{n}$ are continuous in the interval $[a,\,b]$ and $\lim_{n\to\infty}f_{n}(x)=f(x)$ at every point $x$ of the interval, the limit function need not be continuous in this interval; for example, $f_{n}(x)=\sin^{n}x$ in $[0,\,\pi]$ has pointwise limit $0$ for $x\neq\pi/2$ but $1$ at $x=\pi/2$.
###### Theorem 2.
If all the functions $f_{n}$ are continuous and the sequence $f_{1},\,f_{2},\,\ldots$ converges uniformly to a function $f$ in the interval $[a,\,b]$, then the limit function $f$ is continuous in this interval.
Note. The notion of uniform convergence can be extended to sequences of complex functions (the interval is replaced with some subset $G$ of $\mathbb{C}$). The limit function of a uniformly convergent sequence of continuous functions is continuous in $G$.
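The criterion of Theorem 1 can be illustrated numerically on the $\sin^{n}x$ example (a sketch; the grid-based supremum only approximates the true sup):

```python
import math

# f_n(x) = sin(x)**n on [0, pi] converges pointwise to 0 except at x = pi/2,
# where the limit is 1.  The sup of |f_n - f| over a grid stays near 1 for
# every n, so by Theorem 1 the convergence is not uniform -- consistent with
# Theorem 2, since the pointwise limit here is discontinuous.

def approx_sup_diff(n: int, points: int = 10001) -> float:
    best = 0.0
    for k in range(points):
        x = math.pi * k / (points - 1)
        limit = 1.0 if k == (points - 1) // 2 else 0.0  # pointwise limit f(x)
        best = max(best, abs(math.sin(x) ** n - limit))
    return best

for n in (1, 10, 100):
    assert approx_sup_diff(n) > 0.9   # the sup does not tend to 0
```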
https://link.springer.com/article/10.1007%2Fs11276-013-0641-6
Wireless Networks
, Volume 20, Issue 4, pp 733–745
# A fully distributed replica allocation scheme for an opportunistic network
• Ji-Hyeun Yoon
• Jae-Ho Choi
• Kwang-Jo Lee
• Sung-Bong Yang
Article
## Abstract
An opportunistic network (OPPNET) consists of diverse mobile nodes with various mobility patterns. Numerous mobility patterns and the resource constraints of mobile nodes lead to network partitioning that result in system performance degradation including low data accessibility. In a traditional mobile ad hoc network (MANET) which is similar to an OPPNET, replica allocation schemes have been proposed to increase data accessibility. Although the schemes are efficient in a MANET, they may not be directly applicable to an OPPNET because the schemes are based on a grouping of mobile nodes. It is very difficult to build groups based on network topology in an OPPNET because a node in an OPPNET does not keep its network topology information. In this paper, we propose a novel replica allocation scheme for an opportunistic network called the Snooping-based Fully Distributed replica allocation scheme. The proposed scheme allocates replicas in a fully distributed manner without grouping to reduce the communication cost, and fetches allocated replicas utilizing a novel candidate list concept to achieve high data accessibility. In the proposed scheme, a node can fetch replicas opportunistically based on the candidate list. Consequently, the proposed replica allocation scheme achieves high data accessibility while reducing the communication cost significantly. Extensive simulation results demonstrate that the proposed scheme reduces the communication cost and improves data accessibility over traditional schemes.
## Keywords
Replica allocation Opportunistic network Data accessibility Communication cost Distributed scheme Estimated access frequency
## Notes
### Acknowledgments
This research was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (2013R1A1A2011114).
## Authors and Affiliations
Ji-Hyeun Yoon, Jae-Ho Choi, Kwang-Jo Lee, and Sung-Bong Yang (Department of Computer Science, Yonsei University, Seoul, South Korea)
http://www.satishkashyap.com/2014/01/video-lectures-on-classical-field.html
### Video Lectures on "Classical Field Theory" by Prof. Suresh Govindarajan sir
Video Lecture Series from IIT Professors :
Classical Field Theory by Prof. Suresh Govindarajan sir
Dr. Suresh Govindarajan is undoubtedly one of the most brilliant string theorists in India. He has been at the forefront of String Theory research and has many publications in Superstring Theory and related fields. He completed his bachelor's degree in Electrical Engineering at IIT Madras and went on to get a PhD in Physics from the University of Pennsylvania. He has worked at many institutions that are leaders in the field of research, including CERN, TIFR and IIT Madras. Currently, he is an Associate Professor in the Dept. of Physics, IIT Madras. People who have attended his classes will vouch for the fact that he is a wonderful and enthusiastic teacher.
• High School (1982, Atomic Energy Central School, Hyderabad)
• B.Tech. in Electrical Engineering (1986, Indian Institute of Technology Madras)
• Ph. D. in Theoretical Physics (1991, University of Pennsylvania)
### Course Outline
The course introduces the student to relativistic classical field theory. The basic object is a field (such as the electromagnetic field) which possesses infinite degrees of freedom. The use of local and global symmetries (such as rotations) forms an underlying theme in the discussion. Concepts such as conservation laws, spontaneous breakdown of symmetry, the Higgs mechanism, etc. are discussed in this context. Several interesting solutions to the Euler-Lagrange equations of motion, such as kinks, vortices, monopoles and instantons, are discussed along with their applications. The Standard Model of particle physics is used to illustrate how the various concepts discussed in this course are combined in real applications. All necessary mathematical background is provided to make the course self-contained. This course may also be considered a prelude to Quantum Field Theory.
Prerequisites: Classical Mechanics, Electromagnetism (and possibly the special theory of relativity).
References:
1. L. D. Landau and E. M. Lifshitz, The Classical Theory of Fields, Pergamon (1975).
2. M. R. Spiegel, Vector Analysis, Schaum Outline Series, McGraw-Hill (1974).
3. M. Carmeli, Classical Fields, Wiley (1982).
4. A. O. Barut, Electrodynamics and Classical Theory of Fields, Chap. 1,Macmillan (1986).
5. C. Itzykson and J. B. Zuber, Quantum Field Theory, Chap. 1, McGraw-Hill (1986).
6. S. Coleman, Aspects of Symmetry, Cambridge Univ. Press.
7. R. Rajaraman, Solitons and Instantons, North-Holland.
Module 1 Introduction to Classical Field Theory (1 Lecture)
- Lecture 1: What is Classical Field Theory? Review of classical mechanics, Particle Trajectories and the Principle of least action, Feynman's description of QM, Classical Mechanics to Classical Fields. Note: do Problem Set 1 before viewing Lecture 2!
Module 2 Symmetries and Group Theory (6 Lectures)
- Lecture 2: Symmetries and Invariances - I. Symmetries, Invariances of Newton's EOM vs Maxwell's Equations, The Galilean Group.
- Lecture 3: Symmetries and Invariances - II. Invariances of Maxwell's Equations continued, Common Four Vectors, Covariant Formulation of Maxwell's Equations, Lorentz and Poincare Groups, Rotation Group and vectors under rotation. Note: attempt Problem Set 2 after viewing Lectures 2 and 3.
- Lecture 4: Group Theory in Physics - I. Definition of a Group, Antisymmetric Matrices and SO(d), Vectors and Tensors of SO(d), Parity: Polar and Axial Vectors. Note: solve Problem Set 3 while/after viewing Lectures 4 and 5.
- Lecture 5: Group Theory in Physics - II. Generalizations of SO(d) (specifically the Lorentz Group), Simple Boost Matrices and Rapidity, SO(p,q) with general signatures in the metric, The Symplectic Group. Correction at 40:30: the matrix should be symmetric and non-degenerate; symplectic matrices have det = 1.
- Lecture 6: Finite Groups - I. Finite Groups of low order: Cyclic and Coxeter (specifically Dihedral) Groups, Definition of a Subgroup, Equivalence relation and Cosets. Correction at 49:19: left coset wrongly called right coset; corrected at the start of Lecture 7.
- Lecture 7: Finite Groups - II. Left and Right Cosets, Permutation Group, Normal Subgroups, Classification of Finite Simple Groups, Monstrous moonshine. Note: solve Problem Set 4 while/after viewing Lectures 6 and 7.
Module 3 Actions for Classical Field Theory (3 Lectures)
- Lecture 8: Basics of CFT - I. Classical Mechanics of Fields, Structure of the KE term in the Lagrangian density, the ultra-local term, and Lorentz invariance of the Lagrangian. Note: solve Problem Set 5 after viewing Lectures 8 and 9.
- Lecture 9: Basics of CFT - II. Action Principle for fields, Conditions on the Lagrangian density for no surface contribution, Conserved Currents, Hamiltonian density, Conditions for Finite Energy. Correction at 50:46 and 51:33: the finite-energy condition has the wrong power.
- Lecture 10: Basics of CFT - III. Definition of Vacuum and examples, Vacuum Solutions for the quartic potential, Topological Currents and Charges, Noether's Theorem, Application to translational invariance. Note: solve Problem Set 6 after viewing Lecture 10 but before Lecture 15, where it is discussed.
Module 4 Green Functions for the Klein-Gordon Operator (2 Lectures)
- Lecture 11: Green Functions - I. Inhomogeneous Klein-Gordon Equation, Method of Green functions, Advanced and Retarded Green Functions.
- Lecture 12: Green Functions - II. Green Functions of the KG operator, Closing the contour, The Feynman propagator.
Module 5 Symmetries and Conserved quantities (2 Lectures)
- Lecture 13: Noether's Theorem - I. Types of Symmetries, Internal Symmetries, Notion of "small", Transformations to first order (for the Lorentz Group), Formulation to derive the Master formula.
- Lecture 14: Noether's Theorem - II. Derivation of the Master formula for the Noether current, The energy-momentum tensor and the generalized angular momentum tensor as examples. Note: solve Problem Set 7 after viewing Lecture 14 but before Lecture 20, where it is discussed.
Module 6 Solitons - I (Kink soliton) (1 lecture)
- Lecture 15: Kink Soliton. Time-independent, finite-energy solutions to the Euler-Lagrange equations of motion, the kink soliton, Derrick's theorem and its proof.
Module 7 Hidden Symmetry (Spontaneous Symmetry Breaking) & the abelian Higgs mechanism (3 Lectures)
- Lecture 16: Hidden Symmetry. Spontaneous symmetry breaking and statement of Goldstone's theorem.
- Lecture 17: Local Symmetries. Symmetry breaking continued, The Mermin-Wagner-Coleman theorem, The ideas of global and local symmetries, the covariant derivative, Minimal prescription for the covariant derivative. Correction at 23:13: the minus sign in the SO(2) matrix should change location, or equivalently take q to -q.
- Lecture 18: The Abelian Higgs model. Definition of field strength using the covariant derivative, Small fluctuations about the vacuum solution, The Higgs mechanism in the U(1) case. Correction at 29:25: index mismatch (μ/ν on LHS/RHS) in the covariant current.
Module 8 Lie algebras, symmetry breaking and Noether's theorem for Maxwell Equations (2 Lectures)
- Lecture 19: Lie Algebras - I. Recap of symmetries and Noether's theorem, Lie algebras and finite-dimensional representations, the su(2) Lie algebra. Note: solve Problem Set 8 while viewing Lectures 19 and 20.
- Lecture 20: Lie Algebras - II. The su(3) Lie algebra; symmetry breaking in terms of Lie algebras, Conserved currents for the Proca action: energy-momentum, generalized angular momentum and the symmetric energy-momentum tensors.
Module 9 Solitons — II (Magnetic Vortices) (2 Lectures)
- Lecture 21: Magnetic Vortices - I. Finite-energy, time-independent solutions in the Abelian Higgs model in 2+1 dimensions; topological charge equals magnetic flux; quantization of magnetic flux; the Bogomol'nyi-Prasad-Sommerfield (BPS) bound for the energy; saturation of the BPS bound. (Solve Problem Set 9 before viewing lecture 30.)
- Lecture 22: Magnetic Vortices - II. Vortices in the Abelian Higgs model applied to superconducting materials; characteristic lengths in the problem; the "size" of a vortex; description of the vortex number using the fundamental group of the gauge group U(1), i.e. the circle. (Errata at 35:38 and 36:03: the finite-energy condition is written with the wrong power.)
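For reference (a standard result, with assumed conventions in which $q$ is the gauge coupling), the flux quantization mentioned in lecture 21 follows from finiteness of the energy, which forces $\phi \to v\,e^{in\theta}$ on the circle at spatial infinity and ties the magnetic flux to the winding number:

```latex
% Magnetic flux of an n-vortex (assumed conventions):
\Phi \;=\; \oint_{S^1_\infty} A_i\, dx^i \;=\; \int d^2x\; B \;=\; \frac{2\pi n}{q},
\qquad n \in \mathbb{Z}.
```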
Module 10 Towards Non-abelian gauge theories (2 Lectures)
- Lecture 23: Non-abelian Gauge Theories - I. Non-abelian gauge symmetry with SU(2) as an example; the covariant derivative in the non-abelian case; construction of a locally SU(2)-invariant Lagrangian; transformation of the gauge fields under local gauge transformations. (Solve Problem Set 10 while viewing lectures 23/24.)
- Lecture 24: Non-abelian Gauge Theories - II. Transformation of the gauge fields (continued); derivation of the field strength for the gauge field; symmetry breaking in the non-abelian case; Goldstone's theorem in terms of Lie algebras.
Module 11 Representation theory of Lie Algebras (2 Lectures)
- Lecture 25: Irreps of Lie Algebras - I. Representation theory of su(2) and su(3); the Cartan subalgebra; the adjoint representation. (Erratum at 3:00: misleading statement about the map from G to GL(N); while GL(N) is the set of all linear maps on V, the map from G to GL(N) is not itself linear. Erratum at 28:00: the "0" and "*" blocks in the matrix have been interchanged.)
- Lecture 26: Irreps of Lie Algebras - II. Representation theory continued; Ferrers diagrams.
Quiz (Test yourself)
If you have gotten this far, you can test your understanding by taking this Quiz. It is meant to be an open-notes (i.e., your own notes) examination, and the expected duration is one and a half hours. I don't intend to post the solutions online, but they will be provided on request. This is to counter the natural human tendency to look at solutions if they are available! What is a good score? I would say anything over 50% is acceptable.
Module 12 The Standard Model of Particle Physics (2 Lectures)
- Lecture 27: The Standard Model - I. su(3) multiplets; motivation for the Standard Model; colour confinement; the Gell-Mann-Nishijima relation.
- Lecture 28: The Standard Model - II. Electroweak symmetry breaking: an application of symmetry breaking in the non-abelian case.
Module 13 The Lorentz and Poincare Lie Algebras (1 Lecture)
- Lecture 29: Irreps of the Lorentz/Poincare Algebras. The Lorentz and Poincare algebras and their representations.
Module 14 Solitons — III (Monopoles and Dyons) (3 Lectures)
- Lecture 30: The Dirac Monopole. Magnetically charged solutions: the Dirac monopole; flux quantization. (Solve Problem Set 11 while viewing lectures 30/31/32.)
- Lecture 31: The 't Hooft-Polyakov Monopole. Magnetically charged solutions: the 't Hooft-Polyakov monopole; the Prasad-Sommerfield limit.
- Lecture 32: Revisiting Derrick's Theorem. Revisiting Derrick's theorem; the BPS solution.
- Lecture 33: The Julia-Zee Dyon. Constructing dyonic solutions; Dirac quantization for dyons; dimensional reduction.
Module 15 Instantons and their physical interpretation (4 Lectures)
- Lecture 34: Instantons - I. Quantum mechanical tunnelling and instantons. (Solve Problem Set 12 while viewing lectures 34/37.)
- Lecture 35: Instantons - II. The kink soliton and tunnelling; instantons in pure Yang-Mills theories (SU(2)).
- Lecture 36: Instantons - III. More on instantons; the BPS bound.
- Lecture 37: Instantons - IV. Free parameters in instanton solutions; moduli space; complexified Yang-Mills and theta vacua.
Module 16 An introduction to some advanced topics (2 Lectures)
- Lecture 38: Dualities. Dualities in field theory: the Ising model; Sine-Gordon / Massive Thirring; SU(2) Yang-Mills in 3+1 dimensions.
- Lecture 39: Geometrization of Field Theory. General relativity as a gauge theory; geometrization of field theory; a glimpse into string theory and branes.
The Final
If you have gotten this far, you can test your understanding (of the course material) by taking this Final Examination. It is meant to be an open notes (i.e., your own notes) examination and the expected duration is three hours. I don't intend to post the solutions online but they will be provided on request.
http://math.stackexchange.com/questions/37088/integration-doubt
# Integration doubt
How do I integrate $\sin(x)/x$? I tried using integration by parts, but it led me nowhere. Please help.
You have to use the (nonelementary) sine integral in the general case; however, if you're interested in the limit from 0 to $\infty$, there is an m.SE question devoted to that topic... – J. M. May 5 '11 at 3:03
The function $f(x)=\sin(x)/x$ does not admit an elementary antiderivative; that is, there is no formula for its integral using quotients of polynomials, trigonometric functions, logarithms, and exponentials (the usual functions you study in calculus).
Symbolic integration is the part of calculus that deals with finding antiderivatives. There is a fairly sophisticated algorithm due to Risch, and applying it shows that there is no nice formula for $\displaystyle \int\frac{\sin(x)}x dx$. The algorithm is sufficiently elaborate that apparently no software package can currently find antiderivatives for all functions for which it is possible. The Wikipedia page I linked to has references to the original (and nice) paper.
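As an illustration of this point, a modern computer-algebra system that implements parts of the Risch machinery will not return an elementary formula; for example, SymPy (assuming it is installed) answers with the nonelementary sine-integral function:

```python
# Ask a CAS for an antiderivative of sin(x)/x (SymPy assumed available).
from sympy import symbols, sin, integrate

x = symbols('x')
antiderivative = integrate(sin(x) / x, x)
print(antiderivative)  # Si(x): the nonelementary sine integral, not an elementary formula
```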
A few years ago, Matthew Wiener posted a fairly readable account of the algorithm on sci.math; here is a pdf of the post.
For a nice full length exposition of the mathematics involved, I highly recommend the book by Manuel Bronstein,"Symbolic Integration 1 (transcendental functions)" (2 ed.), 1997, Springer-Verlag.
Now, not all is bad news here: one can integrate the power series for $\sin(x)/x$ term by term and obtain the power series of its antiderivative (which converges everywhere), and there are numerical methods that approximate this function very decently. Finally, one can compute explicitly (for example, using methods of complex analysis) that $$\int_0^\infty\frac{\sin(x)}x dx=\frac{\pi}2.$$
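A minimal sketch of the term-by-term idea (the function name and the truncation at 60 terms are my choices): dividing the sine series by $x$ and integrating gives $\mathrm{Si}(x)=\sum_{n\ge 0}(-1)^n x^{2n+1}/\bigl((2n+1)\,(2n+1)!\bigr)$, which converges for every $x$:

```python
import math

def si_series(x, terms=60):
    """Approximate Si(x), the antiderivative of sin(t)/t vanishing at 0,
    by integrating the sine power series term by term:
        Si(x) = sum_{n>=0} (-1)^n x^(2n+1) / ((2n+1) * (2n+1)!)."""
    total = 0.0
    for n in range(terms):
        k = 2 * n + 1
        total += (-1) ** n * x ** k / (k * math.factorial(k))
    return total

print(si_series(1.0))      # ≈ 0.9460830703671830
print(si_series(math.pi))  # ≈ 1.8519370519824662
```

Partial sums like these work well for moderate $x$; for large $x$ one would switch to an asymptotic expansion about the limiting value $\pi/2$.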
The pdf was typed by Apollo Hogan. I'm afraid I lost the link to the original post. – Andres Caicedo May 5 '11 at 3:36
here is the original sci.math post. – J. M. May 7 '11 at 3:22
@J.M. : Thanks! – Andres Caicedo May 12 '11 at 16:37
– Andres Caicedo Apr 12 '13 at 2:50
There is no indefinite integral that can be written in elementary functions. However, as sometimes happens, the definite integral on certain endpoints is known; see:
Solving the integral $\int_{0}^{\infty} \frac{\sin{x}}{x} \ dx = \frac{\pi}{2}$?
If you ask Wolfram Alpha, it will tell you that the integral is $\text{Si}(x)+C$. If you ask it what is $\text{Si}(x)$, it will tell you, among other things, that $$\text{Si}(x)=\int_0^x \frac{\sin t}{t}dt$$ Not very helpful! But one can get some information out of all this unhelpfulness.
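The definition is enough to compute values, though. Here is a sketch of direct numerical quadrature (composite Simpson rule; the helper names are mine), with $\sin(t)/t$ extended by its limiting value $1$ at $t=0$:

```python
import math

def sinc(t):
    """sin(t)/t, extended continuously by its limiting value 1 at t = 0."""
    return 1.0 if t == 0.0 else math.sin(t) / t

def si_quad(x, n=1000):
    """Approximate Si(x), the integral of sin(t)/t from 0 to x,
    with the composite Simpson rule on n subintervals (n must be even)."""
    h = x / n
    total = sinc(0.0) + sinc(x)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * sinc(i * h)
    return total * h / 3.0

print(si_quad(1.0))  # ≈ 0.9460830703671830
```

For growing $x$, `si_quad(x)` oscillates around $\pi/2 \approx 1.5708$ with amplitude roughly $1/x$, consistent with the definite integral quoted earlier.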
If there were an expression for your integral in terms of elementary functions, Wolfram Alpha, which is really pretty good, would very likely produce such an expression. And indeed it can be proved that there is no such expression.
But the integral that you want shows up naturally in a number of applications, for example in optics. So it is convenient to have a name for it, and $\text{Si}(x)$ is as far as I know the only one in common use.
Some definite integrals involving $\sin(x)/x$ can be evaluated explicitly, but of course not by the usual technique of finding an indefinite integral and then substituting.
There is nothing particularly mysterious about a function given by a simple formula not having an indefinite integral given by a combination of elementary functions. In fact "most" elementary functions do not have an elementary antiderivative.
To add: one should consider him/herself incredibly lucky if an integral s/he encounters in applications has a (simple) closed form. – J. M. May 5 '11 at 5:49