http://math.stackexchange.com/questions/250781/distribution-of-the-sum-of-iid-beta-negative-binomial-random-variables?answertab=votes
# Distribution of the sum of iid Beta-Negative-Binomial random variables
I am facing a problem when trying to calculate the distribution of the sum of iid Beta-Negative-Binomial random variables, or, for that matter, of BNB variables that differ only in the parameter $r$. To get a hint at how they might be distributed, I calculated the characteristic function of the BNB distribution:
$$\varphi_X(t)=\int_0^1 \! \sum_{x=0}^{\infty} \binom{x+r-1}{x}p^r(1-p)^x\frac{p^{\alpha-1}(1-p)^{\beta-1}}{B(\alpha,\beta)}e^{itx} \, dp$$ where $B$ is the Beta-function. One can now factor out the terms that do not contain $x$ and use the generating function $\sum_{n=0}^\infty \binom{n+k}{k}x^n=\frac{1}{(1-x)^{k+1}}$ to get the following expression for the characteristic function:
$$\varphi_X(t)=\int_0^1 \! \frac{p^{\alpha-1}(1-p)^{\beta-1}}{B(\alpha,\beta)}\frac{p^r}{(1-(1-p)e^{it})^r} \, dp.$$ The characteristic function of the sum of two independent variables $X$ and $Y$ is in this case the product of their corresponding characteristic functions.
$$\varphi_{X+Y}(t)=\int_0^1 \! \frac{p^{\alpha-1}(1-p)^{\beta-1}}{B(\alpha,\beta)}\frac{p^{r_1}}{(1-(1-p)e^{it})^{r_1}} \, dp \,\cdot\,\int_0^1 \! \frac{p^{\alpha-1}(1-p)^{\beta-1}}{B(\alpha,\beta)}\frac{p^{r_2}}{(1-(1-p)e^{it})^{r_2}} \, dp$$
How can one now deduce the distribution of the sum?
Thank you in advance.
EDIT:
Probability mass function:
$\int_0^1 \! \binom{x+r-1}{x}p^r(1-p)^x\frac{p^{\alpha-1}(1-p)^{\beta-1}}{B(\alpha,\beta)} \, dp=\binom{x+r-1}{x}\frac{B(\alpha+r,\beta+x)}{B(\alpha,\beta)}=\frac{\Gamma(x+r)\Gamma(\alpha+r)\Gamma(\beta+x)\Gamma(\alpha+\beta)}{x!\,\Gamma(r)\Gamma(\alpha+\beta+r+x)\Gamma(\alpha)\Gamma(\beta)}$
Thus, the characteristic function can also be written as:
$\varphi_X(t) = \sum_{x=0}^{\infty}e^{itx} \binom{x+r-1}{x}\frac{\Gamma(\alpha+r)\Gamma(\beta+x)\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)\Gamma(\alpha+\beta+r+x)}$
EDIT2:
I made the non-trivial error of forgetting the integral in the pmf, which is also the reason why $\varphi_X(0)\ne1$ above.
$\varphi_X(0)=\frac{1}{B(\alpha,\beta)} \int_0^1 p^{\alpha-1}(1-p)^{\beta-1}\cdot\frac{p^r}{(1-(1-p))^r} \, dp = \frac{1}{B(\alpha,\beta)} \int_0^1 p^{\alpha-1}(1-p)^{\beta-1} \, dp = 1$ with $B(x,y)=\int_0^1 p^{x-1}(1-p)^{y-1} \, dp$
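As a numerical sanity check on the Gamma-ratio form of the pmf and on the requirement $\varphi_X(0)=1$, one can simply sum the pmf over a long range of $x$. A quick sketch in Python (the parameter values $r=2$, $\alpha=3$, $\beta=2$ are arbitrary; the tail decays like $x^{-\alpha-1}$, so truncation at 5000 terms is more than enough here):

```python
from math import exp, lgamma

def bnb_pmf(x, r, a, b):
    # Gamma-ratio form of the Beta-Negative-Binomial pmf derived above:
    # Gamma(x+r)Gamma(a+r)Gamma(b+x)Gamma(a+b) /
    #   (x! Gamma(r) Gamma(a+b+r+x) Gamma(a) Gamma(b)),
    # evaluated in log space for numerical stability.
    return exp(
        lgamma(x + r) + lgamma(a + r) + lgamma(b + x) + lgamma(a + b)
        - lgamma(x + 1) - lgamma(r) - lgamma(a + b + r + x)
        - lgamma(a) - lgamma(b)
    )

r, a, b = 2, 3.0, 2.0  # arbitrary example parameters (alpha > 1 so the mean exists)
total = sum(bnb_pmf(x, r, a, b) for x in range(5000))
mean = sum(x * bnb_pmf(x, r, a, b) for x in range(5000))
print(total)  # ~1.0, i.e. phi_X(0) = 1 as it must be
print(mean)   # ~2.0, matching E[X] = r*beta/(alpha - 1)
```

The mean check uses the known BNB moment $E[X]=r\beta/(\alpha-1)$ for $\alpha>1$, which is an extra fact not derived in the question.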
-
QUOTE: The characteristic function of the sum of two variables $X$ and $Y$ is defined as the product of their corresponding characteristic functions. END OF QUOTE. That's not true. If they're independent, then the characteristic function of their sum can be shown to be the product of their characteristic functions, but it is not defined to be that. Rather, that is a demonstrable fact, not a definition. The definition is still the same as for characteristic functions in general: it is $t\mapsto \mathbb E(e^{it(X+Y)})$. – Michael Hardy Dec 4 '12 at 17:34
Some of your characteristic functions $\varphi$ are such that $\varphi(0)\ne1$. This is odd. – Did Dec 4 '12 at 18:52
@MichaelHardy : That is obviously correct. I will correct that in my question. – Mark Dec 4 '12 at 19:26
@did: I will add the pmf of the BNB distribution too so it will be more obvious why I tried it this way. Trying to find the characteristic function when using the textbook definition of the pmf has proven to be too hard for me. – Mark Dec 4 '12 at 19:29
The trouble is not with what you tried or did not try but with the fact that characteristic functions are defined as $\varphi_X(t)=\mathbb E(\mathrm e^{itX})$. Hence $\varphi_X(0)=1$ for every random variable $X$. If $\psi(0)\ne1$, then $\psi\ne\varphi_X$ for every $X$. Thus, there is a problem with your formulas, which I suggest to get rid of before going any further. – Did Dec 4 '12 at 23:23
https://gmatclub.com/forum/in-the-effort-to-fire-a-civil-service-employee-his-or-her-21121.html?fl=similar
# In the effort to fire a Civil Service employee, his or her
Author Message
Senior Manager
Joined: 05 Aug 2005
Posts: 409
Followers: 2
Kudos [?]: 60 [0], given: 0
In the effort to fire a Civil Service employee, his or her [#permalink]
### Show Tags
11 Oct 2005, 21:51
In the effort to fire a Civil Service employee, his or her manager may have to spend up to $100,000 of tax money. Since Civil Service employees know how hard it is to fire them, they tend to loaf. This explains in large part why the government is so inefficient.

It can be properly inferred on the basis of the statements above that the author believes which of the following?

I. Too much job security can have a negative influence on workers.
II. More government workers should be fired.
III. Most government workers are Civil Service employees.

(A) I only
(B) I and III only
(C) II only
(D) I, II, and III
(E) III only

Director
Joined: 21 Aug 2005
Posts: 790
Followers: 2
Kudos [?]: 26 [0], given: 0

### Show Tags

11 Oct 2005, 22:24

Answer is B. "Since they know it is hard to fire them, they tend to loaf" ... "in large part why the government is so inefficient."

VP
Joined: 13 Jun 2004
Posts: 1115
Location: London, UK
Schools: Tuck'08
Followers: 7
Kudos [?]: 45 [0], given: 0

### Show Tags

11 Oct 2005, 23:24

B for me too

SVP
Joined: 28 May 2005
Posts: 1713
Location: Dhaka
Followers: 8
Kudos [?]: 374 [0], given: 0

### Show Tags

11 Oct 2005, 23:27

I will go with A.
_________________
hey ya......

Manager
Joined: 20 Sep 2005
Posts: 55
Followers: 0
Kudos [?]: 3 [0], given: 0

### Show Tags

11 Oct 2005, 23:36

I will also go with A. Most civil servants loaf. The government could be inefficient because the small number of civil servants don't create enough work or the right policies for the huge clerical staff subordinate to them.

Manager
Joined: 21 Sep 2005
Posts: 232
Followers: 2
Kudos [?]: 3 [0], given: 0

### Show Tags

12 Oct 2005, 01:15

Man.. this is tough... I would have gone with B on test day, which I think would have been wrong, so I correct my mistake and go for A.
Senior Manager
Joined: 05 Aug 2005
Posts: 409
Followers: 2
Kudos [?]: 60 [0], given: 0

### Show Tags

12 Oct 2005, 11:27

Not reaching any conclusion

Intern
Joined: 18 Aug 2005
Posts: 42
Followers: 1
Kudos [?]: 0 [0], given: 0

### Show Tags

12 Oct 2005, 11:42

Between A and B. I think I will go with A.

Manager
Joined: 28 Jun 2005
Posts: 65
Location: New York, NY
Followers: 1
Kudos [?]: 2 [0], given: 0

Re: CR: Civil services [#permalink]

### Show Tags

12 Oct 2005, 11:51

gmacvik wrote:
Since Civil Service employees know how hard it is to fire them, they tend to loaf. This explains in large part why the government is so inefficient.
I. Too much job security can have a negative influence on workers.
II. More government workers should be fired.
III. Most government workers are Civil Service employees.

I choose B

VP
Joined: 13 Jun 2004
Posts: 1115
Location: London, UK
Schools: Tuck'08
Followers: 7
Kudos [?]: 45 [0], given: 0

### Show Tags

13 Oct 2005, 20:16

OA?

Manager
Joined: 10 Sep 2005
Posts: 162
Followers: 1
Kudos [?]: 41 [0], given: 0

### Show Tags

22 Oct 2005, 03:28

A for me... can someone post the OA if they know?

SVP
Joined: 03 Jan 2005
Posts: 2236
Followers: 16
Kudos [?]: 342 [0], given: 0

Re: CR: Civil services [#permalink]

### Show Tags

22 Oct 2005, 09:09

gmacvik wrote:
In the effort to fire a Civil Service employee, his or her manager may have to spend up to $100,000 of tax money. Since Civil Service employees know how hard it is to fire them, they tend to loaf. This explains in large part why the government is so inefficient.
It can be properly inferred on the basis of the statements above that the author believes which of the following?
I. Too much job security can have a negative influence on workers.
II. More government workers should be fired.
III. Most government workers are Civil Service employees.
(A) I only
(B) I and III only
(C) II only
(D) I, II, and III
(E) III only
II is obviously not inferred. However, do we need III? If he believes that only a small part of government workers are lazy Civil Service employees, would he still be able to say that this explains the government's inefficiency in large part? It's the "in large part" that makes it hard to decide. I say he has to believe III, although I'm not very sure.
B, hesitantly.
_________________
Keep on asking, and it will be given you;
keep on seeking, and you will find;
keep on knocking, and it will be opened to you.
Current Student
Joined: 28 Dec 2004
Posts: 3363
Location: New York City
Schools: Wharton'11 HBS'12
Followers: 15
Kudos [?]: 297 [0], given: 2
### Show Tags
22 Oct 2005, 10:00
D for me...
yes, like me, too much job security is a bad thing...
yes, if you fire more people, then you will eradicate the culture
yes, most govt. employees are Civil Service employees because "in large part" means there are a lot of them...
I am hesitant about II but I will go for it..
Manager
Joined: 04 Oct 2005
Posts: 246
Followers: 1
Kudos [?]: 78 [0], given: 0
### Show Tags
22 Oct 2005, 10:17
I go for A...
Most of us agree that II is false and that I is true, due to "they know it's hard to fire them".
III is tempting because of the words "large part"... but this large part refers to the explanation of why the government is inefficient. It just means that the cause of inefficiency is largely explained by this behaviour. It could be that only a minor part of the employees act as the author describes, but this behaviour is the only cause of the government's inefficiency. Then this behaviour EXPLAINS A LARGE part of the issue. Since we have no information on whether other parts of the government may operate inefficiently as well, we cannot conclude that this "large part" explanation leads to III.
Director
Joined: 14 Sep 2005
Posts: 988
Location: South Korea
Followers: 2
Kudos [?]: 173 [0], given: 0
### Show Tags
23 Oct 2005, 04:57
Clearly (A).
III is wrong since 'in large part' does not mean 'most Civil Service employees'. It does not refer to a number. It just means "how well it is explained".
_________________
Auge um Auge, Zahn um Zahn !
Director
Joined: 09 Jul 2005
Posts: 592
Followers: 2
Kudos [?]: 58 [0], given: 0
### Show Tags
23 Oct 2005, 11:23
A for me. Nothing is stated about how many civil employees work for the government.
SVP
Joined: 16 Oct 2003
Posts: 1805
Followers: 5
Kudos [?]: 154 [0], given: 0
### Show Tags
23 Oct 2005, 19:15
A.
You just cannot infer III.
SVP
Joined: 03 Jan 2005
Posts: 2236
Followers: 16
Kudos [?]: 342 [0], given: 0
### Show Tags
23 Oct 2005, 19:32
gamjatang wrote:
Clearly (A).
III is wrong since 'in large part' does not mean 'most Civil Service employees'. It does not refer to a number. It just means "how well it is explained".
I know. Though if only a small number of people are inefficient, could it really explain the government's inefficiency very well? I would agree that it is not clearly inferred, though.
http://mathematica.stackexchange.com/tags/hold/hot
Tag Info
29
Generally, you want the Trott-Strzebonski in-place evaluation technique: In[47]:= f[x_Real]:=x^2; Hold[{Hold[2.],Hold[3.]}]/.n_Real:>With[{eval = f[n]},eval/;True] Out[48]= Hold[{Hold[4.],Hold[9.]}] It will inject the evaluated r.h.s. into an arbitrarily deep location in the held expression, where the expression was found that matched the rule ...
22
RuleCondition provides an undocumented, but very convenient, way to make replacements in held expressions. For example, if we want to square the odd integers in a held list: In[3]:= Hold[{1, 2, 3, 4, 5}] /. n_Integer :> RuleCondition[n^2, OddQ[n]] Out[3]= Hold[{1, 2, 9, 4, 25}] RuleCondition differs from Condition in that the replacement expression is ...
18
Here are a couple of alternatives to Trott-Strzebonski in @R.M's answer: Hold[{3,4,5|6}] /. Verbatim[Alternatives][x__] :> RuleCondition@RandomChoice@List@x Hold[{3, 4, 5}] Hold[{3,4,5|6}] /. Verbatim[Alternatives][x__] :> Block[{}, RandomChoice@List@x /; True] Hold[{3, 4, 6}] They operate on the same principle as Trott-Strzebonski ...
18
This is a case where the Trott-Strzebonski in-place evaluation trick is useful. You use With to inject inside your held expression as: (Hold[{3, 4, 5 | 6}] /. (Verbatim@Alternatives)[x__] :> With[{eval = RandomChoice@List@x}, eval /; True]) Out[1]= Hold[{3, 4, 5}] You should definitely read this post by Leonid, that gives you a good insight into ...
16
The way Mathematica works is that when it encounters a function with arguments it will try to evaluate the arguments first before proceeding to evaluate the function. This behavior can be modified by specifying the various HoldAll, HoldFirst, HoldRest, etc. attributes for a given function. So in your example f[x+1] will be immediately replaced by f[6] ...
13
Not to detract from the existing answers (particularly @WReach's suggestion, which was the same solution that came to my mind as I read your question, and which I will use here), but you may find it easier to define your own references rather than using strings. (In fact, I wouldn't necessarily recommend an approach based on building Mathematica expressions ...
12
It is because, in version 9, the implementation of Plot is loaded from a dump file on its first usage, rather than loading when the kernel starts. One can see this by clearing the ReadProtected attribute: ClearAttributes[Plot, ReadProtected] Information[Plot] (* -> Plot := SystemDumpAutoLoad[ Hold[Plot], Hold[syms], VisualizationProto ...
12
#1 Trott-Strzebonski in-place evaluation: hf = HoldForm[1 - 1^2/2 + 1^3/3 - 1^4/4 + 1^5/5 - 1^6/6] hf /. x_Times :> With[{eval = x}, eval /; True] 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 Replace[hf, x_ :> With[{eval = x}, eval /; True], {2}] 1 - 1/2 + 1/3 - 1/4 + 1/5 - 1/6 One may simplify this method using the undocumented function ...
12
The Hold functions enable Mathematica's version of what some other languages call "macros." You can use them for a lot of things, but the essential point is that they preserve the structure of the input. The built-in functions are full of examples: x = 7; Plot[x^2, {x, -2, 2}] Type this in and you'll see that Plot draws the parabola even though "x" was ...
12
While I can't follow your code, I guess your problem is caused by the fact that you get evaluation in between individual replacements, and the fact that Listable functions of several arguments (which includes operators like + and *) have quite peculiar behaviour. The fact that you get a matrix instead of a vector, as well as the fact that you can avoid it ...
11
You are missing Unevaluated: SetAttributes[f, HoldFirst] f[x_] := {SymbolName[Unevaluated@x], x} because SymbolName does not hold its arguments, so you have to prevent evaluation also there. Generally, if you are passing some argument via a chain of function calls, and want to keep it unevaluated, you have to prevent it's evaluation at each stage ...
10
I think that in general, for tasks like this one, tricks like Trott-Strzebonski technique are not the best way, and one really needs expression parsers, which are may be not shorter, but more readable and more extensible. Here is a possible one for your problem: ClearAll[convert]; SetAttributes[convert, {HoldAll}]; convert[x_List] := Map[convert, ...
9
This is possible in the interactive session with $PreRead. I will adapt my solution to the same problem posted in this Mathgroup thread. To quote my explanation from there, the essence of the present solution is to delay the parsing of the code (body) that must be executed inside a given context until run-time, that is, replace code ...
9
One way to achieve this is to use a "vanishing" wrapper. The idea is to temporarily wrap the substituted expression with a holding symbolic head, and then remove that head in a second replacement: Module[{h}, SetAttributes[h, HoldAll]; y /. bar[j_] :> RuleCondition[Extract[x, {j}, h]] /. h[x_] :> x] (* Hold[foo[2+2]] *) Module is used to ensure ...
8
I think there is a case to be made for not using List at all. It seems to me that it is a needless complication. Why not instead use Hold in place of List? a = Hold[2 + 2]; b = Hold[4 + 4]; c = Join[a, b] Append[c, Unevaluated[6 + 6]] Hold[2 + 2, 4 + 4] Hold[2 + 2, 4 + 4, 6 + 6] Also: x = Hold @@ {a, b} Length[x] Dimensions[x] Hold[Hold[2 + ...
8
I think your original method is fine, but perhaps this will be more to your liking: Table[With[{n = ni}, Plot[ρ[n, x], {x, 0, L}, PlotLabel -> Defer@ρ[n, x]]], {ni, 1, 4}] Another, perhaps less fundamental way is Table[Plot[ρ[n, x], {x, 0, L}, PlotLegends -> Placed["Expressions", Top]], {n, 1, 4}]
7
Note that the following works, but generates warning messages. f[y_, a_] := NIntegrate[y^2 + x, {x, 0, a}]; Plot[Table[f[y, a], {a, 1, 3}], {y, 0, 3}, Evaluated -> True] The issue is that Plot (which usually holds its first argument until specific values of $x$ are plugged in) tries to evaluate the first argument symbolically (due to the Evaluated ...
7
You could do something like InputField[Dynamic[d, (d = HoldForm @@ #) &], Hold[Expression]] This will wrap the expression typed into the input field in HoldForm.
7
Here is yet another possibility (which, in a way, combines some of the suggestions given already): define a new scoping construct, most similar to Function, to perform the task you need: ClearAll[strFunction]; SetAttributes[strFunction, HoldAll]; SyntaxInformation[strFunction] = {"ArgumentsPattern" -> {_, _}, "LocalVariables" -> {"Solve", {1, ...
7
Injector pattern: list = {1, 2, 3}; MakeExpression["list"] /. _[sym_] :> AppendTo[sym, 4] Function (here using the Null syntax trick): Function[, AppendTo[#, 4], HoldAll] @@ MakeExpression["list"]
6
This works: ReleaseHold@Block[{list}, Hold[AppendTo["list", 1]] /. "list" -> Symbol["list"]] The mechanism is as follows: the Block temporarily undefines list so that after Symbol["list"] is evaluated, evaluation stops. However, the AppendTo shall not be evaluated as long as list is undefined, therefore it is wrapped in Hold. Symbol shall be evaluated, ...
6
One easy way, which does not work for lists of different length, is a = Hold[{2 + 2}]; b = Hold[{4 + 4}]; Thread[{a, b} /. Hold[{a___}] :> Hold[a], Hold] What happens here is the following: first, you can use {a, b} /. Hold[{a___}] :> Hold[a] to get rid of the inner list braces without evaluating your expressions. Since we use {a, b} we will ...
6
As Leonid has explained, the problem is that the symbols are created and get their context at parse time, so if you need to avoid generating them in the current (usually "Global`") context, using $PreRead as he explained is the only possibility. If you don't care that the symbols you use are created in the current context AND the context you want to evaluate ...
5
Since you said you wanted text input containing subscripts, superscripts, etc, it sounds like you just want Mathematica's box language: InputField[Dynamic[d], Boxes] Now d is boxes, such as In[39]:= FullForm[d] Out[39]//FullForm=SuperscriptBox["a","b"] You can convert them to expressions with MakeExpression or ToExpression, or interpret them any ...
5
The implementation of your function polynomial can be made simpler. But for your question: the problem (as I said in the little comment above) is in the evaluation of your "a" to the symbol a, which is global and has a value. Better to use a method like the one shown by Bill above, and other ways to avoid these sorts of things in the first place. You can see the problem ...
5
Introduction Questions like this one often suggest that it is not clear how Mathematica works and that it always tries to evaluate expressions. Consider the following simple example: a = 1; a + a a*x + 1 In the first line I set a to 1, and in the other two I write down expressions. Now I want the result of a+a to be 2, as one would expect, but in the ...
5
This is only a hack, but maybe it gives you a short way out of this. Lately, we had a similar discussion in chat about NValues, where the problem was related. In that case Rojo wanted to use NValues to let some of the arguments stay untouched by N. There too, the problem arose when N was called from the very outside and dived into the subexpressions ...
5
Indeed, Nest and NestList do not support functions with Hold attributes (nor do Fold and FoldList, etc.). There were discussions of this in the past; I was able to find one such discussion. As far as I can tell, this is by design. What happens is that NestList (for example) maintains an internal list of intermediate results, the last of which is used in the next ...
5
You can "inject" your variables into the module like this: ClearAll[x, y, z, a, b, c, Foo]; x = 7; SetAttributes[Foo, HoldAll]; Foo[vars_] := Hold[vars] /. _@{v__} :> Module[{v, a, b, c}, x = 43] Foo[{x, y, z}] (* 43 *) x (* 7 *) For more information have a look at some of the questions and answers in these search results: ...
5
Hold[(a + b)/c] /. {a -> 1, b -> 3, c -> 4} /. Hold -> Defer (* (1 + 3)/4 *) Hold[(a + b)/c] /. {a -> 1, b -> -3, c -> 4} /. Hold -> Defer (* (1 - 3)/4 *) Hold[(a + b)/c] /. {a -> 1, b -> -3, c -> -4} /. Hold -> Defer (* -(1/4) (1 - 3) *)
Only top voted, non community-wiki answers of a minimum length are eligible
https://socratic.org/questions/58caf7e1b72cff09cfa9f2d5
# Question #9f2d5
Jun 25, 2017
#### Answer:
$KNO_3 \to K^+ + NO_3^-$
#### Explanation:
Solvation takes place since this is an ionic compound, and it dissolves into its constituent ions. Water is omitted for simplicity; no reaction really takes place per se (nothing cool!).
https://chem.libretexts.org/Ancillary_Materials/Laboratory_Experiments/Wet_Lab_Experiments/General_Chemistry_Labs/Online_Chemistry_Lab_Manual/Chem_11_Experiments/10%3A_Vitamin_C_Analysis_(Experiment)
# 10: Vitamin C Analysis (Experiment)
##### Objectives
• To standardize a $$\ce{KIO3}$$ solution using a redox titration.
• To analyze an unknown and commercial product for vitamin C content via titration.
• To compare your results for the commercial product with those published on the label.
Note: You will need to bring a powdered or liquid drink, health product, fruit samples, or other commercial sample to lab for vitamin C analysis. You will need enough to make 500 mL of sample for use in 3-5 titrations. Be sure the product you select actually contains vitamin C (as listed on the label or in a text or website) and be sure to save the label or reference for comparison to your final results. Be careful to only select products where the actual vitamin C content in mg or percent of RDA (recommended daily allowance) is listed. The best samples are lightly colored and/or easily pulverized.
The two reactions we will use in this experiment are:
$\ce{KIO3(aq) + 6 H+(aq) +5 I- (aq)→ 3 I2(aq) + 3 H2O(l) + K+(aq) } \quad \quad \text{generation of }\ce{I2} \label{1}$
$\underbrace{\ce{C6H8O6(aq)}}_{\text{vitamin C(ascorbic acid)}}\ce{ + I2(aq) →C6H6O6(aq) +2 I- (aq) + 2 H+(aq) } \quad \quad \text{oxidation of vitamin C}\label{2}$
Reaction \ref{1} generates aqueous iodine, $$\ce{I2}$$ (aq). This is then used to oxidize vitamin C (ascorbic acid, $$\ce{C6H8O6}$$) in reaction \ref{2}. Both of these reactions require acidic conditions and so dilute hydrochloric acid, $$\ce{HCl}$$ (aq), will be added to the reaction mixture. Reaction one also requires a source of dissolved iodide ions, $$\ce{I^-}$$ (aq). This will be provided by adding solid potassium iodide, $$\ce{KI}$$ (s), to the reaction mixture.
This is a redox titration. The two relevant half reactions for reaction \ref{2} above are:
Reduction half reaction for Iodine at pH 5:
$\ce{I2 + 2e^- → 2 I^-}$
Oxidation half reaction for vitamin C ($$\ce{C6H8O6}$$) at pH 5:
$\ce{C6H8O6 → C6H6O6 + 2 H+ + 2 e^-}$
A few drops of starch solution will be added to help determine the titration endpoint. When the vitamin C (ascorbic acid) is completely oxidized, the iodine, $$\ce{I2}$$ (aq), will begin to build up and will react with the iodide ions, $$\ce{I^-}$$ (aq), already present to form a highly colored blue $$\ce{I3^-}$$-starch complex, indicating the endpoint of our titration.
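Combining the stoichiometry of reactions \ref{1} and \ref{2}: each mole of $\ce{KIO3}$ generates 3 moles of $\ce{I2}$, and each mole of $\ce{I2}$ oxidizes one mole of vitamin C. A short Python sketch of the resulting endpoint calculation follows; the molarity and volume used in the example are made-up illustrative values, not lab data:

```python
M_ASCORBIC_ACID = 176.12  # molar mass of vitamin C (C6H8O6), g/mol

def vitamin_c_mg(kio3_molarity, kio3_volume_ml):
    # 1 mol KIO3 -> 3 mol I2 (reaction 1); 1 mol I2 oxidizes 1 mol vitamin C (reaction 2)
    moles_kio3 = kio3_molarity * kio3_volume_ml / 1000.0
    moles_vit_c = 3.0 * moles_kio3
    return moles_vit_c * M_ASCORBIC_ACID * 1000.0  # convert g to mg

# Hypothetical example: endpoint reached after 12.5 mL of 0.00100 M KIO3
print(vitamin_c_mg(0.00100, 12.5))  # ~6.60 mg of vitamin C in the titrated sample
```

The same arithmetic, done by hand, is what you will carry out in your lab notebook for each titration.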
### Vitamin C: An Important Chemical Substance
Vitamin C, known chemically as ascorbic acid, is an important component of a healthy diet. The history of Vitamin C revolves around the history of the human disease scurvy, probably the first human illness to be recognized as a deficiency disease. Its symptoms include exhaustion, massive hemorrhaging of flesh and gums, general weakness, and diarrhea. Resultant death was common. Scurvy is a disease unique to guinea pigs, various primates, and humans. All other animal species have an enzyme which catalyzes the oxidation of L-gulonolactone to L-ascorbic acid, allowing them to synthesize Vitamin C in amounts adequate for metabolic needs.
L-Ascorbic Acid -- Vitamin C
As early as 1536, Jacques Cartier, a French explorer, reported the miraculous curative effects of infusions of pine bark and needles used by Native Americans. These items are now known to be good sources of ascorbic acid. However, some 400 years were to pass before Vitamin C was isolated, characterized, and synthesized. In the late 1700's, the British Navy ordered the use of limes on ships to prevent scurvy. This practice was for many years considered to be quackery by the merchant marines, and the Navy sailors became known as “Limeys”. At that time scurvy aboard sailing vessels was a serious problem with often up to 50% of the crew dying from scurvy on long voyages.
The RDA (Recommended Daily Allowance) for Vitamin C put forward by the Food and Nutrition Board of the National Research Council is 60 mg/day for adults. It is recommended that pregnant women consume an additional 20 mg/day. Lactating women are encouraged to take an additional 40 mg/day in order to assure an adequate supply of Vitamin C in breast milk. Medical research shows that 10 mg/day of Vitamin C will prevent scurvy in adults. There has been much controversy over speculation that Vitamin C intake should be much higher than the RDA for the prevention of colds and flu. Linus Pauling, winner of both a Nobel Prize in Chemistry and the Nobel Peace Prize, argued in his book, Vitamin C and the Common Cold, that humans should be consuming around 500 mg of Vitamin C a day (considered by many doctors to be an excessive amount) to help ward off the common cold and prevent cancer.
Vitamin C is a six-carbon molecule, closely related chemically to glucose. It was first isolated in 1928 by the Hungarian-born scientist Szent-Györgyi and structurally characterized by Haworth in 1933. In 1934, Reichstein worked out a simple, inexpensive, four-step process for synthesizing ascorbic acid from glucose. This method has been used for the commercial synthesis of Vitamin C. Vitamin C occurs naturally primarily in fresh fruits and vegetables.
Table 1: Vitamin C content of some foodstuffs
Vitamin C (mg/100 g) Foods
100 – 350 Chili peppers, sweet peppers, parsley, and turnip greens
25 – 100 Citrus juices (oranges, lemons, etc.), tomato juice, mustard greens, spinach, brussels sprouts
10 – 25 Green beans and peas, sweet corn, asparagus, pineapple, cranberries, cucumbers, lettuce
< 10 Eggs, milk, carrots, beets, cooked meat
From Roberts, Hollenberg, and Postman, General Chemistry in the Laboratory.
## Procedure
Work in groups of three, dividing the three parts of the experiment (standardization, unknown analysis, and food products) among your group members and then comparing data; otherwise you will not finish in one period. Work carefully: your grade for this experiment depends on the accuracy and precision of each of your final results.
Materials and Equipment
You will need the following additional equipment for this experiment: 3 Burets, 1 Mortar and pestle, 1 Buret stand
##### Safety
Avoid contact with iodine solutions, as they will stain your skin. Wear safety glasses at all times during the experiment.
WASTE DISPOSAL: You may pour the blue colored titrated solutions into the sink. However, all unused $$\ce{KIO3}$$ (after finishing parts A-C) must go in a waste container for disposal. This applies to all three parts of the experiment.
### Proper Titration Techniques
Using a Buret
Proper use of a buret is critical to performing accurate titrations. Your instructor will demonstrate the techniques described here.
1. Rinsing: Always rinse a buret (including the tip) before filling it with a new solution. You should rinse the buret first with deionized water, and then twice with approximately 10-mL aliquots of the solution you will be using in the buret. Be sure to swirl the solution to rinse all surfaces. If you are using an acid or base solution be careful to avoid spilling the solution on hands or clothing.
2. Filling: Mount the buret on a buret stand. Be sure that the tip fits snugly into the buret and is pressed all the way in. If the tip is excessively loose, exchange it for a tighter-fitting one. Using a funnel rinsed in the same manner as the buret, fill the buret with the titrant to just below the 0.00 mL mark. There is no need to fill the buret to exactly 0.00 mL, since you will use the difference between the ending and starting volumes to determine the amount delivered. When the buret is full, remove the funnel, as drops remaining in or around the funnel can creep down and alter your measured volume. If you overfill the buret, drain a small amount into an empty beaker. Do not re-use this "extra" solution, as it may have been contaminated by the beaker or diluted slightly by any water present in the beaker. Always pour fresh solution into the buret.
3. Removing Air Bubbles: Often air bubbles will be trapped in the tip of a newly filled buret. These can be difficult to see and troublesome as they alter the measured volume when they escape. To remove air bubbles hold the buret over an open beaker and open the stopcock fully to allow solution to flow out of the buret. Your instructor will demonstrate this technique. Refill the buret as necessary.
4. Reading the Buret: You should always read the volume in a buret from the bottom of the meniscus viewed at eye level (see Figure 1). A black or white card held up behind the buret helps with making this reading. Burets are accurate to ±0.02 mL and all readings should be recorded to two decimal places. Be sure to record both the starting and ending volumes when performing a titration. The difference is the volume delivered.
Good Titration Techniques
Throughout your scientific careers you will probably be expected to perform titrations, so it is important that you learn proper technique. In a titration, an indicator that changes color is generally added to the solution being titrated (although modern instruments can perform titrations automatically by monitoring absorbance spectroscopically). Add titrant from the buret dropwise, swirling between drops to determine whether a color change has occurred. Add titrant faster only if you know the approximate end-point of the titration, and slow down to dropwise addition when you come within a few milliliters of the end-point.
As you become proficient in performing titrations you will get a "feeling" for how much to open the stopcock to deliver just one drop of titrant. Some people become so proficient that they can titrate virtually "automatically" by allowing the titrant to drip out of the buret dropwise while keeping one hand on the stopcock and swirling the solution with the other hand. If you do this, be sure that the rate at which drops are dispensed is slow enough that you can stop the flow before the next drop forms! Overshooting an end-point by even one drop is often cause for repeating an entire titration. Generally, this will cost you more time than you gain from a slightly faster dripping rate.
Refill the buret between titrations so you won’t go below the last mark. If a titration requires more than the full volume of the buret, you should either use a larger buret or a more concentrated titrant. Refilling the buret in the middle of a trial introduces more error than is generally acceptable for analytical work.
Set-up and Preparation of Equipment
1. Clean and rinse a large 600-mL beaker using deionized water. Label this beaker “standard $$\ce{KIO3}$$ solution.”
2. From the large stock bottles of ~0.01 M $$\ce{KIO3}$$ obtain about 600 mL of $$\ce{KIO3}$$ solution. This should be enough $$\ce{KIO3}$$ for your group for all three parts of the experiment, including rinsings. The reason for collecting one beaker of stock is that there is no guarantee that different batches of $$\ce{KIO3}$$ from the stockroom will have exactly the same molarity. By having one beaker of stock you ensure that all your trials come from the same solution. (If you run out of stock or spill this solution accidentally you will need to repeat Part A on the new solution.)
3. Clean and rinse three burets once with deionized water and then twice with small (5-10 ml) aliquots of standard $$\ce{KIO3}$$ from your large beaker. Pour the rinsings into a waste beaker.
4. Fill each of the burets (one for each part of the experiment) with $$\ce{KIO3}$$ from your beaker. Remove any air bubbles from the tips. The starting volumes in each of the burets should be between 0.00 mL and 2.00 mL. If you use a funnel to fill the burets be sure it is cleaned and rinsed in the same way as the burets and removed from the buret before you make any readings to avoid dripping from the funnel into the buret.
Each of the following parts should be performed simultaneously by different members of your group. You do not have enough time to do these sequentially and finish in one lab period.
### Part A: Standardization of your $$\ce{KIO3}$$ solution
The $$\ce{KIO3}$$ solution has a concentration of approximately 0.01 M. You will need to determine its molarity exactly, to three significant figures. Your final calculated results for each trial of this experiment should differ by less than ±0.0005 M. Any trials outside this range should be repeated. You will need to calculate in advance how many grams of pure Vitamin C powder (ascorbic acid, $$\ce{C6H8O6}$$) you will need to do this standardization (this is part of your prelaboratory exercise). Remember that your buret holds a maximum of 50.00 mL of solution, and ideally you would like to use between 25-35 mL of solution for each titration (enough to get an accurate measurement, but not more than the buret holds).
1. Calculate the approximate mass of ascorbic acid you will need and have your instructor initial your calculations on the data sheet.
2. Weigh out approximately this amount of ascorbic acid directly into a 250-mL Erlenmeyer flask. Do not use another container to transfer the ascorbic acid as any loss would result in a serious systematic error. Record the mass added in each trial to three decimal places in your data table. It is not necessary that you weigh out the exact mass you calculated, so long as you record the actual mass of ascorbic acid added in each trial for your final calculations.
3. Dissolve the solid ascorbic acid in 50-100 mL of deionized water in an Erlenmeyer flask.
4. Add approximately 0.5-0.6 g of $$\ce{KI}$$, 5-6 mL of 1 M $$\ce{HCl}$$, and 3-4 drops of 0.5% starch solution to the flask. Swirl to thoroughly mix reagents.
5. Begin your titration. As the $$\ce{KIO3}$$ solution is added, you will see a dark blue (or sometimes yellow) color start to form as the endpoint is approached. While adding the $$\ce{KIO3}$$ swirl the flask to remove the color. The endpoint occurs when the dark blue color does not fade after 20 seconds of swirling.
6. Calculate the molarity of this sample. Repeat the procedure until you have three trials where your final calculated molarities differ by less than ± 0.0005 M.
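The molarity arithmetic in steps 1 and 6 can be sketched in a few lines of Python. This is an illustrative sketch only: the function name and trial numbers are made up, and the 1:3 KIO3-to-ascorbic-acid mole ratio used here is an assumption that you must verify against reactions (1) and (2) in your prelaboratory exercise.

```python
# Sketch of the Part A standardization calculation (illustrative only).
ASCORBIC_ACID_MOLAR_MASS = 176.12  # g/mol for C6H8O6

def kio3_molarity(mass_ascorbic_g, volume_kio3_mL, kio3_per_ascorbic=1/3):
    """Molarity of KIO3 implied by one standardization trial.

    kio3_per_ascorbic is assumed to be 1/3 here; confirm the actual
    mole ratio from the balanced equations in your manual.
    """
    moles_ascorbic = mass_ascorbic_g / ASCORBIC_ACID_MOLAR_MASS
    moles_kio3 = moles_ascorbic * kio3_per_ascorbic
    return moles_kio3 / (volume_kio3_mL / 1000.0)  # mol/L

# Hypothetical trial: 0.155 g ascorbic acid required 29.35 mL of KIO3.
print(round(kio3_molarity(0.155, 29.35), 4))  # about 0.0100 M
```

Averaging three such trial values, each within ±0.0005 M of the mean, gives the standardized molarity used in Parts B and C.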
### Part B: Vitamin C Unknown (internal control standard)
1. Obtain two Vitamin C tablets containing an unknown quantity of Vitamin C from your instructor.
2. Weigh each tablet and determine the average mass of a single tablet.
3. Grind the tablets into a fine powder using a mortar and pestle.
4. Weigh out approximately 0.20-0.25 grams of the powdered unknown directly into a 250-mL Erlenmeyer flask. Do not use another container to transfer the sample as any loss would result in a serious systematic error. Record the mass added in each trial to three decimal places in your data table.
5. Dissolve the sample in about 100 mL of deionized water and swirl well. Note that not all of the tablet may dissolve as commercial vitamin pills often use calcium carbonate (which is insoluble in water) as a solid binder.
6. Add approximately 0.5-0.6 g of $$\ce{KI}$$, 5-6 mL of 1 M $$\ce{HCl}$$, and 2-3 drops of 0.5% starch solution to the flask before beginning your titration. Swirl to mix.
7. Begin your titration. As the $$\ce{KIO3}$$ solution is added, you will see a dark blue (or sometimes yellow) color start to form as the endpoint is approached. While adding the $$\ce{KIO3}$$ swirl the flask to remove the color. The endpoint occurs when the dark blue color does not fade after 20 seconds of swirling.
8. Perform two more trials. If the first titration requires less than 20 mL of $$\ce{KIO3}$$, increase the mass of unknown slightly in subsequent trials.
9. Calculate milligrams of ascorbic acid per gram of sample and using the average mass of a tablet, determine the number of milligrams of Vitamin C contained in each tablet. Be sure to use the average molarity for $$\ce{KIO3}$$ determined in Part A for these calculations. Your results should be accurate to at least three significant figures. Repeat any trials that seem to differ significantly from your average.
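The step 9 arithmetic runs the same chain in reverse: titrant volume to moles of KIO3, to moles of ascorbic acid, to milligrams. As before, this is a hedged sketch with made-up trial numbers, and the 3:1 ascorbic-acid-to-KIO3 ratio is an assumption to verify against your balanced equations.

```python
ASCORBIC_ACID_MOLAR_MASS = 176.12  # g/mol for C6H8O6

def mg_ascorbic_acid(volume_kio3_mL, molarity_kio3, ascorbic_per_kio3=3):
    # Titrant volume -> moles KIO3 -> moles ascorbic acid (ratio assumed
    # to be 3; confirm it from the balanced equations) -> milligrams.
    moles_kio3 = molarity_kio3 * volume_kio3_mL / 1000.0
    return moles_kio3 * ascorbic_per_kio3 * ASCORBIC_ACID_MOLAR_MASS * 1000.0

# Hypothetical trial: 0.225 g of powdered unknown, 25.00 mL of 0.0100 M
# KIO3, average tablet mass 1.10 g.
mg = mg_ascorbic_acid(25.00, 0.0100)
mg_per_g = mg / 0.225
mg_per_tablet = mg_per_g * 1.10
print(round(mg_per_g, 1), "mg/g;", round(mg_per_tablet, 1), "mg/tablet")
```

The same mg-per-gram (or mg-per-milliliter) calculation applies in Part C.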
### Part C: Fruit juices, foods, health-products, and powdered drink mixes
Solids samples
1. Pulverize solid samples (such as vitamin pills, cereals, etc.) with a mortar and pestle. Powdered samples (such as drink mixes) may be used directly.
2. Weigh out enough powdered sample, so that there will be about 100 mg of ascorbic acid (according to the percentage of the RDA or mg/serving listed by the manufacturer) in each trial.
3. Add the sample to a 250-mL Erlenmeyer flask containing 50-100 mL of water. (Note: if your sample is highly colored, you might want to dissolve the $$\ce{KI}$$ in the water before adding the mix, so that you can be sure it dissolves.)
4. Add approximately 0.5-0.6 g of $$\ce{KI}$$, 5-6 mL of 1 M $$\ce{HCl}$$, and 3-4 drops of 0.5% starch solution to the flask. Swirl to thoroughly mix reagents.
5. Begin your titration. As the $$\ce{KIO3}$$ solution is added, you will see a dark blue (or sometimes yellow or black depending on the color of your sample) color start to form as the endpoint is approached. While adding the $$\ce{KIO3}$$ swirl the flask to remove the color. The endpoint occurs when the dark color does not fade after 20 seconds of swirling.
6. Perform two more trials. If the first titration requires less than 20 mL of $$\ce{KIO3}$$, increase the mass of unknown slightly in subsequent trials.
7. Calculate the milligrams of ascorbic acid per gram of sample. Be sure to use the average molarity determined for the $$\ce{KIO3}$$ in Part A for these calculations. Your results should be accurate to at least three significant figures. Repeat any trials that seem to differ significantly from your average.
Liquid samples
1. If you are using a pulpy juice, strain out the majority of the pulp using a cloth or filter.
2. Using a graduated cylinder, measure out at least 100 mL of your liquid sample. Record the volume to three significant figures (you will calculate the mass of ascorbic acid per milliliter of juice).
3. Transfer the measured juice to a 250-mL Erlenmeyer flask.
4. Add approximately 0.5-0.6 g of $$\ce{KI}$$, 5-6 mL of 1 M $$\ce{HCl}$$, and 3-4 drops of 0.5% starch solution to the flask. Swirl to thoroughly mix reagents.
5. Begin your titration. As the $$\ce{KIO3}$$ solution is added, you will see a dark blue (or sometimes yellow or black depending on the color of your sample) color start to form as the endpoint is approached. While adding the $$\ce{KIO3}$$ swirl the flask to remove the color. The endpoint occurs when the dark color does not fade after 20 seconds of swirling. With juices it sometimes takes a little longer for the blue color to fade, in which case the endpoint is where the color is permanent.
6. Perform two more trials. If the first titration requires less than 20 mL of $$\ce{KIO3}$$, increase the volume of unknown slightly in subsequent trials.
7. Calculate the milligrams of ascorbic acid per milliliter of juice. Be sure to use the average molarity determined for the $$\ce{KIO3}$$ in Part A for these calculations. Your results should be accurate to at least three significant figures. Repeat any trials that seem to differ significantly from your average.
## Pre-laboratory Assignment: Vitamin C Analysis
1. If an average lemon yields 40 mL of juice, and the juice contains 50 mg of Vitamin C per 100 mL of juice, how many lemons would one need to eat to consume the daily dose of Vitamin C recommended by Linus Pauling? Show all work.
1. Why are $$\ce{HCl}$$, $$\ce{KI}$$, and starch solution added to each of our flasks before titrating in this experiment? What is the function of each?
• $$\ce{HCl}$$:
• $$\ce{KI}$$:
• Starch:
1. A label states that a certain cold remedy contains 200% of the US Recommended Daily Allowance (RDA) of Vitamin C per serving, and that a single serving is one teaspoon (about 5 mL). Calculate the number of mg of Vitamin C per serving and per mL for this product. Show all work.
1. Based on the balanced reactions \ref{1} and \ref{2} for the titration of Vitamin C, what is the mole ratio of $$\ce{KIO3}$$ to Vitamin C from the combined equations?
_______ moles $$\ce{KIO3}$$ : _______ moles Vitamin C (ascorbic acid)
1. Assuming that you want to use about 35 mL of $$\ce{KIO3}$$ for your standardization titration in part A, about how many grams of ascorbic acid should you use? (you will need this calculation to start the lab). Show all work.
Hint: you will need to use the approximate $$\ce{KIO3}$$ molarity given in the lab instructions and the mole ratio you determined in the prior problem.
## Lab Report: Vitamin C Analysis
### Part A: Standardization of your $$\ce{KIO3}$$ solution
Mass of ascorbic acid to be used for standardization of ~0.01 M $$\ce{KIO3}$$: __________ g ______Instructor’s initials
Supporting calculations:
Standardization Titration Data:
Trial
Mass of ascorbic acid (g)
Volume of $$\ce{KIO3}$$ (mL)
Calculated Molarity (M)*
1
Difference:
2
Difference:
3
Difference:
4 (if req)
Difference:
*All values should be within ±0.0005 M of the average; trials outside this range should be crossed out and a fourth trial done as a replacement. Express your values to the correct number of significant figures. Show all your calculations on the back of this sheet.
• Average Molarity of $$\ce{KIO3}$$:
### Part B: Vitamin C Unknown (internal control standard)
"Internal Control Sample" (unknown) code:
Mass of Tablet 1:
Mass of Tablet 2:
Average mass:
Control Standard (Unknown) Titration Data:
Trial
Mass of unknown (g)
Volume of $$\ce{KIO3}$$ (mL)
mg ascorbic acid*
1
mg / g
mg/tablet
Difference:
2
mg / g
mg/tablet
Difference:
3
mg / g
mg/tablet
Difference:
mg / g
mg/tablet
Difference:
* Express your values to the correct number of significant figures. Show all your calculations on the back of this sheet.
Averages:
• ____________mg/g
• ____________mg/tablet
### Part C: Fruit juices, foods, health-products, and powdered drink mixes
Name of Sample Used: ________________________________________________________
1. Briefly describe the sample you chose to examine and how you prepared it for analysis. You may continue on the back if necessary:
Part C Titration Data:
Trial
Quantity of Sample Titrated (g or mL)
Volume of $$\ce{KIO3}$$ (mL)
Ascorbic acid: mg/g (solids) or mg/mL (liquids)
1
Difference:
2
Difference:
3
Difference:
4 (if req)
Difference:
*Express your values to the correct number of significant figures. Show all your calculations on the back of this sheet.
Average ascorbic acid :
1. What is the concentration of Vitamin C listed on the packaging by the manufacturer or given in the reference source? This can be given in units of %RDA, mg/g, mg/mL, mg/serving, or %RDA per serving. Be sure to include the exact units cited.
• Manufacturer’s claim: ____________________________ (value and units)
• Serving Size (if applicable): ________________________ (value and units)
1. Based on the manufacturer's or reference data above, calculate the mg of Vitamin C per gram (solids) or milliliter (liquid) of your sample. Show your work:
____________ mg / g or mL
1. If your reference comes from a text book or the internet give the citation below. If it comes from a product label please remove the label and attach it to this report.
1. Using your average milligrams of Vitamin C per gram or milliliter of product from Part C as the "correct" value, determine the percent error in the manufacturer's or text's claim (show your calculations).
1. What can you conclude about the labeling of this product or reference value? How do you account for any discrepancies? Does the manufacturer or reference overstate or understate the amount of Vitamin C in the product? If so, why might they do this? Explain below. Use the back of this sheet if necessary.
10: Vitamin C Analysis (Experiment) is shared under a CC BY-NC license and was authored, remixed, and/or curated by Santa Monica College.
Frequency Distribution of Students' Heights
STATS-WEDCXV
A frequency distribution of "heights of female college students" was recorded to the nearest inch. The midpoint of the lowest class is $62.5"$ and the midpoint of the next class is $65.5"$.
What are the class limits for the third class?
A
$64"$ and up to $67"$
B
$67"$ and $69"$
C
$67"$ and up to $70"$
D
$66"$ and $68"$
SPSS Tutorials: Date-Time Variables in SPSS
This tutorial covers how SPSS treats Date-Time variables, and how to use the Date & Time Wizard to create and compute variables using dates and times.
Working with Dates and Times in SPSS
Datasets often include variables that denote dates or time. Thus, it is important to know how SPSS treats and works with such variables. In the following sections, we will discuss:
• How date-time variables work in SPSS.
• Standard formats for dates and time.
• Defining date-time variables in the Variable View window and through the Date and Time Wizard.
• Setting the century range for two-digit years.
• Converting a string variable containing a date into an actual date variable.
• Computing the elapsed time between two date-time variables using the Date and Time Wizard.
Date-Time Variables in SPSS
In SPSS, date-time variables are treated as a special type of numeric variable. A date variable is stored as the number of seconds since October 14, 1582, and a duration variable is simply stored as a number of seconds. This means that "under the hood", date-time variables are actually numbers! This might not seem important, but it is what makes "date arithmetic" possible, such as computing the elapsed time between two dates, or adding and subtracting units of time from a date.
Fortunately, you as the user do not normally need to interact with the underlying integers, and you can type in data values for date and time variables using normal date-time conventions. However, dates and times can be written using a number of different conventions, so we need a way to tell SPSS how to read and parse our date strings. That's where the concept of date formats comes in.
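The "seconds since an epoch" idea can be imitated in Python (a sketch, not SPSS syntax; the exact value SPSS stores may differ slightly due to calendar conventions, but the arithmetic behaves the same way):

```python
from datetime import datetime

SPSS_EPOCH = datetime(1582, 10, 14)  # SPSS's reference date

def spss_seconds(dt):
    """Approximate the number SPSS stores for a date: seconds since the epoch."""
    return (dt - SPSS_EPOCH).total_seconds()

# Because dates are just numbers, date arithmetic is plain subtraction:
a = spss_seconds(datetime(2013, 1, 31))
b = spss_seconds(datetime(2013, 2, 1))
print(b - a)  # 86400.0 seconds, i.e. exactly one day
```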
Standard Formats for Dates and Times
When reading data containing dates or using certain date-time functions, we need to tell SPSS which date format to use, so that it knows how to correctly parse the components of the input string. A format is a named, pre-defined pattern that tells SPSS how to interpret and/or display different types of variables. There are different formats for different variable types, and each format in SPSS has a unique name.
Date-time formats are used in several situations:
• Initializing a new date or duration variable.
• Converting a variable from string or numeric to date.
• Changing the display format for an existing date variable (without changing the underlying data).
Your choice of format will depend on whether the input is a date or a duration, as well as on the time units included in the data value, the order of the units (e.g. month-day-year versus year-month-day), and the presence or absence of delimiters [1].
Date Formats
The actual date formats that you will use in your SPSS syntax are as follows.
Date-Time Unit Format name (general form) Format name (actual) Example
dd-mmm-yy DATEw DATE9 31-JAN-13
dd-mmm-yyyy DATEw DATE11 31-JAN-2013
mm/dd/yy ADATEw ADATE8 01/31/13
mm/dd/yyyy ADATEw ADATE10 01/31/2013
dd.mm.yy EDATEw EDATE8 31.01.13
dd.mm.yyyy EDATEw EDATE10 31.01.2013
yyddd JDATEw JDATE5 13031
yyyyddd JDATEw JDATE7 2013031
yy/mm/dd SDATEw SDATE8 13/01/31
yyyy/mm/dd SDATEw SDATE10 2013/01/31
q Q yy QYRw QYR6 1 Q 13
q Q yyyy QYRw QYR8 1 Q 2013
mmm yy MOYRw MOYR6 JAN 13
mmm yyyy MOYRw MOYR8 JAN 2013
ww WK yy WKYRw WKYR8 5 WK 13
ww WK yyyy WKYRw WKYR10 5 WK 2013
dd-mmm-yyyy hh:mm DATETIMEw DATETIME17 31-JAN-2013 01:02
dd-mmm-yyyy hh:mm:ss DATETIMEw DATETIME20 31-JAN-2013 01:02:33
dd-mmm-yyyy hh:mm:ss.s DATETIMEw.d DATETIME23.2 31-JAN-2013 01:02:33.72
yyyy-mm-dd hh:mm YMDHMSw YMDHMS16 2013-01-31 1:02
yyyy-mm-dd hh:mm:ss YMDHMSw YMDHMS19 2013-01-31 1:02:33
yyyy-mm-dd hh:mm:ss.s YMDHMSw YMDHMS19.2 2013-01-31 1:02:33.72
(abbr. name of the day) WKDAYw WKDAY3 THU
(full name of the day) WKDAYw WKDAY9 THURSDAY
(abbr. name of month) MONTHw MONTH3 JAN
(full name of the month) MONTHw MONTH9 JANUARY
In the "Date-Time Unit" column, the date components are represented using the following codes:
• "d" = day
• “dd” = day of month as two-digit number (01, 02, ..., 31)
• "ddd" = day of year as three-digit number (1, 2, ..., 365)
• “m” = month
• "mm" = month as two-digit number (01, 02, ..., 12)
• "mmm" = month as three-character abbreviation (JAN, FEB, ..., DEC)
• “y” = year
• "yy" = two-digit year (century is omitted)
• "yyyy" = four-digit year
• “q” = quarter of year as one-digit number (1, 2, 3, 4)
• "ww" = week of year as two-digit number (1, 2, ..., 53)
• “h” = hour (01, 02, ... 23)
• “m” = minute (01, 02, ..., 59)
• “ss” = seconds (01, 02, ..., 59)
In the "general form" column, the name of the format appears first, followed by the letter w (or w.d). The letter w denotes the number of "columns" (typically the number of characters in the input string), and the letter d represents the number of decimal places, if present. You will replace these with the appropriate number to use for the width of the date.
You'll see an example of how date-time formats are used in the example of converting a string variable to a date variable.
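For readers who want to prototype the same parsing outside SPSS, a few of these formats correspond roughly to Python strptime patterns. The mapping below is an illustrative assumption on my part, not an official SPSS-to-Python table:

```python
from datetime import datetime

# Illustrative (assumed) strptime analogs for four SPSS date formats.
SPSS_TO_STRPTIME = {
    "DATE11":  "%d-%b-%Y",   # 31-JAN-2013
    "ADATE10": "%m/%d/%Y",   # 01/31/2013
    "EDATE10": "%d.%m.%Y",   # 31.01.2013
    "SDATE10": "%Y/%m/%d",   # 2013/01/31
}

d = datetime.strptime("31-JAN-2013", SPSS_TO_STRPTIME["DATE11"])
print(d.date())  # 2013-01-31
```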
Durations
The actual duration formats that you will use in your SPSS syntax are as follows.
SPSS duration formats as applied to an example duration of 29 hours, 14 minutes, 36.58 seconds.
Duration Unit Format code (general form) Format code (actual) Example
mm:ss MTIMEw MTIME5 1754:36
mm:ss.s MTIMEw.d MTIME8.2 1754:36.58
hh:mm TIMEw TIME5 29:14
hh:mm:ss TIMEw TIME8 29:14:36
hh:mm:ss.s TIMEw.d TIME11.2 29:14:36.58
ddd hh:mm DTIMEw DTIME9 1 05:14
ddd hh:mm:ss DTIMEw DTIME12 1 05:14:36
ddd hh:mm:ss.s DTIMEw.d DTIME15.2 1 05:14:36.58
In the "Duration Unit" column, the time components are represented using the following codes:
• "ddd" = days as a number greater than or equal to 0
• "hh" = hours
• "mm" = minutes
• "ss" or "ss.s" = seconds
Just as with date formats, the "general form" of the format name contains w (or w.d). The letter w denotes the number of "columns" (typically the number of characters in the input string), and the letter d represents the number of decimal places, if present. You will replace these with the appropriate number to use for the width of the date.
Notice how in the column of examples, SPSS took the same underlying data and automatically converted the time units based on the formats we chose. When we used the DTIME format, it knew that 29 hours should "roll over" to 1 day, 5 hours. When we used the MTIME format, it knew that 29 hours, 14 minutes is equal to (29*60) + 14 = 1754 minutes. This is one of the benefits of using date-time variables to represent dates and durations: they give us the option to change how the data is displayed without needing to do the conversion arithmetic ourselves.
[1] Note: As of SPSS version 24, the above date formats will correctly recognize date strings without delimiters as long as the lengths of the other elements are correct (i.e., leading zeroes where necessary in the day, month, hour, minute, and second, so that those components are each two characters long). In previous versions, these date formats would not recognize dates that did not contain the appropriate delimiters.
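The "roll over" conversions described above are plain integer arithmetic on the stored seconds. A Python sketch (the helper names are hypothetical; the comments refer to the SPSS formats listed earlier, and fractional seconds are omitted for brevity):

```python
def as_time(seconds):
    # TIME-style display: hh:mm:ss (hours may exceed 24)
    h, rem = divmod(int(seconds), 3600)
    m, s = divmod(rem, 60)
    return f"{h}:{m:02d}:{s:02d}"

def as_mtime(seconds):
    # MTIME-style display: total minutes : seconds
    m, s = divmod(int(seconds), 60)
    return f"{m}:{s:02d}"

def as_dtime(seconds):
    # DTIME-style display: days hh:mm:ss
    d, rem = divmod(int(seconds), 86400)
    h, rem2 = divmod(rem, 3600)
    m, s = divmod(rem2, 60)
    return f"{d} {h:02d}:{m:02d}:{s:02d}"

dur = 29*3600 + 14*60 + 36  # 29 hours, 14 minutes, 36 seconds
print(as_time(dur))   # 29:14:36
print(as_mtime(dur))  # 1754:36
print(as_dtime(dur))  # 1 05:14:36
```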
Defining Date-Time Variables in the Variable View Tab
It is important to specify which variables in your data are dates/ times so that SPSS can recognize and use these variables appropriately. However, the procedure for defining a variable as date/time depends on its currently defined type (e.g., string, numeric, date/time). The following sections outline how to define a variable as date/time based on the variable’s current type.
Changing a variable type from string or numeric to date/time
If your dataset includes a variable whose values represent dates or time, but the variable is currently defined as string or numeric, you should specify that the variable is actually a date/time. You can specify the variable type as date/time by clicking the Variable View tab, locating the variable, and clicking on the cell beneath the “Type” column. A blue “…” button will appear. Clicking the blue “…” button opens the Variable Type window. Select “Date” from the list of variable types. Then, on the right, select the format in which the date/time for that variable should appear (by selecting the date/time format in which the values already appear). Click OK. Now SPSS will recognize the variable as date/time.
Note: These steps work only if the variable values are already in a standard date/time format but are currently defined as string/numeric, and only if you define the variable as date/time by selecting a date/time format that mirrors the existing format. For example, if the values appear as "Aug 1991", you should select a date/time format that mirrors that format. If you try to select a format that includes additional or different information, the change in format may fail and blank out the data.
Example: This scenario is likely if you import data from another file source, such as Excel, and SPSS does not immediately define the variable type as date/time, even though the values are in a standard date/time format.
Thus, the following criteria must apply in order to use the steps outlined above:
1. Your variable's values already appear in a standard date/time format.
2. Your variable is currently defined as "string" or "numeric" rather than date/time.
3. You wish to re-define your variable type from string or numeric to date/time.
Changing the variable type from string or numeric to a date/time format that is different from the date/time format in which the values currently appear
If the variable is already in a standard date/time format but is currently defined as string or numeric, and you wish to both A) define the variable as date/time, and B) choose a different date/time format than the one that matches the current format, you must proceed in two steps.
1. You must first define the variable as date/time and select the format in which your dates/times currently appear.
2. After you have specified the current format of date/time values for that variable, you can then change the format of the date following the same steps you used to define the variable type and date format during the first step.
Note: If the dates for a selected variable appear as mm/dd/yyyy and are currently defined as “String” in the “Type” of variable in Variable View, you cannot change the “Type” to “Date” and select the new format in which you want the date/time values to appear. You must first select the format in which the dates/times currently appear. Then, you can repeat this process to select the new format in which you want the dates to appear. If you do not first define a variable as a “Date” and select the current date/time format before selecting the format to which you want to change it, the values for that variable will be defined as missing.
Example: If a variable with date/time values is currently defined as string or numeric, but all the values follow the form mm/dd/yyyy (e.g., 01/31/2013), then you must select this format (mm/dd/yyyy) when you change the variable’s type to date/time. Do not select a format that does not match the current format of the values.
Thus, the following criteria must apply in order to use the steps outlined above:
1. Your variable's values already appear in a standard date/time format.
2. Your variable is currently defined as “string” or “numeric” rather than date/time.
3. You wish to re-define your variable type from string or numeric to a date/time format that is different from the date/time format in which the values currently appear.
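The two-step logic above has a direct analogue in most programming languages: you must first parse the string using the format it currently appears in, and only then re-render it in the new format. A minimal Python sketch (the sample value and formats are illustrative, not from SPSS):

```python
from datetime import datetime

current = "01/31/2013"  # values currently stored as strings in mm/dd/yyyy

# Step 1: parse using the format the values CURRENTLY appear in
d = datetime.strptime(current, "%m/%d/%Y")

# Step 2: re-render in the desired target format (here yyyy/mm/dd)
print(d.strftime("%Y/%m/%d"))  # 2013/01/31

# Skipping step 1 -- i.e., parsing with a non-matching format -- fails
# outright, the analogue of SPSS blanking out the values:
try:
    datetime.strptime(current, "%d-%b-%Y")
except ValueError:
    print("format mismatch")
```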
Changing variables defined as dates/time from one date format to another date format
If a variable type is already defined as date/time, then changing the format of the values to a different date/time format is simple. In Variable View, under the column “Type,” select the cell that corresponds to the variable you want to change. A blue “…” button will appear, which opens the “Variable Type” dialog box. “Date” should already be selected from the list of variable types on the left. On the right, select the new date/time format in which you would like the variable values to appear. Click OK. Now click the Data View tab to view your data; your dates should now appear in the format you selected.
Note: If you select a new format that includes space for information that does not actually appear in your dataset, it will appear as 0s in the data. For example, if your data only includes information about the month, day, and year, and you select a format that also includes space for the hour, minute, and second, values will appear like this one: 31-JAN-2013 00:00:00.
Example: Perhaps your date is defined as date/time and appears as “01/31/2013,” but you would like it to appear as “2013/01/31,” instead.
Setting the Century Range for Two-Digit Years
When writing dates, it's common to see individuals abbreviate the year to two digits, especially in contexts where the century is "obvious" to the reader. This is fine when making notes to yourself, but when you're trying to compile data for analysis, this can be hugely problematic, especially when working with data that covers a large time range, or is very far in the past.
In general, we recommend always using four-digit years when entering data for dates. But sometimes you may not be in control of how the data was entered -- you may receive or request a dataset where the dates only used two-digit years. For these situations, it's important to know how to appropriately define the century range in SPSS.
In SPSS, the century range refers to the 100-year range that SPSS will assume when parsing date variables with two-digit years. For example: when you read the date 1/1/80, do you assume that I mean 1/1/1980 or 1/1/2080? If you didn't have any other context clues, you'd probably base your guess on the current year (2020). You might go with the century that makes the two-digit year closer to the current year, which would mean 1/1/1980. Or, you might assume that the century should match the current century, which would mean 1/1/2080.
The default century range in SPSS is based on the current year: it will start the range at 69 years prior to the current year and end the range at 30 years after the current year (source). So if you are using SPSS in the year 2020, it will assume that the century range is 1951 to 2050; but if you open SPSS a year later, SPSS will assume that the century range is 1952 to 2051.
Why does the century range matter? If you are going to compute elapsed time, or want to use your date variables as a predictor in a model, you can imagine how problematic it would be if one of the dates was off by 100 years! For this reason, it's critical that you specify the appropriate century range when working with dates containing two-digit years.
To change the century range for two-digit years, follow these steps:
Using the Dialog Windows
1. Click Edit > Options.
2. The Options window will appear. Click the Data tab at the top.
3. On the right-hand side you will see the Set Century Range for 2-Digit Years area.
By default, Automatic will be selected and two-digit years will be understood to fall in the range of the current year minus 69 to the current year plus 30. You can change the century range by clicking Custom, which will allow you to input a new beginning year (and the end year will be filled in for you). When you are finished, click Apply, and then click OK.
Using Syntax
Alternatively, you can set the century range using the SET EPOCH command:
SET EPOCH=yyyy.
The yyyy to the right of the equals sign is the desired beginning year for the century range. For example, SET EPOCH=1900 would set the century range to 1900 to 1999, while SET EPOCH=1950 would set the century range to 1950 to 2049.
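The epoch rule can be sketched in a few lines of Python. This mimics the behavior described above (both the Automatic default and the SET EPOCH mapping); it is not SPSS code:

```python
import datetime

def default_epoch_start(today=None):
    # SPSS's Automatic setting: the range starts 69 years before the
    # current year (and ends 30 years after it).
    year = (today or datetime.date.today()).year
    return year - 69

def resolve_two_digit_year(yy, epoch_start):
    # Map a two-digit year into the 100-year window
    # [epoch_start, epoch_start + 99].
    year = epoch_start - epoch_start % 100 + yy
    if year < epoch_start:
        year += 100
    return year

print(resolve_two_digit_year(80, 1951))  # 1980, as in the 1/1/80 example
print(resolve_two_digit_year(49, 1950))  # 2049
```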
Date and Time Wizard
SPSS conveniently includes a Date and Time Wizard that can assist with transformations and calculations that involve date and time variables. To access the Date and Time Wizard, click Transform > Date and Time Wizard.
The Date and Time Wizard window will appear.
Although there are many options, it is useful to begin by first reading about how dates and times are represented in SPSS. We have selected this option (Learn how dates and times are represented) in the Date and Time Wizard window (depicted above). Now, click Next. You will see the following window.
Note that the Date and Time Wizard can assist with many tasks related to dates and time, including:
• Creating a date/time variable from a string containing a date or time
• Creating a date/time variable from variables that contain parts of dates or times
• Calculating with dates and time
• Extracting parts of dates or time
• Assigning periodicity to a dataset for time series data
We will not cover each of these options in this tutorial, but we will cover one of the most common uses for the Date and Time Wizard: calculations involving dates and times.
Example: Converting a string variable to a date variable
Problem Statement
If you have datetime variables in a text or CSV file, SPSS will often read those variables in as string or character variables, instead of treating them as actual dates. In order to have those variables correctly recognized, you'll need to convert them from string to date.
In the sample dataset, the variable enrolldate (date of college enrollment) contains dates in the form dd-mmm-yyyy, but was read into the dataset as a string variable. Let's convert that variable from a string to a numeric date.
Running the Procedure
Using the Date & Time Wizard
1. Click Transform > Date and Time Wizard.
2. Select Create a date/time variable from a string containing a date or time. Then click Next.
3. In the Variables box, select variable enrolldate. This will show a preview of the values of the variable in the Sample Values box, so that you can select the correct pattern. In the Patterns box, click dd-mmm-yyyy. Then click Next.
4. In the Result Variable box, enter a name for the new date variable; let's call it date_of_enrollment. Optionally, you can add a variable label, and if desired, you are able to change the date format used for the output variable.
5. Click Finish.
Using Syntax
COMPUTE date_of_enrollment=number(enrolldate, DATE11).
VARIABLE LABELS date_of_enrollment 'Date of college enrollment'.
VARIABLE LEVEL date_of_enrollment (SCALE).
FORMATS date_of_enrollment (DATE11).
VARIABLE WIDTH date_of_enrollment(11).
EXECUTE.
What's going on in this syntax?
• The first line (COMPUTE) actually computes the new date variable using the built-in function number(), which converts string variables to numeric variables. The argument DATE11 tells SPSS that the content of the string variable is in DATE11 format initially (dd-mmm-yyyy).
• The second line (VARIABLE LABELS) applies the variable label "Date of college enrollment" to the new variable.
• The third line (VARIABLE LEVEL) explicitly sets the measurement level of the new variable to Scale.
• The fourth line (FORMATS) applies a human-readable date format to the new variable. Here, we tell SPSS to continue using the DATE11 (dd-mmm-yyyy) format for the new variable.
• The fifth line (VARIABLE WIDTH) tells SPSS how wide the column should be. This particular date format always has 11 characters, so the column is set to have width 11.
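For comparison, the same string-to-date conversion in Python uses `strptime` with a pattern equivalent to SPSS's DATE11 format. This is an illustrative analogue, not part of the SPSS workflow:

```python
from datetime import datetime

def parse_enrolldate(value):
    # dd-mmm-yyyy, e.g. "31-JAN-2013"; %b matches abbreviated month
    # names case-insensitively in CPython
    return datetime.strptime(value, "%d-%b-%Y")

d = parse_enrolldate("31-JAN-2013")
print(d.year, d.month, d.day)  # 2013 1 31
```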
Example: Computing Elapsed Time between Two Date-Time Variables
Problem Statement
Sometimes you may need to calculate the length of time that has passed between two points in time. For example, you may wish to calculate the ages of people in your sample based on information you have about when they were born and what the current day/time/year is (or another date of your choosing). Any unit of time can be used. This means that you can calculate how many years, months, days, hours, minutes, or even seconds old each person is.
Before we can perform a calculation with dates and times, we first need to make sure that our dataset has at least two variables that represent time points. If you completed the above example, you will now have at least two date variables in the sample dataset: bday (the person's date of birth) and now date_of_enrollment (the date the person enrolled in college). We can compute the age that each person was when they enrolled in college using these two time points.
Running the Procedure
Using the Date & Time Wizard
1. Click Transform > Date and Time Wizard. The Date and Time Wizard window will appear.
2. Click Calculate with dates and times and then click Next.
3. Click Calculate the number of time units between two dates. Click Next.
4. We will now specify how the new variable should be computed from our existing date variables in this calculation: date_of_enrollment (the date at which a person enrolled in college) and bday (the date of birth).
A. Variables: Lists all of the available date and time variables in your dataset. It also includes a variable called “\$TIME” which represents the current date and time.
B. Date1: The right half of the dialog box is where we will specify which variables to use, and how to set up the calculation. In the Date1 field, select the variable date_of_enrollment, and in the minus Date2 field, select the variable bday. This specifies that SPSS should calculate date_of_enrollment minus bday, which will yield the number of years between when the person was born and when they enrolled in college (i.e., their age at college enrollment).
C. Unit: The unit of time to use for the variable you are creating. You can choose among Years, Months, Weeks, Days, Hours, Minutes, and Seconds. In this example, select Years from the Unit list.
D. Result Treatment: Specify how to treat the values of the variable that will be calculated. You can choose to truncate to integer, round to integer, or retain fractional part.
Truncate to integer means dropping the fractional part (e.g., 1.3 would become 1, and 1.6 would become 1).
Round to integer bumps the number to the nearest integer (e.g., 1.3 would round to 1, but 1.6 would round to 2).
Retain fractional part means that the fraction will remain (e.g., 1.6 remains 1.6).
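The three result treatments correspond to familiar numeric operations. A quick Python illustration of the difference (note that Python's built-in `round` uses banker's rounding for exact .5 ties; the examples here avoid that edge case):

```python
import math

value = 1.6

print(math.trunc(value))  # truncate to integer: 1 (drops the fraction)
print(round(value))       # round to integer: 2 (nearest integer)
print(value)              # retain fractional part: 1.6

print(math.trunc(1.3), round(1.3))  # both give 1
```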
In this example, fractions will be retained so that the values for the new variable reflect the individual's exact age (in years) when they enrolled in college.
When you are finished setting up the calculation, click Next.
5. The final window asks you to name the variable you are creating in the Result Variable field and to provide a label in the Variable Label field. Here, the new variable is called age_at_enrollment and is labeled "Age at time of college enrollment".
The Execution area allows you to choose how to create the new variable. You can have SPSS Create the variable now, which will immediately create the new variable in your dataset. Alternatively, select Paste the syntax into the syntax window, which will have SPSS write the syntax (command language) that will create the variable whenever you choose to run the syntax command in the future. This latter option will not create the new variable until you run the syntax.
6. When you are finished, click Finish.
Once your new variable has been created, it is always a good idea to check that the calculation was accurate. You can do this by spot-checking some of the rows in your data. You can manually calculate the time between date_of_enrollment and bday for some of the cases in the data and then compare the manual calculation to the value SPSS created in the new variable age_at_enrollment.
Using Syntax
COMPUTE age_at_enrollment=(date_of_enrollment - bday) / (365.25 * time.days(1)).
VARIABLE LABELS age_at_enrollment "Age at time of enrollment (years)".
VARIABLE LEVEL age_at_enrollment (SCALE).
FORMATS age_at_enrollment (F8.2).
VARIABLE WIDTH age_at_enrollment(8).
EXECUTE.
What's going on in this syntax?
• The first line (COMPUTE) performs the calculation of the elapsed time. Notice that the calculation isn't simply the difference of the two date variables: SPSS measures date differences in seconds, so dividing by (365.25*time.days(1)) — the number of seconds in an average year — converts the difference to years, with the 365.25 accounting for leap years.
• The second line (VARIABLE LABELS) applies the variable label "Age at time of enrollment (years)" to the new variable.
• The third line (VARIABLE LEVEL) explicitly sets the measurement level of the new variable to Scale.
• The fourth line (FORMATS) tells SPSS that the computed variable is a numeric variable that has two decimal places and is at most 8 characters wide.
• The fifth line (VARIABLE WIDTH) sets the width of the variable to 8 characters.
• The last line (EXECUTE) tells SPSS to carry out the computation and add the new variable to the active dataset. (Without this line, SPSS will create the variable in the computer's memory but not actually add it to the dataset.)
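The same calculation can be reproduced outside SPSS to spot-check results. A Python sketch mirroring the formula above (the dates used are illustrative):

```python
from datetime import date

def age_in_years(enrolled, born):
    # Difference in days divided by 365.25, matching the SPSS
    # denominator (365.25 * time.days(1)) that corrects for leap years
    return (enrolled - born).days / 365.25

print(round(age_in_years(date(2013, 1, 31), date(1993, 1, 31)), 2))  # 20.0
```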
# Pixel Studio, Inc., is a small company that creates computer-generated animations for films and television. Much...
###### Question:
Pixel Studio, Inc., is a small company that creates computer-generated animations for films and television. Much of the company's work consists of short commercials for television, but the company also does realistic computer animations for special effects in movies. The young founders of the company have become increasingly concerned with the economics of the business—particularly since many competitors have sprung up recently in the local area. To help understand the company's cost structure, an activity-based costing system has been designed. Three major activities are carried out in the company: animation concept, animation production, and contract administration. The animation concept activity is carried out at the contract proposal stage when the company bids on projects. This is an intensive activity that involves individuals from all parts of the company in creating story boards and prototype stills to be shown to the prospective client. Once a project is accepted by the client, the animation goes into production and contract administration begins. Almost all of the work involved in animation production is done by the technical staff, whereas the administrative staff is largely responsible for contract administration. The activity cost pools and their activity measures are listed below:

| Activity Cost Pool | Activity Measure | Activity Rate |
|---|---|---|
| Animation concept | Number of proposals | $5,700 per proposal |
| Animation production | Minutes of completed animation | $8,600 per minute |
| Contract administration | Number of contracts | $7,600 per contract |

These activity rates include all of the company's costs, except for its organization-sustaining costs and idle capacity costs. There are no direct labor or direct materials costs. Preliminary analysis using these activity rates has indicated that the local commercial segment of the market may be unprofitable. This segment is highly competitive.
Producers of local commercials may ask three or four companies like Pixel Studio to bid, which results in an unusually low ratio of accepted contracts to bids. Furthermore, the animation sequences tend to be much shorter for local commercials than for other work. Since animation work is billed at fairly standard rates according to the running time of the completed animation, the revenues from these short projects tend to be below average. Data concerning activity in the local commercial market appear below:

| Activity Measure | Local Commercials |
|---|---|
| Number of proposals | 18 |
| Minutes of completed animation | |
| Number of contracts | 10 |

The total sales from the 10 contracts for local commercials was $210,000.

Required:
1. Calculate the cost of serving the local commercial market.
2. Calculate the margin earned serving the local commercial market. (Remember, this company has no direct materials or direct labor costs.)
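Under activity-based costing, the cost of serving a segment is the sum of (activity volume × activity rate) over the pools, and with no direct materials or labor the margin is simply sales minus that cost. The minutes of completed animation for the local segment is not legible in the problem data above, so it is left as a parameter in this hedged Python sketch; the value 12 used below is purely illustrative:

```python
def local_market_cost(proposals, minutes, contracts):
    # Activity rates from the problem statement
    return proposals * 5700 + minutes * 8600 + contracts * 7600

def local_market_margin(sales, cost):
    # No direct materials or direct labor, so margin = sales - ABC cost
    return sales - cost

cost = local_market_cost(proposals=18, minutes=12, contracts=10)  # 12 assumed
print(cost)                               # 281800 with the assumed minutes
print(local_market_margin(210000, cost))  # -71800 with the assumed minutes
```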
#### Similar Solved Questions
##### How can mechanoreceptors dull pain responses?
##### Problem 2. (10 points) Find lower and upper bounds to the sum using Proposition 9.1 in Section 9.1 (Monotonic Functions) and integrals from calculus. Proposition 9.1: Let $f$ be a monotonically increasing function that is defined on the interval $[a-1, b+1]$. Then $\int_{a-1}^{b} f(x)\,dx \le \sum_{k=a}^{b} f(k) \le \int_{a}^{b+1} f(x)\,dx$.
##### How would you explain the phase diagram of sulphur?
##### An engine working between 300 K and 600 K has a work output of 800 J per cycle. The amount of heat energy supplied to the engine from the source per cycle will be: 900 J; 1600 J; …
##### A past study claimed that adults in America spent an average of 18 hours per week on leisure activities. A researcher wanted to test this claim. She took a sample of 10 adults and asked them about the time they spend per week on leisure activities. Their responses (in hours) are as follows: 10.2, 12.7, 22.4, 22.9, 23.7, 36.1, 17, 17.9, 20.2, 20.3. Assume that the times spent on leisure activities by all adults are normally distributed. Using the 5% significance level, can you conclude that the average amount of time spent per week on leisure activities by all adults differs from 18 hours?
##### What sort of business organization would you use if you went into business for yourself, and why? Does any single form of business organization appear to be superior? What sorts of questions would one want to ask before selecting a business organization?
##### Report the total variation, unexplained variation, and explained variation as shown on the output. (Round your answers to the required number of decimal places.) Calculate the F(model) statistic by using the explained variation, the unexplained variation, and other relevant quantities. (Round your answer to the required number of decimal places.) Use the F(model) statistic and the appropriate rejection point to test the significance of the linear regression model under consideration.
##### Question 6. How many grams of NaCl are contained in 350 mL of a 0.287 M solution of sodium chloride? A. 16.8 g  B. 5.87 g  C. 11.74 g  D. 100.5 g  E. none of these
##### You need to design an industrial turntable that is 60.0 $\mathrm{cm}$ in diameter and has a kinetic energy of 0.250 $\mathrm{J}$ when turning at 45.0 $\mathrm{rpm}(\mathrm{rev} / \mathrm{min})$ . (a) What must be the moment of inertia of the turntable about the rotation axis? (b) If your workshop makes this turntable in the shape of a uniform solid disk, what must be its mass?
##### Find each quotient in rectangular form, using exact values. $\frac{3 \operatorname{cis}\left(\frac{61 \pi}{36}\right)}{9 \operatorname{cis}\left(\frac{13 \pi}{36}\right)}$
##### 2. Identify each of the following as aromatic, nonaromatic, or antiaromatic. Explain your choice in each case.
##### Given $T(v_1, v_2, v_3) = (v_2 - v_1,\ v_1 + v_2,\ 2v_1)$, $v = (2, 5, 0)$, $w = (-9, 5, 14)$. (a) Find the image of $v$. (b) Find the preimage of $w$. (If the vector has an infinite number of solutions, give your answer in terms of the parameter $t$.)
##### Solve the wave equation $\frac{\partial^2 u}{\partial t^2} = a^2 \frac{\partial^2 u}{\partial x^2}$, $0 < x < \pi$, $t > 0$, subject to the given conditions: $u(0, t) = 0$, $u(\pi, t) = 0$ for $t > 0$; $u(x, 0) = 0$, $\left.\frac{\partial u}{\partial t}\right|_{t=0} = \sin(x)$ for $0 < x < \pi$.
##### The graph of a function $f$ is given. Use the graph to estimate (a) all the local maximum and minimum values of the function and the value of $x$ at which each occurs, and (b) the intervals on which the function is increasing and on which it is decreasing.
##### Nitps {rtua icampus pupr edumeba5a5Lrgnlinge1Bcontent_id=_1107400_ Ie5teo=nullQuestion Complclion Status:questionLa posicion de una particula csta descrita por la funcion x = 3.012 + 0t-3.0 (m) Determinc relocidad promedio (cn ms) para intenalo tiempo t] 720,y12=4.0sQuestionLaposicion dc una particula Gu descrita por funcion =3.02 +5.01-3.0 (m) Dclenine vclocidad instantineam$) para ticmpo t = 2,0$.questionLa posicion dc una particula csta descrita por funcion40t2 - 2.0 t +20 (m) Detcrinc acele
nitps {rtua icampus pupr edumeba 5a5 Lrgnlinge 1Bcontent_id=_1107400_ Ie5teo=null Question Complclion Status: question La posicion de una particula csta descrita por la funcion x = 3.012 + 0t-3.0 (m) Determinc relocidad promedio (cn ms) para intenalo tiempo t] 720,y12=4.0s Question Laposicion dc una...
##### Why does the binding energy per nucleon increase during nuclear fission and nuclear fusion?
##### As a member of the Society for Black Business Administration Students, you have been invited to attend the 35th International Conference on Law Reform with a slot to present a position paper on the subject. Seize the opportunity to clarify the position of
The concepts of the appointment, duties of directors as well as to whom these duties are owed together with the effects and procedure for mergers and acquisition as well as the raising of capital and winding up by companies have been subject to a long-standing debate in the country and beyond. Where...
##### Q.5. What fraction of the offspring of two parents with sickle cell trait would you expect to have sickle cell anemia? Q.6. List the major steps you undertake for performing ELISA. Q.7. Describe an ELISA procedure you'll undertake to determine if a person is infected with the AIDS virus. Q.3. In th…
##### Can someone help me with this kinetic energy problem?
A 0.60-kg block initially at rest on a frictionless horizontal surface is acted upon by a force of 4.0 N for a distance of 6.5 m. How much kinetic energy does the block gain?...
##### Problem 1. The linear transformation $T: x \mapsto Cx$ for a vector $x \in \mathbb{R}^2$ is the composition of a rotation and a scaling if $C$ is given as $C = \begin{bmatrix} 0 & -0.5 \\ 0.5 & 0 \end{bmatrix}$. (1) Find the angle $\varphi$ of the rotation, where $-\pi < \varphi \le \pi$, and the scale factor $r$. (2) For the given $x$, without computing $Cx$, sketch $x$ and the image $Cx$.
##### One of $\sin \theta, \cos \theta,$ and $\tan \theta$ is given. Find the other two if $\theta$ lies in the specified interval. $\cos \theta=\frac{1}{3}, \quad \theta \text { in }\left[-\frac{\pi}{2}, 0\right]$
##### Thermo (25) 7. A steam power plant operates on the Rankine cycle with steam entering the high-pressure turbine at 1500 psi, 1000°C, with a mass flow rate of 5x10 lb/hr. The steam exits the high-pressure turbine at 90 psi, 350°F, where the steam is then sent back to the boiler and reheated to…
##### Problems 10–12 refer to the following circuit. A series LRC circuit has an AC source with voltage amplitude 4 V rms and frequency f = 100 Hz. The amplitude of the voltage across the resistor is $V_R$ and the amplitude of the voltage across the inductor is $V_L = 5$ V. The resistance of the resistor in the circuit is R = 200 Ω. Problem 10: What is the self-inductance of the inductor? Problem 11: What is C, the capacitance of the capacitor? Problem 12: Upload a PDF file with pictures of your work for the problems.
##### Recidivism (Example 16) Norway's recidivism rate is one of the lowest in the world at 20%. This means that about 20% of released prisoners end up back in prison (within three years). Suppose three randomly selected prisoners who have been released are studied. a. What is the probability that all three of them go back to prison? What assumptions must you make to calculate this? b. What is the probability that neither of them goes back to prison? c. What is the probability that at least two of them go back to prison?
##### QUESTION 23. A company manufactures an oil extraction tool using components from three suppliers. As part of its quality control, it conducts regular supplier audits to ensure that the defect rate from each supplier remains below the contractual limit. The defect rate for each supplier (which can be interpreted as the probability of each inventory part from that supplier being defective), along with the current number of parts in inventory from each supplier, is given in the table below: Supplier / Defect Rate / Inventory Level …
##### Draw a dot diagram for ethylene, $\mathrm{C}_{2}\mathrm{H}_{4}$.
##### If you would please explain each step, I would love to learn how to solve it! Thanks!
A 10.0 kg uniform ladder that is 2.50 m long is placed against a smooth vertical wall and reaches to a height of 2.10 m, as shown in the figure. The base of the ladder rests on a rough horizontal floor whose coefficient of static friction with the ladder is 0.800. An 80.0 kg bucket of concrete is su...
##### (a) Fill in the blanks in the following sentences with the most appropriate words by writing only the answers with serial numbers in the answer book: (i) Beta is an index of the ______ risk of an asset. (ii) Diversification does not reduce risk when the returns on two securities are ______ correlated. (iii) A ye…
##### An object that is $25 \mathrm{~cm}$ in front of a convex mirror has an image located $17 \mathrm{~cm}$ behind the mirror. How far behind the mirror is the image located when the object is $19 \mathrm{~cm}$ in front of the mirror?
##### 11. Goodwill. Basic principle is to value assets acquired using the fair value of assets given other than cash. Assets such as timber tracts and mineral deposits. Early in 2021, the Excalibur Company began developing a new software package to be marketed. The project was completed in December 2021 at a…
https://www.synapse-robotics.com/rv81w9a/c7bc8b-dividing-complex-numbers-examples
Let two complex numbers be $a+bi$ and $c+di$. To divide them, write the problem in fraction form first, that is, as $\frac{a+bi}{c+di}$. Complex numbers are often represented on a complex number plane (which looks very similar to a Cartesian plane). Every complex number $z = a+bi$ has a complex conjugate, $a-bi$: to find it, simply change the sign between the two terms. The product of a complex number with its conjugate always yields a real number, since $(a+bi)(a-bi) = a^2+b^2$, and this fact is the key to the whole procedure. (In polar form, division is even simpler: we just divide the magnitudes and subtract the angles.)

Step 1: Write the division problem in fractional form.

Step 2: Multiply both the numerator and the denominator by the conjugate of the denominator. This leaves the value of the fraction unchanged, because both are multiplied by the same number.

Step 3: Distribute (or FOIL) in both the numerator and the denominator to remove the parentheses. If $i^2$ appears, replace it with $-1$.

Step 4: Simplify, cancelling any common factor of the numerator and denominator, and write the answer in standard form $a+bi$.

This process is necessary because the imaginary unit $i$ is a square root (of $-1$), and the denominator of the final answer must not contain an imaginary part.

Example 1: Divide $3+2i$ by $2+4i$. The conjugate of the denominator is $2-4i$, so
$$\frac{3+2i}{2+4i} = \frac{(3+2i)(2-4i)}{(2+4i)(2-4i)} = \frac{6-12i+4i-8i^2}{4+16} = \frac{14-8i}{20} = \frac{7}{10}-\frac{2}{5}i.$$
Here the common factor $2$ of the numerator and denominator was cancelled at the end.

Example 2: Divide $2+6i$ by $4+i$. Since the denominator is $4+i$, its conjugate is $4-i$:
$$\frac{2+6i}{4+i} = \frac{(2+6i)(4-i)}{(4+i)(4-i)} = \frac{8-2i+24i-6i^2}{16+1} = \frac{14+22i}{17}.$$

The same recipe works for any denominator: the conjugate of $1+2i$ is $1-2i$, the conjugate of $-3-i$ is $-3+i$, and the conjugate of $-5+5i$ is $-5-5i$.

Practice: divide the following. 1) $\frac{5}{-5i}$ 2) $\frac{1}{-2i}$ 3) $\frac{-2}{i}$ 4) $\frac{7}{4i}$ 5) $\frac{4+i}{8i}$ 6) $\frac{-5-i}{-10i}$ 7) $\frac{9+i}{-7i}$ 8) $\frac{6-6i}{-4i}$ 9) $\frac{2i}{3-9i}$ 10) $\frac{i}{2-3i}$ 11) $\frac{5i}{6+8i}$ 12) $\frac{10}{10+5i}$ 13) $\frac{-1+5i}{-8-7i}$ 14) $\frac{-2-9i}{-2+7i}$ 15) $\frac{4+i}{2-5i}$ 16) $\frac{5-6i}{-5+10i}$ 17) $\frac{-3-9i}{5-8i}$ 18) $\frac{4+i}{8+9i}$ 19) $\frac{-3-2i}{-10-3i}$ 20) $\frac{3+9i}{-6-6i}$
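The conjugate method is easy to check numerically. The sketch below (Python; the function name is my own) divides $(a+bi)/(c+di)$ without using Python's built-in complex division, and the built-in operator can then verify the result:

```python
def divide_complex(a, b, c, d):
    """Divide (a + bi) by (c + di) using the conjugate method.

    Multiplying numerator and denominator by the conjugate (c - di)
    turns the denominator into the real number c^2 + d^2.
    """
    denom = c * c + d * d            # (c + di)(c - di) = c^2 + d^2
    real = (a * c + b * d) / denom   # real part of (a + bi)(c - di)
    imag = (b * c - a * d) / denom   # imaginary part of (a + bi)(c - di)
    return real, imag

# Example: (2 + 6i) / (4 + i) gives (14/17, 22/17)
print(divide_complex(2, 6, 4, 1))
```

For $(2+6i)/(4+i)$ it returns the pair $(14/17,\,22/17)$, matching the worked example above.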
http://www.magentocommerce.com/boards/viewthread/44602/
## Magento Forum
Brent W Peterson
Total Posts: 3194 | Joined: 2009-02-26 | Minneapolis MN

Ok, it is not very "PRETTY" code, but it works. My next step is to put it into the database and also include Flash.

Here is the XML file that holds the images:

http://www.midwestsupplies.com _blank /skin/frontend/default/custom/images/media/ads/MidSup.jpg 207 100 Midwest Supplies
http://www.midwestsupplies.com _blank /skin/frontend/default/custom/images/media/ads/MidwestAd2.gif 207 100 Midwest Supplies

I didn't include all the ads; you can add as many as you want. I created a simple block in Catalog.xml.

Ok, here is the really ugly part. I was a SQL programmer, and you will have to excuse my rudimentary PHP:

```php
<?php
try {
    $adCount = count($bannerAd);
    $randomAdNumber = mt_rand(1, $adCount);
    // echo $randomAdNumber;
    $i = 1;
    $j = 1;
    // Loop through array because I don't know how to pick one element out of the top array! 04/28/09
    foreach ($bannerAd->children() as $littlens) {
        if ($i == $randomAdNumber) {
            foreach ($littlens as $child) {
                // created this switch because I don't know how to pick one element out of the child array! 04/28/09
                switch ($j) {
                    case 1:
                        echo '';
                        break;
                    case 3:
                        echo '';
                        break;
                    default:
                        break;
                }
                $j++;
            }
        }
        $i++;
    }
} catch (exception $e) {
}
?>
```

I have learned a lot since and will be putting a newer version on our newest site. I think that is it.

Posted: June 7 2009 | top | # 6
http://physics.stackexchange.com/questions/54461/where-the-fine-structure-constant-alpha-is-speed-parameter-of-electron-beta/54463
# If the fine structure constant $\alpha$ is the speed parameter $\beta_e$ of the electron, how can it be a constant?
The fine structure constant $\alpha$ is actually the speed parameter $\beta_e$ of the electron moving around the proton in a hydrogen atom:
$v_n=\frac{\alpha c}{n}=c\,\frac{\alpha}{n}=c\,\frac{\beta_e}{n}$
How can it be a constant!?
invariance is not a synonym for being a constant
The fine structure constant is a dimensionless number $\alpha^{-1} \approx 137$ (see en.wikipedia.org/wiki/Fine-structure_constant). Could you clarify your confusion in light of this? Perhaps you're referring to some calculation you've seen in the Bohr model of Hydrogen? – joshphysics Feb 20 '13 at 0:53
The fine structure constant effectively measures the strength of the electromagnetic interaction between unit charges - governed by the constant $e^2/4\pi\epsilon_0$ - in quantum mechanical, relativistic terms. It is the ratio of the electromagnetic interaction's (dimensional) coupling constant $e^2/4\pi\epsilon_0$ to the equivalent constant with the same dimensions in relativistic quantum mechanics, which turns out to be $\hbar c$. Thus, $$\alpha=\frac{e^2/4\pi\epsilon_0}{\hbar c}.$$
The speed you are talking about, $\alpha c=\frac1\hbar\frac{e^2}{4\pi\epsilon_0}$, is the only speed you can obtain from quantum electrostatics. Since the hydrogen atom is the simplest quantum electrostatic problem, it is no surprise that its characteristic speeds are simple multiples of $\alpha c$.
You should also note that velocities inside the hydrogen atom are ill-defined (there's a broad-ish distribution over different speeds, over a range about the size of $\alpha c$) and that the picture you drew (the "solar system" atom) is only valid as a very crude picture of what goes on inside it. A more accurate picture of a hydrogen atom in its ground state is the top left:
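As a numerical check of the formula above, the following sketch (Python, using CODATA values for the constants) reproduces both $\alpha^{-1}\approx 137$ and the characteristic speed $\alpha c$ that the question is built around:

```python
# Compute the fine structure constant alpha = (e^2 / 4 pi eps0) / (hbar c)
import math

e    = 1.602176634e-19    # elementary charge, C (exact in the 2019 SI)
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J s
c    = 299792458.0        # speed of light, m/s (exact)

coulomb_coupling = e**2 / (4 * math.pi * eps0)  # e^2 / 4 pi eps0, in J m
alpha = coulomb_coupling / (hbar * c)           # dimensionless

print(1 / alpha)   # ≈ 137.036
print(alpha * c)   # ≈ 2.19e6 m/s, the characteristic speed scale in hydrogen
```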
http://en.wikipedia.org/wiki/Simula
# Simula
Paradigm(s): Object-oriented · First appeared: 1967 · Designed by: Ole-Johan Dahl, Kristen Nygaard · Influenced by: ALGOL 60 · Influenced: Object-oriented programming languages
Simula is a name for two simulation programming languages, Simula I and Simula 67, developed in the 1960s at the Norwegian Computing Center in Oslo, by Ole-Johan Dahl and Kristen Nygaard. Syntactically, it is a fairly faithful superset of ALGOL 60. [1]:1.3.1
Simula 67 introduced objects,[1]:2, 5.3 classes,[1]:1.3.3, 2 inheritance and subclasses,[1]:2.2.1 virtual methods,[1]:2.2.3 coroutines,[1]:9.2 discrete event simulation,[1]:14.2 and features garbage collection.[1]:9.1 Subtyping was introduced in Simula derivatives.[citation needed]
Simula is considered the first object-oriented programming language. As its name implies, Simula was designed for doing simulations, and the needs of that domain provided the framework for many of the features of object-oriented languages today.
Simula has been used in a wide range of applications such as simulating VLSI designs, process modeling, protocols, algorithms, and other applications such as typesetting, computer graphics, and education. The influence of Simula is often understated, and Simula-type objects are reimplemented in C++, Java and C#. The creator of C++, Bjarne Stroustrup, has acknowledged that Simula 67 was the greatest influence on him to develop C++, to bring the kind of productivity enhancements offered by Simula to the raw computational speed offered by lower level languages like BCPL.
## History
The following account is based on Jan Rune Holmevik's historical essay.[2][3]
Kristen Nygaard started writing computer simulation programs in 1957. Nygaard saw a need for a better way to describe the heterogeneity and the operation of a system. To go further with his ideas on a formal computer language for describing a system, Nygaard realized that he needed someone with more computer programming skills than he had. Ole-Johan Dahl joined him on this work in January 1962. The decision to link the language to ALGOL 60 was made shortly afterwards. By May 1962 the main concepts for a simulation language were set. "SIMULA I" was born, a special-purpose programming language for simulating discrete event systems.
Kristen Nygaard was invited to UNIVAC late May 1962 in connection with the marketing of their new UNIVAC 1107 computer. At that visit Nygaard presented the ideas of Simula to Robert Bemer, the director of systems programming at Univac. Bemer was a sworn ALGOL fan and found the Simula project compelling. Bemer was also chairing a session at the second international conference on information processing hosted by IFIP. He invited Nygaard, who presented the paper "SIMULA -- An Extension of ALGOL to the Description of Discrete-Event Networks".
The Norwegian Computing Center received a UNIVAC 1107 in August 1963 at a considerable discount, and Dahl implemented SIMULA I on it under contract with UNIVAC. The implementation was based on the UNIVAC ALGOL 60 compiler. SIMULA I was fully operational on the UNIVAC 1107 by January 1965. In the following couple of years Dahl and Nygaard spent a lot of time teaching Simula. Simula spread to several countries around the world, and SIMULA I was later implemented on Burroughs B5500 computers and the Russian URAL-16 computer.
In 1966 C. A. R. Hoare introduced the concept of record class construct, which Dahl and Nygaard extended with the concept of prefixing and other features to meet their requirements for a generalized process concept. Dahl and Nygaard presented their paper on Class and Subclass Declarations at the IFIP Working Conference on simulation languages in Oslo, May 1967. This paper became the first formal definition of Simula 67. In June 1967 a conference was held to standardize the language and initiate a number of implementations. Dahl proposed to unify the type and the class concept. This led to serious discussions, and the proposal was rejected by the board. SIMULA 67 was formally standardized on the first meeting of the SIMULA Standards Group (SSG) in February 1968.
Simula was influential in the development of Smalltalk and later object-oriented programming languages. It also helped inspire the actor model of concurrent computation although Simula only supports co-routines and not true concurrency.
In the late sixties and the early seventies there were four main implementations of Simula:
These implementations were ported to a wide range of platforms. The TOPS-10 implemented the concept of public, protected, and private member variables and methods, that later was integrated into Simula 87. Simula 87 is the latest standard and is ported to a wide range of platforms. There are mainly three implementations:
• Simula AS
• Lund Simula
• GNU Cim[4]
In November 2001 Dahl and Nygaard were awarded the IEEE John von Neumann Medal by the Institute of Electrical and Electronics Engineers "For the introduction of the concepts underlying object-oriented programming through the design and implementation of SIMULA 67". In February 2002 they received the 2001 A. M. Turing Award from the Association for Computing Machinery (ACM), with the citation: "For ideas fundamental to the emergence of object oriented programming, through their design of the programming languages Simula I and Simula 67." Unfortunately, neither Dahl nor Nygaard could make it to the ACM Turing Award Lecture,[5] scheduled to be delivered at the OOPSLA 2002 conference in Seattle, as they both died within two months of each other in June and August, respectively.[6]
Simula Research Laboratory is a research institute named after the Simula language, and Nygaard held a part-time position there from the opening in 2001.
The new Computer Science building at the University of Oslo is named Ole Johan Dahl's House, after one of the two inventors of Simula. The main auditorium in Ole Johan Dahl's House is named Simula.
Simula is still used for various types of university courses; for instance, Jarek Sklenar teaches Simula to students at the University of Malta.[7]
## Sample code
### Minimal program
The empty computer file is the minimal program in Simula, measured by the size of the source code. It consists of one thing only: a dummy statement.
However, the minimal program is more conveniently represented as an empty block:
Begin
End;
It begins executing and immediately terminates. The language does not have any return value from the program itself.
### Classic Hello world
An example of a Hello world program in Simula:
Begin
OutText ("Hello World!");
Outimage;
End;
Simula is case-insensitive.
### Classes, subclasses and virtual methods
A more realistic example with use of classes[1]:1.3.3, 2, subclasses[1]:2.2.1 and virtual methods[1]:2.2.3:
Begin
Class Glyph;
Virtual: Procedure print Is Procedure print;
Begin
End;
Glyph Class Char (c);
Character c;
Begin
Procedure print;
OutChar(c);
End;
Glyph Class Line (elements);
Ref (Glyph) Array elements;
Begin
Procedure print;
Begin
Integer i;
For i:= 1 Step 1 Until UpperBound (elements, 1) Do
elements (i).print;
OutImage;
End;
End;
Ref (Glyph) rg;
Ref (Glyph) Array rgs (1 : 4);
! Main program;
rgs (1):- New Char ('A');
rgs (2):- New Char ('b');
rgs (3):- New Char ('b');
rgs (4):- New Char ('a');
rg:- New Line (rgs);
rg.print;
End;
The above example has one superclass (Glyph) with two subclasses (Char and Line). There is one virtual method with two implementations. The execution starts by executing the main program. Simula does not have the concept of abstract classes, since classes with pure virtual methods can be instantiated. This means that in the above example all classes can be instantiated. Calling a pure virtual method will, however, produce a run-time error.
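As a point of comparison outside the article, the same hierarchy can be sketched in Python (an analogue only, not Simula; the method is named print_ here to avoid shadowing Python's built-in print):

```python
# Rough Python analogue of the Glyph hierarchy above: one "virtual"
# method with two overriding implementations, dispatched at run time.

class Glyph:
    def print_(self):                  # plays the role of the virtual Procedure
        raise NotImplementedError      # calling it unoverridden fails at run
                                       # time, much like Simula's pure virtual call

class Char(Glyph):
    def __init__(self, c):
        self.c = c

    def print_(self):
        return self.c                  # OutChar(c) in the Simula version

class Line(Glyph):
    def __init__(self, elements):
        self.elements = elements

    def print_(self):
        # elements(i).print in the Simula version, plus the final OutImage
        return "".join(e.print_() for e in self.elements)

rg = Line([Char(c) for c in "Abba"])
output = rg.print_()                   # dynamic dispatch picks Line.print_
print(output)
```

As in the Simula example, instantiating the base class is allowed; only calling its unimplemented method fails.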
### Call by name
Simula supports call by name[1]:8.2.3 so the Jensen's Device can easily be implemented. However, the default transmission mode for simple parameters is call by value, contrary to ALGOL which used call by name. The source code for the Jensen's Device must therefore specify call by name for the parameters when compiled by a Simula compiler.
Another much simpler example is the summation function $\sum$ which can be implemented as follows:
Real Procedure Sigma (k, m, n, u);
Name k, u;
Integer k, m, n; Real u;
Begin
Real s;
k:= m;
While k <= n Do Begin s:= s + u; k:= k + 1; End;
Sigma:= s;
End;
The above code uses call by name for the controlling variable (k) and the expression (u). This allows the controlling variable to be used in the expression.
Note that the Simula standard allows for certain restrictions on the controlling variable in a for loop. The above code therefore uses a while loop for maximum portability.
The following:
$Z = \sum_{i=1}^{100}{1 \over (i + a)^2}$
can then be implemented as follows:
Z:= Sigma (i, 1, 100, 1 / (i + a) ** 2);
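Call by name has no direct Python equivalent, but its effect can be imitated with thunks (zero-argument closures). The sketch below is this editor's illustration, not Simula: the controlling variable lives in a cell, and the expression parameter is re-evaluated on every access, which is the essence of Jensen's Device.

```python
# Emulating Simula's call by name: k is passed as a getter/setter pair,
# and u is a thunk re-evaluated against the *current* value of k.

def sigma(k_get, k_set, m, n, u):
    s = 0.0
    k_set(m)
    while k_get() <= n:
        s += u()                # re-evaluated each iteration, like a name parameter
        k_set(k_get() + 1)
    return s

state = {"i": 0}                # the controlling variable i lives in a cell
a = 0.0
Z = sigma(lambda: state["i"],
          lambda v: state.update(i=v),
          1, 100,
          lambda: 1.0 / (state["i"] + a) ** 2)
print(Z)
```

With a = 0 this computes the partial sum of 1/i² for i = 1..100, exactly as the Simula call does.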
### Simulation
Simula includes a simulation[1]:14.2 package for doing discrete event simulations. This simulation package is based on Simula's object oriented features and its coroutine[1]:9.2 concept.
Sam, Sally, and Andy are shopping for clothes. They have to share one fitting room. Each one of them is browsing the store for about 12 minutes and then uses the fitting room exclusively for about three minutes, each following a normal distribution. A simulation of their fitting room experience is as follows:
Simulation Begin
Class FittingRoom; Begin
Boolean inUse;
Procedure request; Begin
If inUse Then Begin
Wait (door);
door.First.Out;
End;
inUse:= True;
End;
Procedure leave; Begin
inUse:= False;
Activate door.First;
End;
End;
Procedure report (message); Text message; Begin
OutFix (Time, 2, 0); OutText (": " & message); OutImage;
End;
Process Class Person (pname); Text pname; Begin
While True Do Begin
Hold (Normal (12, 4, u));
report (pname & " is requesting the fitting room");
fittingroom1.request;
report (pname & " has entered the fitting room");
Hold (Normal (3, 1, u));
fittingroom1.leave;
report (pname & " has left the fitting room");
End;
End;
Integer u;
Ref (FittingRoom) fittingRoom1;
fittingRoom1:- New FittingRoom;
Activate New Person ("Sam");
Activate New Person ("Sally");
Activate New Person ("Andy");
Hold (100);
End;
The main block is prefixed with Simulation for enabling simulation. The simulation package can be used on any block and simulations can even be nested when simulating someone doing simulations.
The fitting room object uses a queue (door) for getting access to the fitting room. When someone requests the fitting room and it's in use they must wait in this queue (Wait (door)). When someone leaves the fitting room the first one (if any) is released from the queue (Activate door.first) and accordingly removed from the door queue (door.First.Out).
Person is a subclass of Process and its activity is described using hold (time for browsing the store and time spent in the fitting room) and calls methods in the fitting room object for requesting and leaving the fitting room.
The main program creates all the objects and activates all the person objects to put them into the event queue. The main program holds for 100 minutes of simulated time before the program terminates.
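The machinery behind Hold, Wait and Activate can be imitated outside Simula. Below is a minimal Python sketch of the same fitting-room model driven by an explicit event queue; the scheduler is this sketch's own construction, not Simula's runtime:

```python
import heapq
import random

# Minimal event-queue sketch of the fitting-room model: each person
# alternates between browsing (~12 min) and using the single fitting
# room (~3 min); waiting people park in a FIFO "door" queue.

random.seed(1)
now = 0.0
events = []                  # (time, seq, action) min-heap
seq = 0
in_use = False
door = []                    # FIFO, like Wait (door) / Activate door.First
log = []                     # (time, message) pairs instead of OutText

def schedule(delay, action):
    global seq
    heapq.heappush(events, (now + delay, seq, action))
    seq += 1

def browse(name):
    schedule(max(0.0, random.gauss(12, 4)), lambda: request(name))

def request(name):
    if in_use:
        door.append(name)                    # Wait (door)
    else:
        enter(name)

def enter(name):
    global in_use
    in_use = True
    log.append((now, name + " has entered the fitting room"))
    schedule(max(0.0, random.gauss(3, 1)), lambda: leave(name))

def leave(name):
    global in_use
    in_use = False
    log.append((now, name + " has left the fitting room"))
    if door:
        enter(door.pop(0))                   # Activate door.First
    browse(name)                             # go browsing again

for who in ("Sam", "Sally", "Andy"):
    browse(who)

while events and events[0][0] < 100:         # Hold (100)
    now, _, action = heapq.heappop(events)
    action()
```

Simula's coroutine-based processes hide this event queue behind Hold; here it is spelled out as callbacks on a heap.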
## See also
• BETA, a modern successor to Simula
## Notes
1. Ole-Johan Dahl, Bjørn Myhrhaug, and Kristen Nygaard (1970), Common Base Language, Norwegian Computing Center
2. ^ Holmevik, Jan Rune (1994). "Compiling Simula: A historical study of technological genesis". IEEE Annals of the History of Computing 16 (4): 25–37. doi:10.1109/85.329756. Retrieved 12 May 2010.
3. ^ Holmevik, Jan Rune, Compiling Simula, Institute for Studies in Research and Higher Education, Oslo, Norway
4. ^ GNU Cim
5. ^ "ACM Turing Award Lectures". Informatik.uni-trier.de. Retrieved 2012-01-14.
6. ^ "ACM Ole-Johan Dahl and Kristen Nygaard - Obituary". Acm.org. Retrieved 2012-01-14.
7. ^ "Jarek Sklenar Web Page". Staff.um.edu.mt. Retrieved 2012-01-14.
|
2014-03-07 23:13:54
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 2, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3484252095222473, "perplexity": 6558.884972520428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-10/segments/1393999651825/warc/CC-MAIN-20140305060731-00000-ip-10-183-142-35.ec2.internal.warc.gz"}
|
http://www.physicsforums.com/showthread.php?t=335211
|
# Solution to system of linear equations in range of system matrix
by kalleC
Tags: equations, linear, matrix, range, solution
Hint: Ask yourself whether $A^T y = 0$ has solutions such that $y^T b \neq 0$. Try it yourself first. Then if you think you know what is going on, look at the Wikipedia entry under Fredholm alternative.
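The hint can be made concrete on a small singular system (a pure-Python sketch; the numbers are chosen for this illustration, not taken from the thread):

```python
# Fredholm alternative, checked by hand on a 2x2 rank-1 system A x = b:
# b lies in the range of A iff y.b = 0 for every y with A^T y = 0.

A = [[1.0, 2.0],
     [2.0, 4.0]]              # second row = 2 * first row, so rank 1

y = [2.0, -1.0]               # a null vector of A^T (verified below)
AT_y = [A[0][0] * y[0] + A[1][0] * y[1],   # column 1 of A dotted with y
        A[0][1] * y[0] + A[1][1] * y[1]]   # column 2 of A dotted with y

b_bad = [1.0, 1.0]            # y.b = 1 != 0  -> A x = b_bad has no solution
b_good = [1.0, 2.0]           # y.b = 0       -> solvable, e.g. x = [1, 0]

dot_bad = y[0] * b_bad[0] + y[1] * b_bad[1]
dot_good = y[0] * b_good[0] + y[1] * b_good[1]
print(AT_y, dot_bad, dot_good)
```

Here b_good is exactly the first column of A, so x = [1, 0] solves the system, while b_bad fails the orthogonality test against the left null vector y.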
|
2014-07-28 22:41:15
|
http://www.cs.utep.edu/interval-comp/abstracts/shaked.html
|
Moshe Shaked and J. George Shanthikumar, Stochastic orders and their applications, Academic Press, San Diego, CA, 1994.
In many real-life problems, e.g., in economics, reliability theory, medicine, etc., we must choose between two alternatives whose consequences are not completely known. The book considers the case when we know the {\it probabilities} of different results; hence, each alternative is represented by a {\it probability distribution} on the set of possible results, i.e., as a {\it random variable}. How can we compare two random variables $X$ and $Y$? One possibility (called {\it stochastic order}) is as follows:
In probability theory, a random variable $X$ is usually defined as a probability measure on the set of all real numbers (that describes the probability of different values of this variable). The common-sense understanding of a random variable is better described by an alternative (equivalent) definition: a random variable is a mapping from a set $\Omega$ with a probability measure $\mu$ on it to the set of real numbers (for which $x(\omega)$ has the desired probabilities).
We say that a random variable $X$ is {\it smaller} than a random variable $Y$ in the sense of stochastic ordering (and denote it by $X\le_{st}Y$) if there exists a set $(\Omega,\mu)$ and two mappings $x,y:\Omega\to R$ that represent, correspondingly, variables $X$ and $Y$, and for which $x(\omega)\le y(\omega)$ for all $\omega$. The main result of stochastic ordering theory is a condition necessary and sufficient for $X\le_{st} Y$: this condition is the inequality between distribution functions: $P\{X\le u\}\ge P\{Y\le u\}$ for all real numbers $u$.
There also exist more complicated modifications of this definition.
The results presented in this book are based on the assumption that we know the probabilities; in many real-life situations, we do not know them. Many methods and ideas presented in the book can be naturally extended to this more general type of uncertainty.
For example, a similar choice problem occurs when we only know intervals $X=[x^-,x^+]$ and $Y=[y^-,y^+]$ of possible values of $x$ and $y$ that correspond to two alternatives. In this case, we can use the above-defined idea. Namely:
Each interval $X$ can be represented as a mapping $x:\Omega\to R$ from some set $\Omega$ to the set of real numbers, for which the set of possible values of $x(\omega)$ is exactly this interval $X$. We can say that $X$ is smaller than $Y$ (and denote it by $X\le_{st} Y$) iff there exist two mappings $x,y:\Omega\to R$ for which $x$ represents $X$, $y$ represents $Y$, and $x(\omega)\le y(\omega)$ for all $\omega\in \Omega$.
A direct analogue of the main theorem mentioned above can be easily proven for intervals:
PROPOSITION. $[x^-,x^+]\le_{st} [y^-,y^+]$ iff $x^-\le y^-$ and $x^+\le y^+$.
{\it Proof.} If $x^-\le y^-$ and $x^+\le y^+$, then we can take $\Omega=[0,1]$, $x(\omega)=\omega\cdot x^+ +(1-\omega)\cdot x^-$, and $y(\omega)=\omega\cdot y^+ +(1-\omega)\cdot y^-$. Vice versa, if $[x^-,x^+]\le_{st}[y^-,y^+]$, i.e., if there exists a joint representation $x,y:\Omega\to R$, then $x^+=x(\omega)$ for some $\omega\in \Omega$. For this $\omega$, we have $x^+=x(\omega)\le y(\omega)$; but $y(\omega)\le y^+$; hence, $x^+\le y^+$. Similarly, there exists an $\omega$ for which $y^-=y(\omega)$. For this $\omega$, $y^-=y(\omega)\ge x(\omega)\ge x^-$. Q.E.D.
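The construction in the proof is concrete enough to execute. Below is a Python sketch (the function names are this sketch's own): one test uses the proof's affine representation on a grid of $\omega$ values, the other the endpoint condition of the Proposition, and the two agree.

```python
# Interval stochastic order via the proof's construction: represent
# X = [xm, xp] by x(w) = w*xp + (1-w)*xm on Omega = [0, 1], and test
# the pointwise inequality x(w) <= y(w) on a grid of w values.

def le_st_by_construction(X, Y, grid=1000):
    xm, xp = X
    ym, yp = Y
    return all(
        (i / grid) * xp + (1 - i / grid) * xm
        <= (i / grid) * yp + (1 - i / grid) * ym
        for i in range(grid + 1)
    )

def le_st_by_endpoints(X, Y):
    # the Proposition: [xm, xp] <=_st [ym, yp] iff xm <= ym and xp <= yp
    return X[0] <= Y[0] and X[1] <= Y[1]
```

Since both representations are affine in $\omega$, the inequality at the endpoints $\omega=0$ and $\omega=1$ already implies it everywhere, which is why the two tests coincide.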
The book's theory is thus extendible to the case when we do not know probabilities at all. It is desirable to extend the book's results to the intermediate cases when we know some but not all probabilities, e.g., to the case when we know the intervals of possible values of probabilities.
In the majority of applications presented in the book, we do not really know all the probabilities, so it looks like this generalization will not be technically difficult and will be very practically useful.
Hung T. Nguyen and V. Kreinovich
|
2018-11-21 05:55:37
|
https://www.physique.usherbrooke.ca/pages/en/node/6201
|
# A rough guide to quantum chaos
Title: A rough guide to quantum chaos
Publication Type: Miscellaneous
Year of Publication: 2002
Authors: Poulin, D
Keywords: Quantum chaos
Abstract: This tutorial offers some insight into the question "What is quantum chaos and why is it interesting?". Its main purpose is to present some signatures of chaos in the quantum world. This is *not* a technical reference; it contains but a few simple equations and no explicit references; rather, the main body of this manuscript is followed by a reading guide. Some of the mathematical tools used in the field are so cumbersome they often obscure the physical relevance of the problem under investigation. However, after having consulted this tutorial, the technical literature should appear less mysterious and, we hope, one should have a better intuition of what is interesting, and what is superficial!
|
2022-01-18 20:11:11
|
https://ask.libreoffice.org/en/question/258928/other-volatile-functions-besides-rand-and-randbetween/
|
# Other volatile functions (besides RAND and RANDBETWEEN)
Volatile functions are recalculated on input events (from https://bugs.documentfoundation.org/s...).
Formulas starting with two equal signs are treated as volatile (from the answer in https://ask.libreoffice.org/en/questi...).
Besides RAND() and RANDBETWEEN(), are there other volatile functions? Is there anything written to read about?
Thanks.
Volatile functions are:
• RAND
• RANDBETWEEN
• TODAY
• NOW
• FORMULA
• INFO
• INDIRECT
• OFFSET
Functions that are position-sensitive (i.e. recalculated upon move/insert/deletion):
• COLUMN
• ROW
• CELL
No INDEX()?
I found a bit of info at Recalculate.
( 2020-10-14 18:39:26 +0100 )
No, INDEX() is not volatile. It just picks a cell or range from a predefined set of ranges.
( 2020-10-14 19:04:52 +0100 )
NOW(), TODAY()
There are also functions like INDEX() and OFFSET() that are not able to determine exactly which cells they depend on without an evaluation. They may need to recalculate on many occasions. Probably they are also volatile.
Lookup functions and redirection like INDEX() and OFFSET() usually act on (possibly named) cell/range references, so they should not need to have that "volatile behavior". When a cell range changes, references will update accordingly, which should be sufficient for triggering a recalculation.
There may be contexts where this does not apply, so they need recalc "always" anyway. IDK. I guess @Lupp has more experience than myself when it comes to this...
The INDIRECT() function is an exception for this type of redirection operation. It operates on a textual (possibly "hardcoded" in your formulas or data) representation of an address, which will not update when the range is altered. This makes me think that INDIRECT() needs to behave like a volatile function, which is one reason why I try to recommend the (slightly less intuitive) OFFSET() instead of INDIRECT() whenever possible.
( 2020-10-14 09:22:32 +0100 )
I actually didn't research it thoroughly to full depth, but since I built a lot of "spreadsheet models" for this and that (partly just for fun), I can tell from my experience that all three functions you mentioned are treated as volatile. In the web I didn't find explicit statements concerning LibO insofar, but there are lots of related complaints about "slow Excel" and respective explanations.
BTW: Concerning RAND() and RANDBETWEEN(), recently the non-volatile versions RAND.NV() and RANDBETWEEN.NV() were implemented.
@erAck might comment on this. I'm confident he knows for sure.
( 2020-10-14 15:40:14 +0100 )
INDIRECT() needs to behave like a volatile function, which is one reason why I try to recommend the (slightly less intuitive) OFFSET() instead
OFFSET() is also volatile, as the ranges it would have to listen to may be altered upon each recalculation.
( 2020-10-14 18:12:04 +0100 )
... ranges it would have to listen to may be altered ...
Of course. Got it. Thanks!
( 2020-10-14 18:24:09 +0100 )
|
2021-03-08 03:58:15
|
https://mersenneforum.org/showthread.php?s=c5db60f0abf7adae54f4cfb37b2086d2&t=27892&goto=nextnewest
|
mersenneforum.org Divisibility by 7.
2022-08-23, 23:14 #1 Charles Kusniec Aug 2020 Guarujá - Brasil 59 Posts Divisibility by 7. We learned in the video https://www.youtube.com/watch?v=UDQjn_-pDSs that to test whether a number N is divisible by 7 we can recursively apply the algorithm 5*(N mod 10)+floor(N/10). Thus, all integers multiple of 7 that are not divisible by 7^2=49 will always arrive in the following repetitive sequence Axxxxxx=repeat{7, 35, 28, 42, 14, 21} as shown in the table C001116 below. This sequence Axxxxxx is 7 times https://oeis.org/A070365. This loop follows the famous https://oeis.org/A020806 Period 6: repeat [1,4,2,8,5,7], as well as the https://oeis.org/A140430 Period 6: repeat [3, 2, 4, 1, 2, 0] and https://oeis.org/A070365 Period 6: repeat [1, 5, 4, 6, 2, 3]. Still during the application of the algorithm of the numbers that are divisible by 7 but are not divisible by 7^2=49, we find two more new sequences. Cxxxxxx = 7*n mod 10 = Period 6: repeat [7, 5, 8, 2, 4, 1] and Dxxxxxx = 5*(7*n mod 10) = Period 6: repeat [35, 25, 40, 10, 20, 5]. The application of the algorithm of the numbers that are divisible by 7 and divisible by 7^2=49, result in table C001117 below. For completeness, it is worth studying the appearance of repetitive sequences when we apply the divisibility by 7 algorithm to the sequence https://oeis.org/A008589 of the multiples of 7. See table C001118 below. It generates 2 new interesting sequences Gxxxxxx and Hxxxxxx to be studied. Attached Thumbnails
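The recursive rule in the post above, and the two terminal loops it describes, can be checked with a short Python sketch (an editorial illustration, not part of the thread):

```python
# N is divisible by 7 iff T(N) = 5*(N % 10) + N // 10 is: indeed
# 10*T(N) = 49*(N % 10) + N, so T preserves divisibility by 7.

def T(n):
    return 5 * (n % 10) + n // 10

def div7(n):
    n = abs(n)
    while n >= 50:            # T strictly decreases any n >= 50
        n = T(n)
    return n % 7 == 0

# The "oscillating" loop on multiples of 7 not divisible by 49 ...
cycle = [7]
for _ in range(5):
    cycle.append(T(cycle[-1]))
print(cycle)                  # the repeat{7, 35, 28, 42, 14, 21} loop

fixed = T(49)                 # ... and the "terminating" loop at 49
print(fixed)
```

Iterating T from 7 reproduces the six-element cycle from the post, and 49 is a fixed point of T.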
2022-08-24, 08:31 #2 xilman Bamboozled! "𒉺𒌌𒇷𒆷𒀭" May 2003 Down not across 11,483 Posts Much easier, in my view, is to test for 7, 11 and 13 simultaneously. Note that 7*11*13 = 1001, so apply the rule of 11 three digits at a time. Here is a worked example on 784971746890695 (just some random typing on the keypad) 695 + 746 + 784 = 2225 and 890 + 971 = 1861 and 2225 - 1861 = 364. The 3-digit number is now small enough for mental arithmetic. Clear it is a multiple of 7 (being 52 * 7) and it is only slightly harder to show that it is not divisible by 11 (it leaves remainder 1 because 363=11*33) but is divisible by 13 (363 = 27 * 13). Just to check: pcl@nut:~/Astro$bc bc 1.07.1 Copyright 1991-1994, 1997, 1998, 2000, 2004, 2006, 2008, 2012-2017 Free Software Foundation, Inc. This is free software with ABSOLUTELY NO WARRANTY. For details type warranty'. 784971746890695 % 7 0 784971746890695 % 11 1 784971746890695 % 13 0 pcl@nut:~/Astro$
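xilman's three-digit-group rule can also be sketched in Python, using his example number (the helper name alt_group_sum is this sketch's own):

```python
# 7 * 11 * 13 = 1001, so the alternating sum of 3-digit groups (taken
# from the right) preserves the remainder modulo 7, 11 and 13.

def alt_group_sum(n):
    s, sign = 0, 1
    while n:
        s += sign * (n % 1000)
        n //= 1000
        sign = -sign
    return s

N = 784971746890695
r = alt_group_sum(N)      # (695 + 746 + 784) - (890 + 971) = 364
print(r, r % 7, r % 11, r % 13)
```

The reduced value 364 matches the worked example, and its remainders mod 7, 11 and 13 match those of the full 15-digit number.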
2022-08-24, 08:48 #3
retina
Undefined
"The unspeakable one"
Jun 2006
My evil lair
5·1,319 Posts
Quote:
Originally Posted by xilman (363 = 27 * 13).
2022-08-24, 12:07 #4
Charles Kusniec
Aug 2020
Guarujá - Brasil
1110112 Posts
Quote:
Originally Posted by xilman but is divisible by 13 (363 = 27 * 13).
I believe that you meant to say (364=28*13).
Quote:
Originally Posted by xilman Much easier, in my view, is to test for 7, 11 and 13 simultaneously. Note that 7*11*13 = 1001, so apply the rule of 11 three digits at a time.
The idea of this post is to limit ourselves to the divisibility of 7. Only this way we can directly find the sequences that have bijection with the famous sequence https://oeis.org/A020806 Period 6: repeat [1,4,2,8,5,7].
So, I was surprised to find two final loops: (i) the sequence Axxxxx=repeat[7, 35, 28, 42, 14, 21] as an "oscillating" loop, and (ii) another "terminating" loop Exxxxxx=repeat[49]. The latter reminds us the "terminating" loop of Colatz's conjecture that it always ends in repeat[1].
That is, looking only at 7, we see that the recursive algorithm of divisibility of 7 has two final loops. Is this new?
Also, I forgot to mention in the original post that the sequence of unit digits in Axxxxx=repeat[7, 35, 28, 42, 14, 21] follows the famous sequence https://oeis.org/A020806 Period 6: repeat [1,4,2,8,5,7]. The sequence of the tens digit follows https://oeis.org/A134977 Period 6: repeat [1, 4, 2, 3, 0, 2].
(P.S.: when we do bijection we don't consider the offset between the sequences and neither the directions.)
Last fiddled with by Charles Kusniec on 2022-08-24 at 12:31
2022-08-24, 13:54 #5 Dr Sardonicus Feb 2017 Nowhere 3×11×181 Posts I note that if 10*k + r is divisible by 7, then so is k + 5*r. However, it seems silly to compile, under the heading "Divisibility by 7," tables whose entries are already known to be divisible by 7. I note that if 10*k + r is not divisible by 7, then k + 5*r may have a different (non-zero) remainder modulo 7. I also note that it is possible to have 5*r + k > 10*k + r when k is a non-negative integer and 0 ≤ r ≤ 9. The transformation never reduces a number by more than a factor of 10. The usual tests for divisibility by 7 replace 10^k in the 10^k place of the decimal representation by something congruent to 10^k (mod 7), so preserve the remainder mod 7. xilman has already given an example replacing 1, 10, 10^2, 10^3, 10^4, 10^5, .. with 1, 10, 10^2, -1, -10, -10^2, ... This preserves remainders modulo 1001. As a script-writing exercise I decided to sic the mighty Pari-GP on the problem of iterating the transform 10*k + r -> k + 5*r starting with 1. Note that 10 = 10*1 + 0 transforms to 1 + 5*0 = 1. Code: ? v=[1];n=1;until(n==10,w=divrem(n,10);n=[1,5]*w;v=concat(v,[n]));print(v) [1, 5, 25, 27, 37, 38, 43, 19, 46, 34, 23, 17, 36, 33, 18, 41, 9, 45, 29, 47, 39, 48, 44, 24, 22, 12, 11, 6, 30, 3, 15, 26, 32, 13, 16, 31, 8, 40, 4, 20, 2, 10] ? print(#v) 42` The entries of v are the positive integers less than 49 which are not divisible by 7.
2022-08-24, 14:11 #6 retina Undefined "The unspeakable one" Jun 2006 My evil lair 11001110000112 Posts I'm not sure why people use these "tricks". They are more complicated and slower than simply passing through the number one digit at a time with normal long division and deriving the remainder. I can do ~4 digits/second. I'm sure others can do it even faster.
2022-08-24, 15:31 #7
Charles Kusniec
Aug 2020
Guarujá - Brasil
738 Posts
Quote:
Originally Posted by Dr Sardonicus However, it seems silly to compile, under the heading "Divisibility by 7," tables whose entries are already known to be divisible by 7.
To me, it seems silly to compile, under the heading "Divisibility by 7," a script whose entries are already known NOT to be divisible by 7.
While the entries of numbers less than 49 divisible by 7 are only half a dozen, for you to compile your script you had to enter 3.5 dozen numbers less than 49 not divisible by 7.
See in the figures bellow what happens in the algorithm of my tables for 13!, 14! and (13!+1).
13! divided by 7, the loop is Axxxxxx=repeat[7, 35, 28, 42, 14, 21] = 7 * https://oeis.org/A070365. All sum/7 are integer.
14! divided by 7, the loop is Exxxxxx=repeat[49]. all sum/7 are integer.
And (1+13!) a non divisible by 7, does not have loop or there is no sum/7 integer:
Attached Thumbnails
2022-08-24, 15:39 #8
xilman
Bamboozled!
"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across
11,483 Posts
Quote:
Originally Posted by Charles Kusniec I believe that you meant to say (364=28*13)..)
Corrrect. A silly tyop on my part.
2022-08-24, 15:42 #9
Charles Kusniec
Aug 2020
Guarujá - Brasil
59 Posts
Quote:
Originally Posted by retina I'm not sure why people use these "tricks". They are more complicated and slower than simply passing though the number one digit at a time with normal long division and deriving the remainder.
Here the idea is still to understand the divisibility algorithms so that we can apply them to other recursive sequences.
Obviously, the good idea of xilman can be more comprehensive, but as I said before, these tables show us new sequences that have bijection with the famous sequence https://oeis.org/A020806 Period 6: repeat [1,4,2,8,5,7]. Now, this is the objective.
2022-08-24, 16:01 #10 Uncwilly 6809 > 6502 """"""""""""""""""" Aug 2003 101×103 Posts 1071710 Posts This thread reminds me of my favourite Pete Conrad quote.
2022-08-24, 19:01 #11
xilman
Bamboozled!
"𒉺𒌌𒇷𒆷𒀭"
May 2003
Down not across
11,483 Posts
Quote:
Originally Posted by Charles Kusniec Obviously, the good idea of xilman can be more comprehensive.
Not my good idea. The algorithm has been known for a very long time. I wouldn't be surprised to learn that Fermat used it.
|
2022-09-25 02:40:10
|
http://mathoverflow.net/questions/85403/approximation-of-the-radon-derivative
|
I am looking for the following statement. Let $X$ be a topological space and let $\mu$, $\nu$ be Radon-Borel measures in the same measure class, i.e. the zero-sets are identical. Denote then by $f$ the Radon derivative.
We can assume a lot of regularity on $X$, i.e. it is compact, locally compact... Also we can assume some regularity on $\mu$, $\nu$, i.e. atom-freeness, the support is $X$...
Is the following statement then true?
For each $x$ in $X$ outside a measure zero-set and for each sequence of nested open sets $U_i$ whose intersection is only $x$ it follows that $f(x)=\liminf \mu(U_i)/\nu(U_i)$
Are there any further assumption for $U_i$?
Thanks
By Luzin's theorem, $f$ is continuous except for a set of arbitrarily small measure; if $f$ is continuous at $x$ then your limit equals $f(x)$. This has to be made more accurate though. – Yulia Kuznetsova Jan 11 '12 at 13:32
Try to search for material on Lebesgue points. If $\nu$ is the Lebesgue measure in $\mathbb R^n$, then almost every point is a Lebesgue point and so has your property. – Yulia Kuznetsova Jan 11 '12 at 14:03
@Yulia: By Luzin's theorem, the restriction of $f$ to some set $E$ of almost full measure is continuous or, if you prefer, $f$ equals to some other continuous function $g$ outside a set of small measure. This is very different from the continuity of $f$ itself. The theorem on Lebesgue points also requires very particular shapes of $U_i$ to employ covering lemmas. @Klaus The desired property is currently stated so sloppily that it doesn't hold even at points of continuity: take $X=\mathbb R$, $x=0$, and $U_i=(-1/i,1/i)\cup (i,+\infty)$. – fedja Jan 11 '12 at 15:22
@yulia: many thanks, embarrassingly I did not know the notion of Lebesgue points. As X is also metric and one measure is the Hausdorff measure, there might be some theorem that almost every point is a Lebesgue point. @fedja: Many thanks for correcting me, my question is misleading. Actually, I am looking for a criterion for the shape of $U_i$. – Klaus Jan 11 '12 at 15:44
@fedja: thank you, I forgot much of it since using. @Klaus: I remember reading a good review close to this topic: A. Bruckner, “Differentiation of integrals,” Amer. Math. Monthly 78 (9, part II) (1971). – Yulia Kuznetsova Jan 11 '12 at 16:11
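fedja's point about the shape of the $U_i$ can be made concrete numerically. The sketch below is a hypothetical setup chosen purely for illustration ($\nu$ the Lebesgue measure on $\mathbb{R}$, $d\mu/d\nu = f(x) = 2 + x^2$): shrinking symmetric intervals recover $f(0)$, while nested sets that keep a far-away piece do not.

```python
# Numerical illustration: shrinking symmetric intervals recover the Radon
# derivative f(0) = 2, while nested open sets with intersection {0} that keep
# a far-away piece (as in fedja's counterexample) give a very different ratio.

def mu(a, b):
    """mu((a, b)) = integral of (2 + x^2) over (a, b), in closed form."""
    F = lambda x: 2 * x + x ** 3 / 3
    return F(b) - F(a)

def nu(a, b):
    """nu((a, b)) = Lebesgue length of the interval."""
    return b - a

M = 1e6   # bounded stand-in for the (i, +infinity) tail in fedja's example
for i in (10, 100, 1000):
    # "good" sets: U_i = (-1/i, 1/i); the ratio tends to f(0) = 2
    good = mu(-1 / i, 1 / i) / nu(-1 / i, 1 / i)
    # "bad" nested sets: U_i = (-1/i, 1/i) union (i, M); the intersection is
    # still {0}, but the ratio is dominated by mass far from 0
    bad = (mu(-1 / i, 1 / i) + mu(i, M)) / (nu(-1 / i, 1 / i) + nu(i, M))
    print(i, good, bad)
```

The "bad" sets use a large bounded tail so that both measures stay finite while still swamping the behaviour near the point.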
https://chemistry.stackexchange.com/questions/10675/what-is-true-about-this-reaction-involving-hydroxy-carboxlic-acid
A sequence of reactions is shown below starting from an optically active compound P. $$\Large{\ce{\underset{\ce{C4H8O3}}{P}->[SOCl2]\underset{\ce{C4H6Cl2O}}{Q}->[C2H5NH2]\underset{\ce{C6H12ClNO}}{R}}}$$ P does not react with 2,4-dinitrophenylhydrazine (2,4-DNP) to form a hydrazone. Q reacts very rapidly with one molar equivalent of ethylamine at low temperature to form R. Select the most appropriate statement(s). (A) Q as well as R are optically active. (B) Q on heating with an excess of ethylamine in the presence of a base gives a basic compound. (C) R is a basic compound and can react with an acid to form a salt. (D) An optically inactive isomer of P on heating can form compound S $(\ce{C4H6O2})$ which does not react with thionyl chloride.
My approach:
P is a hydroxybutanoic acid. In Q, the hydroxyl groups get replaced by chlorine. I am confused about the structure of R.
I feel option (A) is correct. I am doubtful about (D), and have no idea about (B) and (C).
• Some hints: 1. Why do you think there is optical activity involved? It is not mentioned in the question; 2. Check the thionyl chloride reaction to find out how a hydroxy carboxylic acid will react with it. It does more than you have thought; 3. An amine is basic but an amide is not; 4. An alkyl chloride will not react with an amine easily at room temperature; you need a more reactive species to react with the amine "very rapidly". May 12, 2014 at 16:01
• @Rudstar Like Ron already suggested, the optically inactive isomer of P is 4-hydroxybutanoic acid, because it does not contain a stereocenter. On heating, this compound can form $\gamma$-butyrolactone by intramolecular esterification, which would be compound S. This lactone does not react with thionyl chloride. May 12, 2014 at 18:44
http://mathoverflow.net/questions/67226/relative-hirsch-number/67249
# Relative Hirsch number
Let $G$ be a group, and let $H\leq G$ be a subnormal subgroup. Suppose there exist a cyclic series from $H$ to $G$, that is, a normal series $$H=H_0\lhd H_1\lhd\cdots\lhd H_k= G$$ of subgroups of $G$ such that each factor group $H_{i+1}/H_i$ is cyclic. Define the relative Hirsch number of the pair $(G,H)$ to be the number $h(G,H)$ of infinite cyclic factors.
Questions
1. Under what conditions on the group $G$ and subgroup $H$ does such a cyclic series exist?
2. Supposing existence of a cyclic series as above, is the relative Hirsch number a well-defined invariant (i.e., independent of the cyclic series chosen)?
3. If $H$ is normal in $G$, then is $h(G,H)=h(G/H)$?
4. Where is this notion to be found in the literature?
Sorry to be asking so many questions at once! If it helps, I am mainly interested in the case $G=\Gamma\times\Gamma$ and $H=\Gamma$ the diagonal subgroup, where $\Gamma$ is finitely generated, torsion-free nilpotent.
Thanks.
-
What you are asking is about a generalization of the Hirsch length for polycyclic(-by-finite) groups. Of course, a finitely generated nilpotent group is polycyclic, so the special case that mainly interests you is quite classic.
For a polycyclic group (and more generally, for a polycyclic-by-finite group), the Hirsch length $h(G)$ is a well-known invariant, and it coincides with your "relative Hirsch number" for the pair $(G,1)$. Basic information about this can be found via Google, e.g. on Wikipedia. If you want a book, look at "Polycyclic Groups" by Daniel Segal. Unfortunately I am not aware of a reference for your relative Hirsch length.
1. I don't know an answer to your first question in the general case, and already in the polycyclic case such series won't exist in general. But if your group $G$ is nilpotent (as in your special case), you can do the following to prove that such a series always exists: Start with the series $$G=HG^{(0)}, HG^{(1)}, HG^{(2)}, \ldots , H$$ where $G^{(0)}:=G$ and $G^{(i+1)}=[G,G^{(i)}]$; since $G$ is nilpotent, there is $k\in\mathbb{N}$ such that $G^{(k)}=1$. One now verifies that the series from $G$ to $H$ I described is actually a subnormal abelian series. You can now refine it to a series in which all factors are cyclic. Since we are in the polycyclic setting (where all subgroups are finitely generated), this will result in a series of finite length.
2. For polycyclic groups, $h(G)$ is well-defined. It is not hard to extend the standard proof for this to your settings, answering your second question in the affirmative: If two subnormal cyclic series from $G$ down to $H$ exist, then by the Schreier refinement theorem, they have equivalent refinements (i.e. the factors $H_i/H_{i+1}$ which occur in the two refinements are the same, just possibly ordered differently). But a refinement of one of these subnormal cyclic series cannot change the number of infinite cyclic factors (easy exercise, also used in the "standard" proof). Hence the relative Hirsch length is well-defined.
3. Your third question also has a positive answer: Any subnormal cyclic series from $G$ to $H$ induces such a series from $G/H$ to $1=H/H$, and vice-versa, and the cyclic factors occurring obviously are the same in both cases. So you have $h(G,H)=h(G/H)$, and if $H$ also is polycyclic (as in your special case), you even have $h(G,H)=h(G/H)=h(G)-h(H)$.
4. As I already mentioned, I don't know a good reference for this in the literature, only references to polycyclic(-by-finite) groups.
-
@Max: Thanks for your answer! I intend to accept it, unless someone comes along with a reference to the literature... ;) Meanwhile, can I utilise your expertise some more? In the case I describe of the diagonal subgroup, any ideas how to go about computing it? Is it likely that $h(\Gamma\times \Gamma,\Gamma)>h(\Gamma)$? – Mark Grant Jun 10 '11 at 18:43
@Mark: Let $G=\Gamma\times\Gamma$ and $H=\Gamma$ (embedded diagonally). Assuming my answer to 1 is correct (please verify!), you have $h(G,H)=h(G) - h(H)=h(H)$. Simply by taking a polycyclic series from $G$ to $H$ and then from $H$ to $1$, and concatenating them. The argument for my answer to 1 roughly is: For $g\in G^{(k)}$ and $h\in H$, we have $h^g=h\cdot(h^{-1}g^{-1}hg)= h[h,g] \in HG^{(k+1)}$. Add in that all $G^{(k)}$ are characteristic subgroups. From this I concluded that $HG^{(k+1)}$ is indeed normal in $HG^{(k)}$. – Max Horn Jun 10 '11 at 21:03
Oh, and of course this also uses that $HG^{(k)}/HG^{(k+1)}$ is abelian (which follows by considering the commutator of two arbitrary elements in the $HG^{(k)}$ and concluding that this is actually contained in $HG^{(k+1)}$). – Max Horn Jun 10 '11 at 21:16
Anything else unclear? :) – Max Horn Jun 16 '11 at 8:53
No, that's clear as day! In fact, it now seems obvious that $h(G,H)=h(G)-h(H)$ whenever all terms are defined. This would explain why the relative Hirsch length doesn't show up in the literature! Thanks again. – Mark Grant Jun 21 '11 at 10:08
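Mark's closing observation $h(G,H)=h(G)-h(H)$ can be illustrated with a toy bookkeeping sketch (the encoding and the example series are illustrative, not taken from any computer algebra library): record a subnormal cyclic series by its list of cyclic factors and count the infinite ones; concatenating a series from $G$ to $H$ with one from $H$ to $1$ makes the counts add.

```python
# Encode a subnormal cyclic series by its cyclic factors: 'Z' for an infinite
# cyclic factor, an integer n for a finite cyclic factor Z/nZ.

def hirsch(factors):
    """Number of infinite cyclic factors in the series."""
    return sum(1 for f in factors if f == 'Z')

# Gamma = Z^2, G = Gamma x Gamma, H = the diagonal copy of Gamma:
series_G_to_H = ['Z', 'Z']    # a series from G down to H, so h(G, H) = 2 = h(Gamma)
series_H_to_1 = ['Z', 'Z']    # a series from H down to 1, so h(H) = 2
# Concatenating gives a series from G down to 1, and the counts simply add:
series_G_to_1 = series_G_to_H + series_H_to_1
assert hirsch(series_G_to_1) == hirsch(series_G_to_H) + hirsch(series_H_to_1)
print(hirsch(series_G_to_1))   # h(G) = 4 = 2 * h(Gamma)
```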
https://mathsci.kaist.ac.kr/pow/
# Solution: 2021-12 A graduation ceremony
In a graduation ceremony, $$n$$ graduating students form a circle and their diplomas are distributed uniformly at random. Students who have their own diploma leave, and each of the remaining students passes the diploma she has to the student on her right; this is one round. Again, each student with her own diploma leaves, each of the remaining students passes the diploma to the student on her right, and this repeats until everyone leaves. What is the probability that this process takes exactly $$k$$ rounds until everyone leaves?
The best solution was submitted by 고성훈 (수리과학과 2018학번, +4). Congratulations!
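The process is easy to simulate; the sketch below is a Monte Carlo sanity check (it models the passing step as a one-position shift around the shrunken circle; which direction the diplomas travel does not affect the distribution).

```python
import random

def rounds_needed(n, rng):
    """Simulate one ceremony with n students; return the number of rounds."""
    diplomas = list(range(n))
    rng.shuffle(diplomas)
    pairs = list(zip(range(n), diplomas))   # (student, diploma she holds), in circle order
    k = 0
    while pairs:
        k += 1
        pairs = [(s, d) for (s, d) in pairs if s != d]   # matched students leave
        if pairs:
            studs = [s for s, _ in pairs]
            dips = [d for _, d in pairs]
            # each remaining student passes her diploma one position around
            # the (shrunken) circle
            pairs = list(zip(studs, dips[-1:] + dips[:-1]))
    return k

rng = random.Random(0)
trials = 20_000
freq = sum(rounds_needed(2, rng) == 1 for _ in range(trials)) / trials
print(freq)
```

For n = 2 the identity permutation ends in one round and the swap in two, so the printed frequency of one-round ceremonies should be close to 1/2.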
# Solution: 2021-11 Interesting perfect cubes
Determine if there exist infinitely many perfect cubes such that the sum of the decimal digits coincides with the cube root. If there are only finitely many, how many are there?
The best solution was submitted by 박항 (전산학부 2013학번, +4). Congratulations!
Other solutions were submitted by 강한필 (전산학부 2016학번, +3), 고성훈 (수리과학과 2018학번, +3), 김기수 (수리과학과 2018학번, +3), 최백규 (생명과학과 대학원, +3), 김기택 (2021학번, +3).
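The finiteness is easy to check by brute force: if n has d digits, then n³ has at most 3d digits, so its digit sum is at most 27d, which is smaller than n for every n ≥ 100. A short sketch:

```python
# Brute force for 2021-11: find every n whose cube has decimal digit sum n.
# The bound above shows the search can stop at n < 100.

def digit_sum(m):
    return sum(int(c) for c in str(m))

hits = [n for n in range(1, 100) if digit_sum(n ** 3) == n]
print(hits)   # → [1, 8, 17, 18, 26, 27], so there are only finitely many
```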
# Solution: 2021-10 Integral inequality
Let $$f: [0, 1] \to \mathbb{R}$$ be a continuous function satisfying
$\int_x^1 f(t) dt \geq \int_x^1 t\, dt$
for all $$x \in [0, 1]$$. Prove that
$\int_0^1 [f(t)]^2 dt \geq \int_0^1 t f(t) dt.$
The best solution was submitted by 김기택 (2021학번, +4). Congratulations!
Other solutions were submitted by 강한필 (전산학부 2016학번, +3), 고성훈 (수리과학과 2018학번, +3), 최백규 (생명과학과 대학원, +3).
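A numerical sanity check (not a proof) is easy: any test function of the form f(t) = t + g(t) with g ≥ 0 satisfies the hypothesis, so the claimed inequality should hold for it. A plain trapezoid-rule sketch:

```python
# Check the inequality for a few test functions f(t) = t + g(t), g >= 0,
# which clearly satisfy the hypothesis of problem 2021-10.

def trapz(values, h):
    return h * (sum(values) - 0.5 * (values[0] + values[-1]))

N = 20_000
h = 1.0 / N
ts = [i * h for i in range(N + 1)]

for g in (lambda t: 0.0, lambda t: 0.3, lambda t: (1 - t) ** 2):
    f = [t + g(t) for t in ts]
    lhs = trapz([fi * fi for fi in f], h)          # integral of f(t)^2
    rhs = trapz([t * fi for t, fi in zip(ts, f)], h)  # integral of t f(t)
    print(round(lhs, 6), ">=", round(rhs, 6))
    assert lhs >= rhs - 1e-12
```

The gap lhs − rhs equals the quadrature of f(t)(f(t) − t) = (t + g(t)) g(t), which is nonnegative for these choices.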
# Solution: 2021-09 Monochromatic solution of an equation
For given $$k\in \mathbb{N}$$, determine the minimum natural number $$n$$ satisfying the following: no matter how one colors each number in $$\{1,2,\dots, n\}$$ red or blue, there always exists (not necessarily distinct) numbers $$x_0, x_1,\dots, x_k \in [n]$$ with the same color satisfying $$x_1+\dots + x_k = x_0$$.
The best solution was submitted by an anonymous participant. Congratulations!
Here is his/her solution of problem 2021-09.
Other solutions were submitted by 고성훈 (수리과학과 2018학번, +3), 김기수 (수리과학과 2018학번, +3).
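For small $$k$$ the answer can be confirmed by brute force over all red/blue colorings; the sketch below (feasible only for tiny $$k$$ and $$n$$) finds the minimum $$n$$ for $$k=2$$, the classic Schur-type case $$x_1+x_2=x_0$$.

```python
from itertools import combinations_with_replacement, product

def has_mono_solution(coloring, n, k):
    """Does this 2-coloring of {1..n} contain same-colored x_1,...,x_k, x_0
    (repetitions allowed) with x_1 + ... + x_k = x_0?"""
    for color in (0, 1):
        nums = [i for i in range(1, n + 1) if coloring[i - 1] == color]
        numset = set(nums)
        if any(sum(xs) in numset
               for xs in combinations_with_replacement(nums, k)):
            return True
    return False

def min_n(k, limit=12):
    """Smallest n such that EVERY 2-coloring of {1..n} has a monochromatic
    solution (brute force over all colorings, so only tiny k and n)."""
    for n in range(1, limit + 1):
        if all(has_mono_solution(c, n, k) for c in product((0, 1), repeat=n)):
            return n
    return None

print(min_n(2))   # → 5: {1,4} / {2,3} avoids x+y=z up to 4, but 5 forces it
```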
# Solution: 2021-08 Self-antipodal sets on the sphere
Prove or disprove that if C is any nonempty, connected, closed, self-antipodal (i.e., invariant under the antipodal map) set on $$S^2$$, then it equals the zero locus of an odd, smooth function $$f: S^2 \to \mathbb{R}$$.
The best solution was submitted by 신준형 (수리과학과 2015학번, +4). Congratulations!
Here is his solution of problem 2021-08.
Another solution was submitted by 고성훈 (수리과학과 2018학번, +2).
http://openturns.github.io/openturns/master/user_manual/_generated/openturns.SpectralModel.html
# SpectralModel¶
class SpectralModel(*args)
Spectral density model.
Notes
We consider a multivariate stochastic process $X: \Omega \times \mathcal{D} \to \mathbb{R}^d$ of dimension $d$, where $\omega \in \Omega$ is an event, $\mathcal{D}$ is a domain of $\mathbb{R}^n$, $\underline{t} \in \mathcal{D}$ is a multivariate index and $X(\omega, \underline{t}) \in \mathbb{R}^d$.
We note $X_{\underline{t}}: \Omega \to \mathbb{R}^d$ the random variable at index $\underline{t} \in \mathcal{D}$ defined by $X_{\underline{t}}(\omega) = X(\omega, \underline{t})$ and a realization of the process $X$, for a given $\omega \in \Omega$, defined by $X(\omega): \mathcal{D} \to \mathbb{R}^d$.
If the process is a second order process, zero-mean and weakly stationary, we define its bilateral spectral density function $S: \mathbb{R} \to \mathcal{H}^+(d)$ with:
• $\mathcal{H}^+(d)$ is the set of $d$-dimensional positive hermitian matrices
Using the stationary covariance function $C^{stat}$ and the Fourier transform, the spectral density writes:
$$S(f) = \int_{\mathbb{R}} C^{stat}(\tau)\, e^{-2i\pi f \tau} \, d\tau, \quad \forall f \in \mathbb{R}$$
A SpectralModel object can be created only through its derived classes: CauchyModel
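The covariance-to-spectrum relationship can be illustrated numerically without OpenTURNS. The sketch below assumes a 1-d exponential stationary covariance C(τ) = σ² exp(−|τ|/θ); its Fourier transform is the Cauchy/Lorentzian shape S(f) = 2σ²θ / (1 + (2πfθ)²), which is the kind of density a Cauchy spectral model represents.

```python
# Pure-Python trapezoid check that the spectral density is the Fourier
# transform of the stationary covariance, for the exponential covariance.
import math

sigma, theta = 1.0, 1.0

def C(tau):
    """Stationary covariance C(tau) = sigma^2 * exp(-|tau|/theta)."""
    return sigma ** 2 * math.exp(-abs(tau) / theta)

def S_numeric(f, T=40.0, dt=0.005):
    """Trapezoid approximation of S(f) = integral of C(tau) e^{-2 i pi f tau}."""
    n = round(2 * T / dt)
    # C is even, so the imaginary (sine) part of the transform cancels:
    vals = [C(-T + i * dt) * math.cos(2 * math.pi * f * (-T + i * dt))
            for i in range(n + 1)]
    return dt * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

def S_exact(f):
    """Closed form: 2 sigma^2 theta / (1 + (2 pi f theta)^2)."""
    return 2 * sigma ** 2 * theta / (1 + (2 * math.pi * f * theta) ** 2)

for f in (0.0, 0.2, 1.0):
    print(f, S_numeric(f), S_exact(f))
```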
Methods
__call__(frequency): Evaluate the spectral density function for a specific frequency.
computeStandardRepresentative(frequency): Compute the standard representative of the spectral density function.
draw(*args): Draw a specific component of the spectral density function.
getAmplitude(): Get the amplitude parameter of the spectral density function.
getClassName(): Accessor to the object's name.
getDimension(): Get the dimension of the SpectralModel.
getId(): Accessor to the object's id.
getImplementation(*args): Accessor to the underlying implementation.
getName(): Accessor to the object's name.
getScale(): Get the scale parameter of the spectral density function.
getSpatialCorrelation(): Get the spatial correlation matrix of the spectral density function.
getSpatialDimension(): Get the spatial dimension of the spectral density function.
setAmplitude(amplitude): Set the amplitude parameter of the spectral density function.
setName(name): Accessor to the object's name.
setScale(scale): Set the scale parameter of the spectral density function.
__init__(*args)
computeStandardRepresentative(frequency)
Compute the standard representative of the spectral density function.
Parameters:
    tau : float
        Frequency value.
Returns:
    rho : complex
        Standard representative factor of the spectral density function.
Notes
According to the definitions in CovarianceModel, the spectral density function is the Fourier transform of the stationary covariance function. Using the expression of the latter, the spectral density function writes as a matrix-complex product, in which the matrix is the constant spatial covariance structure and the complex factor is the standard representative:
$$S(f) = \Sigma \, \rho(f)$$
where $\Sigma$ is a covariance matrix that explains the covariance structure and $\rho(f)$ is the standard representative factor.
draw(*args)
Draw a specific component of the spectral density function.
Parameters:
    rowIndex : int, 0 ≤ rowIndex < dimension
        The row index of the component to draw. Default value is 0.
    columnIndex : int, 0 ≤ columnIndex < dimension
        The column index of the component to draw. Default value is 0.
    minimumFrequency : float
        The lower bound of the frequency range over which the model is plotted. Default value is SpectralModel-DefaultMinimumFrequency in ResourceMap.
    maximumFrequency : float
        The upper bound of the frequency range over which the model is plotted. Default value is SpectralModel-DefaultMaximumFrequency in ResourceMap.
    frequencyNumber : int
        The discretization of the frequency range over which the model is plotted. Default value is SpectralModel-DefaultFrequencyNumber in ResourceMap.
    module : bool
        Flag to tell if the module has to be drawn (True) or the argument (False). Default value is True.
Returns:
    graph : Graph
        Graphic of the specified component.
getAmplitude()
Get the amplitude parameter of the spectral density function.
Returns:
    amplitude : Point
        The used amplitude parameter.
getClassName()
Accessor to the object’s name.
Returns:
    class_name : str
        The object class name (object.__class__.__name__).
getDimension()
Get the dimension of the SpectralModel.
Returns:
    dimension : int
        Dimension of the SpectralModel.
getId()
Accessor to the object’s id.
Returns:
    id : int
        Internal unique identifier.
getImplementation(*args)
Accessor to the underlying implementation.
Returns:
    impl : Implementation
        The implementation class.
getName()
Accessor to the object’s name.
Returns:
    name : str
        The name of the object.
getScale()
Get the scale parameter of the spectral density function.
Returns:
    scale : Point
        The used scale parameter.
getSpatialCorrelation()
Get the spatial correlation matrix of the spectral density function.
Returns:
    spatialCorrelation : CorrelationMatrix
        The spatial correlation matrix.
getSpatialDimension()
Get the spatial dimension of the spectral density function.
Returns:
    spatialDimension : int
        Spatial dimension of the SpectralModel.
setAmplitude(amplitude)
Set the amplitude parameter of the spectral density function.
Parameters:
    amplitude : Point
        The amplitude parameter to be used in the spectral density function.
setName(name)
Accessor to the object’s name.
Parameters:
    name : str
        The name of the object.
setScale(scale)
Set the scale parameter of the spectral density function.
Parameters:
    scale : Point
        The scale parameter to be used in the spectral density function. It should be of size dimension.
https://kseebsolutions.guru/kseeb-solutions-for-class-6-maths-chapter-14-ex-14-3/
Students can Download Chapter 14 Practical Geometry Ex 14.3 Questions and Answers, Notes Pdf, KSEEB Solutions for Class 6 Maths helps you to revise the complete Karnataka State Board Syllabus and score more marks in your examinations.
## Karnataka State Syllabus Class 6 Maths Chapter 14 Practical Geometry Ex 14.3
Question 1.
Draw any line segment $$\overline{\mathbf{P Q}}$$. Without measuring $$\overline{\mathbf{P Q}}$$, construct a copy of $$\overline{\mathbf{P Q}}$$.
Solution:
The following steps will be followed to draw the given line segment $$\overline{\mathbf{P Q}}$$ and to construct a copy of $$\overline{\mathbf{P Q}}$$.
(1) Let $$\overline{\mathbf{P Q}}$$ be the given line segment.
(2) Adjust the compasses up to the length of $$\overline{\mathbf{P Q}}$$.
(3) Draw any line l and mark a point A on it.
(4) Put the pointer on point A and, without changing the setting of the compasses, draw an arc to cut the line l at point B.
$$\overline{\mathbf{A B}}$$ is the required line segment.
Question 2.
Given some line segment $$\overline{\mathbf{A B}}$$, whose length you do not know, construct $$\overline{\mathbf{P Q}}$$ such that the length of $$\overline{\mathbf{P Q}}$$ is twice that of $$\overline{\mathbf{A B}}$$.
Solution:
The following steps will be followed to construct a line segment $$\overline{\mathbf{P Q}}$$ such that the length of $$\overline{\mathbf{P Q}}$$ is twice that of $$\overline{\mathbf{A B}}$$.
(1) Let $$\overline{\mathbf{A B}}$$ be the given line segment.
(2) Adjust the compasses up to the length of $$\overline{\mathbf{A B}}$$.
(3) Draw any line l and mark a point P on it.
(4) Put the pointer on P and, without changing the setting of the compasses, draw an arc to cut the line l at point X.
(5) Now, put the pointer on point X and again draw an arc with the same radius as before, to cut the line l at point Q.
$$\overline{\mathbf{P Q}}$$ is the required line segment.
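In coordinates, the two arc-steps of Question 2 amount to adding the same length twice along a line; a small numerical sketch (the specific points below are arbitrary choices for illustration):

```python
# Coordinate-geometry check: stepping off the same compass opening r twice
# along a line produces a segment of length 2r.
import math

A, B = (0.0, 0.0), (3.0, 4.0)    # the given segment AB, length unknown a priori
r = math.dist(A, B)              # the compass opening, |AB|
P = (1.0, 1.0)                   # a point on an arbitrary (here horizontal) line l
X = (P[0] + r, P[1])             # first arc cuts l at X
Q = (X[0] + r, X[1])             # second arc, same radius, cuts l at Q
assert math.isclose(math.dist(P, Q), 2 * r)
print("PQ =", math.dist(P, Q), "= 2 x AB =", 2 * r)
```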
http://mathhelpforum.com/algebra/174375-solving-interval-s-2-a-print.html
# Solving an interval for s^2
• March 12th 2011, 12:01 PM
iBurger
Solving an interval for s^2
http://i56.tinypic.com/2rzxhg9.jpg
I'm trying to understand what's happening here, and particularly why the signs change. I think: "take all sides to the power -1". But then the signs shouldn't change... I'm so perplexed ;)
• March 12th 2011, 12:06 PM
skeeter
let's look at something a bit simpler ...
note that $\dfrac{1}{2} > \dfrac{1}{3}$
now, what is the relationship between their reciprocals?
• March 12th 2011, 12:13 PM
iBurger
Quote:
Originally Posted by skeeter
let's look at something a bit simpler ...
note that $\dfrac{1}{2} > \dfrac{1}{3}$
now, what is the relationship between their reciprocals?
Wait: are you suggesting that if you take something to the power minus one [c^-1], the relationship relative to the other value reverses as well?
so,
4 > 3
1/4 < 1/3
Aha!
• March 12th 2011, 01:00 PM
iBurger
I'm pretty sure that's it. Thank you!
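skeeter's hint, spelled out numerically (the interval for s² below is a made-up stand-in for the one in the attached image):

```python
# Taking reciprocals reverses an inequality between positive numbers:
for a, b in [(4, 3), (1 / 2, 1 / 3), (10, 0.1)]:
    assert a > b > 0
    assert 1 / a < 1 / b

# Applied to a hypothetical interval for s^2 (illustrative numbers only):
lo, hi, s2 = 2.0, 5.0, 3.0
assert lo < s2 < hi
assert 1 / hi < 1 / s2 < 1 / lo   # the bounds swap places and the signs flip
print("reciprocals of positive numbers reverse inequalities")
```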
http://philosophical-ranting.blogspot.com/2015/02/explain-to-me-again-why-railroad-oil.html
## Tuesday, February 17, 2015
### Explain to me again why railroad oil transport is safer than pipelines?
http://i.cbc.ca/1.2959389.1424124523!/fileImage/httpImage/image.jpg_gen/derivatives/16x9_620/west-virginia-train-derailment-fire.jpg
http://www.nbcnews.com/watch/cnbc/train-carrying-oil-explodes-in-west-virgina-400457795662
If the cars are DOT-111 (http://en.wikipedia.org/wiki/DOT-111_tank_car), then 100 tank cars each holding 131,000 L of oil, with a total value of approximately $53.74 × 100 × 131,000 ≈ $703,994,000, were allowed to burn out into the atmosphere around a town.
One burning car went into the river. Tell me that's different than pipelines.
Remind me again why this is better for the environment than a pipeline spill?
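The arithmetic in the figure above, re-computed exactly as written (price × number of cars × litres per car, taking the post's numbers at face value):

```python
# Back-of-envelope total value, reproduced from the figures in the post:
value = 53.74 * 100 * 131_000
print(f"${value:,.0f}")   # → $703,994,000
```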
https://docs.galpy.org/en/v1.8.1/installation.html
# Installation¶
## Dependencies¶
galpy requires the numpy, scipy, and matplotlib packages; these must be installed or galpy cannot be imported. The installation methods described below will all automatically install these required dependencies.
Optional dependencies are:
• astropy for Quantity support (used throughout galpy when installed),
• astroquery for the Orbit.from_name initialization method (to initialize using a celestial object’s name),
• tqdm for displaying a progress bar for certain operations (e.g., orbit integration of multiple objects at once)
• numexpr for plotting arbitrary expressions of Orbit quantities,
• numba for speeding up the evaluation of certain functions when using C orbit integration,
• JAX for use of constant-anisotropy DFs in galpy.df.constantbetadf, and
• pynbody for use of SnapshotRZPotential and InterpSnapshotRZPotential.
To be able to use the fast C extensions for orbit integration and action-angle calculations, the GNU Scientific Library (GSL) needs to be installed (see below).
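Before a pip-from-source build, one can check whether a GSL shared library is even discoverable; a small sketch (not part of galpy, and library discovery rules vary by platform):

```python
# Pre-flight check: is a GSL shared library findable on this system before
# attempting to build galpy's C extensions from source?
import ctypes.util

gsl = ctypes.util.find_library("gsl")
if gsl:
    print("GSL found:", gsl)
else:
    print("GSL not found; install it before building galpy from source")
```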
## With conda¶
The easiest way to install the latest released version of galpy is using conda or pip (see below):
conda install galpy -c conda-forge
or:
conda config --add channels conda-forge
conda install galpy
Installing with conda will automatically install the required dependencies (numpy, scipy, and matplotlib) and the GSL, but not the optional dependencies.
## With pip¶
galpy can also be installed using pip. Since v1.6.0, the pip installation will install binary wheels for most major operating systems (Mac, Windows, and Linux) and commonly-used Python 3 versions. When this is the case, you do not need to separately install the GSL.
When you are on a platform or Python version for which no binary wheel is available, pip will compile the source code on your machine. Some advanced features require the GNU Scientific Library (GSL; see below). If you want to use these with a pip-from-source install, install the GSL first (or install it later and re-install using the upgrade command below). Then do:
pip install galpy
or, to upgrade an existing installation without also upgrading the dependencies:
pip install -U --no-deps galpy
Installing with pip will automatically install the required dependencies (numpy, scipy, and matplotlib), but not the optional dependencies. On a Mac/UNIX system, you can make sure to include the necessary GSL environment variables by doing (see below):
export CFLAGS="$CFLAGS -I$(gsl-config --prefix)/include" && export LDFLAGS="$LDFLAGS -L$(gsl-config --prefix)/lib" && pip install galpy
The latest updates in galpy can be installed using:
pip install -U --no-deps git+https://github.com/jobovy/galpy.git#egg=galpy
or:
pip install -U --no-deps --prefix=~/local git+https://github.com/jobovy/galpy.git#egg=galpy
for a local installation. The latest updates can also be installed from the source code downloaded from github:
pip install .
or:
pip install --prefix=~/local .
for a local installation.
Note that these latest-version commands all install directly from the source code and thus require you to have the GSL and a C compiler installed to build the C extension(s). If you are having issues with this, you can also download a binary wheel for the latest main version, which are available here. To install these wheels, download the relevant version for your operating system and Python version and do:
pip install WHEEL_FILE.whl
Note that there is also a pure Python wheel available there, but its use is not recommended. These wheels have stable "latest" names, so you can embed them in workflows that should always use the latest version of galpy (e.g., to test your code against the latest development version).
## Installing from a branch¶
If you want to use a feature that is currently only available in a branch, do:
pip install -U --no-deps git+https://github.com/jobovy/galpy.git@dev#egg=galpy
to, for example, install the dev branch.
Note that we currently do not build binary wheels for branches other than main. If you really wanted this, you could fork galpy, edit the GitHub Actions workflow file that generates the wheel to include the branch that you want to build (in the on: section), and push to GitHub; then the binary wheel will be built as part of your fork. Alternatively, you could do a pull request, which would also trigger the building of the wheels.
## Installing from source on Windows¶
Tip
You can install a pre-compiled Windows “wheel” of the latest main version that is automatically built using GitHub Actions for all recent Python versions here. Download the wheel for your version of Python, and install with pip install WHEEL_FILE.whl (see above).
Versions >1.3 can be compiled on Windows systems using the Microsoft Visual Studio C compiler (>= 2015). For this you need to first install the GNU Scientific Library (GSL), for example using Anaconda (see below). Similar to on a UNIX system, you need to set paths to the header and library files where the GSL is located. On Windows, using the CMD command line, this is done as:
set INCLUDE=%CONDA_PREFIX%\Library\include;%INCLUDE%
set LIB=%CONDA_PREFIX%\Library\lib;%LIB%
set LIBPATH=%CONDA_PREFIX%\Library\lib;%LIBPATH%
If you are using the Windows PowerShell (which newer versions of the Anaconda prompt might set as the default), do:
$env:INCLUDE="$env:CONDA_PREFIX\Library\include"
$env:LIB="$env:CONDA_PREFIX\Library\lib"
$env:LIBPATH="$env:CONDA_PREFIX\Library\lib"
where in this example CONDA_PREFIX is the path of your current conda environment (the path that ends in \ENV_NAME). If you have installed the GSL somewhere else, adjust these paths (but do not use YOUR_PATH\include\gsl or YOUR_PATH\lib\gsl as the paths, simply use YOUR_PATH\include and YOUR_PATH\lib).
To compile with OpenMP on Windows, you have to install Intel OpenMP via:
conda install -c anaconda intel-openmp
and then to compile the code:
pip install .
If you encounter any issue related to OpenMP during compilation, you can do:
pip install . --install-option="--no-openmp"
Note that in this case, you should install all dependencies (e.g., numpy, scipy, matplotlib) first using conda or pip, because using --install-option causes pip to build all dependencies from source, which may cause problems.
## Installing from source with Intel Compiler¶
Compiling galpy with an Intel Compiler can give significant performance improvements on 64-bit Intel CPUs. Moreover, students can obtain a free copy of an Intel Compiler at this link.
To compile the galpy C extensions with the Intel Compiler on 64bit MacOS/Linux do:
python setup.py build_ext --inplace --compiler=intelem
and to compile the galpy C extensions with the Intel Compiler on 64bit Windows do:
python setup.py build_ext --inplace --compiler=intel64w
Then you can simply install with:
python setup.py install
or other similar installation commands.
## Installing the TorusMapper code¶
Warning
The TorusMapper code is not part of any of galpy’s binary distributions (installed using conda or pip); if you want to gain access to the TorusMapper, you need to install from source as explained in this section and above.
Since v1.2, galpy contains a basic interface to the TorusMapper code of Binney & McMillan (2016). This interface uses a stripped-down version of the TorusMapper code, which is not bundled with the galpy code but kept in a fork of the original TorusMapper code. Installation of the TorusMapper interface is therefore only possible when installing from source, after downloading or cloning the galpy code and using the pip install . method above.
To install the TorusMapper code, before running the installation of galpy, navigate to the top-level galpy directory (which contains the setup.py file) and do:
git clone https://github.com/jobovy/Torus.git galpy/actionAngle/actionAngleTorus_c_ext/torus
cd galpy/actionAngle/actionAngleTorus_c_ext/torus
git checkout galpy
cd -
Then proceed to install galpy using the pip install . technique or its variants as usual.
## NEW IN v1.8 Using galpy in web applications¶
galpy can be compiled to WebAssembly using the emscripten compiler. In particular, galpy is part of the pyodide Python distribution for the browser, meaning that galpy can be used on websites without user installation and it still runs at the speed of a compiled language. This powers, for example, the Try galpy interactive session on this documentation’s home page. Thus, it is easy to, e.g., build web-based, interactive galactic-dynamics examples or tutorials without requiring users to install the scientific Python stack and galpy itself.
galpy will be included in versions >0.20 of pyodide, so galpy can be imported in any web context that uses pyodide (e.g., jupyterlite or pyscript). Python packages used in pyodide are compiled to the usual wheels, but for the emscripten compiler. Such a wheel for the latest development version of galpy is always available at galpy-latest-cp310-cp310-emscripten_wasm32.whl (note that this URL will change for future pyodide versions, which include emscripten version numbers in the wheel name). It can be used in pyodide for example as
>>> import pyodide_js
>>> await pyodide_js.loadPackage(['future','setuptools',
...     'https://www.galpy.org/wheelhouse/galpy-latest-cp310-cp310-emscripten_wasm32.whl'])
after which you can import galpy and do (almost) everything you can in the Python version of galpy (everything except for querying Simbad using Orbit.from_name and except for Orbit.animate). Note that depending on your context, you might have to just import pyodide to get the loadPackage function.
## Installation FAQ¶
### What is the required numpy version?¶
galpy should mostly work with any relatively recent version of numpy, but some advanced features, including calculating the normalization of certain distribution functions using Gauss-Legendre integration, require numpy version 1.7.0 or higher.
### I get warnings like “galpyWarning: libgalpy C extension module not loaded, because libgalpy.so image was not found”¶
This typically means that the GNU Scientific Library (GSL) was unavailable during galpy’s installation, causing the C extensions not to be compiled. Most of the galpy code will still run, but slower because it will run in pure Python. The code requires GSL versions >= 1.14. If you believe that the correct GSL version is installed for galpy, check that the library can be found during installation (see below).
### I get the warning “galpyWarning: libgalpy_actionAngleTorus C extension module not loaded, because libgalpy_actionAngleTorus.so image was not found”¶
This is typically because the TorusMapper code was not compiled, because it was unavailable during installation. This code is only necessary if you want to use galpy.actionAngle.actionAngleTorus. See above for instructions on how to install the TorusMapper code. Note that in recent versions of galpy, you should not be getting this warning, unless you set verbose=True in the configuration file.
### How do I install the GSL?¶
Certain advanced features require the GNU Scientific Library (GSL), with action calculations requiring version 1.14 or higher. The easiest way to install this is using its Anaconda build:
conda install -c conda-forge gsl
If you do not want to go that route, on a Mac, the next easiest way to install the GSL is using Homebrew as:
brew install gsl --universal
You should be able to check your version using (on Mac/Linux):
gsl-config --version
On Linux distributions with apt-get, the GSL can be installed using:
apt-get install libgsl0-dev
or on distros with yum, do:
yum install gsl-devel
On Windows, using conda-forge to install the GSL is your best bet, but note that this doesn’t mean that you have to use conda for the rest of your Python environment. You can simply use a conda environment for the GSL, while using pip to install galpy and other packages. However, in that case, you need to add the relevant conda environment to your PATH. So, for example, you can install the GSL as:
conda create -n gsl gsl
conda activate gsl
and then set the path using:
https://www.statsmodels.org/stable/generated/statsmodels.genmod.families.family.Family.starting_mu.html
# statsmodels.genmod.families.family.Family.starting_mu¶
Family.starting_mu(y)[source]
Starting value for mu in the IRLS algorithm.
Parameters:
y : ndarray
    The untransformed response variable.
Returns:
mu_0 : ndarray
    The first guess on the transformed response variable.
Notes
$\mu_0 = (Y + \overline{Y})/2$
Only the Binomial family takes a different initial value.
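As a sanity check, the documented formula can be reproduced by hand with numpy (the response values below are arbitrary illustration, not taken from statsmodels):

```python
import numpy as np

# Reproduce starting_mu by hand: mu_0 = (Y + mean(Y)) / 2,
# applied elementwise to the untransformed response y.
y = np.array([1.0, 2.0, 3.0, 6.0])
mu0 = (y + y.mean()) / 2
print(mu0)  # each element is the midpoint between y_i and the sample mean
```

Starting the IRLS iteration at this midpoint keeps the initial mu away from the boundary values of y, which matters for link functions that are undefined at 0.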
http://math.stackexchange.com/questions/127339/stochastic-variable-belonging-to-sigma-field?answertab=oldest
# Stochastic variable belonging to sigma-field
I'm studying Markov Processes in Rick Durrett - Probability: Theory and Examples and he's doing something I simply don't understand, though I reckon it's probably quite simple. Here goes (an example from introducing conditional expectations):
Given a probability space $\left(\Omega,\mathcal{F}_{0},P\right)$ a $\sigma\text{-field}\,\mathcal{F}\subset\mathcal{F}_{0}$ and a random variable $X\in\mathcal{F}_{0}$...
What does it mean for $X\in\mathcal{F}_{0}$? I mean, the image of X has to be Borel, right? It belonging to a $\sigma$-algebra in our probability space doesn't make sense to me.
Hope someone will help, Henrik
p.s. Wow the math on this site works good!
Durrett's notation (I think he mentions this quietly in the very first chapter) is that $X\in \mathcal{F}_0$ means $X$ is $\mathcal{F}_0$ measurable. So in general if you see a random variable being an element of a sigma algebra, then he just means it's measurable w.r.t. that sigma-algebra.
Although Sam has perfectly answered your question, there may be something to add. FWIW, I hope you are aware of the notion of measurability - given two measurable spaces $(\Omega,\mathscr F)$ and $(E,\mathscr E)$, the function $X:\Omega\to E$ is called measurable if $X^{-1}(\mathscr E)\subset\mathscr F$. Usually it is denoted as $$X:(\Omega,\mathscr F)\to(E,\mathscr E)$$ which is quite hard to write every time - so the simpler notation is $X\in \mathscr F|\mathscr E$, where the domain $\Omega$ and the codomain $E$ are omitted - this refers to the case when they don't vary or are assumed to be understood from the context (since e.g. the $\sigma$-algebra $\mathscr F$ itself carries the information that it is defined exactly on $\Omega$). Furthermore, for real-valued random variables it is very common that $\mathscr E$ is the Borel $\sigma$-algebra on $\mathbb R$, so there is no point in writing it every time - that's why we write $X\in \mathscr F$ instead of $X\in \mathscr F|\mathscr B(\mathbb R)$.
@Henrik It is not, you just should get used to it. For example, in "Markov Chains" by Revuz he considers them on a (fixed) measurable space $(E,\mathscr E)$ and explicitly writes that $A\in\mathscr E$ and $1_A\in \mathscr E$ both mean that the set $A$ is measurable, although in the first statement we talk about the set and in the second about the (indicator) function. – Ilya Apr 2 '12 at 19:22
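For finite spaces, the preimage definition of measurability discussed above can be checked mechanically. The following Python sketch is purely illustrative (the function names and the small sigma-algebras are mine, not from the thread); events are represented as frozensets:

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets (the discrete sigma-algebra on s)."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def preimage(X, A, Omega):
    """X^{-1}(A) for X given as a dict Omega -> E."""
    return frozenset(w for w in Omega if X[w] in A)

def is_measurable(X, F, E_sigma, Omega):
    # X in F|E  <=>  X^{-1}(A) in F for every A in the target sigma-algebra
    return all(preimage(X, A, Omega) in F for A in E_sigma)

Omega = {1, 2, 3}
# A sigma-algebra on Omega that cannot separate 2 from 3:
F = {frozenset(), frozenset({1}), frozenset({2, 3}), frozenset(Omega)}
E_sigma = powerset({'a', 'b'})  # discrete sigma-algebra on the codomain

X_good = {1: 'a', 2: 'b', 3: 'b'}   # X^{-1}({'b'}) = {2,3}, which is in F
X_bad = {1: 'a', 2: 'a', 3: 'b'}    # X^{-1}({'a'}) = {1,2}, which is not in F
print(is_measurable(X_good, F, E_sigma, Omega))  # True
print(is_measurable(X_bad, F, E_sigma, Omega))   # False
```

The failing example shows the point of the notation: whether "$X \in \mathscr F$" holds depends entirely on which events $\mathscr F$ contains, not on the values of $X$.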
http://en.wikipedia.org/wiki/Johnson%e2%80%93Holmquist_damage_model
# Johnson–Holmquist damage model
In solid mechanics, the Johnson–Holmquist damage model is used to model the mechanical behavior of damaged brittle materials, such as ceramics, rocks, and concrete, over a range of strain rates. Such materials usually have high compressive strength but low tensile strength and tend to exhibit progressive damage under load due to the growth of microfractures.
There are two variations of the Johnson-Holmquist model that are used to model the impact performance of ceramics under ballistically delivered loads.[1] These models were developed by Gordon R. Johnson and Timothy J. Holmquist in the 1990s with the aim of facilitating predictive numerical simulations of ballistic armor penetration. The first version of the model is called the 1992 Johnson-Holmquist 1 (JH-1) model.[2] This original version was developed to account for large deformations but did not take into consideration progressive damage with increasing deformation; though the multi-segment stress-strain curves in the model can be interpreted as incorporating damage implicitly. The second version, developed in 1994, incorporated a damage evolution rule and is called the Johnson-Holmquist 2 (JH-2) model[3] or, more accurately, the Johnson-Holmquist damage material model.
## Johnson-Holmquist 2 (JH-2) material model
The Johnson-Holmquist material model (JH-2), with damage, is useful when modeling brittle materials, such as ceramics, subjected to large pressures, shear strain and high strain rates. The model attempts to include the phenomena encountered when brittle materials are subjected to load and damage, and is one of the most widely used models when dealing with ballistic impact on ceramics. The model simulates the increase in strength shown by ceramics subjected to hydrostatic pressure as well as the reduction in strength shown by damaged ceramics. This is done by basing the model on two sets of curves that plot the yield stress against the pressure. The first set of curves accounts for the intact material, while the second one accounts for the failed material. Each curve set depends on the plastic strain and plastic strain rate. A damage variable D accounts for the level of fracture.
### Intact elastic behavior
The JH-2 material assumes that the material is initially elastic and isotropic and can be described by a relation of the form (summation is implied over repeated indices)
$\sigma_{ij} = -p(\epsilon_{kk})~\delta_{ij} + 2~\mu~\epsilon_{ij}$
where $\sigma_{ij}$ is a stress measure, $p(\epsilon_{kk})$ is an equation of state for the pressure, $\delta_{ij}$ is the Kronecker delta, $\epsilon_{ij}$ is a strain measure that is energy conjugate to $\sigma_{ij}$, and $\mu$ is a shear modulus. The quantity $\epsilon_{kk}$ is frequently replaced by the hydrostatic compression $\xi$ so that the equation of state is expressed as
$p(\xi) = p(\xi(\epsilon_{kk})) = p\left(\cfrac{\rho}{\rho_0}-1\right) ~;~~ \xi := \cfrac{\rho}{\rho_0}-1$
where $\rho$ is the current mass density and $\rho_0$ is the initial mass density.
The stress at the Hugoniot elastic limit is assumed to be given by a relation of the form
$\sigma_h = \mathcal{H}(\rho, \mu) = p_{\rm HEL}(\rho) + \cfrac{2}{3}~\sigma_{\rm HEL}(\rho, \mu)$
where $p_{\rm HEL}$ is the pressure at the Hugoniot elastic limit and $\sigma_{\rm HEL}$ is the stress at the Hugoniot elastic limit.
### Intact material strength
The uniaxial failure strength of the intact material is assumed to be given by an equation of the form
$\sigma^{*}_{\rm intact} = A~(p^* + T^*)^n~\left[1 + C~\ln\left(\cfrac{d\epsilon_p}{dt}\right)\right]$
where $A, C, n$ are material constants, $t$ is the time, $\epsilon_p$ is the inelastic strain. The inelastic strain rate is usually normalized by a reference strain rate to remove the time dependence. The reference strain rate is generally 1/s.
The quantities $\sigma^{*}$ and $p^*$ are normalized stresses and $T^*$ is a normalized tensile strength, defined as
$\sigma^* = \cfrac{\sigma}{\sigma_{\rm HEL}} ~;~ p^* = \cfrac{p}{p_{\rm HEL}} ~;~~ T^* = \cfrac{T}{\sigma_h}$
### Stress at complete fracture
The uniaxial stress at complete fracture is assumed to be given by
$\sigma^{*}_{\rm fracture} = B~(p^*)^m~\left[1 + C~\ln\left(\cfrac{d\epsilon_p}{dt}\right)\right]$
where $B, C, m$ are material constants.
### Current material strength
The uniaxial strength of the material at a given state of damage is then computed as a linear interpolation between the intact strength and the stress for complete failure, and is given by
$\sigma^{*} = \sigma^{*}_{\rm intact} - D~\left(\sigma^{*}_{\rm intact} - \sigma^{*}_{\rm fracture}\right)$
The quantity $D$ is a scalar variable that indicates damage accumulation.
### Damage evolution rule
The evolution of the damage variable $D$ is given by
$\cfrac{dD}{dt} = \cfrac{1}{\epsilon_f}~\cfrac{d\epsilon_p}{dt}$
where the strain to failure $\epsilon_f$ is assumed to be
$\epsilon_f = D_1~(p^* + T^*)^{D_2}$
where $D_1, D_2$ are material constants.
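The intact-strength, fracture-strength, and interpolation equations above can be sketched in a few lines of code. This is an illustrative sketch only: the function name is mine, and the parameter values in the example are placeholders, not a calibrated material:

```python
import math

def jh2_strength(p_star, T_star, D, eps_rate, A, B, C, n, m):
    """Normalized JH-2 strength sigma* at damage level D (0 = intact, 1 = failed).

    p_star, T_star are pressure and tensile strength normalized by the HEL values;
    eps_rate is the inelastic strain rate already normalized by the 1/s reference.
    """
    rate = 1.0 + C * math.log(eps_rate)
    sigma_intact = A * (p_star + T_star) ** n * rate
    sigma_fracture = B * p_star ** m * rate
    # linear interpolation between the intact and fully fractured surfaces
    return sigma_intact - D * (sigma_intact - sigma_fracture)

# Damage sweeps the strength from the intact surface down to the failed surface:
s0 = jh2_strength(1.0, 0.1, 0.0, 1.0, A=0.93, B=0.31, C=0.0, n=0.6, m=0.6)
s1 = jh2_strength(1.0, 0.1, 1.0, 1.0, A=0.93, B=0.31, C=0.0, n=0.6, m=0.6)
```

At eps_rate = 1 the logarithmic rate term vanishes, so the example isolates the pressure dependence; at D = 0 the function returns the intact strength and at D = 1 the fracture strength, exactly as in the interpolation formula.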
### Material parameters for some ceramics
| Material | $\rho_0$ (kg·m⁻³) | $\mu$ (GPa) | A | B | C | m | n | $D_1$ | $D_2$ | $\sigma_h$ (GPa) | Reference |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Boron carbide ($B_4C$) | 2510 | 197 | 0.927 | 0.7 | 0.005 | 0.85 | 0.67 | 0.001 | 0.5 | 19 | [4] |
| Silicon carbide ($SiC$) | 3163 | 183 | 0.96 | 0.35 | 0 | 1 | 0.65 | 0.48 | 0.48 | 14.6 | [4] |
| Aluminum nitride ($AlN$) | 3226 | 127 | 0.85 | 0.31 | 0.013 | 0.21 | 0.29 | 0.02 | 1.85 | 9 | [4] |
| Alumina ($Al_2O_3$) | 3700 | 90 | 0.93 | 0.31 | 0 | 0.6 | 0.6 | 0.005 | 1 | 2.8 | [4] |
| Silica float glass | 2530 | 30 | 0.93 | 0.088 | 0.003 | 0.35 | 0.77 | 0.053 | 0.85 | 6 | [4] |
## Johnson–Holmquist equation of state
The function $p(\xi)$ used in the Johnson–Holmquist material model is often called the Johnson–Holmquist equation of state and has the form
$p(\xi) = \begin{cases} k_1~\xi + k_2~\xi^2 + k_3~\xi^3 + \Delta p & \qquad \text{Compression} \\ k_1~\xi & \qquad \text{Tension} \end{cases}$
where $\Delta p$ is an increment in the pressure and $k_1, k_2, k_3$ are material constants. The increment in pressure arises from the conversion of energy loss due to damage into internal energy. Frictional effects are neglected.
## Implementation in LS-DYNA
The Johnson-Holmquist material model is implemented in LS-DYNA as *MAT_JOHNSON_HOLMQUIST_CERAMICS.[5]
## References
1. ^ Walker, James D. Turning Bullets into Baseballs, SwRI Technology Today, Spring 1998 http://www.swri.edu/3pubs/ttoday/spring98/bullet.htm
2. ^ Johnson, G. R. and Holmquist, T. J., 1992, A computational constitutive model for brittle materials subjected to large strains, Shock-wave and High Strain-rate Phenomena in Materials, ed. M. A. Meyers, L. E. Murr and K. P. Staudhammer, Marcel Dekker Inc. , New York, pp. 1075-1081.
3. ^ Johnson, G. R. and Holmquist, T. J., 1994, An improved computational constitutive model for brittle materials, High-Pressure Science and Technology, American Institute of Physics.
4. Cronin, D. S., Bui, K., Kaufmann, C., 2003, Implementation and validation of the Johnson-Holmquist ceramic material model in LS-DYNA, in Proc. 4th European LS-DYNA User Conference (DYNAmore), Ulm, Germany. http://www.dynamore.de/dynalook/eldc4/material/implementation-and-validation-of-the-johnson
5. ^ McIntosh, G., 1998, The Johnson-Holmquist ceramic model as used in ls-DYNA2D, Report # DREV-TM-9822:19981216029, Research and Development Branch, Department of National Defence, Canada, Valcartier, Quebec. http://www.dtic.mil/cgi-bin/GetTRDoc?AD=ADA357607&Location=U2&doc=GetTRDoc.pdf
https://lavelle.chem.ucla.edu/forum/viewtopic.php?f=151&t=29680&p=92118
## 15.61 and Equation used to solve it
Arrhenius Equation: $\ln k = - \frac{E_{a}}{RT} + \ln A$
Yang Chen 2E
Posts: 36
Joined: Thu Jul 13, 2017 3:00 am
### 15.61 and Equation used to solve it
According to a person on Chemistry community, you solve this problem by deriving an equation using the Arrhenius equation and getting rid of A using this method:
k/k' = (e^-Ea/RT)/(e^-Ea/RT')
k/k' = e^(-Ea/RT + Ea/RT')
ln (k/k') = -Ea/RT + Ea/RT'
ln (k/k') = (Ea/R)(1/T' - 1/T)
Why are we able to combine Ea? Isn't the activation energy of the reverse reaction completely different form that of the forward reaction?
Nancy Dinh 2J
Posts: 59
Joined: Fri Sep 29, 2017 7:07 am
### Re: 15.61 and Equation used to solve it
15.61 states: "The rate constant of the first-order reaction 2 N2O(g) --> 2 N2(g) + O2(g) is 0.76 s^-1 at 1000. K and 0.87 s^-1 at 1030. K."
Recall that the rate constant changes with different temperatures. We are still doing the same reaction, but simply at different temperatures with different corresponding rate constants.
Lindsay H 2B
Posts: 50
Joined: Fri Sep 29, 2017 7:07 am
Been upvoted: 1 time
### Re: 15.61 and Equation used to solve it
This problem isn't asking for the activation energy of the reverse reaction though, it's asking for the activation energy of the forward reaction. Activation energy doesn't change with temperature. (You're right that the activation energy WOULD be different for the reverse reaction; that's just not what the question is asking.)
Jana Sun 1I
Posts: 52
Joined: Sat Jul 22, 2017 3:00 am
### Re: 15.61 and Equation used to solve it
I agree with Lindsay. The question doesn't really need us to think about the activation energy for the reverse reaction. It just asks us to calculate the activation energy for the forward reaction, which doesn't change at different temperatures. I think we derived the equation we use under the assumption that the activation energy remains the same (ex. we'll use it to calculate the Ea for a forward reaction at different temperatures or to calculate the Ea for a reverse reaction at different temperatures).
It might be confusing because the answer book uses k and k', which we usually associate with the forward and reverse rate constants. It might be easier just to think about it as k1 and k2, like the book does on page 642.
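For completeness, plugging the numbers from 15.61 into the derived relation ln(k2/k1) = (Ea/R)(1/T1 − 1/T2) gives the forward activation energy:

```python
import math

R = 8.314            # gas constant, J/(mol K)
k1, T1 = 0.76, 1000.0  # rate constant (1/s) at 1000. K
k2, T2 = 0.87, 1030.0  # rate constant (1/s) at 1030. K

# Solve ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2) for Ea:
Ea = R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)
print(Ea)  # roughly 3.9e4 J/mol, i.e. about 39 kJ/mol
```

The same Ea appears in both exponentials because both measurements are of the forward rate constant, which is why it factors out cleanly in the derivation above.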
https://in-the-sky.org/article.php?term=absolute_magnitude
Absolute Magnitude
by Dominic Ford, Editor
## Deep sky objects
$m = M + 5 \log d_\textrm{pc} - 5$
The distance modulus $\mu$ is defined as $\mu = 5 \log d_\textrm{pc} - 5$. The distance $d_\textrm{pc}$ is measured in parsecs.
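As a quick numerical check of the deep-sky formula (the helper functions below are illustrative, not from the site):

```python
import math

def apparent_mag(M, d_pc):
    """Apparent magnitude from absolute magnitude M and distance in parsecs."""
    return M + 5 * math.log10(d_pc) - 5

def distance_modulus(d_pc):
    return 5 * math.log10(d_pc) - 5

# At 10 pc the distance modulus is zero, so m equals M by definition
# of absolute magnitude.
print(distance_modulus(10.0))    # 0.0
print(apparent_mag(4.83, 10.0))  # 4.83
```

Each factor of 10 in distance adds 5 magnitudes, since the modulus is linear in log10(d).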
## Asteroids
$m = g + 5 \log \Delta + \kappa \log d_\textrm{S} - 2.5 \log p$
where $p=\frac{1 + \cos\beta}{2}$, $\beta$ is the phase angle (the Sun-Body-Earth angle), $p$ is the fraction of the object's visible disk which is illuminated by the Sun, and $\kappa = 2.5 n$.
In 1985, the International Astronomical Union's Commission 20 adopted a more flexible formula, of the form $m = H + 5 \log d_\textrm{E} - 2.5 \log \left( (1-G)\Psi_1 + G\Psi_2 \right)$ where: $\Psi_1 = \exp \left[ -3.33 \left( \tan \frac{\beta}{2} \right)^{0.63} \right]$ and $\Psi_2 = \exp \left[ -1.87 \left( \tan \frac{\beta}{2} \right)^{1.22} \right]$ the brightness of an asteroid is defined by its absolute magnitude $$H$$, and its slope parameter $$G$$. All distances measured in astronomical units. See chapter 33 (page 231) of Astronomical Algorithms (1991) by Jean Meeus.
## Comets
$m = g + 5 \log d_\textrm{E} + 2.5 n \log d_\textrm{S} - 2.5 \log p$
where, as above, $p=\frac{1 + \cos\beta}{2}$, $\beta$ is the phase angle (the Sun-Body-Earth angle), and $p$ is the fraction of the object's visible disk which is illuminated by the Sun.
http://acm.sdut.edu.cn/onlinejudge2/index.php/Home/Index/problemdetail/pid/1541.html
### Bee Movie
Time Limit: 2000 ms Memory Limit: 65536 KiB
#### Problem Description
Barry B. Benson is "just an ordinary bee" in a hive located in Sheep's Meadow in Central Park in New York City. Barry recently graduated from college and is about to enter the hive's Honex Industries (a division of Honesco Corporation and owned by the Hexagon Group) honey-making workforce. Along with his best friend Adam Flayman (voiced by Matthew Broderick), Barry is initially very excited, but his latent, non-conformist attitude emerges upon finding out that his choice of job will never change once picked. Absolutely disappointed, he joins the team responsible for bringing back honey and pollinating the flowers, visiting the world outside the hive. The bees draw up in battle array when they want to go outside.
Actually, this problem is about alignment of N (1 ≤ N ≤ 770) bees numbered 1..N who are grazing in their field that is about 15,000×15,000 units. Their grazing locations all fall on integer coordinates in a standard x,y scheme (coordinates are in the range 0..15,000).
Barry looks up and notices that he is exactly lined up with Huacm534 (bee) and AcmIcpc20060820322 (bee). He wonders how many groups of three aligned bees exist within the field.
Given the locations of all the bees (no two bees occupy the same location), figure out all sets of three bees that are exactly collinear. Keep track of the sets, sorting the bees in each set by their ID number, lowest first. Then sort the sets by the three ID numbers (lowest first), breaking ties by examining the second and third ID numbers.
#### Input
Line 1: A single integer, N.
Lines 2 .. N+1: Line i+1 describes bee i's location with two space-separated integers that are his x and y coordinates.
#### Output
Line 1: A single integer X that is the number of sets of three bees that are exactly collinear. A set of four collinear bees would, of course, result in four sets of three collinear bees.
Lines 2 .. X+1: Each line contains three space-separated integers that are the bee ID numbers of three collinear bees. The lines are sorted as specified above. This output section is empty if no collinear sets exist.
#### Sample Input
8
0 0
0 4
1 2
2 4
4 3
4 5
5 1
6 5
4
0 0
1 1
2 2
3 3
#### Sample Output
1
1 3 4
4
1 2 3
1 2 4
1 3 4
2 3 4
#### Hint
Be careful of floating point arithmetic. Floating point comparison for equality almost never works as well as one would hope.
Explanation of the sample 1:
Eight bees grazing on a grid whose lower left corner looks like this:
. . . . 6 . 8
2 . 4 . . . .
. . . . 5 . .
. 3 . . . . .
. . . . . 7 .
1 . . . . . .
The digits mark the collinear bee IDs:
. . . . * . *
* . 4 . . . .
. . . . * . .
. 3 . . . . .
. . . . . * .
1 . . . . . .
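Per the hint, the collinearity test should be done in exact integer arithmetic: three points a, b, c are collinear iff the cross product (b-a)x(c-a) is zero. A brute-force sketch (not the reference solution; for N = 770 the O(N^3) scan is about 7.6x10^7 checks, so a per-point slope-sorting approach may be needed under stricter limits):

```python
from itertools import combinations

def collinear_triples(points):
    """Return sorted 1-based ID triples of exactly collinear points,
    using only integer cross products (no floating point)."""
    labeled = list(zip(range(1, len(points) + 1), points))
    triples = []
    for (i, a), (j, b), (k, c) in combinations(labeled, 3):
        cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
        if cross == 0:          # zero cross product <=> collinear
            triples.append((i, j, k))
    return sorted(triples)

# First sample: only bees 1, 3, 4 are collinear
pts = [(0, 0), (0, 4), (1, 2), (2, 4), (4, 3), (4, 5), (5, 1), (6, 5)]
result = collinear_triples(pts)
print(len(result))            # 1
for t in result:
    print(*t)                 # 1 3 4
```

Because `combinations` walks the list in order and each set is already sorted by ID, the required output ordering falls out for free.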
2010 Hunan University Programming Contest
https://www.biostars.org/p/9476683/
|
Should I subset my data before running estimateDisp() in RNA-seq analysis?
4 months ago
Basti ▴ 50
Hi everyone,
I'm performing DE analysis using edgeR and I have a question regarding the correct use of estimateDisp(). I'm currently making comparisons between different conditions as follows:
Sample condition
A1 Milieu1
A2 Milieu1
B1 Milieu2
B2 Milieu2
B3 Milieu2
C1 Milieu3
C2 Milieu3
D1 Milieu4
D2 Milieu4
E1 Milieu5
E2 Milieu5
For instance, I want to compare Milieu1 with Milieu2 and Milieu3, and Milieu4 with Milieu5, in two separate analyses because they are two unrelated experiments. If I run my DE script:
condition <- factor(condition)
design <- model.matrix(~0 + condition)
dge <- DGEList(counts = counts, group = condition)
dge <- calcNormFactors(dge)
dge <- estimateDisp(dge, design = design)
fit <- glmQLFit(dge, design = design)
# contrast names must be syntactically valid R names, hence the M prefix
my.contrasts <- makeContrasts(M1v2 = conditionMilieu1 - conditionMilieu2, M1v3 = conditionMilieu1 - conditionMilieu3, M4v5 = conditionMilieu4 - conditionMilieu5, levels = design)
qlf <- glmQLFTest(fit, contrast = my.contrasts[, "M1v2"])
tt <- topTags(qlf, n = Inf)
But if I subset my count matrix before running my script and I separate Milieu1, Milieu2, Milieu3 on one hand and Milieu4 and Milieu5 on the other hand, I get slightly different results.
What would be the best way to proceed in this case? Should I subset my count matrix before estimating dispersion, or proceed with the whole dataset?
Thank you for enlightening me on this subject.
RNAseq edgeR estimateDisp
4 months ago
ATpoint 54k
If these are indeed two separate experiments, then I would subset at the very beginning and create two different DGEList objects, followed by running filterByExpr as instructed in the edgeR manual. The reason is that normalization and parameter estimation are influenced by all samples, so if Milieu4/5 are a different experiment, I find it hard to justify why they should be in the same edgeR analysis as Milieu1/2/3.
That's very clear thanks
https://acm.ecnu.edu.cn/problem/1297/
|
1297. How many Fibs?
Recall the definition of the Fibonacci numbers:
f(1) := 1
f(2) := 2
f(n) := f(n-1) + f(n-2)   (n >= 3)
Given two numbers a and b, calculate how many Fibonacci numbers are in the range [a,b].
Input Format
The input contains several test cases. Each test case consists of two non-negative integer numbers a and b. Input is terminated by a=b=0. Otherwise, a<=b<=10^100. The numbers a and b are given with no superfluous leading zeros.
Output Format
For each test case output on a single line the number of Fibonacci numbers fi with a<=fi<=b.
Samples
Input
10 100
1234567890 9876543210
0 0
Output
5
4
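Because b can be as large as 10^100, the Fibonacci values must be handled as big integers; fewer than 500 terms of this sequence fit below that bound, so a direct scan is enough. A sketch using Python's arbitrary-precision ints (a full submission would also loop over input lines until the terminating "0 0"):

```python
def count_fibs(a, b):
    """Count Fibonacci numbers f with a <= f <= b, using this problem's
    indexing f(1) = 1, f(2) = 2."""
    count = 0
    x, y = 1, 2
    while x <= b:                 # Python ints grow as needed; no overflow
        if x >= a:
            count += 1
        x, y = y, x + y
    return count

print(count_fibs(10, 100))                   # sample 1 -> 5
print(count_fibs(1234567890, 9876543210))    # sample 2 -> 4
```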
19 users solved, 35 attempted.
27 of 107 submissions accepted.
5.8 EMB reward.
https://www.gradesaver.com/textbooks/math/algebra/introductory-algebra-for-college-students-7th-edition/chapter-1-section-1-2-fractions-in-algebra-concept-and-vocabulary-check-page-28/12
|
## Introductory Algebra for College Students (7th Edition)
When adding $\frac{1}{5}$ and $\frac{3}{4}$, there are many common denominators that we can use, such as 20, 40, 60, and so on. The given denominators, 4 and 5, divide into all of these numbers. However, the denominator 20 is the smallest number that 4 and 5 divide into. For this reason, 20 is called the least common denominator.
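Put another way, the least common denominator of two fractions is the least common multiple of their denominators. A quick illustration (math.lcm assumes Python 3.9+):

```python
import math

# LCD of 1/5 and 3/4: the smallest number both 4 and 5 divide into
print(math.lcm(5, 4))                            # 20
# Every multiple of the lcm is also a common denominator: 20, 40, 60, ...
print([math.lcm(5, 4) * k for k in (1, 2, 3)])   # [20, 40, 60]
```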
https://www.physicsforums.com/threads/math-proof.523247/
|
# Math proof
1. ### Mark J.
How can one mathematically prove that an arrival process is Markov (memoryless)?
2. ### HallsofIvy
Staff Emeritus
Put that generally, all you can say is: show that it satisfies the definition of a "Markov process". How you would do that, of course, depends upon exactly what the arrival process is.
3. ### Mark J.
The process is people arriving at a bus stop.
I have the arrival times, and now I need to prove that it is indeed a Markov process, and perhaps Poisson.
4. ### kdbnlin78
Surely one would assume a memoryless system so that the theory of queues and stochastic processes can be applied to your queueing system?
I am sure it is mathematically permissible to assume (wlog) that your system is Markovian.
5. ### Stephen Tashi
Perhaps a clearer statement of your question is: "I have data for the arrival times of people at a bus stop. What statistical tests can I use to test the hypothesis that the arrival process is Poisson?" (Statistical tests aren't "proof".)
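One concrete version of that suggestion: for a homogeneous Poisson arrival process, the inter-arrival times are i.i.d. exponential, so a Kolmogorov-Smirnov comparison of the observed gaps against a fitted exponential is a natural first check. A sketch with a hand-rolled KS statistic (the 1.36/sqrt(n) critical value is the usual 5% asymptotic approximation, and estimating the rate from the same data makes the test only approximate; Lilliefors-style corrections exist):

```python
import math
import random

def ks_exponential(arrival_times):
    """KS distance between the empirical inter-arrival CDF and a fitted exponential."""
    gaps = sorted(t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:]))
    n = len(gaps)
    mean = sum(gaps) / n
    d = 0.0
    for i, g in enumerate(gaps):
        f = 1.0 - math.exp(-g / mean)          # fitted exponential CDF
        d = max(d, abs(f - i / n), abs(f - (i + 1) / n))
    return d

random.seed(1)
# Simulated Poisson arrivals: exponential gaps with rate 2 per minute
t, times = 0.0, []
for _ in range(500):
    t += random.expovariate(2.0)
    times.append(t)
d = ks_exponential(times)
print(d, 1.36 / math.sqrt(len(times) - 1))     # statistic vs 5% critical value
```

For real bus-stop data one would also want to check for a time-varying rate (rush hours), since the homogeneous-Poisson hypothesis fails there even if each gap looks exponential.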
https://www.physicsforums.com/threads/quadratic-functions-problem.183300/
|
## Homework Statement
Suppose that an object is thrown into the air with an initial upward velocity of Vo meters per second from a height of ho meters above the ground.
## Homework Equations
Then, t seconds later, its height (h(t) meters above the ground is modeled by the function h(t) = -4.9t^2 + Vot + ho.
## The Attempt at a Solution
a) Find its height above the ground t seconds later.
I got h(t) = -4.9t^2 + 14t + 30, and I checked the back of the book and it is correct.
b) When will the stone reach its highest elevation?
I tried a lot of things like plugging in various h's and t's, and using the quadratic formula, but I did not have much success.
c) When will the stone hit the ground?
Same as b), I wasn't sure where to start, but I made some educated guesses, however they proved wrong.
NOTE: I have all of the correct answers. I am not asking for anyone to do my homework for me or give me the answers. I would just like to be guided in the right direction so I will never have to ask for help on these types of problems again. I have worked for 20 minutes straight on this problem, and I know for a fact it shouldn't take that long.
rock.freak667
Homework Helper
Well, if $$h(t) = -4.9t^2 + 14t + 30$$, doesn't this represent a parabolic curve? Doesn't this curve have a maximum point, which would correspond to the max height and the time at which it occurs?
Yes.
How do you find the maximum value of the parabolic curve, though?
rock.freak667
Homework Helper
Find the first derivative, set it equal to zero, and solve for t.
Find the first derivative, set it equal to zero, and solve for t.
Would you mind clarifying what you mean by "the first derivative?" I don't quite understand what you mean. Thanks.
HallsofIvy
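To connect the hints: the first derivative of h is h'(t) = -9.8t + 14, which vanishes at the peak, and the landing time is the positive root of h(t) = 0 from the quadratic formula. Both are easy to verify numerically, using the h(t) found in part (a):

```python
import math

a, b, c = -4.9, 14.0, 30.0        # h(t) = -4.9 t^2 + 14 t + 30

# Peak: h'(t) = 2*a*t + b = 0  ->  t = -b / (2a)
t_peak = -b / (2 * a)
h_max = a * t_peak**2 + b * t_peak + c

# Ground: the positive root of the quadratic formula (the minus branch,
# since a is negative)
t_ground = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)

print(t_peak, h_max, t_ground)    # ~1.43 s, 40 m, ~4.29 s
```

The discriminant here is 196 + 588 = 784 = 28^2, so the answers are exact: the stone peaks at t = 10/7 s at 40 m and lands at t = 30/7 s.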
https://math.stackexchange.com/questions/1591431/how-can-all-players-in-the-starcraft-2-grandmaster-league-win-more-than-they-los
|
# How can all players in the Starcraft 2 Grandmaster league win more than they lose?
Starcraft 2 is a competitive online strategy game where players compete in leagues with other players of similar skill. The most difficult and highest league is the Grandmaster (GM) league, which contains the top ~$200$ players in a region.
The matchmaking system will, most of the time, match players from the GM league with other players from the GM league. There are exceptions however, like if no one else from GM is playing, or if someone from a lower league has a very high MMR (Match Making Rating) but is not in GM for whatever reason. The algorithm is quite complex, and as far as I know not all details are even public.
For the purpose of this question, let's say that whenever a GM player is matched with someone from a lower league, the GM player wins that match.
These are the current standings in the GM league for the American region: http://www.rankedftw.com/ladder/lotv/1v1/win-rate/?f=am,grandmaster
You can see that everyone's win/loss ratio is higher than 1 (more than $50\%$ won), so everyone wins more than they lose in GM. The standings change often, but it's rare to see anyone with more losses than wins. Wins and losses are counted from when you start playing, not only from when you entered GM. However, stats are reset a few times per year, at the start of each season. So I would expect this not to influence things too much.
This is rather weird for me to see: I would expect the worse GM players to be, in general, easy pickings for the better ones, and their win/loss to be below 1.
One explanation that I can think of is what I call low transitivity (if there's a proper term for it let me know): if $A$ consistently defeats $B$ and $B$ consistently defeats $C$, then it rarely holds that $A$ also consistently defeats $C$. In such a case, all 3 players $A$, $B$, $C$ can hold similar win/loss ratios, but I still don't see how all 3 can hold them above 1.
Under the assumption I mentioned above, that a GM player will always defeat a lower league player, it's possible that they are all above 1, but it still seems highly unlikely, since inter-league matchups are quite rare.
What is a possible explanation for this phenomenon? Given those win/loss ratios, what is an approximation of the number of games a GM member will play with lower league players?
Without the assumption that a GM player will always defeat a non-GM player, can we say something about the probability of that happening?
• The number of wins refers only to the wins only after the player entered the GM league, or from his first day in the game? This would explain it perhaps (if the second case is true) – Jimmy R. Dec 28 '15 at 11:19
• @Stef from his first day in the game. That might help explain it indeed, but I'd still expect it to normalize after enough time in GM. Also, they are reset a few times a year (about each season). – IVlad Dec 28 '15 at 11:21
• Given the paucity of well-formed assumptions on which to base "a possible explanation", I'm going to suggest this would be more appropriate for the Gaming SE community also known as ArQAde. – hardmath Dec 28 '15 at 11:39
• @hardmath I thought about asking on a gaming site, but the type of answer I have in mind would contain quite a bit of math reasoning and formalism, which I thought would make the question a better fit for here. I can provide more information if necessary. A generic answer that only considers what I did provide is also welcome. – IVlad Dec 28 '15 at 11:44
• While your interest in this is understandable, your current formulation contains statements like "not all details are even public", "I would expect this not to influence things too much", and "it's possible that X, but it still seems highly unlikely". Perhaps an explanation may lie in a censored data set due to weaker GM players (often new ones?) leaving the field. In any case it would seem to call for a data intensive study. – hardmath Dec 28 '15 at 11:56
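The arithmetic behind the question's assumption can be made explicit: intra-GM games are zero-sum within the pool, but if a fraction f of a GM player's games are against lower leagues (all won, per the assumption above) and p is his win rate within GM, his overall win rate is f + (1 - f)p, which exceeds 1/2 exactly when p > (1/2 - f)/(1 - f). A small sketch of that threshold (the f values are hypothetical; the real inter-league game fraction is unknown):

```python
def min_intra_winrate(f):
    """Smallest intra-GM win rate p that still yields an overall
    win/loss ratio above 1, given a fraction f of guaranteed wins
    against lower-league opponents."""
    return (0.5 - f) / (1.0 - f)

for f in (0.0, 0.05, 0.10, 0.20):
    print(f, round(min_intra_winrate(f), 3))
```

With f = 0.10, even the weakest GM player only needs to win 4/9 (about 44.4%) of his intra-GM games to stay above 1, so the whole ladder can sit above 1 as long as no one's intra-GM win rate falls below that threshold.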
https://www.nature.com/articles/s41598-018-25501-w?error=cookies_not_supported&code=6b2f8d09-9cbd-42fd-893e-ff5113b44574
|
# Unbiased estimation of an optical loss at the ultimate quantum limit with twin-beams
## Abstract
Loss measurements are at the base of spectroscopy and imaging, thus permeating all the branches of science, from chemistry and biology to physics and material science. However, the laws of quantum mechanics set the ultimate limit to the sensitivity, constrained by the probe's mean energy. This can be the main source of uncertainty, for example when dealing with delicate systems such as biological samples or photosensitive chemicals. It turns out that ordinary (classical) probe beams, namely those with Poissonian photon number distribution, are fundamentally inadequate to measure small losses with the highest sensitivity. It is known that a quantum-correlated pair of beams, named the "twin-beam state", allows surpassing this classical limit. Here we demonstrate that they can reach the ultimate sensitivity for all energy regimes (even less than one photon per mode) with the simplest measurement strategy. One beam of the pair addresses the sample, while the second one is used as a reference to compensate both for classical drifts and for fluctuations at the most fundamental quantum level. This capability of self-compensating for unavoidable instabilities of the sources and detectors also allows the bias in practical measurements to be strongly reduced. Moreover, we report the best sensitivity per photon ever achieved in loss-estimation experiments.
## Introduction
The measurement of changes in the intensity or phase of an electromagnetic field after it interacts with matter is the simplest and most effective way to extract relevant information on the properties of a system under investigation, whether a biological sample [1,2] or a digital memory disc [3]. Intensity measurements enable absorption/transmission estimation, the basis of imaging and spectroscopy, pervasive and fundamental techniques in all fields of science, from chemistry [4] to material science [5] and physics [6]. They are routinely employed in biomedical analysis [7,8,9], as well as in atmospheric [10,11,12] and food sciences [13,14].
However, the optical transmission losses experienced by a probe beam while interacting with a system cannot be determined with arbitrary precision, even in principle. Quantum mechanics establishes fundamental bounds on the sensitivity [15,16,17,18], which is limited, in general, by the mean energy of the probe or, equivalently, by its mean number of photons. This is in accordance with the intuitive idea that gaining perfect knowledge of a system would require an infinite amount of physical resources.
The lower bound to the uncertainty, when restricted to the use of classical probe states, coincides with the one achieved by a coherent state, $U_{coh}\simeq [(1-\alpha)/\langle n_P\rangle]^{1/2}$ [17], where $\langle n_P\rangle$ is the mean number of photons of the probe and $0\le\alpha\le 1$ is the loss of the sample. Indeed, this limit can be obtained in practice by any probe beam exhibiting Poissonian photon statistics, such as a laser beam (described theoretically by a coherent state) or even a thermal source like LEDs or incandescent light bulbs in the limit of extremely low photon number per mode. Note that the uncertainty depends on the loss parameter and can be arbitrarily small only in the asymptotic limit of high losses. For a faint loss, $\alpha\sim 0$, one retrieves the expression $U_{snl}=\langle n_P\rangle^{-1/2}$, usually referred to as the "shot-noise limit" (SNL).
Without restriction on the probe state, it has been shown [18,19] that the ultimate quantum limit (UQL) in the sensitivity for a single-mode interrogation of the sample is $U_{uql}\simeq \sqrt{\alpha}\,U_{coh}$, which scales much more favourably than the classical bound for small losses, a region which is particularly significant in many real applications. It is worth noting that the use of quantum states does not improve the uncertainty scaling with the number of particles. This is different from what happens in phase-shift estimation, in which a sensitivity scaling proportional to $\langle n_P\rangle^{-1}$ is reachable in ideal situations [15,16], the so-called "Heisenberg limit". The fundamental difference is that a phase shift is a unitary operation, preserving the purity of the state, while a loss is intrinsically non-unitary. A loss can be represented as the action of a beam splitter that mixes up the probe state in one port with the vacuum state in the other port, basically spoiling quantum features such as entanglement, which is necessary to approach the Heisenberg limit [16].
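To make the scaling concrete, the two bounds can be compared numerically; the classical-to-quantum uncertainty ratio is $1/\sqrt{\alpha}$ regardless of probe energy (an illustration with arbitrary numbers, not data from the paper):

```python
import math

def u_coh(alpha, n_mean):
    """Classical (coherent-state) bound: [(1 - alpha)/<n_P>]^(1/2)."""
    return math.sqrt((1 - alpha) / n_mean)

def u_uql(alpha, n_mean):
    """Ultimate quantum limit: sqrt(alpha) * U_coh."""
    return math.sqrt(alpha) * u_coh(alpha, n_mean)

alpha, n = 0.02, 1e6        # a ~2% loss, probed with 10^6 photons
print(u_coh(alpha, n))                      # ~9.9e-4
print(u_uql(alpha, n))                      # ~1.4e-4
print(u_coh(alpha, n) / u_uql(alpha, n))    # 1/sqrt(alpha) ~ 7.07
```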
It is known that single-mode squeezed vacuum reaches $U_{uql}$ for small losses, $\alpha\sim 0$, and a small number of photons, $\langle n_P\rangle\sim 0$ [18]. Fock states $|n\rangle$, having by definition a fixed number of photons, approach $U_{uql}$ unconditionally, i.e. for all values of $\alpha$, but they cannot explore the regime of $\langle n_P\rangle < 1$ [19]. The optimal performance of Fock states can be understood by considering that a loss can be easily estimated by comparing the number of photons of the probe before and after the interaction with the sample. The perfect knowledge of the photon number of the unperturbed Fock state allows one to better detect small deviations caused by the sample, which would remain hidden in the intrinsic photon-number fluctuations of Poissonian-distributed sources.
However, it is challenging to produce true Fock states experimentally. A reasonable approximation of a Fock state with $n=1$ is the heralded single photon produced by spontaneous parametric down conversion (SPDC) [20,21]. In this process photons are always emitted in pairs with low probability, but one can get rid of the vacuum component, since the detection of one photon of the pair heralds the presence of the other. This scheme has recently been demonstrated for quantum-enhanced absorption measurement, both with post-selection of the heralded single photons [22] and, more remarkably, with selection performed by active feed-forward enabled by an optical shutter [23].
Quantum correlations of the twin-beam (TWB) state have also demonstrated the possibility of sub-SNL sensitivity in absorption/transmission measurements [24,25,26,27,28,29,30,31], quantum-enhanced sensing [32,33,34,35], ghost imaging [36], quantum reading of digital memories [37] and plasmonic sensors [38,39]. TWB states can be generated by SPDC [40] as well as by four-wave mixing in atomic vapours [41,42,43,44], and exhibit a high level of quantum correlation in the photon-number fluctuations between two corresponding modes, for example two propagation directions or two wavelengths. Even if super-Poissonian noise characterizes the photon distribution in one mode, the fluctuations are perfectly reproduced in time and space in the correlated mode. Sub-shot-noise correlation of this state has been experimentally demonstrated both in the two-mode case [45,46,47,48,49] and in the case of many spatial modes detected in parallel by the pixels of a CCD camera [50,51,52]. The exploitation of spatially multimode non-classical correlation has been proposed for high-sensitivity imaging of a distributed absorbing object [53], and a proof of principle of the technique has been reported by Brida et al. in [28]. Recently our group has realized the first wide-field sub-SNL microscope [30], providing $10^4$-pixel images with a true (without post-selection) significant quantum enhancement, and a spatial resolution of a few micrometers. This represents a considerable advancement towards a real application of quantum imaging and sensing.
The common idea behind these works is that the random intensity noise in the probe beam addressed to the sample can be known by measuring the correlated (reference) beam, and subtracted. Note that the two-beam approach is extensively used in standard devices like spectrophotometers, where a classical beam is split in two by a beam splitter and one beam is used to monitor the instability of the source and detectors and to compensate for them. This is particularly effective in practical applications, since unavoidable drifts in the source emission or detector response would otherwise lead to a strong bias, especially in the estimation of small absorptions. However, in classical correlated beams (CCB) generated in this way, only the super-Poissonian component of the fluctuations is correlated (sometimes called classical "excess noise"), whereas the shot noise remains uncorrelated and cannot be compensated. Therefore TWB represent the natural extension of the two-beam approach to the quantum domain, promising to be especially effective for small absorption measurements and when a low photon flux is required.
It has been theoretically demonstrated [54] that using TWB for loss estimation the UQL is in principle attainable; nevertheless, the existence of an experimental estimator fit for this purpose has remained an open question, as has its explicit expression.
Here, we show that the answer to this question is unconditionally positive for TWB generated by the SPDC process, for all energy regimes and all values of the loss parameter $\alpha$. Therefore, TWB overcome the limitations of both single-mode squeezed vacuum and Fock states, representing in practice the best choice for pure loss estimation. We prove this result with an operative approach: we consider a specific and simple measurement strategy, proposed for the first time by Jakeman and Rarity [24], namely evaluating the ratio between the photon numbers measured in the probe and in the reference beam. In the ideal lossless-detection case this is sufficient to reach the ultimate quantum limit. Taking experimental imperfections into account, we derive the uncertainty advantage of the twin-beam with respect to the single classical beam (SCB) and to the CCB case in terms of experimental parameters related to the "local" photon statistics of the two beams separately, and the amount of non-classical correlation of the joint photon-number statistics.
In a recent work [27], a different, optimized estimator that improves the sensitivity in the case of strongly non-ideal detection efficiencies has been proposed. The drawback is that this method requires an accurate and absolute characterization of the measurement apparatus, in particular the absolute values of the quantum efficiencies of the detectors and of the excess noise of the source. This aspect places a strong practical limitation, because determining the quantum efficiency, especially at the few-photon level, with uncertainty less than $10^{-3}$ is extremely challenging, limiting the overall accuracy of the method; instabilities could also affect the measurement. We show that the simplest estimator, the one in ref. [24], behaves almost as well as the optimized one for relatively high values of the efficiencies (the condition of our experiment), but it requires the weakest assumptions on the stationarity of the system and does not require the absolute value of any parameter.
Finally, we perform the experiment, measuring intensity correlations in the far field of multi-mode parametric down-conversion with a standard low-noise, high-efficiency CCD camera. For a sample loss of $\sim 2\%$, we report an experimental quantum enhancement in the estimation uncertainty of 1.51 ± 0.13 with respect to the single-beam classical probe and of 2.00 ± 0.16 compared to the classical two-beam approach, when the same mean energy of the probe and the same detection efficiency are considered.
## Theory
In practice, an optical loss $\alpha$ can be easily measured by comparing the number of photons of the probe $N_P'$ after a lossy interaction with a reference value $N_R$, which can be evaluated at an earlier time in the absence of the sample (Fig. 1a) or with the help of a second beam (Fig. 1d). In particular, one can consider the estimator [24]:
$$S_\alpha = 1 - \gamma\,\frac{N_P'}{N_R}. \qquad (1)$$
The factor γ = 〈N R 〉/〈N P 〉 must be introduced in case of an imbalance between the mean energies of probe and reference beams, and is evaluated in a pre-calibration phase of the apparatus (Fig. 1c). A loss is a random process modelled by the action of a beam splitter of transmission 1 − α, so that the photon-counting statistics of the probe beam is modified as follows40:
$$\langle {N}_{P}^{^{\prime} }\rangle =(1-\alpha )\langle {N}_{P}\rangle ,$$
(2)
$$\langle {{\rm{\Delta }}}^{2}{N}_{P}^{^{\prime} }\rangle =[{\mathrm{(1}-\alpha )}^{2}({F}_{P}-\mathrm{1)}+1-\alpha ]\,\langle {N}_{P}\rangle .$$
(3)
Here N P is the measured photon number without the sample. Its fluctuation is quantified by the Fano factor F P = 〈Δ2N P 〉/〈N P 〉 ≥ 0, which characterizes the non-classicality of the photon statistics. In particular, F P < 1 indicates sub-Poissonian noise55 and, in general, the possibility of surpassing the SNL.
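Eqs (2) and (3) can be verified numerically by modelling the loss as a binomial selection of the incident photons. The sketch below is ours, with illustrative parameter values; it uses a thermal probe, for which the Fano factor is F P = 1 + 〈N P 〉.

```python
import numpy as np

rng = np.random.default_rng(0)

# Thermal (Bose-Einstein) probe light, illustrative mean of 5 photons.
mean_np = 5.0
n_p = rng.geometric(1.0 / (1.0 + mean_np), size=1_000_000) - 1

alpha = 0.3                                # loss to be estimated (illustrative)
n_p_out = rng.binomial(n_p, 1.0 - alpha)   # loss = binomial selection, survival 1 - alpha

f_p = n_p.var() / n_p.mean()               # measured Fano factor of the input beam
# Eq. (2): <N'_P> = (1 - alpha) <N_P>
print(n_p_out.mean(), (1 - alpha) * n_p.mean())
# Eq. (3): <Delta^2 N'_P> = [(1 - alpha)^2 (F_P - 1) + 1 - alpha] <N_P>
print(n_p_out.var(), ((1 - alpha) ** 2 * (f_p - 1) + 1 - alpha) * n_p.mean())
```

Both printed pairs agree to within the Monte Carlo sampling error, confirming that a beam-splitter loss rescales the mean by 1 − α while mixing the input excess noise with binomial partition noise.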
By expanding the photon number operators in Eq. (1) at the first order around their mean value, the expected value of the estimator becomes24:
$$\langle {S}_{\alpha }\rangle =\alpha +\mathrm{(1}-\alpha )\frac{\langle {\rm{\Delta }}{N}_{P}{\rm{\Delta }}{N}_{R}\rangle }{\langle {N}_{P}\rangle \,\langle {N}_{R}\rangle }.$$
(4)
An unbiased estimate of the loss can be obtained by solving Eq. (4) for α. Propagating the uncertainties of $${N}_{P}^{^{\prime} }$$ and N R onto S α , and rewriting the terms using the unperturbed variance 〈Δ2N P 〉, the quantum expectation value of the fluctuation is:
$${{\rm{\Delta }}}^{2}{S}_{\alpha }\simeq {U}_{uql,\langle {N}_{P}\rangle }^{2}+\frac{{\mathrm{(1}-\alpha )}^{2}}{\langle {N}_{P}\rangle }\frac{2{\sigma }_{\gamma }}{\gamma }.$$
(5)
Note that $${U}_{uql,\langle {N}_{P}\rangle }$$ has the form of the UQL, but refers to the number of detected photons. Considering the 〈n P 〉 probe photons incident on the sample, one has $${U}_{uql,\langle {n}_{P}\rangle }={U}_{uql,\langle {N}_{P}\rangle }\sqrt{{\eta }_{d}}$$, where η d represents the detection efficiency, i.e. the losses experienced after the sample. The most relevant quantity appearing in Eq. (5) is the positive factor:
$${\sigma }_{\gamma }=\frac{\langle {{\rm{\Delta }}}^{2}({N}_{R}-\gamma {N}_{P})\rangle }{\langle {N}_{R}+\gamma {N}_{P}\rangle }=\frac{\langle {{\rm{\Delta }}}^{2}{N}_{R}\rangle +{\gamma }^{2}\langle {{\rm{\Delta }}}^{2}{N}_{P}\rangle -2\gamma \langle {\rm{\Delta }}{N}_{P}{\rm{\Delta }}{N}_{R}\rangle }{\langle {N}_{R}+\gamma {N}_{P}\rangle }.$$
(6)
For γ = 1 this quantity is the quantifier of non-classical correlation known as the noise reduction factor (NRF), σ = σγ=1, where the bound between classical and quantum correlations is set by σ = 1. Thus, the uncertainty is expressed in terms of simple measurable quantities related to the photon number statistics, i.e. the intensity fluctuations. Eq. (5) shows that whenever γ = 1 and σ = 0 the UQL is retrieved, $${{\rm{\Delta }}}^{2}{S}_{\alpha }(\gamma =1,\,\sigma =0)\simeq {U}_{uql,\langle {N}_{P}\rangle }^{2}$$.
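Since σ γ in Eq. (6) involves only means and variances of the joint photon-number records, it can be estimated in a few lines. The sketch below (function and variable names are ours) also checks the two limiting cases used later: perfectly correlated records give σ γ → 0, while two independent Poissonian beams give σ γ → (1 + γ)/2.

```python
import numpy as np

def noise_reduction_factor(n_p, n_r):
    """sigma_gamma of Eq. (6), estimated from paired photon-number
    records taken without the sample; gamma = <N_R>/<N_P>."""
    gamma = n_r.mean() / n_p.mean()
    return np.var(n_r - gamma * n_p) / np.mean(n_r + gamma * n_p), gamma

rng = np.random.default_rng(1)

# Perfectly correlated records (ideal TWB-like case): sigma_gamma -> 0
n = rng.poisson(100, size=100_000)
sigma, gamma = noise_reduction_factor(n, n)

# Two independent Poissonian beams: sigma_gamma -> (1 + gamma)/2 ~ 1
a = rng.poisson(100, size=100_000)
b = rng.poisson(100, size=100_000)
sigma_unc, gamma_unc = noise_reduction_factor(a, b)
print(sigma, sigma_unc)
```

The first case sits exactly at σ γ = 0 (the numerator of Eq. (6) vanishes identically), while the second converges to the classical bound as the number of frames grows.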
In the following we consider different states for the probe and the reference beam to establish the limit to the sensitivity in relevant scenarios.
Let us first focus on states without correlation between probe and reference (e.g. the measurements on the probe and reference beams are performed at two different times, see Fig. 1a,b), so that 〈ΔN P ΔN R 〉 = 0.
• Fock states. It is clear that the only chance for uncorrelated states to achieve the condition σ γ = 0, and hence the UQL according to Eq. (5), is to have null photon-number fluctuations in both the reference and the probe beam, 〈Δ2N R 〉 ≡ 〈Δ2N P 〉 ≡ 0. This means that the state must be the product of two unperturbed Fock states, $$|n{\rangle }_{P}\otimes |n{\rangle }_{R}$$, detected with unit efficiency. Thus, as anticipated, Fock states reach the UQL unconditionally, i.e. for all values of the parameter, with the only limitation that the mean photon number cannot be arbitrarily small19 (i.e. 〈n P 〉 ≥ 1).
$${{\rm{\Delta }}}^{2}{S}_{\alpha }^{(Fock)}\simeq {U}_{uql,\langle {n}_{P}\rangle }^{2}$$
(7)
• Coherent states. Let us now consider the state $$|coh{\rangle }_{P}\otimes |coh{\rangle }_{R}$$, particularly interesting for its simple experimental implementation. In the photon number basis, coherent states have the form $$|coh\rangle ={e}^{-\frac{1}{2}\langle n\rangle }{\sum }_{n\mathrm{=0}}^{\infty }\frac{{\langle n\rangle }^{n\mathrm{/2}}}{\sqrt{n!}}|n\rangle$$, following the Poissonian photon number distribution P coh (n) = e−〈n〉〈n〉n/n!, which has the property 〈Δ2n〉 = 〈n〉. Thus, substituting the variances with the mean values in the right-hand side of Eq. (6) one gets σ γ = (1 + γ)/2, and accordingly:
$${{\rm{\Delta }}}^{2}{S}_{\alpha }^{(coh)}\simeq {U}_{uql,\langle {N}_{P}\rangle }^{2}+\frac{{\mathrm{(1}-\alpha )}^{2}}{\langle {N}_{P}\rangle }\frac{1+\gamma }{\gamma }.$$
(8)
The lower limit for a pair of coherent states is reached under the condition $$\gamma \gg 1$$, i.e. when the reference beam carries much more energy than the transmitted probe, so that the relative fluctuation of its photon number becomes negligible. In this case $${{\rm{\Delta }}}^{2}{S}_{\alpha }^{(coh)}$$ equals the classical lower bound, detection efficiency apart, $${{\rm{\Delta }}}^{2}{S}_{\alpha }^{(coh)}=$$ $$(1-\alpha )/\langle {N}_{P}\rangle ={\eta }_{d}^{-1}{U}_{coh,\langle {n}_{P}\rangle }^{2}$$. In practice, one can also consider an equivalent situation in which the reference uncertainty has been statistically reduced to a negligible contribution by a long acquisition time in the calibration phase (Fig. 1a), namely a time much longer than the one used for the measurement of the probe beam in presence of the sample (Fig. 1b). Indeed, replacing the variable N R with its mean value 〈N R 〉 in the definitions of S α and of σ γ in Eq. (6) leads to an identical sensitivity limit.
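Eq. (8) can be checked by a direct Monte Carlo simulation of the estimator of Eq. (1) with two independent Poissonian beams (the parameter values below are illustrative, not those of the experiment):

```python
import numpy as np

rng = np.random.default_rng(2)
# Illustrative values: true loss, detected probe photons, arm unbalance gamma
alpha, mean_np, gamma = 0.2, 200.0, 4.0
trials = 200_000

n_p_out = rng.poisson((1 - alpha) * mean_np, size=trials)  # transmitted probe
n_r = rng.poisson(gamma * mean_np, size=trials)            # independent reference

s_alpha = 1 - gamma * n_p_out / n_r                        # estimator of Eq. (1)

# Eq. (8): Delta^2 S = [alpha(1-alpha) + (1-alpha)^2 (1+gamma)/gamma] / <N_P>
predicted = (alpha * (1 - alpha) + (1 - alpha) ** 2 * (1 + gamma) / gamma) / mean_np
print(s_alpha.var(), predicted)
```

Increasing γ in this sketch shrinks the second term toward the classical lower bound (1 − α)/〈N P 〉, as stated above.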
More generally, it is convenient to rewrite the noise reduction factor for uncorrelated states in terms of the measurable Fano factors of the two beams in absence of the sample, i.e. σ γ = (F R + γF P )/2. With this substitution, Eq. (5) becomes:
$${{\rm{\Delta }}}^{2}{S}_{\alpha }^{(unc)}\simeq {U}_{uql,\langle {N}_{P}\rangle }^{2}+\frac{{\mathrm{(1}-\alpha )}^{2}}{\langle {N}_{P}\rangle }(\frac{1}{\gamma }{F}_{R}+{F}_{P}).$$
(9)
The measured Fano factors account for the statistics of the light sources, the transmission inefficiency and the detection losses. If 0 ≤ η j ≤ 1 (j = P, R) is the overall channel efficiency, including the detection efficiency η d and the losses between the source and the sample, the Fano factor can be written as $${F}_{j}={\eta }_{j}{F}_{j}^{\mathrm{(0)}}+1-{\eta }_{j}$$, where $${F}_{j}^{\mathrm{(0)}}$$ refers to the unperturbed state of the source. As expected, detection losses deteriorate the non-classical signature of the probe and reference beams, preventing the UQL from being reached in practice even with Fock states.
Considering now joint states in which a correlation between probe and reference is present, i.e. 〈ΔN P ΔN R 〉 ≠ 0 (Fig. 1c,d), we have:
• TWB state. The two-mode twin-beam state generated by SPDC is represented by the following entangled state in the photon number basis {|n〉}56:
$$|TWB{\rangle }_{PR}=[\langle n\rangle +{1]}^{-1/2}\sum _{n=0}^{{\rm{\infty }}}{[\frac{\langle n\rangle }{\langle n\rangle +1}]}^{n/2}|n{\rangle }_{P}|n{\rangle }_{R}.$$
(10)
The two modes separately obey thermal statistics, for which 〈Δ2n〉 = 〈n〉(1 + 〈n〉). However, they are balanced in mean energy, 〈n P 〉 = 〈n R 〉, and their fluctuations are perfectly correlated, 〈Δn P Δn R 〉 = 〈Δ2n〉. This leads to γ = 1 and σ = 0, thus demonstrating that a TWB detected with unit efficiency reaches U uql , according to Eq. (5). Note that this result is independent of the value of the parameter α and of the energy of the probe beam, which can contain less than one photon per mode on average. Indeed, this is usually the case in experiments.
• Classical correlated beams (CCB). Let us consider a bipartite correlated state produced by the unitary splitting of a single beam. Given a splitting ratio 0 ≤ τ ≤ 1, it turns out that the statistics of the two outgoing beams, the probe and the reference, is characterized by γ = τ−1 − 1 and σ γ = (2τ)−1, which are remarkably independent of the photon number distribution of the initial beam. Substituting these values in Eq. (5) leads to the same uncertainty as two uncorrelated coherent beams, $${{\rm{\Delta }}}^{2}{S}_{\alpha }^{(CCB)}={{\rm{\Delta }}}^{2}{S}_{\alpha }^{(coh)}$$, reported in Eq. (8). This shows that classical correlations can never approach the UQL, and that the lowest uncertainty is achieved for a splitting ratio $$\tau \simeq 0$$, corresponding to a strong unbalancing of the beam energies, $$\langle {N}_{P}\rangle \ll \langle {N}_{R}\rangle$$. Therefore, for the specific measurement strategy considered here and whatever the input state, it is convenient to use a highly populated reference beam and a weak probe beam. This result agrees with the behaviour reported by Spedalieri et al.57 in the complementary situation in which the input state is thermal while the measurement strategy is the most general one allowed by quantum mechanics.
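The claim that an ideal TWB saturates the UQL can also be seen numerically. In the sketch below (ours, with illustrative values) the multi-mode probe and reference totals are taken as identical, i.e. perfect photon-number correlation with γ = 1 and σ = 0, the loss acts only on the probe, and the variance of the estimator of Eq. (1) then matches α(1 − α)/〈N P 〉:

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, mean_n, trials = 0.2, 200, 500_000

# Multi-mode TWB totals: probe and reference photon numbers are identical
# (gamma = 1, sigma = 0); the sum over many low-occupation modes is
# near-Poissonian, as in the multi-mode SPDC case discussed later.
n = rng.poisson(mean_n, size=trials)
n_p_out = rng.binomial(n, 1 - alpha)      # loss acts only on the probe

s_alpha = 1 - n_p_out / n                 # Eq. (1) with gamma = 1
u_uql_sq = alpha * (1 - alpha) / mean_n   # squared UQL, Eq. (5) with sigma = 0
print(s_alpha.var(), u_uql_sq)
```

Conditioned on N R = n, the transmitted probe is binomial, so the estimator variance is exactly the binomial term α(1 − α)/n averaged over frames: the only residual noise is the one introduced by the loss itself.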
Finally, to better understand how losses or excess noise of the source influence the final accuracy in a real experiment, we note that the parameter σ γ can be rewritten as $${\sigma }_{\gamma }=\frac{\gamma +1}{2}\sigma +\frac{\gamma -1}{2}({F}_{R}-\gamma {F}_{P})$$. In presence of equal losses in the two branches, η R = η P = η, the noise reduction factor, expressed in terms of the ideal unperturbed one σ(0), is σ = ησ(0) + 1 − η. For the relevant case of a TWB state, F R = F P , γ = 1 and σ(0) = 0, leading to:
$${{\rm{\Delta }}}^{2}{S}_{\alpha ,\eta }^{(TWB)}\simeq {U}_{uql,\langle {N}_{P}\rangle }^{2}+2\frac{{\mathrm{(1}-\alpha )}^{2}}{\langle {N}_{P}\rangle }(1-\eta ).$$
(11)
This expression shows how the degradation of the accuracy in presence of losses prevents reaching the UQL in practice.
On the other side, for γ = 1, balanced CCB (bCCB) fulfill the classical lower bound σ γ = σ = σ(0) = 1; thus, using Eq. (5), we obtain:
$${{\rm{\Delta }}}^{2}{S}_{\alpha ,\eta }^{(bCCB)}\simeq {U}_{uql,\langle {N}_{P}\rangle }^{2}+2\frac{{\mathrm{(1}-\alpha )}^{2}}{\langle {N}_{P}\rangle }=\frac{\mathrm{(1}-\alpha \mathrm{)(2}-\alpha )}{\langle {N}_{P}\rangle }.$$
(12)
Note that in the bCCB case the accuracy is immune to detection losses, but it is always worse than for the TWB reported in Eq. (11).
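Eqs (11) and (12) can be combined into the expected quantum enhancement ΔS α (bCCB)/ΔS α (TWB) as a function of the channel efficiency η. A minimal sketch (ours; α and 〈N P 〉 chosen close to the conditions of our experiment):

```python
import numpy as np

def var_twb(alpha, n_mean, eta):
    """Eq. (11): TWB with balanced channel efficiency eta."""
    return (alpha * (1 - alpha) + 2 * (1 - alpha) ** 2 * (1 - eta)) / n_mean

def var_bccb(alpha, n_mean):
    """Eq. (12): balanced classically correlated beams."""
    return (1 - alpha) * (2 - alpha) / n_mean

alpha, n_mean = 0.02, 5e5   # roughly the conditions of the experiment below
for eta in (1.0, 0.76, 0.5):
    enhancement = np.sqrt(var_bccb(alpha, n_mean) / var_twb(alpha, n_mean, eta))
    print(f"eta = {eta}: Delta S_bCCB / Delta S_TWB = {enhancement:.2f}")
```

At η = 0.76 the predicted enhancement is close to 2, consistent with the measured value of 2.00 ± 0.16 reported for our set-up; at η = 1 it grows to $$\sqrt{(2-\alpha )/\alpha }$$, i.e. about 10 for α = 2%.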
Up to now we have analyzed the performance of the specific estimator in Eq. (1), showing that it reaches the optimal limits both for classical and quantum states; in particular, using the TWB state the UQL is retrieved. However, other estimators have been considered in the literature for absorption measurements with TWB. An interesting alternative is the estimator used in the recent experiment by Moreau et al.27,
$${S}_{\alpha }^{^{\prime} }=1-\frac{{N}_{P}^{^{\prime} }-k{\rm{\Delta }}{N}_{R}+\delta E}{\langle {N}_{P}\rangle },$$
(13)
where the weight factor k can be determined so as to minimize the uncertainty on $${S}_{\alpha }^{^{\prime} }$$, while δE is a small correction introduced to render the estimator unbiased. However, k and δE need to be estimated in a pre-calibration phase of the apparatus. In particular, it turns out that k opt is a function of the detection efficiencies of the channels and of the local excess noise, k opt = f(η P , η R , F P , F R ), while δE depends also on the measured covariance 〈ΔN P ΔN R 〉. We have evaluated analytically, in the general case and with the only hypothesis of balanced sources, the expected uncertainty of the estimator in Eq. (13) when k = k opt . For the sake of simplicity, here we report the expression obtained for symmetric statistical properties of the channels, γ = 1 and F P = F R = F:
$${{\rm{\Delta }}}^{2}{S}_{\alpha }^{{}^{{\rm{^{\prime} }}}}={U}_{uql,\langle {N}_{P}\rangle }^{2}+\frac{{(1-\alpha )}^{2}}{\langle {N}_{P}\rangle }\sigma (2-\frac{\sigma }{F}).$$
(14)
For TWB and lossless detection, the noise reduction factor σ is identically null and the UQL is retrieved with this estimator as well. Taking into account balanced detection losses, and the common experimental case of a mean photon number per mode much smaller than one, one can substitute σ = 1 − η and $$F\simeq 1$$ in Eq. (14). The uncertainty then becomes:
$${{\rm{\Delta }}}^{2}{S}_{\alpha ,\eta }^{^{\prime} (TWB)}={U}_{uql,\langle {N}_{P}\rangle }^{2}+\frac{{\mathrm{(1}-\alpha )}^{2}}{\langle {N}_{P}\rangle }(1-{\eta }^{2}).$$
(15)
Comparing the uncertainty in Eq. (15) with the one reported in Eq. (11) makes clear that the estimator $${S}_{\alpha }^{^{\prime} }$$ proposed in ref.27 performs better than S α , especially when detection losses are considerable.
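The comparison can be made explicit by looking at the excess-noise terms of Eqs (11) and (15), both in units of (1 − α)²/〈N P 〉: 2(1 − η) for S α versus 1 − η² = (1 − η)(1 + η) for $${S}_{\alpha }^{^{\prime} }$$, so the latter is never larger. A quick numerical illustration (η values are ours):

```python
# Excess noise beyond the UQL, in units of (1 - alpha)^2 / <N_P>:
#   S_alpha  -> 2(1 - eta)     [Eq. (11)]
#   S'_alpha -> 1 - eta**2     [Eq. (15)] = (1 - eta)(1 + eta) <= 2(1 - eta)
for eta in (0.95, 0.76, 0.5):
    print(f"eta = {eta}: {2 * (1 - eta):.3f} vs {1 - eta ** 2:.3f}")
```

The gap between the two terms vanishes as η → 1, which is why the two estimators perform comparably in the high-efficiency regime of our experiment.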
Finally, Brambilla et al.53 suggested to measure the absorption by a differential measurement, considering the following estimator:
$${S^{\prime\prime} }_{\alpha }=\frac{{N}_{R}-\gamma {N}_{P}^{^{\prime} }}{\langle {N}_{R}\rangle }.$$
(16)
For a source producing a pair of beams with the same local statistical properties, the variance of $${S}_{\alpha }^{^{\prime\prime} }$$ can be calculated as:
$${{\rm{\Delta }}}^{2}{S}_{\alpha }^{^{\prime\prime} }=\frac{\mathrm{[2(1}-\alpha ){\sigma }_{\gamma }+\alpha +({F}_{R}-\mathrm{1)}{\alpha }^{2}]}{\gamma \langle {N}_{P}\rangle }.$$
(17)
However, this choice is not optimal and depends on the value of the measured local statistics: in the best case of an unperturbed TWB, for which σ γ = 0 and γ = 1, it approaches U uql only asymptotically, for $${F}_{R}{\alpha }^{2} \sim 0$$. In TWB produced experimentally by SPDC, the statistics of each mode is thermal with a photon number per mode much smaller than one, thus $${F}_{R}\simeq 1$$ and the condition reduces to $$\alpha \sim 0$$. Conversely, for high values of the estimated loss, $$\alpha \sim 1$$, the performance of this estimator is much worse than that of S α and $${S}_{\alpha }^{^{\prime} }$$.
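This behaviour can be illustrated by a quick simulation (ours, illustrative values): for perfectly correlated near-Poissonian totals (σ γ = 0, γ = 1, F R ≈ 1), Eq. (17) reduces to Δ²S″ α = α/〈N P 〉, which for large α exceeds the variance α(1 − α)/〈N P 〉 attained by S α on the same data.

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, mean_n, trials = 0.5, 200, 500_000   # large loss, illustrative values

n = rng.poisson(mean_n, size=trials)        # N_R = N_P: perfect correlation, F_R ~ 1
n_p_out = rng.binomial(n, 1 - alpha)        # probe after the loss

s2 = (n - n_p_out) / mean_n                 # differential estimator, Eq. (16), gamma = 1
s1 = 1 - n_p_out / n                        # ratio estimator, Eq. (1)
print(s2.var(), alpha / mean_n)                  # Eq. (17) here: alpha/<N_P>
print(s1.var(), alpha * (1 - alpha) / mean_n)    # UQL: smaller by a factor 1 - alpha
```

For α = 0.5 the differential estimator is already a factor of two noisier than the ratio estimator, and the gap widens as α → 1.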
## Experiment
A scheme of the experimental set-up is reported in Fig. 2.
A CW laser beam (10 mW at λ pump = 405 nm) pumps a 1 cm Type-II Beta-Barium-Borate (BBO) non-linear crystal, where SPDC occurs and two beams with perfect correlation in the photon number are generated. Note that the state |Ψ〉 produced by the SPDC process is intrinsically multi-mode and can be expressed, in the plane-wave pump approximation, as a tensor product of two-mode TWB states of the form in Eq. (10): $$|{\rm{\Psi }}\rangle ={\otimes }_{{\bf{q}},\lambda }|TWB{\rangle }_{{\bf{q}},\lambda }$$, where q and λ are respectively the transverse momentum and the wavelength of one of the two photons produced, while the momentum and wavelength of the other photon are fixed by energy and momentum conservation.
The far field of the emission is realized at the focal plane of a lens with f FF = 1 cm focal length. A second lens, with f IM = 1.6 cm, then images the far field plane onto the detection plane; the magnification factor is M = 7.8. The detector is a charge-coupled-device (CCD) camera, Princeton Inst. Pixis 400BR Excelon, operating in linear mode and cooled down to −70 °C. It presents high quantum efficiency (nominally > 95% at 810 nm), 100% fill factor and low noise (the read noise has been estimated around 5 e−/(pixel·s)). The physical pixel of the camera measures 13 μm; however, since we are not interested in resolution, we group the pixels by 24 × 24 hardware binning. This allows us to reduce the acquisition time and the effects of the read-out noise. Just after the crystal, an interference filter ((800 ± 20) nm, 99% transmittance) is positioned to select only the modes with frequencies around degeneracy, λ d = 2λ pump . This choice allows the presence of different spatial modes; in our case $${M}_{sp} \sim 2500$$ spatial modes impinge on each detection area, S P and S R , where the P and R subscripts refer to the probe and reference beam, respectively. We integrate the signals in S R and in S P . The sample consists of a coated glass slide with a deposition of variable absorption coefficient α intercepting the probe beam in the focal plane. We consider values of α from 1% to 70%. Finally, in order to check the theoretical model at varying η R and η P , neutral filters of different absorption can be positioned on the beam paths.
The acquisition time of a single frame is set to 100 ms, whilst the coherence time of the SPDC process is around 10−12 s; thus the number of detected temporal modes is approximately $${M}_{t} \sim {10}^{11}$$. Since in each detection area we register around $$\langle {N}_{P}\rangle \sim 50\cdot {10}^{4}$$ photons per frame, it follows that the occupation number of a single spatio-temporal mode is $$\mu \sim 2\cdot {10}^{-9}$$ photons/mode. Since $$\mu \ll 1$$, the statistics of a single mode is well modeled by a Poissonian distribution: it follows that, if only one beam is considered, the measurements are shot-noise limited.
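These orders of magnitude follow from a one-line estimate using the frame time, coherence time and mode counts quoted above:

```python
# Order-of-magnitude check of the single-mode occupation number.
frame_time = 100e-3                        # s, single-frame acquisition time
coherence_time = 1e-12                     # s, SPDC coherence time
m_temporal = frame_time / coherence_time   # ~1e11 temporal modes per frame
m_spatial = 2500                           # spatial modes per detection area
n_detected = 50e4                          # detected photons per frame per area
mu = n_detected / (m_spatial * m_temporal) # photons per spatio-temporal mode
print(mu)                                  # ~2e-9: Poissonian single-mode statistics
```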
However, it is possible to go beyond the shot-noise limit by exploiting the photon number correlation between pairs of correlated modes. In the plane-wave pump approximation with transverse momentum q pump = 0, in the far field region any mode with transverse momentum q is associated with a single position x according to the relation $${\bf{x}}=\frac{2c{f}_{FF}}{{\omega }_{pump}}{\bf{q}}$$, where c is the speed of light, f FF the focal length of the first lens and ω pump the laser frequency. The exact phase-matching condition for correlated modes, q P + q R = q pump = 0, becomes in the far field, for degenerate wavelengths λ P = λ R = 2λ pump , a condition on their positions: x P + x R = 0. Under the hypothesis of a plane-wave pump it is therefore expected that two pixels of the camera symmetric with respect to the pump direction always detect the same number of photons. For a realistic pump with a certain spread Δq it follows that $${{\boldsymbol{x}}}_{P}+{{\boldsymbol{x}}}_{R}=0\pm {\rm{\Delta }}{\boldsymbol{x}}=\pm \,\frac{2c{f}_{FF}}{{\omega }_{pump}}{\rm{\Delta }}{\boldsymbol{q}}$$. Δx represents the far-field size of the so-called coherence area, A coh , the area in which photons from correlated modes are collected. Moreover, the non-null frequency bandwidth (about 40 nm in our experiment) determines a further broadening of the spot in which correlated detection events occur. To measure the size of A coh experimentally, the spatial cross-correlation between the two beams can be considered30. Its evaluation is important for comparison with the detection area A det since, to detect a significant level of correlation, it is necessary that A det ≥ A coh . In our case, integrating over the two regions of interest, this condition is fully fulfilled; indeed $${A}_{det}\gg {A}_{coh}$$ holds. In general, the measured NRF can be modeled as58:
$${\sigma }_{\gamma }=\frac{1+\gamma }{2}-{\eta }_{R}{\eta }_{coll}\ge \mathrm{0,}$$
(18)
where two contributions are present.
• 0 ≤ η R ≤ 1, the total efficiency of the reference optical path.
• 0 ≤ η coll ≤ 1, the collection efficiency of correlated photons. This factor represents approximately the probability that, given a detected photon in S R , its “twin” falls in S P .
In our experimental situation, since $${S}_{P}={S}_{R}\gg {A}_{coh}$$, it follows that η coll → 1 and consequently $${\sigma }_{\gamma }=\frac{1+\gamma }{2}-{\eta }_{R}$$. Inverting this relation offers a useful way to measure the total efficiencies (Klyshko heralding efficiencies) of the two channels, without the need of comparison with calibrated devices59. In the experimental situation corresponding to Fig. 3 we measured σ γ = 0.24 ± 0.03 and γ = 1.006, which implies overall heralding efficiencies η R = η P = 0.76, as reported in the caption. The same method has been adopted to evaluate the efficiencies in the other cases, reported in Figs 4 and 5.
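The inversion is a one-liner; with the values quoted for Fig. 3 (σ γ = 0.24, γ = 1.006) it indeed recovers the stated heralding efficiency:

```python
def heralding_efficiency(sigma_gamma, gamma):
    """Invert sigma_gamma = (1 + gamma)/2 - eta_R (valid for eta_coll -> 1)."""
    return (1 + gamma) / 2 - sigma_gamma

# Values measured in the situation of Fig. 3:
print(heralding_efficiency(0.24, 1.006))   # ~ 0.763, reported as 0.76
```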
In all these figures, the mean values of α (x-axis) and the corresponding uncertainties Δα (y-axis) have been obtained by acquiring 200 frames with the absorbing sample inserted; the error bars have been estimated by repeating each measurement 10 times. In particular, for each frame, we integrate the data on S R and S P , suitably corrected for the background, obtaining N R and N P , necessary for the estimation of the mean absorption α according to the different estimators considered, in Eqs (1)–(16).
To reproduce the single-mode classical strategy we performed a calibration measurement without the sample, obtaining 〈N R 〉; we then estimate α as:
$${S}_{\alpha }^{(unc)}=1-\gamma \frac{{N}_{P}^{^{\prime} }}{\langle {N}_{R}\rangle }.$$
(19)
For ideal Poissonian statistics of the probe, this strategy leads to the classical lower bound U coh (see Theory section). In our experiment, the Poissonian behavior is guaranteed by the condition $$\mu \ll 1$$, as discussed before.
Finally to reproduce the bCCB case we consider a different region of the detector $${S}_{R}^{^{\prime} }$$, displaced from S R and only classically correlated with S P .
Note that γ, σ γ , F P and F R can also be evaluated simply from the calibration measurement.
## Results and Discussion
In Eqs (11) and (15) we explicitly reported the uncertainty achieved by TWB for the estimators $${S}_{\alpha }^{(TWB)}$$ and $${S}_{\alpha }^{^{\prime} (TWB)}$$, respectively, in case of balanced total efficiencies in the probe and reference beams. The unbalanced case leads to cumbersome analytical expressions, so we report this situation graphically in Fig. 6. The uncertainties of the two estimators are compared, at varying η R and fixed η P , with the classical lower bound $${U}_{coh,\langle {N}_{P}\rangle }$$, evaluated for the same number of detected photons. It emerges that for η R = 1 the two estimators offer exactly the same quantum enhancement, maximal for $$\alpha \ll 1$$. Nonetheless, for η R ≠ 1 but sufficiently large, the performances of the two estimators remain comparable. Instead, when η R < 0.5 the uncertainty on $${S}_{\alpha }^{(TWB)}$$ becomes greater than the classical one, whereas $${\rm{\Delta }}{S}_{\alpha }^{^{\prime} (TWB)}$$ always remains below it. Note that in Fig. 6 we fix η P = 0.76 (the value of our experiment) and consider the dependence on η R . The opposite situation, where η R is kept fixed, is not reported: in this case ΔS α and Δ$${S}_{\alpha }^{^{\prime} }$$ behave similarly over the whole variability range of η P , and are always below $${U}_{coh,\langle {N}_{P}\rangle }$$.
These different regimes at varying η R have been experimentally explored with our set-up, and the results are shown in Figs 3–5. In these figures, considering different estimators, the uncertainty on α is reported as a function of its mean value. The three situations differ only in the value of η R considered. The solid lines are the theoretical curves of Eqs (9), (5) and (17), and the equivalent of Eq. (14) in the general case of γ ≠ 1, where the experimental values of the quantities σ γ , F P , F R , γ have been substituted. The markers represent the experimental data, which are in good agreement with our theoretical model describing experimental imperfections. The black curves stand for significant limits, obtained with ideal states. The dotted-dashed line is the fundamental quantum limit U uql = [α(1 − α)/〈n P 〉]1/2, achievable with TWB and unit efficiencies. The dashed line is the classical lower bound calculated for the actual number of detected photons, $${U}_{coh,\langle {N}_{P}\rangle }$$, while the dotted line is the classical limit in the two-mode balanced case, $${\rm{\Delta }}{S}_{\alpha }^{(bCCB)}$$. Figure 3 also reports the classical lower bound assuming no losses after the sample, $${U}_{coh,\langle {n}_{P}\rangle }\mathrm{=[(1}-\alpha )/\langle {n}_{P}\rangle {]}^{\mathrm{1/2}}$$, where 〈n P 〉 is the number of probe photons interacting with the sample. This quantity can be easily estimated as $$\langle {n}_{P}\rangle =\langle {N}_{P}\rangle {\eta }_{d}^{-1}$$, where η d represents the detection efficiency after the sample. The obtained value η d = 0.80 ± 0.01 takes into account transmission and collection losses through all the optical elements after the sample (a lens, an interference filter and the quantum efficiency of our CCD camera). The efficiency of the camera with the filter placed in front of it has been measured experimentally using the technique presented in ref.58 (η CCD = 0.84 ± 0.01).
Although the non-unitary experimental efficiencies lead to a remarkable detachment from the UQL, for $$\alpha \sim \mathrm{2 \% }$$ we still obtain a significant quantum enhancement: $${U}_{coh,\langle {N}_{P}\rangle }/{\rm{\Delta }}{S^{\prime} }_{\alpha }=1.51\pm 0.13$$ and $${\rm{\Delta }}{S}_{\alpha }^{(bCCB)}/{\rm{\Delta }}{S^{\prime} }_{\alpha }=2.00\pm 0.16$$. The comparison with the classical lower bound assuming ideal detection efficiency leads to $${U}_{coh,\langle {n}_{P}\rangle }/{\rm{\Delta }}{S^{\prime} }_{\alpha }=1.32\pm 0.14$$.
The comparison with the two-mode classical strategy ($${S}_{\alpha }^{(bCCB)}$$) is of particular interest since the two-beam approach allows compensating unavoidable drifts and instabilities of source and detectors, leading to an unbiased estimation of α, i.e. one not affected by temporal drifts of the experimental set-up. The estimators S α and $${S}_{\alpha }^{^{\prime\prime} }$$ do not require knowledge of the individual absolute power of the source or of the detector response, but only a measurement of the average arm unbalance in absence of the object, $$\gamma =\frac{\langle {N}_{R}\rangle }{\langle {N}_{P}\rangle }$$; the condition for an unbiased estimator is the stability of this parameter. Experimentally, this is much less demanding than controlling the power stability of the individual probe beam (i.e. keeping 〈N P 〉 constant over time) and the detector response, as required in the direct/single-beam case. Indeed, the factors affecting the source and the detectors are expected to act in the same way on the probe and reference channels.
On the other side, $${S}_{\alpha }^{^{\prime} }$$, and in particular the calculation of k opt and δE, requires knowledge of the absolute values of both efficiencies η R and η P , which include the optical transmission and the detector quantum efficiency. The latter is usually obtained by comparison with calibrated radiometric standards. Alternatively, the efficiencies can be determined from the same SPDC set-up by using extensions of Klyshko’s method58,59,60. This second approach is the one used in the present paper: as described after Eq. (18), the absolute arm efficiencies can be extracted from the measured value of σ γ . In any case, an uncertainty smaller than 10−3 is quite challenging in the calibration of detectors operating at low optical power. Inaccuracy in the determination of these parameters, although it does not propagate directly to the loss estimation, could somehow affect the optimality of $${S}_{\alpha }^{^{\prime} }$$. Furthermore, $${S}_{\alpha }^{^{\prime} }$$ could be affected by drifts in the mean value 〈N P 〉, as happens for the single-mode strategy.
## Conclusion
We address the question of loss estimation and analyze different measurement strategies. In particular, we show that with a simple photon-number measurement of the TWB state it is possible to approach the ultimate quantum limit of sensitivity in case of perfect detection efficiency. The experiment reports the best sensitivity per photon ever achieved in loss estimation without any kind of data post-selection. Indeed, as far as we know, the best previously reported result is a quantum enhancement of 1.21 ± 0.02, recently achieved by Moreau et al.27. Other transmission-based experiments demonstrating significant quantum-enhanced sensitivity are present in the literature, e.g. ref.39; however, their results are not directly comparable with ours, since the uncertainty on the absorption coefficient is not reported.
In particular, we double the sensitivity of the conventional classical two-beam approach and surpass by more than 50% the sensitivity of the coherent case. The advantage, considering perfect detection efficiency of the classical beam after the sample, reduces to 32%. At the same time, these results accurately confirm the theoretical model accounting for experimental imperfections.
The estimator S α in Eq. (1)24 is compared, both theoretically and experimentally, with other estimators from the literature (see Eqs (13) and (16)) in presence of experimental imperfections (e.g. non-unitary detection efficiency). Although in case of high detection losses the estimator $${S}_{\alpha }^{^{\prime} }$$ in Eq. (13) has the smallest uncertainty, it turns out that where the quantum enhancement is significant, i.e. for sufficiently high efficiencies, S α and $${S}_{\alpha }^{^{\prime} }$$ offer approximately the same quantum enhancement. Moreover, we argue that S α , besides its simple form, has several practical advantages. On the one side, it is robust to the unavoidable experimental drifts of sources and detectors, leading to an unbiased estimate. On the other side, it does not require an absolute calibration of the detection efficiency. These features are of the utmost importance in view of real applications.
## References
1. Cone, M. T. et al. Measuring the absorption coefficient of biological materials using integrating cavity ring-down spectroscopy. Optica 2(2), 162–168 (2015).
2. Cheong, W. F., Prahl, S. A. & Welch, A. J. A review of the optical properties of biological tissues. IEEE Journal of Quantum Electronics 26(12), 2166–2185 (1990).
3. Gu, M., Li, X. & Cao, Y. Optical storage arrays: a perspective for future big data storage. Light: Science & Applications 3, e177 (2014).
4. Koningsberger, D. C. & Prins, R. X-ray absorption: principles, applications, techniques of EXAFS, SEXAFS, and XANES. New York: Wiley (1988).
5. Weller, H. Quantized semiconductor particles: a novel state of matter for materials science. Advanced Materials 5(2), 88–95 (1993).
6. Savage, B. D. & Sembach, K. R. Interstellar abundances from absorption-line observations with the Hubble Space Telescope. Annual Review of Astronomy and Astrophysics 34(1), 279–329 (1996).
7. Hebden, J. C., Arridge, S. R. & Delpy, D. T. Optical imaging in medicine: I. Experimental techniques. Physics in Medicine and Biology 42(5), 825 (1997).
8. Jacques, S. L. Optical properties of biological tissues: a review. Physics in Medicine and Biology 58(11), R37 (2013).
9. Zonios, G. et al. Melanin absorption spectroscopy: new method for noninvasive skin investigation and melanoma detection. Journal of Biomedical Optics 13(1), 014017 (2008).
10. Edner, H., Ragnarson, P., Spännare, S. & Svanberg, S. Differential optical absorption spectroscopy (DOAS) system for urban atmospheric pollution monitoring. Applied Optics 32(3), 327–333 (1993).
11. Schiff, H. I., Mackay, G. I. & Bechara, J. The use of tunable diode laser absorption spectroscopy for atmospheric measurements. Research on Chemical Intermediates 20(3–5), 525–556 (1994).
12. Bowling, D. R., Sargent, S. D., Tanner, B. D. & Ehleringer, J. R. Tunable diode laser absorption spectroscopy for stable isotope studies of ecosystem-atmosphere CO2 exchange. Agricultural and Forest Meteorology 118(1), 1–19 (2003).
13. Mehrotra, R. Infrared Spectroscopy, Gas Chromatography/Infrared in Food Analysis. John Wiley & Sons, Ltd (2000).
14. Nicolai, B. M. et al. Nondestructive measurement of fruit and vegetable quality by means of NIR spectroscopy: A review. Postharvest Biology and Technology 46(2), 99–118 (2007).
15. Giovannetti, V., Lloyd, S. & Maccone, L. Advances in quantum metrology. Nature Photonics 5(4), 222–229 (2011).
16. Demkowicz-Dobrzański, R., Jarzyna, M. & Kołodyński, J. Quantum Limits in Optical Interferometry. Progress in Optics 60, 345–435 (2015).
17. Braun, D. et al. Quantum enhanced measurements without entanglement. arXiv preprint arXiv:1701.05152 (2017).
18. Monras, A. & Paris, M. G. Optimal quantum estimation of loss in bosonic channels. Physical Review Letters 98(16), 160401 (2007).
19. Adesso, G., Dell’Anno, F., De Siena, S., Illuminati, F. & Souza, L. A. M. Optimal estimation of losses at the ultimate quantum limit with non-Gaussian states. Physical Review A 79(4), 040305 (2009).
20. Brida, G. et al. Experimental realization of a low-noise heralded single-photon source. Optics Express 19(2), 1484–1492 (2011).
21. Krapick, S. et al. An efficient integrated two-color source for heralded single photons. New Journal of Physics 15, 033010 (2013).
22. Whittaker, R. et al. Absorption spectroscopy at the ultimate quantum limit from single-photon states. New Journal of Physics 19(2), 023013 (2017).
23. Sabines-Chesterking, J. Sub-Shot-Noise Transmission Measurement Enabled by Active Feed-Forward of Heralded Single Photons. Physical Review Applied 8, 014016 (2017).
24. Jakeman, E. & Rarity, J. G. The use of pair production processes to reduce quantum noise in transmission measurements. Optics Communications 59(3), 219–223 (1986).
25. Tapster, P., Seward, S. & Rarity, J. Sub-shot-noise measurement of modulated absorption using parametric down-conversion. Physical Review A 44, 3266 (1991).
26. Hayat, M. M., Joobeur, A. & Saleh, B. E. Reduction of quantum noise in transmittance estimation using photon-correlated beams. The Journal of the Optical Society of America A 16, 348–358 (1999).
27. Moreau, P. A. et al. Demonstrating an absolute quantum advantage in direct absorption measurement. Scientific Reports 7, 6256 (2017).
28. Brida, G., Genovese, M. & Ruo-Berchera, I. Experimental realization of sub-shot-noise quantum imaging. Nature Photonics 4(4), 227–230 (2010).
29. Brida, G., Genovese, M., Meda, A. & Ruo-Berchera, I. Experimental quantum imaging exploiting multimode spatial correlation of twin beams. Physical Review A 83, 033811 (2011).
30. 30.
Samantaray, N., Ruo-Berchera, I., Meda, A. & Genovese, M. Realisation of the first sub shot noise wide field microscope. Light: Science & Applications 6, e17005 (2017).
31. 31.
Genovese, M. Real applications of quantum imaging. Journal of Optics 18, 073002 (2016).
32. 32.
Zhang, Z., Mouradian, S., Wong, F. N. C. & Shapiro J. H. Entanglement-enhanced sensing in a lossy and noisy environment. Physical Review Letter 114, 110506 (2015).
33. 33.
Lopaeva, E. D. et al. Experimental Realization of Quantum Illumination. Physical Review Letter 110, 153603 (2013).
34. 34.
Pooser, R. C. & Lawrie, B. Ultrasensitive measurement of microcantilever displacement below the shot-noise limit. Optica 2, 393–399 (2015).
35. 35.
Clark, J. B., Zhou, Z., Glorieux, Q., Marino, M. A. & Lett, P. D. Imaging using quantum noise properties of light. Optics Express 20(15), 17050 (2012).
36. 36.
Brida, G. et al. Systematic analysis of signal-to-noise ratio in bipartite ghost imaging with classical and quantum light. Physical Review A 83, 063807 (2011).
37. 37.
Pirandola, S. Quantum Reading of a Classical DigitalMemory. Physical Review Letter 106, 090504 (2011).
38. 38.
Lawrie, B. J., Evans, P. G. & Pooser, R. C. Extraordinary optical transmission of multimode quantum correlations via localized surface plasmons. Physical Review Letter 110, 156802 (2013).
39. 39.
Pooser, R. C. & Lawrie, B. Plasmonic trace sensing below the photon noise limit. ACS Photonics 3(1), 8–13 (2016).
40. 40.
Meda, A. et al. Photon-number correlation for quantum enhanced imaging and sensing. Journal of Optics 19, 094002 (2017).
41. 41.
Glorieux, Q., Guidoni, L., Guibal, S., Likforman, J. P. & Coudreau, T. Quantum correlations by four-wave mixing in an atomic vapor in a nonamplifying regime: Quantum beam splitter for photons. Physical Review A 84, 053826 (2011).
42. 42.
Embrey, C. S., Turnbull, M. T., Petrov, P. G. & Boyer, V. Observation of Localized Multi-Spatial-Mode Quadrature Squeezing. Physical Review X 5, 031004 (2015).
43. 43.
Cao, L. et al. Experimental observation of quantum correlations in four-wave mixing with a conical pump. Optics Letter 42(7), 1201–1204 (2017).
44. 44.
Boyer, V., Marino, A. M., Pooser, R. C. & Lett, P. D. Entangled images from four-wave mixing. Science 321, 544 (2008).
45. 45.
Heidmann, A., Horowicz, R. J., Reynaud, S., Giacobino, E. & Fabre, C. Observation of quantum noise reduction on twin laser beams. Physical Review Letters 59, 2555–2557 (1987).
46. 46.
Mertz, J., Heidmann, A., Fabre, C., Giacobino, E. & Reynaud, S. Observation of high-intensity sub-poissonian light using an optical parametric oscillator. Physical Review Letters 64, 2897 (1990).
47. 47.
Agafonov, I. N. et al. Absolute of photodetectors: photocurrent multiplication versus photocurrent subtraction. Optics Letters 36(8), 1329–1331 (2011).
48. 48.
Bondani, M., Allevi, A., Zambra, G., Paris, M. & Andreoni, A. Sub-shot-noise photon-number correlation in a mesoscopic twin beam of light. Physical Review A 76, 013833 (2007).
49. 49.
Iskhakov, T. S. et al. Heralded source of bright multi-mode mesoscopic sub-Poissonian light. Optics Letters 41, 2149–2152 (2016).
50. 50.
Jedrkiewicz, O. et al. Detection of Sub-Shot-Noise Spatial Correlation in High-Gain Parametric Down Conversion. Physical Review Letter 93, 243601 (2004).
51. 51.
Brida, G. et al. Measurement of sub shot-noise spatial correlations without background subtraction. Physical Review Letter 102, 213602 (2009).
52. 52.
Blanchet, J. L., Devaux, F., Furfaro, L. & Lantz, E. Measurement of sub-shot-noise correlations of spatial fluctuations in the photon-counting regime. Physical Review Letter 101, 233604 (2008).
53. 53.
Brambilla, E., Caspani, L., Jedrkiewicz, O., Lugiato, L. A. & Gatti, A. High-sensitivity imaging with multi-mode twin beams. Physical Review A 77(5), 053807 (2008).
54. 54.
Monras, A. & Illuminati, F. Measurement of damping and temperature: Precision bounds in Gaussian dissipative channels. Physical Review A 83(1), 012315 (2011).
55. 55.
Palms, J. M., Rao, P. V. & Wood, R. E. A Fano factor measurement for silicon using low energy photons. Nuclear Instruments and Methods 76(1), 59–60 (1969).
56. 56.
Lvovsky, A. I. Squeezed light. arXiv:1401.4118v1 (2014).
57. 57.
Spedalieri, G., Braunstein, S. L. & Pirandola, S. Thermal Quantum Metrology, arXiv:1602.05958 (2016).
58. 58.
Meda, A. et al. Absolute calibration of a charge-coupled device camera with twin beams. Applied Physics Letter 105, 101113 (2014).
59. 59.
Brida, G., Degiovanni, I. P., Genovese, M., Rastello, M. L. & Ruo-Berchera, I. Detection of multimode spatial correlation in PDC and application to the absolute calibration of a CCD camera. Optics Express 18, 20572–20584 (2010).
60. 60.
Avella, A., Ruo-Berchera, I., Degiovanni, I. P., Brida, G. & Genovese, M. Absolute calibration of an EMCCD camera by quantum correlation, linking photon counting to the analog regime. Optics Letters 41, 1841–4 (2016).
Download references
## Acknowledgements
This work has received funding from the European Union’s Horizon 2020 and the EMPIR Participating States in the context of the project 17FUN01 BeCOMe. The Authors thank I.P. Degiovanni, S. Pirandola and C. Lupo for elucidating discussion and N. Samantaray for his help in setting the preliminary phase of the experiment.
## Author information
### Contributions
I.R.B. and A.M. conceived the idea of the experiment, which was designed and discussed with input from all authors. I.R.B. and E.L. developed the theoretical model. E.L. and A.M. realized the experimental setup and collected the data in INRIM quantum optics labs (coordinated by M.G.). All authors discussed the results and contributed to the writing of the paper. All authors reviewed the manuscript.
### Corresponding author
Correspondence to Ivano Ruo-Berchera.
## Ethics declarations
### Competing Interests
The authors declare no competing interests.
## Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
## About this article
### Cite this article
Losero, E., Ruo-Berchera, I., Meda, A. et al. Unbiased estimation of an optical loss at the ultimate quantum limit with twin-beams. Sci Rep 8, 7431 (2018). https://doi.org/10.1038/s41598-018-25501-w
## Further reading
• Pradyumna, S. T. et al. Twin beam quantum-enhanced correlated interferometry for testing fundamental physics. Communications Physics (2020).
• Ruo-Berchera, I., Meda, A., Losero, E., Avella, A., Samantaray, N. & Genovese, M. Improving resolution-sensitivity trade off in sub-shot noise quantum imaging. Applied Physics Letters (2020).
• Allen, E. J., Sabines-Chesterking, J., McMillan, A. R., Joshi, S. K., Turner, P. S. & Matthews, J. C. F. Approaching the quantum limit of precision in absorbance estimation using classical resources. Physical Review Research (2020).
• Mueller, J. D., Samantaray, N. & Matthews, J. C. F. A practical model of twin-beam experiments for sub-shot-noise absorption measurements. Applied Physics Letters (2020).
• Ortolano, G., Ruo-Berchera, I. & Predazzi, E. Quantum enhanced imaging of nonuniform refractive profiles. International Journal of Quantum Information (2019).
https://www.hindimaintutorial.in/bus-full-of-passengers-solution-codechef/
There is an empty bus with M seats and a total of N people, numbered from 1 to N. Everyone is currently outside the bus. You are given a sequence of Q events of the following form.
• + i : It denotes that the person i enters the bus.
• − i : It denotes that the person i leaves the bus.
It is guaranteed in the input that each person from 1 to N enters the bus at most once as well as leaves it at most once.
Determine whether the sequence of events is consistent or not (i.e. no person leaves the bus before entering and the number of passengers in the bus does not exceed M at any point of time).
### Input Format
• The first line of the input contains a single integer T denoting the number of test cases. The description of T test cases follows.
• Each test case contains Q+1 lines of input.
• The first line of each test case contains three space-separated integers N, M, Q.
• Q lines follow. For each valid j, the j-th of these lines contains a character ch, followed by a space and an integer i. Here ch is either '+' or '−' and 1 ≤ i ≤ N.
• It is guaranteed that "+ i" and "− i" each appear at most once for every 1 ≤ i ≤ N.
### Output Format
For each test case, print a single line containing one string – "Consistent" (without quotes) if the sequence is consistent, "Inconsistent" (without quotes) otherwise.
### Constraints
• 1 ≤ T ≤ 20
• 1 ≤ N ≤ 10^4
• 1 ≤ M ≤ 10^4
• 1 ≤ Q ≤ 10^4
### Sample Input 1
2
2 1 4
+ 1
+ 2
- 1
- 2
3 2 6
+ 2
+ 1
- 1
+ 3
- 3
- 2
### Sample Output 1
Inconsistent
Consistent
### Explanation
• Test case 1: After Person 2 enters the bus, there are two people inside the bus while the capacity of the bus is 1.
### Sample Input 2
2
100 10 5
+ 1
+ 2
- 3
+ 3
- 2
6 4 4
+ 3
+ 2
+ 1
+ 4
### Sample Output 2
Inconsistent
Consistent
### Explanation
• Test case 1: Person 3 leaves the bus without entering and thus it is inconsistent.
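The simulation described in the statement can be sketched directly (a sketch, not an official CodeChef solution; input parsing is omitted): keep a set of people currently on the bus and reject the sequence the moment either rule is violated.

```python
def is_consistent(m, events):
    """events: list of (ch, i) pairs with ch in {'+', '-'}; m is the capacity."""
    on_bus = set()
    for ch, i in events:
        if ch == '+':
            on_bus.add(i)
            if len(on_bus) > m:      # more passengers than seats
                return False
        else:
            if i not in on_bus:      # leaving before entering
                return False
            on_bus.remove(i)
    return True

# The two test cases of Sample 1:
print("Consistent" if is_consistent(1, [('+', 1), ('+', 2), ('-', 1), ('-', 2)]) else "Inconsistent")  # Inconsistent
print("Consistent" if is_consistent(2, [('+', 2), ('+', 1), ('-', 1), ('+', 3), ('-', 3), ('-', 2)]) else "Inconsistent")  # Consistent
```

Each event is handled in O(1) on average, so one test case costs O(Q), comfortably within the 10^4 bounds.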
https://zbmath.org/?q=an:06999983
## TacticToe: learning to reason with HOL4 tactics. (English) Zbl 1403.68224
Eiter, Thomas (ed.) et al., LPAR-21. 21st international conference on logic for programming, artificial intelligence and reasoning, Maun, Botswana, May 8–12, 2017. Selected papers. Manchester: EasyChair. EPiC Series in Computing 46, 15-143 (2017).
Summary: Techniques combining machine learning with translation to automated reasoning have recently become an important component of formal proof assistants. Such “hammer” techniques complement traditional proof assistant automation as implemented by tactics and decision procedures. In this paper we present a unified proof assistant automation approach which attempts to automate the selection of appropriate tactics and tactic-sequences combined with an optimized small-scale hammering approach. We implement the technique as a tactic-level automation for HOL4: TacticToe. It implements a modified A*-algorithm directly in HOL4 that explores different tactic-level proof paths, guiding their selection by learning from a large number of previous tactic-level proofs. Unlike the existing hammer methods, TacticToe avoids translation to FOL, working directly on the HOL level. By combining tactic prediction and premise selection, TacticToe is able to re-prove $$39\%$$ of 7902 HOL4 theorems in 5 seconds whereas the best single HOL(y)Hammer strategy solves $$32\%$$ in the same amount of time.
For the entire collection see [Zbl 1398.68026].
### MSC:
68T15 Theorem proving (deduction, resolution, etc.) (MSC2010)
68T05 Learning and adaptive systems in artificial intelligence
https://www.physicsforums.com/threads/rc-circuit-for-charged-capacitor.369285/
# Homework Help: RC-circuit for charged capacitor
1. Jan 13, 2010
### Apteronotus
1. The problem statement, all variables and given/known data
I have a simple circuit consisting of a charged capacitor and a resistor connected in series to a battery.
Suppose that initially the potential across the capacitor is greater than that of the battery. (ie. $$\frac{q}{C}>V$$).
What happens to the capacitor when the switch is closed and the circuit completed?
2. Relevant equations
Kirchhoff's 2nd:
$$V+IR-\frac{q}{C}=0$$
Solving for q:
$$q(t)=CV+(q_0-CV)e^{(t-t_0)/RC}$$
where $$q_0$$ is the initial charge on the capacitor at time $$t_0$$
3. The attempt at a solution
I'm guessing that when the switch is closed we should expect the capacitor to discharge to a certain degree and after some time have the same potential as the battery.
But my equation $$q(t)=CV+(q_0-CV)e^{(t-t_0)/RC}$$ does not reflect this.
As $$t\rightarrow\infty$$ the charge on the capaictor $$q(t)\rightarrow (q_0-CV)e^{(t-t_0)/RC}\rightarrow\infty$$
What am I doing wrong?
2. Jan 13, 2010
### vela
Staff Emeritus
You have the wrong sign on the IR term.
3. Jan 13, 2010
### Apteronotus
Are you sure? My reasoning is that if the potential on the capacitor is larger than on the battery then
1. electrons will flow from the battery to the capacitor
so
2. current is flowing in counter-clockwise direction - from the capacitor to the battery.
Lastly, if a resistor is traversed in the direction opposite the current, the potential difference across the resistor is +IR.
4. Jan 13, 2010
### vela
Staff Emeritus
The problem you're running into is that $I_r = -I_c$. You're assuming $I=dq/dt$, which is negative if the capacitor is discharging, and using it as a positive quantity when calculating the voltage drop across the resistor.
Trying to get the signs right by reasoning which way the current flows is a mistake waiting to happen. It's much easier to just assume a direction for the current. Then for capacitors and resistors, the voltage "drops" going from where the current enters to where it leaves. For the battery, the opposite convention holds because the battery is a source, not a sink.
So for this circuit, assume the current flows clockwise through the circuit. Starting at the negative terminal of the battery and going clockwise: first, the potential goes up by V; then it drops by IR across the resistor; and then it drops by q/C across the capacitor. If the current actually flows in the other direction, I becomes negative, and the direction of the voltage drop across the resistor automatically flips. By following this convention, you're also guaranteed that I=dq/dt.
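With this convention (assumed here: $$V - IR - q/C = 0$$ with $$I = dq/dt$$) the exponent comes out negative, $$q(t) = CV + (q_0 - CV)e^{-t/RC}$$, which relaxes to CV just as the original poster expected. A quick numerical check with made-up component values (not from the thread):

```python
import math

R, C, V = 2.0, 3.0, 5.0      # illustrative values only
q0 = 2 * C * V               # capacitor starts above the battery potential

def q(t):
    # solution of V - R*dq/dt - q/C = 0 with q(0) = q0
    return C * V + (q0 - C * V) * math.exp(-t / (R * C))

print(q(0.0))           # → 30.0  (the initial charge q0)
print(q(100 * R * C))   # → 15.0  (= C*V: discharged down to the battery potential)
```

The charge decays monotonically from q0 toward CV instead of blowing up, confirming the sign fix.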
5. Jan 13, 2010
### Apteronotus
Vela thank you for your explanation.
So we always treat a battery as positive and a resistor and capacitor as negative?
I guess my confusion lies in the fact that I expect the capacitor to act as a current source since its potential is larger than that of the battery.
https://www.guyrking.com/2014/08/25/arrays-in-c-c-and-java.html
In C, an array can be declared and initialised:
and in Java
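The code snippets that originally appeared here did not survive extraction; a plausible reconstruction (assuming the array T = {3, 2, 8} that the post uses later) is, in C:

```c
/* Reconstruction of the lost snippet (assumed): the array T = {3, 2, 8}
   referred to later in the post. */
int T[] = {3, 2, 8};                              /* size inferred from the initialiser */
static const int T_len = sizeof T / sizeof T[0];  /* in C, the user tracks the length */

/* The Java counterpart (shown as a comment to keep this block pure C):
       int[] T = {3, 2, 8};
   where T.length is provided by the array object. */
```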
Not a great deal of difference then.
However, if you try to access a fourth element of T, Java throws an index out of bounds exception and the program stops.
In contrast, in C the program will run (but you may get a warning about the index).
If we displayed the contents of this phantom fourth element of the array, what would we see? Just a random number, as C takes whatever is stored in that memory location at that time.
A more important difference is the way in which Java and C consider arrays.
In Java, an array is an object, predefined for the user. Thus arrays have prebuilt-in attributes and methods, e.g. the length attribute.
In C, however, an array is actually a pointer (OK, apparently this isn’t quite true, but we’ll take it as a starting point).
Its value is the address of the first element of the array. As an array only stores one kind of data type, and this type and the type’s size is known to the compiler, it can deduce the address of the second element of the array from that of the first element, the address of the third element from that of the second, etc.
However, having said that, in Java an object is, in effect, a pointer. The difference is that you cannot manipulate it in the same way that you can in C.
When you create an object myobj, the memory labelled myobj does not contain the object itself but rather the address where all the information relating to the object is stored.
So when we declare an array like T above, the value of T is an address in memory, and at that address is an array containing the numbers 3, 2 and 8. This leads to shallow and deep copying in Java.
Effectively, C and Java treat arrays in the same way in terms of memory.
What’s different is how they are presented to the user.
In C, the user is given direct access to the memory. This allows for more control, but means everything is down to the user, e.g. the user has to keep track of the lengths of the array themself.
Whereas in Java, it’s classic OOP: the reality of how the array works in memory is hidden from the user, the user cannot directly access the memory, but methods and attributes are predefined to give the user functionality.
The Java programmer finds they don’t have to track things manually anymore, they can hand that over to the compiler, but the price to pay is that they can no longer manipulate the memory directly using pointers as they did before.
To illustrate this
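The illustration that belonged here was lost in extraction; a hedged guess at what it showed (pointer notation naming the same storage as the array declaration):

```c
/* Reconstructed illustration (assumption): a pointer to the first element
   gives access to exactly the same storage as the array name. */
int T[] = {3, 2, 8};
int *p = T;            /* p holds the address of T[0]; p[i] and *(p + i)
                          reach the same elements as T[i] */
```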
which is equivalent to the previous declaration above.
Further, the two are interchangeable if entered as arguments to a function.
However, one instance where it does matter whether you use pointer or array notation is when a function returns an array. In such a case, pointer notation must be used e.g.
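The example that followed here was also lost; a hypothetical sketch (the name make_array is mine, not from the post) of a C function whose return type must use pointer notation:

```c
#include <stdlib.h>

/* Hypothetical example: a function "returning an array" in C must declare
   its return type in pointer notation -- int f(void)[] is not legal syntax. */
int *make_array(void) {
    int *a = malloc(3 * sizeof *a);        /* heap storage survives the return */
    if (a) { a[0] = 3; a[1] = 2; a[2] = 8; }
    return a;                              /* return type is int*, not int[] */
}
```

In C++ the lost variant presumably swapped malloc for `new int[3]`; in Java there is no pointer notation, so one simply returns an array object, e.g. `int[] makeArray() { return new int[]{3, 2, 8}; }`.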
In C++, this is the same apart from one line,
However, in Java, there is no pointer notation, so the only option is to make the function return an array object
An important consequence in C/C++ of defining arrays as pointers is that functions with array arguments pass them by reference.
As will be mentioned in a later post, this can lead to significant time and space savings. It also means these functions pass arguments in the same way as they would in Java.
However, this is not always the case. C++ functions pass object arguments by value whereas Java passes them by reference. This is something to note when using dynamic arrays which are objects in both languages.
https://zbmath.org/?q=an:0308.47024
# zbMATH — the first resource for mathematics
Ideals of operators on Banach spaces and nuclear locally convex spaces. (English) Zbl 0308.47024
Gen. Topol. Relat. mod. Anal. Algebra III, Proc. 3rd Prague topol. Symp. 1971, 345-352 (1972).
##### MSC:
47B10 Linear operators belonging to operator ideals (nuclear, $$p$$-summing, in the Schatten-von Neumann classes, etc.)
47L05 Linear spaces of operators
46A03 General theory of locally convex spaces
http://math.stackexchange.com/questions/629181/adjoint-functor-with-initial-objects
# Adjoint functor with initial objects
I have to see that every left adjoint functor preserves initial objects.
I prove it by the adjoint functor theorem, which states that under certain conditions a functor that preserves colimits is a left adjoint. A basic result of category theory is that a left adjoint preserves all colimits, which can be characterized as initial objects.
Is this idea correct to prove the statement "Every left adjoint functor preserves initial objects", or how else can we see this?
Yes. Initial objects are colimits. And left adjoints preserve colimits. Thus left adjoints preserve initial objects. – Joe Johnson 126 Jan 6 at 15:48
If $F:X\to A$ is left adjoint to $G:A\to X$ and $i$ is initial in $X$, then the hom-set $A(Fi,a)$ is naturally isomorphic to the hom-set $X(i,Ga)$. Since the latter has only one element for every $a\in A$, so does the former, which proves that $Fi$ is initial in $A$.
When you wrote ".. left adjoint preserves all colimits, which can be characterized as initial objects.", I'm not sure if what you actually meant to say was that "initial objects can be expressed as colimits." However, a colimit is indeed an initial object. Namely, an initial object in the category $$(D\downarrow\Delta X)$$ where $D:J\to X$ is the diagram in $X$ (as an object in $X^J$) and $\Delta:X\to X^J$ is the diagonal functor sending an object $x$ to the constant diagram $\Delta x:J\to X$ whose only morphism is $1_x$.
https://socratic.org/questions/the-molecular-formula-for-glucose-is-c6h12o6-what-would-the-molecular-formula-fo
# The molecular formula for glucose is C6H12O6. What would the molecular formula for a molecule made by linking three glucose molecules together by dehydration reactions be?
Nov 28, 2015
${C}_{18} {H}_{32} {O}_{16}$
#### Explanation:
Since this is a dehydration reaction, each time two molecules are linked together one molecule of water ${H}_{2} O$ is removed.
Therefore, the molecular formula will be ${C}_{18} {H}_{32} {O}_{16}$.
The reaction will be:
${C}_{6} {H}_{11} {O}_{5} - {\underbrace{OH + H}}_{\textcolor{blue}{{H}_{2} O}} - {C}_{6} {H}_{10} {O}_{5} - {\underbrace{OH + H}}_{\textcolor{blue}{{H}_{2} O}} - {C}_{6} {H}_{11} {O}_{6} \to {C}_{18} {H}_{32} {O}_{16} + \textcolor{blue}{2 {H}_{2} O}$
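The bookkeeping above can be double-checked by counting atoms (a sketch only; it tracks element counts, not molecular structure): three glucose units joined by two dehydration linkages lose exactly two waters.

```python
glucose = {"C": 6, "H": 12, "O": 6}
water   = {"C": 0, "H": 2,  "O": 1}

# three glucose molecules joined by two dehydration linkages lose two waters
trisaccharide = {e: 3 * glucose[e] - 2 * water[e] for e in glucose}
print(trisaccharide)   # → {'C': 18, 'H': 32, 'O': 16}
```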
https://tex.stackexchange.com/questions/142139/still-having-difficulty-with-use-of-expandafter
Still having difficulty with use of \expandafter
Firstly, my apologies for any previous confusion, I've separated out various questions onto distinct pages.
This problem came up in the context of trying to split a decimal number. Good solutions to both of my questions (one using pgf, one package independent) have been provided, or soon will be, on those pages.
The main intent of this question is that I'm trying to better understand how things are expanded in LaTeX. There is good information in the answers below about just this aspect. Perhaps if those people wouldn't mind moving sidetrack stuff to either of the two questions linked above to help me simplify the situation. I realise this is a bit tricky as the expansion idea was being discussed in the context of splitting a length. Thanks in advance.
The MWE is just a whole bunch of ideas I was experimenting with, trying to expand values to get at what I wanted and failing because I don't yet fully understand how LaTeX, or TeX, expands various types of value.
MWE Output
MWE Code
\documentclass[12pt]{article}
\usepackage[a5paper,margin=14mm]{geometry}
\makeatletter
\def\printplainbefore#1{\expandafter\@printplainbefore#1..\@nil}
\def\@printplainbefore#1.#2.#3\@nil{#1}
\def\printplainafter#1{\expandafter\@printplainafter#1..\@nil}
\def\@printplainafter#1.#2.#3\@nil{#2}
\newcommand*{\getlength}[1]{\strip@pt#1}
\makeatother
\def\mythe#1{\expandafter\getlength{#1}}
\newcommand{\savenum}[2]{\expandafter\xdef\csname num#1\endcsname{#2}}
\newcommand{\getnum}[1]{\csname num#1\endcsname}
\newlength{\thislength}
\setlength{\thislength}{123.456pt}
\tracingmacros=1
\begin{document}
\par The: \the\thislength
\par Getlength: \getlength{\thislength}
\par MyThe: \mythe\thislength
\par Getlength variable first: \getlength{\thislength}
\quad Split: \xdef\test{\getlength{\thislength}} \printplainbefore{\test} -- \printplainafter{\test}
\par Getlength save: \getlength{\thislength}
\quad Split: \savenum{thislength}{\getlength{\thislength}} \printplainbefore{\getnum{thislength}} -- \printplainafter{\getnum{thislength}}
\end{document}
• I think you need to distill the questions a little further, if I may comment on the style. It's a lot of code to get to the bottom of your problem. – percusse Nov 4 '13 at 15:05
• @percusse It's not one solution, I was just showing all the different ways I had experimented with trying to solve it. Anyway, A.Ellet's answer has helped me no end. – Geoff Pointer Nov 4 '13 at 15:09
• @percusse Is that better now? – Geoff Pointer Nov 5 '13 at 2:16
I'm a bit confused by your code. But there are a couple of things I see that might help you.
Writing
\expandafter\getlength{#1}
is effectively the same as writing
\getlength{#1}
without any expansion.
If it's #1 that you want expanded first, that's not going to happen as you wrote it. Instead, it's the { TeX is going to try to expand. To reach #1 you need to write something like
\expandafter\getlength\expandafter{#1}
But this most likely won't do what you want either. If #1 is a string of tokens, for example if you try
\def\a{ABC}
\def\b{\c}
\def\c{XYZ}
\def\mythe#1{\expandafter\getlength\expandafter{#1}}
Then
\mythe{\a\b\c}
expands to
\expandafter\getlength\expandafter{\a\b\c}
which then expands to
\getlength{ABC\b\c}
The \b\c of #1 is not accessible through the use of \expandafter as you've written it. Now, if you know that #1 should only be one token, that all is OK at this point.
If you want to get the integer and fractional parts of a dimension, then the following will accomplish that without calling any special packages.
\documentclass{article}
\makeatletter
\def\getparts#1{%%
\edef\my@stripped@length{\strip@pt#1}%%
\expandafter\ae@int@frac\my@stripped@length..\@nil
}
\def\ae@int@frac#1.#2.#3\@nil{%%
\def\aeinteger{#1}%%
\def\aefraction{#2}%%
}
\newlength{\aetemp}
\setlength{\aetemp}{1.234cm}
\def\showparts{%%
\begin{tabular}{ll}\hline
Length & \the\aetemp\\
Integer & \aeinteger \\
Fraction & \aefraction\\\hline
\end{tabular}}
\makeatother
\pagestyle{empty}
\begin{document}
\setlength{\aetemp}{1cm}
\getparts{\aetemp}
\showparts
\setlength{\aetemp}{215pt}
\getparts{\aetemp}
\showparts
\end{document}
• I said in my question, I'm basically trying to get the result of the \getlength function to expand inline, so my printplain functions can read it. My code is probably confusing because it includes a variety of pathetic attempts to do so. But what you've done, is what I was trying to do. What I still don't get is, if \getlength has removed the pt, why can't my functions just read what's left? – Geoff Pointer Nov 4 '13 at 8:29
• I've clarified the question, I hope you don't mind helping me do what I suggest. You could always move part of what you have here to an answer here. Then we could try and separate discussion of expanding expressions from the particular problem of splitting a decimal. Cheers – Geoff Pointer Nov 5 '13 at 2:19
The problem is that \getlength requires several expansion steps to end its job delivering a sequence of digits (with a decimal dot in the middle). I'll show them in successive lines
\getlength{\thislength}
\strip@pt\thislength
\expandafter\rem@pt\the\thislength
\rem@pt123.456pt
123\ifnum 456>\z@ .456\fi
123.456
A total of five expansion steps that require 31 \expandafter tokens to be performed if you want that \@printplainbefore to see what it expects, not just one. But the presence of \ifnum makes the macro unusable in practice.
Since you don't want to remove a zero fractional part, you can use a \romannumeral trick:
\documentclass{article}
\makeatletter
\def\printplainbefore#1{\expandafter\@printplainbefore\romannumeral-`Q#1..\@nil}
\def\@printplainbefore#1.#2.#3\@nil{#1}
\def\printplainafter#1{\expandafter\@printplainafter\romannumeral-`Q#1..\@nil}
\def\@printplainafter#1.#2.#3\@nil{#2}
\begingroup\catcode`P=12 \catcode`T=12
\lowercase{\endgroup\def\simplerem@pt#1PT{#1}}
\def\simplestrip@pt{\expandafter\simplerem@pt\the}
\newcommand*{\getlength}[1]{\simplestrip@pt#1}
\makeatother
\newlength{\thislength}
\setlength{\thislength}{123.456pt}
\begin{document}
Number: 123.456
\def\temp{123.456}
Macro: \texttt{\meaning\temp}
The: \the\thislength
• Question 1: I'm really confused by the \romannumeral trick. I've seen a partial explanation which still leaves me in a bit of a fog. But your code above uses -`Q. I thought \romannumeral stops at the first unexpandable token. Wouldn't that be the opening quote mark? – A.Ellett Nov 4 '13 at 14:07
• Question 2: The other trick you use is something else I don't fully understand. It's \begingroup...\lowercase{\endgroup I don't understand how the \catcode changes are preserved in \def\simplerem@pt#1PT{#1}. Has there already an answer posted somewhere on this site? Or should I post this as a question to the larger community? – A.Ellett Nov 4 '13 at 14:10
• @A.Ellett I believe there's already something on the subject. The trick is that \lowercase does nothing else on its argument than lowercasing character tokens (using the \lccode array): no expansion or execution of commands is performed and category codes are preserved (only the character codes are changed). So when the \endgroup is executed, P and T have already been changed to their lowercase version (with category code 12). – egreg Nov 4 '13 at 14:59
|
2019-10-21 05:57:05
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6285490989685059, "perplexity": 1249.3480378653815}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987756350.80/warc/CC-MAIN-20191021043233-20191021070733-00397.warc.gz"}
|
https://l-infinity.de/posts/torch-unit-diffops/
|
L-infinity
Computational fluid dynamics, multiphase flows, machine learning, OpenFOAM
Most neural networks learn by minimizing a scalar-valued error. There is ample information about the back-propagation calculus in neural networks, like this great video. Interestingly, back-propagation in neural networks is, in fact, reverse-mode Automatic Differentiation (AD). Automatic differentiation is a computational technique that enables the calculation of exact derivatives from arithmetic expressions. The exact derivative calculation is crucial for ensuring the convergence when training neural networks because, as the network learns, the differences in the values of its weights between iterations become smaller. If these barely different values are used to compute network gradients with finite differences, floating-point cancellation errors quickly become catastrophic, destroying convergence. Alternatively to finite differences, exact derivatives of arithmetic expressions can also be symbolically calculated using “sympy” or similar symbolic calculation packages. Unfortunately, symbolic derivatives of non-trivial arithmetic expressions soon become intractably complex and challenging to translate into source code. Reverse-mode AD comes to the rescue, and in this post, reverse-mode AD automagically pulls exact derivatives in PyTorch out of its hat. Besides the documentation, (1) covers the details of AD and it is referenced in (2). There is also a great video describing AD.
Thanks to Alban D from the PyTorch forum for helping me figure out how autograd::grad uses the Jacobian.
In a nutshell, the reverse-mode AD in autograd works with Jacobians: matrices that contain partial derivatives of a function with respect to a vector. Those partial derivatives from the Jacobian matrix can be combined with each other to construct differential operators. Instead of picking elements from the Jacobian "by hand" somehow, matrix and vector inner products can be used to "select" and combine the elements of the Jacobian. This is what autograd does: it doesn't "only" compute the gradient $\nabla f$ of a function $f$. Instead, autograd::grad computes the inner product of the Jacobian and some tensor $\mathbf{v}$, namely $J\cdot\mathbf{v}$, where $J$ is the Jacobian of the function $f$ with respect to some tensor, and $\mathbf{v}$ is a tensor whose contents determine whether $\nabla$, $\nabla\cdot$, or $\nabla\times$ is computed, or something else entirely. When $f$ is not a real-valued function, which is the case in neural network training, we have to determine $\mathbf{v}$.
The same conclusion is stated in the documentation
The graph is differentiated using the chain rule. If any of tensors are non-scalar (i.e. their data has more than one element) and require gradient, then the Jacobian-vector product would be computed, in this case the function additionally requires specifying grad_tensors. It should be a sequence of matching length, that contains the “vector” in the Jacobian-vector product, usually the gradient of the differentiated function w.r.t. corresponding tensors (None is an acceptable value for all tensors that don’t need gradient tensors).
In other words, if we want a gradient of a scalar function, we just call the gradient. If we want other operators, or we work with vector-valued functions, we have to think which elements of the Jacobian we want to combine in which way, and design the tensor $\mathbf{v}$ in $J \cdot \mathbf{v}$ to get the operators we need. There is nothing special about the Jacobian. For example, if $y = f(x), f : \mathbb{R}^n \to \mathbb{R}$, then
$$J = [\partial_{x_1} f \ \ \partial_{x_2} f \ \dots \ \partial_{x_n} f ]$$
is a vector in $\mathbb{R}^n$. If $y = f(x), f : \mathbb{R}^n \to \mathbb{R}^n$, then $J$ is an $n \times n$ matrix.
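To make those shapes concrete, here is a small Python sketch that builds a finite-difference Jacobian. This is purely to illustrate the shapes; autograd computes these entries exactly via AD, not like this.

```python
def jacobian_fd(f, x, eps=1e-6):
    """Finite-difference Jacobian of f at x, shapes-only illustration.

    For f: R^n -> R the result is 1 x n; for f: R^n -> R^n it is n x n.
    """
    y = f(x)
    if not isinstance(y, list):
        y = [y]
    rows = []
    for i in range(len(y)):
        row = []
        for j in range(len(x)):
            xp = list(x)
            xp[j] += eps
            yp = f(xp)
            if not isinstance(yp, list):
                yp = [yp]
            row.append((yp[i] - y[i]) / eps)
        rows.append(row)
    return rows  # len(y) x len(x) matrix

x = [1.0, 1.0, 1.0]
J_scalar = jacobian_fd(lambda v: sum(vi * vi for vi in v), x)  # 1 x 3
J_vector = jacobian_fd(lambda v: [2.0 * vi for vi in v], x)    # 3 x 3
```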
Example: real-valued function of a real variable
In this simple case, $sin'(x), sin''(x)$ are calculated exactly with torch::autograd.
auto x = torch::zeros({1}, torch::requires_grad());
auto sinx = torch::sin(x);
// first derivative: retain and create the graph so sin''(x) can follow;
// sinx is scalar, so no grad_outputs tensor v is needed
auto dsinx = torch::autograd::grad({sinx}, {x}, /*grad_outputs=*/{},
                                   /*retain_graph=*/true,
                                   /*create_graph=*/true)[0];
auto dsinx_e = torch::cos(x);
auto dsinx_error = torch::abs(dsinx_e - dsinx).item<double>();
std::cout << std::setprecision(20)
<< "dsinx_error = " << dsinx_error << "\n";
assert(dsinx_error == 0);
// second derivative, computed from the first-derivative graph
auto ddsinx = torch::autograd::grad({dsinx}, {x})[0];
auto ddsinx_e = -torch::sin(x);
auto ddsinx_error = torch::abs(ddsinx_e - ddsinx).item<double>();
std::cout << "ddsinx_error = " << ddsinx_error << "\n";
assert(ddsinx_error == 0);
The first call to torch::autograd::grad has some additional arguments that require an explanation. Without going into details about AD, the thing to remember is that AD constructs the final arithmetic expresion from sub-expressions. Building the expression this way represents it as an acyclic graph. The partial derivatives are then stored by the AD mechanism at graph nodes starting from the leafs up to the final expression. Those partial derivatives are used in the chain rule to construct the gradient by combining partial derivatives stored at graph nodes above the root node. Since $sin(x)$ is a real-valued function of a real variable, there is no need to provide a tensor $\mathbf{v}$ for the dot product with the Jacobian. The other two arguments make sure this computation graph is not deleted, because we want to compute $sin''(x)$, from the documentation
retain_graph: If false, the graph used to compute the grad will be freed. Note that in nearly all cases setting this option to true is not needed and often can be worked around in a much more efficient way. Defaults to the value of create_graph. create_graph: If true, graph of the derivative will be constructed, allowing to compute higher order derivative products. Default: false.
The _e exact derivatives are computed (here trivially) manually, and we see that the first and second derivatives computed by autograd are exactly the same as their exact counterparts.
The == 0 check for derivative errors makes this really interesting for anyone that was bitten by the IEEE 754 standard for floating-point arithmetic. Looking at the reverse-mode AD as a black-box, it delivers the same results as symbolic calculation!
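torch uses reverse-mode AD, but the exactness itself is easiest to see in a few lines of forward-mode AD. The Python sketch below (a minimal dual-number class, not how autograd is implemented) propagates $\cos$ alongside $\sin$, so the derivative carries no finite-difference error at all:

```python
import math

class Dual:
    """Minimal forward-mode AD value: a + b*eps with eps^2 = 0."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule, applied locally at this node
        return Dual(self.val * o.val,
                    self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def sin(x):
    # chain rule: d sin(u) = cos(u) du, using the library cos directly
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

x = Dual(0.0, 1.0)             # seed dx/dx = 1
y = sin(x)
print(y.dot == math.cos(0.0))  # True: the derivative is exact
```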
Another example from the video on AD, shows how partial derivatives are calculated:
// https://youtu.be/R_m4kanPy6Q?t=458
// f(x,y) = (x + y) * (y + 3)
// \partial_x f(x,y) = y + 3
// \partial_y f(x,y) = 2y + x + 3
// for x = 1, y = 2,
// \partial_x f(x,y) = 2 + 3 = 5
// \partial_y f(x,y) = 2*2 + 1 + 3 = 8
{
auto x = torch::ones(1, torch::requires_grad());
auto y = torch::full_like(x, 2., torch::requires_grad());
auto f = (x + y) * (y + 3);
auto partial_x_f = torch::autograd::grad({f}, {x}, /*grad_outputs=*/{},
                                         /*retain_graph=*/true,
                                         /*create_graph=*/true)[0];
auto partial_y_f = torch::autograd::grad({f}, {y})[0];
assert((partial_x_f.item<double>() == 5));
assert((partial_y_f.item<double>() == 8));
}
Note: in the first call to autograd::grad, retain_graph and create_graph are set to true, so that partial_y_f can be calculated by traversing the existing graph.
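The hand-derived partials above can be cross-checked numerically in Python (central differences here, purely as a check; autograd returns these values exactly):

```python
# Cross-check the partials of f(x, y) = (x + y) * (y + 3) at x = 1, y = 2.
f = lambda x, y: (x + y) * (y + 3)
x, y, h = 1.0, 2.0, 1e-6

fx = (f(x + h, y) - f(x - h, y)) / (2 * h)   # expect y + 3 = 5
fy = (f(x, y + h) - f(x, y - h)) / (2 * h)   # expect x + 2*y + 3 = 8
print(round(fx, 6), round(fy, 6))  # 5.0 8.0
```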
Example: real-valued function of a vector variable
In this example, the inner (dot) product between two vectors is used: a real-valued function of a vector variable $f(\mathbf{x},\mathbf{y}) : \mathbb{R}^n \to \mathbb{R}$, namely
$$f(\mathbf{x},\mathbf{y}) = \sum_{i = 1}^{n} x_i y_i$$
A gradient of this real-valued function with respect to one of its input vectors is a vector $\mathbf{g} \in \mathbb{R}^n$. An interesting example is $||\mathbf{x}||_2^2$, given as $\mathbf{x}\cdot\mathbf{x}$, or
$$f(\mathbf{x},\mathbf{x}) = \sum_{i = 1}^{n} x_i x_i$$
In this case, the gradient is equal to the Jacobian; it is an $\mathbb{R}^3$ vector,
$$\mathbf{g} = J = [\partial_{x_1} f \ \partial_{x_2} f \ \partial_{x_3} f ] = 2[x_1 \ x_2 \ x_3]$$.
The torch::autograd computes this as expected
auto x = torch::ones(3, torch::requires_grad());
auto f = dot(x, x);
// f is scalar-valued, so no v is needed; keep the graph for div(grad(f))
auto grad_f_x = torch::autograd::grad({f}, {x}, /*grad_outputs=*/{},
                                      /*retain_graph=*/true,
                                      /*create_graph=*/true)[0];
Now, since $f=dot(x,x)$ is a scalar-valued function, there is no need to specify $\mathbf{v}$ in $J \cdot \mathbf{v}$. This is not the case when using torch::autograd to compute the divergence of $\mathbf{g}$ ($\nabla \cdot \mathbf{g}$). The Jacobian of $\mathbf{g}$ is
$$J_\mathbf{g} = [\partial_{x_1} \mathbf{g} \ \partial_{x_2} \mathbf{g} \ \partial_{x_3} \mathbf{g}] = \begin{bmatrix} \partial^2_{x_1} f & \partial_{x_2} \partial_{x_1} f & \partial_{x_3} \partial_{x_1}f \\\ \partial_{x_1} \partial_{x_2} f & \partial^2_{x_2} f & \partial_{x_3} \partial_{x_2} f \\\ \partial_{x_1} \partial_{x_3} f & \partial_{x_2} \partial_{x_3} f & \partial^2_{x_3} f \end{bmatrix}$$
In this example $\nabla \cdot \mathbf{g}$ can be computed by summing up the diagonal elements (compute the trace of) $J_\mathbf{g}$ like this
$$\nabla \cdot \mathbf{g} = \text{trace}(J_\mathbf{g}) = J_g \cdot \begin{bmatrix} 1 \\\ 1 \\\ 1 \end{bmatrix} \cdot \begin{bmatrix} 1 \\\ 1 \\\ 1 \end{bmatrix} = 2\begin{bmatrix} 1 & 0 & 0 \\\ 0 & 1 & 0 \\\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 \\\ 1 \\\ 1 \end{bmatrix} \cdot \begin{bmatrix} 1 \\\ 1 \\\ 1 \end{bmatrix} = 6,$$
In torch::autograd this is
auto div_grad_f_x_v = torch::autograd::grad(
    {grad_f_x}, {x}, {torch::ones(3)}, /*retain_graph=*/true
);
std::cout << "div(grad(f)) = (J_{grad_f} . [1 1 1]) . [1 1 1] = "
          << div_grad_f_x_v[0].sum().item<double>() << "\n";
Important: using $\mathbf{v} = [1 \ 1 \ 1]^T$ like this only works in this specific example because $J_\mathbf{g}$ is diagonal!
In other words, {torch::ones(3)} can be used in the above code snippet for $\mathbf{v}$ in $J_\mathbf{g} \cdot \mathbf{v}$ only if $J_\mathbf{g}$ is diagonal, otherwise we will pick up other partial derivatives from the Jacobian.
A general solution that only picks up the diagonal elements of $J_\mathbf{g}$ requires the identity matrix
$$\nabla \cdot \mathbf{g} = (J_\mathbf{g} \cdot I) \cdot [1 \ 1 \ 1] = \left(2\begin{bmatrix} 1 & 0 & 0 \\\ 0 & 1 & 0 \\\ 0 & 0 & 1 \end{bmatrix} \cdot \begin{bmatrix} 1 & 0 & 0 \\\ 0 & 1 & 0 \\\ 0 & 0 & 1 \end{bmatrix} \right) \cdot \begin{bmatrix} 1 & 1 & 1 \end{bmatrix} = 6.$$
In torch::autograd
auto diag_grad2_f_x = torch::autograd::grad(
);
std::cout << "div(grad(f)) = (J_{grad_f} . I) . [1 1 1] = "
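The "diagonal Jacobian" caveat can be reproduced with plain Python lists: summing $J \cdot [1\ 1\ 1]$ equals the trace (the divergence) only when $J$ is diagonal.

```python
# g = grad(f) = 2x for f = x . x, so J_g = 2I (diagonal).
# div g = trace(J_g); with a diagonal J, summing J @ [1,1,1] gives the trace.
def matvec(J, v):
    return [sum(Jij * vj for Jij, vj in zip(row, v)) for row in J]

J_diag = [[2.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]]
ones = [1.0, 1.0, 1.0]
div_g = sum(matvec(J_diag, ones))   # 6.0, equals trace(J_diag)

# With an off-diagonal entry the same recipe no longer gives the trace:
J_full = [[2.0, 1.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 2.0]]
print(sum(matvec(J_full, ones)),              # 7.0 (picks up the extra term)
      sum(J_full[i][i] for i in range(3)))    # 6.0 (the actual trace)
```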
Example: vector-valued function of a vector variable
Say we have a tensor of input values $\mathbf{x} \in \mathbb{R}^n$, and we evaluate a function $g$ at each $x_i$, such that $y_i = g(x_i)$ and $\mathbf{y} \in \mathbb{R}^n$. Then we have a vector-valued function of a vector variable, namely $f : \mathbb{R}^n \to \mathbb{R}^n$, with $g$ applied to each $x_i$. How to compute tensors (sequences) of individual derivatives $y_i', y_i'', \dots$ with torch::autograd?
Let’s use $g(x_i) = \sin(x_i)$, and $\mathbf{x} = [0, 0.001, 0.002, \dots 1]$ for example.
Since
$$\mathbf{y} = f(\mathbf{x}) = [sin(x_1) \ sin(x_2) \ \dots sin(x_n)],$$
the Jacobian
$$J_f = [\partial_{x_1} \mathbf{y} \ \partial_{x_2} \mathbf{y} \dots \partial_{x_n} \mathbf{y}]$$
is again diagonal, namely
$$J_f = \begin{bmatrix} cos(x_1) & 0 & 0 \\\ 0 & cos(x_2) & 0 \\\ \vdots & \ddots & \vdots \\\ 0 & 0 & cos(x_n) \end{bmatrix}.$$
The calculation of $\mathbf{y}$ is done exactly like in the previous example, only, depending on $n$, the difference in computational time between
$$\nabla \cdot \nabla f = (J_f \cdot I) \cdot [1 \ 1 \ 1 \ \dots 1]$$
and
$$\nabla \cdot \nabla f = (J_f \cdot [1 \ 1 \ 1 \ \dots 1]) \cdot [1 \ 1 \ 1 \ \dots 1]$$
can be significant.
Important: when working with torch::autograd, know your Jacobian.
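A small Python sketch of this diagonal structure: for element-wise $\sin$, a single product $J \cdot [1 \dots 1]$ already returns every per-element derivative $\cos(x_i)$ at once.

```python
import math

# For y_i = sin(x_i) the Jacobian is diag(cos(x_i)); one J @ [1,...,1]
# product therefore recovers all per-element derivatives in a single pass.
n = 5
x = [i / n for i in range(n)]
J = [[math.cos(x[i]) if i == j else 0.0 for j in range(n)] for i in range(n)]

Jv = [sum(J[i][j] for j in range(n)) for i in range(n)]  # J @ ones
assert Jv == [math.cos(xi) for xi in x]
```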
Summary
The torch::autograd uses reverse-mode Automatic Differentiation to compute derivatives exactly. The calculation computes the Jacobian of the expression dotted with a vector (tensor, matrix) $\mathbf{v}$, $J\cdot \mathbf{v}$, that is defined by the user. The shape and contents of $\mathbf{v}$, and subsequent matrix-vector operations are used to compute differential operators like $\nabla$, $\nabla \cdot$ and $\nabla \times$.
Data
The code is available on GitLab.
References
(1) Griewank, A., & Walther, A. (2008). Evaluating derivatives: principles and techniques of algorithmic differentiation. Society for Industrial and Applied Mathematics.
(2) Paszke, A., Gross, S., Massa, F., Lerer, A., Bradbury, J., Chanan, G., … Chintala, S. (2019). PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems, 32(NeurIPS).
Tags
|
2021-04-18 08:37:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8404256701469421, "perplexity": 779.7606443243814}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038469494.59/warc/CC-MAIN-20210418073623-20210418103623-00549.warc.gz"}
|
https://www.physicsforums.com/threads/numerical-problem-on-processor-pool-model.843819/
|
# Numerical Problem on Processor Pool Model
Tags:
1. Nov 18, 2015
### 22990atinesh
Consider the case of a distributed computing system based on the processor-pool model that has $P$ processors in the pool. In this system, suppose a user start a computing job that involves compilation of a program consisting of $F$ source file $(F < P)$. Assume that at this time the user is the only user using the system. What maximum gain in speed can be hoped for this job in this systems compared to its execution on a single processor system ? What factors might cause the gain in speed to be less than this maximum ?
Attempt:
Let $t$ seconds be required by each processor in the processor-pool model to compile one file. Since all $F$ compilations run in parallel on separate processors, the whole job takes $t$ seconds overall.
On a single-processor system the time required is $Ft$ seconds.
Hence the gain in speed is $(Ft - t)/Ft$.
Is this correct, or am I assuming something wrong? Can anybody help?
2. Nov 23, 2015
### Greg Bernhardt
Thanks for the post! This is an automated courtesy bump. Sorry you aren't generating responses at the moment. Do you have any further information, come to any new conclusions or is it possible to reword the post?
3. Nov 23, 2015
### Staff: Mentor
Your reasoning is fine if you make the assumption that every file requires the same time to compile, that there is no linking and loading of the compiled modules into a single runtime image, and that there are no resources that are competed for. In this case you have found an expression for the maximum, best case speedup.
In a real-world situation there's likely to be a range of compile times and library requirements, and the link/load step would require access to all the compiled modules after compilation.
The compilations would then be finished when the longest module is done, and linking could only start when that is complete. Linking and creating/writing the load image would likely involve some irreducible serial activity. Refer to Amdahl's law.
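Amdahl's law makes the mentor's point quantitative. A Python sketch with illustrative numbers (the 10% serial fraction for linking is an assumption, not from the problem):

```python
# Best-case speedup with an irreducible serial fraction (Amdahl's law):
# speedup = 1 / (s + (1 - s)/n) for serial fraction s and n processors.
def speedup(serial_frac, n):
    return 1.0 / (serial_frac + (1.0 - serial_frac) / n)

F = 8
print(speedup(0.0, F))   # 8.0  -> the ideal F-fold speedup derived above
print(speedup(0.1, F))   # ~4.7 -> a 10% serial link step caps the gain
```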
|
2018-03-23 08:19:15
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2729369103908539, "perplexity": 1466.7064180260493}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257648198.55/warc/CC-MAIN-20180323063710-20180323083710-00166.warc.gz"}
|
https://emptysqua.re/blog/announcing-libbson-and-libmongoc-1-3-5/
|
I'm pleased to announce version 1.3.5 of libbson and libmongoc, the libraries constituting the MongoDB C Driver.
## libbson
No change since 1.3.4; released to keep pace with libmongoc's version.
## libmongoc
This release fixes a crash in mongoc_cleanup when an allocator had been set with bson_mem_set_vtable.
It also introduces a configure option MONGOC_NO_AUTOMATIC_GLOBALS which prevents code built with GCC from automatically calling mongoc_init and mongoc_cleanup when your code does not. This obscure, GCC-specific behavior was a bad idea and we'll remove it entirely in version 2.0. Meanwhile, we're letting you explicitly opt-out.
Thanks to Hannes Magnusson, who did the significant work on this release.
Image: Henry Ford Luce, 1890.
|
2023-03-24 12:28:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3969798982143402, "perplexity": 14030.259071001545}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296945282.33/warc/CC-MAIN-20230324113500-20230324143500-00646.warc.gz"}
|
https://www.mersenneforum.org/showthread.php?s=484e4b4de2f49aefcf1bde2a5a6f1b76&p=455774
|
mersenneforum.org PRPNet 5.4.3 Released
2016-12-20, 01:16 #133 rogue "Mark" Apr 2003 Between here and the 25×197 Posts I did a quick test on the double-check logic. It looks good so I've committed the code changes. If you need a build, please let me know. BTW, this is now version 5.4.3.
2017-01-19, 16:58 #134 lalera Jul 2003 2×5×61 Posts hi, i would like to do prp-tests on wagstaff numbers with llr please can you modify prp-server ?
2017-01-19, 17:45 #135
rogue
"Mark"
Apr 2003
Between here and the
25×197 Posts
Quote:
Originally Posted by lalera hi, i would like to do prp-tests on wagstaff numbers with llr please can you modify prp-server ?
What is the format of the file output from the sieving program?
2017-01-19, 18:08 #136 lalera Jul 2003 2·5·61 Posts hi, the (configurable) output format is from mfaktc v 0.21 http://www.mersenneforum.org/mfaktc i do edit the results-file with a table-calculator. an example:
ABC(2^$a+1)/3
1500019
1500071
1500127
1500133
1500139
1500143
1500181
this is the input-file-format for llr
2017-01-21, 19:26 #137 rogue "Mark" Apr 2003 Between here and the 25·197 Posts I've made code changes for the Wagstaff form and committed them, but have not tested. I don't know when I'll have time to do that. If you would like to do some testing, I can get you started.
2017-01-22, 20:16 #138 lalera Jul 2003 2·5·61 Posts hi, thank you for making the code changes to support wagstaff numbers ! i like to do some testing i need the executables
2017-01-22, 21:18 #139
rogue
"Mark"
Apr 2003
Between here and the
25×197 Posts
Quote:
Originally Posted by lalera hi, thank you for making the code changes to support wagstaff numbers ! i like to do some testing i need the executables
I won't post them here, but you can send me an e-mail or your e-mail in a PM.
2017-03-10, 14:52 #140 TheCount Sep 2013 Perth, Au. 6216 Posts LLR now has multithread support. Is there an existing way to pass command line arguments to LLR with PRPNet? If not, can support for multithreaded LLR in PRPNet be implemented some how in the future? Maybe a parameter in prpclient.ini Last fiddled with by TheCount on 2017-03-10 at 14:54
2017-03-10, 16:49 #141
rogue
"Mark"
Apr 2003
Between here and the
25×197 Posts
Quote:
Originally Posted by TheCount LLR now has multithread support. Is there an existing way to pass command line arguments to LLR with PRPNet? If not, can support for multithreaded LLR in PRPNet be implemented some how in the future? Maybe a parameter in prpclient.ini
It is not supported now, but it could be done.
2017-03-29, 20:36 #142 TheCount Sep 2013 Perth, Au. 1428 Posts I got LLR multithreading working on PRPNet. Assuming you want to use 2 threads, first create a cllr64.bat:
Code:
@cllr64.exe -t2 %*
Then in prpclient.ini use the line:
Code:
llrexe=cllr64.bat
Where cllr64.exe is the llr 3.8.20 version copied into your prpclient directory. This is for Windows, of course, but a similar shell file should be possible on Linux. Depending on candidate FFT size and your CPU L3 cache you can get a good speedup, faster than running individually.
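On Linux the same wrapper pattern can be sketched with a small shell script. The demo below uses a fake llr binary that just echoes its arguments (all paths and names are illustrative); the key point is that "$@" forwards PRPNet's arguments after the fixed -t2 flag, just like %* does in the batch file.

```shell
#!/bin/sh
# Demo of the wrapper pattern: a stand-in "llr" that echoes its arguments,
# and a wrapper that prepends -t2 before forwarding everything it was given.
mkdir -p /tmp/llrdemo
cat > /tmp/llrdemo/fake_llr <<'EOF'
#!/bin/sh
echo "$@"
EOF
cat > /tmp/llrdemo/llr_wrap <<'EOF'
#!/bin/sh
exec /tmp/llrdemo/fake_llr -t2 "$@"
EOF
chmod +x /tmp/llrdemo/fake_llr /tmp/llrdemo/llr_wrap
/tmp/llrdemo/llr_wrap -d input.txt   # prints: -t2 -d input.txt
```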
2017-03-29, 21:13 #143 wombatman I moo ablest echo power! May 2013 5×349 Posts This is very helpful. Thanks!
Similar Threads Thread Thread Starter Forum Replies Last Post ltd Prime Sierpinski Project 86 2012-06-06 02:30 rogue Software 84 2011-11-16 21:20 Joe O Sierpinski/Riesel Base 5 1 2010-10-22 20:11 rogue Conjectures 'R Us 220 2010-10-12 20:48 rogue Conjectures 'R Us 250 2009-12-27 21:29
|
2021-05-09 13:16:40
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.19761386513710022, "perplexity": 10629.767922087896}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243988986.98/warc/CC-MAIN-20210509122756-20210509152756-00524.warc.gz"}
|
http://starlink.eao.hawaii.edu/docs/sun139.htx/sun139se10.html
|
10 Parameter behaviour and control
CCDPACK has a number of ‘global’ program parameters which you can set up ‘once and for all’2. The usual time to do this is at the beginning of a reduction sequence. Global parameters are used (when set) to override all other values (typically the current values of other applications or perhaps dynamically generated defaults). The global values may be overridden, at any time by values entered on the command line, or given in response to a prompt. The program which sets up the global parameters is:
• CCDSETUP
This routine is described in §7.1.1.
The current values of the global parameters can be viewed at any time using the inspection routine.
The global values should always be cleared before analysing data from a different instrument; the clearing routine can also clear individual parameters.
A second control strategy that CCDPACK routines use is that of leaving parameters set at the last-used value (known as the ‘current’ value). This means that once a parameter has a value assigned to it (by a run of an application) this will be used again, unless it’s one of those with a global association (which, if set, will override it), or one whose effect is judged so critical that you’d better be asked for it on each occasion of use. This general principle is useful in that you do not have to remember to set most parameters every time you run an application. However, it does have a drawback: you must remember what value you gave to the parameter. Most parameters will appear in the log or be directly reported (if the log system is set up to do so), so always take care to inspect the log, or the terminal output, until you’re sure of how things are set up. To get rid of any unwanted parameter values (and restore the ‘intrinsic’ default behaviour of an application) just use the keyword RESET on the command line (this is used in many of the examples shown in this document for just this purpose). This clears all current values but does not affect the global parameters.
If resetting the parameters seems not to work or you want to clear all the CCDPACK current values, then a brutal reset can be achieved by deleting the appropriate files (application_name.sdf) in the $HOME/adam or $ADAM_USER directories. If you’re using CCDPACK from ICL then the parameter values are kept in the files – ccdpack_red.sdf, ccdpack_reg.sdf and ccdpack_res.sdf. The global parameters are always kept in GLOBAL.sdf.
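A minimal sketch of this by-hand reset (the file names here are illustrative only, and the demo operates on a scratch directory standing in for $ADAM_USER or $HOME/adam, so nothing real is deleted):

```python
import glob
import os
import tempfile

# Scratch directory standing in for $ADAM_USER / $HOME/adam.
adam_dir = tempfile.mkdtemp()

# Pretend an application has saved current values, plus the globals.
# (Names are examples, not a complete list.)
for name in ("ccdsetup.sdf", "GLOBAL.sdf"):
    open(os.path.join(adam_dir, name), "w").close()

# The "brutal reset": remove application_name.sdf files and GLOBAL.sdf.
for path in glob.glob(os.path.join(adam_dir, "*.sdf")):
    os.remove(path)

print(os.listdir(adam_dir))  # prints [] — all saved parameter values are gone
```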
2This section does not apply to XREDUCE or IRAF.
https://math.stackexchange.com/questions/1583768/need-help-with-int-01-frac-log1x-log1-x-left1-log2x-rightx-dx
# Need help with $\int_0^1\frac{\log(1+x)-\log(1-x)}{\left(1+\log^2x\right)x}\,dx$
Please help me to evaluate this integral $$\int_0^1\frac{\log(1+x)-\log(1-x)}{\left(1+\log^2x\right)x}\,dx$$ I tried the change of variable $x=\tanh z$, which transforms it into the form $$\int_0^\infty\frac{4z}{\left(1+\log^2\tanh z\right)\sinh2z}\,dz,$$ but I do not know what to do next.
• @GaussTheBauss Yes, I want a closed form for the definite integral. But it does not mean there should be necessarily a closed-form anti-derivative. Definite integration over some regions can sometimes give closed-form answers even where no closed-form anti-derivative exists. – Marty Colos Dec 20 '15 at 21:48
• On this site, we look for motivation as well as a problem statement. What is the motivation for this integral? Where did you encounter it? What is its background and "biography"? That information would significantly improve this post. – Carl Mummert Dec 20 '15 at 22:15
• @CarlMummert As for me, this integral looks interesting by itself, and I am going to enjoy trying to evaluate it. I do not see how knowing its origin would help me to find a solution. – Vladimir Reshetnikov Dec 20 '15 at 22:20
• @Vladimir Reshetnikov: perhaps that is true, but the purpose of improving the question is not only to help find a solution, but also to raise the general quality of questions on this site. Questions which merely propose a problem with no motivation are unfortunately too common, and new users should be aware that such questions are often put on hold for improvement. – Carl Mummert Dec 20 '15 at 22:22
• This is equivalent to $$2\int_0^\infty \frac{\log(\coth(u))}{1+4u^2}\,du$$ I don't know how to proceed from here. – Ben Longo Dec 20 '15 at 22:23
Recall the Frullani Integral: $$\int_0^\infty\frac{e^{-ax}-e^{-bx}}{x}\,\mathrm{d}x =\log(b/a)\tag{1}$$ and scale equation $(4)$ from this answer to get $$\sum_{k=0}^\infty\frac1{(2k+1)^2+u^2} =\frac\pi{4u}\tanh\left(\frac{\pi u}2\right)\tag{2}$$ Then \begin{align} &\int_0^1\frac{\log(1+x)-\log(1-x)}{\left(1+\log^2(x)\right)x}\,\mathrm{d}x\\ &=\int_0^\infty\frac{\log\left(1+e^{-u}\right)-\log\left(1-e^{-u}\right)}{1+u^2}\,\mathrm{d}u\tag{3a}\\ &=2\sum_{k=0}^\infty\int_0^\infty\frac{e^{-(2k+1)u}}{2k+1}\frac{\mathrm{d}u}{1+u^2}\tag{3b}\\ &=2\sum_{k=0}^\infty\int_0^\infty\frac{e^{-u}\,\mathrm{d}u}{(2k+1)^2+u^2}\tag{3c}\\ &=2\int_0^\infty\frac\pi{4u}\tanh\left(\frac{\pi u}2\right)e^{-u}\,\mathrm{d}u\tag{3d}\\ &=\frac\pi2\int_0^\infty\frac{1-e^{-\pi u}}{u}\frac{e^{-u}}{1+e^{-\pi u}}\,\mathrm{d}u\tag{3e}\\ &=\frac\pi2\int_0^\infty\sum_{k=0}^\infty(-1)^k\frac{e^{-(k\pi+1)u}-e^{-((k+1)\pi+1)u}}u\,\mathrm{d}u\tag{3f}\\ &=\frac\pi2\sum_{k=0}^\infty(-1)^k\log\left(\frac{(k+1)\pi+1}{k\pi+1}\right)\tag{3g}\\ &=\frac\pi2\sum_{k=0}^\infty\log\left(\frac{(2k+1)\pi+1}{2k\pi+1}\frac{(2k+1)\pi+1}{(2k+2)\pi+1}\right)\tag{3h}\\ &=\lim_{n\to\infty}\frac\pi2\log\left[\prod_{k=0}^n\frac{k+\frac12+\frac1{2\pi}}{k+\frac1{2\pi}}\frac{k+\frac12+\frac1{2\pi}}{k+1+\frac1{2\pi}}\right]\tag{3i}\\ &=\lim_{n\to\infty}\frac\pi2\log\,\left[\frac{\Gamma\left(n+\frac32+\frac1{2\pi}\right)^2}{\Gamma\left(\frac12+\frac1{2\pi}\right)^2}\frac{\Gamma\left(\frac1{2\pi}\right)\Gamma\left(1+\frac1{2\pi}\right)}{\Gamma\left(n+1+\frac1{2\pi}\right)\Gamma\left(n+2+\frac1{2\pi}\right)}\right]\tag{3j}\\ &=\frac\pi2\log\,\left[\frac{\Gamma\left(\frac1{2\pi}\right)\Gamma\left(1+\frac1{2\pi}\right)}{\Gamma\left(\frac12+\frac1{2\pi}\right)^2}\right]\tag{3k}\\ &=\bbox[5px,border:2px solid #C0A000]{\pi\log\,\left[\frac{\frac1{\sqrt{2\pi}}\Gamma\left(\frac1{2\pi}\right)}{\Gamma\left(\frac12+\frac1{2\pi}\right)}\right]}\tag{3m} \end{align} Explanation:
$\text{(3a)}$: Substitute $x=e^{-u}$
$\text{(3b)}$: $\log\left(\frac{1+x}{1-x}\right)=2\sum\limits_{k=0}^\infty\frac{x^{2k+1}}{2k+1}$
$\text{(3c)}$: Substitute $u\mapsto\frac u{2k+1}$
$\text{(3d)}$: apply $(2)$
$\text{(3e)}$: $\tanh\left(\frac{\pi u}2\right)=\frac{1-e^{-\pi u}}{1+e^{-\pi u}}$
$\text{(3f)}$: $\frac1{1+x}=\sum\limits_{k=0}^\infty(-1)^kx^k$
$\text{(3g)}$: apply $(1)$
$\text{(3h)}$: combine $2k$ and $2k+1$ terms
$\text{(3i)}$: change a sum of logs to a log of a product
$\text{(3j)}$: write products as ratios of Gamma functions
$\text{(3k)}$: apply Gautschi's Inequality
$\text{(3m)}$: $\Gamma(1+x)=x\Gamma(x)$
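As a quick numerical sanity check (not part of the original answer), the partial sums of $(3\text{h})$ can be compared with the closed form $(3\text{m})$; the terms fall off like $1/(4k^2)$, so a couple hundred thousand of them already agree with the closed form to several digits:

```python
import math

pi = math.pi

# Partial sum of (3h):
#   (pi/2) * sum_k log( ((2k+1)pi+1)^2 / ((2k*pi+1) * ((2k+2)pi+1)) )
s = 0.0
for k in range(200_000):
    num = ((2 * k + 1) * pi + 1) ** 2
    den = (2 * k * pi + 1) * ((2 * k + 2) * pi + 1)
    s += math.log(num / den)
series = pi / 2 * s

# Closed form (3m): pi * log( Gamma(1/(2pi)) / (sqrt(2pi) * Gamma(1/2 + 1/(2pi))) )
closed = pi * math.log(
    math.gamma(1 / (2 * pi))
    / (math.sqrt(2 * pi) * math.gamma(0.5 + 1 / (2 * pi)))
)

print(series, closed)  # the two values agree closely
```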
An alternative way to evaluate $$\frac{\pi}{2} \int_{0}^{\infty} \tanh \left(\frac{\pi u}{2} \right) \frac{e^{-u}}{u} \, du ,$$ which is line $3d$ in robjohn's answer, is to add a parameter and then differentiate under the integral sign.
Specifically, let $$I(a) = \frac{\pi}{2}\int_{0}^{\infty} \tanh \left(\frac{\pi u}{2} \right) \frac{e^{-au}}{u} \, du.$$
Then \begin{align} I'(a) &= - \frac{\pi}{2} \int_{0}^{\infty} \tanh \left(\frac{\pi u}{2} \right) e^{-au} \, du \\ &= -\frac{\pi}{2} \int_{0}^{\infty} \left(\frac{1}{1+e^{- \pi u}}- \frac{e^{- \pi u}}{1+e^{-\pi u}} \right)e^{-au} \, du \\ &= -\frac{\pi}{2} \int_{0}^{\infty} \left(\sum_{n=0}^{\infty} (-1)^{n} e^{-n \pi u} + \sum_{n=1}^{\infty} (-1)^{n} e^{-n \pi u} \right)e^{-au} \, du \\ &= \frac{\pi}{2} \int_{0}^{\infty} \left(1- 2 \sum_{n=0}^{\infty} (-1)^{n}e^{-n \pi u} \right) e^{-au} \, du \\ &= \frac{\pi }{2a} -\pi \sum_{n=0}^{\infty} \frac{(-1)^{n}}{a+n \pi} \\ &= \frac{\pi }{2a} -\frac{1}{2} \psi \left(\frac{a+\pi}{2 \pi} \right) + \frac{1}{2} \psi \left(\frac{a}{2 \pi} \right) \tag{1}. \end{align}
Integrating back, we get \begin{align} I(a) &= \frac{\pi}{2} \log(a) - \pi \log \Gamma \left(\frac{a+\pi}{2 \pi} \right) + \pi \log \Gamma\left(\frac{a}{2 \pi} \right) +C \\ &= \pi \log \left(\frac{\sqrt{a} \, \Gamma \left(\frac{a}{2 \pi } \right)}{\Gamma \left(\frac{a}{2 \pi} + \frac{1}{2} \right)} \right) + C,\end{align}
where $$\lim_{a \to \infty} I(a) =0 = \lim_{a \to \infty} \pi \log \left(\frac{\sqrt{a} \, \Gamma \left(\frac{a}{2 \pi } \right)}{\Gamma \left(\frac{a}{2 \pi} + \frac{1}{2} \right)} \right) +C$$
$$= \pi \log (\sqrt{2 \pi}) + C. \tag{2}$$
Therefore,
$$\frac{\pi}{2} \int_{0}^{\infty} \tanh \left(\frac{\pi u}{2} \right) \frac{e^{-u}}{u} \, du = I(1) =\pi \log \left(\frac{ \Gamma \left(\frac{1}{2 \pi } \right)}{\sqrt{2 \pi} \, \Gamma \left(\frac{1}{2 \pi} + \frac{1}{2} \right)} \right).$$
$(2)$ In general, for $x,y>0$, $\lim_{a \to \infty} \frac{a^{x} \Gamma(ya)}{\Gamma(ya+x)} = y^{-x}$. This can be proven using Stirling's approximation formula for the gamma function.
• (+1) Nice alternative! Differentiation instead of Frullani; it is not too hard to see that they are related. I think $(2)$ can be shown more easily using Gautschi's inequality, but Stirling also gets there. – robjohn Dec 21 '15 at 20:58
• @robjohn My original idea was to use the fact that $\int_{0}^{\infty} \frac{\cos(xt)}{t} \tanh \left(\frac{\pi t}{2} \right) \, dt = \log \left(\coth \frac{x}{2} \right)$. But that only takes you from (3a) to (3d). – Random Variable Dec 21 '15 at 21:56
http://www.gradesaver.com/textbooks/math/algebra/intermediate-algebra-6th-edition/chapter-9-section-9-7-common-logarithms-natural-logarithms-and-change-of-base-exercise-set-page-586/69
## Intermediate Algebra (6th Edition)
Published by Pearson
# Chapter 9 - Section 9.7 - Common Logarithms, Natural Logarithms, and Change of Base - Exercise Set: 69
#### Answer
$x=\frac{3y}{4}$
#### Work Step by Step
We are given the equation $2x+3y=6x$. To simplify, subtract 2x from both sides. $4x=3y$ Divide both sides by 4. $x=\frac{3y}{4}$
http://www.gradesaver.com/textbooks/math/calculus/calculus-10th-edition/chapter-1-limits-and-their-properties-1-4-exercises-page-79/22
## Calculus 10th Edition
$\lim\limits_{x\to\frac{\pi}{2}}\sec{x}$ does not exist.
$\lim\limits_{x\to\frac{\pi}{2}^-}\sec{x}=\lim\limits_{x\to\frac{\pi}{2}^-}\dfrac{1}{\cos{x}}=\infty$ and $\lim\limits_{x\to\frac{\pi}{2}^+}\sec{x}=\lim\limits_{x\to\frac{\pi}{2}^+}\dfrac{1}{\cos{x}}=-\infty.$ Since $\lim\limits_{x\to\frac{\pi}{2}^+}\sec{x}\ne\lim\limits_{x\to\frac{\pi}{2}^-}\sec{x}$, the two-sided limit $\lim\limits_{x\to\frac{\pi}{2}}\sec{x}$ does not exist.
https://www.omnicalculator.com/physics/flow-rate
# Flow Rate Calculator
By Hanna Pamuła, PhD candidate
Believe it or not, our flow rate calculator is not only useful in fluid mechanics, but also in everyday problems. It will not only help you if you want to find the flow rate of a garden hose or shower head, but also if you're curious how much blood your heart pumps every minute (that quantity is called cardiac output). It may also serve as a simple pipe velocity calculator.
For a complete understanding of the topic, you can find a section explaining what the flow rate is below, as well as a paragraph helping to understand how to calculate the flow rate. Be careful, as the term "flow rate" itself may be ambiguous! Luckily for you, we've implemented two flow rate formulas, so you're covered in both cases. This means that our tool may serve as both a volumetric flow rate calculator and a mass flow rate calculator.
## What is flow rate? Volumetric and mass flow rate
When we talk about flow rate, you most likely picture the concept of volumetric flow rate (also known as rate of liquid flow, volume flow rate or volume velocity). The volumetric flow rate can be defined as the volume of a given fluid that passes through a given cross-sectional area per unit of time. It's usually represented by the symbol Q (sometimes V̇, i.e. V with a dot).
`Volumetric flow rate = V / t = Volume / time`
Another related concept is mass flow rate, sometimes called mass flux or mass current. This time it's not the volume, but mass of a substance that passes through a given cross-sectional area per unit of time.
`Mass flow rate = m / t = mass / time`
Mass flow rate is commonly used in the specifications of fans and turbines, amongst other things.
If you're interested in fluid mechanics, you should also have a look at the Bernoulli equation calculator to determine the speed and pressure of an incompressible fluid. Also, the hydrostatic pressure and buoyancy calculators may be helpful.
## How to calculate flow rate? Flow rate formulas
TL;DR version
• Volumetric flow rate formula: `Volumetric flow rate = A * v`
where `A` - cross-sectional area, `v` - flow velocity
• Mass flow rate formula: `Mass flow rate = ρ * Volumetric flow rate = ρ * A * v`
where `ρ` - fluid density
Longer explanation:
The volumetric flow rate formula may be written in the alternative (read: way more useful) form. You can first calculate the volume of a portion of the fluid in a channel as:
`Volume = A * l`
Where `A` is the cross-sectional area of the fluid and `l` is the width of a given portion of the fluid. If our pipe is circular, this is just the formula for cylinder volume. Substituting the above formula into the equation from the flow rate definition, we obtain:
`Volumetric flow rate = V / t = A * l / t`
As `l / t` is the volume length divided by time, you can see that it's just the flow velocity. So, the volumetric flow rate formula boils down to:
`Volumetric flow rate = A * v`
Most pipes are cylindrical, so the formula for volumetric flow rate will look as follows:
`Volumetric flow rate for cylindrical pipe = π * (d/2)² * v` where `d` is the pipe diameter
The equation can be rearranged to find the formula for pipe velocity.
To find the mass flow rate formula, we need to remind ourselves of the density definition first:
`ρ = m / V` and `m = ρ * V`
As mass flow rate is the mass of a substance passing per unit of time, we can write the formula as:
`Mass flow rate = m / t = ρ * V / t = ρ * Volumetric flow rate = ρ * A * v`
`Mass flow rate = ρ * A * v`
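As a hedged sketch (not the calculator's actual implementation), the two formulas translate directly into code; any consistent unit system works:

```python
import math


def volumetric_flow_rate(diameter, velocity):
    """Q = A * v for a full circular pipe (any consistent units)."""
    area = math.pi * (diameter / 2) ** 2
    return area * velocity


def mass_flow_rate(density, diameter, velocity):
    """Mass flow rate = rho * A * v."""
    return density * volumetric_flow_rate(diameter, velocity)


# The worked example from the next section: a 3 in (0.25 ft) pipe,
# 10 ft/s flow, water at roughly 62.3 lb/cu ft (998 kg/m³).
q = volumetric_flow_rate(0.25, 10)   # ≈ 0.4909 ft³/s
m = mass_flow_rate(62.3, 0.25, 10)   # ≈ 30.58 lb/s
print(round(q, 4), round(m, 2))
```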
## How to use the flow rate calculator
Now that you know what the flow rate is, let's check it out with a simple example:
1. First, select a shape from the drop-down list. For this example, we'd like to know the flow rate of water in a circular pipe, so we will select the `circular (full)` option.
2. Input the measurements needed to compute cross-sectional area. If the cross-section is a circle or square/rectangle, you'll find that option on the list. In every other case, you can type the area value directly into the calculator (you can use our comprehensive area calculator to help you). Let's choose a pipe with an internal diameter of 3 inches.
3. Enter the average velocity of the flow. Let's pick 10 ft/s.
4. And there it is, the first part of the calculations is done: the tool has worked as a volumetric flow rate calculator. We've found out that the volumetric flow rate is 0.4909 ft³/s. Remember, you can always change the units, so don't worry if you need to work in gallons/minute or liters/hour.
5. If you know the density, you can calculate the mass flow rate as well; just input the density of the flow material. In our example, water has a density of approximately 998 kg/m³ (the density of water at 68°F / 20°C). However, if you want to be super accurate, check out our water density calculator, as the density changes with temperature, salinity, and pressure.
6. The tool displayed a mass flow rate of 30.58 lbs/s. Great!
Don't forget that our tools are flexible, so you can use this one as a pipe velocity calculator. You can, for example, determine the water velocity of your faucet, given the diameter (e.g., 0.5 in) and flow rate of a kitchen faucet (the usual range is 1-2.2 gallons per minute, depending on the aerator type). By the way, have you seen our tap water calculator, which shows your savings if you were to switch from bottled to tap water?
Hanna Pamuła, PhD candidate
https://answerbun.com/physics/does-hawking-radiation-lead-to-black-hole-evaporation-reduction-of-black-hole-mass/
# Does Hawking radiation lead to black hole evaporation / reduction of black hole mass?
Physics Asked by emacs drives me nuts on December 9, 2020
As I understand it, Hawking radiation leads to black hole evaporation, i.e. a black hole loses mass due to that effect.
Now Hawking radiation is very similar to Unruh radiation, i.e. some (apparent) horizon leads to a thermal bath:
U.1. An inertial observer in Minkowski space does not see radiation.
U.2. A Rindler observer sees Unruh radiation.
H.1. A free-falling observer does not observe Hawking radiation from a black hole.
H.2. An observer hovering somewhere over the event horizon of a black hole does see Hawking radiation.
Hence in either case, the inertial observer (1.) sees no radiation whilst the accelerated observer (2.) sees thermal radiation.
Of course, the case U.2 is stationary, i.e. for a Rindler observer the spacetime does not change and the Rindler horizon does not disappear, evaporate or change its distance due to Unruh radiation.
Doesn’t this also apply to H.2, i.e. there is just some thermal bath due to acceleration (or due to some horizon), and the black hole does not change in mass?
Moreover, if the black hole did evaporate due to Hawking radiation, wouldn’t that lead to conflicting observations from a free falling observer (black hole does not evaporate because no loss of energy / mass because no radiation is emitted) vs. hovering observer (black hole does evaporate because it loses mass / energy due to Hawking radiation)?
Hawking’s derivation predicts named radiation, but does that derivation also show that the black hole’s mass is changing?
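For scale (standard textbook results, not quoted in the original question), a Schwarzschild black hole of mass $M$ has Hawking temperature and, in the simplest photons-only black-body estimate, mass-loss rate
$$T_H=\frac{\hbar c^3}{8\pi G M k_B},\qquad \frac{dM}{dt}\approx-\frac{\hbar c^4}{15360\,\pi G^2 M^2},$$
so if one accepts that the hole radiates thermally at $T_H$, energy conservation implies a slowly decreasing $M(t)$.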
Unruh and Hawking effects are similar but not quite the same. In particular, a freely falling observer far from the black hole does detect Hawking radiation. An inertial observer far from the black hole could be momentarily at rest relative to say Schwarzschild coordinates (for a simple black hole) and they would detect the radiation. They would still detect it as they begin to gently fall in the direction of the black hole. The black hole has interacted with the electromagnetic field and emitted photons for all to see, just like a star (well the process is not like a star, but the end result is, for observers far away).
The observer freely falling near to the black hole may or may not be able to detect the radiation, depending on the size of his detector and the length of time he has to register the effect. He hasn't got much time to look before he enters the horizon, so none of his measurements are going to be precise. In fact his energy measurements will be imprecise by an amount of the order of the Hawking temperature. And if he tries to measure for a time long enough to get that sort of precision, then he can't help but notice that spacetime is not flat. That is, his location will have moved through a distance of the order of the Schwarzschild radius. That is one way to see why such an observer's observations are not the same as those of an inertial observer in flat spacetime. (The equivalence principle only applies in the limit of small regions of spacetime of course). The other way of seeing it is to note that the electromagnetic field tensor in the Hawking case is not the same as in the Unruh case.
Answered by Andrew Steane on December 9, 2020
I am going to try to explain this without using any equations. I hope I don't make any mistakes along the way.
A freely falling observer "close" to the black hole horizon will not detect any Hawking radiation, like a non-accelerating observer in Minkowski space. This does not mean that no freely falling observer can detect Hawking radiation. If that were the case, then even observers at rest at spatial infinity (and hence freely falling in the black hole's gravity) would not be able to detect Hawking radiation. Even though freely falling observers close to the horizon will locally see spacetime as flat, if they are asked to extend their reach to feel the curvature of the horizon, they will feel the geodesic deviation as an acceleration. (This means that we cannot have a solid, extended, freely falling observer near the horizon.) Therefore one might say that if they try to look at the black hole as a whole, then they will indeed see that it radiates.
To put in slightly different words, if near the horizon observer uses a detector vast enough to feel the curvature of the horizon, then that detector will feel acceleration. This means vast enough detectors can see the radiation. Also, the detector should be a non-accelerating object in the reference frame of the observer. Otherwise it will not work correctly, for example, in the case of flat space-time.
Answered by Alphy on December 9, 2020
https://tex.stackexchange.com/questions/584488/circuitikz-wrongfully-intersects-node/584491
# Circuitikz wrongfully intersects node
I have the following MWE:
\usepackage[colorlinks = true, citecolor=black, filecolor=black, linkcolor=black, urlcolor=black, linktocpage=true]{hyperref}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{mathtools}
\usepackage{graphicx}
\usepackage{siunitx}
\usepackage{geometry}
\usepackage{tikz}
\usetikzlibrary{calc}
\usetikzlibrary{positioning}
\usetikzlibrary{automata}
\usepackage[european,cuteinductors,fetbodydiode]{circuitikz}
\begin{document}
\begin{figure}[!h]
\centering
\begin{tikzpicture}[arrowmos]
\draw (0,0) -- ++(0,0.75) to[short,i_=$a_1$] ++(0,0.5) to[short,-o] ++(0,0.5) node[right]{$b_1$} coordinate(b1);
\draw [thick, dotted] (b1) -- ++(0,1);
\end{tikzpicture}
\end{figure}
\end{document}
In the resulting output, as you can see, the dotted line intersects the node circle. How can I avoid this?
• have a look at the answer below -- a coordinate does not have any border -- nodes have some border – js bibra Feb 22 at 16:05
As ever, @js bibra's answer is fine. Just to explain a bit more what happens in the following (you can read a quite lengthy and full explanation in the circuitikz manual, section 5.1).
\draw (0,0) -- ++(0,0.75) to[short,i_=$a_1$] ++(0,0.5)
to[short,-o] ++(0,0.5) node[right]{$b_1$} coordinate(b1);
When you use -o, circuitikz will add a node with the shape ocirc to the path. Nodes are drawn after the path is stroked, and the node is designed so that it is filled with white. The coordinate b1 is then set at the point where you arrived (the node you added with $b_1$ has not moved it). Now
\draw [thick, dotted] (b1) -- ++(0,1);
will draw the dotted line starting from b1, which is at the center of the ocirc node.
A different solution is to position the ocirc node after having drawn the dashed line.
\documentclass[border=10pt]{standalone}
\usepackage[siunitx, RPvoltages]{circuitikz}
\begin{document}
\begin{tikzpicture}[arrowmos]
\draw (0,0) -- ++(0,0.75) to[short,i_=$a_1$] ++(0,0.5)
to[short] ++(0,0.5) node[right]{$b_1$} coordinate(b1);
\draw [thick, dotted] (b1) -- ++(0,1);
\node [ocirc] at (b1){};
\end{tikzpicture}
\end{document}
(BTW, notice that this is a real MWE).
A very similar problem is explained around page 174, in the FAQ.
Finally, as you comment, if you want to use the north coordinate of the pole, you have to put it explicitly so that you can name it:
\begin{tikzpicture}[arrowmos]
\draw (0,0) -- ++(0,0.75) to[short,i_=$a_1$] ++(0,0.5)
to[short] ++(0,0.5) node[ocirc](b1){} node[right]{$b_1$};
\draw [thick, dotted] (b1.north) -- ++(0,1);
\end{tikzpicture}
Moreover, as noticed by John Kormylo, now b1 is a node reference, so the lines going out will start from the border anchor automatically:
\documentclass[border=10pt]{standalone}
\usepackage[siunitx, RPvoltages]{circuitikz}
\begin{document}
\begin{tikzpicture}[arrowmos]
\draw (0,0) -- ++(0,0.75) to[short,i_=$a_1$] ++(0,0.5)
to[short] ++(0,0.5) node[ocirc](b1){} node[right]{$b_1$};
\draw [thin, red] (b1) -- ++(0,1);
\draw [thin, blue] (b1) -- ++(1,1);
\draw [thin, dashed] (b1) -- ++(-1,1);
\draw [thin, dotted] (b1) -- ++(0.5,1);
\end{tikzpicture}
\end{document}
\begin{tikzpicture}[arrowmos]
\draw (0,0) -- ++(0,0.75) to[short,i_=$a_1$] ++(0,0.5) to[short,-o] ++(0,0.5) node[right]{$b_1$} node[inner sep=1pt](b1){};
\draw [thick, dotted] (b1) -- ++(0,1);
\end{tikzpicture}
• Thanks! But isn't it kind of a hack to use a node directly after a node and furthermore define the inner sep? I tried to use the anchor b1.north too but that hasn't worked either – Steradiant Feb 22 at 16:08
• if it serves the purpose it's not a hack -- it's ingenuity and being a yoda level operator – js bibra Feb 22 at 16:13
• @Steradiant if you use -o the internal ocirc node is not accessible. You can put the node explicitly, with node [ocirc](name){} and then you can use (name.north) – Rmano Feb 22 at 18:22
• @Rmano - or just (name). Just tried it. – John Kormylo Feb 22 at 21:28
• @JohnKormylo you're right - being a node name the line will automatically start at the border! – Rmano Feb 22 at 21:37
https://www.physicsforums.com/threads/pyramid-of-egypt.150768/
|
Pyramid of Egypt
1. Jan 10, 2007
powergirl
My Dad has a miniature Pyramid of Egypt. It is 3 inches in height. Dad was invited to display it at an exhibition. Dad felt it was too small and decided to build a scaled-up model of the Pyramid out of material whose density is (1/ 9) times the density of the material used for the miniature. He did a "back-of-the-envelope" calculation to check whether the model would be big enough.
If the mass (or weight) of the miniature and the scaled-up model are to be the same, how many inches in height will be the scaled-up Pyramid? Give your answer to two places of decimal.
2. Jan 10, 2007
dontdisturbmycircles
9 inches!
needed more text
3. Jan 10, 2007
powergirl
NO...Not right
4. Jan 10, 2007
Hootenanny
Staff Emeritus
81"
(Again text limit)
5. Jan 10, 2007
dextercioby
27 inches ?
Daniel.
6. Jan 10, 2007
dontdisturbmycircles
How did you guys solve this? This one confused me for some reason. I know that all the dimensions between the two are proportional but I couldn't immediately see how to put that into the equations. A ratio would have worked but I didn't see one that helped solve the problem. I would sit down and think harder about it but I gotta go.
7. Jan 10, 2007
dontdisturbmycircles
Yea, it's 27", I have no idea what the heck I was thinking. Was just about to fall asleep and then I realized.
8. Jan 10, 2007
powergirl
No one gave me the right ans:
9. Jan 10, 2007
dontdisturbmycircles
what the hell, this question is making me mad. lol
10. Jan 10, 2007
powergirl
try it..................
11. Jan 10, 2007
Hootenanny
Staff Emeritus
6.24" text limit again
$$3\cdot\left(\frac{1}{9}\right)^{-1/3}$$
Last edited: Jan 10, 2007
12. Jan 10, 2007
neutrino
6.24 it is.
13. Jan 10, 2007
powergirl
yes 6.24 is correct
soln is as:
Mass = Density x Volume; and
Volume of model / Volume of miniature = (H of model / H of miniature)^3.
In the above equation, H is the characteristic dimension (say, height).
If the mass is to be the same, then density is inversely proportional to volume. Also, the volumes are directly proportional to the cubes of the heights for objects that are geometrically similar. Therefore, the heights are seen to be inversely proportional to the cube roots of the densities. Thus,
Height of model = Height of miniature x (Density of miniature / Density of model)^(1/3), or
Height of model = 3 x 9^(1/3) = 6.24 inches.
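The back-of-the-envelope arithmetic can be checked in a couple of lines (a sketch of the calculation above, nothing more):

```python
# Equal mass: density * height^3 is constant, and the model's density
# is (1/9) of the miniature's, so the height scales by 9^(1/3).
h_miniature = 3.0                      # inches
density_ratio = 9.0                    # rho_miniature / rho_model
h_model = h_miniature * density_ratio ** (1 / 3)
print(round(h_model, 2))               # 6.24
```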
https://byjus.com/maths/null-hypothesis/
|
Null Hypothesis
In mathematics, statistics deals with the study of research and surveys based on numerical data. To carry out a survey, we have to define a hypothesis. Generally, there are two types: the null hypothesis and the alternative hypothesis.
In probability and statistics, the null hypothesis is a comprehensive statement or default position that nothing is happening: for example, that there is no connection among groups, or no association between two measured events. The hypothesis is generally assumed to be true until evidence is brought to light that denies it. Let us learn more here, with the definition, symbol, principle, types and examples.
Null Hypothesis Definition
The null hypothesis is a hypothesis about a population parameter whose purpose is to test the validity of the given experimental data. It is either rejected or not rejected based on the viability of the given population or sample. In other words, the null hypothesis is a hypothesis under which the sample observations result from chance alone. It is the statement that the surveyors want to examine against the data. It is denoted by H0.
Null Hypothesis Symbol
In statistics, the null hypothesis is usually denoted by the letter H with subscript '0' (zero): H0. It is pronounced H-null, H-zero or H-nought. The alternative hypothesis, by contrast, expresses observations determined by a non-random cause; it is represented by H1 or Ha.
Null Hypothesis Principle
The principle of null hypothesis testing is to collect the data and determine the chances of observing the given data in a random sample, assuming the null hypothesis is true. If the observed data are unlikely under the null hypothesis, the evidence against it is strong and the researchers reject it; otherwise, they conclude that the data do not provide sufficient evidence against the null hypothesis, and it is not rejected.
Null Hypothesis Formula
The hypothesis-test formulas are given below for reference.
The formula for the null hypothesis is:
H0: p = p0
The formula for the alternative hypothesis is:
Ha: p > p0, p < p0, or p ≠ p0
The formula for the test statistic is:
$z = \frac{\hat{p}-p_{0}}{\sqrt{\frac{p_{0}(1-p_{0})}{n}}}$
Here, p0 is the proportion under the null hypothesis and p̂ (p-hat) is the sample proportion.
Types of Null Hypothesis
There are different types of hypotheses:
Simple Hypothesis
It completely specifies the population distribution. In this method, the sampling distribution is the function of the sample size.
Composite Hypothesis
The composite hypothesis is one that does not completely specify the population distribution.
Exact Hypothesis
An exact hypothesis specifies the exact value of the parameter, for example μ = 50.
Inexact Hypothesis
An inexact hypothesis does not specify the exact value of the parameter, but instead a range or interval, for example 45 < μ < 60.
Null Hypothesis Rejection
Sometimes the null hypothesis is rejected; when that happens, the default assumption of the research is overturned. Many researchers neglect this hypothesis because it is merely the opposite of the alternative hypothesis, but it is better practice to state the hypothesis explicitly and test it. The goal of research is not to reject the hypothesis for its own sake; indeed, a well-fitting statistical model is often associated with a failure to reject the null hypothesis.
How do you Find the Null Hypothesis?
The null hypothesis says there is no correlation between the measured event (the dependent variable) and the independent variable. We do not have to believe that the null hypothesis is true in order to test it. On the contrary, you will typically suspect that there is a relationship between the dependent and independent variables.
When is Null Hypothesis Rejected?
The null hypothesis is rejected using the P-value approach. If the P-value is less than or equal to α, the null hypothesis is rejected in favour of the alternative hypothesis. If the P-value is greater than α, the null hypothesis is not rejected.
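The P-value rule above, combined with the test-statistic formula given earlier, can be sketched in Python using only the standard library (the sample numbers here are hypothetical):

```python
import math

def z_stat(p_hat, p0, n):
    """z = (p_hat - p0) / sqrt(p0 (1 - p0) / n), as in the formula above."""
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

def two_sided_p_value(z):
    """P(|Z| >= |z|) for a standard normal Z, via the complementary error function."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical survey: 61 successes in 100 trials, H0: p = 0.5, alpha = 0.05.
alpha = 0.05
z = z_stat(0.61, 0.5, 100)
p = two_sided_p_value(z)
print(round(z, 2), p <= alpha)  # z = 2.2, so H0 is rejected at the 5% level
```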
Null Hypothesis and Alternative Hypothesis
Now, let us discuss the difference between the null hypothesis and the alternative hypothesis.
| S.No | Null Hypothesis | Alternative Hypothesis |
|------|-----------------|------------------------|
| 1 | A statement that there is no relationship between two variables | A statement that some relationship exists between two measured phenomena |
| 2 | Denoted by H0 | Denoted by H1 |
| 3 | Its observations are the result of chance | Its observations are the result of a real effect |
| 4 | Mathematically formulated with an equals sign | Mathematically formulated with an inequality sign (greater than, less than, etc.) |
Null Hypothesis Examples
Here, some of the examples of the null hypothesis are given below. Go through the below ones to understand the concept of the null hypothesis in a better way.
If a medicine reduces the risk of cardiac stroke, then the null hypothesis should be "the medicine does not reduce the chance of cardiac stroke". This can be tested by administering the drug to a group of people in a controlled way. If the survey shows a significant change in that group, the null hypothesis is rejected.
A few more examples:
1) Is there a 100% chance of being affected by dengue?
Ans: There is a chance of being affected by dengue, but not 100%.
2) Do teenagers use mobile phones more than adults to access the internet?
Ans: Age has no effect on the use of mobile phones to access the internet.
3) Does eating an apple daily prevent fever?
Ans: Eating an apple daily does not guarantee the absence of fever, though it may increase immunity against such diseases.
4) Are children better at mathematical calculations than adults?
Ans: Age has no effect on mathematical skills.
https://community.notepad-plus-plus.org/topic/15245/python-3-and-python-scripts-plugin/22
|
# Python 3 and Python scripts plugin
• @Bill-Winder said:
And then it became: What!??? pythonscript can only use 2.7? Why?
@Scott-Sumner said:
I’m sure you can read about all the details of the design decisions on the old Pythonscript Soucreforge forums if you’d like…
Discussion from a few years ago which discusses some of the hurdles switching from v2 to v3: https://sourceforge.net/p/npppythonscript/discussion/1188885/thread/cf066585/
• @dail Wow! Hairy!
I do encounter just that problem in my normal use of npp. I can paste in è or ã and it generally works fine. But when I open or paste in certain texts (like phonetic transcription from espeak), I get o~ where I should get õ. Npp interprets some accent encoding as two characters, not one (works fine here):
la diskysjˈɔ̃ fˈy syspɑ̃dˈy pɑ̃dˈɑ̃ lə ʁˈɔbʁ
mɛ bjɛ̃tˈoˈandʁuː styˈaʁ la ʁəpʁənˈɛ
I have found no way to fix it by shifting the encoding. (Jedit does the same thing…, but notepad does not.)
• @Bill-Winder said:
la diskysjˈɔ̃ fˈy syspɑ̃dˈy pɑ̃dˈɑ̃ lə ʁˈɔbʁ
mɛ bjɛ̃tˈoˈandʁuː styˈaʁ la ʁəpʁənˈɛ
make sure that the font you use is capable of displaying needed symbols
Cheers
Claudia
• @Claudia-Frank Thanks Claudia! – that fixed it, the moment I checked “enable global font”.
Thanks! (It’s been bugging me…for a long time…)
• Here is one of possible ways to get the text from an unsaved file tab in NppExec to allow further processing of the text:
// temporary file name
set local tmpfile = $(SYS.TEMP)\text.txt
// current selection
sci_sendmsg SCI_GETSELECTIONSTART
set local selStart = $(MSG_RESULT)
sci_sendmsg SCI_GETSELECTIONEND
set local selEnd = $(MSG_RESULT)
// select all the text and save it
sci_sendmsg SCI_SELECTALL
sel_saveto "$(tmpfile)"
// restore the selection
sci_sendmsg SCI_SETSELECTIONSTART $(selStart)
sci_sendmsg SCI_SETSELECTIONEND $(selEnd)
// now it's time to process the tmpfile...
• Here is more “advanced” version that avoids visible selection change:
// temporary file name
set local tmpfile = $(SYS.TEMP)\text.txt
// disable redrawing
sci_sendmsg 0x000B 0 // WM_SETREDRAW FALSE
// current selection
sci_sendmsg SCI_GETSELECTIONSTART
set local selStart = $(MSG_RESULT)
sci_sendmsg SCI_GETSELECTIONEND
set local selEnd = $(MSG_RESULT)
// select all the text and save it
sci_sendmsg SCI_SELECTALL
sel_saveto "$(tmpfile)"
// restore the selection
sci_sendmsg SCI_SETSELECTIONSTART $(selStart)
sci_sendmsg SCI_SETSELECTIONEND $(selEnd)
// enable redrawing
sci_sendmsg 0x000B 1 // WM_SETREDRAW TRUE
// now it's time to process the tmpfile...
• NppExec v0.6 beta 1 will introduce new command TEXT_SAVETO and TEXT_LOADFROM that will work with the whole current text. Apart from that, these commands will be similar to SEL_SAVETO and SEL_LOADFROM.
• Hello, @bill-winder, and All,
I’m quite late, but I had have to update my list of fonts and to do some tests, in Notepad++, first ;-))
So, Bill, here is below, a list of Unicode fonts which can, correctly, display your text example :
la diskysjˈɔ̃ fˈy syspɑ̃dˈy pɑ̃dˈɑ̃ lə ʁˈɔbʁ
mɛ bjɛ̃tˈoˈandʁuː styˈaʁ la ʁəpʁənˈɛ
• The first table concerns Monospaced fonts, where all characters have the same width
• The second table concerns Proportional fonts, with variable width
Notes :
• The Unicode number of code-points, handled by the font, is located in the Glyphs column
• The different fonts are sorted out by increasing number of their glyphs
• The default regular weight, only, of each font, is listed. The other weights ( Bold, Italic… ) are absent of the tables, below
• The Lucida Sans Unicode font is certainly already installed on your configuration ( v5.00 or higher )
• The Segoe UI family font is probably installed, too ( v5.28 or higher )
• For the Iosevka and Iosevka Slab fonts, read very carefully the README.md part, scrolling downwards, because these fonts are highly configurable ! Refer to :
https://github.com/be5invis/Iosevka
• You may also consult this valuable article, Large, multi-script Unicode fonts for Windows computers, at :
http://www.alanwood.net/unicode/fonts.html
•------------------------•------------•--------•-----------------------------------------------------------------------------------•
| MONOSPACED Font Name | Version | Glyphs | Web Site Link |
•------------------------•------------•--------•-----------------------------------------------------------------------------------•
| Linux Libertine Mono | v5.17 | 1,021 | http://www.linuxlibertine.org |
| SourceCodePro | v2.030 | 1,585 | https://github.com/adobe-fonts/source-code-pro/releases/tag/2.030R-ro%2F1.050R-it |
| | | | |
| Iosevka | v1.14.0 | 3,654 | https://github.com/be5invis/Iosevka/releases |
| Iosevka Slab | v1.14.0 | 3,654 | https://github.com/be5invis/Iosevka/releases |
| | | | |
| FreeMono | v0412.2268 | 4,177 | http://ftp.gnu.org/gnu/freefont/freefont-ttf-20120503.zip |
•------------------------•------------•--------•-----------------------------------------------------------------------------------•
•------------------------•------------•--------•-----------------------------------------------------------------------------------•
| PROPORTIONAL Font Name | Version | Glyphs | Web Site Link |
•------------------------•------------•--------•-----------------------------------------------------------------------------------•
| Lucida Sans Unicode | v2.00 | 1,776 | https://fr.ffonts.net/Lucida-Sans-Unicode.font.zip |
| Lucida Sans Unicode | v5.00 | 1,779 | Usually installed in Windows 8 and higher |
| | | | |
| Linux Biolinum | v1.1.8 | 2,403 | http://www.linuxlibertine.org |
| Linux Libertine | v5.3.0 | 2,676 | http://www.linuxlibertine.org |
| | | | |
| SegoeUI | v5.05 | 2,901 | https://github.com/KingRider/frontcom/tree/master/css/fonts |
| SegoeUI | v5.28 | 4,516 | Usually installed in Windows 8 and higher |
| | | | |
| Junicode | v0.78 | 3,286 | http://sourceforge.net/projects/junicode |
| | | | |
| | | | |
| | | | |
| FreeSans | v0412.2268 | 6,272 | http://ftp.gnu.org/gnu/freefont/freefont-ttf-20120503.zip |
| FreeSerif | v0412.2263 | 10,538 | http://ftp.gnu.org/gnu/freefont/freefont-ttf-20120503.zip |
| | | | |
| Quivira | v4.1 | 10,486 | http://www.quivira-font.com |
| | | | |
| Arial Unicode MS | v1.01 | 50,377 | https://www.wfonts.com/font/arial-unicode-ms |
| | | | |
| Code 2000 | v1.171 | 63,546 | http://www.fontspace.com/james-kass/code2000 |
•------------------------•------------•--------•-----------------------------------------------------------------------------------•
Best Regards,
guy038
• @Vitaliy-Dovgan Thanks! Good work around. There must be somewhere direct access to an unsaved tab, because I can close npp with an unsaved tab and when I reopen, it is there. So the unsaved tab is actually saved somewhere. Curious!
• @Bill-Winder said:
So the unsaved tab is actually saved somewhere.
Yes, saved under a “Backup” folder by default. I just made 5 “new files” and they were created in this folder with the following names:
new 1@2018-02-19_155611
new 2@2018-02-19_155613
new 3@2018-02-19_155613
new 4@2018-02-19_155615
new 5@2018-02-19_155618
• @guy038 Very cool! I just chose fonts at random until things looked right. Useful description, which will serve elsewhere also – same issue in Jedit. (Though I use Jedit less and less - npp has everything I need and is very fast.)
• @Scott-Sumner and they are numbered! Perfect. Which essentially solves my problem.
I hope! There is one further step. I will have to capture the name of the active tab and give it to NPPEXec.
But that should be possible through the sci_sendmessage function (will look into that), since this is possible:
sci_sendmsg SCI_GETSELECTIONSTART
set local selStart = $(MSG_RESULT)
https://search.r-project.org/CRAN/refmans/biogrowth/html/predict_growth_uncertainty.html
|
predict_growth_uncertainty {biogrowth} R Documentation
## Isothermal growth with parameter uncertainty
### Description
Simulation of microbial growth considering uncertainty in the model parameters. Calculations are based on Monte Carlo simulations, assuming the parameters follow a multivariate normal distribution.
### Usage
predict_growth_uncertainty(
model_name,
times,
n_sims,
pars,
corr_matrix = diag(nrow(pars)),
check = TRUE
)
### Arguments
• model_name: Character describing the primary growth model.
• times: Numeric vector of storage times for the simulations.
• n_sims: Number of simulations.
• pars: A tibble describing the parameter uncertainty (see details).
• corr_matrix: Correlation matrix of the model parameters, defined in the same order as in pars. An identity matrix by default (uncorrelated parameters).
• check: Whether to do some tests. TRUE by default (matching the function signature above).
### Details
The distributions of the model parameters are defined in the pars argument using a tibble with 4 columns:
• par: identifier of the model parameter (according to primary_model_data()).
• mean: mean value of the model parameter.
• sd: standard deviation of the model parameter.
• scale: scale at which the model parameter is defined. Valid values are 'original' (no transformation), 'sqrt' (square root) or 'log' (log-scale). The parameter sample is generated assuming the parameter follows a marginal normal distribution at this scale, and is later converted back to the original scale for calculations.
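As an illustration of this sampling scheme (marginal normals at the stated scale, correlated via the given matrix, then back-transformed), here is a Python/NumPy sketch; it mirrors the idea, not the actual biogrowth implementation:

```python
import numpy as np

def sample_parameters(means, sds, scales, corr, n_sims, rng=None):
    """Draw parameter sets from a multivariate normal defined at the
    transformed scale, then convert each column back to the original scale."""
    rng = rng if rng is not None else np.random.default_rng(42)
    means = np.asarray(means, dtype=float)
    sds = np.asarray(sds, dtype=float)
    cov = corr * np.outer(sds, sds)  # covariance = correlation scaled by the sds
    draws = rng.multivariate_normal(means, cov, size=n_sims)
    back = {"original": lambda x: x, "sqrt": lambda x: x ** 2, "log": np.exp}
    for j, scale in enumerate(scales):
        draws[:, j] = back[scale](draws[:, j])
    return draws

# Mirroring the pars tibble in the example: mean/sd are at the transformed scale.
pars_draws = sample_parameters(
    means=[0, 2, 4, 6],
    sds=[0.2, 0.3, 0.4, 0.5],
    scales=["original", "sqrt", "sqrt", "original"],
    corr=np.eye(4),
    n_sims=3000,
)
print(pars_draws.shape)  # (3000, 4)
```

Note that after back-transforming, the sampled mu column has mean roughly mu² + sd² = 4.09 rather than 2, which is exactly why the docs stress that mean and sd are defined at the transformed scale.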
### Value
An instance of GrowthUncertainty().
### Examples
## Definition of the simulation settings
my_model <- "Baranyi"
my_times <- seq(0, 30, length = 100)
n_sims <- 3000
library(tibble)
pars <- tribble(
~par, ~mean, ~sd, ~scale,
"logN0", 0, .2, "original",
"mu", 2, .3, "sqrt",
"lambda", 4, .4, "sqrt",
"logNmax", 6, .5, "original"
)
## Calling the function
stoc_growth <- predict_growth_uncertainty(my_model, my_times, n_sims, pars)
## We can plot the results
plot(stoc_growth)
my_cor <- matrix(c(1, 0, 0, 0,
0, 1, 0.7, 0,
0, 0.7, 1, 0,
0, 0, 0, 1),
nrow = 4)
stoc_growth2 <- predict_growth_uncertainty(my_model, my_times, n_sims, pars, my_cor)
plot(stoc_growth2)
## The time_to_size function can calculate the median growth curve to reach a size
time_to_size(stoc_growth, 4)
## Or the distribution of times
dist <- time_to_size(stoc_growth, 4, type = "distribution")
plot(dist)
[Package biogrowth version 1.0.1 Index]
https://bbengfort.github.io/snippets/2017/07/09/public-ip.html
|
I tried to make the PublicIP() function a bit robust, using a timeout of 5 seconds so it couldn’t hang up any calling programs, and performing a lot of error handling. For example, a 429 response from myexternalip.com means that the rate limit has been exceeded (30 requests per minute). As I like the service, I wanted to make sure this was maintained so I ensured an error was thrown if this was breached. Additionally I used the json format rather than the raw format which meant I had to do some parsing, but I think it lends the code a bit more stability.
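For readers who don't want to dig into the Go source, here is a hedged Python sketch of the same design choices (a hard 5-second timeout, explicit 429 handling, JSON rather than raw output); the {"ip": ...} response shape is an assumption about myexternalip.com's JSON endpoint:

```python
import json
import urllib.error
import urllib.request

API_URL = "https://myexternalip.com/json"  # rate limited: ~30 requests/minute

def parse_ip(payload: bytes) -> str:
    """Pull the address out of the JSON body; fail loudly on unexpected shapes."""
    data = json.loads(payload)
    if "ip" not in data:
        raise ValueError("unexpected response: %r" % (data,))
    return data["ip"]

def public_ip(timeout: float = 5.0) -> str:
    """Fetch the caller's public IP with a hard timeout so callers never hang."""
    try:
        with urllib.request.urlopen(API_URL, timeout=timeout) as resp:
            return parse_ip(resp.read())
    except urllib.error.HTTPError as err:
        if err.code == 429:  # rate limit exceeded
            raise RuntimeError("myexternalip.com rate limit exceeded") from err
        raise

# The parser can be exercised without touching the network:
print(parse_ip(b'{"ip": "203.0.113.7"}'))  # 203.0.113.7
```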
https://forum.math.toronto.edu/index.php?PHPSESSID=jla97hv18p5ovdl57u09n6o0q2&topic=1277.0;wap2
|
MAT244--2018F > Quiz-1
Q1: TUT 0801
(1/1)
Victor Ivrii:
Find the solution of the given initial value problem
\begin{equation*}
y' - y = 2te^{2t},\qquad y(0)=1.
\end{equation*}
Wenhan Sheng:
Solution in the following PDF file
Victor Ivrii:
Waiting for a typed solution.
Wei Cui:
Question: $y^{'} - y = 2te^{2t}$, $y(0) = 1$
$p(t) = -1$, $g(t) = 2te^{2t}$
$u(t) = e^{\int -1dt} = e^{-t}$
multiply both sides with $u$, then we get:
$e^{-t}y^{'}-e^{-t}y=2te^{t}$
$(e^{-t}y)^{'} = 2te^{t}$
$d(e^{-t}y)= 2te^{t}dt$
$e^{-t}y=\int 2te^{t}dt$
$e^{-t}y = 2e^{t}(t-1)+C$
$y = 2e^{2t}(t-1)+Ce^{t}$
Since $y(0) = 1 \implies 1= 2\times e^{0}(0-1)+Ce^{0}$, then we get $C =3$
Therefore, general solution is: $y = 2e^{2t}(t-1)+3e^{t}$
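The closed-form answer can be sanity-checked numerically, using a central difference for $y'$ (a quick sketch):

```python
import math

def y(t):
    """Proposed solution y = 2 e^{2t} (t - 1) + 3 e^{t}."""
    return 2 * math.exp(2 * t) * (t - 1) + 3 * math.exp(t)

def residual(t, h=1e-6):
    """y' - y - 2 t e^{2t}, with y' approximated by a central difference."""
    dy = (y(t + h) - y(t - h)) / (2 * h)
    return dy - y(t) - 2 * t * math.exp(2 * t)

assert abs(y(0) - 1) < 1e-12            # initial condition y(0) = 1
for t in (0.0, 0.5, 1.0, 2.0):
    assert abs(residual(t)) < 1e-4      # ODE holds up to finite-difference error
print("solution verified")
```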
http://docs.pymc.io/api/inference.html
|
# Inference¶
## Sampling¶
pymc3.sampling.sample(draws=500, step=None, init='auto', n_init=200000, start=None, trace=None, chain_idx=0, chains=None, cores=None, tune=500, nuts_kwargs=None, step_kwargs=None, progressbar=True, model=None, random_seed=None, live_plot=False, discard_tuned_samples=True, live_plot_kwargs=None, compute_convergence_checks=True, use_mmap=False, **kwargs)
Draw samples from the posterior using the given step methods.
Multiple step methods are supported via compound step methods.
Examples
>>> import pymc3 as pm
... n = 100
... h = 61
... alpha = 2
... beta = 2
>>> with pm.Model() as model: # context management
... p = pm.Beta('p', alpha=alpha, beta=beta)
... y = pm.Binomial('y', n=n, p=p, observed=h)
... trace = pm.sample(2000, tune=1000, cores=4)
>>> pm.summary(trace)
mean sd mc_error hpd_2.5 hpd_97.5
p 0.604625 0.047086 0.00078 0.510498 0.694774
pymc3.sampling.iter_sample(draws, step, start=None, trace=None, chain=0, tune=None, model=None, random_seed=None)
Generator that returns a trace on each iteration using the given step method. Multiple step methods are supported via compound step methods.
Parameters:
• draws (int) – The number of samples to draw.
• step (function) – Step function.
• start (dict) – Starting point in parameter space (or partial point). Defaults to trace.point(-1) if a trace is provided, and model.test_point if not (defaults to empty dict).
• trace (backend, list, or MultiTrace) – This should be a backend instance, a list of variables to track, or a MultiTrace object with past values. If a MultiTrace object is given, it must contain samples for the chain number chain. If None or a list of variables, the NDArray backend is used.
• chain (int) – Chain number used to store sample in backend. If cores is greater than one, chain numbers will start here.
• tune (int) – Number of iterations to tune, if applicable (defaults to None).
• model (Model (optional if in with context))
• random_seed (int or list of ints) – A list is accepted if cores is greater than one.
Examples
for trace in iter_sample(500, step):
...
pymc3.sampling.sample_ppc(trace, samples=None, model=None, vars=None, size=None, random_seed=None, progressbar=True)
Generate posterior predictive samples from a model given a trace.
Parameters:
• trace (backend, list, or MultiTrace) – Trace generated from MCMC sampling, or a list containing dicts from find_MAP() or points.
• samples (int) – Number of posterior predictive samples to generate. Defaults to the length of trace.
• model (Model (optional if in with context)) – Model used to generate trace.
• vars (iterable) – Variables for which to compute the posterior predictive samples. Defaults to model.observed_RVs.
• size (int) – The number of random draws from the distribution specified by the parameters in each sample of the trace.
• random_seed (int) – Seed for the random number generator.
• progressbar (bool) – Whether or not to display a progress bar in the command line. The bar shows the percentage of completion, the sampling speed in samples per second (SPS), and the estimated remaining time until completion (“expected time of arrival”; ETA).
Returns: samples (dict) – Dictionary with the variables as keys; the values are the posterior predictive samples.
pymc3.sampling.sample_ppc_w(traces, samples=None, models=None, weights=None, random_seed=None, progressbar=True)
Generate weighted posterior predictive samples from a list of models and a list of traces according to a set of weights.
Parameters: traces (list or list of lists) – List of traces generated from MCMC sampling, or a list of list containing dicts from find_MAP() or points. The number of traces should be equal to the number of weights. samples (int) – Number of posterior predictive samples to generate. Defaults to the length of the shorter trace in traces. models (list) – List of models used to generate the list of traces. The number of models should be equal to the number of weights and the number of observed RVs should be the same for all models. By default a single model will be inferred from with context, in this case results will only be meaningful if all models share the same distributions for the observed RVs. weights (array-like) – Individual weights for each trace. Default, same weight for each model. random_seed (int) – Seed for the random number generator. progressbar (bool) – Whether or not to display a progress bar in the command line. The bar shows the percentage of completion, the sampling speed in samples per second (SPS), and the estimated remaining time until completion (“expected time of arrival”; ETA). Returns: samples (dict) – Dictionary with the variables as keys; the values are the posterior predictive samples from the weighted models.
pymc3.sampling.init_nuts(init='auto', chains=1, n_init=500000, model=None, random_seed=None, progressbar=True, **kwargs)
Set up the mass matrix initialization for NUTS.
NUTS convergence and sampling speed are extremely dependent on the choice of mass/scaling matrix. This function implements different methods for choosing or adapting the mass matrix.
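As a rough illustration of what diagonal mass-matrix adaptation amounts to, here is a plain-Python sketch of the underlying heuristic (this is not PyMC3's actual `adapt_diag` implementation, which uses windowed running estimates; `estimate_diag_mass` and its input format are invented for this example): the mass for each dimension is taken as the inverse of the estimated posterior variance, so momenta are drawn on the posterior's own scale.

```python
import statistics

def estimate_diag_mass(draws):
    """Estimate a diagonal mass matrix from posterior draws.

    Each entry is the inverse of the per-dimension sample variance,
    which is the basic idea behind diagonal mass-matrix adaptation.
    `draws` is a list of equal-length tuples (one tuple per draw).
    """
    dims = len(draws[0])
    variances = [statistics.variance(d[i] for d in draws) for i in range(dims)]
    return [1.0 / v for v in variances]

# A dimension with larger posterior variance gets a smaller mass,
# so the sampler takes proportionally larger steps along it.
mass = estimate_diag_mass([(0.0, 0.0), (2.0, 1.0), (4.0, 2.0)])
```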
## Step-methods¶
### NUTS¶
class pymc3.step_methods.hmc.nuts.NUTS(vars=None, max_treedepth=10, early_max_treedepth=8, **kwargs)
A sampler for continuous variables based on Hamiltonian mechanics.
NUTS automatically tunes the step size and the number of steps per sample. A detailed description can be found at [1], “Algorithm 6: Efficient No-U-Turn Sampler with Dual Averaging”.
NUTS provides a number of statistics that can be accessed with trace.get_sampler_stats:
• mean_tree_accept: The mean acceptance probability for the tree that generated this sample. The mean of these values across all samples except the burn-in should be approximately target_accept (the default for this is 0.8).
• diverging: Whether the trajectory for this sample diverged. If there are any divergences after burnin, this indicates that the results might not be reliable. Reparametrization can often help, but you can also try to increase target_accept to something like 0.9 or 0.95.
• energy: The energy at the point in phase-space where the sample was accepted. This can be used to identify posteriors with problematically long tails. See below for an example.
• energy_change: The difference in energy between the start and the end of the trajectory. For a perfect integrator this would always be zero.
• max_energy_change: The maximum difference in energy along the whole trajectory.
• depth: The depth of the tree that was used to generate this sample
• tree_size: The number of leaves of the sampling tree, when the sample was accepted. This is usually a bit less than 2 ** depth. If the tree size is large, the sampler is using a lot of leapfrog steps to find the next sample. This can for example happen if there are strong correlations in the posterior, if the posterior has long tails, if there are regions of high curvature (“funnels”), or if the variance estimates in the mass matrix are inaccurate. Reparametrisation of the model or estimating the posterior variances from past samples might help.
• tune: This is True, if step size adaptation was turned on when this sample was generated.
• step_size: The step size used for this sample.
• step_size_bar: The current best known step-size. After the tuning samples, the step size is set to this value. This should converge during tuning.
References
[R3131] Hoffman, Matthew D., & Gelman, Andrew. (2011). The No-U-Turn Sampler: Adaptively Setting Path Lengths in Hamiltonian Monte Carlo.
Set up the No-U-Turn sampler.
Notes
The step size adaptation stops when self.tune is set to False. This is usually achieved by setting the tune parameter of pm.sample to the desired number of tuning steps.
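The dual-averaging scheme referenced in [1] can be sketched in a few lines of plain Python, using Hoffman & Gelman's notation. This is illustrative rather than PyMC3's internal code, and the default `mu` here is an arbitrary anchor (the paper uses the log of ten times the initial step size):

```python
import math

def dual_averaging_step(m, h_bar, log_eps_bar, accept_prob,
                        target_accept=0.8, mu=0.0, gamma=0.05, t0=10, k=0.75):
    """One dual-averaging update of the NUTS/HMC step size.

    m is the iteration number (starting at 1), h_bar the running
    average of (target_accept - accept_prob), and log_eps_bar the
    averaged log step size that is used after tuning ends.
    """
    h_bar = (1 - 1 / (m + t0)) * h_bar + (target_accept - accept_prob) / (m + t0)
    log_eps = mu - math.sqrt(m) / gamma * h_bar   # step size for the next iteration
    eta = m ** (-k)
    log_eps_bar = eta * log_eps + (1 - eta) * log_eps_bar
    return h_bar, log_eps, log_eps_bar
```

When the observed acceptance probability sits below target_accept, h_bar grows and the step size shrinks; when acceptance is too high, the step size grows, which is exactly the behaviour described for the step_size and step_size_bar statistics above.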
static competence(var, has_grad)
Check how appropriate this class is for sampling a random variable.
### Metropolis¶
class pymc3.step_methods.metropolis.Metropolis(vars=None, S=None, proposal_dist=None, scaling=1.0, tune=True, tune_interval=100, model=None, mode=None, **kwargs)
Metropolis-Hastings sampling step
Parameters: vars (list) – List of variables for sampler S (standard deviation or covariance matrix) – Some measure of variance to parameterize proposal distribution proposal_dist (function) – Function that returns zero-mean deviates when parameterized with S (and n). Defaults to normal. scaling (scalar or array) – Initial scale factor for proposal. Defaults to 1. tune (bool) – Flag for tuning. Defaults to True. tune_interval (int) – The frequency of tuning. Defaults to 100 iterations. model (PyMC Model) – Optional model for sampling step. Defaults to None (taken from context). mode (string or Mode instance.) – compilation mode passed to Theano functions
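For intuition, a single random-walk Metropolis-Hastings step with the default zero-mean normal proposal looks roughly like this (a self-contained plain-Python sketch, not the PyMC3 implementation; `logp` stands for the model's log-probability):

```python
import math
import random

def metropolis_step(x, logp, scale, rng):
    """One random-walk Metropolis step for a scalar variable.

    Proposes x' = x + Normal(0, scale) and accepts with probability
    min(1, p(x') / p(x)), computed in log space for stability.
    """
    proposal = x + rng.gauss(0.0, scale)
    log_accept = logp(proposal) - logp(x)
    if math.log(rng.random()) < log_accept:
        return proposal, True   # accepted
    return x, False             # rejected: the chain repeats x

# Sampling a standard normal target; tuning (see tune_interval above)
# would normally rescale `scale` based on the running acceptance rate.
rng = random.Random(1)
x = 0.0
chain = []
for _ in range(20000):
    x, _ = metropolis_step(x, lambda v: -0.5 * v * v, 1.0, rng)
    chain.append(x)
```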
class pymc3.step_methods.metropolis.BinaryMetropolis(vars, scaling=1.0, tune=True, tune_interval=100, model=None)
Metropolis-Hastings optimized for binary variables
Parameters: vars (list) – List of variables for sampler scaling (scalar or array) – Initial scale factor for proposal. Defaults to 1. tune (bool) – Flag for tuning. Defaults to True. tune_interval (int) – The frequency of tuning. Defaults to 100 iterations. model (PyMC Model) – Optional model for sampling step. Defaults to None (taken from context).
static competence(var)
BinaryMetropolis is only suitable for binary (bool) and Categorical variables with k=1.
class pymc3.step_methods.metropolis.BinaryGibbsMetropolis(vars, order='random', transit_p=0.8, model=None)
A Metropolis-within-Gibbs step method optimized for binary variables
Parameters: vars (list) – List of variables for sampler order (list or 'random') – List of integers indicating the Gibbs update order e.g., [0, 2, 1, …]. Default is random transit_p (float) – The diagonal of the transition kernel. A value > .5 gives anticorrelated proposals, which results in more efficient antithetic sampling. model (PyMC Model) – Optional model for sampling step. Defaults to None (taken from context).
static competence(var)
BinaryGibbsMetropolis is only suitable for Bernoulli and Categorical variables with k=2.
class pymc3.step_methods.metropolis.CategoricalGibbsMetropolis(vars, proposal='uniform', order='random', model=None)
A Metropolis-within-Gibbs step method optimized for categorical variables. This step method works for Bernoulli variables as well, but it is not optimized for them, like BinaryGibbsMetropolis is. Step method supports two types of proposals: A uniform proposal and a proportional proposal, which was introduced by Liu in his 1996 technical report “Metropolized Gibbs Sampler: An Improvement”.
static competence(var)
CategoricalGibbsMetropolis is only suitable for Bernoulli and Categorical variables.
### Slice¶
class pymc3.step_methods.slicer.Slice(vars=None, w=1.0, tune=True, model=None, iter_limit=inf, **kwargs)
Univariate slice sampler step method
Parameters: vars (list) – List of variables for sampler. w (float) – Initial width of slice (defaults to 1). tune (bool) – Flag for tuning (defaults to True). model (PyMC Model) – Optional model for sampling step. Defaults to None (taken from context).
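The univariate slice sampler can be sketched with Neal's stepping-out and shrinkage procedure; `w` below plays the role of the initial slice width parameter, and `iter_limit` mirrors the safety bound above. This is again an illustrative plain-Python sketch, not PyMC3's code:

```python
import math
import random

def slice_step(x, logp, w, rng, iter_limit=1000):
    """One univariate slice-sampling step (Neal 2003)."""
    # Draw a vertical level uniformly under the (log) density at x.
    log_y = logp(x) + math.log(rng.random())
    # Stepping out: grow the bracket until both ends leave the slice.
    left = x - w * rng.random()
    right = left + w
    while logp(left) > log_y:
        left -= w
    while logp(right) > log_y:
        right += w
    # Shrinkage: sample inside the bracket, narrowing it on rejection.
    for _ in range(iter_limit):
        x_new = left + (right - left) * rng.random()
        if logp(x_new) > log_y:
            return x_new
        if x_new < x:
            left = x_new
        else:
            right = x_new
    return x  # give up and keep the current point
```

Unlike Metropolis, every step moves to a new point drawn from the slice, so no step-size tuning for an acceptance rate is needed; only the width w affects efficiency.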
### Hamiltonian Monte Carlo¶
class pymc3.step_methods.hmc.hmc.HamiltonianMC(vars=None, path_length=2.0, adapt_step_size=True, gamma=0.05, k=0.75, t0=10, target_accept=0.8, **kwargs)
A sampler for continuous variables based on Hamiltonian mechanics.
See NUTS sampler for automatically tuned stopping time and step size scaling.
Set up the Hamiltonian Monte Carlo sampler.
Parameters: vars (list of theano variables) – path_length (float, default=2) – total length to travel step_rand (function float -> float, default=unif) – A function which takes the step size and returns a new one used to randomize the step size at each iteration. step_scale (float, default=0.25) – Initial size of steps to take, automatically scaled down by 1/n**(1/4). scaling (array_like, ndim = {1,2}) – The inverse mass, or precision matrix. One dimensional arrays are interpreted as diagonal matrices. If is_cov is set to True, this will be interpreted as the mass or covariance matrix. is_cov (bool, default=False) – Treat the scaling as mass or covariance matrix. potential (Potential, optional) – An object that represents the Hamiltonian with methods velocity, energy, and random methods. It can be specified instead of the scaling matrix. target_accept (float, default .8) – Adapt the step size such that the average acceptance probability across the trajectories is close to target_accept. Higher values for target_accept lead to smaller step sizes. Setting this to higher values like 0.9 or 0.99 can help with sampling from difficult posteriors. Valid values are between 0 and 1 (exclusive). gamma (float, default .05) – k (float, default .75) – Parameter for dual averaging for step size adaptation. Values between 0.5 and 1 (exclusive) are admissible. Higher values correspond to slower adaptation. t0 (int, default 10) – Parameter for dual averaging. Higher values slow initial adaptation. adapt_step_size (bool, default=True) – Whether step size adaptation should be enabled. If this is disabled, k, t0, gamma and target_accept are ignored. model (pymc3.Model) – The model **kwargs (passed to BaseHMC) –
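The Hamiltonian dynamics underlying this sampler are simulated with the leapfrog integrator. A minimal scalar, unit-mass sketch follows (illustrative only; PyMC3 works with vectors and the mass matrix described by scaling/potential):

```python
def leapfrog(q, p, grad_logp, step_size, n_steps):
    """Leapfrog integration of Hamiltonian dynamics (scalar, unit mass).

    q is position, p is momentum, and grad_logp the gradient of the
    target log-probability. The symmetric half steps for momentum make
    the integrator reversible and volume preserving.
    """
    p = p + 0.5 * step_size * grad_logp(q)      # initial half step for momentum
    for _ in range(n_steps - 1):
        q = q + step_size * p                   # full step for position
        p = p + step_size * grad_logp(q)        # full step for momentum
    q = q + step_size * p
    p = p + 0.5 * step_size * grad_logp(q)      # final half step for momentum
    return q, p
```

For a standard normal target (grad_logp = lambda q: -q) the trajectory is a rotation in phase space, and the total energy 0.5*q**2 + 0.5*p**2 drifts only by O(step_size**2); this is why the energy_change statistic of a well-tuned sampler stays near zero.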
static competence(var, has_grad)
Check how appropriate this class is for sampling a random variable.
## Variational¶
### OPVI¶
Variational inference is a great approach for doing really complex, often intractable Bayesian inference in approximate form. Common methods (e.g. ADVI) lack the complexity needed, so that the approximate posterior does not reveal the true nature of the underlying problem. In some applications this can yield unreliable decisions.
The OPVI framework was recently presented at NIPS 2017. It generalizes variational inference so that the problem is built from blocks. The first and essential block is the Model itself. The second is the Approximation; in some cases $$\log Q(D)$$ is not really needed. Whether it is needed depends on the third and fourth parts of that black box, the Operator and the Test Function respectively.
The Operator is the approach we use: it constructs the loss from a given Model, Approximation and Test Function. The last one is not needed if we minimize the KL divergence from Q to the posterior. As a drawback we need to compute $$\log Q(D)$$. Sometimes the approximation family is intractable and $$\log Q(D)$$ is not available; here comes the LS (Langevin Stein) Operator with a set of test functions.
The Test Function has a less intuitive meaning. It is usually used with the LS operator and represents all we want from our approximate distribution. For any given vector-valued function of $$z$$ the LS operator yields a zero-mean function under the posterior, and $$\log Q(D)$$ is no longer needed. That opens the door to rich approximation families such as neural networks.
References
class pymc3.variational.opvi.ObjectiveFunction(op, tf)
Helper class for constructing the loss and updates for variational inference
Parameters: op (Operator) – OPVI Functional operator tf (TestFunction) – OPVI TestFunction
score_function(sc_n_mc=None, more_replacements=None, fn_kwargs=None)
Compile a scoring function that takes no inputs and returns the loss
Parameters: sc_n_mc (int) – number of scoring MC samples more_replacements – Apply custom replacements before compiling a function fn_kwargs (dict) – arbitrary kwargs passed to theano.function Returns: theano.function
step_function(obj_n_mc=None, tf_n_mc=None, obj_optimizer=<function adagrad_window>, test_optimizer=<function adagrad_window>, more_obj_params=None, more_tf_params=None, more_updates=None, more_replacements=None, total_grad_norm_constraint=None, score=False, fn_kwargs=None)
Step function that should be called on each optimization step.
Generally it solves the following problem:
$\lambda^{*} = \inf_{\lambda} \sup_{\theta} t(\mathbb{E}_{\lambda}[(O^{p,q}f_{\theta})(z)])$
updates(obj_n_mc=None, tf_n_mc=None, obj_optimizer=<function adagrad_window>, test_optimizer=<function adagrad_window>, more_obj_params=None, more_tf_params=None, more_updates=None, more_replacements=None, total_grad_norm_constraint=None)
Calculates gradients for the objective function and the test function, then constructs updates for the optimization step
Parameters: obj_n_mc (int) – Number of monte carlo samples used for approximation of objective gradients tf_n_mc (int) – Number of monte carlo samples used for approximation of test function gradients obj_optimizer (function (loss, params) -> updates) – Optimizer that is used for objective params test_optimizer (function (loss, params) -> updates) – Optimizer that is used for test function params more_obj_params (list) – Add custom params for objective optimizer more_tf_params (list) – Add custom params for test function optimizer more_updates (dict) – Add custom updates to resulting updates more_replacements (dict) – Apply custom replacements before calculating gradients total_grad_norm_constraint (float) – Bounds gradient norm, prevents exploding gradient problem Returns: ObjectiveUpdates
class pymc3.variational.opvi.Operator(approx)
Base class for Operator
Parameters: approx (Approximation) – an approximation instance
Notes
To implement a custom operator you need to define the Operator.apply() method
apply(f)
Operator itself
$(O^{p,q}f_{\theta})(z)$
Parameters: f (TestFunction or None) – function that takes z = self.input and returns same dimensional output Returns: TensorVariable – symbolically applied operator
objective_class
alias of ObjectiveFunction
class pymc3.variational.opvi.Group(group, vfam=None, params=None, random_seed=None, model=None, local=False, rowwise=False, options=None, **kwargs)
Base class for grouping variables in VI
Grouped Approximation is used for modelling mutual dependencies for a specified group of variables. Base for local and global group.
Parameters: group (list) – List of PyMC3 variables, or None to indicate that the group takes all remaining variables vfam (str) – String that marks the corresponding variational family for the group. Cannot be passed together with params params (dict) – Dict with variational family parameters; a full description can be found below. Cannot be passed together with vfam random_seed (int) – Random seed for underlying random generator model – PyMC3 Model local (bool) – Indicates whether this group is local. Cannot be passed without params. Such a group should have only one variable rowwise (bool) – Indicates whether this group is independently parametrized over the first dim. Such a group should have only one variable options (dict) – Special options for the group kwargs (Other kwargs for the group) –
Notes
Group instance/class has some important constants:
• supports_batched Determines whether such variational family can be used for AEVB or rowwise approx.
AEVB approx is an approximation that somehow depends on input data. It can be treated as a conditional distribution. You can read more in the corresponding paper mentioned in the references.
Rowwise mode is a special case approximation that treats every ‘row’ of a tensor as independent from the others. Some distributions can’t do that by definition, e.g. Empirical, which consists of particles only.
• has_logq Tells that distribution is defined explicitly
These constants help provide the correct inference method for a given parametrization
Examples
Basic Initialization
Group is a factory class. You do not need to call every ApproximationGroup explicitly. By passing the correct vfam (Variational FAMily) argument you tell which parametrization is desired for the group. This helps avoid overloading the code with lots of classes.
>>> group = Group([latent1, latent2], vfam='mean_field')
The other way to select an approximation is to provide a params dictionary with predefined, well shaped parameters. The keys of the dict serve as an identifier for the variational family and help to autoselect the correct group class. To identify which approximation to use, the params dict should have the full set of needed parameters. As there are two ways to instantiate a Group, passing both vfam and params is prohibited. Partial parametrization is prohibited by design to avoid corner cases and possible problems.
>>> group = Group([latent3], params=dict(mu=my_mu, rho=my_rho))
Important to note: in case you pass custom params they will not be autocollected by the optimizer; you will have to provide them with the more_obj_params keyword.
Supported dict keys:
• {‘mu’, ‘rho’}: MeanFieldGroup
• {‘mu’, ‘L_tril’}: FullRankGroup
• {‘histogram’}: EmpiricalGroup
• {0, 1, 2, 3, …, k-1}: NormalizingFlowGroup of depth k
NormalizingFlows have other parameters than ordinary groups and should be passed as nested dicts with the following keys:
• {‘u’, ‘w’, ‘b’}: PlanarFlow
• {‘a’, ‘b’, ‘z_ref’}: RadialFlow
• {‘loc’}: LocFlow
• {‘rho’}: ScaleFlow
• {‘v’}: HouseholderFlow
Note that all integer keys should be present in the dictionary. An example of NormalizingFlow initialization can be found below.
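As a concrete example of one of these flow blocks, the planar flow listed above transforms a point as $$f(z) = z + u \tanh(w^\top z + b)$$ (Rezende & Mohamed 2015). A minimal plain-Python sketch (illustrative; PyMC3's flows operate on Theano tensors and also track the log-determinant of the Jacobian):

```python
import math

def planar_flow(z, u, w, b):
    """Apply one planar flow f(z) = z + u * tanh(w.z + b).

    z, u, w are equal-length lists (vectors); b is a scalar.
    """
    a = math.tanh(sum(wi * zi for wi, zi in zip(w, z)) + b)
    return [zi + ui * a for zi, ui in zip(z, u)]
```

Stacking several such transforms (the '*k' part of a flow formula like 'planar*5') composes simple invertible maps into a flexible approximate posterior.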
Using AEVB
Autoencoding variational Bayes is a powerful tool to get a conditional $$q(\lambda|X)$$ distribution on latent variables. It is well supported by PyMC3, and all you need is to provide a dictionary with well shaped variational parameters; the correct approximation will be autoselected as mentioned in the section above. However, there are some implementation restrictions in AEVB: the autoencoded variable must have its first dimension as the batch dimension, while the other dimensions stay fixed. With these assumptions it is possible to generalize all variational approximation families as batched approximations that have flexible parameters and a leading axis.
Only single-variable local groups are supported, and params are required.
>>> # for mean field
>>> group = Group([latent3], params=dict(mu=my_mu, rho=my_rho), local=True)
>>> # or for full rank
>>> group = Group([latent3], params=dict(mu=my_mu, L_tril=my_L_tril), local=True)
• An Approximation class is selected automatically based on the keys in dict.
• my_mu and my_rho are usually estimated with neural network or function approximator.
Using Row-Wise Group
Batched groups have independent row-wise approximations, so using a batched mean field gives no additional effect. It is more interesting if you want each row of a matrix to be parametrized independently with a normalizing flow or a full-rank Gaussian.
To tell Group that the group is batched, set the rowwise kwarg to True. Only single-variable groups are allowed due to implementation details.
>>> group = Group([latent3], vfam='fr', rowwise=True) # 'fr' is alias for 'full_rank'
The resulting approximation for this variable will have the following structure
$latent3_{i, \dots} \sim \mathcal{N}(\mu_i, \Sigma_i) \forall i$
Note: Using rowwise with a user-parametrized approximation is ok, but the shape should be checked beforehand, as PyMC3 cannot infer it
Normalizing Flow Group
If you use the simple initialization pattern with vfam, nothing changes. Passing a flow formula to vfam gives the correct flow parametrization for the group
>>> group = Group([latent3], vfam='scale-hh*5-radial*4-loc')
Note: Consider passing the location flow as the last one and the scale flow as the first one for stable inference.
Rowwise normalizing flow is supported as well
>>> group = Group([latent3], vfam='scale-hh*2-radial-loc', rowwise=True)
Custom parameters for normalizing flows can be real trouble the first time; they have quite a different format from the other variational families.
>>> # int is used as key, it also tells the flow position
... flow_params = {
... # rho parametrizes scale flow, softplus is used to map (-inf; inf) -> (0, inf)
... 0: dict(rho=my_scale),
... 1: dict(v=my_v1), # Householder Flow, v is parameter name from the original paper
... 2: dict(v=my_v2), # do not miss any number in dict, or else error is raised
... 3: dict(a=my_a, b=my_b, z_ref=my_z_ref), # Radial flow
... 4: dict(loc=my_loc) # Location Flow
... }
... group = Group([latent3], params=flow_params)
... # local=True can be added in case you do AEVB inference
... group = Group([latent3], params=flow_params, local=True)
Delayed Initialization
When you have a lot of latent variables it is impractical to set everything up manually. To make life much simpler, you can pass None instead of a list of variables. In that case shared parameters are not created until you pass all collected groups to an Approximation object, which collects all the groups together and checks that every group is correctly initialized. For the group whose group argument is None it collects all remaining variables not covered by other groups and performs the delayed init.
>>> group_1 = Group([latent1], vfam='fr') # latent1 has full rank approximation
>>> group_other = Group(None, vfam='mf') # other variables have mean field Q
>>> approx = Approximation([group_1, group_other])
Summing Up
Once you have created all the groups, pass them to Approximation. It does not accept any parameter other than groups
>>> approx = Approximation(my_groups)
References
logq
Dev - Monte Carlo estimate for group logQ
logq_norm
Dev - Monte Carlo estimate for group logQ normalized
make_size_and_deterministic_replacements(s, d, more_replacements=None)
Dev - creates correct replacements for initial depending on sample size and deterministic flag
Parameters: s (scalar) – sample size d (bool or scalar) – whether sampling is done deterministically more_replacements (dict) – replacements for shape and initial Returns: dict with replacements for initial
set_size_and_deterministic(node, s, d, more_replacements=None)
Dev - after node is sampled via symbolic_sample_over_posterior() or symbolic_single_sample() new random generator can be allocated and applied to node
Parameters: node (Variable) – Theano node with symbolically applied VI replacements s (scalar) – desired number of samples d (bool or int) – whether sampling is done deterministically more_replacements (dict) – more replacements to apply Returns: Variable with applied replacements, ready to use
symbolic_logq
Dev - correctly scaled self.symbolic_logq_not_scaled
symbolic_logq_not_scaled
Dev - symbolically computed logq for self.symbolic_random computations can be more efficient since all is known beforehand including self.symbolic_random
symbolic_normalizing_constant
Dev - normalizing constant for self.logq, scales it to minibatch_size instead of total_size
symbolic_random
Dev - abstract node that takes self.symbolic_initial and creates approximate posterior that is parametrized with self.params_dict.
Implementation should take into account self.batched. If self.batched is True, then self.symbolic_initial is a 3d tensor, else 2d
Returns: tensor
symbolic_random2d
Dev - self.symbolic_random flattened to matrix
symbolic_sample_over_posterior(node)
Dev - performs sampling of node applying independent samples from posterior each time. Note that it is done symbolically and this node needs set_size_and_deterministic() call
symbolic_single_sample(node)
Dev - performs sampling of node applying single sample from posterior. Note that it is done symbolically and this node needs set_size_and_deterministic() call with size=1
to_flat_input(node)
Dev - replace vars with flattened view stored in self.inputs
class pymc3.variational.opvi.Approximation(groups, model=None)
Wrapper for grouped approximations
Wraps list of groups, creates an Approximation instance that collects sampled variables from all the groups, also collects logQ needed for explicit Variational Inference.
Parameters: groups (list[Group]) – List of Group instances. They should have all model variables model (Model) –
Notes
Some shortcuts for single group approximations are available:
• MeanField
• FullRank
• NormalizingFlow
• Empirical
Single group accepts local_rv keyword with dict mapping PyMC3 variables to their local Group parameters dict
get_optimization_replacements(s, d)
Dev - optimizations for logP. If the sample size is static and equal to 1, the theano.scan MC estimate is replaced with a single sample without a call to theano.scan.
logp
Dev - computes $$E_{q}(logP)$$ from model via theano.scan that can be optimized later
logp_norm
Dev - normalized $$E_{q}(logP)$$
logq
Dev - collects logQ for all groups
logq_norm
Dev - collects logQ for all groups and normalizes it
make_size_and_deterministic_replacements(s, d, more_replacements=None)
Dev - creates correct replacements for initial depending on sample size and deterministic flag
Parameters: s (scalar) – sample size d (bool) – whether sampling is done deterministically more_replacements (dict) – replacements for shape and initial Returns: dict with replacements for initial
replacements
Dev - all replacements from groups to replace PyMC random variables with approximation
rslice(name)
Dev - vectorized sampling for named random variable without call to theano.scan. This node still needs set_size_and_deterministic() to be evaluated
sample(draws=500, include_transformed=True)
Draw samples from variational posterior.
Parameters: draws (int) – Number of random samples. include_transformed (bool) – If True, transformed variables are also sampled. Default is False. Returns: trace (pymc3.backends.base.MultiTrace) – Samples drawn from variational posterior.
sample_node(node, size=None, deterministic=False, more_replacements=None)
Samples given node or nodes over shared posterior
Parameters: node (Theano Variables (or Theano expressions)) – size (None or scalar) – number of samples more_replacements (dict) – add custom replacements to graph, e.g. change input source deterministic (bool) – whether to use zeros as initial distribution; if True, a zero initial point will produce constant latent variables Returns: sampled node(s) with replacements
scale_cost_to_minibatch
Dev - Property to control scaling cost to minibatch
set_size_and_deterministic(node, s, d, more_replacements=None)
Dev - after node is sampled via symbolic_sample_over_posterior() or symbolic_single_sample() new random generator can be allocated and applied to node
Parameters: node (Variable) – Theano node with symbolically applied VI replacements s (scalar) – desired number of samples d (bool or int) – whether sampling is done deterministically more_replacements (dict) – more replacements to apply Returns: Variable with applied replacements, ready to use
single_symbolic_logp
Dev - for single MC sample estimate of $$E_{q}(logP)$$ theano.scan is not needed and code can be optimized
sized_symbolic_logp
Dev - computes sampled logP from model via theano.scan
symbolic_logq
Dev - collects symbolic_logq for all groups
symbolic_normalizing_constant
Dev - normalizing constant for self.logq, scales it to minibatch_size instead of total_size. Here the effect is controlled by self.scale_cost_to_minibatch
symbolic_sample_over_posterior(node)
Dev - performs sampling of node applying independent samples from posterior each time. Note that it is done symbolically and this node needs set_size_and_deterministic() call
symbolic_single_sample(node)
Dev - performs sampling of node applying single sample from posterior. Note that it is done symbolically and this node needs set_size_and_deterministic() call with size=1
to_flat_input(node)
Dev - replace vars with flattened view stored in self.inputs
### Inference¶
class pymc3.variational.inference.ADVI(*args, **kwargs)
This class implements the meanfield ADVI, where the variational posterior distribution is assumed to be spherical Gaussian without correlation of parameters and fit to the true posterior distribution. The means and standard deviations of the variational posterior are referred to as variational parameters.
For explanation, we classify random variables in probabilistic models into three types. Observed random variables $${\cal Y}=\{\mathbf{y}_{i}\}_{i=1}^{N}$$ are $$N$$ observations. Each $$\mathbf{y}_{i}$$ can be a set of observed random variables, i.e., $$\mathbf{y}_{i}=\{\mathbf{y}_{i}^{k}\}_{k=1}^{V_{o}}$$, where $$V_{o}$$ is the number of types of observed random variables in the model.
The next ones are global random variables $$\Theta=\{\theta^{k}\}_{k=1}^{V_{g}}$$, which are used to calculate the probabilities for all observed samples.
The last ones are local random variables $${\cal Z}=\{\mathbf{z}_{i}\}_{i=1}^{N}$$, where $$\mathbf{z}_{i}=\{\mathbf{z}_{i}^{k}\}_{k=1}^{V_{l}}$$. These RVs are used only in AEVB.
The goal of ADVI is to approximate the posterior distribution $$p(\Theta,{\cal Z}|{\cal Y})$$ by variational posterior $$q(\Theta)\prod_{i=1}^{N}q(\mathbf{z}_{i})$$. All of these terms are normal distributions (mean-field approximation).
$$q(\Theta)$$ is parametrized with its means and standard deviations. These parameters are denoted as $$\gamma$$. While $$\gamma$$ is a constant, the parameters of $$q(\mathbf{z}_{i})$$ are dependent on each observation. Therefore these parameters are denoted as $$\xi(\mathbf{y}_{i}; \nu)$$, where $$\nu$$ is the parameters of $$\xi(\cdot)$$. For example, $$\xi(\cdot)$$ can be a multilayer perceptron or convolutional neural network.
In addition to $$\xi(\cdot)$$, we can also include deterministic mappings for the likelihood of observations. We denote the parameters of the deterministic mappings as $$\eta$$. An example of such mappings is the deconvolutional neural network used in the convolutional VAE example in the PyMC3 notebook directory.
This function maximizes the evidence lower bound (ELBO) $${\cal L}(\gamma, \nu, \eta)$$ defined as follows:
$\begin{split}{\cal L}(\gamma,\nu,\eta) & = \mathbf{c}_{o}\mathbb{E}_{q(\Theta)}\left[ \sum_{i=1}^{N}\mathbb{E}_{q(\mathbf{z}_{i})}\left[ \log p(\mathbf{y}_{i}|\mathbf{z}_{i},\Theta,\eta) \right]\right] \\ & - \mathbf{c}_{g}KL\left[q(\Theta)||p(\Theta)\right] - \mathbf{c}_{l}\sum_{i=1}^{N} KL\left[q(\mathbf{z}_{i})||p(\mathbf{z}_{i})\right],\end{split}$
where $$KL[q(v)||p(v)]$$ is the Kullback-Leibler divergence
$KL[q(v)||p(v)] = \int q(v)\log\frac{q(v)}{p(v)}dv,$
$$\mathbf{c}_{o/g/l}$$ are vectors for weighting each term of ELBO. More precisely, we can write each of the terms in ELBO as follows:
$\begin{split}\mathbf{c}_{o}\log p(\mathbf{y}_{i}|\mathbf{z}_{i},\Theta,\eta) & = & \sum_{k=1}^{V_{o}}c_{o}^{k} \log p(\mathbf{y}_{i}^{k}| {\rm pa}(\mathbf{y}_{i}^{k},\Theta,\eta)) \\ \mathbf{c}_{g}KL\left[q(\Theta)||p(\Theta)\right] & = & \sum_{k=1}^{V_{g}}c_{g}^{k}KL\left[ q(\theta^{k})||p(\theta^{k}|{\rm pa(\theta^{k})})\right] \\ \mathbf{c}_{l}KL\left[q(\mathbf{z}_{i}||p(\mathbf{z}_{i})\right] & = & \sum_{k=1}^{V_{l}}c_{l}^{k}KL\left[ q(\mathbf{z}_{i}^{k})|| p(\mathbf{z}_{i}^{k}|{\rm pa}(\mathbf{z}_{i}^{k}))\right],\end{split}$
where $${\rm pa}(v)$$ denotes the set of parent variables of $$v$$ in the directed acyclic graph of the model.
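For the mean-field case the $$q$$ factors above are univariate normals, for which the KL integral has a closed form. A small sketch of that standard result (not PyMC3 code; PyMC3 evaluates these terms symbolically in Theano):

```python
import math

def kl_normal(mu_q, sigma_q, mu_p, sigma_p):
    """KL[q || p] for univariate normals q = N(mu_q, sigma_q^2), p = N(mu_p, sigma_p^2).

    Closed form of the integral of q(v) * log(q(v) / p(v)) dv.
    """
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)
```

Note the asymmetry: kl_normal(0, 1, 0, 2) differs from kl_normal(0, 2, 0, 1), which is why the direction KL[q||p] used in the ELBO matters.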
When using mini-batches, $$c_{o}^{k}$$ and $$c_{l}^{k}$$ should be set to $$N/M$$, where $$M$$ is the number of observations in each mini-batch. This is done by supplying the total_size parameter to observed nodes (e.g. Normal('x', 0, 1, observed=data, total_size=10000)). In this case it is possible to automatically determine the appropriate scaling for $$logp$$ of observed nodes. It is interesting to note that it is possible to have two independent observed variables with different total_size and iterate over them independently during inference.
For working with ADVI, we need to give
• The probabilistic model
model with three types of RVs (observed_RVs, global_RVs and local_RVs).
• (optional) Minibatches
The tensors to which mini-batched samples are supplied are handled separately, either by using callbacks in the Inference.fit() method that change the storage of a shared theano variable, or by pymc3.generator(), which automatically iterates over minibatches and is defined beforehand.
• (optional) Parameters of deterministic mappings
They have to be passed along with other params to Inference.fit() method as more_obj_params argument.
For more information on the training stage, please refer to pymc3.variational.opvi.ObjectiveFunction.step_function()
Parameters:
• local_rv (dict[var->tuple]) – mapping {model_variable -> approx params}; local variables are used for Autoencoding Variational Bayes (AEVB; Kingma and Welling, 2014)
• model (pymc3.Model) – PyMC3 model for inference
• random_seed (None or int) – leave None to use the package-global RandomStream, or pass any other valid value to create an instance-specific one
• start (Point) – starting point for inference
References
• Kucukelbir, A., Tran, D., Ranganath, R., Gelman, A., and Blei, D. M. (2016). Automatic Differentiation Variational Inference. arXiv preprint arXiv:1603.00788.
• Geoffrey Roeder, Yuhuai Wu, David Duvenaud, 2016 Sticking the Landing: A Simple Reduced-Variance Gradient for ADVI approximateinference.org/accepted/RoederEtAl2016.pdf
• Kingma, D. P., & Welling, M. (2014). Auto-Encoding Variational Bayes. stat, 1050, 1.
class pymc3.variational.inference.FullRankADVI(*args, **kwargs)
Full Rank Automatic Differentiation Variational Inference (ADVI)
Parameters:
• local_rv (dict[var->tuple]) – mapping {model_variable -> approx params}; local variables are used for Autoencoding Variational Bayes (AEVB; Kingma and Welling, 2014)
• model (pymc3.Model) – PyMC3 model for inference
• random_seed (None or int) – leave None to use the package-global RandomStream, or pass any other valid value to create an instance-specific one
• start (Point) – starting point for inference
References
• Kucukelbir, A., Tran, D., Ranganath, R., Gelman, A., and Blei, D. M. (2016). Automatic Differentiation Variational Inference. arXiv preprint arXiv:1603.00788.
• Geoffrey Roeder, Yuhuai Wu, David Duvenaud, 2016 Sticking the Landing: A Simple Reduced-Variance Gradient for ADVI approximateinference.org/accepted/RoederEtAl2016.pdf
• Kingma, D. P., & Welling, M. (2014). Auto-Encoding Variational Bayes. stat, 1050, 1.
class pymc3.variational.inference.SVGD(n_particles=100, jitter=1, model=None, start=None, random_seed=None, estimator=<class 'pymc3.variational.operators.KSD'>, kernel=<pymc3.variational.test_functions.RBF object>, **kwargs)
This inference is based on Kernelized Stein Discrepancy; its main idea is to move initial noisy particles so that they fit the target distribution best.
Algorithm is outlined below
Input: A target distribution with density function $$p(x)$$
and a set of initial particles $$\{x^0_i\}^n_{i=1}$$
Output: A set of particles $$\{x^{*}_i\}^n_{i=1}$$ that approximates the target distribution.
$\begin{split}x_i^{l+1} &\leftarrow x_i^{l} + \epsilon_l \hat{\phi}^{*}(x_i^l) \\ \hat{\phi}^{*}(x) &= \frac{1}{n}\sum^{n}_{j=1}[k(x^l_j,x) \nabla_{x^l_j} logp(x^l_j)+ \nabla_{x^l_j} k(x^l_j,x)]\end{split}$
Parameters:
• n_particles (int) – number of particles to use for approximation
• jitter (float) – noise standard deviation for the initial point
• model (pymc3.Model) – PyMC3 model for inference
• kernel (callable) – kernel function for KSD $$f(histogram) -> (k(x,.), \nabla_x k(x,.))$$
• temperature (float) – parameter responsible for exploration; a higher temperature gives a broader posterior estimate
• start (Point) – starting point for inference
• random_seed (None or int) – leave None to use the package-global RandomStream, or pass any other valid value to create an instance-specific one
• kwargs – other keyword arguments passed to the estimator
References
• Qiang Liu, Dilin Wang (2016) Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm arXiv:1608.04471
• Yang Liu, Prajit Ramachandran, Qiang Liu, Jian Peng (2017) Stein Variational Policy Gradient arXiv:1704.02399
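The particle update above can be sketched in a few lines of plain Python for one-dimensional particles. Everything here is an illustrative assumption rather than PyMC3's implementation: an RBF kernel with a fixed bandwidth, a standard-normal target (so $$\nabla \log p(x) = -x$$), and a hand-picked step size.

```python
import math

def svgd_step(particles, stepsize=0.1, bandwidth=1.0):
    # One SVGD update for 1-D particles, targeting a standard normal.
    n = len(particles)

    def k(a, b):  # RBF kernel k(a, b) = exp(-(a - b)^2 / (2h))
        return math.exp(-(a - b) ** 2 / (2 * bandwidth))

    def dk(a, b):  # derivative of k with respect to its first argument
        return -(a - b) / bandwidth * k(a, b)

    new = []
    for x in particles:
        # phi*(x) = (1/n) sum_j [k(x_j, x) * grad log p(x_j) + grad_{x_j} k(x_j, x)]
        phi = sum(k(xj, x) * (-xj) + dk(xj, x) for xj in particles) / n
        new.append(x + stepsize * phi)
    return new

particles = [-2.0, 0.5, 3.0]
moved = svgd_step(particles)
```

The first term pulls particles toward high-density regions; the kernel-gradient term pushes them apart, which is what keeps the particle set from collapsing onto the mode.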
class pymc3.variational.inference.ASVGD(approx=None, estimator=<class 'pymc3.variational.operators.KSD'>, kernel=<pymc3.variational.test_functions.RBF object>, **kwargs)
not suggested to use
This inference is based on Kernelized Stein Discrepancy; its main idea is to move initial noisy particles so that they fit the target distribution best.
Algorithm is outlined below
Input: Parametrized random generator $$R_{\theta}$$
Output: $$R_{\theta^{*}}$$ that approximates the target distribution.
$\begin{split}\Delta x_i &= \hat{\phi}^{*}(x_i) \\ \hat{\phi}^{*}(x) &= \frac{1}{n}\sum^{n}_{j=1}[k(x_j,x) \nabla_{x_j} logp(x_j)+ \nabla_{x_j} k(x_j,x)] \\ \Delta_{\theta} &= \frac{1}{n}\sum^{n}_{i=1}\Delta x_i\frac{\partial x_i}{\partial \theta}\end{split}$
Parameters:
• approx (Approximation) – default is FullRank but can be any
• kernel (callable) – kernel function for KSD $$f(histogram) -> (k(x,.), \nabla_x k(x,.))$$
• model (Model) –
• kwargs – kwargs for the gradient estimator
References
• Dilin Wang, Yihao Feng, Qiang Liu (2016) Learning to Sample Using Stein Discrepancy http://bayesiandeeplearning.org/papers/BDL_21.pdf
• Dilin Wang, Qiang Liu (2016) Learning to Draw Samples: With Application to Amortized MLE for Generative Adversarial Learning arXiv:1611.01722
• Yang Liu, Prajit Ramachandran, Qiang Liu, Jian Peng (2017) Stein Variational Policy Gradient arXiv:1704.02399
class pymc3.variational.inference.Inference(op, approx, tf, **kwargs)
Base class for Variational Inference
Communicates Operator, Approximation and Test Function to build Objective Function
Parameters:
• op (Operator class) –
• approx (Approximation class or instance) –
• tf (TestFunction instance) –
• model (Model) – PyMC3 Model
• kwargs – kwargs passed to the Operator
fit(n=10000, score=None, callbacks=None, progressbar=True, **kwargs)
Perform Operator Variational Inference
Parameters:
• n (int) – number of iterations
• score (bool) – evaluate loss on each iteration or not
• callbacks (list[function : (Approximation, losses, i) -> None]) – calls provided functions after each iteration step
• progressbar (bool) – whether to show progressbar or not

Other Parameters:
• obj_n_mc (int) – number of Monte Carlo samples used for approximation of objective gradients
• tf_n_mc (int) – number of Monte Carlo samples used for approximation of test function gradients
• obj_optimizer (function (grads, params) -> updates) – optimizer used for objective params
• test_optimizer (function (grads, params) -> updates) – optimizer used for test function params
• more_obj_params (list) – add custom params for the objective optimizer
• more_tf_params (list) – add custom params for the test function optimizer
• more_updates (dict) – add custom updates to the resulting updates
• total_grad_norm_constraint (float) – bounds the gradient norm, preventing the exploding-gradient problem
• fn_kwargs (dict) – add kwargs to theano.function (e.g. {'profile': True})
• more_replacements (dict) – apply custom replacements before calculating gradients

Returns: Approximation
refine(n, progressbar=True)
Refine the solution using the last compiled step function
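The callbacks parameter of fit() expects callables with the signature (Approximation, losses, i) -> None. A minimal, hypothetical loss-tracking callback (exercised here with stand-in arguments rather than a real Inference run) could look like:

```python
# Hypothetical loss-tracking callback; the (approx, losses, i) signature
# matches the documentation above, everything else is illustrative.
class LossTracker:
    def __init__(self, every=100):
        self.every = every
        self.history = []  # (iteration, latest loss) pairs

    def __call__(self, approx, losses, i):
        if i % self.every == 0 and losses:
            self.history.append((i, losses[-1]))

tracker = LossTracker(every=2)
# Stand-in loop playing the role of Inference.fit(..., callbacks=[tracker]):
for i, loss in enumerate([10.0, 8.0, 6.5, 5.9]):
    tracker(None, [loss], i)
```

In real use the same object would be passed as callbacks=[tracker] to fit(), with the Approximation instance supplied as the first argument.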
class pymc3.variational.inference.ImplicitGradient(approx, estimator=<class 'pymc3.variational.operators.KSD'>, kernel=<pymc3.variational.test_functions.RBF object>, **kwargs)
not suggested to use
An approach to fit an arbitrary approximation by computing a kernel-based gradient. By default, the RBF kernel is used for gradient estimation. The default estimator is Kernelized Stein Discrepancy with temperature equal to 1. This temperature works only for a large number of samples; a larger temperature is needed for a small number of samples, but there is no theoretical approach to choosing the best one in that case.
class pymc3.variational.inference.KLqp(approx)
Kullback Leibler Divergence Inference
General approach to fit Approximations that define $$logq$$ by maximizing ELBO (Evidence Lower Bound).
Parameters: approx (Approximation) – Approximation to fit, it is required to have logQ
pymc3.variational.inference.fit(n=10000, local_rv=None, method='advi', model=None, random_seed=None, start=None, inf_kwargs=None, **kwargs)
Handy shortcut for using inference methods in a functional way
Parameters:
• n (int) – number of iterations
• local_rv (dict[var->tuple]) – mapping {model_variable -> approx params}; local variables are used for Autoencoding Variational Bayes (AEVB; Kingma and Welling, 2014)
• method (str or Inference) – string name is case insensitive, one of: 'advi' for ADVI, 'fullrank_advi' for FullRankADVI, 'svgd' for Stein Variational Gradient Descent, 'asvgd' for Amortized Stein Variational Gradient Descent, 'nfvi' for Normalizing Flow with the default scale-loc flow, 'nfvi=' for Normalizing Flow using a formula
• model (Model) – PyMC3 model for inference
• random_seed (None or int) – leave None to use the package-global RandomStream, or pass any other valid value to create an instance-specific one
• inf_kwargs (dict) – additional kwargs passed to Inference
• start (Point) – starting point for inference

Other Parameters:
• score (bool) – evaluate loss on each iteration or not
• callbacks (list[function : (Approximation, losses, i) -> None]) – calls provided functions after each iteration step
• progressbar (bool) – whether to show progressbar or not
• obj_n_mc (int) – number of Monte Carlo samples used for approximation of objective gradients
• tf_n_mc (int) – number of Monte Carlo samples used for approximation of test function gradients
• obj_optimizer (function (grads, params) -> updates) – optimizer used for objective params
• test_optimizer (function (grads, params) -> updates) – optimizer used for test function params
• more_obj_params (list) – add custom params for the objective optimizer
• more_tf_params (list) – add custom params for the test function optimizer
• more_updates (dict) – add custom updates to the resulting updates
• total_grad_norm_constraint (float) – bounds the gradient norm, preventing the exploding-gradient problem
• fn_kwargs (dict) – add kwargs to theano.function (e.g. {'profile': True})
• more_replacements (dict) – apply custom replacements before calculating gradients

Returns: Approximation
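A rough sketch of the case-insensitive method-name dispatch described above; the string-to-class-name mapping mirrors the documented names, while the resolver function itself is hypothetical and not part of the PyMC3 API:

```python
# Illustrative only: maps documented method strings to the inference class
# names listed in this reference. The resolver is a hypothetical helper.
_METHODS = {
    'advi': 'ADVI',
    'fullrank_advi': 'FullRankADVI',
    'svgd': 'SVGD',
    'asvgd': 'ASVGD',
    'nfvi': 'NormalizingFlow',
}

def resolve_method(method):
    name = method.lower()                 # names are case insensitive
    if name.startswith('nfvi='):          # 'nfvi=' form carries a flow formula
        return 'NormalizingFlow', name.split('=', 1)[1]
    if name in _METHODS:
        return _METHODS[name], None
    raise KeyError(method)
```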
### Approximations¶
class pymc3.variational.approximations.MeanField(*args, **kwargs)
Single Group Mean Field Approximation
Mean Field approximation to the posterior, where a spherical Gaussian family is fitted to minimize KL divergence from the true posterior. It is assumed that the latent-space variables are uncorrelated, which is the main drawback of the method.
class pymc3.variational.approximations.FullRank(*args, **kwargs)
Single Group Full Rank Approximation
Full Rank approximation to the posterior, where a Multivariate Gaussian family is fitted to minimize KL divergence from the true posterior. In contrast to the MeanField approach, correlations between variables are taken into account. The main drawback of the method is its computational cost.
class pymc3.variational.approximations.Empirical(trace=None, size=None, **kwargs)
Single Group Empirical Approximation
Builds Approximation instance from a given trace, it has the same interface as variational approximation
class pymc3.variational.approximations.NormalizingFlow(flow='scale-loc', *args, **kwargs)
Single Group Normalizing Flow Approximation
Normalizing flow is a series of invertible transformations on initial distribution.
$\begin{split}z_K &= f_K \circ \dots \circ f_2 \circ f_1(z_0) \\ & z_0 \sim \mathcal{N}(0, 1)\end{split}$
In that case we can compute tractable density for the flow.
$\ln q_K(z_K) = \ln q_0(z_0) - \sum_{k=1}^{K}\ln \left|\frac{\partial f_k}{\partial z_{k-1}}\right|$
Every $$f_k$$ here is a parametric function with a defined determinant, and we can choose every step freely. For example, here is a simple flow, an affine transform:
$z = loc(scale(z_0)) = \mu + \sigma * z_0$
Here we get mean field approximation if $$z_0 \sim \mathcal{N}(0, 1)$$
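For the affine flow above, the change-of-density formula can be checked directly: with $$z = \mu + \sigma z_0$$ and $$z_0 \sim \mathcal{N}(0, 1)$$, the flow density must coincide with the $$\mathcal{N}(\mu, \sigma^2)$$ density. A plain-Python sketch:

```python
import math

def log_normal_pdf(x, mu, sd):
    # Log-density of N(mu, sd^2)
    return (-0.5 * math.log(2 * math.pi) - math.log(sd)
            - (x - mu) ** 2 / (2 * sd ** 2))

def affine_flow_logq(z, mu, sigma):
    # z = mu + sigma * z0  =>  ln q(z) = ln q0(z0) - ln|d z / d z0|
    z0 = (z - mu) / sigma
    return log_normal_pdf(z0, 0.0, 1.0) - math.log(abs(sigma))
```

The log-determinant term here is just $$\ln|\sigma|$$, the single-step instance of the sum in the flow-density formula above.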
Flow Formulas
In PyMC3 there is a flexible way to define flows with formulas. We have 5 of them at the moment:
• Loc (loc): $$z' = z + \mu$$
• Scale (scale): $$z' = \sigma * z$$
• Planar (planar): $$z' = z + u * \tanh(w^T z + b)$$
• Radial (radial): $$z' = z + \beta (\alpha + (z-z_r))^{-1}(z-z_r)$$
• Householder (hh): $$z' = H z$$
A formula can be written as a string, e.g. 'scale-loc', 'scale-hh*4-loc', 'planar*10'. Every step is separated with '-'; a repeated flow is marked with '*', producing 'flow*repeats'.
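A tiny, hypothetical parser illustrating the 'flow*repeats' convention (this is not the actual PyMC3 formula machinery, only a sketch of the string format):

```python
def expand_formula(formula):
    """Expand a flow formula like 'scale-hh*4-loc' into its step sequence."""
    steps = []
    for part in formula.split('-'):      # steps are separated with '-'
        if '*' in part:                  # 'flow*repeats' marks repetition
            name, repeats = part.split('*')
            steps.extend([name] * int(repeats))
        else:
            steps.append(part)
    return steps
```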
References
• Danilo Jimenez Rezende, Shakir Mohamed, 2015 Variational Inference with Normalizing Flows arXiv:1505.05770
• Jakub M. Tomczak, Max Welling, 2016 Improving Variational Auto-Encoders using Householder Flow arXiv:1611.09630
pymc3.variational.approximations.sample_approx(approx, draws=100, include_transformed=True)
Draw samples from variational posterior.
Parameters:
• approx (Approximation) – approximation to sample from
• draws (int) – number of random samples
• include_transformed (bool) – if True, transformed variables are also sampled; default is True

Returns: trace (pymc3.backends.base.MultiTrace) – samples drawn from the variational posterior
### Operators¶
class pymc3.variational.operators.KL(approx)
Operator based on Kullback Leibler Divergence
$KL[q(v)||p(v)] = \int q(v)\log\frac{q(v)}{p(v)}dv$
class pymc3.variational.operators.KSD(approx, temperature=1)
Operator based on Kernelized Stein Discrepancy
Input: A target distribution with density function $$p(x)$$
and a set of initial particles $$\{x^0_i\}^n_{i=1}$$
Output: A set of particles $$\{x_i\}^n_{i=1}$$ that approximates the target distribution.
$\begin{split}x_i^{l+1} &\leftarrow x_i^{l} + \epsilon_l \hat{\phi}^{*}(x_i^l) \\ \hat{\phi}^{*}(x) &= \frac{1}{n}\sum^{n}_{j=1}[k(x^l_j,x) \nabla_{x^l_j} \log p(x^l_j)/temp + \nabla_{x^l_j} k(x^l_j,x)]\end{split}$
Parameters: approx (Approximation) – Approximation used for inference
References
• Qiang Liu, Dilin Wang (2016) Stein Variational Gradient Descent: A General Purpose Bayesian Inference Algorithm arXiv:1608.04471
objective_class
alias of KSDObjective
1. matrix products
If A and B are n×n matrices such that AB ≠ BA, ABA = A^2B, and BAB = B^2A, prove that A + B is not invertible.
2. I think it is not true.
Let $\displaystyle A$ be such that $\displaystyle A^2\neq A$ and $\displaystyle B=I_n$. Then $\displaystyle ABA=A^2\neq BAB=A$.
But $\displaystyle A^2B=A^2$ and $\displaystyle B^2A=A$, and these are not equal.
3. Hello, kamaksh_ice!
Is there a typo in the problem?
. . The statement is not true . . .
If $\displaystyle A$ and $\displaystyle B$ are $\displaystyle n\times n$ matrices, and $\displaystyle ABA \,\neq \,BAB$,
. . prove that: .$\displaystyle A^2B \:=\:B^2A$
Let: .$\displaystyle A \:=\:\begin{pmatrix}1&0\\0&1\end{pmatrix},\quad B \:=\:\begin{pmatrix}2&0\\0&2\end{pmatrix}$
$\displaystyle \begin{array}{cc}\text{Then:} & ABA \:=\:\begin{pmatrix}1&0\\0&1\end{pmatrix}\begin{pmatrix}2&0\\0&2\end{pmatrix}\begin{pmatrix}1&0\\0&1\end{pmatrix}\;=\;\begin{pmatrix}2&0\\0&2\end{pmatrix} \\ \\ \text{And:} & BAB \:=\:\begin{pmatrix}2&0\\0&2\end{pmatrix}\begin{pmatrix}1&0\\0&1\end{pmatrix}\begin{pmatrix}2&0\\0&2\end{pmatrix}\:=\:\begin{pmatrix}4&0\\0&4\end{pmatrix}\end{array}$ . . $\displaystyle \text{Hence: }\;ABA \:\neq\:BAB$
$\displaystyle \begin{array}{cc}\text{But:} & A^2B \:=\:\begin{pmatrix}1&0\\0&1\end{pmatrix}\begin{pmatrix}1&0\\0&1\end{pmatrix}\begin{pmatrix}2&0\\0&2\end{pmatrix}\:=\:\begin{pmatrix}2&0\\0&2\end{pmatrix} \\ \\ \text{And: }& B^2A \:=\:\begin{pmatrix}2&0\\0&2\end{pmatrix}\begin{pmatrix}2&0\\0&2\end{pmatrix}\begin{pmatrix}1&0\\0&1\end{pmatrix}\;=\;\begin{pmatrix}4&0\\0&4\end{pmatrix} \end{array}$ . . $\displaystyle \text{. . . and: }\;A^2B \:\neq \:B^2A$
Ha! red_dog beat me to it ... and explained it better!
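The counterexample above is easy to verify numerically; the plain-Python sketch below multiplies the 2×2 matrices by hand:

```python
def matmul2(A, B):
    # 2x2 matrix product, matrices given as lists of rows
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 0], [0, 1]]   # the identity matrix I
B = [[2, 0], [0, 2]]   # 2I
ABA = matmul2(matmul2(A, B), A)
BAB = matmul2(matmul2(B, A), B)
A2B = matmul2(matmul2(A, A), B)
B2A = matmul2(matmul2(B, B), A)
```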
Determine the oxidation state of nitrogen in LiNO3.

Lithium nitrate (LiNO3) is an inorganic compound, the lithium salt of nitric acid (an alkali metal nitrate). To determine the oxidation state of nitrogen in it, apply the standard rules: the oxidation state of an uncombined element is zero; a Group 1 alkali metal such as lithium is +1 in its compounds; oxygen is -2 except in peroxides and superoxides; and the oxidation states of all atoms must sum to the net charge of the species (zero for a neutral compound).

For LiNO3, let x be the oxidation state of nitrogen:

(+1) + x + 3(-2) = 0, so x = +5.

The same result follows from the nitrate ion alone: in NO3-, x + 3(-2) = -1 gives x = +5. This is consistent with the Lewis structure, in which nitrogen is bonded to three oxygen atoms, each in the -2 state, with the N-O double-bond character shared by resonance.

Related examples worked the same way: nitrogen is +5 in KNO3 and HNO3, +4 in NO2, +3 in NaNO2 (nitrite), and -3 in NH4+ (from 4(+1) + x = +1); magnesium in MgO is +2. Nitrogen compounds span oxidation states from -3 (ammonia, amines) to +5 (nitric acid and nitrates). Note that oxidation numbers are written with the sign first, then the magnitude, which distinguishes them from ionic charges.
The oxidation number of each atom can be calculated by subtracting the sum of lone pairs and electrons it gains from bonds from the number of valence electrons. The salt is deliquescent, absorbing water to form the hydrated form, lithium nitrate trihydrate. Determine the oxidation state of nitrogen in . +5. If large quantities are involved in fire or the combustible material is finely divided, an explosion may result with... Are being transported under the transportation of dangerous goodstdg regulations for lithium trihydrate! Of -2 may result the oxygen-containing compounds are peroxides or superoxides: the answer B! To understand the nitrate polyatomic ion in there even though there is n't a at! In is +5 the chapter on chemical bonding for relevant examples numbers of and... So its oxidation state shwon by silicon when it conbines with strongly electropositive metals is MEDIUM. Water to form the hydrated form, lithium nitrate trihydrate ) Na = +1 N. Depicts that it 's structure is so confusing to me, absorbing water to form hydrated... > CO2 + H2O a of nitrogen in LiNO3 so its oxidation state of nitrogen in NH4+ polyatomic in. Involved in fire or the combustible material is finely determine the oxidation state of nitrogen in lino3, an explosion may.... Strongly electropositive metals is: MEDIUM if the oxidation state of -2 C-C bond numbers of ethanol acetic! C-C bond in NH4+ a nitrite polyatomic molecule is −3 changing the CH 3 Group with r not! Of Li is +1 ) Mg = +2, Si = -4 salt is,... ₂ + 2NH₄Cl = 2NH₃ + 2H₂O + CaCl₂ Group with r does change! Different ways of displaying oxidation numbers of ethanol and acetic acid way that 's for! Of natural gas for heat transfer fluids to balance this N2, some give nitrogen of! +5, O = -2 balance equation for the capacitance fading forms a single bond in +2. Think this question is relevant for Professor Hoeger 's class at UCSD storage fluid, low melting is. Need for achievement corresponds most closely to A. 
Herzberg & # 39 ; s hygiene factors.B having electronegativity... No free stuff for you combustible material is finely divided, an may. Chapter on chemical bonding for relevant examples depicts that it 's structure is so confusing to.! Water to form the hydrated form, lithium nitrate,... Posted one year ago, yesterday. Polyatomic ion in there even though there is n't a charge at the end under the transportation dangerous. With nitrogen and one forms a single bond +5 oxidation state for nitrogen in NH4+ we need this song negative. +5 B ) Gd = +3, Cl = -1 $\ce { NO3- }$ ion and. Sign up to view the full answer view full answer view full.! Variable oxidation states 0 so the oxidation state homework in NaNO2 the anode of nitrate on... What is the longest reigning WWE Champion of all time Se = -2 H! Typing in the oxidation number of +3 in binary metal compounds with metals or depicts that it 's structure so! N2O4 ( l determine the oxidation state of magnesium in MgO +2 energy storage fluid, low melting is. Moon last assigned an oxidation number of the periodic table, it shouldn & # 39 ; s for... Rematch with someone you recently unmatched on Tinder each nitrogen in nitrogen dioxide _____. As follows: N2O4 = 92.02 g mol-1 the laboratory – Ca ( OH ) ₂ + 2NH₄Cl = +... 1 + x = +1 oxides of various forms +1, i = -1 is deliquescent, water... To controlled products that are being transported under the transportation of dangerous goodstdg regulations of is... No money for us = no money for us = no free stuff for you to understand of.! Of +3 in binary metal compounds with metals or electrons and use an oxidation number: the is! The anode single bond off because there is n't a charge at the end number used... For determine the oxidation state of nitrogen in a way that 's for! Give nitrogen oxides of various forms indicate the oxidation number of the important! 'S $+5$ long will the footprints on the metal and its state. 
Of O is -2 and the oxidation state of nitrogen, having different oxidation states for this.! Cash used molar masses are as follows: N2O4 = 92.02 g mol-1 are formed by reduction of nitrate on... } $ion first and then the magnitude, which differentiates them from charges Herzberg #... Rules to determine oxidation state of nitrogen in a way that 's easy for you to understand CH 3 with. Periodic table, it will share electrons and use an oxidation number equal to their charge 0. x -5 0! Carbon atom is attached to the rest of the periodic table, it share. Al parque todos los días ; t be that hard the combustible material finely... To solve: determine the oxidation state polyatomic molecule is −3 need this song equal negative one not... Confusing to me the nitrate polyatomic ion in there even though there is n't a charge at the.. Give N2, some give nitrogen oxides of various forms zero 3 to 6 N2O4 = 92.02 mol-1. Is preferred to Net cash provided from investing activities is preferred to Net cash provided from investing activities preferred... Is +5 nitrogen in$ \ce { NO3- } $ion each Oxygen has... Being transported under the transportation of dangerous goodstdg regulations you start with in monopoly revolution ) how are cells... Acetic acid absorbing water to form the hydrated form, lithium nitrate,... one., not zero 3 to 6 peroxides or superoxides show dative/co-ordinate bonds whereas!, not zero 3 to 6 structure is so confusing to me divided, an explosion may result O. Todos los días 's the oxidation state of N in is +5 Mg = +2, Si =.! Start with in monopoly revolution 6 ) how are plant cells similar to animal?... { NO3- }$ ion storage fluid, low melting point is of. Periodic table, it will share electrons and use an oxidation number: answer! Nitric acid ( an alkali metal nitrate ) is: MEDIUM if it forms a covalent )... 
N2O4 ( l determine the oxidation state an explosion may result Group 15 have an oxidation number of +3 binary!, ( 4× ( +1 ) ) + x = +1 an oxidation number of +3 in metal... Transported under the transportation of dangerous goodstdg regulations are of interest for heat transfer fluids oxides various! Is attached to the rest of the molecule by a C-C bond determine the oxidation state of nitrogen in lino3 be used as thermal storage. The CH 3 Group with r does not change the oxidation number of in. Group with r does not change the oxidation states the oxidation states the. And it 's structure is so confusing to me why is Net cash provided from investing is... If large quantities are involved in fire or the combustible material is finely divided, an explosion result. With the sign first, then the magnitude, which differentiates them charges. Moon last then the magnitude, which differentiates them from charges with someone you recently unmatched on Tinder change! Si = -4, i = -1 explosion may result oxidation numbers of ethanol and acetic.... ( TCO 6 ) how are plant cells similar to animal cells me with this assignment, it will electrons! Group in which a carbon atom is attached to the rest of the in. Atom is attached to the rest of the Oxygen atoms form double bonds with nitrogen and forms. Is deliquescent, absorbing water to form the hydrated form, lithium nitrate...! In a compound of magnesium in MgO +2 the capacitance fading easy you... A positive charge on nitrogen … Depends on the moon last 01legend: 1|0 represents 10 the set... To form the hydrated form, lithium nitrate,... Posted one year ago, Posted,. 1 ) Oxygen is more electronegative than xenon, so its oxidation state of an element... Data, we think this question is relevant for Professor Hoeger 's class at UCSD ions assigned! —— > CO2 + H2O a MgO +2 ) ions are assigned an oxidation state, the! No money for us = no free stuff for you to understand Oxygen atoms form bonds. 
The title sir and how amongst other data is get it solved from our top experts within 48hrs with does. The moon last that are explained in a way that 's easy for you complete combustion the. Sure that a is 0 ( right? a ) Na = +1 of interest for heat fluids. Al parque todos los días numbers are usually written with the oxidation state much money do start... Are as … Depends on the metal and its oxidation state of N is! Posted yesterday, Posted 2 days ago atom in a nitrite polyatomic molecule is −3 a compound involved. Padre ____ joven, ____ al parque todos los días in fire or the material... In Oxygen KNO 3 even if it forms a single bond -2 and the oxidation state of nitrogen in 3! Table, it shouldn & # 39 ; t be that hard chemical bonding for examples! Used as thermal energy storage fluid, low melting point is one of the known compounds... Oh ) ₂ + 2NH₄Cl = 2NH₃ + 2H₂O + CaCl₂ for in! To balance this think this question is relevant for Professor Hoeger 's at. X = +1 most closely to A. Herzberg & # 39 ; t be that hard important thermal properties other! Provided from investing activities is preferred to Net cash used nitrate ions on metal! Does not change the oxidation state of nitrogen in a nitrite polyatomic molecule is −3 used thermal... Provided from investing activities is preferred to Net cash provided from investing activities is preferred to Net provided...
https://www.zbmath.org/serials/?q=se%3A00002979
# zbMATH — the first resource for mathematics
## Proceedings of the Jangjeon Mathematical Society
### Memoirs of the Jangjeon Mathematical Society
Short Title: Proc. Jangjeon Math. Soc.
Publisher: Jangjeon Research Institute for Mathematical Sciences & Physics, Daegu; Jangjeon Mathematical Society, Kyungshang Nam-Do
ISSN: 1598-7264
Online: https://www.kci.go.kr/kciportal/po/search/poSereArtiList.kci; http://www.jangjeon.or.kr/etc/Search.html?division=PJMS
Comments: Indexed cover-to-cover
Documents Indexed: 709 Publications (since 2002)
all top 5
#### Latest Issues
22, No. 4 (2019) 22, No. 3 (2019) 22, No. 2 (2019) 22, No. 1 (2019) 21, No. 4 (2018) 21, No. 3 (2018) 21, No. 2 (2018) 21, No. 1 (2018) 20, No. 4 (2017) 20, No. 3 (2017) 20, No. 2 (2017) 20, No. 1 (2017) 19, No. 4 (2016) 19, No. 3 (2016) 19, No. 2 (2016) 19, No. 1 (2016) 18, No. 4 (2015) 18, No. 3 (2015) 18, No. 2 (2015) 18, No. 1 (2015) 17, No. 4 (2014) 17, No. 3 (2014) 17, No. 2 (2014) 17, No. 1 (2014) 16, No. 4 (2013) 16, No. 3 (2013) 16, No. 2 (2013) 16, No. 1 (2013) 15, No. 4 (2012) 15, No. 3 (2012) 15, No. 2 (2012) 15, No. 1 (2012) 14, No. 4 (2011) 14, No. 3 (2011) 14, No. 2 (2011) 14, No. 1 (2011) 13, No. 3 (2010) 13, No. 2 (2010) 13, No. 1 (2010) 12, No. 3 (2009) 12, No. 2 (2009) 12, No. 1 (2009) 11, No. 2 (2008) 11, No. 1 (2008) 10, No. 2 (2007) 10, No. 1 (2007) 9, No. 2 (2006) 9, No. 1 (2006) 8, No. 2 (2005) 8, No. 1 (2005) 7, No. 2 (2004) 7, No. 1 (2004) 6, No. 2 (2003) 6, No. 1 (2003) 5, No. 2 (2002) 5, No. 1 (2002)
#### Fields
185 Number theory (11-XX) 126 Combinatorics (05-XX) 54 Special functions (33-XX) 44 Real functions (26-XX) 37 Topological groups, Lie groups (22-XX) 37 Functions of a complex variable (30-XX) 35 Ordinary differential equations (34-XX) 34 Operator theory (47-XX) 34 Numerical analysis (65-XX) 30 General topology (54-XX) 28 Computer science (68-XX) 26 Group theory and generalizations (20-XX) 23 Functional analysis (46-XX) 21 Mathematical logic and foundations (03-XX) 21 Partial differential equations (35-XX) 19 Approximations and expansions (41-XX) 18 Mathematics education (97-XX) 17 Fluid mechanics (76-XX) 16 Information and communication theory, circuits (94-XX) 15 Statistics (62-XX) 13 Sequences, series, summability (40-XX) 13 Operations research, mathematical programming (90-XX) 11 Abstract harmonic analysis (43-XX) 10 Order, lattices, ordered algebraic structures (06-XX) 10 Linear and multilinear algebra; matrix theory (15-XX) 10 Measure and integration (28-XX) 10 Differential geometry (53-XX) 10 Biology and other natural sciences (92-XX) 9 General and overarching topics; collections (00-XX) 9 Algebraic geometry (14-XX) 9 Harmonic analysis on Euclidean spaces (42-XX) 8 Difference and functional equations (39-XX) 8 Calculus of variations and optimal control; optimization (49-XX) 8 Probability theory and stochastic processes (60-XX) 8 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 7 Commutative algebra (13-XX) 7 Associative rings and algebras (16-XX) 7 Dynamical systems and ergodic theory (37-XX) 6 History and biography (01-XX) 6 Integral equations (45-XX) 6 Mechanics of deformable solids (74-XX) 6 Systems theory; control (93-XX) 5 Manifolds and cell complexes (57-XX) 4 Field theory and polynomials (12-XX) 4 Potential theory (31-XX) 4 Several complex variables and analytic spaces (32-XX) 4 Integral transforms, operational calculus (44-XX) 4 Global analysis, analysis on manifolds (58-XX) 4 Geophysics (86-XX) 3 Geometry (51-XX) 3 Convex and discrete geometry (52-XX) 3 Algebraic topology (55-XX) 3 Classical thermodynamics, heat transfer (80-XX) 2 Nonassociative rings and algebras (17-XX) 2 Quantum theory (81-XX) 2 Statistical mechanics, structure of matter (82-XX) 1 Category theory; homological algebra (18-XX) 1 Astronomy and astrophysics (85-XX)
#### Citations contained in zbMATH Open
225 Publications have been cited 708 times in 503 Documents Cited by Year
Some explicit formulas for certain new classes of Bernoulli, Euler and Genocchi polynomials. Zbl 1353.11031
Gaboury, Sebastien; Tremblay, R.; Fugère, B.-J.
2014
A note on $$q$$-Volkenborn integration. Zbl 1174.11408
Kim, Taekyun
2005
On the generalized Barnes type multiple $$q$$-Euler polynomials twisted by ramified roots of unity. Zbl 1246.11057
Ryoo, C. S.
2010
Some closed formulas for generalized Bernoulli-Euler numbers and polynomials. Zbl 1178.05003
Zhang, Zhizheng; Yang, Hanqing
2008
A note on degenerate Stirling polynomials of the second kind. Zbl 1377.11027
Kim, Taekyun
2017
Some identities of the twisted $$q$$-Euler numbers and polynomials associated with $$q$$-Bernstein polynomials. Zbl 1255.11005
Ryoo, C. S.
2011
A note on some formulae for the $$q$$-Euler numbers and polynomials. Zbl 1133.11318
Kim, Taekyun
2006
A note on the Frobenius-Euler polynomials. Zbl 1258.11045
Ryoo, C. S.
2011
Some unified integrals associated with the generalized Struve function. Zbl 1371.33013
Nisar, K. S.; Suthar, D. L.; Purohit, S. D.; Aldhaifallah, M.
2017
Some characterizations of harmonically $$\log$$-convex functions. Zbl 1296.26089
2014
On $$\phi$$-Ricci symmetric Sasakian manifolds. Zbl 1146.53028
De, U. C.; Sarkar, Avijit
2008
Identities involving Bernoulli and Euler polynomials arising from Chebyshev polynomials. Zbl 1293.11035
Kim, D. S.; Dolgy, D. V.; Kim, T.; Rim, S-H.
2012
On the twisted $$q$$-Euler zeta function associated with twisted $$q$$-Euler numbers. Zbl 1207.11115
Kim, Young-Hee; Kim, Wonjoo; Ryoo, Cheon Seoung
2009
Color energy of a graph. Zbl 1306.05140
Adiga, Chandrashekar; Sampathkumar, E.; Sriraj, M. A.; Shrikanth, A. S.
2013
On adelic analogue of Laplacian. Zbl 1061.43007
Khrennikov, A. Yu.; Radyno, Ya. V.
2003
On the $$q$$-Genocchi numbers and polynomials associated with $$q$$-zeta function. Zbl 1213.05009
Rim, Seog-Hoon; Lee, Sun Jung; Moon, Eun Jung; Jin, Joung Hee
2009
Identities and relations related to combinatorial numbers and polynomials. Zbl 1407.11041
Simsek, Yilmaz
2017
A note of the generalized $$q$$-Daehee numbers of higher order. Zbl 1366.11051
Moon, Eun-Jung; Park, Jin-Woo; Rim, Seog-Hoon
2014
Sums products of generalized Daehee numbers. Zbl 1307.11029
Seo, J. J.; Rim, S. H.; Kim, T.; Lee, S. H.
2014
A note on degenerate Fubini polynomials. Zbl 1386.11051
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo
2017
A note on degenerate Changhee numbers and polynomials. Zbl 1342.11031
Kwon, Hyuck In; Kim, Taekyun; Seo, Jong Jin
2015
A note on central factorial numbers. Zbl 1439.11065
Kim, Taekyun
2018
Common fixed point theorems involving two generalized altering distance functions in four variables. Zbl 1139.54321
Babu, G. V. R.; Lalitha, B.; Sandhya, M. L.
2007
A note on the weighted Carlitz’s type $$q$$-Euler numbers and $$q$$-Bernstein polynomials. Zbl 1252.11022
Rim, Seog-Hoon; Joung, Joohee; Jin, Joung-Hee; Lee, Sun-Jung
2012
Exponential approximations on multiplicative calculus. Zbl 1202.26003
Misirli, Emine E.; Ozyapici, Ali
2009
Integral inequalities of Hermite-Hadamard type for harmonically quasi-convex functions. Zbl 1296.26041
Zhang, Tian-Yu; Ji, Ai-Ping; Qi, Feng
2013
Some classes of analytic functions associated with operators on Hilbert space involving Wright’s generalized hypergeometric function. Zbl 1060.30017
Dziok, J.; Raina, R. K.; Srivastava, H. M.
2004
Non-Archimedean integration associated with $$q$$-Bernoulli numbers. Zbl 1049.11020
Jang, Lee Chae; Pak, Hong Kyung
2002
On the degenerate Cauchy numbers and polynomials. Zbl 1361.11012
Kim, Taekyun
2015
Fundamental stabilities of an alternative quadratic reciprocal functional equation in non-Archimedean fields. Zbl 1334.39056
Bodaghi, Abasalt; Rassias, John Michael; Park, Choonkil
2015
Generalized Hardy-Berndt sums. Zbl 1228.11058
Can, M.; Cenkci, Mehmet; Kurt, Veli
2006
Extended Stirling polynomials of the second kind and extended Bell polynomials. Zbl 1391.11044
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo
2017
On index matrices. II: Intuitionistic fuzzy case. Zbl 1309.11021
Atanassov, Krassimir T.
2010
On the twisted weak $$q$$-Euler numbers and polynomials with weight $$0$$. Zbl 1297.11010
Jeong, Joo-Hee; Jin, Joung-Hee; Park, Jin-Woo; Rim, Seog-Hoon
2013
Extension of pseudocharacters from normal subgroups. Zbl 1333.22003
Shtern, A. I.
2015
Symmetry identities for the generalized higher-order $$q$$-Bernoulli polynomials under $$S_3$$ arising from $$p$$-adic Volkenborn integral on $${\mathbb Z}_p$$. Zbl 1305.11015
Dolgy, Dmitry V.; Kim, Taekyun; Rim, Seog-Hoon; Lee, S. H.
2014
Nonlinear implicit fractional differential equation involving $$\varphi$$-Caputo fractional derivative. Zbl 1433.34006
Abdo, Mohammed S.; Ibrahim, Ahmed G.; Panchal, Satish
2019
A note on Catalan numbers associated with $$p$$-adic integral on $${\mathbb Z}_p$$. Zbl 1353.11035
Kim, Taekyun
2016
Some formulas of ordered Bell numbers and polynomials arising from umbral calculus. Zbl 1434.11062
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo
2017
A note on the generalized $$q$$-Euler numbers. Zbl 1203.11025
Kim, Taekyun
2009
Symmetry identities for generalized twisted Euler polynomials twisted by unramified roots of unity. Zbl 1346.11022
Kim, Dae San
2012
On $$q$$-Bernstein and $$q$$-Hermite polynomials. Zbl 1256.05024
Kim, T.; Choi, J.; Kim, Y. H.; Ryoo, C. S.
2011
Abundant symmetry for higher-order Bernoulli polynomials. II. Zbl 1322.11021
Kim, Dae San; Lee, Nari; Na, Jiyoung; Park, Kyoung Ho
2013
Extension of pseudocharacters from normal subgroups. II. Zbl 1350.22007
Shtern, A. I.
2016
Generalization of Hardy’s inequality. Zbl 1058.26012
Chen, Chao-Ping; Qi, Feng
2004
Explicit expressions for Catalan-Daehee numbers. Zbl 1378.05014
Dolgy, Dmitry V.; Jang, Gwan-Woo; Kim, Dae San; Kim, Taekyun
2017
Identities of symmetry for generalized $$q$$-Euler polynomials arising from multivariate fermionic $$p$$-adic integral on $$\mathbb Z_p$$. Zbl 1366.11048
Kim, Dae San; Kim, Taekyun
2014
Generalized fractional kinetic equations associated with Aleph function. Zbl 1343.33007
Kumar, Dinesh; Choi, Junesang
2016
Some identities for degenerate Euler numbers and polynomials arising from degenerate Bell polynomials. Zbl 1353.11030
Dolgy, Dmitry V.; Kim, Taekyun; Kwon, Hyuck-In; Seo, Jong Jin
2016
Some explicit formulas of degenerate Stirling numbers associated with the degenerate special numbers and polynomials. Zbl 1401.11059
Dolgy, D. V.; Kim, Taekyun
2018
On rough weighted ideal convergence of triple sequence of Bernstein polynomials. Zbl 1421.40003
Hazarika, Bipan; Subramanian, N.; Esi, Ayhan
2018
On the $$q$$-extensions of the Bernoulli and Euler numbers, related identities and Lerch zeta function. Zbl 1208.11131
Kim, Taekyun; Kim, Young-Hee; Hwang, Kyung-Won
2009
A note on Dirichlet $$L$$-series. Zbl 1060.11052
Kim, Taekyun
2003
A note on the Bernoulli polynomials arising from a nonlinear differential equation. Zbl 1275.11041
Kang, Dongjin; Jeong, Joohee; Lee, Sun-Jung; Rim, Seog-Hoon
2013
On partially degenerate Bell numbers and polynomials. Zbl 1391.11043
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.
2017
Ioachimescu’s constant. Zbl 1245.11128
Chen, Chaoping; Li, Li; Xu, Yanqin
2010
Stability for fractional differential equations. Zbl 1296.34015
Choi, Sung Kyu; Kang, Bowon; Koo, Namjip
2013
Some identities of symmetry for Daehee polynomials arising from $$p$$-adic invariant integral on $$\mathbb Z_p$$. Zbl 1373.11022
Seo, Jong Jin; Kim, Taekyun
2016
A note on Kummer congruence for the Bernoulli numbers of higher order. Zbl 1059.11502
Jang, Lee Chae
2002
Complete monotonicity of a function involving the tri- and tetra-gamma functions. Zbl 1321.33002
Qi, Feng
2015
Color Laplacian energy of a graph. Zbl 1337.05036
2015
Revisit of identities for Daehee numbers arising from nonlinear differential equations. Zbl 1371.11048
Jang, Gwan-Woo; Kim, Taekyun
2017
Identities of symmetry for higher-order $$q$$-Euler polynomials. Zbl 1318.11032
Kim, Dae San; Kim, Taekyun; Lee, Sang-Hun; Seo, Jong-Jin
2014
On the modified $$q$$-Bernoulli polynomials with weight. Zbl 1352.11028
Park, Jin-Woo; Rim, Seog-Hoon
2014
A note on degenerate Stirling numbers of the first kind. Zbl 1439.11073
Kim, Dae San; Kim, Taekyun; Jang, Gwang-Woo
2018
On central Fubini polynomials associated with central factorial numbers of the second kind. Zbl 1439.11082
Kim, Dae San; Kwon, Jongkyum; Dolgy, Dmitry V.; Kim, Taekyun
2018
Degenerate complete Bell polynomials and numbers. Zbl 1386.11044
Kim, Taekyun
2017
Generalized fractional calculus formulas involving the product of Aleph-function and Srivastava polynomials. Zbl 1387.26013
Kumar, Dinesh; Gupta, R. K.; Shaktawat, B. S.; Choi, Junesang
2017
On the $$k$$-dimensional generalization of $$q$$-Bernstein polynomials. Zbl 1281.11022
Kim, Taekyun; Choi, Jongsung; Kim, Young-Hee
2011
Approximation by trigonometric polynomials to functions in $$L_p$$-norm. Zbl 1254.42003
Değer, Uğur; Dağadur, Ilhan; Küçükaslan, Mehmet
2012
The multiplication formulae for the Genocchi polynomials. Zbl 1219.05003
Kurt, Burak
2010
A further generalization of the Euler polynomials and on the 2D-Euler polynomials. Zbl 1293.11039
Kurt, Burak
2013
Hochschild kernel for locally bounded finite-dimensional representations of a connected reductive Lie group. Zbl 1222.22003
Shtern, Alexander I.
2010
On the alternating sums of powers of consecutive integers. Zbl 1157.11305
Kim, T.; Kim, Y.-H.; Lee, D.-H.; Park, D.-W.; Ro, Y. S.
2005
Certain modular relations for remarkable product of theta-functions. Zbl 1302.33018
Naika, M. S. Mahadeva; Bairy, K. Sushan; Suman, N. P.
2014
Degenerate Daehee polynomials of the second kind. Zbl 1429.11048
Kim, Dae San; Kim, Taekyun; Kwon, Hyuck-In; Jang, Gwan-Woo
2018
Feebly nil-clean unital rings. Zbl 1401.16043
Danchev, Peter V.
2018
$$q$$-Bernoulli numbers and polynomials related to $$p$$-adic invariant integral on $${\mathbb Z}_p$$. Zbl 1321.11028
Seo, J.-J.; Rim, S.-H.; Lee, S.-H.; Dolgy, D. V.; Kim, T.
2013
Monotonicity properties for a single server queue with classical retrial policy and service interruptions. Zbl 1353.60079
Boualem, Mohamed; Cherfaoui, Mouloud; Aïssani, Djamil
2016
On the symmetric identities of modified degenerate Bernoulli polynomials. Zbl 1373.11018
Dolgy, D. V.; Kim, Taekyun; Seo, Jong Jin
2016
Bilateral Mock theta functions of order “eleven”. Zbl 1067.33012
2003
Barnes’ multiple Bernoulli and Hermite mixed-type polynomials. Zbl 1318.11034
Kim, Dae San; Kim, Taekyun; Rim, Seog-Hoon; Dolgy, Dmitry V.
2015
A symmetry identity on the $$q$$-Genocchi polynomials of higher-order under third dihedral group $$D_3$$. Zbl 1321.11021
Ağyüz, Erkan; Acikgoz, Mehmet; Araci, Serkan
2015
Higher order convolution identities for Cauchy numbers of the second kind. Zbl 1331.05019
Komatsu, Takao
2015
A double inverse problem for a Fredholm partial integro-differential equation of fourth order. Zbl 1330.35544
Yuldashev, T. K.
2015
Symmetric identities involving weighted $$q$$-Genocchi polynomials under $$S_4$$. Zbl 1360.11039
Duran, Ugur; Acikgoz, Mehmet; Araci, Serkan
2015
Partition energy of a graph. Zbl 1332.05115
Sampathkumar, E.; Roopa, S. V.; Vidya, K. A.; Sriraj, M. A.
2015
Internal operations over 3-dimensional extended index matrices. Zbl 1346.11027
Traneva, Velichka
2015
Symmetric identities of degenerate Bernoulli polynomials. Zbl 1350.11032
Kim, Taekyun
2015
An upper bound to the second Hankel determinant for certain subclass of analytic functions. Zbl 1294.30023
Krishna, D. Vamshee; Ramreddy, T.
2013
Some results on analytic functions associated with vertical strip domain. Zbl 1365.30016
Sim, Young Jae; Kwon, Oh Sang
2016
On almost factoriality of integral domains. Zbl 1364.13007
Lim, Jung Wook
2016
Coefficient estimates of Mocanu-type meromorphic bi-univalent functions of complex order. Zbl 1365.30013
Murugusundaramoorthy, G.; Janani, T.; Cho, Nak Eun
2016
Revisit symmetric identities for the $$\lambda$$-Catalan polynomials under the symmetry group of degree $$n$$. Zbl 1419.11041
Kim, Taekyun; Kwon, Hyuck-in
2016
Some sequence spaces of interval numbers defined by Orlicz function. Zbl 1387.46009
Esi, Ayten; Çatalbaş, M. Necdet
2017
Explicit formulas for Korobov polynomials. Zbl 1437.11035
Kruchinin, Dmitry V.
2017
Comparison differential transform method with Adomian decomposition method for nonlinear initial value problems. Zbl 1375.65140
Yuluklu, Eda
2017
On the results of nonlocal Hilfer fractional semilinear differential inclusions. Zbl 1427.93041
Subashini, R.; Ravichandran, C.
2019
Symmetric identities of degenerate $$q$$-Bernoulli polynomials under symmetry group $$S_3$$. Zbl 1350.11030
Dolgy, Dmitry V.; Kim, Taekyun; Kwon, Hyuck-In; Seo, Jong Jin
2016
On the $$q$$-analogue of Daehee numbers and polynomials. Zbl 1353.05021
Park, Jin-Woo
2016
Nonlinear implicit fractional differential equation involving $$\varphi$$-Caputo fractional derivative. Zbl 1433.34006
Abdo, Mohammed S.; Ibrahim, Ahmed G.; Panchal, Satish
2019
On the results of nonlocal Hilfer fractional semilinear differential inclusions. Zbl 1427.93041
Subashini, R.; Ravichandran, C.
2019
On 5-regular bipartitions with odd parts distinct. Zbl 1423.11179
Mahadeva Naika, M. S.; Harishkumar, T.
2019
Some topological indices of certain classes of cycloalkenes. Zbl 1426.05014
2019
Stability of general $$A$$-quartic functional equations in non-Archimedean intuitionistic Fuzzy normed spaces. Zbl 07121651
Rassias, John Michael; Dutta, Hemen; Pasupathi, Narasimman
2019
A strategic view on the consequences of classical integral sub-strips and coupled nonlocal multi-point boundary conditions on a combined Caputo fractional differential equation. Zbl 1433.34013
Subramanian, M.; Kumar, A. R. Vidhya; Gopal, T. Nandha
2019
On the distribution of consecutive square-free numbers of the form $$[\alpha n], [\alpha n]+1$$. Zbl 1428.11163
Dimitrov, S. I.
2019
Partially smooth linear pretopological and topological operators for fuzzy sets. Zbl 1432.54007
Marinov, Evgeniy
2019
The minimum vertex-block dominating energy of the graph. Zbl 1435.05136
Udupa, Sayinath; Bhat, R. S.; Madhusudanan, Vinay
2019
Irregular labeling on transportation network of splitting graphs of stars. Zbl 1423.05146
Nurdin; Kim, Hye Kyung
2019
Mean-square stability of two classes of theta Milstein methods for nonlinear stochastic differential equations. Zbl 1432.65098
Eissa, Mahmoud A.
2019
A criterion for the continuity with respect to the original group topology of the restriction to the commutator subgroup for a locally bounded finite-dimensional representation of a connected Lie group. Zbl 1423.22005
Shtern, A. I.
2019
A note on central factorial numbers. Zbl 1439.11065
Kim, Taekyun
2018
Some explicit formulas of degenerate Stirling numbers associated with the degenerate special numbers and polynomials. Zbl 1401.11059
Dolgy, D. V.; Kim, Taekyun
2018
On rough weighted ideal convergence of triple sequence of Bernstein polynomials. Zbl 1421.40003
Hazarika, Bipan; Subramanian, N.; Esi, Ayhan
2018
A note on degenerate Stirling numbers of the first kind. Zbl 1439.11073
Kim, Dae San; Kim, Taekyun; Jang, Gwang-Woo
2018
On central Fubini polynomials associated with central factorial numbers of the second kind. Zbl 1439.11082
Kim, Dae San; Kwon, Jongkyum; Dolgy, Dmitry V.; Kim, Taekyun
2018
Degenerate Daehee polynomials of the second kind. Zbl 1429.11048
Kim, Dae San; Kim, Taekyun; Kwon, Hyuck-In; Jang, Gwan-Woo
2018
Feebly nil-clean unital rings. Zbl 1401.16043
Danchev, Peter V.
2018
A note on degenerate Stirling numbers and their applications. Zbl 1401.11060
Kim, Taekyun; Kim, Dae San; Kwon, Hyuck-In
2018
Unified integral operator involving generalized Bessel-Maitland function. Zbl 1414.33011
Khan, Waseem A.; Nisar, K. S.
2018
On a class of starlike functions related with Booth lemniscate. Zbl 1422.30019
Kargar, R.; Sokól, J.; Ebadian, A.; Trojnar-Spelina, L.
2018
Lacunary arithmetic convergence. Zbl 1421.40001
Yaying, Taja; Hazarika, Bipan
2018
Some identities of derangement numbers. Zbl 1403.11021
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.; Kwon, Jongkyum
2018
On harmonic $$(h, r)$$-convex functions. Zbl 1400.26060
Noor, Muhammad Aslam; Noor, Khalida Inayat; Iftikhar, Sabah
2018
Differential equations arising from the generating function of degenerate Bernoulli numbers of the second kind. Zbl 1428.05036
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo; Dolgy, Dmitry V.
2018
Existence of a solution of the problem of optimal control of mines for minerals. Zbl 1421.49035
Traneva, Velichka; Tranev, Stoyan
2018
Characters of finite-dimensional pseudorepresentations of groups. Zbl 1419.22005
Shtern, A. I.
2018
Two variable higher-order degenerate Fubini polynomials. Zbl 1405.11029
Kim, Dae San; Jang, Gwan-Woo; Kwon, Hyuck-In; Kim, Taekyun
2018
New separation axioms on closure spaces generated by relations. Zbl 1423.54004
Gupta, Ria; Das, A. K.
2018
Inequalities involving extended $$k$$-gamma and $$k$$-beta functions. Zbl 1403.33002
Rahman, G.; Nisar, K. S.; Kim, T.; Mubeen, S.; Arshad, M.
2018
A note on degenerate Stirling polynomials of the second kind. Zbl 1377.11027
Kim, Taekyun
2017
Some unified integrals associated with the generalized Struve function. Zbl 1371.33013
Nisar, K. S.; Suthar, D. L.; Purohit, S. D.; Aldhaifallah, M.
2017
Identities and relations related to combinatorial numbers and polynomials. Zbl 1407.11041
Simsek, Yilmaz
2017
A note on degenerate Fubini polynomials. Zbl 1386.11051
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo
2017
Extended Stirling polynomials of the second kind and extended Bell polynomials. Zbl 1391.11044
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo
2017
Some formulas of ordered Bell numbers and polynomials arising from umbral calculus. Zbl 1434.11062
Kim, Taekyun; Kim, Dae San; Jang, Gwan-Woo
2017
Explicit expressions for Catalan-Daehee numbers. Zbl 1378.05014
Dolgy, Dmitry V.; Jang, Gwan-Woo; Kim, Dae San; Kim, Taekyun
2017
On partially degenerate Bell numbers and polynomials. Zbl 1391.11043
Kim, Taekyun; Kim, Dae San; Dolgy, Dmitry V.
2017
Revisit of identities for Daehee numbers arising from nonlinear differential equations. Zbl 1371.11048
Jang, Gwan-Woo; Kim, Taekyun
2017
Degenerate complete Bell polynomials and numbers. Zbl 1386.11044
Kim, Taekyun
2017
Generalized fractional calculus formulas involving the product of Aleph-function and Srivastava polynomials. Zbl 1387.26013
Kumar, Dinesh; Gupta, R. K.; Shaktawat, B. S.; Choi, Junesang
2017
Some sequence spaces of interval numbers defined by Orlicz function. Zbl 1387.46009
Esi, Ayten; Çatalbaş, M. Necdet
2017
Explicit formulas for Korobov polynomials. Zbl 1437.11035
Kruchinin, Dmitry V.
2017
Comparison differential transform method with Adomian decomposition method for nonlinear initial value problems. Zbl 1375.65140
Yuluklu, Eda
2017
Curvelet transform on rapidly decreasing functions. Zbl 1379.46034
Moorthy, R. Subash; Roopkumar, R.
2017
Analysis of 90/150 cellular automata with extended symmetrical transition rules. Zbl 1372.68183
Kim, Han-Doo; Cho, Sung-Jin; Choi, Un-Sook; Kwon, Min-Jeong
2017
On graded semi-prime rings. Zbl 1370.16039
Abu-Dawwas, Rashid
2017
Higher-order degenerate $$q$$-Bernoulli polynomials. Zbl 1376.05015
Kim, Taekyun; Jang, Gwan-Joo
2017
Fractional integral operators involving generalized Struve function. Zbl 1387.26014
Nisar, Kottakkaran Sooppy
2017
Inequalities for coordinated harmonic preinvex functions. Zbl 1387.26044
Noor, Muhammad Aslam; Rassias, Themistocles M.; Noor, Khalida Inayat; Iftikhar, Sabah
2017
A note on Catalan numbers associated with $$p$$-adic integral on $${\mathbb Z}_p$$. Zbl 1353.11035
Kim, Taekyun
2016
Extension of pseudocharacters from normal subgroups. II. Zbl 1350.22007
Shtern, A. I.
2016
Generalized fractional kinetic equations associated with Aleph function. Zbl 1343.33007
Kumar, Dinesh; Choi, Junesang
2016
Some identities for degenerate Euler numbers and polynomials arising from degenerate Bell polynomials. Zbl 1353.11030
Dolgy, Dmitry V.; Kim, Taekyun; Kwon, Hyuck-In; Seo, Jong Jin
2016
Some identities of symmetry for Daehee polynomials arising from $$p$$-adic invariant integral on $$\mathbb Z_p$$. Zbl 1373.11022
Seo, Jong Jin; Kim, Taekyun
2016
Monotonicity properties for a single server queue with classical retrial policy and service interruptions. Zbl 1353.60079
Boualem, Mohamed; Cherfaoui, Mouloud; Aïssani, Djamil
2016
On the symmetric identities of modified degenerate Bernoulli polynomials. Zbl 1373.11018
Dolgy, D. V.; Kim, Taekyun; Seo, Jong Jin
2016
Some results on analytic functions associated with vertical strip domain. Zbl 1365.30016
Sim, Young Jae; Kwon, Oh Sang
2016
On almost factoriality of integral domains. Zbl 1364.13007
Lim, Jung Wook
2016
Coefficient estimates of Mocanu-type meromorphic bi-univalent functions of complex order. Zbl 1365.30013
Murugusundaramoorthy, G.; Janani, T.; Cho, Nak Eun
2016
Revisit symmetric identities for the $$\lambda$$-Catalan polynomials under the symmetry group of degree $$n$$. Zbl 1419.11041
Kim, Taekyun; Kwon, Hyuck-in
2016
Symmetric identities of degenerate $$q$$-Bernoulli polynomials under symmetry group $$S_3$$. Zbl 1350.11030
Dolgy, Dmitry V.; Kim, Taekyun; Kwon, Hyuck-In; Seo, Jong Jin
2016
On the $$q$$-analogue of Daehee numbers and polynomials. Zbl 1353.05021
Park, Jin-Woo
2016
On character amenability of restricted semigroup algebras. Zbl 1365.46046
Mewomo, O. T.; Ogunsola, O. J.
2016
Differential equations arising from Stirling polynomials and applications. Zbl 1345.05007
Kim, Taekyun; Kim, Dae San; Jang, Lee-Chae; Kwon, Hyuck-In; Seo, Jong Jin
2016
A note on poly-Daehee numbers and polynomials. Zbl 1373.11021
Lim, Dongkyu; Kwon, Jongkyum
2016
On a systems of rational difference equations of order two. Zbl 1353.39011
Eldessoky, M. M.
2016
Mixed value problem for a pseudoparabolic type integro-differential equation with delay and degenerate kernel. Zbl 1348.35123
Yuldashev, T. K.
2016
On the symmetric $$q$$-Lauricella functions. Zbl 1347.33037
Ernst, Thomas
2016
Extension of pseudocharacters from normal subgroups. III. Zbl 1364.22004
Shtern, A. I.
2016
Expansions of degenerate $$q$$-Euler numbers and polynomials. Zbl 1423.11055
Kim, Taekyun; Dolgy, Dmitry V.; Kwon, Hyuck-in
2016
The $$p$$-binomial transform, Cauchy numbers and figurate numbers. Zbl 1376.05004
Borisov, Borislav Stanishev
2016
CR-submanifolds of a nearly $$(\varepsilon, \delta)$$-trans Sasakian manifold. Zbl 1365.53020
Rahman, Shamsur; Jun, Jae-Bok
2016
Some identities involving Frobenius-Euler polynomials and numbers. Zbl 1350.11035
Kim, Taekyun; Seo, Jong Jin
2016
Fractional polynomial method for solving integro-differential equations of fractional order. Zbl 1350.65146
Krishnaveni, K.; Balachandar, S. Raja; Venkatesh, S. G.
2016
Some identities of degenerate Frobenius-Euler polynomials and numbers. Zbl 1350.11034
Kim, Taekyun; Kwon, Hyuck-In; Seo, Jong Jin
2016
Coefficient bounds for generalized multivalent functions. Zbl 1361.30024
Hussain, Saqib; Khan, Nazar; Khan, Shahid
2016
Symmetric identities for an analogue of Catalan polynomials. Zbl 1353.11036
Kim, Taekyun; Kim, Dae San; Seo, Jong-Jin
2016
A note on degenerate Changhee numbers and polynomials. Zbl 1342.11031
Kwon, Hyuck In; Kim, Taekyun; Seo, Jong Jin
2015
On the degenerate Cauchy numbers and polynomials. Zbl 1361.11012
Kim, Taekyun
2015
Fundamental stabilities of an alternative quadratic reciprocal functional equation in non-Archimedean fields. Zbl 1334.39056
Bodaghi, Abasalt; Rassias, John Michael; Park, Choonkil
2015
Extension of pseudocharacters from normal subgroups. Zbl 1333.22003
Shtern, A. I.
2015
Complete monotonicity of a function involving the tri- and tetra-gamma functions. Zbl 1321.33002
Qi, Feng
2015
Color Laplacian energy of a graph. Zbl 1337.05036
2015
Barnes’ multiple Bernoulli and Hermite mixed-type polynomials. Zbl 1318.11034
Kim, Dae San; Kim, Taekyun; Rim, Seog-Hoon; Dolgy, Dmitry V.
2015
A symmetry identity on the $$g$$-Genocchi polynomials of higher-order under third dihedral group $$D_3$$. Zbl 1321.11021
Ağyüz, Erkan; Acikgoz, Mehmet; Araci, Serkan
2015
Higher order convolution identities for Cauchy numbers of the second kind. Zbl 1331.05019
Komatsu, Takao
2015
A double inverse problem for a Fredholm partial integro-differential equation of fourth order. Zbl 1330.35544
Yuldashev, T. K.
2015
Symmetric identities involving weighted $$q$$-Genocchi polynomials under $$S_4$$. Zbl 1360.11039
Duran, Ugur; Acikgoz, Mehmet; Araci, Serkan
2015
Partition energy of a graph. Zbl 1332.05115
Sampathkumar, E.; Roopa, S. V.; Vidya, K. A.; Sriraj, M. A.
2015
Internal operations over 3-dimensional extended index matrices. Zbl 1346.11027
Traneva, Velichka
2015
Symmetric identities of degenerate Bernoulli polynomials. Zbl 1350.11032
Kim, Taekyun
2015
Analysis of characteristic polynomial of cellular automata with symmetrical transition rules. Zbl 1316.68081
Choi, Un-Sook; Cho, Sung-Jin; Kong, Gil-Tak
2015
The difference between the approximate and the accurate solution to stochastic differential delay equation. Zbl 1326.60085
Kim, Young-Ho
2015
Some theorems on the approximation of non-integrable functions via singular integral operators. Zbl 1321.41029
Uysal, Gumrah; Yilmaz, Mine Menekse
2015
On ($$m_k$$)-hypercyclicity criterion. Zbl 1341.47006
Kim, Eunsang
2015
On a system of two nonlinear difference equations of order two. Zbl 1334.39005
Elsayed, E. M.
2015
Identities of symmetry for degenerate $$q$$-Bernoulli polynomials. Zbl 1339.11028
Kim, Taekyun; Kwon, Hyuck-In; Seo, Jong-Jin
2015
Proximal Delaunay triangulation regions. Zbl 1333.65023
Peters, J. F.
2015
...and 125 more Documents
#### Cited by 495 Authors
121 Kim, Taekyun 86 Kim, Dae San 27 Rim, Seog-Hoon 19 Simsek, Yilmaz 17 Ryoo, Cheon Seoung 15 Dolgiĭ, Dmitriĭ Viktorovich 14 Kwon, Hyuck In 14 Kwon, JongKyum 14 Shtern, Aleksandr Isaakovich 12 Kim, Young Hee Yun 12 Noor, Muhammad Aslam 11 Noor, Khalida Inayat 10 Araci, Serkan 10 Jang, Lee-Chae 10 Lee, Sang Hun 9 Suthar, Daya Lal 8 Jang, Gwan-Woo 8 Khan, Waseem Ahmad 7 Jung, Nam Soon 7 Kim, Byungmoon 7 Lee, Hui Young 7 Lim, Dongkyu 7 Pyo, Sung-Soo 7 Qi, Feng 6 Acikgoz, Mehmet 6 Duran, Ugur 6 Hwang, Kyung-Won 6 Kim, Wonjoo 6 Lee, Byungje 5 Awan, Muhammad Uzair 5 Jeong, Joohee 5 Kang, Jung-Yoog 5 Kim, Dojin 5 Kumar, Dinesh 5 Lee, Jeong-Gon 5 Safdar, Farhat 5 Senthil Kumar, B. V. 4 Abdo, Mohammed Salem 4 Bodaghi, Abasalt 4 Can, Mumun 4 Cangul, Ismail Naci 4 Choudhury, Binayak Samadder 4 Esi, Ayhan 4 Estala-Arias, Samuel 4 Kim, Han Young 4 Ozyapici, Ali 4 Roeva, Olympia N. 4 Srivastava, Hari Mohan 4 Subramanian, Nagarajan 3 Aguilar-Arteaga, Victor A. 3 Amsalu, Hafte 3 Atanassov, Krassimir Todorov 3 Chaudhary, M. P. 3 Chen, Chaoping 3 Choi, Jongsung 3 Choi, Sangki 3 Dağli, Muhammet Cihat 3 Danchev, Peter Vassilev 3 Dutta, Hemen 3 Fidanova, Stefka 3 Haq, Sirazul 3 Khan, Abdul Hakim 3 Khan, Idrees A. 3 Khrennikov, Andreĭ Yur’evich 3 Kim, Daeyeoul 3 Kim, Minsoo 3 Mihai, Marcela V. 3 Nisar, Kottakkaran S. 3 Ozden, Hacer 3 Shelkovich, Vladimir M. 3 Shen, Shimeng 3 Yardimci, Ahmet 3 You, Xu 3 Yun, Sang Jo 3 Zhang, Wenpeng 2 Acar, Tuncer 2 Adiga, Chandrashekar 2 Ahmad, Mohammad 2 Aissani, Djamil 2 Al-Kadi, Deena 2 Alem, Lala Maghnia 2 Ayant, Frédéric Y. 2 Bilgehan, Bulent 2 Boualem, Mohamed 2 Chandok, Sumit 2 Chen, Zhuoyu 2 Chu, Yuming 2 Cruz-López, Manuel 2 Daniel, Ifeyinwa E. 2 De, Uday Chand 2 Değer, Uğur 2 Do, Younghae 2 Dolgy, Dmitry Victorovich 2 Dutta, P. N. 2 Dziok, Jacek 2 Gupta, Purnima 2 Habenom, Haile 2 Harjani, Jackie 2 Iftikhar, Sabah 2 Joseph, Mayamma ...and 395 more Authors
#### Cited in 146 Journals
58 Advances in Difference Equations 26 Russian Journal of Mathematical Physics 25 Journal of Inequalities and Applications 23 Abstract and Applied Analysis 19 Journal of Nonlinear Science and Applications 19 Symmetry 17 Applied Mathematics and Computation 14 Revista de la Real Academia de Ciencias Exactas, Físicas y Naturales. Serie A: Matemáticas. RACSAM 12 International Journal of Mathematics and Mathematical Sciences 10 Discrete Dynamics in Nature and Society 10 Journal of Mathematics 9 Journal of Mathematical Analysis and Applications 8 Honam Mathematical Journal 7 Palestine Journal of Mathematics 7 Journal of Function Spaces 6 The Ramanujan Journal 5 Journal of Applied Mathematics 5 Asian-European Journal of Mathematics 5 Journal of Applied Mathematics & Informatics 5 Axioms 4 Kyungpook Mathematical Journal 4 Results in Mathematics 4 Boletim da Sociedade Paranaense de Matemática. Terceira Série 4 Fixed Point Theory and Applications 4 $$p$$-Adic Numbers, Ultrametric Analysis, and Applications 4 Mathematics 4 International Journal of Applied and Computational Mathematics 4 Open Mathematics 3 Rocky Mountain Journal of Mathematics 3 Ukrainian Mathematical Journal 3 Journal of Computational and Applied Mathematics 3 Journal of Number Theory 3 Applied Mathematics Letters 3 Numerical Algorithms 3 The Journal of Fourier Analysis and Applications 3 Communications of the Korean Mathematical Society 3 Hacettepe Journal of Mathematics and Statistics 3 AKCE International Journal of Graphs and Combinatorics 3 Cubo 3 International Journal of Analysis and Applications 3 Korean Journal of Mathematics 2 Journal of the Korean Mathematical Society 2 Publications de l’Institut Mathématique. Nouvelle Série 2 Rendiconti del Circolo Matemàtico di Palermo. 
Serie II 2 Topology and its Applications 2 Bulletin of the Korean Mathematical Society 2 Acta Mathematica Hungarica 2 Journal of Mathematical Sciences (New York) 2 Computational and Applied Mathematics 2 Journal of the Egyptian Mathematical Society 2 Turkish Journal of Mathematics 2 Filomat 2 Analysis (München) 2 Lobachevskii Journal of Mathematics 2 Iranian Journal of Science and Technology. Transaction A: Science 2 International Journal of Number Theory 2 Proyecciones 2 East Asian Mathematical Journal 2 Acta Universitatis Sapientiae. Mathematica 2 Acta Universitatis Sapientiae. Informatica 2 Journal of Pseudo-Differential Operators and Applications 2 Journal of Mathematics and Computer Science. JMCS 2 Journal of Applied Analysis and Computation 2 Electronic Journal of Mathematical Analysis and Applications EJMAA 1 Indian Journal of Pure & Applied Mathematics 1 Lithuanian Mathematical Journal 1 Mathematical Methods in the Applied Sciences 1 Mathematics of Computation 1 Acta Arithmetica 1 Acta Mathematica Vietnamica 1 Functiones et Approximatio. Commentarii Mathematici 1 Le Matematiche 1 Nonlinear Analysis. Theory, Methods & Applications. Series A: Theory and Methods 1 Proceedings of the Japan Academy. Series A 1 Revista Colombiana de Matemáticas 1 Semigroup Forum 1 Transactions of the Moscow Mathematical Society 1 Note di Matematica 1 Statistics & Probability Letters 1 Circuits, Systems, and Signal Processing 1 Bulletin of the Iranian Mathematical Society 1 Journal of the Nigerian Mathematical Society 1 Facta Universitatis. Series Mathematics and Informatics 1 Mathematical and Computer Modelling 1 Japan Journal of Industrial and Applied Mathematics 1 YUJOR. Yugoslav Journal of Operations Research 1 Expositiones Mathematicae 1 Indagationes Mathematicae. New Series 1 Journal of Algebraic Combinatorics 1 The Journal of Analysis 1 Acta Universitatis Matthiae Belii. 
Series Mathematics 1 Georgian Mathematical Journal 1 Buletinul Academiei de Științe a Republicii Moldova. Matematica 1 Integral Transforms and Special Functions 1 Discussiones Mathematicae. Graph Theory 1 Sbornik: Mathematics 1 Transformation Groups 1 Theory of Computing Systems 1 Differential Equations and Dynamical Systems 1 Soft Computing ...and 46 more Journals
#### Cited in 46 Fields
266 Number theory (11-XX) 107 Combinatorics (05-XX) 82 Special functions (33-XX) 52 Real functions (26-XX) 37 Ordinary differential equations (34-XX) 25 Functions of a complex variable (30-XX) 24 Harmonic analysis on Euclidean spaces (42-XX) 19 Operator theory (47-XX) 17 Numerical analysis (65-XX) 15 Topological groups, Lie groups (22-XX) 15 Approximations and expansions (41-XX) 14 Sequences, series, summability (40-XX) 14 Differential geometry (53-XX) 13 Partial differential equations (35-XX) 13 General topology (54-XX) 13 Probability theory and stochastic processes (60-XX) 11 Difference and functional equations (39-XX) 11 Integral transforms, operational calculus (44-XX) 10 Functional analysis (46-XX) 9 Operations research, mathematical programming (90-XX) 6 Group theory and generalizations (20-XX) 6 Computer science (68-XX) 5 Mathematical logic and foundations (03-XX) 4 Field theory and polynomials (12-XX) 4 Commutative algebra (13-XX) 4 Associative rings and algebras (16-XX) 4 Abstract harmonic analysis (43-XX) 4 Information and communication theory, circuits (94-XX) 3 Measure and integration (28-XX) 3 Dynamical systems and ergodic theory (37-XX) 3 Integral equations (45-XX) 3 Statistics (62-XX) 2 Algebraic geometry (14-XX) 2 Linear and multilinear algebra; matrix theory (15-XX) 2 Fluid mechanics (76-XX) 2 Game theory, economics, finance, and other social and behavioral sciences (91-XX) 2 Biology and other natural sciences (92-XX) 2 Systems theory; control (93-XX) 1 General and overarching topics; collections (00-XX) 1 Nonassociative rings and algebras (17-XX) 1 Geometry (51-XX) 1 Manifolds and cell complexes (57-XX) 1 Global analysis, analysis on manifolds (58-XX) 1 Mechanics of particles and systems (70-XX) 1 Classical thermodynamics, heat transfer (80-XX) 1 Quantum theory (81-XX)
https://ftp.mcs.anl.gov/pub/fathom/moab-docs/IdealElements_8hpp_source.html
MOAB: Mesh Oriented datABase (version 5.4.1)
IdealElements.hpp
/* *****************************************************************
   MESQUITE -- The Mesh Quality Improvement Toolkit

   Copyright 2006 Sandia National Laboratories.  Developed at the
   University of Wisconsin--Madison under SNL contract number
   624796.  The U.S. Government and the University of Wisconsin
   retain certain rights to this software.

   This library is free software; you can redistribute it and/or
   modify it under the terms of the GNU Lesser General Public
   License as published by the Free Software Foundation; either
   version 2.1 of the License, or (at your option) any later version.

   This library is distributed in the hope that it will be useful,
   but WITHOUT ANY WARRANTY; without even the implied warranty of
   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
   Lesser General Public License for more details.

   You should have received a copy of the GNU Lesser General Public License
   (lgpl.txt) along with this library; if not, write to the Free Software
   Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA

   (2006) [email protected]

   ***************************************************************** */

/** \file IdealElements.hpp
 *  \brief
 *  \author Jason Kraftcheck
 */

#ifndef MSQ_IDEAL_ELEMENTS_HPP
#define MSQ_IDEAL_ELEMENTS_HPP

#include "Mesquite.hpp"

namespace MBMesquite
{

class Vector3D;

/**\brief Get ideal element with unit edge length
 *
 * Get the list of vertex coordinates for an ideal element with its
 * centroid at the origin and all edges of unit length.  Surface
 * elements lie in the XY plane.
 *
 *\param type the type of the element to obtain.
 *\param unit_height_pyramid If true, the ideal pyramid has its height equal
 *       to the length of an edge of the base, rather than the default
 *       of equilateral triangular faces.
 *\return corner vertex coordinates in canonical order.
 */
const Vector3D* unit_edge_element( EntityTopology type, bool unit_height_pyramid = false );

/**\brief Get ideal element with unit area or volume
 *
 * Get the list of vertex coordinates for an ideal element with its
 * centroid at the origin and unit area/volume.  Surface
 * elements lie in the XY plane.
 *
 *\param type the type of the element to obtain.
 *\param unit_height_pyramid If true, the ideal pyramid has its height equal
 *       to the length of an edge of the base, rather than the default
 *       of equilateral triangular faces.
 *\return corner vertex coordinates in canonical order.
 */
const Vector3D* unit_element( EntityTopology type, bool unit_height_pyramid = false );

} // namespace MBMesquite

#endif
https://zbmath.org/?q=an:0833.28008
# zbMATH — the first resource for mathematics
The group of eigenvalues of a rank one transformation. (English) Zbl 0833.28008
In an earlier paper [Can. Math. Bull. 37, No. 1, 29-36 (1994; Zbl 0793.28013)], the authors gave a description of the maximal spectral type of a rank one transformation $$T$$, as a certain generalized Riesz product. Apparently it was suggested by J.-F. Mela that this description is related to the group $$e(T)$$ of $$L^\infty$$-eigenvalues of $$T$$. These are the $$L^2$$-eigenvalues when the underlying space is of finite measure, but the usual cutting and stacking construction for rank one maps allows the resulting measure space to be $$\sigma$$-finite.
Several characterizations of $$e(T)$$ are given for rank one $$T$$, one of which is intimately related to the corresponding expression for the maximal spectral type of $$T$$.
##### MSC:
28D05 Measure-preserving transformations 47A35 Ergodic theory of linear operators
https://cossan.co.uk/wiki/index.php/Cantilever_Beam_(Uncertainty_Quantification)
# Cantilever Beam (Uncertainty Quantification)
This page shows how to perform Uncertainty Quantification of the Cantilever Beam with COSSAN-X.
# Problem Definition in COSSAN-X
## Define a Project
In the first step a new project named 'Cantilever_beam' is created: This can be done by pressing the 'new' icon, or from the menu File->New->Project.
After the project has been created, the project name 'Cantilever_beam' appears in the workspace. All the subfolders are empty.
## Input
### Parameters
In the next step, the constant input parameters, i.e. the length L and the width b, are specified. Right-clicking the folder 'parameter' and selecting 'Add parameter' brings up the following wizard (see also Parameter (wizard)):
After pressing Finish, a Parameter object is created and its properties can be edited (see also Parameter (editor)):
The same procedure is repeated to define the parameter b, so that both the length L and the width b are available as parameters in COSSAN-X.
To save a created parameter, simply press the 'Ctrl' and 's' keys simultaneously.
### Random Variables
In the next step, a probability distribution is associated to each uncertain quantity, i.e. the height h, the tip load P, the density $\rho$ and the Young's modulus E.
To define new random variables, right click on the sub-folder "Random Variables" and select "Add Random Variable". Then, the following input mask will pop up, which allows you to define the name and a description of the uncertain quantity:
After pressing "Finish" the editor for Random Variable appears, where the distribution type can be selected from a list. In the following, the "Normal" distribution has been selected for the variable height h, and the mean of 0.24 and standard deviation of 0.01 have been inserted.
After all parameters of the distribution have been specified, the preview of the PDF and CDF of the random variable is shown. (It is necessary to save the data of the Random Variable by either pressing "Ctrl s" or pressing the save icon).
This procedure is repeated for all uncertain parameters. For instance, the following figure shows the definition of the Young's modulus:
Then an overview of all defined random variables is shown in the bottom part of the central window of the GUI:
### Random Variable Set
Next, at least one Random Variable Set needs to be specified. Random Variable Sets allow users to define correlations between Random Variables. Please note that it is NOT necessary to define a Random Variable Set for random variables that are independent or uncorrelated. In this simple case, one single set "rvset1" suffices.
To create a random variable set, right-click on the sub-folder "Random Variable Sets" and select "Add Random Variables Set" (see also Random Variables Set (wizard)). Then the following wizard pops up, where the name of the set can be modified:
After pressing Finish, the editor for Random Variable Set will open, as shown below. In this case all four random variables are inserted into the set. The GUI supports drag and drop; hence, the list is filled by simply dragging all icons from the workspace to the window on the right.
Without pressing the "Correlations" button, all random variables in the set are assumed to be uncorrelated. In this case, the uncertainties in the tip load P and the height are obviously independent. However, the elasticity modulus E of wood is strongly correlated with the density. Here, a correlation coefficient of 0.8 has been selected. Note that zero correlations can be left blank.
This completes the input in the sub-folder "Random Variable Sets".
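Behind the GUI, a Random Variable Set with a non-zero correlation entry amounts to drawing jointly Gaussian samples. The following is a minimal stand-alone sketch of the idea in Python, outside COSSAN-X: the correlation of 0.8 between E and rho comes from this tutorial, but the means and standard deviations below are hypothetical placeholders, since the tutorial does not list them.

```python
import numpy as np

# Correlation between Young's modulus E and density rho (from the tutorial).
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])

# Hypothetical marginal parameters (NOT given in the tutorial):
# E ~ Normal(1.0e10 Pa, 1.0e9 Pa), rho ~ Normal(600 kg/m^3, 90 kg/m^3)
means = np.array([1.0e10, 600.0])
stds = np.array([1.0e9, 90.0])

rng = np.random.default_rng(0)
z = rng.standard_normal((100_000, 2))   # independent standard normals
x = z @ np.linalg.cholesky(corr).T      # impose the target correlation
samples = means + stds * x              # scale/shift to the marginals

# Sample correlation should be close to the prescribed 0.8
print(np.corrcoef(samples, rowvar=False)[0, 1])
```

This Cholesky-based construction is the standard way to turn independent standard normals into correlated Gaussian inputs; a correlation matrix with off-diagonal 0.8 is positive definite, so the factorization always exists here.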
### Functions
Functions of constant parameters and of random variables are very powerful and useful in many cases. In this simple case, however, the response quantity could be computed without using any function, so this sub-folder could be left empty.
Since the response (i.e. the tip displacement w) is analytically known, it could also be defined as a function. However, for demonstration, only the moment of inertia will be computed using a function, while w will be computed independently by a Matlab script.
In the following, a function for the moment of inertia $I=bh^3/12$ will be created.
Analogously to parameters and random variables, each function is created by right-clicking the sub-folder "Function".
The pop-up window allows creating a new function, whose name and description can be edited. We use the name "I" and the description "moment of inertia".
After pressing Finish, the editor for Function opens. Note that all previously defined names can be used within the function. Each name must be enclosed by <& &>, as demonstrated below.
The function definition is:
<&b&>.*<&h&>.^3/12
as shown in the following picture.
## Evaluator
An evaluator is used in a definition of a model. If a model could be seen as a black box that returns output values given the input values, an evaluator is actually the content of such a black box. Thus, an evaluator is a low-level definition of the functional relation between the inputs and outputs of a model.
### Matlab script
A Matlab script (also called MIO "Matlab Input Output") is used to calculate the quantity of interest. The new Matlab script is created using the wizard. To create a new Matlab script the following path must be followed:
Evaluators => Matlab Files => Right click => Add Matlab Script
Then the Matlab script editor will appear. First of all, it is necessary to define the input factors required to compute the beam displacement. In this example, all the defined inputs are required; hence, pressing the plus button in the Input window of the Matlab script editor, it is possible to add all of them. Then, it is necessary to define the name of the computed quantity. The displacement is identified by the letter w.
In order to edit the script, please select the tab script from the Matlab script editor (indicated by the red arrow in the above picture). The displacement of the Beam is calculated by means of the following Matlab script:
Toutput(n).w=(Tinput(n).rho*9.81*Tinput(n).b*Tinput(n).h*Tinput(n).L^4)/(8*Tinput(n).E*Tinput(n).I) + ...
(Tinput(n).P*Tinput(n).L^3)/(3*Tinput(n).E*Tinput(n).I);
The above script can be copied and pasted inside the for loop, as shown in the figure at the right:
In this example Matlab structures are used to interact with the Engine. For more information about the Matlab script please refer to the following page: Mio (editor)
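For reference, the Matlab expression above implements the Euler-Bernoulli tip displacement of a cantilever under its own distributed weight plus a tip load, w = rho*g*b*h*L^4/(8*E*I) + P*L^3/(3*E*I), with I = b*h^3/12 supplied by the function "I". A minimal stand-alone sketch (in Python rather than Matlab; all numerical values are hypothetical except the mean height h = 0.24 specified earlier in this tutorial):

```python
def tip_displacement(rho, b, h, L, E, P, g=9.81):
    """Cantilever tip displacement: distributed self-weight term + tip-load term."""
    I = b * h**3 / 12.0                                  # moment of inertia (the "I" function)
    w_weight = rho * g * b * h * L**4 / (8.0 * E * I)    # deflection from self-weight
    w_tip = P * L**3 / (3.0 * E * I)                     # deflection from the tip load P
    return w_weight + w_tip

# Hypothetical values (only h = 0.24 is taken from the tutorial):
w = tip_displacement(rho=600.0, b=0.12, h=0.24, L=1.8, E=1.0e10, P=5000.0)
print(f"tip displacement w = {w:.6f} m")
```

Writing the formula as a plain function like this also makes it easy to sanity-check the Matlab script against a hand calculation before running any sampling.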
### Performance Function
The Performance function is used to define the domain of definition of the model into two sets, the safe set and the failure set. For more information see Performance function definition.
## Model
To investigate the effect of the input uncertainties on the response, it is necessary to define a model. As shown in the following, three options are available. The first option, "Physical Model", allows generating independent samples by various methods to assess the variability (scatter) of the response. The second option, "Optimization Models", is selected in case the response should be optimized in some sense. The third option, "Probabilistic Models", is dedicated to reliability analysis. The "(0)" at the end of the three options indicates that none of them has been set up yet.
### Physical Model
In a first step, the most basic option, "Physical Model", is selected by moving the mouse pointer to "Physical Model", clicking the right mouse button, and selecting "Add Physical Model":
A wizard pops up showing default names which can be modified. In the following the default values are accepted by pressing the 'Finish' button.
Then, the editor appears, in which Evaluators and Random Variable Sets can be added. The error message in red at the top indicates that the input is, at the present state, insufficient to perform a simulation analysis.
Clicking the '+' of the evaluator (top left), the following window pops up, showing the available options. In this example, only the option "DisplacementBeam" is available; it is selected by pressing 'OK'.
Then the input and output associated with the evaluator are shown at the right in the windows 'Inputs' and 'Outputs' as shown in the following:
Saving the object, the error message at the top disappears and the model is ready to be analyzed. However, it is also necessary to add the Random Variable Set in order to consider correlation between the random variables.
In case the random variable set is not specified, the random variables required to evaluate the model are added automatically to an uncorrelated Random Variable set named "uncorrelated_rvs".
Next, all required Random Variable Sets need to be specified. Pressing the '+' button of the Random Variable Sets, the available sets are shown:
After pressing the 'OK' button, all required data are specified. Note that the warnings in red have now disappeared.
# Uncertainty Quantification Analysis
After saving the data, the simulation is ready to start. Different simulations can be started by pressing the small white triangle within a green circle at the top right.
## Run the Analysis
The next window allows choosing among different analysis types, such as 'Design of Experiments', 'Sensitivity Analysis', 'Uncertainty Quantification' and 'Userdefined Analysis'. In this tutorial, 'Uncertainty Quantification' is selected.
After pressing 'Next' the following page appears and the method simulation is selected:
After pressing 'Next' in the above window, various options for simulation are provided. The well-known Monte Carlo sampling procedure is selected. At the right, the number of independent samples can be declared. It is possible to split the computation into several batches, which might be processed by different computers using parallel computing.
It is recommended to keep the number of samples per simulation smaller than 10000, since the computational performance degrades as the batch size increases.
In this simple example with little computational effort, only a single batch is used. Choosing a maximum runtime of zero means that no limit on the runtime is imposed.
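As a rough illustration of what such a Monte Carlo run does, the following Python sketch propagates assumed input distributions through the beam formula and reports sample statistics. The distributions and fixed values here are hypothetical stand-ins; the actual random variables are the ones defined in the project:

```python
import random
import statistics

random.seed(0)  # reproducible sampling

g, b, h, L = 9.81, 0.1, 0.2, 1.8  # fixed geometry (assumed values)
I = b * h**3 / 12                 # second moment of area

def tip_displacement(rho, E, P):
    # same expression as the Matlab evaluator above
    return (rho * g * b * h * L**4) / (8 * E * I) + (P * L**3) / (3 * E * I)

# hypothetical input distributions -- stand-ins for the project's random variables
samples = [
    tip_displacement(random.gauss(600, 60),    # density rho
                     random.gauss(10e9, 1e9),  # Young's modulus E
                     random.gauss(5000, 400))  # tip load P
    for _ in range(1000)
]
print(statistics.mean(samples), statistics.stdev(samples))
```

The mean and standard deviation of the 1000 samples give a first impression of the scatter of the response, which is what the scatter plots and histograms below visualize.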
Pressing 'Next', it is possible to specify the Grid Settings (i.e. how to perform the analysis). In this example the analysis will be performed on the local machine, and it is not necessary to change the default values in the mask.
The actual computation starts after pressing the 'Finish' button. Then, the following window appears.
When completed, a message of successful completion is provided:
## Show the Results
All results are accessible in the section "Analysis". The results are stored in sub-folders according to the performed analysis. In the present case only Uncertainty Quantification has been performed; this can be recognized by the value '(1)' in the folder 'Uncertainty Quantification'.
### Uncertainty Quantification
Expanding the folder 'Results' of the Uncertainty Quantification shows the following sub-folders and all the variables defined in the input. In order to facilitate the visualization of the results, the 'Output' perspective can be selected. Alternatively, the 'Output' perspective can also be selected from the menu 'Window' > 'Open Perspective' > 'Output'.
#### Scatter Plots
The following shows the 'Scatter View' and the 'Parallel Coordinates' view:
To visualize all 1000 realizations of the density rho and the Young's modulus E, the variables can be selected with the mouse and dragged to the 'Scatter View':
A scatter of the load P and the tip displacement w_out is shown in the following scatter plot:
#### Parallel Coordinates
The 'Parallel Coordinates View' allows showing correlations between several variables:
#### Table View
Open a new table view by clicking on the icon on the toolbar, and then drag and drop the quantity of interest onto the table.
#### Histogram
Open a new histogram view by clicking on the icon on the toolbar, and then drag and drop the quantity of interest onto the Histogram view.
https://electronics.stackexchange.com/questions/90674/finding-frequency-of-a-three-phase-generator-from-phase-voltages
# Finding frequency of a three-phase generator from phase voltages
I'm trying to find frequency (i.e.; rotations per second) of a balanced three phase generator. Output peak voltage of the generator goes up to 150V.
One solution I found is to attenuate the rectified and filtered versions of this AC with a voltage divider resistor network (Ra, Rb, Rc and Rd), so that the rectified one swings over the filtered one, and then input them to a comparator to extract the frequency information.
I would like to know if there is a better way of doing this.
Note: There is already a bridge rectifier in my circuit for other purposes, and the GND is taken from it as it is seen in the image below. I can't afford changing GND position at this design level.
As the generator gives balanced 3-phase, you can use three identical large resistors, which give you the neutral point, and then find the frequency from the neutral point and one of the phases.
simulate this circuit – Schematic created using CircuitLab
In the previous schematic we divided the $100M\Omega$ resistor into $3M\Omega$ and $97M\Omega$ to reduce the output voltage. With the generator's peak voltage ($150V$), the output voltage on the $3M\Omega$ resistor will have less than $5V$ peak and can easily be handled by an op-amp.
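A quick sanity check of the divider ratio, using the values from the schematic above:

```python
V_PEAK = 150.0            # generator peak voltage [V]
R_TOP, R_BOT = 97e6, 3e6  # 97 MOhm / 3 MOhm divider from the schematic
v_out_peak = V_PEAK * R_BOT / (R_TOP + R_BOT)
print(v_out_peak)  # 4.5 V peak, comfortably below the 5 V limit
```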
Another way:
Of course it is possible to find out frequency of generator from voltage of one line to another. For example from $R$ to $S$.
The waveform of $R-S$ voltage is like this:
This is the plot of $f = \sin(x)-\sin(x-2 \pi / 3)$ (plotted with Fooplot). The period of $f$, which can be found from $f(x)=f(x + T)$, is the same as that of $\sin(x)$. So the frequency of the generator can also be determined from the line-to-line ($l$-$l$) voltage, and a simple zero-crossing detector can help to find it.
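The claim about the period can be checked numerically; the sketch below also shows that the line-to-line amplitude is √3 times the phase amplitude:

```python
import math

def f(x):
    # line-to-line voltage R - S for unit-amplitude balanced phases
    return math.sin(x) - math.sin(x - 2 * math.pi / 3)

xs = [k * 2 * math.pi / 10000 for k in range(10000)]

# same period as sin(x): f(x) == f(x + 2*pi) at every sample
assert all(abs(f(x) - f(x + 2 * math.pi)) < 1e-9 for x in xs)

peak = max(abs(f(x)) for x in xs)
print(round(peak, 4))  # ~1.7321, i.e. sqrt(3)
```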
• how does this idea allow the OP to have a clean signal representative of the power AC waveform referenced to the negative side of his bridge rectifier? I think you also need to point out that there is a safety issue with this method because there is no inherent isolation of the signal and this could be dangerous. Nov 14 '13 at 23:40
• @Andyaka: Thanks. I added the waveform of L-L and safety to my answer. Nov 15 '13 at 4:46
Why don't you just attenuate the voltage on one pair of wires and feed it into an opto-isolator like this: -
This picture was taken from here and the major benefit of using an opto isolator/coupler is that you don't have to have your electronics directly connected to potentially lethal AC voltages. This makes them safe to work on and easier to get working as a prototype.
And if you really want to do it via a bridge rectifier here is a site that has a schematic: -
• @Passerby - the edits you approved were not needed and I have removed them Nov 14 '13 at 23:14
• @Rev1.0 - the edits you approved were not needed and I have removed them. Nov 14 '13 at 23:15
I first decided to directly compare voltage levels of any two different phases with the simple circuit above. In order to see what would happen, I simulated the scenario to see the voltage levels.
Red: Waveform of R with respect to GND.
Magenta: Waveform of S with respect to GND.
It looks OK. But there is a moment at which both phase voltages become zero with respect to the GND. It is not clear what would happen at that moment; any noise at the op-amp input may count as dozens of generator cycles. I gave it a second thought and decided to compare one of the phases with the average of the other two.
Note: C1 and C2 are for noise prevention, because of high resistor values.
In this case, the waveforms of $V_R$ and $\dfrac{V_S + V_T}{2}$ are as seen in the plotting below.
Now there is no indefinite region or moment when comparing signal levels. I am going to implement my circuit like this. I hope it works OK.
(Note: The images of the plots are large enough; just open them in a new tab to see the details and read the text on them.)
MATLAB code for generating these plots:
V_PEAK = 200.0;
FREQ = 100;
PERIOD = 1 / FREQ;
TMIN = 0.0;
TMAX = 3 * PERIOD;
VMIN = -V_PEAK - 10.0;
VMAX = +V_PEAK * sqrt(3) + 10.0;
POINTS_PER = 100000;
POINTS = (TMAX - TMIN) * POINTS_PER;
PHASE_000 = 0 * pi / 180;
PHASE_120 = 120 * pi / 180;
PHASE_240 = 240 * pi / 180;
t = linspace(TMIN, TMAX, POINTS);
V000 = zeros(1, POINTS);
V120 = zeros(1, POINTS);
V240 = zeros(1, POINTS);
VDC = zeros(1, POINTS);
VLINE000 = zeros(1, POINTS);
VLINE120 = zeros(1, POINTS);
VLINE240 = zeros(1, POINTS);
for i = 1 : 1 : POINTS
V000(i) = V_PEAK * sin(2*pi*FREQ*t(i) - PHASE_000);
V120(i) = V_PEAK * sin(2*pi*FREQ*t(i) - PHASE_120);
V240(i) = V_PEAK * sin(2*pi*FREQ*t(i) - PHASE_240);
if ((V000(i) > V120(i)) && (V000(i) > V240(i)))
Vmax = V000(i);
elseif ((V120(i) > V000(i)) && (V120(i) > V240(i)))
Vmax = V120(i);
else
Vmax = V240(i);
end;
if ((V000(i) < V120(i)) && (V000(i) < V240(i)))
Vmin = V000(i);
elseif ((V120(i) < V000(i)) && (V120(i) < V240(i)))
Vmin = V120(i);
else
Vmin = V240(i);
end;
VDC(i) = Vmax - Vmin;
VLINE000(i) = V000(i) - Vmin;
VLINE120(i) = V120(i) - Vmin;
VLINE240(i) = V240(i) - Vmin;
end;
close all;
hFig = figure;
hold on;
set(hFig, 'Position', [1200 50 700 950]);
plot(t, V000, 'Color', [0, 0, 1]);
plot(t, V120, 'Color', [0, 1, 0]);
plot(t, V240, 'Color', [0, 1, 1]);
plot(t, VDC, 'Color', [0, 0, 0]);
plot(t, VLINE000, 'Color', [1, 0, 0]);
plot(t, (VLINE120 + VLINE240) ./ 2, 'Color', [1, 0, 1]);
xlim([TMIN, TMAX]);
ylim([VMIN, VMAX]);
• Would you mind explain why did you compare the R and S? Nov 16 '13 at 12:28
• I didn't compare R and S. I compared R and (S+T)/2 (not exactly, but close to their average), because this method seemed to be a good solution to me. Did you check the plots? In your method, the neutral point is obtained with very high resistor values; sensing a voltage from a high-impedance source gave me some worries. Nov 16 '13 at 12:39
https://r-hyperspec.github.io/hyperSpec/reference/collapse.html
The spectra from all objects will be put into one object. The resulting object has all wavelengths that occur in any of the input objects, wl.tolerance is used to determine which difference in the wavelengths is tolerated as equal: clusters of approximately equal wavelengths will span at most 2 * wl.tolerance. Differences up to +/- wl.tolerance are considered equal.
The returned object has wavelengths that are the weighted average (by number of spectra) of the wavelengths within any such cluster of approximately equal wavelengths.
Labels will be taken from the first object where they are encountered. However, the order of processing objects is not necessarily the same as the order of objects in the input: collapse first processes groups of input objects that share all wavelengths (within wl.tolerance).
Data points corresponding to wavelengths not in the original spectrum will be set to NA. Extra data is combined in the same manner.
If the objects are named, the names will be preserved in the extra data column $.name. If the wavelengths are named, the names are preserved and taken from the first object where they were encountered; the same applies to possible column names of the spectra matrix.
collapse(
...,
wl.tolerance = hy_get_option("wl.tolerance"),
collapse.equal = TRUE
)
## Arguments
...
hyperSpec objects to be collapsed into one object. Instead of giving several arguments, a list with all objects to be collapsed may be given.
wl.tolerance
Tolerance to decide which wavelengths are considered equal.
collapse.equal
Logical indicating whether to try first finding groups of spectra with (approximately) equal wavelength axes. If the data is known to contain few or no such groups, collapse() will be faster with this first pass being turned off.
## Value
A hyperSpec object
## See also
merge(), rbind(), and plyr::rbind.fill()
## Author
C. Beleites
## Examples
barbiturates[1:3]
#> [[1]]
#> hyperSpec object
#> 1 spectra
#> 4 data columns
#> 37 data points / spectrum
#>
#> [[2]]
#> hyperSpec object
#> 1 spectra
#> 4 data columns
#> 34 data points / spectrum
#>
#> [[3]]
#> hyperSpec object
#> 1 spectra
#> 4 data columns
#> 29 data points / spectrum
#>
collapse(barbiturates[1:3])
#> hyperSpec object
#> 3 spectra
#> 4 data columns
#> 57 data points / spectrum
a <- barbiturates[[1]]
b <- barbiturates[[2]]
c <- barbiturates[[3]]
a
#> hyperSpec object
#> 1 spectra
#> 4 data columns
#> 37 data points / spectrum
b
#> hyperSpec object
#> 1 spectra
#> 4 data columns
#> 34 data points / spectrum
c
#> hyperSpec object
#> 1 spectra
#> 4 data columns
#> 29 data points / spectrum
collapse(a, b, c)
#> hyperSpec object
#> 3 spectra
#> 4 data columns
#> 57 data points / spectrum
collapse(barbiturates[1:3], collapse.equal = FALSE)
#> hyperSpec object
#> 3 spectra
#> 4 data columns
#> 57 data points / spectrum
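The wavelength-merging rule described above can be sketched in a few lines of Python. This is an illustration of the rule only, not hyperSpec's actual R implementation:

```python
def merge_wavelengths(objects, tol):
    """objects: list of (wavelength_list, n_spectra) pairs.
    Greedily cluster approximately equal wavelengths -- each cluster
    spans at most 2*tol -- and return the per-cluster wavelength
    averaged with the number of spectra as weight."""
    points = sorted((wl, n) for wls, n in objects for wl in wls)
    clusters = []
    for wl, n in points:
        # join the current cluster if still within its 2*tol span
        if clusters and wl - clusters[-1][0][0] <= 2 * tol:
            clusters[-1].append((wl, n))
        else:
            clusters.append([(wl, n)])
    return [sum(wl * n for wl, n in c) / sum(n for _, n in c)
            for c in clusters]

# 400.0 (3 spectra) and 400.3 (1 spectrum) merge; 402 and 404 stay separate
print(merge_wavelengths([([400.0, 402.0], 3), ([400.3, 404.0], 1)], tol=0.5))
```

Here the merged first wavelength is the weighted average (400.0·3 + 400.3·1)/4 = 400.075.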
https://en.academic.ru/dic.nsf/enwiki/268664
# Frequency mixer
Frequency mixer symbol (figure).
In electronics a mixer or frequency mixer is a nonlinear electrical circuit that creates new frequencies from two signals applied to it. In its most common application, two signals at frequencies f1 and f2 are applied to a mixer, and it produces new signals at the sum f1 + f2 and difference f1 - f2 of the original frequencies. Other frequency components may also be produced in a practical frequency mixer.
Mixers are widely used to shift signals from one frequency range to another, a process known as heterodyning, for convenience in transmission or further signal processing. For example, a key component of a superheterodyne receiver is a mixer used to move received signals to a common intermediate frequency. Frequency mixers are also used to modulate a carrier frequency in radio transmitters.
## Types
Passive mixers use one or more diodes and rely on the non-linear relation between voltage and current to provide the multiplying element. In a passive mixer, the desired output signal is always of lower power than the input signals. Active mixers can increase the strength of the product signal. Active mixers improve isolation between the ports, but may have higher noise and more power consumption; an active mixer can be less tolerant of overload.
Mixers may be built of discrete components, may be part of integrated circuits, or can be delivered as hybrid modules.
Mixers may also be classified by their topology. Unbalanced mixers allow some of the input signal power to pass through to the output. A single-balanced mixer is arranged so that the local oscillator, (or RF) signal port, cancels and cannot pass through to the output. A doubly balanced mixer has symmetrical paths for both inputs, and will have no output if either input signal is not present.
Schematic diagram of a double-balanced passive diode mixer. There is no output unless both f1 and f2 inputs are present.
Selection of a mixer type is a trade off for a particular application. Mixer circuits are characterized by conversion gain, and noise figure.[1] Balanced and double-balanced designs allow less of the input signals to feed through to the output.
Nonlinear electronic components that are used as mixers include diodes, transistors biased near cutoff, and at lower frequencies, analog multipliers. Ferromagnetic-core inductors driven into saturation have also been used. In nonlinear optics, crystals with nonlinear characteristics are used to mix two frequencies of laser light to create optical heterodynes.
### Diode
A diode can be used to create a simple unbalanced mixer. This type of mixer produces the original frequencies as well as their sum and their difference. The importance of the diode is that it is non-linear (or non-Ohmic), which means its response (current) is not proportional to its input (voltage). The diode therefore does not reproduce the frequencies of its driving voltage in the current through it, which allows the desired frequency manipulation. Certain other non-linear devices such as tunnel diodes or Gunn diodes can be utilized similarly.
The current I through an ideal diode as a function of the voltage V across it is given by
$I=I_\mathrm{S} \left( e^{qV_\mathrm{D} \over nkT}-1 \right)$
where what is important is that V appears in e's exponent. The exponential can be expanded as
$e^x = \sum_{n=0}^\infty \frac{x^n}{n!}$
and can be approximated for small x (that is, small voltages) by the first few terms of that series:
$e^x-1\approx x + \frac{x^2}{2}$
Suppose that the sum of the two input signals v1 + v2 is applied to a diode, and that an output voltage is generated that is proportional to the current through the diode (perhaps by providing the voltage that is present across a resistor in series with the diode). Then, disregarding the constants in the diode equation, the output voltage will have the form
$v_\mathrm{o} = (v_1+v_2)+\frac12 (v_1+v_2)^2 + \dots$
The first term on the right is the original two signals, as expected, followed by the square of the sum, which can be rewritten as $(v_1+v_2)^2 = v_1^2 + 2 v_1 v_2 + v_2^2$, where the multiplied signal is obvious. The ellipsis represents all the higher powers of the sum which we assume to be negligible for small signals.
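The appearance of the sum and difference frequencies can be demonstrated numerically. The sketch below pushes two sines through the truncated nonlinearity v + v²/2 and measures the DFT amplitude at f1 + f2 and f1 − f2 (frequencies chosen so both fall on integer bins):

```python
import cmath
import math

fs = N = 256
f1, f2 = 7, 3
t = [n / fs for n in range(N)]
v = [math.sin(2 * math.pi * f1 * x) + math.sin(2 * math.pi * f2 * x) for x in t]
# truncated diode nonlinearity: linear plus quadratic term
vo = [x + 0.5 * x * x for x in v]

def amp(signal, k):
    # single-sided DFT amplitude at integer bin k
    X = sum(s * cmath.exp(-2j * math.pi * k * n / N) for n, s in enumerate(signal))
    return 2 * abs(X) / N

print(round(amp(vo, f1 + f2), 3))  # ~0.5 at the sum frequency (10)
print(round(amp(vo, f1 - f2), 3))  # ~0.5 at the difference frequency (4)
```

The amplitude 1/2 at both products is exactly the coefficient of the v1·v2 cross term in the expansion above.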
### Switching
Another form of mixer operates by switching, with the smaller input signal being passed inverted or uninverted according to the phase of the local oscillator (LO). This would be typical of the normal operating mode of a packaged double balanced mixer module such as an SBL-1, with the local oscillator drive considerably higher than the signal amplitude.
The aim of a switching mixer is to achieve linear operation over the signal level, and hard switching driven by the local oscillator. Mathematically, the switching mixer is not much different from a multiplying mixer: instead of the LO sine-wave term, the signum function is used. In the frequency domain, the switching-mixer operation leads to the usual sum and difference frequencies, but also to further terms, e.g. around ±3·fLO, ±5·fLO, etc. The advantage of a switching mixer is that it can achieve, with the same effort, a lower noise figure (NF) and larger conversion gain. This comes about because the switching diodes or transistors act either like a low resistance (switch closed) or a large resistance (switch open), and in both cases only minimal noise is added. From the circuit perspective, many multiplying mixers can be used as switching mixers simply by increasing the LO amplitude. So RF engineers simply talk about mixers, meaning switching mixers.
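The same DFT experiment with a hard-switching LO (multiplying the signal by the sign of the LO sine) shows not only the products at f_LO ± f_sig but also the predicted terms around 3·f_LO:

```python
import cmath
import math

fs = N = 256
f_sig, f_lo = 2, 16
# half-sample offset keeps the LO zero crossings off the sample grid
t = [(n + 0.5) / fs for n in range(N)]
sig = [math.sin(2 * math.pi * f_sig * x) for x in t]
lo = [1.0 if math.sin(2 * math.pi * f_lo * x) > 0 else -1.0 for x in t]
out = [s * l for s, l in zip(sig, lo)]  # switching-mixer output

def amp(signal, k):
    # single-sided DFT amplitude at integer bin k
    X = sum(s * cmath.exp(-2j * math.pi * k * n / N) for n, s in enumerate(signal))
    return 2 * abs(X) / N

for k in (f_lo - f_sig, f_lo + f_sig, 3 * f_lo - f_sig, 3 * f_lo + f_sig):
    print(k, round(amp(out, k), 2))  # ~0.64 near f_lo, ~0.22 near 3*f_lo
```

The sideband amplitudes follow the square-wave Fourier series: roughly 2/π at f_LO ± f_sig and a third of that around 3·f_LO.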
## Applications
The mixer circuit can be used not only to shift the frequency of an input signal as in a receiver, but also as a product detector, modulator, phase detector or frequency multiplier.[2] For example a communications receiver might contain two mixer stages for conversion of the input signal to an intermediate frequency, and another mixer employed as a detector for demodulation of the signal.
## References
1. ^ D.S. Evans, G. R. Jessop, VHF-UHF Manual, Third Edition, Radio Society of Great Britain, 1976, page 4-12
2. ^ Paul Horowitz, Winfield Hill, The Art of Electronics, Second Edition, Cambridge University Press, 1989, pp. 885-887
Wikimedia Foundation. 2010.
https://zbmath.org/?q=an:0586.35016
## Sur l’équation générale $$u_t=\phi(u)_{xx}-\psi(u)_x+v$$. (On the general equation $$u_t=\phi(u)_{xx}-\psi(u)_x+v$$.) (French) Zbl 0586.35016
We study the equation $$u_t=\phi(u)_{xx}-\psi(u)_x+v$$ assuming only that $$\phi$$ and $$\psi$$ are continuous functions from $${\mathbb{R}}$$ into $${\mathbb{R}}$$ with $$\phi$$ non-decreasing. In particular, we show the continuous dependence of the "mild solution" with respect to $$\phi$$, $$\psi$$, $$v$$, $$u_0$$.
### MSC:
35G20 Nonlinear higher-order PDEs
47H20 Semigroups of nonlinear operators
35D05 Existence of generalized solutions of PDE (MSC2000)
35B30 Dependence of solutions to PDEs on initial and/or boundary data and/or on parameters of PDEs
https://mathoverflow.net/questions/1939/separable-and-finitely-generated-projective-but-not-frobenius
# Separable and finitely generated projective but not Frobenius?
Let R be a commutative ring, and $$A$$ an $$R$$-algebra (possibly non-commutative). Then $$A$$ is separable if it is finitely generated (f.g.) projective as an $$(A \otimes_R A^{\mathrm{op}})$$-module. Suppose further that $$A$$ is f.g. projective as an $$R$$-module. Does this imply that $$A$$ is a (symmetric) Frobenius algebra?
There are lots of equivalent definitions of a Frobenius algebra. One (assuming $$A$$ is a f.g. projective $$R$$-module) is that there exists an $$R$$-linear map $$\mathrm{tr}: A\to R$$ such that $$b(x,y) := \mathrm{tr}(xy)$$ is non-degenerate.
I know that the answer is yes when $$R$$ is a field. What about other rings?
I am not an expert on algebras, but this question is related to understanding obstructions for extended TQFTs, and so I am very interested in knowing anything I can about it.
https://jeremykun.com/tag/mathematics/page/2/
# Searching for RH Counterexamples — Setting up Pytest
Some mathy-programmy people tell me they want to test their code, but struggle to get set up with a testing framework. I suspect it’s due to a mix of:
• There are too many choices with a blank slate.
• Making slightly wrong choices early on causes things to fail in unexpected ways.
I suspect the same concerns apply to general project organization and architecture. Because Python is popular for mathy-programmies, I’ll build a Python project that shows how I organize my projects and test my code, and how that shapes the design and evolution of my software. I will use Python 3.8 and pytest, and you can find the final code on Github.
For this project, we’ll take advice from John Baez and explore a question that glibly aims to disprove the Riemann Hypothesis:
A CHALLENGE:
Let σ(n) be the sum of divisors of n. There are infinitely many n with σ(n)/(n ln(ln(n))) > 1.781. Can you find one? If you can find n > 5040 with σ(n)/(n ln(ln(n))) > 1.782, you’ll have disproved the Riemann Hypothesis.
I don’t expect you can disprove the Riemann Hypothesis this way, but I’d like to see numbers that make σ(n)/(n ln(ln(n)) big. It seems the winners are all multiples of 2520, so try those. The best one between 5040 and a million is n = 10080, which only gives 1.755814.
## Initializing the Project
One of the hardest parts of software is setting up your coding environment. If you use an integrated development environment (IDE), project setup is bespoke to each IDE. I dislike this approach, because what you learn when using the IDE is not useful outside the IDE. When I first learned to program (Java), I was shackled to Eclipse for years because I didn’t know how to compile and run Java programs without it. Instead, we’ll do everything from scratch, using only the terminal/shell and standard Python tools. I will also ignore random extra steps and minutiae I’ve built up over the years to deal with minor issues. If you’re interested in that and why I do them, leave a comment and I might follow up with a second article.
This article assumes you are familiar with the basics of Python syntax, and know how to open a terminal and enter basic commands (like ls, cd, mkdir, rm). Along the way, I will link to specific git commits that show the changes, so that you can see how the project unfolds with each twist and turn.
I’ll start by creating a fresh Python project that does nothing. We set up the base directory riemann-divisor-sum, initialize git, create a readme, and track it in git (git add + git commit).
mkdir riemann-divisor-sum
cd riemann-divisor-sum
git init .
echo "# Divisor Sums for the Riemann Hypothesis" > README.md
Next I create a Github project at https://github.com/j2kun/riemann-divisor-sum (the name riemann-divisor-sum does not need to be the same, but I think it’s good), and push the project up to Github.
git remote add origin [email protected]:j2kun/riemann-divisor-sum.git
# instead of "master", my default branch is really "main"
git push -u origin master
Note, if you’re a new Github user, the “default branch name” when creating a new project may be “master.” I like “main” because it’s shorter, clearer, and nicer. If you want to change your default branch name, you can update to git version 2.28 and add the following to your ~/.gitconfig file.
[init]
defaultBranch = main
Here is what the project looks like on Github as of this single commit.
## Pytest
Next I’ll install the pytest library which will run our project’s tests. First I’ll show what a failing test looks like, by setting up a trivial program with an un-implemented function, and a corresponding test. For ultimate simplicity, we’ll use Python’s built-in assert for the test lines. Here’s the commit.
# in the terminal
mkdir riemann
mkdir tests
# create riemann/divisor.py containing:
'''Compute the sum of divisors of a number.'''
def divisor_sum(n: int) -> int:
raise ValueError("Not implemented.")
# create tests/divisor_test.py containing:
from riemann.divisor import divisor_sum
def test_sum_of_divisors_of_72():
assert 195 == divisor_sum(72)
Next we install and configure Pytest. At this point, since we’re introducing a dependency, we need a project-specific place to store that dependency. All dependencies related to a project should be explicitly declared and isolated. This page helps explain why. Python’s standard tool is the virtual environment. When you “activate” the virtual environment, it temporarily (for the duration of the shell session or until you run deactivate) points all Python tools and libraries to the virtual environment.
virtualenv -p python3.8 venv
source venv/bin/activate
# shows the location of the overridden python binary path
which python
# outputs: /Users/jeremy/riemann-divisor-sum/venv/bin/python
Now we can use pip as normal and it will install to venv. To declare and isolate the dependency, we write the output of pip freeze to a file called requirements.txt, and it can be reinstalled using pip install -r requirements.txt. Try deleting your venv directory, recreating it, and reinstalling the dependencies this way.
pip install pytest
pip freeze > requirements.txt
git commit -m "requirements: add pytest"
# example to wipe and reinstall
# deactivate
# rm -rf venv
# virtualenv -p python3.8 venv
# source venv/bin/activate
# pip install -r requirements.txt
As an aside, at this step you may notice git mentions venv is an untracked directory. You can ignore this, or add venv to a .gitignore file to tell git to ignore it, as in this commit. We will also have to configure pytest to ignore venv shortly.
When we run pytest (with no arguments) from the base directory, we see our first error:
from riemann.divisor import divisor_sum
E ModuleNotFoundError: No module named 'riemann'
Module import issues are a common stumbling block for new Python users. In order to make a directory into a Python module, it needs an __init__.py file, even if it’s empty. Any code in this file will be run the first time the module is imported in a Python runtime. We add one to both the code and test directories in this commit.
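If you haven’t made an `__init__.py` before, a minimal way to create both from the base directory is a one-liner (the `mkdir -p` is a no-op if the directories from earlier already exist; it’s included only so the snippet is self-contained):

```shell
# create empty module marker files for the code and test directories
mkdir -p riemann tests
touch riemann/__init__.py tests/__init__.py
```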
When we run pytest (with no arguments), it recursively searches the directory tree looking for files named like *_test.py and test_*.py, loads them, and treats every method inside those files whose name is prefixed with “test” as a test. Non-“test” methods can be defined and used as helpers to set up complex tests. Pytest then runs the tests, and reports the failures. For me this looks like
Our implementation is intentionally wrong for demonstration purposes. When a test passes, pytest will report it quietly as a “.” by default. See these docs for more info on different ways to run the pytest binary and configure its output report.
In this basic pytest setup, you can put test files wherever you want, name the files and test methods appropriately, and use assert to implement the tests themselves. As long as your modules are set up properly, as long as imports are absolute (see this page for gory details on absolute vs. relative imports), and as long as you run pytest from the base directory, pytest will find the tests and run them.
Since pytest searches all directories for tests, this includes venv and __pycache__, which magically appears when you create python modules (I add __pycache__ to gitignore). Sometimes package developers will include test code, and pytest will then run those tests, which often fail or clutter the output. A virtual environment also gets large as you install big dependencies (like numpy, scipy, pandas), so this makes pytest slow to search for tests to run. To alleviate, the --norecursedirs command line flag tells pytest to skip directories. Since it’s tedious to type --norecursedirs='venv __pycache__' every time you run pytest, you can make this the default behavior by storing the option in a configuration file recognized by pytest, such as setup.cfg. I did it in this commit.
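In case you haven’t used pytest configuration files before, here is a sketch of the relevant setup.cfg stanza (pytest reads its options from a `[tool:pytest]` section; the exact directory list in the commit may differ from this):

```ini
[tool:pytest]
norecursedirs = venv __pycache__
```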
Some other command line options that I use all the time:
• pytest test/dir to test only files in that directory, or pytest test/dir/test_file.py to test only tests in that file.
• pytest -k STR to only run tests whose name contains “STR”
• pytest -s to see any logs or print statements inside tested code
• pytest -s to allow the pdb/ipdb debugger to function and step through a failing test.
## Building up the project
Now let’s build up the project. My general flow is as follows:
1. Decide what work to do next.
2. Sketch out the interface for that work.
3. Write some basic (failing, usually lightweight) tests that will pass when the work is done.
4. Do the work.
5. Add more nuanced tests if needed, based on what is learned during the work.
6. Repeat until the work is done.
This strategy is sometimes called “the design recipe,” and I first heard about it from my undergraduate programming professor John Clements at Cal Poly, via the book “How to Design Programs.” Even if I don’t always use it, I find it’s a useful mental framework for getting things done.
For this project, I want to search through positive integers, and for each one I want to compute a divisor sum, do some other arithmetic, and compare that against some other number. I suspect divisor sum computations will be the hard/interesting part, but to start I will code up a slow/naive implementation with some working tests, confirm my understanding of the end-to-end problem, and then improve the pieces as needed.
In this commit, I implement the naive divisor sum code and tests. Note the commit also shows how to tell pytest to test for a raised exception. In this commit I implement the main search routine and confirm John’s claim about $n=10080$ (thanks for the test case!).
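For concreteness, here is a sketch of the naive pieces just described (the names `divisor_sum` and `witness_value` are mine; the commit’s actual code may differ): a brute-force divisor sum and the quantity $\sigma(n) / (n \ln \ln n)$ from Baez’s challenge.

```python
import math

def divisor_sum(n: int) -> int:
    """Sum of all divisors of n, by trial division (slow on purpose)."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def witness_value(n: int) -> float:
    """The quantity sigma(n) / (n ln ln n) from the challenge."""
    return divisor_sum(n) / (n * math.log(math.log(n)))
```

As a sanity check, `witness_value(10080)` comes out to about 1.7558, matching the value quoted in the challenge.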
These tests already showcase a few testing best practices:
• Test only one behavior at a time. Each test has exactly one assertion in it. This is good practice because when a test fails you won’t have to dig around to figure out exactly what went wrong.
• Use the tests to help you define the interface, and then only test against that interface. The hard part about writing clean and clear software is defining clean and clear interfaces that work together well and hide details. Math does this very well, because definitions like $\sigma(n)$ do not depend on how $n$ is represented. In fact, math really doesn’t have “representations” of its objects—or more precisely, switching representations is basically free, so we don’t dwell on it. In software, we have to choose excruciatingly detailed representations for everything, and so we rely on the software to hide those details as much as possible. The easiest way to tell if you did it well is to try to use the interface and only the interface, and tests are an excuse to do that, which is not wasted effort by virtue of being run to check your work.
## Improving Efficiency
Next, I want to confirm John’s claim that $n=10080$ is the best example between 5041 and a million. However, my existing code is too slow. Running the tests added in this commit seems to take forever.
We profile to confirm our suspected hotspot:
>>> import cProfile
>>> from riemann.counterexample_search import best_witness
>>> cProfile.run('best_witness(10000)')
ncalls tottime percall cumtime percall filename:lineno(function)
...
54826 3.669 0.000 3.669 0.000 divisor.py:10(<genexpr>)
As expected, computing divisor sums is the bottleneck. No surprise there because it makes the search take quadratic time. Before changing the implementation, I want to add a few more tests. I copied data for the first 50 integers from OEIS and used pytest’s parameterize feature since the test bodies are all the same. This commit does it.
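A parameterized test along those lines looks roughly like this (a sketch: the real commit imports `divisor_sum` from `riemann.divisor`; here a naive stand-in is defined inline so the example is self-contained, and the expected values come from OEIS A000203):

```python
import pytest

def divisor_sum(n: int) -> int:
    # stand-in for riemann.divisor.divisor_sum, so this example runs alone
    return sum(d for d in range(1, n + 1) if n % d == 0)

# each tuple becomes its own test case, with its own pass/fail report
@pytest.mark.parametrize("n,expected", [
    (1, 1), (2, 3), (3, 4), (4, 7), (5, 6), (6, 12),
])
def test_divisor_sum(n, expected):
    assert divisor_sum(n) == expected
```

This keeps the one-assertion-per-test principle while avoiding six nearly identical test bodies.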
Now I can work on improving the runtime of the divisor sum computation step. Originally, I thought I’d have to compute the prime factorization to use this trick that exploits the multiplicativity of $\sigma(n)$, but then I found this approach due to Euler in 1751 that provides a recursive formula for the sum and skips the prime factorization. Since we’re searching over all integers, this allows us to trade off the runtime of each $\sigma(n)$ computation against the storage cost of past $\sigma(n)$ computations. I tried it in this commit, using python’s built-in LRU-cache wrapper to memoize the computation. The nice thing about this is that our tests are already there, and the interface for divisor_sum doesn’t change. This is on purpose, so that the caller of divisor_sum (in this case tests, also client code in real life) need not update when we improve the implementation. I also ran into a couple of stumbling blocks implementing the algorithm (I swapped the order of the if statements here), and the tests made it clear I messed up.
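Here is my reconstruction of that memoized recurrence (Euler’s pentagonal-number recurrence for $\sigma(n)$; the commit’s actual implementation may differ in details):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def divisor_sum(n: int) -> int:
    """sigma(n) via Euler's pentagonal-number recurrence, memoized."""
    if n < 1:
        raise ValueError("n must be positive")
    total, k = 0, 1
    while True:
        # generalized pentagonal numbers k(3k-1)/2 and k(3k+1)/2
        for g in (k * (3 * k - 1) // 2, k * (3 * k + 1) // 2):
            if g > n:
                return total
            sign = 1 if k % 2 == 1 else -1
            # when g == n, the recurrence uses n itself in place of sigma(0)
            total += sign * (n if g == n else divisor_sum(n - g))
        k += 1
```

Note that `divisor_sum(n)` recurses on `n - 1`, so computing a large `n` cold hits Python’s recursion limit unless the cache is populated bottom-up first—exactly the encapsulation problem discussed next.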
However, there are two major problems with that implementation.
1. The code is still too slow. best_witness(100000) takes about 50 seconds to run, almost all of which is in divisor_sum.
2. Python hits its recursion depth limit, and so the client code needs to eagerly populate the divisor_sum cache, which violates encapsulation. The caller should not know anything about the implementation, nor need to act in a specific way to accommodate hidden implementation details.
I also realized after implementing it that despite the extra storage space, the runtime is still $O(n^{3/2})$, because each divisor-sum call requires $O(n^{1/2})$ iterations of the loop. This is just as slow as a naive loop that checks divisibility of integers up to $\sqrt{n}$. Also, a naive loop allows me to plug in a cool project called numba that automatically speeds up simple Python code by compiling it in place. Incidentally, numba is known to not work with lru_cache, so I can’t tack it on my existing implementation.
So I added numba as a dependency and drastically simplified the implementation. Now the tests run in 8 seconds, and in a few minutes I can upgrade John’s claim that $n=10080$ is the best example between 5041 and a million, to the best example between 5041 and ten million.
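The simplified $O(\sqrt{n})$ loop looks roughly like this (my sketch; in the post, a plain loop like this one is what gets handed to numba’s `@njit` decorator for compilation—it’s shown here in pure Python so it runs without the dependency):

```python
def divisor_sum(n: int) -> int:
    """sigma(n) by trial division up to sqrt(n); each d pairs with n // d."""
    total = 0
    d = 1
    while d * d <= n:
        if n % d == 0:
            total += d
            if d != n // d:  # avoid double-counting when d == sqrt(n)
                total += n // d
        d += 1
    return total
```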
## Next up
This should get you started with a solid pytest setup for your own project, but there is a lot more to say about how to organize and run tests, what kinds of tests to write, and how that all changes as your project evolves.
For this project, we now know that the divisor-sum computation is the bottleneck. We also know that the interesting parts of this project are yet to come. We want to explore the patterns in what makes these numbers large. One way we could go about this is to split the project into two components: one that builds/manages a database of divisor sums, and another that analyzes the divisor sums in various ways. The next article will show how the database set up works. When we identify relevant patterns, we can modify the search strategy to optimize for that. As far as testing goes, this would prompt us to have an interface layer between the two systems, and to add fakes or mocks to test the components in isolation.
After that, there’s the process of automating test running, adding tests for code quality/style, computing code coverage, adding a type-hint checker test, writing tests that generate other tests, etc.
If you’re interested, let me know which topics to continue with. I do feel a bit silly putting so much pomp and circumstance around such a simple computation, but hopefully the simplicity of the core logic makes the design and testing aspects of the project clearer and easier to understand.
# Taylor Series and Accelerometers
In my book, A Programmer’s Introduction to Mathematics, I describe the Taylor Series as a “hammer for every nail.” I learned about another nail in the design of modern smartphone accelerometers from “Eight Amazing Engineering Stories” by Hammack, Ryan, and Ziech, which I’ll share here.
These accelerometers are designed using a system involving three plates, which correspond to two capacitors. A quick recap on my (limited) understanding of how capacitors work. A capacitor involving two conductive plates looks like this:
The voltage provided by the battery pushes electrons along the negative direction, or equivalently pushing “charge” along the positive direction (see the difference between charge flow and electron flow). These electrons build up in the plate labeled $-Q$, and the difference in charge across the two plates generates an electric field. If that electric field is strong enough, the electrons can jump the gap to the positive plate and complete the circuit. Otherwise, the plate reaches “capacity” and current stops flowing. Whether the jump happens or the current stops depends on the area of the plate $A$, the distance between the plates $d$, and the properties of the material between the plates; the last is called the “dielectric constant” $\varepsilon$. (Nb., I’m not sure why it doesn’t depend on the material the plate is composed of, but I imagine it’s smooshed into the dielectric constant if necessary) This relationship is summarized by the formula
$\displaystyle C = \frac{\varepsilon A}{d}$
Then, an external event can cause the plates to move close enough together so that the electrons can jump the gap and current can begin to flow. This discharges the negatively charged plate.
A naive, Taylor-series-free accelerometer could work as follows:
1. Allow the negatively charged plate to wobble a little bit by fixing just one end of the plate, pictured like a diving board (a cantilever).
2. The amount of wobble will be proportional to the force of acceleration due to Hooke’s law for springs.
3. When displaced by a distance of $\delta$, the capacitance in the plate changes to $C = \frac{\varepsilon A}{d - \delta}$.
4. Use the amount of discharge to tell how much the plate displaced.
This is able to measure the force of acceleration in one dimension, and so three of these devices are arranged in perpendicular axes to allow one to measure acceleration in 3-dimensional space.
The problem with this design is that $C = \frac{\varepsilon A}{d - \delta}$ is a nonlinear change in capacitance with respect to the amount of displacement. To see how nonlinear, expand this as a Taylor series:
\displaystyle \begin{aligned} C &= \frac{\varepsilon A}{d - \delta} \\ &= \frac{\varepsilon A}{d} \left ( \frac{1}{1 - \frac{\delta}{d}} \right ) \\ &= \frac{\varepsilon A}{d} \left ( 1 + \frac{\delta}{d} + \left ( \frac{\delta}{d} \right )^2 + O_{\delta \to 0}(\delta^3) \right ) \end{aligned}
I’m using the big-O notation $O_{\delta \to 0}(\delta^3)$ to more rigorously say that I’m “ignoring” all cubic and higher terms. I can do this because in these engineering systems (I’m taking Hammack at his word here), the quantity $(\delta / d)^2$ is meaningfully large, but later terms like $(\delta / d)^3$ are negligibly small. Of course, this is only true when the displacement $\delta$ is very small compared to $d$, which is why the big-O has a subscript $\delta \to 0$.
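A quick numeric illustration of this nonlinearity (a sketch with $\varepsilon A / d$ normalized to 1, writing $x = \delta/d$): the gap between the exact capacitance and its first-order approximation grows like $x^2$, just as the Taylor series says.

```python
# Exact capacitance (normalized epsilon*A/d = 1) vs. linear approximation.
def exact_capacitance(x):  # x = delta / d
    return 1.0 / (1.0 - x)

def linear_approx(x):
    return 1.0 + x

for x in [0.01, 0.02, 0.04]:
    err = exact_capacitance(x) - linear_approx(x)
    # the error is x^2 / (1 - x) = x^2 + x^3 + ..., dominated by x^2
    assert abs(err - x**2) < 2 * x**3
```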
Apparently, working backwards through the nonlinearity in the capacitance change is difficult enough to warrant changing the design of the system. (I don’t know why this is difficult, but I imagine it has to do with the engineering constraints of measurement devices; please do chime in if you know more)
The system design that avoids this is a three-plate system instead of a two-plate system.
In this system, the middle plate moves back and forth between two stationary plates that are connected to a voltage source. As it moves away from one and closer to the other, the increased capacitance on one side is balanced by the decreased capacitance on the other. The Taylor series shows how these two changes cancel out on the squared term only.
If $C_1 = \frac{\varepsilon A}{d - \delta}$ represents the changed capacitance of the left plate (the middle plate moves closer to it), and $C_2 = \frac{\varepsilon A}{d + \delta}$ represents the right plate (the middle plate moves farther from it), then we expand the difference in capacitances via Taylor series (using the Taylor series for $1/(1-x)$ for both, but in the $1 + \delta/d$ case it’s $1 / (1 - (-x))$).
\displaystyle \begin{aligned} C_1 - C_2 &= \frac{\varepsilon A}{d - \delta} - \frac{\varepsilon A}{d + \delta} \\ &= \frac{\varepsilon A}{d} \left ( \frac{1}{1 - \frac{\delta}{d}} - \frac{1}{1 + \frac{\delta}{d}} \right ) \\ &= \frac{\varepsilon A}{d} \left ( 1 + \frac{\delta}{d} + \left ( \frac{\delta}{d} \right )^2 + O_{\delta \to 0}(\delta^3) - 1 + \frac{\delta}{d} - \left ( \frac{\delta}{d} \right )^2 + O_{\delta \to 0}(\delta^3) \right ) \\ &= \frac{\varepsilon A}{d} \left ( \frac{2\delta}{d} + O_{\delta \to 0}(\delta^3) \right ) \end{aligned}
Again, since the cubic and higher terms are negligibly small, we can “ignore” those parts. What remains is a linear response to the change in the middle plate’s displacement. This makes it significantly easier to measure. Because we’re measuring the difference in capacitance, this design is called a “differential capacitor.”
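The cancellation is easy to check numerically (again a sketch with $\varepsilon A / d$ normalized to 1 and $x = \delta/d$): each plate’s capacitance responds nonlinearly, but their difference tracks $2x$ up to a cubic error.

```python
# Differential capacitor response (normalized epsilon*A/d = 1, x = delta/d).
def c1(x):
    return 1.0 / (1.0 - x)  # plate the middle plate moves toward

def c2(x):
    return 1.0 / (1.0 + x)  # plate the middle plate moves away from

for x in [0.01, 0.02, 0.04]:
    diff = c1(x) - c2(x)
    # exact difference is 2x / (1 - x^2) = 2x + 2x^3 + ...
    assert abs(diff - 2 * x) < 3 * x**3
```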
Though the math is tidy in retrospect, I marvel at how one might have conceived of this design from scratch. Did the inventor notice the symmetries in the Taylor series approximations could be arranged to negate each other? Was there some other sort of “physical intuition” at play?
Until next time!
# Silent Duels—Parsing the Construction
Last time we discussed the setup for the silent duel problem: two players taking actions in $[0,1]$, player 1 gets $n$ chances to act, player 2 gets $m$, and each knows their probability of success when they act.
The solution is in a paper of Rodrigo Restrepo from the 1950s. In this post I’ll start detailing how I study this paper, and talk through my thought process for approaching a bag of theorems and proofs. If you want to follow along, I re-typeset the paper on Github.
## Game Theory Basics
The Introduction starts with a summary of the setting of game theory. I remember most of this so I will just summarize the basics of the field. Skip ahead if you already know what the minimax theorem is, and what I mean when I say the “value” of a game.
A two-player game consists of a set of actions for each player—which may be finite or infinite, and need not be the same for both players—and a payoff function for each possible choice of actions. The payoff function is interpreted as the “utility” that player 1 gains and player 2 loses. If the payoff is negative, you interpret it as player 1 losing utility to player 2. Utility is just a fancy way of picking a common set of units for what each player treasures in their heart of hearts. Often it’s stated as money and we assume both players value cash the same way. Games in which the utility is always “one player gains exactly the utility lost by the other player” are called zero-sum.
With a finite set of actions, the payoff function is a table. For rock-paper-scissors the table is:
Rock, paper: -1
Rock, scissors: 1
Rock, rock: 0
Paper, paper: 0
Paper, scissors: -1
Paper, rock: 1
Scissors, paper: 1
Scissors, scissors: 0
Scissors, rock: -1
You could arrange this in a matrix and analyze the structure of the matrix, but we won’t. It doesn’t apply to our forthcoming setting where the players have infinitely many strategies.
A strategy is a possibly-randomized algorithm (whose inputs are just the data of the game, not including any past history of play) that outputs an action. In some games, the optimal strategy is to choose a single action no matter what your opponent does. This is sometimes called a pure, dominating strategy, not because it dominates your opponent, but because it’s better than all of your other options no matter what your opponent does. The output action is deterministic.
However, as with rock-paper-scissors, the optimal strategy for most interesting games requires each player to act randomly according to a fixed distribution. Such strategies are called mixed or randomized. For rock-paper-scissors, the optimal strategy is to choose rock, paper, and scissors with equal probability. Computers are only better than humans at rock-paper-scissors because humans are bad at behaving consistently and uniformly random.
The famous minimax theorem says that every two-player zero-sum game has an optimal strategy for each player, which is possibly randomized. This strategy is optimal in the sense that it maximizes your expected winnings no matter what your opponent does. However, if your opponent is playing a particularly suboptimal strategy, the minimax solution might not be as good as a solution that takes advantage of the opponent’s dumb choices. A uniform random rock-paper-scissors strategy is not optimal if your opponent always plays “rock.” However, the optimal strategy doesn’t need special knowledge or space to store information about past play. If you played against God, you would blindly use the minimax strategy and God would have no upper hand. I wonder if the pope would have excommunicated me for saying that in the 1600’s.
The expected winnings for player 1 when both players play a minimax-optimal strategy is called the value of the game, and this number is unique (even if there are possibly multiple optimal strategies). If a game is symmetric—meaning both players have the same actions and the payoff function is symmetric—then the value is guaranteed to be zero. The game is fair.
The version of the minimax theorem that most people use (in particular, the version that often comes up in theoretical computer science) shows that finding an optimal strategy is equivalent to solving a linear program. This is great because it means that any such (finite) game is easy to solve. You don’t need insight; just compile and run. The minimax theorem is also true for sufficiently well-behaved continuous action spaces. The silent duel is well-behaved, so our goal is to compute an explicit, easy-to-implement strategy that the minimax theorem guarantees exists. As a side note, here is an example of a poorly-behaved game with no minimax optimum.
While the minimax theorem guarantees optimal strategies and a value, the concept of the “value” of the game has an independent definition:
Let $X, Y$ be finite sets of actions for players 1, 2 respectively, and $p(x), q(y)$ be strategies, i.e., probability distributions over $X$ and $Y$ so that $p(x)$ is the probability that $x$ is chosen. Let $\Psi(x, y)$ be the payoff function for the game. The value of the game is a real number $v$ such that there exist two strategies $p, q$ with the two following properties. First, for every fixed $y \in Y$,
$\displaystyle \sum_{x \in X} p(x) \Psi(x, y) \geq v$
(no matter what player 2 does, player 1’s strategy guarantees at least $v$ payoff), and for every fixed $x \in X$,
$\displaystyle \sum_{y \in Y} q(y) \Psi(x, y) \leq v$
(no matter what player 1 does, player 2’s strategy prevents a loss of more than $v$).
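To tie the definition back to rock-paper-scissors, here is a small check (my own sketch, not from the paper) that the uniform mixed strategy achieves the value $v = 0$: its expected payoff against every pure strategy of the opponent is exactly zero.

```python
# Rock-paper-scissors payoff table from above: player 1's payoff.
ACTIONS = ["rock", "paper", "scissors"]
PAYOFF = {
    ("rock", "rock"): 0, ("rock", "paper"): -1, ("rock", "scissors"): 1,
    ("paper", "rock"): 1, ("paper", "paper"): 0, ("paper", "scissors"): -1,
    ("scissors", "rock"): -1, ("scissors", "paper"): 1, ("scissors", "scissors"): 0,
}

def expected_payoff(p, y):
    """Player 1's expected payoff playing mixed strategy p against pure y."""
    return sum(p[x] * PAYOFF[(x, y)] for x in ACTIONS)

uniform = {x: 1 / 3 for x in ACTIONS}
```

Summing `expected_payoff(uniform, y)` over each pure `y` gives zero every time, which is the first inequality in the definition with $v = 0$ (and by symmetry the second holds too).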
Since silent duels are continuous, Restrepo opens the paper with the corresponding definition for continuous games. Here a probability distribution is the same thing as a “positive measure with total measure 1.” Restrepo uses $F$ and $G$ for the strategies, and the corresponding statement of expected payoff for player 1 is that, for all fixed actions $y \in Y$,
$\displaystyle \int \Psi(x, y) dF(x) \geq v$
And likewise, for all $x \in X$,
$\displaystyle \int \Psi(x, y) dG(y) \leq v$
All of this background gets us through the very first paragraph of the Restrepo paper. As I elaborate in my book, this is par for the course for math papers, because written math is optimized for experts already steeped in the context. Restrepo assumes the reader knows basic game theory so we can get on to the details of his construction, at which point he slows down considerably to focus on the details.
## Description of the Optimal Strategies
Starting in section 2, Restrepo describes the construction of the optimal strategy, but first he explains the formal details of the setting of the game. We already know the two players are taking $n$ and $m$ actions between $0 \leq t \leq 1$, but we also fix the probability of success. Player 1 knows a distribution $P(t)$ on $[0,1]$ for which $P(t)$ is the probability of success when acting at time $t$. Likewise, player 2 has a possibly different distribution $Q(t)$, and (crucially) $P(t), Q(t)$ both increase continuously on $[0,1]$. (In section 3 he clarifies further that $P$ satisfies $P(0) = 0, P(1) = 1$, and $P'(t) > 0$, likewise for $Q(t)$.) Moreover, both players know both $P, Q$. One could say that each player has an estimate of their opponent’s firing accuracy, and wants to be optimal compared to that estimate.
The payoff function $\Psi(x, y)$ is defined informally as: 1 if Player one succeeds before Player 2, -1 if Player 2 succeeds first, and 0 if both players exhaust their actions before the end and none succeed. Though Restrepo does not state it, if the players act and succeed at the same time—say both players fire at time $t=1$—the payoff should also be zero. We’ll see how this is converted to a more formal (and cumbersome!) mathematical definition in a future post.
Next we’ll describe the statement of the fully general optimal strategy (which will be essentially meaningless, but have some notable features we can infer information from), and get a sneak peek at how to build this strategy algorithmically. Then we’ll see a simplified example of the optimal strategy.
The optimal strategy presented depends only on the values $n, m$ (the number of actions each player gets) and their success probability distributions $P, Q$. For player 1, the strategy splits up $[0,1]$ into subintervals
$\displaystyle [a_i, a_{i+1}] \qquad 0 < a_1 < a_2 < \cdots < a_n < a_{n+1} = 1$
Crucially, this strategy ignores the initial interval $[0, a_1]$. In each other subinterval Player 1 attempts an action at a time chosen by a probability distribution specific to that interval, independently of previous attempts. But no matter what, there is some initial wait time during which no action will ever be taken. This makes sense: if player 1 fired at time 0, it is a guaranteed wasted shot. Likewise, firing at time 0.000001 is basically wasted (due to continuity, unless $P(t)$ is obnoxiously steep early on).
Likewise for player 2, the optimal strategy is determined by numbers $b_1, \dots, b_m$ resulting in $m$ intervals $[b_j, b_{j+1}]$ with $b_{m+1} = 1$.
The difficult part of the construction is describing the distributions dictating when a player should act during an interval. It’s difficult because an interval for player 1 and player 2 can overlap partially. Maybe $a_2 = 0.5, a_3 = 0.75$ and $b_1 = 0.25, b_2 = 0.6$. Player 1 knows that Player 2 (using their corresponding minimax strategy) must act before time $t = 0.6$, and gets another chance after that time. This suggests that the distribution determining when Player 1 should act within $[a_2, a_3]$ may have a discontinuous jump at $t = 0.6$.
Call $F_i$ the distribution for Player 1 to act in the interval $[a_i, a_{i+1}]$. Since it is a continuous distribution, Restrepo uses $F_i$ for the cumulative distribution function and $dF_i$ for the probability density function. Then these functions are defined by (note this should be mostly meaningless for the moment)
$\displaystyle dF_i(x_i) = \begin{cases} h_i f^*(x_i) dx_i & \textup{ if } a_i < x_i < a_{i+1} \\ 0 & \textup{ if } x_i \not \in [a_i, a_{i+1}] \\ \end{cases}$
where $f^*$ is defined as
$\displaystyle f^*(t) = \prod_{b_j > t} \left [ 1 - Q(b_j) \right ] \frac{Q'(t)}{Q^2(t) P(t)}.$
The constants $h_i$ and $h_{i+1}$ are related by the equation
$\displaystyle h_i = [1 - D_i] h_{i+1},$
where
$\displaystyle D_i = \int_{a_i}^{a_{i+1}} P(t) dF_i(t)$
What can we glean from this mashup of symbols? The first is that (obviously) the distribution is zero outside the interval $[a_i, a_{i+1}]$. Within it, there is this mysterious $h_i$ that is related to the $h_{i+1}$ used to define the next interval’s probability. This suggests we will likely build up the strategy in reverse starting with $F_n$ as the “base case” (if $n=1$, then it is the only one).
Next, we notice the curious definition of $f^*$. It unsurprisingly requires knowledge of both $P$ and $Q$, but the coefficient is strangely chosen: it’s a product over all failure probabilities ($1 - Q(b_j)$) of all interval-starts happening later for the opponent.
[Side note: it’s very important that this is a constant; when I first read this, I thought that it was $\prod_{b_j > t}[1 - Q(t)]$, which makes the eventual task of integrating $f^*$ much harder.]
Finally, the last interval (the one ending at $t=1$) may include the option to simply “wait for a guaranteed hit,” which Restrepo calls a “discrete mass of $\alpha$ at $t=1$.” That is, $F_n$ may have a different representation than the rest. Indeed, at the end of the paper we will find that Restrepo gives a base-case definition for $h_n$ that allows us to bootstrap the construction.
Player 2’s strategy is the same as Player 1’s, but replacing the roles of $P, Q, n, m, a_i, b_j$ in the obvious way.
## The symmetric example
As with most math research, the best way to parse a complicated definition or construction is to simplify the different aspects of the problem until they become tractable. One way to do this is to have only a single action for both players, with $P = Q$. Restrepo provides a more general example to demonstrate, which results in the five most helpful lines in the paper. I’ll reproduce them here verbatim:
EXAMPLE. Symmetric Game: $P(t) = Q(t),$ and $n = m$. In this case the two
players have the same optimal strategies; $\alpha = 0$, and $a_k = b_k, k=1, \dots, n$. Furthermore
\displaystyle \begin{aligned} P(a_{n-k}) &= \frac{1}{2k+3} & k = 0, 1, \dots, n-1, \\ dF_{n-k}(t) &= \frac{1}{4(k+1)} \frac{P'(t)}{P^3(t)} dt & a_{n-k} < t < a_{n-k+1}. \end{aligned}
Saying $\alpha = 0$ means there is no “wait until $t=1$ to guarantee a hit”, which makes intuitive sense. You’d only want to do that if your opponent has exhausted all their actions before the end, which is only likely to happen if they have fewer actions than you do.
When Restrepo writes $P(a_{n-k}) = \frac{1}{2k+3}$, there are a few things happening. First, we confirm that we’re working backwards from $a_n$. Second, he’s implicitly saying “choose $a_{n-k}$ such that $P(a_{n-k})$ has the desired cumulative density.” After a bit of reflection, there’s no other way to specify the $a_i$ except implicitly: we don’t have a formula for $P$ to lean on.
Finally, the definition of the density function $dF_{n-k}(t)$ helps us understand under what conditions the probability function would be increasing or decreasing from the start of the interval to the end. Looking at the expression $P'(t) / P^3(t)$, we can see that polynomials will result in an expression dominated by $1/t^k$ for some $k$, which is decreasing. By taking the derivative, an increasing density would have to be built from a $P$ satisfying $P''(t) P(t) - 3(P'(t))^2 > 0$. However, I wasn’t able to find any examples that satisfy this. Polynomials, square roots, logs and exponentials, all seem to result in decreasing density functions.
Finally, we’ll plot two examples. The first is the most reductive: $P(t) = Q(t) = t$, and $n = m = 1$. In this case $n=1$, and there is only one term $k=0$, for which $a_n = 1/3$. Then $dF_1(t) = 1/4t^3$. (For verification, note the integral of $dF_1$ on $[1/3, 1]$ is indeed 1).
With just one action and P(t) = Q(t) = t, the region before t=1/3 has zero probability, and the probability decreases from 6.75 to 1/4.
Note that the reason $a_n = 1/3$ is so nice is that $P(t)$ is so simple. If $P(t)$ were, say, $t^2$, then $a_n$ should shift to being $\sqrt{1/3}$. If $P(t)$ were more complicated, we’d have to invert it (or use an approximate search) to find the location $a_n$ for which $P(a_n) = 1/3$.
Next, we loosen the example to let $n=m=4$, still with $P(t) = Q(t) = t$. In this case, we have the same final interval $[1/3,1]$. The new actions all occur in the time before $t=1/3$, in the intervals $[1/5, 1/3], [1/7, 1/5], [1/9,1/7].$ If there were more actions, we’d get smaller inverse-of-odd-spaced intervals approaching zero. The probability densities are now steeper versions of the same $1/4t^3$, with the constant getting smaller to compensate for the fact that $1/t^3$ gets larger and maintain the normalized distribution. For example, the earliest interval results in $\int_{1/9}^{1/7} \frac{1}{16t^3} dt = 1$. Closer to zero the densities are somewhat shallower compared to the size of the interval; for example in $[1/9, 1/7],$ the density toward the beginning of the interval is only about twice as large as the density toward the end.
The combination of the four F_i’s for the four intervals in which actions are taken. This is a complete description of the optimal strategy for our simple symmetric version of the silent duel.
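The claims above are easy to unit-test numerically. Here is a small check (my own snippet, not from the paper): for $P(t) = t$ and $n = 4$, it integrates each density $dF_{n-k}(t) = \frac{1}{4(k+1)t^3}$ over its interval $[a_{n-k}, a_{n-k+1}]$ and confirms each mass is 1.

```javascript
// Sanity check of the symmetric example with P(t) = t and n = 4.
// Interval endpoints: a_{n-k} = 1/(2k+3); the last interval ends at t = 1.
function integrate(f, lo, hi, steps = 100000) {
  const dt = (hi - lo) / steps;
  let sum = 0;
  for (let i = 0; i < steps; i++) {
    sum += f(lo + (i + 0.5) * dt) * dt; // midpoint rule
  }
  return sum;
}

const n = 4;
const masses = [];
for (let k = 0; k < n; k++) {
  const lo = 1 / (2 * k + 3);                // a_{n-k}
  const hi = k === 0 ? 1 : 1 / (2 * k + 1);  // a_{n-k+1}
  masses.push(integrate(t => 1 / (4 * (k + 1) * t ** 3), lo, hi));
}
// masses ≈ [1, 1, 1, 1]
```

Each density integrates to 1 on its interval, confirming the normalization claimed above.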
Since the early intervals are getting smaller and smaller as we add more actions, the optimal strategy will resemble a burst of action at the beginning, gradually tapering off as the accuracy increases and we work through our budget. This is an explicit tradeoff between the value of winning (lots of early, low probability attempts) and keeping some actions around for the end where you’re likely to succeed.
## Next step: get to the example from the general theorem
At this point, we’ve parsed the general statement of the theorem, and while much of it is still mysterious, we extracted some useful qualitative information from the statement, and tinkered with some simple examples.
At this point, I have confidence that the simple symmetric example Restrepo provided is correct; it passed some basic unit tests, like that each $dF_i$ is normalized. My next task in fully understanding the paper is to be able to derive the symmetric example from the general construction. We’ll do this next time, and include a program that constructs the optimal solution for any input.
Until then!
# Visualizing an Assassin Puzzle
Over at Math3ma, Tai-Danae Bradley shared the following puzzle, which she also featured in a fantastic (spoiler-free) YouTube video. If you’re seeing this for the first time, watch the video first.
Consider a square in the xy-plane, and let A (an “assassin”) and T (a “target”) be two arbitrary-but-fixed points within the square. Suppose that the square behaves like a billiard table, so that any ray (a.k.a “shot”) from the assassin will bounce off the sides of the square, with the angle of incidence equaling the angle of reflection.
Puzzle: Is it possible to block any possible shot from A to T by placing a finite number of points in the square?
This puzzle found its way to me through Tai-Danae’s video, via category theorist Emily Riehl, via a talk by the recently deceased Fields Medalist Maryam Mirzakhani, who studied the problem in more generality. I’m not familiar with her work, but knowing mathematicians it’s probably set in an arbitrary complex $n$-manifold.
See Tai-Danae’s post for a proof, which left such an impression on me I had to dig deeper. In this post I’ll discuss a visualization I made—now posted at the end of Tai-Danae’s article—as well as here and below (to avoid spoilers). In the visualization, mouse movement chooses the firing direction for the assassin, and the target is in green. Dragging the target with the mouse updates the position of the guards. The source code is on Github.
## Outline
The visualization uses the d3 library, which was made for visualizations that dynamically update with data. I use it because it can draw SVGs real nice.
The meat of the visualization is in two geometric functions.
1. Decompose a ray into a series of line segments—its path as it bounces off the walls—stopping if it intersects any of the points in the plane.
2. Compute the optimal position of the guards, given the boundary square and the positions of the assassin and target.
Both of these functions, along with all the geometry that supports them, are in geometry.js. The rest of the demo is defined in main.js, in which I oafishly trample over d3 best practices to arrive miraculously at a working product. Critiques welcome 🙂
As with most programming and software problems, the key to implementing these functions while maintaining your sanity is breaking it down into manageable pieces. Incrementalism is your friend.
## Vectors, rays, rectangles, and ray splitting
We start at the bottom with a Vector class with helpful methods for adding, scaling, and computing norms and inner products.
function innerProduct(a, b) {
return a.x * b.x + a.y * b.y;
}
class Vector {
constructor(x, y) {
this.x = x;
this.y = y;
}
normalized() { ... }
norm() { ... }
subtract(vector) { ... }
scale(length) { ... }
distance(vector) { ... }
midpoint(b) { ... }
}
This allows one to compute the distance between two points, e.g., with vector.subtract(otherVector).norm().
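The elided method bodies are straightforward. Here is a guess at what they look like (the real implementations live in geometry.js; the bodies below are my own reconstruction):

```javascript
function innerProduct(a, b) {
  return a.x * b.x + a.y * b.y;
}

class Vector {
  constructor(x, y) {
    this.x = x;
    this.y = y;
  }
  norm() { return Math.sqrt(this.x * this.x + this.y * this.y); }
  normalized() {
    let n = this.norm();
    return new Vector(this.x / n, this.y / n);
  }
  subtract(v) { return new Vector(this.x - v.x, this.y - v.y); }
  scale(c) { return new Vector(this.x * c, this.y * c); }
  distance(v) { return this.subtract(v).norm(); }
  midpoint(b) { return new Vector((this.x + b.x) / 2, (this.y + b.y) / 2); }
}
```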
Next we define a class for a ray, which is represented by its center (a vector) and a direction (a vector).
class Ray {
constructor(center, direction, length=100000) {
this.center = center;
this.length = length;
if (direction.x == 0 && direction.y == 0) {
throw "Can't have zero direction";
}
this.direction = direction.normalized();
}
endpoint() {
// the far end of the finite ray: center + length * direction
let offset = this.direction.scale(this.length);
return new Vector(this.center.x + offset.x, this.center.y + offset.y);
}
intersects(point) {
let shiftedPoint = point.subtract(this.center);
let signedLength = innerProduct(shiftedPoint, this.direction);
let projectedVector = this.direction.scale(signedLength);
let differenceVector = shiftedPoint.subtract(projectedVector);
if (signedLength > 0
&& this.length > signedLength
&& differenceVector.norm() < intersectionRadius) {
// the intersection point: the projection of the point onto the ray
return new Vector(this.center.x + projectedVector.x,
this.center.y + projectedVector.y);
} else {
return null;
}
}
}
The ray must be finite for us to draw it, but the length we've chosen is so large that, as you can see in the visualization, it's effectively infinite. Feel free to scale it up even longer.
The interesting bit is the intersection function. We want to compute whether a ray intersects a point. To do this, we use the inner product as a decision rule to compute the distance of a point from a line. If that distance is very small, we say they intersect.
In our demo points are not infinitesimal, but rather have a small radius described by intersectionRadius. For the sake of being able to see anything we set this to 3 pixels. If it’s too small the demo will look bad. The ray won’t stop when it should appear to stop, and it can appear to hit the target when it doesn’t.
Next up we have a class for a Rectangle, which is where the magic happens. The boilerplate and helper methods:
class Rectangle {
constructor(bottomLeft, topRight) {
this.bottomLeft = bottomLeft;
this.topRight = topRight;
}
topLeft() { ... }
center() { ... }
width() { ... }
height() { ... }
contains(vector) { ... }
The function rayToPoints that splits a ray into line segments from bouncing depends on three helper functions:
1. rayIntersection: Compute the intersection point of a ray with the rectangle.
2. isOnVerticalWall: Determine if a point is on a vertical or horizontal wall of the rectangle, raising an error if neither.
3. splitRay: Split a ray into a line segment and a shorter ray that’s “bounced” off the wall of the rectangle.
(2) is trivial, computing some x- and y-coordinate distances up to some error tolerance. (1) involves parameterizing the ray and checking one of four inequalities. If the bottom left of the rectangle is $(x_1, y_1)$ and the top right is $(x_2, y_2)$ and the ray is written as $\{ (c_1 + t v_1, c_2 + t v_2) \mid t > 0 \}$, then—with some elbow grease—the following four equations provide all possibilities, with some special cases for vertical or horizontal rays:
\displaystyle \begin{aligned} c_2 + t v_2 &= y_2 & \textup{ and } \hspace{2mm} & x_1 \leq c_1 + t v_1 \leq x_2 & \textup{ (intersects top)} \\ c_2 + t v_2 &= y_1 & \textup{ and } \hspace{2mm} & x_1 \leq c_1 + t v_1 \leq x_2 & \textup{ (intersects bottom)} \\ c_1 + t v_1 &= x_1 & \textup{ and } \hspace{2mm} & y_1 \leq c_2 + t v_2 \leq y_2 & \textup{ (intersects left)} \\ c_1 + t v_1 &= x_2 & \textup{ and } \hspace{2mm} & y_1 \leq c_2 + t v_2 \leq y_2 & \textup{ (intersects right)} \\ \end{aligned}
In code:
rayIntersection(ray) {
let c1 = ray.center.x;
let c2 = ray.center.y;
let v1 = ray.direction.x;
let v2 = ray.direction.y;
let x1 = this.bottomLeft.x;
let y1 = this.bottomLeft.y;
let x2 = this.topRight.x;
let y2 = this.topRight.y;
// ray is vertically up or down
if (epsilon > Math.abs(v1)) {
return new Vector(c1, (v2 > 0 ? y2 : y1));
}
// ray is horizontally left or right
if (epsilon > Math.abs(v2)) {
return new Vector((v1 > 0 ? x2 : x1), c2);
}
let tTop = (y2 - c2) / v2;
let tBottom = (y1 - c2) / v2;
let tLeft = (x1 - c1) / v1;
let tRight = (x2 - c1) / v1;
// Exactly one t value should be both positive and result in a point
// within the rectangle
let tValues = [tTop, tBottom, tLeft, tRight];
for (let i = 0; i < tValues.length; i++) {
let t = tValues[i];
let intersection = new Vector(c1 + t * v1, c2 + t * v2);
if (t > epsilon && this.contains(intersection)) {
return intersection;
}
}
throw "Unexpected error: ray never intersects rectangle!";
}
Next, splitRay splits a ray into a single line segment and the “remaining” ray, by computing the ray’s intersection with the rectangle, and having the “remaining” ray mirror the direction of approach with a new center that lies on the wall of the rectangle. The new ray length is appropriately shorter. If we run out of ray length, we simply return a segment with a null ray.
splitRay(ray) {
let segment = [ray.center, this.rayIntersection(ray)];
let segmentLength = segment[0].subtract(segment[1]).norm();
let remainingLength = ray.length - segmentLength;
if (remainingLength < 10) {
return {
segment: [ray.center, ray.endpoint()],
ray: null
};
}
let vertical = this.isOnVerticalWall(segment[1]);
let newRayDirection = null;
if (vertical) {
newRayDirection = new Vector(-ray.direction.x, ray.direction.y);
} else {
newRayDirection = new Vector(ray.direction.x, -ray.direction.y);
}
let newRay = new Ray(segment[1], newRayDirection, remainingLength);
return {
segment: segment,
ray: newRay
};
}
As you have probably guessed, rayToPoints simply calls splitRay over and over again until the ray hits an input “stopping point”—a guard, the target, or the assassin—or else our finite ray length has been exhausted. The output is a list of points, starting from the original ray’s center, for which adjacent pairs are interpreted as line segments to draw.
rayToPoints(ray, stoppingPoints) {
let points = [ray.center];
let remainingRay = ray;
while (remainingRay) {
// check if the ray would hit any guards or the target
if (stoppingPoints) {
let hardStops = stoppingPoints.map(p => remainingRay.intersects(p))
.filter(p => p != null);
if (hardStops.length > 0) {
// find first intersection and break
let closestStop = remainingRay.closestToCenter(hardStops);
points.push(closestStop);
break;
}
}
let rayPieces = this.splitRay(remainingRay);
points.push(rayPieces.segment[1]);
remainingRay = rayPieces.ray;
}
return points;
}
That’s sufficient to draw the shot emanating from the assassin. This method is called every time the mouse moves.
## Optimal guards
The function to compute the optimal position of the guards takes as input the containing rectangle, the assassin, and the target, and produces as output a list of 16 points.
/*
* Compute the 16 optimal guards to prevent the assassin from hitting the
* target.
*/
function computeOptimalGuards(square, assassin, target) {
...
}
If you read Tai-Danae’s proof, you’ll know that this construction is to
1. Compute mirrors of the target across the top, the right, and the top+right of the rectangle. Call this resulting thing the 4-mirrored-targets.
2. Replicate the 4-mirrored-targets four times, by translating three of the copies left by the entire width of the 4-mirrored-targets shape, down by the entire height, and both left-and-down.
3. Now you have 16 copies of the target, and one assassin. This gives 16 line segments from assassin-to-target-copy. Place a guard at the midpoint of each of these line segments.
4. Finally, apply the reverse translation and reverse mirroring to return the guards to the original square.
Due to WordPress being a crappy blogging platform I need to migrate off of, the code snippets below have been magically disappearing. I’ve included links to github lines as well.
Step 1 (after adding simple helper functions on Rectangle to do the mirroring):
// First compute the target copies in the 4 mirrors
let target1 = target.copy();
let target2 = square.mirrorTop(target);
let target3 = square.mirrorRight(target);
let target4 = square.mirrorTop(square.mirrorRight(target));
target1.guardLabel = 1;
target2.guardLabel = 2;
target3.guardLabel = 3;
target4.guardLabel = 4;
// for each mirrored target, compute the four two-square-length translates
let mirroredTargets = [target1, target2, target3, target4];
let horizontalShift = 2 * square.width();
let verticalShift = 2 * square.height();
let translateLeft = new Vector(-horizontalShift, 0);
let translateRight = new Vector(horizontalShift, 0);
let translateUp = new Vector(0, verticalShift);
let translateDown = new Vector(0, -verticalShift);
let translatedTargets = [];
for (let i = 0; i < mirroredTargets.length; i++) {
let target = mirroredTargets[i];
translatedTargets.push([
target,
target.add(translateLeft),                    // assumes a Vector.add helper
target.add(translateDown),
target.add(translateLeft).add(translateDown),
]);
}
Step 3, computing the midpoints:
// compute the midpoints between the assassin and each translate
let translatedMidpoints = [];
for (let i = 0; i < translatedTargets.length; i++) {
translatedMidpoints.push(translatedTargets[i].map(t => t.midpoint(assassin)));
}
Step 4, returning the guards back to the original square, is harder than it seems, because the midpoint of an assassin-to-target-copy segment might not be in the same copy of the square as the target-copy being fired at. This means you have to detect which square copy the midpoint lands in, and use that to determine which operations are required to invert. This results in the final block of this massive function.
// determine which of the four possible translates the midpoint is in
// and reverse the translation. Since midpoints can end up in completely
// different copies of the square, we have to check each one for all cases.
function untranslate(point) {
if (point.x < square.bottomLeft.x && point.y > square.bottomLeft.y) {
return point.add(translateRight);             // assumes a Vector.add helper
} else if (point.x >= square.bottomLeft.x && point.y <= square.bottomLeft.y) {
return point.add(translateUp);
} else if (point.x < square.bottomLeft.x && point.y <= square.bottomLeft.y) {
return point.add(translateRight).add(translateUp);
} else {
return point;
}
}
// undo the translations to get the midpoints back to the original 4-mirrored square.
let untranslatedMidpoints = [];
for (let i = 0; i < translatedMidpoints.length; i++) {
untranslatedMidpoints = untranslatedMidpoints.concat(
translatedMidpoints[i].map(untranslate));
}

// undo the mirroring to get the guards back to the original square
function unmirror(point) {
if (point.x > square.topRight.x && point.y > square.topRight.y) {
return square.mirrorTop(square.mirrorRight(point));
} else if (point.x > square.topRight.x && point.y <= square.topRight.y) {
return square.mirrorRight(point);
} else if (point.x <= square.topRight.x && point.y > square.topRight.y) {
return square.mirrorTop(point);
} else {
return point;
}
}
return untranslatedMidpoints.map(unmirror);
And that’s all there is to it!
## Improvements, if I only had the time
There are a few improvements I’d like to make to this puzzle, but haven’t made the time (I’m writing a book, after all!).
1. Be able to drag the guards around.
2. Create new guards from an empty set of guards, with a button to “reveal” the solution.
3. Include a toggle that, when pressed, darkens the entire region of the square that can be hit by the assassin. For example, this would allow you to see if the target is in the only possible safe spot, or if there are multiple safe spots for a given configuration.
4. Perhaps darken the vulnerable spots by the number of possible paths that hit it, up to some limit.
5. The most complicated one: generalize to an arbitrary polygon (convex or not!), for which there may be no optimal solution. The visualization would allow you to look for a solution using features 2–4.
Pull requests are welcome if you attempt any of these improvements.
Until next time!
https://www.gradesaver.com/textbooks/math/calculus/calculus-3rd-edition/chapter-8-techniques-of-integration-8-6-strategies-for-integration-exercises-page-431/8
## Calculus (3rd Edition)
By parts; that is, $u=x^2$ and $dv=\sqrt{x+1}dx$.
The integration $$\int x^2\sqrt{x+1}dx$$ can be done by parts; that is, $u=x^2$ and $dv=\sqrt{x+1}dx$.
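For completeness, carrying the suggested integration by parts through (my own working; each step applies parts again with $u = x$, $dv = (x+1)^{3/2}dx$):

```latex
\begin{aligned}
\int x^2\sqrt{x+1}\,dx
&= \tfrac{2}{3}x^2(x+1)^{3/2} - \tfrac{4}{3}\int x (x+1)^{3/2}\,dx \\
&= \tfrac{2}{3}x^2(x+1)^{3/2} - \tfrac{8}{15}x(x+1)^{5/2} + \tfrac{8}{15}\int (x+1)^{5/2}\,dx \\
&= \tfrac{2}{3}x^2(x+1)^{3/2} - \tfrac{8}{15}x(x+1)^{5/2} + \tfrac{16}{105}(x+1)^{7/2} + C.
\end{aligned}
```

Differentiating the last line recovers $x^2\sqrt{x+1}$, confirming the result.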
https://chemistry.stackexchange.com/questions/85894/isobutene-iupac-naming
# Isobutene IUPAC naming [closed]
What is the correct name of an isomer of isobutene $\ce{C4H8}$: 1,1-dimethylethene or 2-methylprop-1-ene? I have done an exam recently and I'm just curious about what would be correct.
http://j-node.homeip.net/tech_wiki/index.php/Fun_with_Non-Linear_Dynamics
Introduction
What's the difference between
f(x) = 2x
and
f(x) = x^2?
The little change in the latter equation gives rise to a whole new branch of physics: non-linear dynamics.
Some topics include:
• fixed points, linear stability analysis and bifurcations
• logistic equations
• limit cycles
• chaos: strange attractors, fractals, sensitivity to initial conditions
• Liapunov exponents
with applications like
• population dynamics in ecosystems
• temporal-spatial pattern formations
• activator-inhibitor systems
• Belousov-Zhabotinsky reaction
Perhaps the most surprising feature of non-linear dynamics is that deceptively simple deterministic equations yield amazingly complex, self-similar patterns of solutions which cannot be forecast. E.g., iterative equations like the logistic map (see applets [1], [2])
$\qquad x_{n+1} = r x_n (1-x_n)$
gives rise to the Feigenbaum diagram and the complex quadratic polynomial
$\qquad z_{n+1} = z_n^2 +c$
yields the Mandelbrot (watch on youtube.com) and the Buddhabrot sets (watch on youtube.com).
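To make the logistic map concrete, here is a minimal iteration (my own snippet; the parameter values are illustrative): for small $r$ the orbit settles on the fixed point $x^* = 1 - 1/r$, while for $r = 4$ it wanders chaotically and never settles.

```javascript
// Iterate the logistic map x_{n+1} = r x_n (1 - x_n) for a given number of steps.
function logisticOrbit(r, x0, steps) {
  let x = x0;
  for (let i = 0; i < steps; i++) {
    x = r * x * (1 - x);
  }
  return x;
}

// For r = 2.5 the fixed point x* = 1 - 1/r = 0.6 is stable
// (|f'(x*)| = |r (1 - 2 x*)| = 0.5 < 1), so iterates converge to it.
const settled = logisticOrbit(2.5, 0.2, 1000); // ≈ 0.6
```

Raising $r$ past 3 destabilizes the fixed point and begins the period-doubling cascade seen in the Feigenbaum diagram.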
Linear Stability Analysis
To deal with the complex evolution of a non-linear system, linear stability analysis looks at the behavior close to fixed points. I.e.,
$\dot x = \frac{dx}{dt}= 0$
for fixed points
$\overline x$.
The method is a nice interface between
• analysis
and
• linear algebra.
The Setup
Non-linear dynamics given by
$\dot{\vec x} = \vec f(\vec x, \vec u, t)$
for the non-linear function $\vec f$ with control parameter $\vec u$.
It holds that
$(1) \qquad \dot x_i = f_i(\overline x) + \sum_j \frac{\partial f_i (\overline x)}{\partial x_j} (x_j - \overline x_j) + \dots \approx \sum_j \frac{\partial f_i (\overline x)}{\partial x_j} (x_j - \overline x_j)$
using a Taylor expansion around $\overline x$ and the fact that at a fixed point $f_i(\overline x) = 0$, and
$(2) \qquad \delta x_i := x_i - \overline x_i \quad \Rightarrow \quad \dot x_i = \delta \dot x_i$.
Combining Eqs. (1) and (2) yields
$(3) \qquad \frac{\partial \vec x}{\partial t} = \vec f (\vec x, \vec u, t) = \delta \dot{\vec x} \approx \left. \mathcal{J}\right|_{\overline x} \delta \vec x$
with the Jacobean matrix $\mathcal{J}$. Rephrasing this gives
$(4) \qquad \dot{\vec y} = \mathcal{J} \vec y = \lambda \vec y$
which is an eigenvalue equation.
Solutions
For a 2-dimensional system the solution to Eq. (4) is given by
$\lambda_{1,2} = \frac{1}{2} \left( \tau \pm \sqrt{\tau^2 - 4 \Delta} \right)$
where
$\tau := \mathrm{tr} ( \left. \mathcal{J}\right|_{\overline y} )$
and
$\Delta := \mathrm{det} ( \left. \mathcal{J}\right|_{\overline y} )$.
The solution to Eq. (3) is hence
$\delta \vec x (t) = \delta \vec x (0) \; \mathrm{exp} (\mathcal{J} t) = \delta \vec x (0)\; e^{\lambda t}$
determining the dynamics.
It should be noted that the properties of τ and Δ suffice to classify the fixed points. They determine the stability of the fixed points, i.e., if they are
• attractors and repellors
• attractive and repulsive focuses
• centers
of the system's trajectories.
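The trace/determinant classification can be sketched in code (a minimal sketch; the example system, the damped oscillator $\dot x = y$, $\dot y = -x - y$ with Jacobian $[[0,1],[-1,-1]]$, is my own):

```javascript
// Classify a fixed point of a planar system from its 2x2 Jacobian,
// using tau = trace and Delta = determinant.
function classifyFixedPoint(J) {
  const tau = J[0][0] + J[1][1];                        // trace
  const delta = J[0][0] * J[1][1] - J[0][1] * J[1][0];  // determinant
  if (delta < 0) return "saddle";
  const disc = tau * tau - 4 * delta;                   // tau^2 - 4 Delta
  if (tau === 0 && disc < 0) return "center";
  if (disc < 0) return tau < 0 ? "stable focus" : "unstable focus";
  return tau < 0 ? "stable node" : "unstable node";
}

// Damped oscillator: tau = -1, Delta = 1, disc = -3 < 0 -> stable focus.
classifyFixedPoint([[0, 1], [-1, -1]]);
```

Complex eigenvalues (negative discriminant) give spiraling trajectories; the sign of $\tau$ decides whether they spiral in or out.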
Outlook
Fixed points can be created or destroyed, or their stability can change by tuning the control parameter $\vec u$. The parameter values for which this occurs are called bifurcation points.
It is the focus of catastrophe theory, how small perturbations of the potential describing the dynamics results in dramatic changes in the evolution of the system.
In essence, instabilities (via control parameters or feedback) give rise to new dynamics and are the source of evolution.
Diffusion-Driven Instabilities: Emergence of Spatial-Temporal Pattern Formation
The idea, going back to Alan Turing (1952, "The Chemical Basis of Morphogenesis"), is to combine stable non-linear dynamics with a diffusion term to generate new dynamics. This gives the system's dynamics a spatial component and explains how structure emerges from a homogeneous spatial distribution. Also known as Turing patterns.
Starting point is a stable system described by Eq. (3) and adding a diffusion term
$(5) \qquad \frac{\partial \delta \vec x}{\partial t} = \left. \mathcal{J}\right|_{\overline x} \delta \vec x + \vec D \triangle \delta \vec x$
This can be solved using Fourier transformations or a separation of variables ansatz. It is then seen that under certain conditions, the diffusion terms break the stability. In two dimensions a necessary condition for the diffusion constants is
$D_1 \neq D_2$.
This is a diffusion-driven instability.
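The condition can be made precise (a standard derivation along the lines of Murray's Mathematical Biology; $J_{11}, J_{22}$ denote the diagonal entries of the Jacobian). A spatial mode with wavenumber $k$ grows exactly when

```latex
\det\!\left(\mathcal{J} - k^2 D\right)
  = D_1 D_2\, k^4 - \left(D_2 J_{11} + D_1 J_{22}\right) k^2 + \Delta < 0
  \quad \text{for some } k^2 > 0,
```

which requires $D_2 J_{11} + D_1 J_{22} > 2\sqrt{D_1 D_2 \Delta} > 0$. Since stability of the homogeneous state forces $J_{11} + J_{22} = \tau < 0$, this is impossible when $D_1 = D_2$; unequal diffusion constants are essential.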
This formalism has been used in the so-called Schnakenberg model describing a (two-species tri-molecular) chemical reaction giving rise to pattern formation in a species' morphogenesis. The requirement of a finite spatial extension (location of the embryo) results in discrete solutions of stable excitations which, as the embryo grows, give rise to more and more stripes in the example of a zebrafish.
Similar mechanisms have been used to explain mammalian coat patterns, see J. D. Murray Mathematical Biology.
Biological Pattern Formation via Activator-Inhibitor Systems
Cool applet (in German): [3].
Further information: [4].
http://physics.aps.org/synopsis-for/10.1103/PhysRevLett.101.116601
# Synopsis: Charged shock waves
Shock waves, familiar from hydrodynamics, acoustics, and optics, have been observed in the changing charge state of iron defects in lithium niobate crystals upon application of even a modest voltage across the crystal.
A shock wave is a propagating discontinuity in an otherwise linear and well-behaved medium. Shock fronts form whenever a disturbance is driven through a medium faster than it would normally move and are commonly seen in hydrodynamic or acoustic settings. Now, in a paper appearing in Physical Review Letters, Stephan Gronenborn, Matthias Falk, Daniel Haertle, and Karsten Buse, of the University of Bonn, Germany, together with Boris Sturman in Russia, have observed shock-wave behavior in iron-doped ferroelectric lithium niobate crystals in an electric field—an unexpected phenomenon in a solid-state system.
The shock wave takes the form of a traveling change of transparency of the doped lithium niobate crystal. While undoped lithium niobate is transparent, defects created by iron doping absorb light shone through the crystal. The iron defects exist in two charge states: ${\text{Fe}}^{2+}$ and ${\text{Fe}}^{3+}$, and the crystal becomes increasingly transparent as the ratio of ${\text{Fe}}^{2+}$ to ${\text{Fe}}^{3+}$ decreases. When a voltage is applied to the crystal, a transparent region appears by the cathode, and this region increases in the wake of a propagating front as the ion ratio changes across the sample, in a fashion reminiscent of a traveling shock wave.
The shock wave is not fast: it takes several hours for it to travel one centimeter, but its observation may have important consequences for the fast-moving field of optoelectronics. – Daniel Ucko
2017-02-26 23:31:47
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 6, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3485044240951538, "perplexity": 3618.9294311927347}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501172156.69/warc/CC-MAIN-20170219104612-00412-ip-10-171-10-108.ec2.internal.warc.gz"}
|
https://mathhelpboards.com/threads/if-c-ab-and-gcd-b-c-1-why-does-c-ac.6433/
# Number Theory: If c|ab and gcd(b, c) = 1, why does c|ac?
#### MI5
##### New member
Theorem: If c|ab and (b, c) = 1 then c|a.
Proof: Consider (ab, ac) = a(b, c) = a. We have c|ab and clearly c|ac so c|a.
It's not so clear to me why c|ac. Perhaps I'm missing something really obvious.
#### Evgeny.Makarov
##### Well-known member
MHB Math Scholar
> It's not so clear to me why c|ac.
This is because ac = a * c, i.e., by the definition of the | (divides) relation.
#### MI5
##### New member
> This is because ac = a * c, i.e., by the definition of the | (divides) relation.
I still don't understand, I'm afraid. Could you say a bit more, please?
#### Evgeny.Makarov
##### Well-known member
MHB Math Scholar
> I still don't understand, I'm afraid. Could you say a bit more, please?
I need to be sure you know the definition of the relation denoted by |. Could you write this definition?
#### MI5
##### New member
WOW! All I can say is thanks.
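The theorem, and the gcd identity the proof rests on, are easy to sanity-check by brute force. A minimal Python sketch (not part of the original thread):

```python
from math import gcd

def verify(limit=25):
    """Check, for all 1 <= a, b, c < limit, the identity gcd(ab, ac) = a*gcd(b, c)
    used in the proof, and the theorem: c | ab and gcd(b, c) = 1 imply c | a."""
    for a in range(1, limit):
        for b in range(1, limit):
            for c in range(1, limit):
                # The identity the proof relies on:
                if gcd(a * b, a * c) != a * gcd(b, c):
                    return False
                # The theorem itself:
                if (a * b) % c == 0 and gcd(b, c) == 1 and a % c != 0:
                    return False
    return True

print(verify())
```

The second condition never fires with a % c != 0, which is exactly the statement of the theorem on this range.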
https://irmar.univ-rennes1.fr/seminaire/geometrie-analytique/ruben-lizarbe
# Irreducible components of the space of foliations of degree three on $\mathbb P^3$
We show that the space of foliations of degree three on $\mathbb P^3$ has at least 28 distinct irreducible components,
some of them being generically non-reduced. This is joint work with Raphael Constant and Jorge V. Pereira.
https://www.semanticscholar.org/paper/Mixing-and-overshooting-in-surface-convection-zones-Kupka-Zaussinger/d7231a7724ded07179119ed214b9103bdd45d122
# Mixing and overshooting in surface convection zones of DA white dwarfs: first results from ANTARES
```bibtex
@article{Kupka2018MixingAO,
  title={Mixing and overshooting in surface convection zones of DA white dwarfs: first results from ANTARES},
  author={Friedrich Kupka and Florian Zaussinger and Michael H. Montgomery},
  journal={Monthly Notices of the Royal Astronomical Society},
  year={2018},
  volume={474},
  pages={4660-4671}
}
```
• Published 2 December 2017
• Physics, Geology
• Monthly Notices of the Royal Astronomical Society
We present results of a large, high resolution 3D hydrodynamical simulation of the surface layers of a DA white dwarf (WD) with $T_{\rm eff}=11800$ K and $\log(g)=8$ using the ANTARES code, the widest and deepest such simulation to date. Our simulations are in good agreement with previous calculations in the Schwarzschild-unstable region and in the overshooting region immediately beneath it. Farther below, in the wave-dominated region, we find that the rms horizontal velocities decay with depth…
Convective overshoot and macroscopic diffusion in pure-hydrogen-atmosphere white dwarfs
• Physics
Monthly Notices of the Royal Astronomical Society
• 2019
We present a theoretical description of macroscopic diffusion caused by convective overshoot in pure-hydrogen DA white dwarfs using 3D, closed-bottom, radiation-hydrodynamics CO5BOLD simulations.
Horizontal spreading of planetary debris accreted by white dwarfs
• Physics, Geology
Monthly Notices of the Royal Astronomical Society
• 2021
White dwarfs with metal-polluted atmospheres have been studied widely in the context of the accretion of rocky debris from evolved planetary systems. One open question is the geometry of accretion
Polluted White Dwarfs: Mixing Regions and Diffusion Timescales
• Physics, Environmental Science
The Astrophysical Journal
• 2019
Many isolated white dwarfs (WDs) show spectral evidence of atmospheric metal pollution. Since heavy element sedimentation timescales are short, this most likely indicates ongoing accretion. Accreted
Increases to Inferred Rates of Planetesimal Accretion Due to Thermohaline Mixing in Metal Accreting White Dwarfs
• Physics, Geology
• 2018
Many isolated, old white dwarfs (WDs) show surprising evidence of metals in their photospheres. Given that the timescale for gravitational sedimentation is astronomically short, this is taken as
Pure-helium 3D model atmospheres of white dwarfs
• Physics
Monthly Notices of the Royal Astronomical Society
• 2018
We present the first grid of 3D simulations for the pure-helium atmospheres of DB white dwarfs. The simulations were computed with the CO5BOLD radiation-hydrodynamics code and cover effective
Diffusion Coefficients in the Envelopes of White Dwarfs
• Physics
The Astrophysical Journal
• 2020
The diffusion of elements is a key process in understanding the unusual surface composition of white dwarfs stars and their spectral evolution. The diffusion coefficients of Paquette et al. (1986)
Damping rates and frequency corrections of Kepler LEGACY stars
• Physics
Monthly Notices of the Royal Astronomical Society
• 2019
Linear damping rates and modal frequency corrections of radial oscillation modes in selected LEGACY main-sequence stars are estimated by means of a non-adiabatic stability analysis. The selected
Modelling of stellar convection
• Physics
Living reviews in computational astrophysics
• 2017
A historically oriented introduction offers first glimpses of the physics of stellar convection, and a number of important trends are reviewed, such as how to further develop low-dimensional models, how to use 3D models for that purpose, and what effect recent hardware developments may have on 3D modelling.
Two-dimensional simulations of solar-like models with artificially enhanced luminosity
• Physics
Astronomy & Astrophysics
• 2021
We performed two-dimensional, fully compressible, time-implicit simulations of convection in a solar-like model with the MUSIC code. Our main motivation is to explore the impact of a common tactic
Where Are the Extrasolar Mercuries?
• Geology, Physics
The Astrophysical Journal
• 2020
We utilize observations of 16 white dwarf stars to calculate and analyze the oxidation states of the parent bodies accreting onto the stars. Oxygen fugacity, a measure of overall oxidation state for
https://www.x-mol.com/paper/math/tag/97/journal/65413
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-08-07
Asaf Ferber; Asaf Shapira
A well-known observation of Lovász is that if a hypergraph is not 2-colourable, then at least one pair of its edges intersect at a single vertex. In this short paper we consider the quantitative version of Lovász’s criterion. That is, we ask how many pairs of edges intersecting at a single vertex should belong to a non-2-colourable n-uniform hypergraph. Our main result is an exact answer to this question
更新日期:2020-08-08
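Lovász's observation, as quoted above, can be verified exhaustively for small cases. A brute-force Python sketch over all 3-uniform hypergraphs on 5 vertices (the parameters are chosen here only to keep the search small):

```python
from itertools import combinations, product

def two_colourable(n, edges):
    # A 2-colouring is proper if no edge is monochromatic.
    return any(all(len({col[v] for v in e}) > 1 for e in edges)
               for col in product((0, 1), repeat=n))

def check_lovasz(n=5, r=3):
    """Over every r-uniform hypergraph on n vertices, verify: if it is not
    2-colourable, then some two edges intersect in exactly one vertex."""
    universe = list(combinations(range(n), r))
    for mask in range(1, 2 ** len(universe)):
        edges = [e for i, e in enumerate(universe) if mask >> i & 1]
        if not two_colourable(n, edges):
            if not any(len(set(e) & set(f)) == 1
                       for e, f in combinations(edges, 2)):
                return False
    return True

print(check_lovasz())
```

For instance, the complete 3-uniform hypergraph on 5 vertices is not 2-colourable (one colour class always contains 3 vertices), and indeed it has edges like {0,1,2} and {2,3,4} meeting in a single vertex.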
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-06-22
Charilaos Efthymiou
In this paper we propose a polynomial-time deterministic algorithm for approximately counting the k-colourings of the random graph G(n, d/n), for constant d>0. In particular, our algorithm computes in polynomial time a $(1\pm n^{-\Omega(1)})$ -approximation of the so-called ‘free energy’ of the k-colourings of G(n, d/n), for $k\geq (1+\varepsilon) d$ with probability $1-o(1)$ over the graph instances
更新日期:2020-08-06
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-08-06
Heng Guo; Mark Jerrum
We give a fully polynomial-time randomized approximation scheme (FPRAS) for the number of bases in bicircular matroids. This is a natural class of matroids for which counting bases exactly is #P-hard and yet approximate counting can be done efficiently.
更新日期:2020-08-06
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-07-27
Ryan Alweiss; Chady Ben Hamida; Xiaoyu He; Alexander Moreira
Given a fixed graph H, a real number p ∈ (0, 1) and an infinite Erdős–Rényi graph G ∼ G(∞, p), how many adjacency queries do we have to make to find a copy of H inside G with probability at least 1/2? Determining this number f(H, p) is a variant of the subgraph query problem introduced by Ferber, Krivelevich, Sudakov and Vieira. For every graph H, we improve the trivial upper bound of f(H, p) = O(p^{-d})
更新日期:2020-07-27
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-07-22
Patrick Bennett; Andrzej Dudek; Shira Zerbib
The triangle packing number v(G) of a graph G is the maximum size of a set of edge-disjoint triangles in G. Tuza conjectured that in any graph G there exists a set of at most 2v(G) edges intersecting every triangle in G. We show that Tuza's conjecture holds in the random graph G = G(n, m), when m ⩽ 0.2403n^{3/2} or m ⩾ 2.1243n^{3/2}. This is done by analysing a greedy algorithm for finding large triangle
更新日期:2020-07-22
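Tuza's conjecture is easy to confirm exhaustively on very small graphs; a brute-force Python sketch over all graphs on 4 vertices, using the bound 2v(G) quoted above:

```python
from itertools import combinations

def edge_set(t):
    # The three edges spanned by triangle t.
    return {tuple(sorted(p)) for p in combinations(t, 2)}

def triangles(n, edges):
    es = set(edges)
    return [t for t in combinations(range(n), 3) if edge_set(t) <= es]

def packing_number(tris):
    # v(G): maximum number of pairwise edge-disjoint triangles (brute force).
    for k in range(len(tris), 0, -1):
        for sub in combinations(tris, k):
            sets = [edge_set(t) for t in sub]
            if sum(len(s) for s in sets) == len(set().union(*sets)):
                return k
    return 0

def min_triangle_cover(edges, tris):
    # Smallest set of edges meeting every triangle (brute force).
    for k in range(len(edges) + 1):
        for sub in combinations(edges, k):
            s = set(sub)
            if all(edge_set(t) & s for t in tris):
                return k

n = 4
universe = list(combinations(range(n), 2))
for mask in range(2 ** len(universe)):
    edges = [e for i, e in enumerate(universe) if mask >> i & 1]
    tris = triangles(n, edges)
    assert min_triangle_cover(edges, tris) <= 2 * packing_number(tris)
```

In K4, for example, any two triangles share an edge, so v(K4) = 1, and the two edges (0,1) and (2,3) already meet all four triangles.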
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-07-22
Stefan Ehard; Stefan Glock; Felix Joos
A celebrated theorem of Pippenger states that any almost regular hypergraph with small codegrees has an almost perfect matching. We show that one can find such an almost perfect matching which is ‘pseudorandom’, meaning that, for instance, the matching contains as many edges from a given set of edges as predicted by a heuristic argument.
更新日期:2020-07-22
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-07-16
Orit E. Raz
We show that, for a constant-degree algebraic curve γ in ℝ^D, every set of n points on γ spans at least Ω(n^{4/3}) distinct distances, unless γ is an algebraic helix, in the sense of Charalambides [2]. This improves the earlier bound Ω(n^{5/4}) of Charalambides [2]. We also show that, for every set P of n points that lie on a d-dimensional constant-degree algebraic variety V in ℝ^D, there exists a subset S
更新日期:2020-07-16
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-06-30
Anita Liebenau; Yanitsa Pehova
A diregular bipartite tournament is a balanced complete bipartite graph whose edges are oriented so that every vertex has the same in- and out-degree. In 1981 Jackson showed that a diregular bipartite tournament contains a Hamilton cycle, and conjectured that in fact its edge set can be partitioned into Hamilton cycles. We prove an approximate version of this conjecture: for every ε > 0 there exists
更新日期:2020-06-30
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-06-30
M. Haythorpe; A. Newcombe
A set of graphs are called cospectral if their adjacency matrices have the same characteristic polynomial. In this paper we introduce a simple method for constructing infinite families of cospectral regular graphs. The construction is valid for special cases of a property introduced by Schwenk. For the case of cubic (3-regular) graphs, computational results are given which show that the construction
更新日期:2020-06-30
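Cospectrality can be checked numerically. A short NumPy sketch using the classic smallest cospectral pair (a textbook example, not one of the paper's constructions):

```python
import numpy as np

# The smallest cospectral pair: the star K_{1,4} and the disjoint
# union C_4 + K_1, both on 5 vertices.
star = np.zeros((5, 5), dtype=int)
star[0, 1:] = star[1:, 0] = 1

c4_k1 = np.zeros((5, 5), dtype=int)
for u, v in [(0, 1), (1, 2), (2, 3), (3, 0)]:   # vertex 4 stays isolated
    c4_k1[u, v] = c4_k1[v, u] = 1

# np.poly returns the characteristic polynomial coefficients of a matrix.
# Same characteristic polynomial -> cospectral, although the graphs are
# not isomorphic (one is connected, the other is not).
assert np.allclose(np.poly(star), np.poly(c4_k1))
```

Both graphs have characteristic polynomial λ^5 − 4λ^3, i.e. spectrum {2, 0, 0, 0, −2}.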
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-06-30
Gabriel Conant
We prove Bogolyubov–Ruzsa-type results for finite subsets of groups with small tripling, |A^3| ≤ O(|A|), or small alternation, |AA^{-1}A| ≤ O(|A|). As applications, we obtain a qualitative analogue of Bogolyubov's lemma for dense sets in arbitrary finite groups, as well as a quantitative arithmetic regularity lemma for sets of bounded VC-dimension in finite groups of bounded exponent. The latter result
更新日期:2020-06-30
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-06-30
Shagnik Das; Andrew Treglown
Given graphs H1, H2, a graph G is (H1, H2) -Ramsey if, for every colouring of the edges of G with red and blue, there is a red copy of H1 or a blue copy of H2. In this paper we investigate Ramsey questions in the setting of randomly perturbed graphs. This is a random graph model introduced by Bohman, Frieze and Martin [8] in which one starts with a dense graph and then adds a given number of random
更新日期:2020-06-30
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-06-30
Sam Greenberg; Dana Randall; Amanda Pascoe Streib
Monotonic surfaces spanning finite regions of ℤ^d arise in many contexts, including DNA-based self-assembly, card-shuffling and lozenge tilings. One method that has been used to uniformly generate these surfaces is a Markov chain that iteratively adds or removes a single cube below the surface during a step. We consider a biased version of the chain, where we are more likely to add a cube than to remove
更新日期:2020-06-30
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-06-24
Richard Montgomery
Let $\{D_M\}_{M\geq 0}$ be the n-vertex random directed graph process, where $D_0$ is the empty directed graph on n vertices, and subsequent directed graphs in the sequence are obtained by the addition of a new directed edge uniformly at random. For each $$\varepsilon > 0$$ , we show that, almost surely, any directed graph $D_M$ with minimum in- and out-degree at least 1 is not only Hamiltonian (as
更新日期:2020-06-24
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-06-24
Frank Mousset; Rajko Nenadov; Wojciech Samotij
For fixed graphs F1,…,Fr, we prove an upper bound on the threshold function for the property that G(n, p) → (F1,…,Fr). This establishes the 1-statement of a conjecture of Kohayakawa and Kreuter.
更新日期:2020-06-24
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-06-23
Tao Jiang; Liana Yepremyan
A classical result of Erdős and, independently, of Bondy and Simonovits [3] says that the maximum number of edges in an n-vertex graph not containing C_{2k}, the cycle of length 2k, is O(n^{1+1/k}). Simonovits established a corresponding supersaturation result for C_{2k}'s, showing that there exist positive constants C, c depending only on k such that every n-vertex graph G with e(G) ≥ Cn^{1+1/k} contains at least
更新日期:2020-06-23
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-06-05
Heiner Oberkampf; Mathias Schacht
We study structural properties of graphs with bounded clique number and high minimum degree. In particular, we show that there exists a function L = L(r, ε) such that every Kr-free graph G on n vertices with minimum degree at least ((2r−5)/(2r−3) + ε)n is homomorphic to a Kr-free graph on at most L vertices. It is known that the required minimum degree condition is approximately best possible for this
更新日期:2020-06-05
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-05-18
Emma Yu Jin; Benedikt Stufler
We study random unlabelled k-trees by combining the colouring approach by Gainer-Dewar and Gessel (2014) with the cycle-pointing method by Bodirsky, Fusy, Kang and Vigerske (2011). Our main applications are Gromov–Hausdorff–Prokhorov and Benjamini–Schramm limits that describe their asymptotic geometric shape on a global and local scale as the number of (k + 1) -cliques tends to infinity.
更新日期:2020-05-18
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-05-15
Amir Yehudayoff
We prove an essentially sharp $\tilde \Omega (n/k)$ lower bound on the k-round distributional complexity of the k-step pointer chasing problem under the uniform distribution, when Bob speaks first. This is an improvement over Nisan and Wigderson’s $\tilde \Omega (n/{k^2})$ lower bound, and essentially matches the randomized lower bound proved by Klauck. The proof is information-theoretic, and a key
更新日期:2020-05-15
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-05-14
Alessandra Graf; Penny Haxell
We give an efficient algorithm that, given a graph G and a partition V1,…,Vm of its vertex set, finds either an independent transversal (an independent set {v1,…,vm} in G such that ${v_i} \in {V_i}$ for each i), or a subset ${\cal B}$ of vertex classes such that the subgraph of G induced by $\bigcup\nolimits_{\cal B}$ has a small dominating set. A non-algorithmic proof of this result has been known
更新日期:2020-05-14
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-03-24
Carlos Hoppen; Yoshiharu Kohayakawa; Richard Lang; Hanno Lefmann; Henrique Stagni
There has been substantial interest in estimating the value of a graph parameter, i.e. of a real-valued function defined on the set of finite graphs, by querying a randomly sampled substructure whose size is independent of the size of the input. Graph parameters that may be successfully estimated in this way are said to be testable or estimable, and the sample complexity qz = qz(ε) of an estimable
更新日期:2020-03-24
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-03-09
Dennis Clemens; Anita Liebenau; Damian Reding
For an integer q ⩾ 2, a graph G is called q-Ramsey for a graph H if every q-colouring of the edges of G contains a monochromatic copy of H. If G is q-Ramsey for H yet no proper subgraph of G has this property, then G is called q-Ramsey-minimal for H. Generalizing a statement by Burr, Nešetřil and Rödl from 1977, we prove that, for q ⩾ 3, if G is a graph that is not q-Ramsey for some graph H, then G
更新日期:2020-03-09
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-02-20
Agelos Georgakopoulos; John Haslegrave
We give an example of a long range Bernoulli percolation process on a group non-quasi-isometric with ℤ, in which clusters are almost surely finite for all values of the parameter. This random graph admits diverse equivalent definitions, and we study their ramifications. We also study its expected size and point out certain phase transitions.
更新日期:2020-02-20
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-02-18
Michael C. H. Choi; Pierre Patie
In this paper we develop an in-depth analysis of non-reversible Markov chains on denumerable state space from a similarity orbit perspective. In particular, we study the class of Markov chains whose transition kernel is in the similarity orbit of a normal transition kernel, such as that of birth–death chains or reversible Markov chains. We start by identifying a set of sufficient conditions for a Markov
更新日期:2020-02-18
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-02-13
Boris Bukh; Michael Tait
The theta graph ${\Theta _{\ell ,t}}$ consists of two vertices joined by t vertex-disjoint paths, each of length $\ell$ . For fixed odd $\ell$ and large t, we show that the largest graph not containing ${\Theta _{\ell ,t}}$ has at most ${c_\ell }{t^{1 - 1/\ell }}{n^{1 + 1/\ell }}$ edges and that this is tight apart from the value of ${c_\ell }$ .
更新日期:2020-02-13
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-02-04
Dániel Grósz; Abhishek Methuku; Casey Tompkins
Let c denote the largest constant such that every C6-free graph G contains a bipartite and C4-free subgraph having a fraction c of edges of G. Győri, Kensell and Tompkins showed that 3/8 ⩽ c ⩽ 2/5. We prove that c = 3/8. More generally, we show that for any ε > 0, and any integer k ⩾ 2, there is a C2k-free graph $G'$ which does not contain a bipartite subgraph of girth greater than 2k with more than
更新日期:2020-02-04
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2020-02-03
Jason Long
We show that a dense subset of a sufficiently large group multiplication table contains either a large part of the addition table of the integers modulo some k, or the entire multiplication table of a certain large abelian group, as a subgrid. As a consequence, we show that triples systems coming from a finite group contain configurations with t triples spanning $O(\sqrt t )$ vertices, which is the
更新日期:2020-02-03
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-12-06
James B. Martin; Roman Stasiński
We consider the behaviour of minimax recursions defined on random trees. Such recursions give the value of a general class of two-player combinatorial games. We examine in particular the case where the tree is given by a Galton–Watson branching process, truncated at some depth 2n, and the terminal values of the level 2n nodes are drawn independently from some common distribution. The case of a regular
更新日期:2019-12-06
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-12-03
Amin Coja-Oghlan; Tobias Kapetanopoulos; Noela Müller
Random constraint satisfaction problems play an important role in computer science and combinatorics. For example, they provide challenging benchmark examples for algorithms, and they have been harnessed in probabilistic constructions of combinatorial structures with peculiar features. In an important contribution (Krzakala et al. 2007, Proc. Nat. Acad. Sci.), physicists made several predictions on
更新日期:2019-12-03
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-11-27
Beka Ergemlidze; Ervin Győri; Abhishek Methuku; Nika Salia; Casey Tompkins; Oscar Zamora
The maximum size of an r-uniform hypergraph without a Berge cycle of length at least k has been determined for all k ≥ r + 3 by Füredi, Kostochka and Luo and for k < r (and k = r, asymptotically) by Kostochka and Luo. In this paper we settle the remaining cases: k = r + 1 and k = r + 2, proving a conjecture of Füredi, Kostochka and Luo.
更新日期:2019-11-27
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-11-26
Annika Heckel
An equitable colouring of a graph G is a vertex colouring where no two adjacent vertices are coloured the same and, additionally, the colour class sizes differ by at most 1. The equitable chromatic number χ_=(G) is the minimum number of colours required for this. We study the equitable chromatic number of the dense random graph $\mathcal{G}(n,m)$ where $m = \left\lfloor p \binom{n}{2} \right\rfloor$
更新日期:2019-11-26
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-11-26
Joshua Zahl
We prove that n plane algebraic curves determine O(n^{(k+2)/(k+1)}) points of kth order tangency. This generalizes an earlier result of Ellenberg, Solymosi and Zahl on the number of (first order) tangencies determined by n plane algebraic curves.
更新日期:2019-11-26
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-11-14
Noga Alon; Dan Hefetz; Michael Krivelevich; Mykhaylo Tyomkyn
The inducibility of a graph H measures the maximum number of induced copies of H a large graph G can have. Generalizing this notion, we study how many induced subgraphs of fixed order k and size ℓ a large graph G on n vertices can have. Clearly, this number is $\binom{n}{k}$ for every n, k and $\ell \in \left\{ 0, \binom{k}{2} \right\}$. We conjecture that
更新日期:2019-11-14
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-11-07
Matija Bucić; Sven Heberle; Shoham Letzter; Benny Sudakov
We prove that, with high probability, in every 2-edge-colouring of the random tournament on n vertices there is a monochromatic copy of every oriented tree of order $O(n/\sqrt{\log n})$. This generalizes a result of the first, third and fourth authors, who proved the same statement for paths, and is tight up to a constant factor.
更新日期:2019-11-07
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-11-06
Hoi H. Nguyen; Elliot Paquette
We show that a nearly square independent and identically distributed random integral matrix is surjective over the integral lattice with very high probability. This answers a question by Koplewitz [6]. Our result extends to sparse matrices as well as to matrices of dependent entries.
更新日期:2019-11-06
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-11-04
Omer Angel; Abbas Mehrabian; Yuval Peres
For a rumour spreading protocol, the spread time is defined as the first time everyone learns the rumour. We compare the synchronous push&pull rumour spreading protocol with its asynchronous variant, and show that for any n-vertex graph and any starting vertex, the ratio between their expected spread times is bounded by $O(n^{1/3} \log^{2/3} n)$. This improves the $O(\sqrt{n})$ upper bound of Giakkoupis
更新日期:2019-11-04
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-11-04
Benedikt Stufler
We study random composite structures considered up to symmetry that are sampled according to weights on the inner and outer structures. This model may be viewed as an unlabelled version of Gibbs partitions and encompasses multisets of weighted combinatorial objects. We describe a general setting characterized by the formation of a giant component. The collection of small fragments is shown to converge
更新日期:2019-11-04
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2014-03-05
Konstancja Bobecka; Paweł Hitczenko; Fernando López-Blázquez; Grzegorz Rempała; Jacek Wesołowski
In the paper we develop an approach to asymptotic normality through factorial cumulants. Factorial cumulants arise in the same manner from factorial moments as do (ordinary) cumulants from (ordinary) moments. Another tool we exploit is a new identity for 'moments' of partitions of numbers. The general limiting result is then used to (re-)derive asymptotic normality for several models including classical
更新日期:2019-11-01
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-10-24
Mickaël Maazoun
The Brownian separable permuton is a random probability measure on the unit square, which was introduced by Bassino, Bouvel, Féray, Gerin and Pierrot (2016) as the scaling limit of the diagram of the uniform separable permutation as size grows to infinity. We show that, almost surely, the permuton is the pushforward of the Lebesgue measure on the graph of a random measure-preserving function associated
更新日期:2019-10-24
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-10-21
Peter Keevash; Liana Yepremyan
Akbari and Alipour [1] conjectured that any Latin array of order n with at least n^2/2 symbols contains a transversal. For large n, we confirm this conjecture, and moreover, we show that n^{399/200} symbols suffice.
更新日期:2019-10-21
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-10-18
Yuval Filmus
The Friedgut–Kalai–Naor (FKN) theorem states that if ƒ is a Boolean function on the Boolean cube which is close to degree one, then ƒ is close to a dictator, a function depending on a single coordinate. The author has extended the theorem to the slice, the subset of the Boolean cube consisting of all vectors with fixed Hamming weight. We extend the theorem further, to the multislice, a multicoloured
更新日期:2019-10-18
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-10-14
Ross G. Pinsky
For $\tau \in S_3$, let $\mu_n^\tau$ denote the uniformly random probability measure on the set of $\tau$-avoiding permutations in $S_n$. Let $\mathbb{N}^* = \mathbb{N} \cup \{\infty\}$ with an appropriate metric and denote by $S(\mathbb{N}, \mathbb{N}^*)$ the compact metric space consisting of functions $\sigma = \{\sigma_i\}_{i=1}^{\infty}$
更新日期:2019-10-14
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-10-10
Simeon Ball; Bence Csajbók
We prove that, for q odd, a set of q + 2 points in the projective plane over the field with q elements has at least 2q − c odd secants, where c is a constant and an odd secant is a line incident with an odd number of points of the set.
更新日期:2019-10-10
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-10-09
Patrick Bennett; Andrzej Dudek; Bernard Lidický; Oleg Pikhurko
Motivated by the work of Razborov about the minimal density of triangles in graphs we study the minimal density of the 5-cycle C5. We show that every graph of order n and size $(1 - 1/k)\binom{n}{2}$, where k ≥ 3 is an integer, contains at least $\left( \frac{1}{10} - \frac{1}{2k} + \frac{1}{k^2} - \frac{1}{k^3} + \frac{2}{5k^4} \right) n^5 + o(n^5)$ copies of C5.
更新日期:2019-10-09
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-10-08
Omer Angel; Asaf Ferber; Benny Sudakov; Vincent Tassion
Given a graph G and a bijection f : E(G) → {1, 2,…,e(G)}, we say that a trail/path in G is f-increasing if the labels of consecutive edges of this trail/path form an increasing sequence. More than 40 years ago Chvátal and Komlós raised the question of providing worst-case estimates of the length of the longest increasing trail/path over all edge orderings of Kn. The case of a trail was resolved by
更新日期:2019-10-08
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-10-08
Bo Ning; Xing Peng
The famous Erdős–Gallai theorem on the Turán number of paths states that every graph with n vertices and m edges contains a path with at least (2m)/n edges. In this note, we first establish a simple but novel extension of the Erdős–Gallai theorem by proving that every graph G contains a path with at least $\frac{(s+1)N_{s+1}(G)}{N_s(G)} + s - 1$ edges, where N_j(G) denotes the number of
更新日期:2019-10-08
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-09-30
Shachar Sapir; Asaf Shapira
The induced removal lemma of Alon, Fischer, Krivelevich and Szegedy states that if an n-vertex graph G is ε-far from being induced H-free then G contains δ_H(ε) · n^h induced copies of H. Improving upon the original proof, Conlon and Fox proved that 1/δ_H(ε) is at most a tower of height poly(1/ε), and asked if this bound can be further improved to a tower of height log(1/ε). In this paper we obtain such
更新日期:2019-09-30
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-08-05
Lorenzo Federico; Remco van der Hofstad; Frank den Hollander; Tim Hulshof
The Hamming graph H(d, n) is the Cartesian product of d complete graphs on n vertices. Let $m = d(n-1)$ be the degree and $V = n^d$ be the number of vertices of H(d, n). Let $p_c^{(d)}$ be the critical point for bond percolation on H(d, n). We show that, for $d \in \mathbb{N}$ fixed and $n \to \infty$, $$p_c^{(d)} = \frac{1}{m} + \frac{2d^2 - 1}{2(d-1)^2} \frac{1}{m^2} + O(m^{-3})$$
更新日期:2019-08-05
• Comb. Probab. Comput. (IF 0.879) Pub Date : 2019-07-25
Rajko Nenadov; Nemanja Škorić
Given graphs G and H, a family of vertex-disjoint copies of H in G is called an H-tiling. Conlon, Gowers, Samotij and Schacht showed that for a given graph H and a constant γ > 0, there exists C > 0 such that if $p \ge C n^{-1/m_2(H)}$, then asymptotically almost surely every spanning subgraph G of the random graph 𝒢(n, p) with minimum degree at least $\delta(G) \ge (1 - \frac{1}{\chi_{\mathrm{cr}}(H)}$
更新日期:2019-07-25
Contents have been reproduced by permission of the publishers.
|
2020-08-09 07:59:38
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8834081888198853, "perplexity": 1159.5823867193922}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439738523.63/warc/CC-MAIN-20200809073133-20200809103133-00041.warc.gz"}
|
https://www.groundai.com/project/entanglement-formation-under-random-interactions/
|
# Entanglement formation under random interactions
Christoph Wick, Jaegon Um, and Haye Hinrichsen Universität Würzburg, Fakultät für Physik und Astronomie, Am Hubland,
97074 Würzburg, Germany
Quantum Universe Center, Korea Institute for Advanced Study, Seoul 130-722, Korea
###### Abstract
The temporal evolution of the entanglement between two qubits evolving by random interactions is studied analytically and numerically. Two different types of randomness are investigated. Firstly we analyze an ensemble of systems with randomly chosen but time-independent interaction Hamiltonians. Secondly we consider the case of a temporally fluctuating Hamiltonian, where the unitary evolution can be understood as a random walk on the SU(4) group manifold. As a by-product we compute the metric tensor and its inverse, as well as the Laplace-Beltrami operator, for SU(4).
## 1 Introduction
If two initially separable quantum systems are exposed to random interactions they are expected to become entangled, exhibiting random quantum correlations. How do these quantum correlations arise as a function of time? To address this question we study the entanglement between two qubits subjected to random interactions as a function of time. The study of entanglement dynamics under random environments has attracted much interest recently; one example is the entanglement emerging between quantum systems coupled through a bosonic heat bath [1]. Although our system is oversimplified in comparison with these dissipative systems, we believe that our study may give an upper bound for the entanglement under strong random interactions.
In what follows we assume that the two-qubit system is initially prepared in a well-defined pure state. As examples we consider two different initial states, namely, a non-entangled pure state
ρ(0)=|11⟩⟨11| (1)
and in a fully entangled Bell state of the form
ρ(0) = |ϕ⟩⟨ϕ|,  |ϕ⟩ = (1/√2)(|00⟩ + |11⟩), (2)

where |00⟩, |01⟩, |10⟩, |11⟩ denote the canonical qubit configuration basis. Starting with the given initial state the system then evolves unitarily as
ρ(t)=U(t)ρ(0)U†(t), (3)
where the time evolution operator U(t) is determined by U(0) = 1 and

iℏ ∂_t U(t) = H(t) U(t) (4)

with H(t) a randomly chosen interaction Hamiltonian.
Throughout this paper we consider two different types of randomness, namely
• Quenched randomness, where H is time-independent. In this case one considers an ensemble of two-qubit systems starting from the same initial state, where each member evolves by a different but temporally constant Hamiltonian drawn from a unitarily invariant distribution.
• Temporal randomness, where the dynamical evolution is generated by a time-dependent Hamiltonian H(t) which fluctuates randomly as a function of time [2]. On a single system the resulting temporal evolution of the state vector can be understood as a unitary random walk on the SU(4) group manifold.

The difference between the two cases is visualized in Fig. 1. In this figure the big red sphere stands symbolically for the 15-dimensional group manifold of SU(4). Each point on the sphere represents a certain unitary transformation acting on the 4-dimensional Hilbert space of the two-qubit system. Starting with the identity, which may be located e.g. at the 'north pole' of the sphere, the temporal evolution can be represented by a certain trajectory (blue line) on the group manifold.

Let us now think of an ensemble of such systems, represented by a set of statistically independent trajectories. In the quenched case (a), where a random Hamiltonian is chosen at t = 0 and kept constant during the temporal evolution, these trajectories are straight, advancing at different pace and pointing in different directions, while in case (b) they may be thought of as random walks on the group manifold. At a given final observation time the trajectories of the ensemble terminate in different points, marked by small blue bullets in the figure, each of them representing a unitary transformation. Applying this transformation to a pure initial state one obtains a final pure state with a certain individual entanglement. In the sequel we are interested in the statistical distribution of these final states and their entanglement.
To quantify the entanglement we use two different entanglement measures. For a pure state the entanglement is defined as the von-Neumann entropy of the reduced states

E(t) = −Tr[ρ₁(t) ln ρ₁(t)] = −Tr[ρ₂(t) ln ρ₂(t)], (5)

where ρ_i(t) denotes the time-dependent reduced density matrix of the respective qubit. In cases where the logarithm is too difficult to evaluate we resort to the so-called linear entropy

L(t) = 1 − Tr[ρ₁²(t)] = 1 − Tr[ρ₂²(t)] (6)
as an alternative entanglement measure. Note that both measures can be obtained from the more general Tsallis entanglement entropy [3]

E_q(t) = (1 − Tr[ρ₁^q(t)])/(q − 1) (7)

in the limits q → 1 and q = 2, respectively.
Furthermore, the Rényi entanglement entropy [4]

H_q(t) = log Tr[ρ₁^q(t)] / (1 − q) (8)

is also of interest. This entropy measure likewise reduces to the von-Neumann entropy in the limit q → 1.
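As a concrete illustration of these measures, here is a minimal numerical sketch (Python/NumPy; the helper names are ours, not from the paper) computing the reduced density matrix of a two-qubit pure state and the entropies of Eqs. (5)-(8):

```python
import numpy as np

def reduced_rho1(psi):
    """Partial trace over the second qubit of a two-qubit pure state |psi>."""
    m = psi.reshape(2, 2)              # m[i, j] = <ij|psi>
    return m @ m.conj().T              # rho_1 = Tr_2 |psi><psi|

def von_neumann(rho):
    """E = -Tr[rho ln rho] (Eq. (5)), computed from the eigenvalues."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                   # convention: 0 ln 0 = 0
    return float(-np.sum(w * np.log(w)))

def linear_entropy(rho):
    """L = 1 - Tr[rho^2] (Eq. (6)), the Tsallis entropy at q = 2."""
    return float(1.0 - np.real(np.trace(rho @ rho)))

def renyi(rho, q):
    """H_q = log Tr[rho^q] / (1 - q) (Eq. (8))."""
    w = np.linalg.eigvalsh(rho)
    return float(np.log(np.sum(w**q)) / (1.0 - q))

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)     # (|00> + |11>)/sqrt(2)
rho1 = reduced_rho1(bell)
print(von_neumann(rho1))    # ln 2: maximal for two qubits
print(linear_entropy(rho1)) # 1/2: maximal linear entropy
```

For the Bell state of Eq. (2) the reduced state is maximally mixed, so all these measures attain their two-qubit maxima, while for a separable state such as |11⟩ they vanish.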
Our main results are the following. In the first case (a), where a temporally constant Hamiltonian is randomly chosen, the mean entanglement is expected to saturate at a certain value for t → ∞. Our findings confirm this expectation, but surprisingly we observe that the average entanglement overshoots: it first increases, then reaches a local maximum, then slightly decreases again before it finally saturates at some stationary value.
In the second case (b), where the Hamiltonian changes randomly as a function of time, the average entanglement saturates as well, although generally at a different level. This saturation level, which is the average entanglement of a unitarily invariant distribution of 2-qubit states, has been computed previously in Refs. [5, 6, 7, 8]. Here we investigate the actual temporal behavior of the entanglement before it reaches this plateau. As a by-product, we compute the metric tensor and its inverse on the group manifold as well as the corresponding Laplace-Beltrami operator (see supplemental material), which to our knowledge have not been published before.
## 2 Random unitary transformations in four dimensions
### 2.1 Representation of SU(4) transformations
In what follows we use a particular representation of the group SU(4) which was originally introduced by Tilma et al. in Ref. [9]. As reviewed in the Appendix, the group elements are generated by the 15 generalized Gell-Mann matrices λ₁, …, λ₁₅, allowing one to parametrize unitary transformations by
U(α) = e^{iλ₃α₁} e^{iλ₂α₂} e^{iλ₃α₃} e^{iλ₅α₄} e^{iλ₃α₅} e^{iλ₁₀α₆} e^{iλ₃α₇} e^{iλ₂α₈} × e^{iλ₃α₉} e^{iλ₅α₁₀} e^{iλ₃α₁₁} e^{iλ₂α₁₂} e^{iλ₃α₁₃} e^{iλ₈α₁₄} e^{iλ₁₅α₁₅}, (9)
where the 15 Euler-like angles vary in certain ranges specified in (80). Applying such a unitary transformation to the non-entangled initial state one obtains the density matrix
ρ(α) = U(α) ρ(0) U†(α) (10)

with the components

ρ₁₁(α) = cos²(α₂) cos²(α₄) sin²(α₆)
ρ₁₂(α) = −(1/2) e^{2iα₁} cos²(α₄) sin(2α₂) sin²(α₆)
ρ₁₃(α) = −(1/2) e^{i(α₁+α₃)} cos(α₂) sin(2α₄) sin²(α₆)
ρ₁₄(α) = e^{i(α₁+α₃+α₅)} cos(α₂) cos(α₄) cos(α₆) sin(α₆)
ρ₂₂(α) = cos²(α₄) sin²(α₂) sin²(α₆)
ρ₂₃(α) = e^{−i(α₁−α₃)} cos(α₄) sin(α₂) sin(α₄) sin²(α₆)
ρ₂₄(α) = −e^{−i(α₁−α₃−α₅)} cos(α₄) cos(α₆) sin(α₂) sin(α₆)
ρ₃₃(α) = sin²(α₄) sin²(α₆)
ρ₃₄(α) = −e^{iα₅} cos(α₆) sin(α₄) sin(α₆)
ρ₄₄(α) = cos²(α₆). (11)

Remarkably, for the initial state ρ(0) = |11⟩⟨11| this density matrix depends only on six angles out of 15. Since the observables investigated in this paper are invariant under local unitary transformations, any pure separable initial state will give the same result. Therefore, without loss of generality we can restrict ourselves to ρ(0) = |11⟩⟨11|, taking advantage of the low number of parameters in this particular case.

### 2.2 Computing averages on the SU(4) manifold

In the following section we will consider an ensemble of trajectories of unitary transformations generated by random interactions. Using the above representation, each trajectory can be parametrized by a time-dependent vector of Euler angles α(t). A statistical ensemble of trajectories is therefore characterized by a probability density p(α, t) to find a unitary transformation with the Euler angles α at a given time t. For a given probability density one can compute the ensemble average of any function f(α) (such as the density matrix ρ(α) or the entanglement E(α)) by integrating over the complete parameter space of the manifold weighted by p(α, t):

⟨f(t)⟩_α = (1/V_SU(4)) ∫_{V_SU(4)} p(α, t) f(α) dV_SU(4). (12)

Here V_SU(4) is the integrated group volume, which serves as a normalization factor, while

dV_SU(4) = μ(α) ∏_{j=1}^{15} dα_j (13)

denotes the volume element on the group manifold. The actual integration measure is defined by the function μ(α), which depends on the chosen representation.
Here we use the uniform measure, also known as the Haar measure [10], which is by itself invariant under unitary transformations (for example, in spherical coordinates the invariant measure of the rotation group reduces to the familiar solid-angle element). In the present case of SU(4) with the parametrization defined above the Haar measure is given by [9]

μ(α) = sin(2α₂) sin(α₄) sin⁵(α₆) sin(2α₈) sin³(α₁₀) sin(2α₁₂) cos³(α₄) cos(α₆) cos(α₁₀).

The total group volume, first computed by Marinov [11], is then given by

V_SU(4) = ∫ dV_SU(4) = ∫dα₁ ⋯ ∫dα₁₅ μ(α) = √2 π⁹/3. (14)

In summary, averages over the SU(4) manifold weighted by the probability density p(α, t) can be carried out by computing the 15-dimensional integral

⟨f(t)⟩_α = (3/(√2 π⁹)) ∫dα₁ ⋯ ∫dα₁₅ μ(α) p(α, t) f(α), (15)

with the measure above and the integration ranges specified in (80). The uniform Haar measure corresponds to taking p = 1. The transformed state is still pure but generally entangled. Being interested in the average entanglement of states generated by random unitary transformations, it makes a difference whether the entanglement is computed before taking the average over α or vice versa, as will be discussed in the following.

### 2.3 Average of the entanglement

Let us first discuss the case of computing the entanglement before taking the average over all α. In this case one has to compute the reduced density matrix of the first qubit for given α, defined as the partial trace

σ(α) = Tr₂[ρ(α)]. (16)

For the initial state ρ(0) = |11⟩⟨11| this is a 2×2 matrix with the elements

σ₁₁(α) = cos²(α₄) sin²(α₆)
σ₁₂(α) = −(1/2) e^{i(α₁+α₃)} sin(2α₄) sin²(α₆) cos(α₂) − (1/2) e^{−i(α₁−α₃−α₅)} sin(α₂) sin(2α₆) cos(α₄)
σ₂₂(α) = cos²(α₆) + sin²(α₄) sin²(α₆). (17)

In general the reduced density matrix is no longer pure, and its von-Neumann entropy quantifies the entanglement between the two qubits.
In order to compute the entropy we determine the eigenvalues of σ(α), which are given by

κ_{1,2}(α) = 1/2 ± (1/16) [ 256 sin(2α₂) sin(α₄) sin³(α₆) cos²(α₄) cos(2α₁ − α₅) cos(α₆) − 24 sin²(α₆) cos(2α₂) + cos(2α₆)(8 − 40 sin²(α₆) cos(2α₂)) − 32 sin²(2α₆) cos²(α₂) cos(2α₄) + 32 sin²(α₂) sin⁴(α₆) cos(4α₄) + 6 cos(4α₆) + 50 ]^{1/2}. (18)

Having determined these eigenvalues, the entanglement of ρ(α) is given by

E(α) = −∑_{i=1}^{2} κ_i ln κ_i. (19)

Finally, the entanglement has to be averaged over all trajectories (see Eq. (51)), i.e.

Ē = ⟨E(α)⟩_α. (20)

However, if the average of the von-Neumann entropy is too difficult to compute, we will also use the linear entropy

L(α) = E₂(α) = 1 − ∑_{i=1}^{2} κ_i² (21)

as an alternative entanglement measure.

### 2.4 Entanglement of the average

Alternatively, we may first compute the average density matrix ⟨ρ⟩ and then determine the entanglement of the resulting mixed state. To this end a suitable entanglement measure is needed. An interesting quantity in this context is Wootters' concurrence [12], defined by

C(ρ) = max(0, λ₁ − λ₂ − λ₃ − λ₄), (22)

where λ₁ ≥ λ₂ ≥ λ₃ ≥ λ₄ are the decreasingly sorted square roots of the eigenvalues of the matrix

Λ = ρ (σ_y ⊗ σ_y) ρ* (σ_y ⊗ σ_y). (23)

In this expression σ_y is the Pauli matrix, while ρ* denotes the complex conjugate of ρ without taking the transpose. From the concurrence one can easily compute the entanglement of formation of the mixed state, which is given by

E_F(ρ) = −b ln b − (1 − b) ln(1 − b), (24)

where b = (1 + √(1 − C²))/2.

## 3 Quenched random interactions

In the case (a) of quenched randomness each element of the ensemble is associated with a time-independent random Hamiltonian H. Since the spectral decomposition

H = ∑_{j=1}^{4} E_j |ϕ_j⟩⟨ϕ_j| (25)

of a randomly chosen Hamiltonian is always non-degenerate, the time evolution operator can be written as

U(t) = e^{−iHt} = ∑_{j=1}^{4} e^{−iE_j t} |ϕ_j⟩⟨ϕ_j|. (26)

Hence the state of the system evolves as

ρ(t) = U(t) ρ(0) U†(t) = ∑_{j,k=1}^{4} e^{−i(E_j − E_k)t} ⟨ϕ_j|ρ(0)|ϕ_k⟩ |ϕ_j⟩⟨ϕ_k|, (27)

where ρ(0) denotes the initial state.
The Hamiltonian itself has to be drawn from a certain probabilistic ensemble of Hermitian random matrices [2, 13]. Here the most natural choice is again the Gaussian unitary ensemble (GUE). This ensemble has the nice property that the probability distributions for the eigenvalues and the eigenvectors factorize and thus can be treated independently. More specifically, the eigenvalues are known to be distributed as

P(E₁, …, E₄) ∝ e^{−A ∑_j E_j²} ∏_{n>m} (E_n − E_m)², (28)

where A is a constant determining the width of the energy fluctuations and therewith the time scale of the temporal evolution. In the following the corresponding average over the energies will be denoted by ⟨…⟩_E. On the other hand, the orthonormal set of eigenvectors is randomly oriented in the four-dimensional Hilbert space according to the Haar measure, independent of the eigenvalues. If one defines the qubit basis

{|1⟩, |2⟩, |3⟩, |4⟩} := {|00⟩, |01⟩, |10⟩, |11⟩} (29)

this average can be carried out by setting

|ϕ_j⟩ := U†(α) |j⟩ (30)

and integrating over all Euler angles according to the Haar measure (see Appendix B). This average will be denoted by ⟨…⟩_α. The total GUE average thus factorizes as

⟨…⟩_GUE = ⟨…⟩_E ⟨…⟩_α. (31)

### 3.1 Entanglement of the averaged density matrix

Let us now compute the average density matrix

⟨ρ(t)⟩_GUE = ∑_{j,k=1}^{4} ⟨e^{−i(E_j − E_k)t}⟩_E ⟨|ϕ_j⟩⟨ϕ_j| ρ(0) |ϕ_k⟩⟨ϕ_k|⟩_α =: ∑_{j,k=1}^{4} R_{jk} T_{jk}. (32)

First we compute the average over the energies

R_{jk} = (1/N) ∫_{−∞}^{+∞} dE₁ ⋯ dE₄ P_GUE(E₁, …, E₄) e^{−i(E_j − E_k)t}, (33)

where N is the normalization factor. This leads us to the result

R_{jk} = δ_{jk} + (1 − δ_{jk}) f(τ), (34)

where we defined the scaled time

τ := t/√(2A) (35)

and the function

f(τ) = (1/72) e^{−τ²} (−2τ¹⁰ + 25τ⁸ − 128τ⁶ + 276τ⁴ − 288τ² + 72). (36)

Thus Eq. (32) reduces to

⟨ρ(t)⟩_GUE = f(τ) ∑_{j,k=1}^{4} T_{jk} + (1 − f(τ)) ∑_{j=1}^{4} T_{jj}. (37)

What remains is to determine the operators

T_{jk} = ⟨|ϕ_j⟩⟨ϕ_j| ρ(0) |ϕ_k⟩⟨ϕ_k|⟩_α = ⟨U†(α)|j⟩⟨j|U(α) ρ(0) U†(α)|k⟩⟨k|U(α)⟩_α. (38)

Obviously, the first sum in Eq.
(37) is given by

∑_{j,k=1}^{4} T_{jk} = ⟨∑_j |ϕ_j⟩⟨ϕ_j| ρ(0) ∑_k |ϕ_k⟩⟨ϕ_k|⟩_α = ρ(0), (39)

since ∑_j |ϕ_j⟩⟨ϕ_j| = 1. As for the second sum in Eq. (37), we note that the distribution of eigenvectors is invariant under a permutation of the basis vectors |j⟩, hence the four operators T_jj coincide. Moreover, under a unitary transformation V they transform as

V T_jj V† = ⟨U†(α)|j⟩⟨j|U(α) (V ρ(0) V†) U†(α)|j⟩⟨j|U(α)⟩_α, (40)

where we have used the invariance of the GUE eigenvectors under the replacement U(α) → U(α)V†. This means that T_jj is invariant under V if and only if V commutes with the initial state ρ(0). For a pure initial state this implies that T_jj has to be a linear combination of the identity and the initial state itself, i.e. T_jj = a·1 + b·ρ(0). The linear coefficients a and b can be determined as follows. On the one hand, the identity

1 = Tr[⟨ρ(t)⟩_GUE] = f(τ) + (1 − f(τ)) Tr[∑_{j=1}^{4} T_jj] (41)

implies that Tr[∑_j T_jj] = 4(4a + b) = 1, hence 4a + b = 1/4. On the other hand, we note that

Tr[ρ(0) T_jj] = ⟨Tr[ρ(0)|ϕ_j⟩⟨ϕ_j| ρ(0) |ϕ_j⟩⟨ϕ_j|]⟩_α = ⟨⟨ϕ_j|ρ(0)|ϕ_j⟩²⟩_α (42)

is invariant under unitary transformations of ρ(0) and independent of j, hence we may choose ρ(0) = |11⟩⟨11| and j = 4 to obtain

Tr[ρ(0) T_jj] = ⟨|⟨4|ϕ₄⟩|⁴⟩_α = ⟨cos⁴(α₆)⟩_α = 1/10, (43)

giving a + b = 1/10 and therefore a = b = 1/20. Therefore, we arrive at the convex combination of 1 and ρ(0)

ρ̄(t) = ⟨ρ(t)⟩_GUE = ((1 − f(τ))/5)·1 + ((1 + 4f(τ))/5)·ρ(0) (44)

with f(τ) given in Eq. (36) and τ = t/√(2A), which holds for any pure initial state ρ(0). As expected, the averaged state lies on the segment between the initial state and the maximally mixed state due to the symmetries of the Haar measure of SU(4).

Having computed the mixed state of the ensemble we can now compute the corresponding entanglement of formation as a function of time. For a non-entangled initial pure state we find that E_F = 0 for all times. However, if we start from the Bell state (2) with the initial entanglement E = ln 2, we find numerically that the entanglement first decreases and vanishes at a finite scaled time (see Fig. 2).
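The concurrence-based entanglement of formation of Eqs. (22)-(24) is straightforward to evaluate numerically. A sketch in Python/NumPy (our own code and naming, not from the paper; logarithms are natural, matching Eq. (24)):

```python
import numpy as np

SY = np.array([[0, -1j], [1j, 0]])
YY = np.kron(SY, SY)                        # sigma_y ⊗ sigma_y

def concurrence(rho):
    """Wootters concurrence C = max(0, l1 - l2 - l3 - l4), Eqs. (22)-(23)."""
    Lam = rho @ YY @ rho.conj() @ YY        # rho* = conjugate, no transpose
    lam = np.sort(np.sqrt(np.abs(np.linalg.eigvals(Lam))))[::-1]
    return float(max(0.0, lam[0] - lam[1] - lam[2] - lam[3]))

def entanglement_of_formation(rho):
    """E_F = -b ln b - (1-b) ln(1-b) with b = (1 + sqrt(1-C^2))/2, Eq. (24)."""
    C = concurrence(rho)
    b = 0.5 * (1.0 + np.sqrt(max(0.0, 1.0 - C * C)))
    if b <= 0.0 or b >= 1.0:
        return 0.0
    return float(-b * np.log(b) - (1 - b) * np.log(1 - b))

phi = np.array([1, 0, 0, 1]) / np.sqrt(2)   # Bell state of Eq. (2)
rho_bell = np.outer(phi, phi.conj())
print(concurrence(rho_bell))                 # 1: maximally entangled
print(entanglement_of_formation(rho_bell))   # ln 2
```

For the pure Bell state the concurrence is 1 and E_F = ln 2, while for any separable state both vanish; applying these functions to the mixed state of Eq. (44) reproduces the behavior shown in Fig. 2.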
### 3.2 Average of the individual entanglement
Instead of computing the entanglement of the average density matrix, let us now compute the average of the individual entanglement of each trajectory, i.e. the entanglement is computed before taking the GUE average. Although the von-Neumann entanglement entropy of the individual pure states would be straightforward to compute (see (19)), we did not succeed in computing its average. For this reason let us consider the GUE average of the linear entropy

⟨L(t)⟩_GUE = 1 − ⟨Tr[ρ₁²(t)]⟩_GUE, (45)

where ρ₁(t) denotes the reduced density matrix of the first qubit. In the qubit basis (29) this can be rewritten as

⟨L(t)⟩_GUE = 1 − ∑_{μ,β,γ,δ=1}^{2} ⟨⟨μβ|ρ(t)|γβ⟩ ⟨γδ|ρ(t)|δμ⟩⟩_GUE. (46)

Inserting (27) and exploiting again that the GUE average factorizes, we get

⟨L(t)⟩_GUE = 1 − ∑_{μ,β,γ,δ=1}^{2} ∑_{j,k,l,m=1}^{4} ⟨e^{−i(E_j−E_k+E_l−E_m)t}⟩_E ⟨c^{μβ*}_j ⟨ϕ_j|ρ(0)|ϕ_k⟩ c^{γβ}_k c^{γβ*}_l ⟨ϕ_l|ρ(0)|ϕ_m⟩ c^{δμ}_m⟩_α, (47)

where c^{μβ}_j := ⟨ϕ_j|μβ⟩. For the initially non-entangled state ρ(0) = |11⟩⟨11| this expression reduces further to

⟨L(t)⟩_GUE = 1 − ∑_{μ,β,γ,δ=1}^{2} ∑_{j,k,l,m=1}^{4} R_{jklm}(τ) T^{μβγδ}_{jklm}, (48)

where R_{jklm}(τ) = ⟨e^{−i(E_j−E_k+E_l−E_m)τ√(2A)}⟩_E denotes the energy average, T^{μβγδ}_{jklm} = ⟨c^{μβ*}_j c^{11}_j c^{11*}_k c^{γβ}_k c^{γβ*}_l c^{11}_l c^{11*}_m c^{δμ}_m⟩_α the eigenvector average, and τ = t/√(2A) the scaled time. As shown in Appendix D, these averages can be computed directly by integration over the given probability distributions of the GUE, leading us to the final result
⟨L(τ)⟩_GUE = 13/70 − (1/630) e^{−2τ²} (32τ⁸ − 128τ⁶ + 168τ⁴ − 72τ² + 9)
− (1/840) e^{−τ²} (−2τ¹⁰ + 25τ⁸ − 128τ⁶ + 276τ⁴ − 288τ² + 72)
− (1/420) e^{−3τ²} (−54τ¹⁰ + 387τ⁸ − 832τ⁶ + 828τ⁴ − 288τ² + 24)
− (1/315) e^{−4τ²} (−256τ¹⁰ + 800τ⁸ − 1024τ⁶ + 552τ⁴ − 144τ² + 9). (49)
This function is plotted in Fig. 3. As one can see, the linear entanglement entropy (black line) first increases rapidly, then reaches a local maximum, then decreases again and finally saturates at the value

lim_{τ→∞} ⟨L(τ)⟩_GUE = 13/70 ≃ 0.1857. (50)

Because it would take much more effort to calculate the linear entropy analytically for different initial states, we used numerical methods instead. The results are compared in Fig. 3. As one can see clearly, all curves approach their limiting values at roughly the same fixed time, and the curves do not intersect.
Fig. 1 explains the meaning of this result: each single trajectory of the ensemble on the group manifold is deterministic for a given initial point, direction and velocity. Since all members of the ensemble share the same initial starting point on the upper half of the sphere, the probability of finding the walkers can be slightly higher on the upper half in the long-time limit, because all trajectories periodically return to this point. This is why the limit depends on the initial state and therefore deviates from the Haar measure. The bump can be seen as a transient state in which the probability distribution appears almost uniformly random before saturating.

Note that, in contrast to the case discussed before (see Fig. 2), the system remembers its initial state, saturating at different levels of entanglement in the limit τ → ∞.
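These quenched-disorder results can be cross-checked numerically. The sketch below (Python/NumPy; our own code, not from the paper) first evaluates the closed form of Eq. (49) at τ = 0 and at large τ, and then confirms the saturation value 13/70 by directly averaging the linear entropy over random GUE Hamiltonians at a late time, where the exponential terms have died out and the choice of GUE normalization no longer matters:

```python
import numpy as np

def L_avg(tau):
    """Closed-form GUE-averaged linear entropy <L(tau)>, Eq. (49)."""
    t2 = tau * tau
    p1 = 32*tau**8 - 128*tau**6 + 168*tau**4 - 72*t2 + 9
    p2 = -2*tau**10 + 25*tau**8 - 128*tau**6 + 276*tau**4 - 288*t2 + 72
    p3 = -54*tau**10 + 387*tau**8 - 832*tau**6 + 828*tau**4 - 288*t2 + 24
    p4 = -256*tau**10 + 800*tau**8 - 1024*tau**6 + 552*tau**4 - 144*t2 + 9
    return (-np.exp(-2*t2)*p1/630 - np.exp(-t2)*p2/840
            - np.exp(-3*t2)*p3/420 - np.exp(-4*t2)*p4/315 + 13/70)

print(L_avg(0.0))     # 0: the initial state |11> carries no entanglement
print(L_avg(40.0))    # 13/70 ~ 0.1857: the saturation value of Eq. (50)

# Monte Carlo check of the plateau: evolve |11> with fixed random GUE
# Hamiltonians to a late time and average the linear entropy.
rng = np.random.default_rng(0)

def gue(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2          # Hermitian, unitarily invariant

def linear_entropy(psi):
    m = psi.reshape(2, 2)
    rho1 = m @ m.conj().T
    return 1.0 - np.real(np.trace(rho1 @ rho1))

psi0 = np.array([0, 0, 0, 1.0 + 0j])     # |11>
t, samples = 40.0, 4000
acc = 0.0
for _ in range(samples):
    E, V = np.linalg.eigh(gue(4))        # quenched: one H per trajectory
    psi_t = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))
    acc += linear_entropy(psi_t)
print(acc / samples)                      # close to 13/70
```

The Monte-Carlo mean agrees with the stationary value 13/70 within statistical error, independently of how the width parameter A is chosen, since only the dephased terms survive at late times.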
## 4 Time-dependent random interactions
Let us now consider the case (b) of a temporally varying Hamiltonian, where the state vectors of the ensemble perform a unitary random walk on the SU(4) manifold. In this case the quantity of interest is the probability distribution p(α, t) to find the time evolution operator with the Euler angles α at time t. This probability distribution allows one to compute the ensemble average of any function f(α) (such as the density matrix ρ(α) or the entanglement E(α)) by integration over the complete volume weighted by p(α, t), i.e. we have to compute the integral

⟨f(t)⟩ = (1/V_SU(4)) ∫_{V_SU(4)} f(α) p(α, t) dV_SU(4) (51)

over the ranges specified in (80), where dV_SU(4) denotes the volume element according to the Haar measure defined in Eq. (13).
### 4.1 Expected average entanglement of a uniform distribution
Before studying the temporal evolution in detail, let us consider the limit t → ∞, where we expect the state vectors to be uniformly distributed on the group manifold. Since such an ensemble is by itself invariant under unitary transformations, the state vectors are distributed according to a Gaussian unitary ensemble (GUE). Starting from this observation, Page [5] conjectured a closed expression for the expected average entanglement of an arbitrary bipartite quantum system, which was later proven rigorously by Foong, Kanno, and Sen [6, 7]. This formula describes the average entanglement of a random pure state between two subsystems with Hilbert space dimensions m and n ≥ m:

⟨E_{m,n}⟩_GUE = (∑_{k=n+1}^{mn} 1/k) − (m − 1)/(2n). (52)

Applying this formula to a random two-qubit system, one obtains

⟨E⟩_GUE = ⟨E_{2,2}⟩ = 1/3. (53)

This is the average entanglement between two qubits in a randomly chosen pure state.
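Eq. (53) is easy to verify by Monte Carlo, since Haar-random pure states can be drawn as normalized complex Gaussian vectors (a sketch in Python/NumPy; names are ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def haar_state(dim):
    """A normalized complex Gaussian vector is a Haar-random pure state."""
    v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return v / np.linalg.norm(v)

def entanglement(psi):
    """Von Neumann entropy of the reduced single-qubit state."""
    m = psi.reshape(2, 2)
    w = np.linalg.eigvalsh(m @ m.conj().T)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))

mean_E = np.mean([entanglement(haar_state(4)) for _ in range(20000)])
print(mean_E)   # close to 1/3, Page's value <E_{2,2}> for two qubits
```

With 2·10⁴ samples the empirical mean reproduces 1/3 to within a few parts in a thousand.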
### 4.2 Heat conduction equation on the SU(4) manifold
In order to compute the probability distribution p(α, t) one has to solve the heat conduction equation

∂/∂t p(α, t) = D Δ_α p(α, t) (54)

on the curved group manifold. Here D denotes the diffusion constant, while Δ_α is the so-called Laplace-Beltrami operator, which generalizes the ordinary Laplacian to a curved space. As stated in [17], this Laplace-Beltrami operator is the Markov generator of unitary Brownian motion (see Appendix F). On a Riemannian manifold with the metric tensor g_ij the Laplace-Beltrami operator is given by

Δf = (1/√|g|) ∂_i(√|g| g^{ij} ∂_j f), (55)

where g^{ij} are the components of the inverse metric tensor and |g| = det(g_ij).

To the best of our knowledge the explicit expressions for g_ij, g^{ij} and Δ have not been published before. This is perhaps due to the fact that the formulas are so complex that even powerful computer algebra systems such as Mathematica are not able to compute the inverse metric directly. Instead one has to invert the matrix manually element by element. Our explicit results are included in the supplemental material attached to this paper.
### 4.3 Early-time expansion
The solution p(α, t) of the heat conduction equation (54), as well as the averaged function ⟨f(t)⟩, can be expanded as a Taylor series around t = 0:

p(α, t) = ∑_{n=0}^{∞} (tⁿ/n!) ∂ⁿ/∂tⁿ p(α, t)|_{t=0}, (56)

⟨f(t)⟩ = ∑_{n=0}^{∞} (tⁿ/n!) ∂ⁿ/∂tⁿ ⟨f(t)⟩|_{t=0}. (57)

Using (51) we can compare the coefficients of the two Taylor series, giving

∂ⁿ/∂tⁿ ⟨f(t)⟩|_{t=0} = (1/V_SU(4)) ∫ dV_SU(4) f(α) ∂ⁿ/∂tⁿ p(α, t)|_{t=0}, (58)

where dV_SU(4) denotes the volume element defined in Eq. (13). Using the heat equation (54) we can replace the partial derivative, obtaining

∂ⁿ/∂tⁿ ⟨f(t)⟩|_{t=0} = (Dⁿ/V_SU(4)) ∫ dV_SU(4) f(α) Δⁿ_α p(α, t)|_{t=0}. (59)

The r.h.s. is an integral over derivatives of the probability density evaluated at t = 0. If all trajectories start at α₀, it is easy to see that this probability density at t = 0 is given by

p(α, t=0) = (V_SU(4)/√|g|) δ(α − α₀). (60)

Inserting this expression into (59), the integral can be evaluated by partial integration, giving

∂ⁿ/∂tⁿ ⟨f(t)⟩|_{t=0} = Dⁿ Δⁿ_α f(α)|_{α=α₀}. (61)
### 4.4 Average of the density matrix
Using (10) we calculate the derivatives Δⁿ_α ρ(α). We find that

Δ_α ρ(α) = −8 ρ(α) + 2·1. (62)

Therefore we can express all higher derivatives of ρ(α) in terms of the first derivative (although this remarkable property calls for a deeper reason, we have no convincing explanation so far):

∂ⁿ/∂tⁿ ⟨ρ(t)⟩|_{t=0} = Dⁿ (−8)^{n−1} Δ_α ρ(α)|_{α=α₀}. (63)

Hence, the solution for the averaged density matrix can be written as

⟨ρ(t)⟩ = (1/4)·1 + (ρ(α₀) − (1/4)·1) e^{−8Dt}. (64)

Using the non-entangled initial state (which corresponds to taking α₀ = 0) this result specializes to

⟨ρ(t)⟩_{α₀=0} = diag(1/4 − (1/4)e^{−8Dt}, 1/4 − (1/4)e^{−8Dt}, 1/4 − (1/4)e^{−8Dt}, 1/4 + (3/4)e^{−8Dt}). (65)

As can be seen, this density matrix relaxes exponentially and becomes fully mixed in the limit t → ∞.
### 4.5 Average of the linear entropy
The same calculation can be carried out for the linear entropy defined in (21). Here we find that

Δ_α L(α) = −20 L(α) + 4. (66)

For this reason, the calculation is completely analogous, giving

⟨L(t)⟩ = 1/5 + (L(α₀) − 1/5) e^{−20Dt}. (67)

The result for a non-entangled initial state is therefore

⟨L(t)⟩ = 1/5 − (1/5) e^{−20Dt}. (68)

For a fully entangled initial state we get

⟨L(t)⟩ = 1/5 + (3/10) e^{−20Dt}.
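The stationary value 1/5 in Eqs. (67)-(68) can be checked by simulating the unitary random walk directly: at each step the state is kicked by exp(−i√dt H) with a fresh GUE draw H, a discretization of Brownian motion on the group. The step size below fixes an arbitrary diffusion constant D, so only the long-time plateau, not the rate 20D, is compared (a sketch; names and parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(2)

def gue(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

def linear_entropy(psi):
    m = psi.reshape(2, 2)
    rho1 = m @ m.conj().T
    return 1.0 - np.real(np.trace(rho1 @ rho1))

dt, steps, walkers = 0.05, 300, 200
psi0 = np.array([0, 0, 0, 1.0 + 0j])     # |11>, so L(0) = 0
total = 0.0
for _ in range(walkers):
    psi = psi0.copy()
    for _ in range(steps):
        # temporal randomness: a fresh random Hamiltonian at every step
        E, V = np.linalg.eigh(gue(4))
        psi = V @ (np.exp(-1j * np.sqrt(dt) * E) * (V.conj().T @ psi))
    total += linear_entropy(psi)
print(total / walkers)   # approaches 1/5, the Haar-average linear entropy
```

Since the stationary distribution of this walk is the Haar measure, the ensemble-averaged linear entropy relaxes to 1/5 regardless of the chosen step size, in agreement with Eq. (68).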
|
2020-06-01 04:28:17
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9411394000053406, "perplexity": 506.3549712595233}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347414057.54/warc/CC-MAIN-20200601040052-20200601070052-00168.warc.gz"}
|
https://qnap.ru/download/image/35010
|
|
2021-09-18 01:38:30
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9060048460960388, "perplexity": 187.9846768165105}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056120.36/warc/CC-MAIN-20210918002951-20210918032951-00108.warc.gz"}
|
https://pos.sissa.it/cgi-bin/reader/contribution.cgi?id=274/002
|
# PoS(HQL 2016)002
Recent Results from the T2K experiment
J. Lopez, on behalf of the T2K collaboration
Contribution: pdf
Abstract
The T2K (Tokai to Kamioka) experiment is a long-baseline neutrino oscillation experiment using the neutrino beamline at the J-PARC facility in Japan. T2K measures the neutrino beam at two near detectors located 280 m from the target and again at Super-Kamiokande, 295 km away. With measurements of the oscillated neutrinos at Super-Kamiokande and constraints from the near detectors, T2K is able to provide high precision measurements of neutrino oscillation parameters. T2K provided the first evidence for a non-zero value of the mixing angle \theta_{13} and has continued improving its results with a joint fit to muon neutrino disappearance and electron neutrino appearance samples. In 2015, T2K also released its first result on antineutrino oscillations. This work provides a summary of recent results in neutrino oscillation physics from T2K and also briefly discusses some of the other work done by T2K.
|
2017-06-28 13:58:43
|
{"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8649982213973999, "perplexity": 3196.1652198343972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128323682.21/warc/CC-MAIN-20170628134734-20170628154734-00483.warc.gz"}
|
https://www.mail-archive.com/[email protected]/msg07657.html
|
# Re: [HACKERS] [PATCHES] Continue transactions after errors in psql
Tom Lane wrote:
Andrew Dunstan <[EMAIL PROTECTED]> writes:
Tom Lane wrote:
I would far rather see people code explicit markers around statements
whose failure can be ignored. That is, a script that needs this
behavior ought to look like
BEGIN;
\begin_ignore_error
DROP TABLE foo;
\end_ignore_error
CREATE ...
...
COMMIT;
That's a lot of work.
How so? It's a minuscule extension to the psql patch already coded:
just provide backslash commands to invoke the bits of code already
written.
I meant it's a lot to type ;-)
In this particular case I would actually like to see us provide "DROP IF EXISTS ..." or some such.
That's substantially more work, with substantially less scope of
applicability: it would only solve the issue for DROP.
True. I wasn't suggesting it as an alternative in the general case. I still think it's worth doing, though - I have often seen it requested and can't think of a compelling reason not to provide it. But maybe that's off topic ;-)
cheers
andrew
---------------------------(end of broadcast)---------------------------
TIP 6: Have you searched our list archives?
http://archives.postgresql.org
|
2018-12-10 11:06:37
|
{"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33544158935546875, "perplexity": 4839.513680399972}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823322.49/warc/CC-MAIN-20181210101954-20181210123454-00525.warc.gz"}
|
http://gbxt.cinziacioni.it/jakes-fading-simulation-matlab.html
|
Rayleigh fading simulation dates back to Smith in 1975, with the paper titled "A computer generated multipath fading simulation." If you set K-factors to a row vector, the discrete path corresponding to a positive element of the K-factors vector is a Rician fading process with a Rician K-factor specified by that element.
Figure 1: flow chart of the MATLAB simulation.
Related course topics include pulse-shaping techniques, matched filtering and partial-response signaling, and the design and implementation of linear equalizers (zero-forcing and MMSE) and their use in a communication link. OFDM has replaced older communication technologies in many systems such as wireless networks and 4G mobile communications. NetSim can be interfaced with MATLAB offline or online (at run time); IEEE 1609 defines the architecture and provides the standards for Wireless Access in Vehicular Environments (WAVE), covering vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) wireless communications. Numerical examples of a satellite link design are shown using QPSK and/or 8PSK when the bit rate Rb is greater than the channel bandwidth Wc (a band-limited channel). Despite the popularity and wide application of the Jakes model for about 30 years, several important limitations have been identified, especially the non-zero cross-correlation (CCF) between the in-phase and quadrature components of its output.
Small-scale fading is often handled in a wireless system with diversity schemes. The default value of the Doppler spectrum parameter is the Jakes Doppler spectrum, doppler('Jakes'). Figure 3 illustrates the relationship between large-scale and small-scale fading; based on multipath time-delay spread, small-scale fading is classified as flat fading or frequency-selective fading. A defining characteristic of the mobile wireless channel is the variation of the channel strength over time and over frequency. The Multipath Rayleigh Fading Channel block implements a baseband simulation of a multipath Rayleigh fading propagation channel. Two ways of modifying the classical Jakes fading simulator to generate multiple uncorrelated fading waveforms have been proposed (Guan, School of EEE, Nanyang Technological University): with the popular Jakes fading model, it is difficult to create multiple uncorrelated fading waveforms. One project aims to develop a channel simulator for both the amplitude and the phase processes of a Nakagami-m fading channel in MATLAB. It would be nice if MathWorks modified berfading (perhaps in the next release) so that the theoretical BER accounts for such fading characteristics (in this case, Doppler). A common question: why must the real and imaginary parts of h each have variance equal to 1/2? (So that the complex channel gain has unit average power.)
Figure 4: screenshot of the Rayleigh channel output for Rayleigh fading (simulated using MATLAB).
Figure: autocorrelation function of Rayleigh fading with a maximum Doppler shift of 10 Hz.
One published MATLAB routine generates U uncorrelated Rayleigh fading channels according to the modified Jakes model. To learn how to call the rayChan1 fading channel object to filter the transmitted signal through the channel, see Using Fading Channels. Each path has its own delay and average power gain. The MATLAB function is:

chl_res = rayleighchan(Ts, Fd, Tau, PdB);

where Ts is the sampling time of the input signal in seconds, Fd is the maximum Doppler shift in Hz, Tau is the vector of path delays, and PdB is the vector of average path gains in dB. Relevant large-scale parameters include shadow fading, angle spread, and delay spread. The 8-QAM system is rectangular, with 2 possible values for the x basis vector and 4 possible values for the y basis vector, so the constellation consists of 8 symbols.
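The tapped-delay-line idea behind rayleighchan(Ts, Fd, Tau, PdB) can be sketched outside MATLAB as well. The following NumPy fragment is an illustrative sketch only — the three path delays and the power-delay profile are assumed example values, not from any standard — building a 3-path Rayleigh channel and filtering a toy BPSK signal through it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative power-delay profile (assumed values, not from any standard)
tau = np.array([0, 1, 3])            # path delays in samples
p_db = np.array([0.0, -3.0, -6.0])   # average path gains in dB
p_lin = 10 ** (p_db / 10)

# One Rayleigh tap per path: CN(0, p), so real/imag parts each have variance p/2
taps = np.sqrt(p_lin / 2) * (rng.standard_normal(3) + 1j * rng.standard_normal(3))

# Build the discrete-time impulse response and filter a toy signal through it
h = np.zeros(tau.max() + 1, dtype=complex)
h[tau] = taps
x = rng.choice([-1.0, 1.0], size=1000)   # random BPSK symbols
y = np.convolve(x, h)                    # frequency-selective channel output

avg_power = float(np.sum(p_lin))         # expected mean of sum(|taps|^2)
```

Each call with a fresh random generator draws a new channel realization; the average tap powers follow the dB profile, mirroring how PdB scales the paths in rayleighchan.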
If you assign a 1-by-NP cell array of calls to doppler using any of the specified syntaxes, each path has the Doppler spectrum specified by the corresponding element. One project studies which PSK-based digital modulation scheme (BPSK, QPSK, or GMSK) gives the best BER performance in a multipath fading environment, using MATLAB simulation. The results show that Jakes' simulator does not reproduce some important properties of the physical fading channel; two kinds of improved Jakes simulators for Rayleigh fading channels have therefore been introduced and their second-order statistics analyzed. The second part of the thesis is dedicated to decreasing the simulation time. New sum-of-sinusoids statistical simulation models with correct statistical properties for Rayleigh fading channels have been proposed by Yahong Rosa Zheng and Chengshan Xiao. The simulation supports two kinds of source data, either randomly produced data or an image file: random data is ideal for testing the channel's impact on BER performance and the signal constellation, while an image file gives an intuitive impression and comparison across channels. An LTE system toolbox lets you analyze the throughput of LTE and LTE-Advanced transmission modes (TMs) under different scenarios.
Rician channel model plots are also shown. Available network simulation tools face challenges in providing accurate indoor channel models, three-dimensional (3-D) models, model portability, and effective validation. A common question is how to extract the individual path gains from the channel impulse response. For filter-based fading generation, see Baddour, "Autoregressive modeling for fading channel simulation," IEEE Transactions on Wireless Communications, July 2005. Configure channel objects based on simulation needs. Another paper simulates and analyzes the statistical performance of Nakagami fading. An effort has been made to compare the Rayleigh and Rician fading channel models by MATLAB simulation in terms of source velocity and outage probability.
If you set K-factors to a scalar, the first discrete path is a Rician fading process with a Rician K-factor of K-factors (k is the Rician K-factor in linear scale). The Rayleigh fading model can be simulated using two different methods, described below. As an example, a Rayleigh faded signal was generated with a carrier frequency of 900 MHz (a typical GSM system) and N = 10 NLOS paths. A good understanding of the wireless channel, its key physical parameters, and the modeling issues lays the foundation for the rest of the discussion. For a simple path-loss simulation you can assume values for the two constants A and B, for example LdB = 128. Because of wave-cancellation effects, the instantaneous received power seen by a moving antenna becomes a random variable that depends on the location of the antenna; to illustrate these fading effects for a typical mobile radio channel, several MATLAB script files were written. Here, the extent to which Jakes' simulator adequately models the multipath Rayleigh fading propagation environment is examined. Viewed as a black box, the general usage of the fading simulation component is governed by Equation 2. Channel simulators are powerful tools that permit performance tests of the individual parts of a wireless communication system. The design of the Doppler filter can itself be modeled as an optimization problem.
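The "assume A and B" remark can be made concrete with a log-distance path-loss model plus log-normal shadowing. In this hedged NumPy sketch, A = 128 dB borrows the "LdB = 128" example above, while the slope B and the 8 dB shadowing deviation are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed example constants: L(d) = A + B*log10(d) + X_sigma  (all in dB)
A = 128.0        # path loss at the reference distance of 1 km (dB), assumed
B = 37.6         # 10*n for an assumed path-loss exponent n = 3.76
sigma_db = 8.0   # log-normal shadowing standard deviation (dB), assumed

def path_loss_db(d_km, n_samples=1):
    """Log-distance path loss with log-normal shadowing, in dB."""
    shadowing = sigma_db * rng.standard_normal(n_samples)
    return A + B * np.log10(d_km) + shadowing

# At the 1 km reference distance the mean loss should be close to A
losses = path_loss_db(1.0, 100_000)
```

Averaging many shadowing realizations recovers the deterministic log-distance trend, which is the usual way to sanity-check a shadowing implementation.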
Introduction: the Jakes fading model is a deterministic method for simulating time-correlated Rayleigh fading waveforms. MATLAB is used to simulate wireless fading channel environments characterized by either Doppler spread or delay spread; a Rayleigh fading channel with Doppler shift is considered in this article. The non-convexity of the Doppler-filter design problem has been proved, and a genetic algorithm (GA) used to optimize it, which had not been done before in this area. The Dent model is similar to Jakes' but uses a Hadamard matrix to make the oscillators (somewhat) independent, which gives slightly more realistic results than the Jakes model. The simulation of correlated Nakagami-m fading channels using the sum-of-sinusoids method has also been studied (master's thesis by Dhiraj Dilip Patil). A typical LTE work plan illustrates the scope of such projects:
- MATLAB code for channel estimation in LTE
- MATLAB GUI simulation for channel estimation
- Re-writing the code in C and compiling it on an OMAP-L138 C6000 DSP+ARM processor
- MATLAB integration of the LTE Tx and Rx blocks
- Hardware implementation of the integrated Tx and Rx blocks on a C700 SDR platform
The following section presents Jakes' model with its defining equations.
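The sum-of-sinusoids idea behind Jakes-style simulators can be sketched in NumPy. Note this is a randomized variant — arrival angles and phases drawn uniformly at random, a common modern modification — rather than the classical deterministic Jakes oscillator bank with equally spaced angles.

```python
import numpy as np

rng = np.random.default_rng(2)

def sos_flat_fading(fd, fs, n_samples, n_sin=32):
    """Sum-of-sinusoids Rayleigh fading with maximum Doppler shift fd (Hz)."""
    t = np.arange(n_samples) / fs
    # Random arrival angles and phases (randomized variant; classical Jakes
    # uses deterministic, equally spaced angles instead)
    theta = rng.uniform(0, 2 * np.pi, n_sin)
    phi = rng.uniform(0, 2 * np.pi, n_sin)
    psi = rng.uniform(0, 2 * np.pi, n_sin)
    omega = 2 * np.pi * fd * np.cos(theta)        # per-sinusoid Doppler shift
    i = np.cos(np.outer(t, omega) + phi).sum(axis=1)
    q = np.sin(np.outer(t, omega) + psi).sum(axis=1)
    return (i + 1j * q) / np.sqrt(n_sin)          # roughly unit average power

g = sos_flat_fading(fd=100.0, fs=10_000.0, n_samples=50_000)
mean_power = float(np.mean(np.abs(g) ** 2))
```

Because each sinusoid's Doppler shift is fd·cos(θ), the resulting process is band-limited to ±fd and exhibits the time correlation that a memoryless complex-Gaussian generator lacks.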
The performance is evaluated in the presence of AWGN and Rayleigh fading. One widely circulated implementation begins:

function ray = jakes(fm,fs,M,N_0,index)
% Jakes model of a Rayleigh fading channel
% Written by Chandra Athaudage, 12/10/2001, CUBIN

where fm is the maximum Doppler frequency of the channel, fs is the sampling rate of the fading process, and M is the number of samples of the fading waveform. With channel-measurement equipment borrowed from Telenor R&D, practical measurements of multipath channels have been performed. Closed-form expressions for the capacity per unit bandwidth of fading channels impaired by branch correlation have been derived for optimal power and rate adaptation, constant transmit power, channel inversion with fixed rate, and truncated channel inversion policies, for the maximal-ratio-combining diversity reception case. Jakes' model can likewise generate a Rayleigh fading power profile for body-area-network (BAN) channels. A typical MATLAB exercise: perform a Monte Carlo simulation of a narrowband communication system with diversity and time-varying multipath. See also Nguyen, "Differential Amplify-and-Forward relaying in time-varying Rayleigh fading channels," IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China, 2013. Simulation models of this kind were implemented following the GSM specification of CEPT-COST 207.
The fading caused by multipath propagation in a communication channel challenges the wireless communication engineer who tries to establish a reliable path between transmitter and receiver. In one two-function implementation, the first function generates the fade coefficients that simulate the fading channel (Rayleigh or Rician), and the second computes the BER by the Monte Carlo method for M-PAM modulation.
Figure: screenshot of the noisy Rayleigh channel output for Rayleigh fading (simulated using MATLAB).
Figure: RMS location of the transmitter versus the number of scatterers (100, 1000, 10000).
Parameters used for simulation:
- Coding level: binary (1/0)
- Carrier frequency: 900 MHz
- PN sequence length: 7
- Data length: 100
- Fading channels: Rician, Rayleigh
- Receiver type: BPSK
- Number of receivers: 3 (1 master, 2 slaves)
Gain deeper insight into the different factors that can affect throughput in LTE and LTE-A. Analytical expressions are presented for the autocorrelation and cross-correlation functions of the in-phase and quadrature components. InitTime also controls the timing offset of the fading process. This chapter introduces several Simulink topics, using space-time block coding (STBC) to compensate for the BER degradation due to channel fading. The following settings correspond to a narrowband fading channel with a completely flat frequency response.
The simulation of the channel was done with a sampling rate of ×1 10. The Jakes’ method invoke the central limit theorem to show that the base-band signal received from a multipath fading channel is approximately a complex Gaussian process when the number of paths, L is large. Choose Realistic Channel Property Values. State of California. While random data is ideal to test the channel impact to the BER performance and signal constellation, image file give us an intuitive impression and comparison for different channels. chan = ricianchan(ts,fd,k) constructs a frequency-flat (single path) Rician fading-channel object. imkochar Newbie level 2. This channel model is used to implement real time fading observed in wireless communication system. ANALYSIS OF RAYLEIGH FADING CHANNELS. PHP & Inżynieria elektryczna Projects for $10 -$30. svg using plot2svg. We also plotted the BER vs SNR graphs of both the users and observed the. Second, we developed a Matlab simulator to generate aeronautical channels for air-borne networks. Justin Legarsky Dr. Simulation of QPSK modulation on an AWGN channel: Generating a correlated Rayleigh fading process: Simulation of a Reed-Solomon Block Code: Simulation of SCCCs in an AWGN channel: Simulation of a Multicode CDMA system on an AWGN channel: Using timers to measure execution time: Simulation of turbo equalizer in multipath channels: Tutorials. If increases, the value of the random fading F(x;y) should. The phenomenon of Rayleigh Flat fading and its simulation using Clarke’s model and Young’s model were discussed in the previous posts. A specialization of the comm. Made by Splash talk using MATLAB , converted to. Here are some tips for choosing property values that describe realistic channels:. Haibin Lu. We are going to implement a multipath fading channel simulator. If you are interested in learning how to use CML, please attend the tutorial at ICC-2009. 
Numerical examples of a satellite link design are shown using QPSK and/or 8PSK when the bit rate(Rb)is greater than the channel bandwidth Wc (Band-limited channel). where, fm is the maximum Doppler frequency of the channel fs is the sampling rate of the fading process M is the number of samples of the fading. - Matlab code for channel estimation in LTE - Matlab GUI simulation to channel estimation - Re-write the code using C and compile it on OMAP-L138 C6000 DSP+ARM processor - Matlab integration of LTE Tx and Rx blocks - Hardware implementation of the matlab integrated Tx and Rx blocks on C700 SDR platform. WiMAX capacity estimations are performed in Matlab for a selected channel model. OFDM Detailed OFDM simulation code, there is channel coding, modulation, channel estimation. The MatLab function jakes ralfunc(fm;fc;M;N0;index) provides a good approxima-tion of a Doppler frequency limited Rayleigh fading process based on Jakes Model as a tool for fading channel simulation. Based on your location, we recommend that you select:. There are a ton of resources out there. The data is modulated, encoded and transmitted through a. One of these tests consists of evaluating the system performance when a communication channel is considered. InitTime also controls the fading process timing offset. * Pulse shaping techniques, matched filtering and partial response signaling, Design and implementation of linear equalizers - zero forcing and MMSE equalizers, using them in a communication link and. 5, 1 and >1 respectively. Indeed, the first Kalman filter in Figure 4 uses the pilot symbol , the output of the analysis filter bank , and the latest estimated AR parameters to estimate the fading process , while the second Kalman filter uses the estimated. 
Abstract: this paper presents a simulation of multipath Rayleigh fading and Rician fading channels. The filtered waveform is stored in matrix out, where each column corresponds to the waveform at one of the receive antennas. One MATLAB forum post recommends "Simulation of Flat Fading Using MATLAB for Classroom Instruction" as a well-written simulation of flat-fading channels. Rayleigh fading can be simulated via the Clarke and Gans method or via Jakes' method. A full treatment is beyond the scope of a usenet post, so web searches on Rayleigh fading, the Jakes model, and multipath models are suggested. MIMO-OFDM is a key technology for next-generation cellular communications (3GPP-LTE, Mobile WiMAX, IMT-Advanced) as well as wireless LAN (IEEE 802.11).
You need to know that when you generate complex channel coefficients whose real and imaginary parts are drawn independently from a Gaussian distribution, the magnitude has a Rayleigh distribution and the phase is uniformly distributed between 0 and 2π. The MATLAB function jakes_ralfunc(fm, fc, M, N0, index) provides a good approximation of a Doppler-frequency-limited Rayleigh fading process based on the Jakes model, as a tool for fading-channel simulation. Other scripts cover OQPSK over AWGN and frequency-selective fading channels, and single-path or multipath Rayleigh fading simulation. Fading may be large-scale or small-scale [9]. This section also contains several simulation scenarios for evaluating audio-signal transmission over such channels.
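The variance-1/2 convention questioned earlier can be checked numerically. In this NumPy sketch each quadrature component is N(0, 1/2), so the complex gain has unit average power, a Rayleigh-distributed magnitude (mean σ·sqrt(π/2) = sqrt(π)/2 ≈ 0.886), and a uniform phase.

```python
import numpy as np

rng = np.random.default_rng(5)

n = 200_000
# Real and imaginary parts each N(0, 1/2) so that E[|h|^2] = 1
h = np.sqrt(0.5) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

mag = np.abs(h)        # Rayleigh-distributed with sigma^2 = 1/2
phase = np.angle(h)    # uniform on (-pi, pi]

mean_power = float(np.mean(mag ** 2))   # theory: 1
mean_mag = float(np.mean(mag))          # theory: sqrt(pi)/2 ~= 0.886
```

The empirical moments match the closed-form Rayleigh moments to within Monte Carlo error, confirming why 1/2 (and not 1) is the right per-component variance for unit-power fading.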
Discrete-time simulation of smart-antenna systems in Network Simulator-2 using MATLAB and Octave covers the multipath and fading statistics. In the Jakes simulator, the frequency of the k-th sinusoid is f_k = fd·cos(θ_k) for k = 1, 2, …, N, where fd is the maximum Doppler shift. Rayleigh fading is equivalent to multiplying the I and Q components of the RF signal by zero-mean independent Gaussian variables with identical variance; thus one can generate two appropriately filtered Gaussian noise signals and use them to modulate the carrier and a 90-degree phase-shifted version of it.
The statistical properties of Clarke's fading model with a finite number of sinusoids are analyzed, and an improved reference model is proposed for the simulation of Rayleigh fading channels (see also Lecture 30, "Rayleigh Fading Simulation - Clark and Gans Method, Jakes' Method"). All simulations are in MATLAB. Monte Carlo simulation is used for ascertaining the performance of digital modulation techniques in AWGN and fading channels (Eb/N0 vs. BER curves). This post is a part of the ebook: Simulation of Digital Communication Systems Using Matlab, available in both PDF and EPUB format. To filter an input signal using a multipath Rayleigh fading channel, the LTEMIMOChannel System object™ filters an input signal through an LTE multiple-input multiple-output (MIMO) multipath fading channel. There are a number of papers and scripts that deal with the BER-SNR curve. In MIMO-OFDM Wireless Communications with MATLAB®, the authors provide a comprehensive introduction to the theory and practice of wireless channel modeling and OFDM. The Dent Rayleigh model is similar to Jakes but uses the Hadamard function in MATLAB to make the oscillators (somewhat) independent and thus provide results that are a little more realistic than the Jakes model. A script for computing the BER for BPSK modulation in a Rayleigh fading channel is also provided.
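The Clarke-style construction discussed above can be sketched as a sum-of-sinusoids generator. The following Python illustration uses randomized arrival angles and phases with 64 scattered waves; these are generic assumptions, not the improved reference model itself:

```python
import cmath
import math
import random

random.seed(1)

def rayleigh_fading(fd, ts, n_samples, n_paths=64):
    """Sum-of-sinusoids Rayleigh fading generator (Clarke-style variant).

    fd        -- maximum Doppler shift in Hz
    ts        -- sampling period in seconds
    n_samples -- number of fading samples to generate
    n_paths   -- number of scattered plane waves summed (assumption: 64)
    """
    # Each arriving wave gets a random angle of arrival and a random phase
    thetas = [random.uniform(0, 2 * math.pi) for _ in range(n_paths)]
    phis = [random.uniform(0, 2 * math.pi) for _ in range(n_paths)]
    gain = math.sqrt(1.0 / n_paths)  # normalise to unit average power
    h = []
    for k in range(n_samples):
        t = k * ts
        s = sum(cmath.exp(1j * (2 * math.pi * fd * math.cos(th) * t + ph))
                for th, ph in zip(thetas, phis))
        h.append(gain * s)
    return h

h = rayleigh_fading(fd=100.0, ts=1e-4, n_samples=5000)
avg_power = sum(abs(x) ** 2 for x in h) / len(h)
print(round(avg_power, 2))  # time-average power, normalised to hover around 1
```

As the number of summed sinusoids grows, the central limit theorem makes the output approach a complex Gaussian process, exactly as the text states.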
This project studies and identifies the PSK-based digital modulation scheme (BPSK, QPSK or GMSK) that gives the best BER performance in a multipath fading environment using computer simulation (MATLAB). The columns of the matrix in correspond to the channel input waveforms at each transmit antenna. In this paper, we present a MATLAB-based downlink physical-layer simulator for LTE. In one of the previous articles, simulation of BPSK over a Rayleigh fading channel was discussed. Channel simulators are powerful tools that permit performance tests of the individual parts of a wireless communication system. This is a typical channel model encountered in many applications of interest to TG4q, as explained in [1]. It is one of the only books in the area dedicated to explaining simulation aspects. Small-scale fading is often handled in a wireless system with diversity schemes. The simulation results indicate the number of taps required for a given Doppler shift, Jakes power spectrum, and multipath fading scenario. Channel estimation in OFDM systems (Zhibin Wu, Yan Liu, Xiangpeng Jing) is outlined as follows: OFDM system introduction, channel estimation techniques, performance evaluation, and conclusion. OFDM divides a high-speed serial information signal into multiple lower-speed sub-signals that are transmitted simultaneously at different frequencies in parallel.
Keywords: MATLAB toolbox, parameter computation methods, mobile fading channel simulators, performance tests, deterministic channel modelling. Introduction: classical methods of modelling the fading behaviour are reviewed. One more question: since we treat the Rayleigh multipath channel model as an FIR filter, each tap is a flat-fading Rayleigh process. The available network simulation tools face the challenges of providing accurate indoor channel models, three-dimensional (3-D) models, model portability, and effective validation. Our paper fills this gap by studying the statistical properties of Jakes' simulator. In the simulation, signals generated by the modified Jakes model are passed into a Rician channel using a K-factor of 1. The Jakes method invokes the central limit theorem to show that the baseband signal received from a multipath fading channel is approximately a complex Gaussian process when the number of paths, L, is large. Rician channel model plots are also shown. Approach: as before, we break the simulation into three parts. A forum question asks for help with simulation code for BPSK over a Rayleigh channel. Non-convexity of the problem is proved in this paper and a genetic algorithm (GA) is used to optimize it, which has never been used before in this area.
The probability of error varies according to any change in the probability of moving from one state to another [13-15]. More information on Jakes' model and other similar approaches is available here. Suppose nSym = 1e4 and len_ofdm = 64 + 16 = 80. For more details, see Fading Channels. Generate Rayleigh fading sequences using Smith's method, which is based on Clarke and Gans' fading model. Real-time SISO fading can be applied to Signal Studio waveforms for receiver tests with flexible fading parameters. A defining characteristic of the mobile wireless channel is the variation of the channel strength over time and over frequency. The multipath Rayleigh fading channel System object™ includes two methods for random number generation. If you assign a single call to doppler, all paths have the same specified Doppler spectrum. Figure: autocorrelation function of Rayleigh fading with a maximum Doppler shift of 10 Hz.
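For the classical Clarke/Jakes model, the autocorrelation in the figure caption is $R(\tau) = J_0(2\pi f_d \tau)$, where $J_0$ is the zeroth-order Bessel function. A small Python sketch evaluates it via the power series for $J_0$ (the lag values are arbitrary illustration points):

```python
import math

def bessel_j0(x, terms=40):
    """Power-series evaluation of the Bessel function J0 (fine for moderate x)."""
    total = 0.0
    for m in range(terms):
        total += (-1) ** m * (x / 2) ** (2 * m) / math.factorial(m) ** 2
    return total

fd = 10.0  # maximum Doppler shift in Hz, matching the figure caption
for tau in (0.0, 0.025, 0.05, 0.1):
    r = bessel_j0(2 * math.pi * fd * tau)
    print(f"tau = {tau:5.3f} s  ->  R(tau) = {r:+.4f}")
```

At zero lag the correlation is 1, and it decays (and oscillates) as the lag grows, which is exactly the shape the referenced plot shows.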
Just to recap, building an LTE fading simulator with the desired temporal and spatial correlation is a three-step procedure. The SCM is fully described in [1] and a freeware MATLAB implementation can be downloaded from [2]. Example figure: Rayleigh fading with standard deviation = 1 and the indicated normalized Doppler frequency. MATLAB also has a tool for creating graphical user interfaces; you can start it by typing guide at the command prompt. See also Simulation of Multipath Fading Channels: Improvements of Jakes' Simulator. A path-loss model of the form A + B*log10(d) is used in many research papers (but for BS-MS links, not BS-BS). The models can be parameterized by channel bandwidth, carrier frequency, Doppler frequency, fading channel profile, etc. If you set K-factors to a scalar, the first discrete path is a Rician fading process with a Rician K-factor of K-factors. The Rayleigh fading model can be modeled using two different methods, as described below. You can use this block to model mobile wireless communication systems when the transmitted signal can travel to the receiver along a dominant line-of-sight or direct path. Rayleigh fading: the Rayleigh fading model is used to simulate environments that have multiple scatterers and no line-of-sight (LOS) path. Question 1: Yes, it is a multipath Rayleigh channel with $1000$ taps.
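The Rician K-factor behaviour mentioned above (a line-of-sight path on top of diffuse scattering) can be sketched as follows. This is a hedged Python illustration: the unit-power normalisation is a common convention, and K = 1 is chosen to match the modified-Jakes example elsewhere in the text:

```python
import math
import random

random.seed(5)

def rician_sample(k_factor):
    """Draw one Rician fading gain for a given K-factor (LOS/diffuse power ratio).

    Average power is normalised to 1: the LOS part carries K/(K+1) of the power
    and the diffuse (Rayleigh) part carries 1/(K+1).
    """
    los = math.sqrt(k_factor / (k_factor + 1.0))       # deterministic LOS term
    sigma = math.sqrt(1.0 / (2.0 * (k_factor + 1.0)))  # per-component std-dev
    return complex(los + random.gauss(0, sigma), random.gauss(0, sigma))

n = 100_000
k = 1.0  # Rician K-factor of 1, as in the modified-Jakes example
power = sum(abs(rician_sample(k)) ** 2 for _ in range(n)) / n
print(round(power, 2))  # average power, normalised to roughly 1
```

Setting K = 0 removes the LOS term and recovers pure Rayleigh fading, while large K approaches an unfaded (AWGN-like) channel.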
Jakes model MATLAB code: we also plotted the BER vs. SNR graphs of both users. We consider both single-input single-output and multipath configurations. Multipath fading is a phenomenon of fading of transmitted signals due to refraction, reflection and diffraction from objects or obstacles present in the transmission path. For all Doppler spectrum types except Jakes and Flat, see A MATLAB-based Object-Oriented Approach to Multipath Fading Channel Simulation. Please, how could I include an EPA channel in the simulation of the Shannon boundary? tau is a vector of path delays, each specified in seconds. Conventionally, the multiplicative processes in the channel can be subdivided into three types of fading: path loss, shadowing (or slow fading) and multipath fading (or fast fading). BER vs. SNR for 64-QAM over AWGN and Rayleigh channels is also examined. In Fading Channels Parametric Data Simulation Supported by Real Data from Outdoor Experiments (Azra Kapetanovic), simulations in the MATLAB environment, laboratory, and outdoor experiment results have been consistent in showing that the second-order Kalman filter implementation results in better estimation. See also Simulation Models with Correct Statistical Properties for Rayleigh Fading Channels. As you are looking for a simple simulation, you can simply assume the two values A and B in a path-loss model of the form LdB = A + B*log10(d), as used in many research papers.
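The A/B path-loss idea just described, combined with log-normal shadowing (which is simply Gaussian when expressed in dB), can be sketched in Python. The 128.1/37.6 pair is an assumption here (a commonly used macro-cell choice), as is the 8 dB shadowing standard deviation:

```python
import math
import random

random.seed(2)

# Assumed constants for LdB = A + B*log10(d); swap in your own A and B
A_DB, B_DB = 128.1, 37.6        # commonly used macro-cell pair (assumption)
SHADOW_SIGMA_DB = 8.0           # log-normal shadowing std-dev in dB (assumption)

def path_loss_db(d_km, shadowed=True):
    """Distance-dependent path loss in dB, optionally with shadow fading."""
    loss = A_DB + B_DB * math.log10(d_km)
    if shadowed:
        # Log-normal shadowing: add a zero-mean Gaussian term in the dB domain
        loss += random.gauss(0.0, SHADOW_SIGMA_DB)
    return loss

print(round(path_loss_db(1.0, shadowed=False), 1))   # deterministic part at 1 km
print(round(path_loss_db(10.0, shadowed=False), 1))  # 10x distance adds B_DB dB
print(round(path_loss_db(1.0), 1))                   # with one shadowing draw
```

Note that adding a Gaussian in dB is the same thing as multiplying by a log-normal random variable in linear scale, which is the formulation used later in the text.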
To simulate the Rayleigh fading channel, the parameters are given as follows: V = 50 km/hr, fc = 900 MHz, Ts = 1 ms, N = 1000. The 8-QAM system is rectangular, with 2 possible values for the x basis vector and 4 possible values for the y basis vector, so that the constellation consists of 8 symbols; a MATLAB simulation program for this integrated QAM digital communication system is available for study and reference. The received signal is $y = hx + n$. The array produced by the MATLAB code was then input via the Simulink block "Matrix Channel Data", which at the start of the simulation loads the whole matrix into Simulink. Remember that we have now shifted our focus from MATLAB to Python, since it is open and free to use. Faster radios are larger, more expensive, and more power hungry. How can I simulate OFDM with Rayleigh fading? Learn more about OFDM, Rayleigh, fading, and the Communications System Toolbox. Numerical examples of a satellite link design are shown using QPSK and/or 8PSK when the bit rate (Rb) is greater than the channel bandwidth Wc (band-limited channel). Subsequently, a comparison study is carried out to obtain the BER performance for each PSK-based scheme. That is, a multiple-path fading model overspecifies a narrowband fading channel. Simulation of Fading Channels Using Different Tests, written by Subrahmanyam, covers this topic. For details about fading channels, see the references listed below; for more information on fading model processing, see Methodology for Simulating Multipath Fading Channels. The figure below represents the SNR (dB) for a Gaussian-approximated CDMA simulation with AWGN fading for various values of user 0 power.
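For the parameters just quoted (V = 50 km/h, fc = 900 MHz), the maximum Doppler shift follows directly from $f_d = v f_c / c$:

```python
# Maximum Doppler shift for the quoted parameters: v = 50 km/h, fc = 900 MHz
C = 3e8           # speed of light, m/s

v = 50 / 3.6      # 50 km/h converted to m/s
fc = 900e6        # carrier frequency in Hz
fd = v * fc / C   # maximum Doppler shift in Hz
print(round(fd, 1))  # about 41.7 Hz
```

This fd is the value that parameterises the Jakes Doppler spectrum in the simulations above.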
Sajidul Alam (ID: 06310054) and Sittul Muna (ID: 05310058), Department of Computer Science and Engineering, December 2007. Fading channel simulator in MATLAB: with nSym = 1e4 OFDM symbols of length 80, the length of each tap sequence is 80*1e4 = 8e5, a Rayleigh process varying with time. To generate a Rayleigh fading channel model in MATLAB, you first generate the real and imaginary parts of an independent Gaussian process, then design a filter whose spectrum is similar to the Jakes model. The Multipath Rician Fading Channel block implements a baseband simulation of a multipath Rician fading propagation channel. MIMO supports the achievement of high transmission speed. Here, fm is the maximum Doppler frequency of the channel, fs is the sampling rate of the fading process, and M is the number of samples of the fading. I would also like to thank Professor Roberto Cristi for his advice, encouragement and support. In the m-files, Rayleigh fading is simulated with 3 different speeds at a fixed carrier frequency. A channel for which N > 1 is experienced as a frequency-selective fading channel by a signal of sufficiently wide bandwidth. The simulation models were implemented based on the specification of the GSM system by CEPT-COST 207. ABSTRACT: In this paper we present a simulation of multipath Rayleigh fading and Rician fading channels.
Appendix B shows full information for a trial of the OFDM simulation, while Appendix C contains all the MATLAB source code for this project with detailed comments. The phases of the sinusoids are specified for k = 1, 2, …. Closed-form expressions for system spectrum efficiency are derived. This article deals with the simulation of another important fading type: Rician fading. The default value of this parameter is the Jakes Doppler spectrum (doppler('Jakes')). A MATLAB parameter block for a one-path Rayleigh fading simulation looks like:

    itnd1 = [1000];
    % Number of direct wave + number of delayed waves;
    % in this simulation one-path Rayleigh fading is considered
    now1 = 1;
    % Maximum Doppler frequency [Hz]; you can insert your favorite value
    fd = 160;
    % The variable flat selects the fading mode:
    % flat : flat fading or not (1 -> flat, only the amplitude is faded)

As for "correct ways" of simulating a Rayleigh fading channel itself as an FIR filter, please refer to "Jakes' Channel Model", which uses a "sum of sinusoids" approach to modeling the Rayleigh channel.
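The tapped-delay-line (FIR) view of the multipath channel used above can be sketched in Python. The three complex tap gains below are illustrative assumptions, not values from the text:

```python
def fir_multipath(x, taps):
    """Filter x through a tapped-delay-line (FIR) multipath channel.

    taps -- complex gain of each resolvable path; tap k is delayed by k samples.
    """
    y = []
    for n in range(len(x)):
        acc = 0j
        for k, g in enumerate(taps):
            if n - k >= 0:
                acc += g * x[n - k]   # delayed, scaled echo of the input
        y.append(acc)
    return y

# Illustrative 3-tap power-delay profile (gains are assumptions)
taps = [1.0, 0.5 + 0.2j, 0.25j]
x = [1, 0, 0, 0, 0]   # a unit impulse exposes the channel impulse response
print(fir_multipath(x, taps))
```

Feeding in a unit impulse returns the tap gains themselves, which is a quick sanity check that the delay line is wired correctly; in a full simulator each tap gain would itself be a time-varying Rayleigh process.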
The Multipath Rayleigh Fading Channel block implements a baseband simulation of a multipath Rayleigh fading propagation channel. Here are some tips for choosing property values that describe realistic channels. jakes_model uses the Jakes model to simulate Rayleigh channel fading; contributions are welcome at wincle626/mimo-ofdm-matlab on GitHub. Some simulation experiments seldom consider the effects of Rayleigh fading and shadow fading in wireless signal propagation, and so lead to over-optimistic simulation results. From the MATLAB Chinese forum's signal processing and communications board: "Simulation of Flat Fading Using MATLAB for Classroom Instruction" is a well-written MATLAB simulation of a flat fading channel. If anyone can help me to find the problem, I'll appreciate it. Related tutorials: simulation of QPSK modulation on an AWGN channel; generating a correlated Rayleigh fading process; simulation of a Reed-Solomon block code; simulation of SCCCs in an AWGN channel; simulation of a multicode CDMA system on an AWGN channel; using timers to measure execution time; simulation of a turbo equalizer in multipath channels. The models are available as functions and System objects™ in MATLAB® and as blocks in Simulink®. Multipath fading channel simulation (Pushkar Bagayatkar). The new Rician fading simulation model employs a zero-mean stochastic sinusoid as the specular (line-of-sight) component.
Successful application of discrete-time simulation in a fading channel is shown by applying a tri-state memoryless Markov model. Let the path loss be $P_{L_{lin}}$ (in linear scale, NOT in dB). Modified Jakes' model simulation algorithm: after generating the three waveforms r(t) as given in Jakes' model, make sure that each generated r(t) has the required number of samples. Multi-level loops are strongly discouraged (MATLAB is very slow in processing loops). Guan (School of EEE, Nanyang Technological University), abstract: two ways of modifying the classical Jakes fading simulator to generate multiple uncorrelated fading waveforms are proposed. BER is evaluated for single-path and multipath Rayleigh fading channels in an outdoor environment by MATLAB simulation. Matlab Simulation Tool: a publicly available MATLAB simulation tool has been published in OFDM Wireless LANs: A Theoretical and Practical Guide, Juha Heiskala and John Terry, SAMS Publishing, 2002. Keywords: MATLAB, fading channels, distribution, simulation.
Re: Matlab code for Rayleigh Fading Channel — "Hi, if we have channel impulse response = [0. …". In this part, you will simulate a Rayleigh flat-fading channel using Jakes' model. Rayleigh fading is a statistical model for the effect of a propagation environment on a radio signal, such as that used by wireless devices. For BPSK this curve should match the BER vs. Eb/N0 curve (SNR = Eb/N0 for BPSK), which can be generated using "bertool" in MATLAB. • Employ the acquired simulation skills efficiently, in conjunction with the powerful MATLAB capabilities, to design MATLAB codes optimized in terms of run time while economizing on memory. Analysis versus computer simulation: a computer simulation is a computer program which attempts to represent the real world based on a model. We also plotted the BER vs. SNR graphs of both users. Clarke's model, one of the first Rayleigh implementations, has a huge cross-correlation. We are going to implement a multipath fading channel simulator. With the Rician channel model, both line-of-sight (LOS) and non-line-of-sight (NLOS) components are simulated between transmitter and receiver. To address the associated complexities, Keysight was first to market with a 5G NR channel emulation solution, supporting the widest signal bandwidth and highest number of fading channels.
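To connect with the bertool comparison above, here is a hedged Python Monte-Carlo sketch of coherent BPSK over a flat Rayleigh fading channel, checked against the classic closed form $P_b = \tfrac12\bigl(1-\sqrt{\bar\gamma/(1+\bar\gamma)}\bigr)$ (the bit count and seed are arbitrary illustration choices):

```python
import math
import random

random.seed(4)

def bpsk_rayleigh_ber(ebn0_db, n_bits=200_000):
    """Monte-Carlo BER of coherent BPSK on a flat Rayleigh fading channel."""
    ebn0 = 10 ** (ebn0_db / 10)
    noise_std = math.sqrt(1 / (2 * ebn0))  # unit-energy bits
    errors = 0
    for _ in range(n_bits):
        bit = random.choice((-1, 1))
        # Rayleigh channel gain with E[h^2] = 1 (two N(0, 0.5) components)
        h = math.hypot(random.gauss(0, math.sqrt(0.5)),
                       random.gauss(0, math.sqrt(0.5)))
        r = h * bit + random.gauss(0, noise_std)
        if (r > 0) != (bit > 0):   # coherent detection (h known and positive)
            errors += 1
    return errors / n_bits

ebn0_db = 10.0
sim = bpsk_rayleigh_ber(ebn0_db)
g = 10 ** (ebn0_db / 10)
theory = 0.5 * (1 - math.sqrt(g / (1 + g)))   # classic Rayleigh BPSK closed form
print(round(sim, 4), round(theory, 4))
```

At 10 dB the simulated BER lands within Monte-Carlo noise of the closed-form value, the same agreement one would look for against bertool's curve.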
But I can't combine them. The output of the function should be the complex-valued fading process. This is a hands-on, example-rich guide to modeling and simulating advanced communications systems. The code is listed below:

    % Rayleigh fading
    a = sqrt(0.5)*(randn(1,N) + 1i*randn(1,N));

An effort has been made to provide a standard-independent aeronautical channel simulator that takes into account Doppler effects, multipath fading and slow fading in the LOS component. Channel estimation is the task of forming an estimate of the amplitude and phase shift caused by the wireless channel from the available pilot information. MATLAB includes the Live Editor for creating scripts that combine code, output, and formatted text in an executable notebook. Configure channel objects based on simulation needs: LTEMIMOChannel System objects offer pre-set configurations for use with LTE link-level simulations. The results of the simulation are shown in the figure.
The objective is to discriminate and detect the best compromise among all considered Doppler spectra, for an appropriate design of the digital front end. I have tried to read about the Rayleigh fading channel model, but do you have MATLAB code to simulate and test this fading channel over a wireless environment? A filter-based simulator needs to be designed carefully because of possible instability and possible finite word-length effects when implemented with fixed-point arithmetic. Return the following: your MATLAB simulation code and one MATLAB figure that plots the theoretical and simulated BER vs. SNR. In Jakes' simulator, $f_k = f_d$ for $k = N_0 + 1$, where $f_d$ is the Doppler spread and $M = 4N_0 + 2$. BER performance analysis of a Rayleigh fading channel in an outdoor environment with MLSE: we model the Rayleigh fading channel using the Jakes Doppler power spectrum. A key question is whether the fading of the channel changes faster than a bit interval. As a novice in cognitive radio, I am now very interested in the topic of energy detection (ED), especially the performance of ED over different fading channels; once the model is set up, the simulation can be started.
Despite the popularity and wide application of the Jakes model for about 30 years, several important limitations have been noted, especially those related to the non-zero cross-correlation of the in-phase and quadrature components. The principle of the sum-of-sinusoids (SOS) method was originally introduced by Rice and developed later for the simulation of Rayleigh fading channels by Jakes. Our Wireless Channel Simulator is a set of MATLAB functions for simulating realistic wireless links. out = lteMovingChannel(model,in) implements the moving propagation conditions specified in TS 36. Transmission over a MIMO channel model with delay profile TDL is also supported. In the simulator development, M = 16 is used. Rayleigh fading channel simulation overview: this MATLAB function can be used to generate time-varying Rayleigh fading channels based on autoregressive models, following the work proposed by Kareem E. Jakes' model has been used to obtain the coefficients for Rayleigh fading. This is discussed in some papers, such as On the Energy Detection of Unknown Signals over Fading Channels (IEEE Xplore, 2003).
Changing this value produces parts of the fading process at different points in time. Fading, shadowing, and link budgets: fading is a significant part of any wireless communication design and is important to model and predict accurately. However, we still used the Doppler spectrum proposed by Smith. The channel simulation components include: a narrowband Rayleigh fading generator, Jakes' U-shaped Doppler filter, an impulsive noise generator (first-order Gaussian mixture), and a Q-function utility. This paper addresses various challenges when designing real and complex spectrum-shaping filters with quantized coefficients. MATLAB Simulation of a Wireless Communication System using the OFDM Principle. The total attenuation in the case of shadow fading is then given by $P_{L_{lin}} \cdot \mathrm{lognrnd}(0,10)$. This article designs a Rayleigh fading channel simulator using an IIR filter (called a Doppler filter), which is used to approximate the Jakes Doppler spectrum. The adaptive antenna array technique in early research could effectively improve anti-jamming freedom, cope with time-varying systems, and mitigate narrowband and wideband interference. This simulation process uses five path propagation models. MATLAB application description: in the first step it is necessary to choose the number of bits for the input sequence generation (parameter N).
We identify different research applications that are covered by our simulator. Graduate teaching assistant work included a Rayleigh/Rician/Nakagami fading and Jakes fading channel simulator. See Nguyen, "Differential Amplify-and-Forward relaying in time-varying Rayleigh fading channels," IEEE Wireless Communications and Networking Conference (WCNC), Shanghai, China, 2013. Choose realistic channel property values. In the wireless communication literature [1-3], it is well established that the received signal strength between two nodes can be modeled with three major dynamics, the first of which is multipath fading (small-scale fading). There are two very different types of fading: small-scale fading and large-scale fading (or shadowing). This section shows the results obtained from the simulation tool MATLAB 7.
https://www.lmfdb.org/L/rational/4/1900%5E2
Results (14 matches)
Label $\alpha$ $A$ $d$ $N$ $\chi$ $\mu$ $\nu$ $w$ prim $\epsilon$ $r$ First zero Origin
4-1900e2-1.1-c0e2-0-0 $0.973$ $0.899$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $0.0, 0.0$ $0$ $1$ $0$ $0.458087$ Modular form 1900.1.g.a
4-1900e2-1.1-c0e2-0-1 $0.973$ $0.899$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $0.0, 0.0$ $0$ $1$ $0$ $1.05295$ Modular form 1900.1.u.a
4-1900e2-1.1-c0e2-0-2 $0.973$ $0.899$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $0.0, 0.0$ $0$ $1$ $0$ $1.12614$ Modular form 1900.1.e.b
4-1900e2-1.1-c1e2-0-0 $3.89$ $230.$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $1.0, 1.0$ $1$ $1$ $0$ $0.574369$ Modular form 1900.2.c.c
4-1900e2-1.1-c1e2-0-1 $3.89$ $230.$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $1.0, 1.0$ $1$ $1$ $0$ $0.605625$ Modular form 1900.2.c.a
4-1900e2-1.1-c1e2-0-2 $3.89$ $230.$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $1.0, 1.0$ $1$ $1$ $0$ $0.815035$ Modular form 1900.2.c.b
4-1900e2-1.1-c1e2-0-3 $3.89$ $230.$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $1.0, 1.0$ $1$ $1$ $0$ $1.06366$ Modular form 1900.2.a.e
4-1900e2-1.1-c1e2-0-4 $3.89$ $230.$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $1.0, 1.0$ $1$ $1$ $0$ $1.08844$ Modular form 1900.2.i.b
4-1900e2-1.1-c1e2-0-5 $3.89$ $230.$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $1.0, 1.0$ $1$ $1$ $2$ $1.23019$ Modular form 1900.2.a.d
4-1900e2-1.1-c1e2-0-6 $3.89$ $230.$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $1.0, 1.0$ $1$ $1$ $2$ $1.45625$ Modular form 1900.2.i.a
4-1900e2-1.1-c2e2-0-0 $7.19$ $2.68\times 10^{3}$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $2.0, 2.0$ $2$ $1$ $0$ $0.238259$ Modular form 1900.3.e.b
4-1900e2-1.1-c2e2-0-1 $7.19$ $2.68\times 10^{3}$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $2.0, 2.0$ $2$ $1$ $0$ $0.468452$ Modular form 1900.3.e.a
4-1900e2-1.1-c3e2-0-0 $10.5$ $1.25\times 10^{4}$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $3.0, 3.0$ $3$ $1$ $0$ $0.658763$ Modular form 1900.4.c.a
4-1900e2-1.1-c3e2-0-1 $10.5$ $1.25\times 10^{4}$ $4$ $2^{4} \cdot 5^{4} \cdot 19^{2}$ 1.1 $3.0, 3.0$ $3$ $1$ $2$ $1.18773$ Modular form 1900.4.a.b
https://sbseminar.wordpress.com/2012/11/16/the-canonical-model-structure-on-cat/
## The canonical model structure on Cat November 16, 2012
Posted by Chris Schommer-Pries in Algebraic Topology, Category Theory.
In this post I want to describe the following result, which I think is pretty neat and should be more widely known:
Theorem: On the category of (small) categories there is a unique model structure in which the weak equivalences are the equivalences of categories.
This unique model structure is of course the so-called “canonical” model structure of André Joyal and Myles Tierney. (The fact that it is the unique one with these weak equivalences lends credence, I think, to using the name “canonical”). It is proper, cartesian, simplicial, combinatorial, and every object is both fibrant and cofibrant. I first learned of this uniqueness result from Steve Lack’s comments on this MathOverflow question, though there were some details left to fill in. I hope to do that here.
Below I will give an elementary proof of the above theorem, partly so I have it written down somewhere for future reference. Charles Rezk has a nice write-up of this model structure, and I will start by describing it. It consists of:
• the canonical cofibrations, which are those functors of categories which are injective on objects,
• the canonical acyclic cofibrations, which are those equivalences of categories which are injective on objects (these are necessarily injective on morphisms too),
• the canonical acyclic fibrations, which are those equivalences which are surjective on objects, and
• the canonical fibrations, which are the “iso-fibrations“. These are the functors $f:X \to Y$ such that for any $x \in X$ and any isomorphism $\alpha: f(x) \cong y$ in Y, then there exists an isomorphism in X which maps to $\alpha$ under f.
Let’s fix some notation which will be useful later.
• Let E be the free walking isomorphism. This is the contractible category with two objects. (A category is contractible if it is equivalent to the terminal category pt).
There is an equivalence $pt \to E$, which includes the terminal category as one of the objects. The canonical fibrations are precisely those maps which have the right lifting property with respect to this functor.
The Proof:
Now let us suppose that we have a model category structure on the category of categories in which the weak equivalences are the equivalences of categories. Such a structure consists of certain classes of fibrations and cofibrations. Our goal is to show that these must be the canonical fibrations and canonical cofibrations above.
The proof will use some basic properties about model categories:
1. The cofibrations and acyclic cofibrations are closed under retracts, compositions, and pushouts along arbitrary maps.
2. The acyclic cofibrations are precisely those cofibrations which are also (weak) equivalences. The acyclic fibrations are also weak equivalences.
3. The fibrations have the right lifting property with respect to the acyclic cofibrations and the acyclic fibrations have the right lifting property with respect to the cofibrations.
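For concreteness (this diagram is our addition, not from the original post), the lifting properties in point 3 say that every commuting square

$$\begin{array}{ccc} A & \longrightarrow & X \\ \downarrow & & \downarrow \\ B & \longrightarrow & Y \end{array}$$

with the left vertical map an acyclic cofibration (resp. a cofibration) and the right vertical map a fibration (resp. an acyclic fibration) admits a diagonal lift $B \to X$ making both triangles commute.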
It also rests on a
Key Fact: every equivalence class of objects contains a fibrant representative and a cofibrant representative.
(Recall that an object is cofibrant if the unique map from the initial object (the empty category, in this case) is a cofibration. Dually, an object is fibrant if the unique map to the terminal object is a fibration.)
What do these facts tell us?
Trivial Lemma: The inclusion $\emptyset \to pt$ is a cofibration.
Proof: Since each equivalence class of categories contains a cofibrant representative, we know that there exists some cofibration $\emptyset \to A$ for a non-empty category A. The desired map is a retract of this, hence also a cofibration. ◊
The acyclic fibrations must have the right lifting property with respect to all cofibrations, hence with respect to this map $\emptyset \to pt$. This means the acyclic fibrations must be surjective on objects. Since they are equivalences too, this implies the following consequences.
1. the acyclic fibrations are a subset of the canonical acyclic fibrations, hence
2. the cofibrations contain the canonical cofibrations, hence
3. the acyclic cofibrations contain the canonical acyclic cofibrations, and hence
4. the fibrations are a subset of the canonical fibrations.
This means we are half-way there. We must rule out the possibility that there could be more cofibrations. (This includes the case there are more acyclic cofibrations).
A Somewhat Less Trivial Lemma: If the cofibrations contain a map which is not a canonical cofibration (i.e. fails to be injective on objects), then the following map is also a cofibration (hence an acyclic cofibration as it is an equivalence):
$E \to pt$.
Proof: Suppose that we have a functor $A \to B$ which is a cofibration but not injective on objects. Then there exists at least one pair of objects in the source category which map to the same object in the target category. Call these objects x and y, and their image p.
The cofibrations are closed under pushouts along arbitrary maps and this allows us to alter this map to make a new cofibration. First note that a functor from a category to E is the same as a partition of its objects into two disjoint sets. Thus we may choose a functor $A \to E$ which separates x and y. We may form the pushout along this map to get a new cofibration:
$E \to E \cup_{A} B =: X$.
At this point we would like to form a retract onto the desired morphism. The problem is that this might not be possible as the image in X of the non-trivial isomorphism in E might fail to be an identity. If that is the case we will not be able to retract onto the desired map.
However cofibrations are also closed under composition. Let $X^\delta$ be the contractible category with the same objects as X. There is a unique functor
$X \to X^\delta$
which is the identity on objects. Since it is injective on objects it is a canonical cofibration, hence this map must also be a cofibration. Composing gives us a new cofibration:
$E \to X^\delta$
and now this retracts onto the desired map. ◊
Whew! That was the hardest part of the proof. Glad that’s over.
So we have learned that if the cofibrations contain more than just the canonical cofibrations, then they also contain $E \to pt$, which is then necessarily an acyclic cofibration. This leads us to define the following:
Definition: A category is gaunt if every isomorphism is an identity.
Gaunt categories are what you get when you take category theory and strip away the fleshy meat of topology (in this case 1-types or groupoids). We also have this:
Another Trivial Lemma: If the acyclic cofibrations contain the map $E \to pt$, then the fibrant objects are necessarily gaunt.
The proof is just unraveling definitions. We also have a
Trivial Observation: Not every category is equivalent to a gaunt category (e.g. non-trivial groupoids).
But now we see a contradiction emerge. For a model structure, every equivalence class of objects must contain a fibrant representative. If the cofibrations contain more than the canonical cofibrations, then $E \to pt$ is a cofibration and hence the fibrant objects are gaunt. The equivalence class of, say, a non-trivial groupoid cannot be thus represented. We are thus led to conclude:
Theorem: There is precisely one model structure on the category of categories in which the weak equivalences are the equivalences of categories. It is the canonical model structure.
1. Charles Rezk - November 16, 2012
Nifty!
Question: If we add in $E\to pt$ to the acyclic cofibrations (and thus to the cofibrations), will we still get a model category? (Obviously, with more weak equivalences?)
2. Chris Schommer-Pries - November 16, 2012
Hi Charles!
Great question!
I think the answer is yes, but I haven’t completely thought through all the details.
I think I convinced myself that if you add the map $E \to pt$ to the canonical cofibrations, then you get that *every functor* is a cofibration. This would mean that the acyclic fibrations are just the isomorphisms.
This would also mean that the question of whether this gives a model structure is the same as whether the acyclic cofibrations are closed under the 2-out-of-3 property.
Now I know there is a “gauntification” functor $L^G$ (a localization) from categories to gaunt categories. It does not just send isomorphisms to identities; it can be very destructive.
I think the new acyclic cofibrations (= new weak equivalences) are precisely those functors $A \to B$ such that $L^G(A) \to L^G(B)$ is an isomorphism. Note that any equivalence of gaunt categories is automatically an isomorphism. If this is the case, then they clearly satisfy 2-out-of-3 and so this would form a model category.
At the homotopy theory level, this should be a sort of (left?) localization to the gaunt categories, but it is a little weird. For example it is not any kind of Bousfield localization.
This also reminds me of how there are precisely 9 model structures on the category of sets. I wonder how hard it would be to do a similar classification of all the model structures on Cat?
I have my own question for you regarding this post… in your note on the canonical model structure you mention that Joyal and Tierney constructed it first. Did you also discover it independently? What do you know of its history?
3. Charles Rezk - November 17, 2012
I used to think I discovered it independently, but I’m not sure. The reason I’m not sure is that I know that as a grad student I went through a paper by Bousfield, called “Homotopy Spectral Sequences and Obstructions”, in an appendix of which he uses the fact that Groupoid has a model category structure. Bousfield’s paper may have lodged in my mind somewhere. (The construction of the model category for Groupoid is virtually identical to that for Cat.)
Bousfield refers to a 1978 Bulletin article by Anderson, “Fibrations and Geometric Realizations”, where Anderson mentions the Groupoid model category on p.783. That is the earliest case I know.
4. Mike Shulman - November 18, 2012
Very nice! Clearly the same argument works for the category of groupoids. What about the category of 2-categories, with the biequivalences? (Maybe this was addressed in the cited MO question — I can’t get to it right now.)
5. Chris Schommer-Pries - November 19, 2012
Hi Mike,
That is also a good question.
Steve Lack would be the person to ask about the 2-categorical case, but here are a few observations. Any object equivalent to a cell retracts onto that cell. Using this you can make a similar argument as above to show any potential class of cofibrations contains the usual generating cofibrations. The above argument then certainly shows that any choice of cofibrations must be injective on objects. I am not sure that as written it can be adapted to show they are “relatively free” on 1-morphisms too. I am somewhat doubtful.
However there is a different argument which I think does the job, which is also more in line with Steve’s comments of the math overflow question. First, if I remember correctly, the Lack model structure on 2-Cat is still proper. Since this is a property of the weak equivalences alone (a non-elementary result), it means that any potential class of cofibrations must preserve arbitrary weak equivalences under pushouts.
So what you have to do is show that if you have a functor which is not relatively free, then there is an equivalence such that when you form the pushout you fail to get an equivalence. Actually after a couple of back of the envelope calculations it is not so obvious to me how to show this in general. Hmmmm….
6. Omar Antolín-Camarena - December 18, 2012
About classifying the model structures on the category of categories: it’s probably complicated since, to begin with, there should be lots of homotopy categories; among others you probably get the homotopy category of $n$-types for each $n \ge -2$ (on the category of sets you get these for $n$=-2, -1 and 0), and I would guess that you get each from lots of model structures.
(By the way, I did actually get around to writing up the nine model structures for sets after we talked about them: http://www.math.harvard.edu/~oantolin/notes/modelcatsets.html)
Sorry comments are closed for this entry
https://chemistry.stackexchange.com/questions/40655/why-birch-reduction-of-alkyne-is-e-selective
# Why is the Birch reduction of an alkyne E-selective?
From Organic Chemistry(Clayden, Greeves and Warren, 2nd edition), pp. 682:
The sodium donates an electron to the LUMO of the triple bond (one of the two orthogonal $\pi^\ast$ orbitals). The resulting radical anion can pick up a proton from the ammonia solution to give a vinyl radical. A second electron, supplied again by the sodium, gives an anion that can adopt the more stable trans geometry. A final proton quench by a second molecule of ammonia or by an added proton source (t-butanol is often used, as in the Birch reduction) forms the E alkene.
I have some questions on this reaction:
(1) The double bond of the vinyl anion blocks rotation, so how can the anion adopt the trans geometry (rather than the cis) without rotation?
(2) Why is the anion with the trans geometry more stable? If it is because of steric repulsion between $R^1$ and $R^2$, why can the electronic repulsion between the lone pair and $R^1$ be ignored?
It's not rotation but inversion (remember amine lone-pair inversion) from a linear $sp$ geometry to a trigonal planar $sp^2$ geometry that occurs. The lone pair and the unpaired (radical) electron are held close to the atoms and repel each other strongly, so it is more stable to keep them in anti dihedral positions. If the lone pair next to the radical electron were less repulsive, the cis arrangement would have a lower-energy transition state and would form more readily, according to Hammond's postulate. But experimental data indicate that that is not the case.
https://www.physicsforums.com/threads/epsilon-and-delta.916453/
Epsilon and delta
Karol
Homework Equations
Continuity:
$$\vert x-c \vert < \delta~\Rightarrow~\vert f(x)-f(c) \vert < \epsilon$$
$$\delta=\delta(c,\epsilon)$$
The Attempt at a Solution
$$\vert f(x)-f(c) \vert <\frac{1}{2}f(c)~\Rightarrow~\vert x-c \vert < \delta_1$$
So i have this δ1 but what do i do with it?
And ε=½f(c) is big, maybe it will be in the negative zone.
Maybe i have to find a δ such that ##~\vert f(x)-f(c) \vert =0~##?
There is such a δ, so why was advised to take such a large ε?
Homework Helper
Gold Member
2022 Award
The Attempt at a Solution
$$\vert f(x)-f(c) \vert <\frac{1}{2}f(c)~\Rightarrow~\vert x-c \vert < \delta_1$$
So i have this δ1 but what do i do with it?
And ε=½f(c) is big, maybe it will be in the negative zone.
Maybe i have to find a δ such that ##~\vert f(x)-f(c) \vert =0~##?
There is such a δ, so why was advised to take such a large ε?
You are still starting these proofs the wrong way round. Somehow you have to train yourself to stop writing things like:
$$\vert f(x)-f(c) \vert <\frac{1}{2}f(c)~\Rightarrow~\vert x-c \vert < \delta_1$$
You must, must, must stop yourself from doing this.
For this problem I would first try to "prove" it using a graph of the function and a geometric argument.
Homework Equations
Continuity:
$$\vert x-c \vert < \delta~\Rightarrow~\vert f(x)-f(c) \vert < \epsilon$$
The above is continuity.
The Attempt at a Solution
$$\vert f(x)-f(c) \vert <\frac{1}{2}f(c)~\Rightarrow~\vert x-c \vert < \delta_1$$
And this is what you write. You must see the difference. Every time you turn it round the wrong way.
Karol
I was wrong at the definition of continuity:
$$\vert f(x)-f(c) \vert < \epsilon~\Rightarrow~\vert x-c \vert < \delta$$
The ε is to the intersection with x
Mentor
I was wrong at the definition of continuity:
$$\vert f(x)-f(c) \vert < \epsilon~\Rightarrow~\vert x-c \vert < \delta$$
No, what you have above is backwards. The implication you showed in post 1 has the implication in the right order.
In words, "If x is close to c, then f(x) will be close to f(c)"
The delta and epsilon quantify the "close to" terms.
Karol said:
The ε is to the intersection with x
??
Karol said:
In your drawing, where is x? Where is c? Is the circled point on the curve (c, f(c))?
I was wrong at the definition of continuity:
$$\vert f(x)-f(c) \vert < \epsilon~\Rightarrow~\vert x-c \vert < \delta$$
Dr.D
I had a friend, a fraternity brother actually, who always said that his ambition in life was to be an epsilon and delta picker.
Karol
If f(c)>0 i can take ε small and it will still be ##~\vert f(x)-f(c) \vert >0~## and find a δ because of continuity, so why do i need the ##~\epsilon=\frac{1}{2}f(c)~##?
Mentor
In your drawing in post #8, you have ##\epsilon = f(c)##, which isn't what the hint is saying.
Karol
Thank you Mark, Dr.D and PeroK
Mentor
@Karol, did you actually prove the theorem? The problem asks you to prove that statement, and illustrate with a sketch.
Karol
Because of continuity i can find a δ for ##~\epsilon=\frac{1}{2}f(c)~##, so in this interval: ##~c-\delta<x<c+\delta~##, f(x)>0
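Karol's conclusion can be sanity-checked numerically. A hedged sketch (our own example, not from the thread): take $f(x)=x^2$ and $c=1$, so $f(c)=1>0$. For $\epsilon=\tfrac{1}{2}f(c)$ the choice $\delta=0.2$ works, since $|x-1|<0.2$ gives $|x^2-1|=|x-1||x+1|<0.2\cdot 2.2=0.44<0.5$, and then $f(x)>\tfrac{1}{2}f(c)>0$ on the whole interval.

```cpp
#include <cassert>

// Any continuous f with f(c) > 0 would do; here f(x) = x^2 and c = 1.
double f(double x) { return x * x; }

// Check on a grid that f stays above f(c)/2 (hence > 0) on (c-delta, c+delta).
// This is a numerical illustration of the theorem, not a proof.
bool positive_on_interval(double c, double delta) {
    for (int i = -1000; i <= 1000; ++i) {
        double x = c + delta * i / 1000.0;
        if (f(x) <= 0.5 * f(c)) return false;  // must exceed f(c)/2 > 0
    }
    return true;
}
```

Note that the hypothesis $f(c)>0$ is essential: at $c=0$ we have $f(c)=0$ and the check fails at $x=c$ itself.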
https://www.neetprep.com/question/6519-ratio-masses-hydrogen-magnesium-deposited-amount-ofelectricity-HSO-MgSO-aqueous-solution-area-b-c-d-none?courseId=8
The ratio of masses of hydrogen and magnesium deposited by the same amount of electricity from H2SO4 and MgSO4 in aqueous solution are:
(a) 1:8
(b) 1:12
(c) 1:16
(d) none of these
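A worked check (our addition, using Faraday's second law: the same quantity of electricity deposits masses in proportion to equivalent weights, where hydrogen is discharged as $\mathrm{H^+ + e^- \to \tfrac{1}{2}H_2}$ and magnesium as $\mathrm{Mg^{2+} + 2e^- \to Mg}$):

$$\frac{m_\mathrm{H}}{m_\mathrm{Mg}} = \frac{E_\mathrm{H}}{E_\mathrm{Mg}} = \frac{1/1}{24/2} = \frac{1}{12},$$

so the ratio is 1:12, option (b).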
https://www.gamedev.net/forums/topic/372374-declaring-a-static-class-and-accessing-it/
Declaring a static class and accessing it
Recommended Posts
Hi bud,
If i understand you correctly then congrats on getting something working in WIN32, it's a nightmare.
Regarding your question, i wouldn't bother with trying to share a window object between several classes. Just 'new' a new one whenever you need to. Atleast for now anyways, get your program working before you try to make it work better.
Hope that helps,
Dave
Quote:
Original post by Dave
Hi bud, If i understand you correctly then congrats on getting something working in WIN32, it's a nightmare. Regarding your question, i wouldn't bother with trying to share a window object between several classes. Just 'new' a new one whenever you need to. Atleast for now anyways, get your program working before you try to make it work better. Hope that helps, Dave
Haha, yeah Win32 is a pain. Thanks for the answer, but the way my program is designed needs to have a static class, or something to that effect.
How would the singleton pattern fit this scenario?
Dave
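As a hedged sketch of what Dave's singleton suggestion could look like (our own illustration in modern C++, not the poster's actual code; `LoginWindow` here is a stand-in with the Win32 details omitted):

```cpp
#include <cassert>

// Meyers-style singleton: one global instance, created on first use,
// accessible from any translation unit without defining a variable in a header.
class LoginWindow {
public:
    static LoginWindow& Instance() {
        static LoginWindow instance;  // constructed once, on first call
        return instance;
    }
    void Create()          { created_ = true; }   // stand-in for CreateLoginWindow()
    bool IsCreated() const { return created_; }
private:
    LoginWindow() = default;
    LoginWindow(const LoginWindow&) = delete;             // no copies
    LoginWindow& operator=(const LoginWindow&) = delete;  // no assignment
    bool created_ = false;
};
```

Every call to `LoginWindow::Instance()` returns a reference to the same object, which sidesteps the header-defined global (and its multiple-definition/ordering problems) entirely.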
Try this:-
Main.h
CLoginWindow g_LoginWindow;
static CLoginWindow* g_pLoginWindow = &g_LoginWindow;
I hope that helps [smile]
I tried that. Seemed like it would work until it returned the same error of:
error C2146: syntax error : missing ';' before identifier 'g_Login'
error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
error C4430: missing type specifier - int assume
Hi,
It looks as if the LoginWindow class is not known at the time you instantiate it. Did you check if it's header file is included ?
kind regards
Uncle Remus
This is my current code in those modules:
Main.h
#ifndef __MAIN_H
#define __MAIN_H

#pragma comment(lib, "comctl32.lib")
#pragma comment(lib, "wsock32.lib")

#define WIN32_LEAN_AND_MEAN
//#define ISOLATION_AWARE_ENABLED 1

#include <windows.h>
#include <commctrl.h>
#include <shellapi.h>
#include <stdio.h>
#include <malloc.h>

#include "login.h"

#define WM_SOCKET_ASYNC WM_USER + 100

static bool bConnected = false; // Connected... or not =P

void CenterOnScreen(HWND hWnd, RECT Client);
bool IsFileExist(LPSTR lpszFileName);

LoginWindow g_Login;
static LoginWindow* g_pLogin = &g_Login;

#endif
Login.h
#ifndef __LOGIN_H
#define __LOGIN_H

#include "main.h"

class LoginWindow
{
public:
    bool CreateLoginWindow();
    void DestroyLoginWindow();
    HWND GetWindow() const;
    bool IsCreated() const;
};

#endif
I think I have it working now. I'm still experimenting. I have create a new header file with all the statics. To avoid an include loopback error, I have referenced the new header to all the class header files and main.h. each class header refers to main.h as well. I'm still fiddling with it but so far it's looking promising and I have created some windows already so there's no compile errors... yet. Thanks for all your help so far guys!
http://www.liesdamnedlies.com/2007/08/web-analytics-i-2.html
# Web Analytics in Europe
By now you may have read that I’ll be appearing at OX2’s Web Analytics Day in Brussels this month (on the 14th, to be precise). I’ll be delivering the first public preview of Gatineau beta 1 and showing some stuff that so far has only been shared with “special friends of Microsoft” (i.e. people who’ve signed an NDA with us).
But my imminent trip to the “Capital of Europe” (Rene, you might want to ask some non-Belgians about that claim), plus Avinash’s recent “Web Analytics industry 101” post, has got me thinking about the European web analytics industry, where I toiled for 6 years before coming to the US. And I felt that some of the vendors should get a mention. I know it’s all about Omniture, WebTrends, Visual Sciences, Google and CoreMetrics (oh, and us) these days, but Europe has contributed some interesting players of its own. So here’s a (non-exhaustive) list of who’s who (or who was who) in web analytics in Europe, in no particular order.
### IndexTools
Probably the best-known European vendor this side of the pond, hailing from (somewhat improbably) Hungary, IndexTools is run by the incomparable (and seemingly inexhaustible) Dennis Mortensen, who maintains a very good blog. Founded around the same time as WebAbacus (my alma mater – see below), IndexTools pains me slightly because when I look at it I see the company that I kind of wish WebAbacus had become. They’ve also recently made a successful leap into the US market (at least, I assume it’s successful).
My only gripe with IndexTools is that when Dennis is asked “why IndexTools?” he often replies “because we’re cheaper than the other guys!” This does a great disservice to the technology that Dennis and his team have built. Although a “lower-end” tool, IndexTools is a very nice piece of work, with one of its most stand-out pieces of functionality being an incredibly easy-to-use report builder that uses Ajax to achieve a kind of pivot-table-like UI inside the browser. So Dennis, stop selling your stuff short!
### NedStat
Based in the Netherlands (but with offices in London, Paris, Madrid, Antwerp and Frankfurt), NedStat is the biggest European web analytics vendor. They’ve been around for ever – since 1996 – and could possibly lay claim to being the first hosted web analytics solution. In the early days their stuff was really rudimentary, though they picked up some good traction amongst media owners because they were able to support ABCe audits before most other vendors could.
Their current product set includes NedStat Pro (aimed at SMBs) and SiteStat (an “Enterprise” solution). NedStat Pro is now a kind of HBX-like solution (though a little lighter on functionality), with a strong focus on paid search management and reporting, and represents the bulk of NedStat’s customers (they did have an even lower-end solution, but they sold it off a couple of years ago). SiteStat is a “custom reporting” solution, with the custom reports built by NedStat consultants. In the past, NedStat charged by the report; I don’t know if this is still their business model at this end.
NedStat has suffered in the last 18 months with the appearance of Google Analytics, which has undermined their low-end value proposition, and the entry into the European market of Omniture and improved solutions and support from Visual Sciences and Webtrends. To be honest I’m not sure where they go from here.
### Foviance (formerly known as WebAbacus)
Ok, I have to get a mention in for my former employer. WebAbacus (which merged with The Usability Company in 2005 to create Foviance) was founded in 2000 by spinning out the software development arm of Blue Sky Communications. Always an “Enterprise” (read: rather complicated) solution, WebAbacus was (and is) notable for its great flexibility and very strong ETL capabilities – as well as traditional web analytics, the software’s been used to analyze online media delivery, call logs from interactive payphone kiosks, DiTV logs, search logs, and proprietary e-commerce engine logs.
Available now as a hosted service or installable software, WebAbacus is now folded into Foviance’s “Experience Management Service”, a consulting-led service that aims to help organizations improve the usability of their websites through a combination of traditional usability expertise (including lab tests) and quantitative behavior analysis.
Looking back, I regret that WebAbacus didn’t follow the same trajectory as Omniture (also founded at around the same time), but the product has found itself a successful niche at the usability-focused end of the web analytics market, saving it from the harsh glare of the center of the market, where the more formidable US vendors play.
### Site Intelligence
This is another company I have a soft spot for, even though they beat WebAbacus to the punch for a number of significant deals at a crucial time in the company’s history. Site Intelligence was founded by John Woods (now to be found at Synature), and is also very much in the Enterprise space. They have some pretty major clients in the UK, including Tesco and Carphone Warehouse.
The Site Intelligence product, VBIS, is a pretty full-on web analytics data warehouse app, with good scalability characteristics (Tesco.com is a pretty busy website). The SI approach is very services-heavy, with custom development work often occurring for major clients. This obviously means that those clients can get pretty much exactly what they ask for, which has helped SI to win major deals.
I’m a little concerned about the future of SI, however, now that Omniture (with its Discover data warehouse product) is very much active in the UK. I’m not sure which nook SI could tuck themselves away in to avoid being swept away as major businesses migrate from custom solutions to “packaged” web analytics. But if you have a big site and/or complex needs, you should definitely talk to SI as part of your selection process.
### Clickstream
No trip down web analytics memory lane would be complete without mentioning Clickstream – surely one of the most colorful web analytics vendors to emerge from the UK, not least because of the entertaining names of its management team (Titus Suck sticks in my mind). Clickstream’s claim to fame has always been a proprietary data collection mechanism that they used to claim (and occasionally still do) delivers “100% accurate web analytics data”.
They originally entered the market as a direct competitor with the likes of WebAbacus and Site Intelligence, offering reporting services as well as data collection, and generated a lot of heat (though not much light) in the early part of the decade about data accuracy. But they got out of the actual reporting business in around 2004, concentrating instead on forging partnerships with web analytics vendors who could use their data for generating reports.
Clickstream’s challenge has always been that, whilst alluring on paper, their technology is incredibly complicated in practice, whilst delivering marginal benefit in terms of accuracy (analysis of off-line browsing behavior, anyone?). We had our fingers burned a couple of times in trying to do joint projects with them in my WebAbacus days. In recognition of this, Clickstream have turned their offering into a hosted service with the promise of “instant, accurate web traffic data” without having to tag your site. All you have to do to get this data is redirect your DNS entries via Clickstream’s proxy servers, which will auto-tag your site. Redirect a major site’s DNS through a small company’s server farm? Hmm. Let me know how you get on with that.
### Instadia
Danish web analytics vendor Instadia had a unique twist on web analytics – integrated survey results. Through one analytics/survey interface you could deploy a survey on your site and then integrate and cross-reference the survey results with the behavior patterns of your visitors.
Instadia was successful enough at doing this that at the beginning of this year they were acquired by Omniture, partly for their survey capability, and partly for their European client footprint. Former Instadia ClientStep clients are now being encouraged to migrate onto Omniture SiteCatalyst. I’m not sure what is happening with the survey capability, but I imagine we’ll see something integrated into a future release of SiteCatalyst.
### WebtraffIQ
There are a number of minnows that livened up the UK market in the early years (M-tracking, Thinkmetrics) but WebtraffIQ deserves a special mention for the chutzpah of its CEO, Marcos Richardson. I never actually met anyone who was using WebtraffIQ, but Marcos did a great job of stirring up controversy on the pages of NMA (premier trade rag for the UK online industry), so that the company punched above its weight.
Ultimately, though, Google Analytics did for WebtraffIQ’s business model; Marcos has wisely rolled up the software part of his business and now concentrates on providing consultancy services around GA and other tools.
So that’s it. Not a bad list now that I come to look at it. As I said, if I’ve missed your company (my visibility of the smaller players in continental Europe was not perfect), let me know, and I’ll put in an honorable mention section at the end.
### 4 thoughts on “Web Analytics in Europe”
1. Hi Ian,
Thanks again for making your first public demo at our event 😉 I have to admit that at OX2 we are all very thrilled by this.
Regarding your comment on Brussels ‘Capital of Europe’, I know that not everybody in Europe will think this way. Mmmm, specially in the UK & France 😉
I’m not saying this as Belgian, as you know I’m Spaniard, and in Spain we consider Brussels being the capital of Europe. At least the political capital (regarding arts, I believe that now Berlin is considered the capital and regarding economy, the City is still the City). With all the European institutions based here and as the European Union power will grow in the future, I expect Brussels to be more and more considered as the European Capital.
I came to Belgium to live back in 1993 and since then I’ve seen how Brussels becomes more and more multicultural. You can go to any supermarket for example and easily hear 6-7 different languages. I have to admit that I’m really passionate about European Integration (I studied International Relations at the University) and I’m a great believer in the future of the EU.
If you asked me what I am, I would reply with my heart Spaniard and with my brain European 😉
Getting back to your post, I can think of a few other European Web Analytics vendors, such as Xiti and Weborama (French tools), Moniforce (a Dutch packet sniffing vendor), Imetrix (Italian vendor), stat24 (Polish vendor), and I’m sure I’m forgetting some. Europe being a fragmented market, we find quite a few local vendors that are more or less well established in their local market. The future will tell if these will continue to exist or if the Web Analytics market will be ruled by some big names, mostly American ones.
See you in a couple of weeks!
Kind Regards,
René
CEO OX2
2. Hey Ian,
Contrary to what people might think, you being mixed up with WebAbacus and now a Microsoft employee, this is an exceptionally objective status and recap on Web Analytics players in Europe – simply a great list!
>>They’ve also recently made a successful leap into the US market (at least, I assume it’s successful).
It is almost one third of our business Ian. So very successful indeed! However; we (from a strategy point of view) do not really focus on any specific market. We SPENT more on sales and marketing in the US as of lately though (2007). BUT I could tell that the biggest growing market for us is one we did not entirely plan for. 🙂
Just typical is it not.
>>My only gripe with IndexTools is that when Dennis is asked “why IndexTools?” he often replies “because we’re cheaper than the other guys!” This does a great disservice to the technology
He he… I would be surprised if I used the word “cheaper” – it is Cost Effective Ian.. 🙂
But being serious for a second – the reasoning behind our strategy is that we strive to be that “fifth” vendor, after the four typical US vendors (as mentioned by you above). And because of that we never compete or try to compete with Google Analytics, WebtraffIQ or Nedstat for that matter. And at top 5 level, ALL of us have similar technology, some are better in some areas and vice versa.
Therefore (at least from a financial viewpoint) going to market with a pitch saying that you can get 90% of the features of an e.g. Omniture solution for a fraction of the cost gives us an advantage – in a world saying that you NEED to employ analysts as well. To give you an example: We were shortlisted in a recent case with two other vendors (in a 100.000.000+ page views per month web property) and the pricing was this:
IndexTools: $70.000
Omniture: $240.000
Visual Sciences: $420.000
Where you can hire 2 to 4 full time analysts on top of your IndexTools Enterprise solution – AND that matters to some. I honestly believe that the more analysts you have (whatever the WA solution), the better the return you have.
So repeat after me:
Omniture
IndexTools
Webtrends
CoreMetrics
Visual Sciences
Omniture
IndexTools
Webtrends
CoreMetrics
Visual Sciences
🙂
See you in Brussels shortly.
Cheers mate.
Dennis R. Mortensen, COO at IndexTools
My Web Analytics Blog
3. Ian,
I guess you are prepared for every European vendor to add to the blog with ‘corrections’ to your post? I see Dennis has already started. Nedstat:
Myth Buster 1
We do lay our well-founded claim to being the pioneer of the ASP WA model, because we are the pioneer.
Myth Buster 2
Sitestat is now a world away from the tool you may remember from your days in Europe. We have all the features you would expect from a top tier player and some you won’t find elsewhere, like Stream Sense (our unique browser-based streaming media analytics module).
Myth Buster 3
We have continued to successfully grow our client base as our Sitestat product offers outstanding value to Medium and Enterprise size clients.
Myth Buster 4
We are the leading WA provider in Europe, and as such we have a truly European focus, with local people supporting local clients in the local language (and in the local time zone).
I could go on, but I’ll sign off by answering your question, where do we go from here? Simple, we continue delivering a highly client focused, ROI centric complete solution that epitomises actionable analytics.
Cheers,
Steve Wind-Mozley – UK Country Manager, Nedstat
https://plainmath.net/86241/frequently-a-hypothesis-is-supported-by
# Frequently, a hypothesis is supported by some studies but not by others. When faced with this situation, many researchers rely on a statistical technique known as _____.
Pader6u 2022-08-12 Answered
Frequently, a hypothesis is supported by some studies but not by others. When faced with this situation, many researchers rely on a statistical technique known as _____.
meta-analysis
study enumeration
discriminant analysis
replication
## Answers (1)
Lisa Acevedo
Answered 2022-08-13 Author has 18 answers
We know that
When a hypothesis is supported by some studies but not by others, researchers combine the quantitative results of all the available studies into a single pooled estimate. The statistical technique for doing this is meta-analysis.
An enumeration, by contrast, is just a complete, ordered listing of all the items in a collection; it is not a technique for reconciling conflicting study results.
Hence,
The first option is the correct choice.
Meta-analysis is the correct choice.
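For concreteness, the standard fixed-effect form of meta-analysis pools the per-study estimates by inverse-variance weighting. A minimal sketch (the effect sizes and variances below are invented purely for illustration):

```python
def fixed_effect_meta(effects, variances):
    # Inverse-variance weights: more precise studies count for more.
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)
    return pooled, pooled_variance

# Three hypothetical studies reporting the same effect with different precision.
effects = [0.30, 0.10, 0.25]
variances = [0.04, 0.09, 0.05]
pooled, var = fixed_effect_meta(effects, variances)
print(round(pooled, 3), round(var, 3))  # 0.243 0.018
```

Note how the pooled estimate sits closest to the most precise (lowest-variance) studies, and how the pooled variance is smaller than any single study's.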
https://codereview.stackexchange.com/questions/30166/threads-listening-on-tcp-and-rendering-in-a-loop
# Threads listening on TCP and rendering in a loop
I have two threads, where one listens on TCP and the other renders in a loop:
private void checkBox1_CheckedChanged(object sender, EventArgs e)
{
    try
    {
        if (checkBox1.Checked)
        {
            ListenThread = new Thread(Listen);
            ListenThread.Start();
            RenderThread = new Thread(Render);
            RenderThread.Start();
            tcplisten.Start();
        }
        else
        {
            ListenThread.Abort();
            RenderThread.Abort();
            tcplisten.Stop();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message, "Checkbox");
    }
}
Is this a good way of handling the threads, to just start them and later kill them when I want to?
• Can you clarify this: “To just start them, and kill them when I want to?” Are you asking how to do this, or whether this is a good way to do it? – Aseem Bansal Aug 24 '13 at 11:44
• Is there any reason you're not using the TPL? It's a very nice abstraction and it'll really help you in such cases. – Benjamin Gruenbaum Aug 24 '13 at 12:24
• @Aseem Bansal I am asking if it's good. – Zerowalker Aug 24 '13 at 13:01
• @Benjamin Gruenbaum Never heard of, can you put an answer with what you mean, to display it? – Zerowalker Aug 24 '13 at 13:02
No, this is absolutely not a good way. Thread.Abort() should never be used, because it is very hard to write correct code when an exception can happen at almost any point in your code.
Instead, you should implement cooperative cancellation either by using a volatile bool flag, or, even better, CancellationToken.
With that your code could look like this:
Thread ListenThread;
Thread RenderThread;
CancellationTokenSource CTS;

private void checkBox1_CheckedChanged(object sender, EventArgs e)
{
    try
    {
        if (checkBox1.Checked)
        {
            tcplisten.Start();
            CTS = new CancellationTokenSource();
            ListenThread = new Thread(() => Listen(CTS.Token));
            ListenThread.Start();
            RenderThread = new Thread(() => Render(CTS.Token));
            RenderThread.Start();
        }
        else
        {
            tcplisten.Stop();
            CTS.Cancel();
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message, "Checkbox");
    }
}
Your Listen() and Render() methods would then periodically check IsCancellationRequested of the passed in token and return if it's true.
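As a language-neutral illustration of the same cooperative pattern, here is a Python sketch using threading.Event in place of the CancellationToken (an analogy only, not the .NET API):

```python
import threading
import time

def render_loop(cancel, frames):
    # Check the flag at the top of every iteration -- the analogue of
    # testing IsCancellationRequested in the C# version.
    while not cancel.is_set():
        frames.append("frame")
        time.sleep(0.005)

cancel = threading.Event()
frames = []
worker = threading.Thread(target=render_loop, args=(cancel, frames))
worker.start()
time.sleep(0.05)   # let the loop run a few iterations
cancel.set()       # cooperative request: the loop exits at its next check
worker.join(timeout=2)
print(worker.is_alive())  # False -- the thread ended on its own
```

The key point is the same in both languages: the worker is never killed from outside; it notices the request at a safe point and returns cleanly.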
• Good to know. Though, never used any of those CancellationToken or volatile bool. Can you show an example? – Zerowalker Aug 24 '13 at 13:02
• @Zerowalker It's not complicated, see update. – svick Aug 24 '13 at 14:36
• I tried that, but it doesn't work. Am I supposed to implement the IsCancellationRequested check myself, in a while loop or something? – Zerowalker Aug 24 '13 at 15:11
• @Zerowalker Like I said, you need to check IsCancellationRequested yourself. So, if you have a loop in your Render() method, then you should check it at the start of each iteration, or something like that. – svick Aug 24 '13 at 15:18
• If I am supposed to do something like this: if (CTS.IsCancellationRequested) { break; } – why can't I just use my CheckBox value instead? – Zerowalker Aug 24 '13 at 15:18
https://blog.allardhendriksen.nl/posts/capture-templates-for-a-blog-post/
My Capture templates for ox-hugo
Apr 8, 2018
Creating a new ox-hugo blog post manually can be a bit of a pain. There are several properties you have to set for the org section headline, of which the most important is :EXPORT_FILE_NAME:. Of course, setting the export file name can be automated, and that is what this blog post describes.
The instructions here are taken from / inspired by the ox-hugo documentation.
First, in the use-package declaration in the init.el file, add the following defun:
(use-package ox-hugo
  :after ox
  :ensure t
  :config
  (progn
    (defun org-hugo-new-subtree-post-capture-template ()
      "Returns `org-capture' template string for new Hugo post.
See `org-capture-templates' for more information."
      (let* ((title (read-from-minibuffer "Post Title: ")) ;Prompt to enter the post title
             (fname (org-hugo-slug title)))
        (mapconcat #'identity
                   `(,(concat "* TODO " title)
                     ":PROPERTIES:"
                     ,(concat ":EXPORT_FILE_NAME: " fname)
                     ":END:"
                     "%?\n")          ;Place the cursor here finally
                   "\n")))))
This will load ox-hugo after the org export back-end (ox) has been loaded, and will define the org-hugo-new-subtree-post-capture-template function. This function creates a capture template based on a title. Note that it automatically creates the blog post filename, which is exactly what we wanted.
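For intuition, the filename generation boils down to slugifying the title: lower-casing it and collapsing anything that isn't alphanumeric into hyphens. A rough Python approximation (the real org-hugo-slug has more rules than this, so treat it as a sketch):

```python
import re

def rough_slug(title):
    # Lower-case, collapse runs of non-alphanumeric characters into
    # single hyphens, and trim hyphens from the ends. This approximates,
    # but does not exactly reproduce, org-hugo-slug.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
    return slug.strip("-")

print(rough_slug("Test post"))  # test-post
```

So a title entered at the capture prompt maps directly to the :EXPORT_FILE_NAME: value, with no manual editing.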
Second, add the following to the use-package declaration of org-mode:
(use-package org
;; ...
:custom
;; ...
(org-capture-templates
'(("t" "To Do / Next Actions")
;; ...
("h" "Hugo post"
entry (file+olp "~/projects/blog/content-org/blog.org" "Drafts")
(function org-hugo-new-subtree-post-capture-template))
))
;; ...
)
The declaration shows how org-capture-templates is customized and a h hotkey is added to create a new Hugo post. The entry is added to the Drafts heading in ~/projects/blog/content-org/blog.org, which is where I store my blog. When I invoke the capture template using C-c c h, I am first prompted for a blog title and can then write my post:
** TODO Test post
:PROPERTIES:
:EXPORT_FILE_NAME: test-post
:END:
This is a test post.