## Intermediate Algebra: Connecting Concepts through Application
$7(g^2+9h^2)(g+3h)(g-3h)$
$\bf{\text{Solution Outline:}}$ To factor the given expression, $7g^4-567h^4,$ first factor out the $GCF.$ Then apply the factoring of a difference of $2$ squares, twice. $\bf{\text{Solution Details:}}$ The $GCF$ of the terms is $GCF=7,$ since it is the largest number that divides all the given terms evenly (no remainder). Factoring out the $GCF,$ the expression above is equivalent to \begin{array}{l}\require{cancel} 7(g^4-81h^4) .\end{array} The expressions $g^4$ and $81h^4$ are both perfect squares (their square roots are exact) and are separated by a minus sign. Hence, $g^4-81h^4$ is a difference of $2$ squares. Using the factoring of the difference of $2$ squares, which is given by $a^2-b^2=(a+b)(a-b),$ the expression above is equivalent to \begin{array}{l}\require{cancel} 7[(g^2)^2-(9h^2)^2] \\\\= 7(g^2+9h^2)(g^2-9h^2) .\end{array} In turn, $g^2$ and $9h^2$ are both perfect squares separated by a minus sign, so $g^2-9h^2$ is also a difference of $2$ squares. Applying $a^2-b^2=(a+b)(a-b)$ once more, the expression above is equivalent to \begin{array}{l}\require{cancel} 7(g^2+9h^2)[(g)^2-(3h)^2] \\\\= 7(g^2+9h^2)(g+3h)(g-3h) .\end{array}
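As a quick check, a computer algebra system reproduces this factorization; a minimal SymPy sketch (symbol names mine):

import sympy as sp

g, h = sp.symbols('g h')
print(sp.factor(7*g**4 - 567*h**4))  # expect 7*(g - 3*h)*(g + 3*h)*(g**2 + 9*h**2)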
# Induction of closed form of summation
On Wikipedia the following closed form is derived - Generalised formula
Can someone explain how the closed form below is derived?
Edit
The first one is a pretty standard exercise. Let $S = \sum_{k=a}^b r^k$ (here $k$ is just the summation index). $\space$Then $$rS = r\sum_{k=a}^b r^k = \sum_{k=a}^b r^{k+1}$$ so $$rS-S = (r^{a+1}+r^{a+2}+ \dots + r^{b+1})- (r^{a}+r^{a+1}+ \dots + r^{b}) \\ = r^{b+1}-r^{a}$$ and since $$rS-S = S(r-1)$$ we have enough to know that $$S(r-1)=r^{b+1}-r^{a} \\ \implies S = \frac{r^{b+1}-r^{a}}{r-1}$$ Try to use a similar tactic in deriving the second equality you are interested in, or just plug in $r = \frac{1}{\hat{r}}$ and solve in terms of $\hat{r}$ so that it looks like the second equation.
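A quick numeric sanity check of the closed form, in Python (test values arbitrary):

r, a, b = 0.5, 2, 10  # arbitrary test values
direct = sum(r**k for k in range(a, b + 1))
closed = (r**(b + 1) - r**a) / (r - 1)
print(abs(direct - closed) < 1e-12)  # True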
• Thanks, not sure how to turn k negative. – Chris Degnen Nov 26 '14 at 16:18
• $r^{-k} = \frac{1}{r^k}$. Try making the substitution I mention in my last line. – graydad Nov 26 '14 at 16:19
• Did you get it yet? – graydad Nov 26 '14 at 17:46
• Yes, thanks. :-) – Chris Degnen Nov 26 '14 at 20:12
## College Algebra (6th Edition)
The quadratic formula is used for solving equations of the form $ax^{2}+bx+c=0$ (polynomial equations of degree 2). It generally does not apply when solving equations of higher degree; for example, $x^{3}+2x^{2}-x+2=0$ cannot be solved directly with the quadratic formula. Note: some higher-degree equations can be reduced (using substitutions) to quadratic form, but the vast majority cannot. Verdict: the statement doesn't make sense.
## What is the standard method for recording married names?
Suppose Mary A marries John B and then (after his death or divorce) she marries James C.
Rightly or wrongly my impression is that conventional practice in GEDCOM based programs is to record only Mary A as her name because her married names are "obvious". It is clear that this practice is creaking where Mary uses married names that are not "traditional" (where tradition is that of the English speaking world). Problems here include
• Mary A becoming Mary A B and then Mary B C (where the family-name is still one word and the previous family-name migrates to become a given name);
• Mary becomes Mary A-B (a new compound family-name);
However, this conventional practice can give rise to issues even when Mary's names are traditional: any diagram (or report) centred on her 2nd marriage alone will show, unless adjusted, Mary A marrying James C, when the marriage certificate probably records Mary B marrying James C.
To help overcome this, whenever a woman marries twice, I give her a secondary / alias name of her first married name (Mary B here) and adjust my diagrams to show multiple personal names. Should I be recording all married names to remove any risk of incorrect assumptions?
So - what is the best practice (and / or society standards) for recording married names? And what should it be?
Note husbands changing their name and issues with children's names are out of scope of my question. Note also - I work within GEDCOM practice where (stupidly) no dates can be assigned to Names.
Wish I could add another plus 1 for the point about assigning dates to names. – GeneJ – 2012-11-04T01:26:03.243
Should I be recording all married names to remove any risk of incorrect assumptions?
It seems to me that you should be recording all names -- married or otherwise. Use notes to add detail where necessary.
If a person is born Mary Alice Aston and marries John Bower and changes her name to Mary Aston Bower (i.e. dropping her given middle name and using her original surname as a middle name), you should record this new name and add a note that she took this name upon her marriage. If she is referred to in records (i.e. incorrectly -- it does happen) as Mary Alice Bower, you should record this name and add a note. If she divorces and changes her name back, you should add a note recording this fact. If she then marries James Carter and changes her name to Mary Alice Carter, you should record this name and add a note.
There are just too many possible variations to "assume" that the married name will follow the "conventional" (English) rules.
Genealogy is all about building personal identities using many pieces of evidence, including names. The first identifier an individual gets is usually a name, so it makes sense to use the same identifier throughout life. Unfortunately names change in many ways, not just on marriage, some of which are official and leave a record. Other types of name changes include official deed poll (e.g. men changing their name to inherit a title or under the terms of the will of a wealthy relative), double-barrelled surnames, nicknames... So the name alone makes a very poor primary database key. It is important to track name changes, as vital parts of the person's identity. I would want the option of recording the period a name was used, as it is relevant to building a personal identity.
Naming conventions vary considerably, as do the ways they are recorded in official records. For example, English baptisms mostly do not give the mother's maiden name, but Scottish baptisms do. Spanish people keep the name they were born with for life, so a woman does not change her name on marrying; she uses two surnames, the first being the father's surname and the second being the mother's maiden name. The practice of changing surname on marriage is now rejected by many women for professional reasons and personal beliefs.
In the 19th century and earlier, married Scots women were known just as much by their maiden name as married - arguably, more often for official purposes. Which caught me out on one occasion when I claimed the wrong 4G granny, as mine had "written down" her maiden name on the census form despite being in a full household of men with her married name. And naturally, there was another with her married name from the same town, of the same age, living on her own, who I thought was G-gran. – AdrianB38 – 2012-11-08T10:13:58.673
The way I learned genealogy, prior to genealogy software, was that women should always be referred to only by their maiden names.
But as you note, in many Western cultures it has long been the practice that women change their surname at marriage to their husband's. As a result, almost all the women in your family tree would have been known for most of their lives not by their maiden name, but by their married name.
As a result, if you list women by just their maiden names, most will be unrecognizable to the people viewing the information.
GEDCOM does have a NAME_TYPE, part of the PERSONAL_NAME_STRUCTURE, that allows you to record the birth name, the maiden name, married names, and even "also known as" names separately. If your software program supports this, then this is the proper way to enter married names that the person has adopted.
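For illustration, a minimal GEDCOM 5.5.1 fragment using NAME_TYPE might look like this (reusing the example names from the earlier answer; the record ID is invented):

0 @I1@ INDI
1 NAME Mary Alice /Aston/
2 TYPE birth
1 NAME Mary Aston /Bower/
2 TYPE married
1 NAME Mary Alice /Carter/
2 TYPE married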
Personally, I would hope software developers implement this by giving you the option to let the program automate the adding of married names to women. It would then present your example woman as:
• Mary (A B) C
and would index her in the name indexes under all three names, as: "A, Mary", "B, Mary" and "C, Mary"
In cases (more common nowadays) where non-traditional surnames are adopted, the program should allow you to select which married names not to include, or which ones to override and with what (e.g. with Mary A-B), so that the person can be listed properly and indexed properly.
I don't agree with your suggestion that dates should be able to be assigned to Names in GEDCOM. Dates are assigned to events, not to facts. A name is a fact. The changing of a name is an event. So the date of the name change should be included as a custom name-change event in GEDCOM. This is often used when someone legally changes their name to something else, and many programs allow you to enter a name-change event.
For names changed at marriage, this event is usually assumed to coincide with the marriage date and an additional name-change event would not be required.
For best practice, I would record the birth surname as every person's surname. I would record non-marriage surname changes as name-change events. I would note and indicate non-traditional surname changes at marriage as notes with the name. I would try to find a program that would index people by all their surnames (birth, maiden, every marriage, other changes) and show all their surnames in reports and save all this surname information into GEDCOM so that it is potentially retrievable again.
Interesting you say "prior to genealogy software ... women always be referred to only by their maiden names". As such reports are composed by humans, it would allow the writer to add an extra (sur)name in where necessary - e.g. "Mary A (the widow of John B) married...". But software isn't usually intelligent enough to see the need and add such clarifications. Hence a workable standard for the pre-software era becomes a disadvantage for the software era. I wonder how many other "standards" there are like that. – AdrianB38 – 2012-11-06T16:44:11.360
1Re "GEDCOM does have a NAME_TYPE that part of the PERSONAL_NAME_STRUCTURE". Another thing I hadn't realised - then, when I check it out, it turns out to be another thing that's in GEDCOM 5.5.1 and so not in my strictly GEDCOM 5.5 compliant program! – AdrianB38 – 2012-11-06T16:47:33.033
@AdrianB38 - Programs today should all move to GEDCOM 5.5.1. Even though it was never declared the standard, it is the de facto standard and is the only version of GEDCOM that allows a UTF-8 translation format for the Unicode character set, which is an absolute must-have, among other enhancements. – lkessler – 2012-11-06T17:51:54.947
My approach is similar to yours, Adrian. STEMMA records explicit married names - rather than assuming them to be obvious - but indicates the dates (or events) at which they changed. The recording of time-dependent names, and other types of alias or diminutive name, has a direct analogy in place names.
However, this is just part of the total data associated with that person. It impacts no specific diagrams since the label/title used for display purposes can be picked independently of the personal names.
## I want to write this query for PHP
SET @sql = NULL;

SELECT GROUP_CONCAT(DISTINCT
         CONCAT('max(CASE WHEN ca.date = ''',
                date_format(date, '%Y-%m-%d'),
                ''' THEN coalesce(ca.remarks, ''N'') END) AS ',
                date_format(date, '%Y-%m-%d'), ''))
INTO @sql
FROM time_dimension
WHERE date >= '2020-09-01' AND date <= '2020-09-30';

SELECT @sql;

SET @sql = CONCAT('SELECT ca.employee_id, ', @sql,
                  ' from ( select c.date, a.employee_id, a.remarks
                           from time_dimension c
                           left join attendance a on c.date = a.date ) ca
                    where ca.date >= ''2020-09-01'' and ca.date <= ''2020-09-30''
                      and employee_id is not null
                    group by ca.employee_id');

SELECT @sql;

PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;
## I’m trying to write a randomly patrolling AI for my 2D Platformer
I'm trying to write a randomly patrolling AI for my 2D platformer. The AI already has a ground-checker function which checks whether there are tiles nearby. What I want to do is randomize its actions and create the illusion of a somewhat "sentient" enemy. What I tried to create below uses the built-in RNG to make the enemy either jump, change direction, or keep moving.
The problem is, it doesn't seem to work properly. The enemy just jumps every time it is supposed to change behaviour; the change-direction branches, however, don't fire nearly as often. I need to know what I've done wrong here. Thanks.
void Update()
{
    //RNG
    behaviour = Random.Range(0, 3);
    jumpSpeed = Random.Range(1, 5);
    //clock
    timer += Time.deltaTime;
}

//time
private float waitTime = 2.0f;
private float timer = 0.0f;

void MoveRandomizer()
{
    if ((Mathf.Round(timer % waitTime)) == 0)
    {
        if (behaviour == 0)
        {
            movingRight = false;
        }
        if (behaviour == 1)
        {
            movingRight = true;
        }
        if (behaviour == 2)
        {
            rb.velocity = new Vector2(rb.velocity.x, jumpSpeed);
        }
    }
}
## How to write “∀x.F(x)” for “F(x)=λx.Φ(x)” in one expression (sequel from question about “∀(λφ. (φ x m→ φ y))”?
This question is a sequel to How to understand quantifier without predication "∀(λφ. (φ x m→ φ y))"?, which further explains the notation and context.
So – I have an anonymous Boolean-valued function F(y)=λx.Φ(x) (of course, y and x point to the same variable; I just used different syntactic names to point out that x is a bound variable) and I would like to write the statement that F(x) is true for all values of the argument, which can be written ∀x.F(x). But F is a named function, and I would like to write the same expression for the anonymous function that uses a lambda, so here is my suggestion: ∀x.λx.Φ(x) or ∀x.λy.Φ(y)? And apparently both are wrong.
What I am trying to achieve? I just want to build parser for language that is declared in https://www.isa-afp.org/browser_info/current/AFP/GoedelGod/GoedelGod.html. This language contains expressions like [∀(λΦ. P (λx. m¬ (Φ x)) m→ m¬ (P Φ))].
I am using the ANTLR grammar for lambda calculus https://github.com/antlr/grammars-v4/blob/master/lambda/lambda.g4 and I understand that the 1) quantifiers; 2) logical connectives; 3) arithmetic functions are just more lambda functions (it is just syntactic sugar that they are written in a specific non-lambda/prefix syntax etc.), and as such I express them in the existing lambda.g4 grammar https://github.com/antlr/grammars-v4/blob/master/lambda/lambda.g4. So – my first step is to write the cited expressions with named functions, and then I will just replace them with anonymous functions, because lambda.g4 has no option to introduce named functions. But it is so confusing to write an anonymous function and the quantifier function for the same argument.
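One way to see the "quantifiers are just more lambdas" point is to sketch it in plain Python over a finite domain (entirely my own illustration, separate from the AFP formalization): the quantifier consumes the lambda, so one writes ∀(λx.Φ(x)) rather than ∀x.λx.Φ(x).

DOMAIN = range(10)  # a toy finite domain, purely for illustration

def forall(pred):
    # The quantifier is itself a function: it takes a predicate (a lambda) and returns a Boolean.
    return all(pred(x) for x in DOMAIN)

print(forall(lambda x: x + 0 == x))  # True: this is forall(λx.Φ(x)); no extra ∀x. prefix is needed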
Just side question – maybe there is better ANTLR grammar for lambda calculus with syntactic sugar for quantifiers and connectives?
## Why did Hopcroft and Karp write $M_0, M_1, M_2, \cdots, M_i, \cdots$? (Hopcroft – Karp Algorithm)
I am reading “An $$n^{\frac{5}{2}}$$ Algorithm for Maximum Matchings in Bipartite Graphs” by Hopcroft and Karp.
Please see the image below.
Let $$s$$ be the cardinality of a maximum matching.
I think any of $$M_0, M_1, M_2, \cdots, M_s$$ is a matching and $$M_s$$ is a maximum matching.
So, I think $$P_s$$ doesn’t exist.
But the authors wrote $$M_0, M_1, M_2, \cdots, M_i, \cdots$$ and $$|P_0|, |P_1|, \cdots, |P_i|, \cdots$$.
Why?
Maybe I am confused.
## How do you write a python\pseudo code that generates all pair permutations?
What would be good pseudocode or Python 3 code for the following permutations problem? Let us define an n-permutation as a bijective function $$\pi: \{0,…,n-1\}\rightarrow \{0,…,n-1\}$$ and represent it using a list, meaning that $$\pi(i)=j$$ iff list[i]=j. Let us also define a pair permutation as a permutation in which for every $$i\neq j$$, $$\thinspace \pi(i)=j \Leftrightarrow \pi(j)=i$$. I need to write a recursive function that takes an integer n and generates all n-pair permutations (every permutation appears, and only once). [This question appeared on a test whose solution remains confidential 🙁 ]
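For what it's worth, a pair permutation is exactly what combinatorialists call an involution; here is a minimal recursive sketch in Python 3 (function names are mine, not from the test):

def pair_permutations(n):
    # Generate every pair permutation (involution) of {0, ..., n-1}, each exactly once.
    def extend(perm, free):
        if not free:
            yield perm[:]  # copy, since perm is reused across branches
            return
        i = min(free)
        perm[i] = i        # case 1: i is a fixed point
        yield from extend(perm, free - {i})
        for j in free:     # case 2: pair i with some larger j
            if j != i:
                perm[i], perm[j] = j, i
                yield from extend(perm, free - {i, j})
    yield from extend([None] * n, set(range(n)))

for p in pair_permutations(3):
    print(p)  # prints the 4 involutions of {0, 1, 2}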
## Limiting where a Turing Machine can write
Suppose I have a Turing Machine A, generated by somebody else, trying to break my system. To break my system it would have to write to the tape at some point X (this can be a range of the tape).
If I can force the machine A to be in some state q1 at some point, can I make a Turing Machine B such that for any Machine A it will never write in X?
## FTP server and chroot: SSL3 alert write: fatal: protocol version
When I enable "chroot_local_user=YES" in my FTP server config /etc/vsftpd/vsftpd.conf,
then the FTP client (WinSCP) reports: "SSL3 alert write: fatal: protocol version".
When it is commented out and after "service vsftpd restart", the login works, but browsing system directories in / is allowed.
This is CentOS 7 Linux.
These are…
## Search Engine Ranker is trying to write to the link lists. How to stop it?
I might have changed some settings and ended up with Search Engine Ranker trying to write to the link lists that I provide. Where can I find the settings to stop it rewriting the folders from which I import links?
## How do I write a tooltip for this list of cities?
distanceToSanFrancisco[s_] := QuantityMagnitude[
  TravelDistance[cityList[[859]], Interpreter["City"][s]]]

(* table1 takes a while to run on my system *)
table1 = Table[{cityList[[k]], distanceToSanFrancisco[cityList[[k]]]}, {k, Length[cityList]}];

citiesWithin[range_] := Module[{s = {}},
  Do[If[table1[[k, 2]] <= range, s = Join[s, {cityList[[k]]}]], {k, Length[cityList]}];
  s]

(* This lists all cities in California that are within range of San Francisco. *)
radius = 25; tolerance = 0.05;
a1 = GeoDisk[
  QuantityMagnitude[LatitudeLongitude[Interpreter["City"]["San Francisco"]]],
  Quantity[radius, "Miles"]];
a3 = Complement[
  citiesWithin[radius*(1 + tolerance)],
  citiesWithin[radius*(1 - tolerance)]];
Table[a3[[k]] -> distanceToSanFrancisco[a3[[k]]], {k, 1, Length[a3]}]
Show[a2, GeoListPlot[a3, PlotMarkers -> Point], ImageSize -> Medium]
This last one is what I want the tooltips on. For the red points around San Francisco, ideally I would like to get just the city name. TIA
## I wrote a message on WhatsApp to a stranger
I exchanged a few messages with a girl on fotka.com (something like Tinder or Badoo). The girl is from Nigeria. She gave me her WhatsApp number (with localization based in Nigeria). I downloaded WhatsApp and wrote one message. Now I think it was silly and very, very irresponsible! And now my question is: what can be done with my number? Can someone use it for something? Please answer me.
# Space generated by theorem labels (XeTeX)
With the thmtools package, combining \begin{foo}[name=bar,label=x] with the line \newtheorem{foo}{Foo} in the preamble typesets as Foo xxx (bar), where xxx is a number. After the (bar) there is an extra space of about 6pt. To remove it completely, as I have seen at Extra space before labeled theorem body with thmbox or thmtools+thmbox, it is sufficient to add % after the label=x] part. The point is, if I add any number of \,s after it, they get completely ignored, whereas \hspaces, \quads and \qquads don't. Try typesetting:
%!TEX TS-program = xelatex
%!TEX encoding = UTF-8 Unicode
\documentclass[a4paper]{report}
\usepackage[italian]{babel}
\usepackage{thmtools}
\newtheorem{foo}{Foo}
\begin{document}
\begin{foo}[name=bar,label=x]\hspace{5cm}
With the space.
\end{foo}
\begin{foo}[name=bar2,label=x2]
Without the space.
\end{foo}
\begin{foo}[name=bar3,label=x3]\,\,\,\,\,\,\,\,
With 8 \verb"\,"s.
\end{foo}
\end{document}
On my computer, the \,s don't produce any space, whereas the \hspace does. Why does that happen?
Adding % after the label seems not to eliminate the space. Since this has generated a couple of overfull \hboxes, I'd like to know how I can remove it.
%!TEX TS-program = xelatex
%!TEX encoding = UTF-8 Unicode
\documentclass[a4paper]{report}
\usepackage[italian]{babel}
\usepackage{thmtools}
\newtheorem{foo}{Foo}
\begin{document}
\begin{foo}[name=bar,label=x]%
With the \verb"%".
\end{foo}
\begin{foo}[name=bar2,label=x2]
Without the \verb"%".
\end{foo}
\end{document}
As David explains, the command \, is a bit dangerous. On the other hand a string of \, is surely wrong. – egreg Mar 20 '14 at 22:49
On the edited additional question, I'm not sure why you'd expect a % there to have an effect. White space never affects the start of a paragraph or list item. – David Carlisle Mar 25 '14 at 13:48
A theorem is a list item, and the space between (bar) and With is \labelsep so adding \setlength{\labelsep}{0pt} before \begin{foo} suppresses it (see the sketch after these comments). – David Carlisle Mar 25 '14 at 13:57
The reason was that I thought, since it has that effect in \label{}%, as stated in the link I put in the question, by extension, it should do the same if placed there. Obviously that was wrong thinking :). – MickG Mar 25 '14 at 16:07
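A minimal sketch of the \labelsep suggestion from the comment above (treat it as a starting point; the \begingroup keeps the change local):

\documentclass[a4paper]{report}
\usepackage{thmtools}
\newtheorem{foo}{Foo}
\begin{document}
\begingroup
\setlength{\labelsep}{0pt}% suppress the space between (bar) and the body
\begin{foo}[name=bar,label=x]
Without the extra space.
\end{foo}
\endgroup
\end{document}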
The \, do generate space (as you can see in your image) but it is vertical space:
...\kern 1.70374
...\kern 1.70374
...\kern 1.70374
...\kern 1.70374
...\kern 1.70374
...\kern 1.70374
...\kern 1.70374
...\kern 1.70374
...\glue(\parskip) 0.0 plus 1.0
...\glue(\baselineskip) 2.0
...\hbox(7.5+2.5)x345.0, glue set 225.66599fil
....\hbox(7.5+2.5)x67.92326
.....\glue 0.0
.....\glue 0.0
.....\glue -5.0
.....\hbox(7.5+2.5)x67.92326
......\glue 5.0
......\OT1/cmr/bx/n/10 F
......\kern-0.95833
......\OT1/cmr/bx/n/10 o
......\kern0.31944
......\OT1/cmr/bx/n/10 o
......\glue 3.83331 plus 1.91666 minus 1.27777
......\OT1/cmr/bx/n/10 3
\hspace generates an \hskip, but \, outside math mode generates a \kern, which doesn't automatically start a paragraph; so in vertical mode (as here) it adds vertical space. It probably ought to have been defined with \leavevmode.
Yes, it ought to. ;-) – egreg Mar 20 '14 at 22:48
Has anyone noticed my last edit, which added an issue to the question? No because it's been a while and no-one has answered :). – MickG Mar 25 '14 at 13:04
@MickG It's best to ask new questions rather than edit old ones, really. – David Carlisle Mar 25 '14 at 13:46
Help needed, rearranging polynomial for inverse equation
Hi, I need to rearrange an equation:
y = ax^2 + bx + c
to the form of:
x = ?
I'm not entirely sure how to go about this and the examples I've found require the equation to be in a different form. Any tips or a point in the right direction would be great!
Complete the square.
Great, thanks! With that in mind I've got: x = $\frac{\sqrt{y - c + \frac{b^{2}}{4a}} - \frac{b}{2\sqrt{a}}}{\sqrt{a}}$
Check out the Wolfram Equation Solver:
http://www.wolframalpha.com/examples...onSolving.html
You would want to "solve an equation with parameters". Their answer looks a bit different, so you can look at their step by step breakdown and see whether your answer is equivalent.
Do you know the quadratic formula, which is the default solution of ax² + bx + c = 0? Or is that what you are trying to prove here? Because if not, you can pull y to the other side of the equals sign and apply the quadratic formula.
Complete the square or use the quadratic formula. Most people learn how to solve quadratic equations before they learn about "inverse functions". Also, at some point you will have a "plus or minus". Unless your domain is restricted, a quadratic function will NOT have an inverse function.
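In program code, "pull y across and apply the quadratic formula" might look like this minimal Python sketch (names mine):

import math

def solve_x(a, b, c, y):
    # Solve a*x^2 + b*x + (c - y) = 0 with the quadratic formula; note the plus-or-minus.
    disc = b * b - 4 * a * (c - y)
    if disc < 0:
        raise ValueError("no real x for this y")
    root = math.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_x(1, -3, 2, 0))  # roots of x^2 - 3x + 2 = 0: (2.0, 1.0)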
I'm actually writing a program that works out a, b and c, but then needs to work out x given y. I probably used the wrong terminology to describe something along the way ^^ The answer I first wrote was generated by getting the equation into the form y = (dx + e)^2 + f and then working out d, e and f. The Wolfram example is a much nicer solution though, and more efficient computationally :) Thanks a lot for the help!
If you already have a square you can solve for p = (dx + e) first.
# Get \jobname with normal catcodes
I would like to create a macro \myjob that has the same value as \jobname, but with letters having catcode 11 instead of 12. The goal is to be able to do something like
\documentclass{article}
\edef\myjob{\jobname}
\makeatletter
\newcommand{\mytest}[1]{\begingroup%
\def\@tempa{#1}%
% \expandafter\def\expandafter\@tempa\expandafter{\detokenize{#1}}%
\meaning\@tempa\par%
\meaning\myjob\par%
\ifx\@tempa\myjob%
YES%
\else%
NO%
\fi%
\endgroup}
\begin{document}
\mytest{test}
\end{document}
where \ifx\@tempa\myjob is true (assuming the correct string is passed to \mytest). I would prefer not to detokenize the input string. Rather, I am looking for a different way of defining \myjob that "retokenizes" \jobname.
• 'Retokenizing' is risky as there might be a % or similar: see e-TeX's \scantokens Oct 19, 2016 at 14:44
• @JosephWright if a user is silly enough to put % or other nonsense in the \jobname then I am fine with it causing problems. Oct 19, 2016 at 14:48
• The detokenizing of \@tempa contents is much safer. It can be done by a simple \@onelevel@sanitize\@tempa, which does not even require e-TeX. The tokenization method via \scantokens is not reliable at all. Depending on the current category code settings and the characters used, the latter method can break with a cryptic error message; also, the result is not even stable in general. Oct 19, 2016 at 21:00
Thanks to the hint from Joseph Wright, all I need is \scantokens.
\documentclass{article}
\edef\myjob{\expandafter\scantokens\expandafter{\jobname\noexpand}}
\makeatletter
\newcommand{\mytest}[1]{\begingroup%
\def\@tempa{#1}%
% \expandafter\def\expandafter\@tempa\expandafter{\detokenize{#1}}%
\meaning\@tempa\par%
\meaning\myjob\par%
\ifx\@tempa\myjob%
YES%
\else%
NO%
\fi%
\endgroup}
\begin{document}
\mytest{test}
\end{document}
Recursive randomness of reals: summing a random decreasing sequence
In a comment to Recursive randomness of integers Rahul pointed out a continuous version of the same problem: pick ${x_0}$ uniformly in ${[0,1]}$, then ${x_1}$ uniformly in ${[0,x_0]}$, then ${x_2}$ uniformly in ${[0, x_1]}$, etc. What is the distribution of the sum ${S=\sum_{n=0}^\infty x_n}$?
The continuous version turns out to be easier to analyse. To begin with, it’s equivalent to picking uniformly distributed, independent ${y_n\in [0,1]}$ and letting ${x_n = y_0y_1\cdots y_n}$. Then the sum is
${\displaystyle y_0+y_0y_1 + y_0y_1y_2 + \cdots }$
which can be written as
${\displaystyle y_0(1+y_1(1+y_2(1+\cdots )))}$
So, ${S}$ is a stationary point of the random process ${X\mapsto (X+1)U}$ where ${U}$ is uniformly distributed in ${[0,1]}$. Simply put, ${S}$ and ${(S+1)U}$ have the same distribution. This yields the value of ${E[S]}$ in a much simpler way than in the previous post:
${E[S]=E[(S+1)U] = (E[S] + 1) E[U] = (E[S] + 1)/2}$
hence ${E[S]=1}$.
We also get an equation for the cumulative distribution function ${C(t) = P[S\le t]}$. Indeed,
${\displaystyle P[S\le t] = P[(S+1)U \le t] = P[S \le t/U-1]}$
The latter probability is ${\int_0^1 P[S\le t/u-1]\,du = \int_0^1 C(t/u-1)\,du}$. Conclusion: ${C(t) = \int_0^1 C(t/u-1)\,du}$. Differentiate to get an equation for the probability density function ${p(t)}$, namely ${p(t) = \int_0^1 p(t/u-1)\,du/u}$. It’s convenient to change the variable of integration to ${v = t/u-1}$, which leads to
${\displaystyle p(t) = \int_{t-1}^\infty p(v)\,\frac{dv}{v+1}}$
Another differentiation turns the integral equation into a delay differential equation,
${\displaystyle p'(t) = - \frac{p(t-1)}{t}}$
Looks pretty simple, doesn’t it? Since the density is zero for negative arguments, it is constant on ${[0,1]}$. This constant, which I’ll denote ${\gamma}$, is ${\int_{0}^\infty p(v)\,\frac{dv}{v+1}}$, or simply ${E[1/(S+1)]}$. I couldn’t get an analytic formula for ${\gamma}$. My attempt was ${E[1/(S+1)] = \sum_{n=0}^\infty (-1)^n M_n}$ where ${M_n=E[S^n]}$ are the moments of ${S}$. The moments can be computed recursively using ${E[S^n] = E[(S+1)^n]E[U^n]}$, which yields
${\displaystyle M_n=\frac{1}{n} \sum_{k=0}^{n-1} \binom{n}{k}M_k}$
The first few moments, starting with ${M_0}$, are 1, 1, 3/2, 17/6, 19/3, 81/5, 8351/180… Unfortunately the series ${\sum_{n=0}^\infty (-1)^n M_n}$ diverges, so this approach seems doomed. Numerically ${\gamma \approx 0.5614}$ which is not far from the Euler-Mascheroni constant, hence the choice of notation.
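For the record, a small exact-arithmetic sketch (helper name mine) reproduces these moments from the recursion above:

from fractions import Fraction
from math import comb

def moments(n_max):
    # M_n = (1/n) * sum_{k=0}^{n-1} C(n,k) * M_k, derived from E[S^n] = E[(S+1)^n] * E[U^n]
    M = [Fraction(1)]
    for n in range(1, n_max + 1):
        M.append(sum(comb(n, k) * M[k] for k in range(n)) / n)
    return M

print(moments(6))  # 1, 1, 3/2, 17/6, 19/3, 81/5, 8351/180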
On the interval (1,2) we have ${p'(t) = -\gamma/t}$, hence
${p(t) = \gamma(1-\log t)}$ for ${1 \le t \le 2}$.
The DDE gets harder to integrate after that… on the interval ${[2,3]}$ the solution already involves the dilogarithm (Spence’s function):
${\displaystyle p(t) = \gamma(1+\pi^2/12 - \log t + \log(t-1)\log t + \mathrm{Spence}\,(t))}$
following SciPy’s convention for Spence. This is as far as I went… but here is an experimental confirmation of the formulas obtained so far.
To generate a sample from distribution S, I begin with a bunch of zeros and repeat “add 1, multiply by U[0,1]” many times. That’s it.
import numpy as np
import matplotlib.pyplot as plt
trials = 10000000
terms = 10000
x = np.zeros(shape=(trials,))
for _ in range(terms):
    np.multiply(x+1, np.random.uniform(size=(trials,)), out=x)
_ = plt.hist(x, bins=1000, density=True)  # 'normed' was renamed to 'density' in current matplotlib
plt.show()
I still want to know the exact value of ${\gamma}$… after all, it’s also the probability that the sum of our random decreasing sequence is less than 1.
Update
The constant I called “${\gamma}$” is in fact ${\exp(-\gamma)}$ where ${\gamma}$ is indeed Euler’s constant… This is what I learned from the Inverse Symbolic Calculator after solving the DDE (with initial value 1) numerically, and calculating the integral of the solution. From there, it did not take long to find that
Oh well. At least I practiced solving delay differential equations in Python. There is no built-in method in SciPy for that, and although there are some modules for DDE out there, I decided to roll my own. The logic is straightforward: solve the ODE on an interval of length 1, then build an interpolating spline out of the numeric solution and use it as the right hand side in the ODE, repeat. I used Romberg’s method for integrating the solution; the integration is done separately on each interval [k, k+1] because of the lack of smoothness at the integers.
import numpy as np
from scipy.integrate import odeint, romb
from scipy.interpolate import interp1d
numpoints = 2**12 + 1
solution = [lambda x: 1]
integrals = [1]
for k in range(1, 15):
    y0 = solution[k-1](k)
    t = np.linspace(k, k+1, numpoints)
    rhs = lambda y, x: -solution[k-1](np.clip(x-1, k-1, k))/x
    y = odeint(rhs, y0, t, atol=1e-15, rtol=1e-13).squeeze()
    solution.append(interp1d(t, y, kind='cubic', assume_sorted=True))
    integrals.append(romb(y, dx=1/(numpoints-1)))
total_integral = sum(integrals)
print("{:.15f}".format(1/total_integral))
As a byproduct, the program found the probabilities of the random sum being in each integer interval:
• 56.15% in [0,1]
• 34.46% in [1,2]
• 8.19% in [2,3]
• 1.1% in [3,4]
• 0.1% in [4,5]
• less than 0.01% chance of being greater than 5
Recursive randomness of integers
Entering a string such as “random number 0 to 7” into Google search brings up a neat random number generator. For now, it supports only uniform probability distributions over integers. That’s still enough to play a little game.
Pick a positive number, such as 7. Then pick a number at random between 0 and 7 (integers, with equal probability); for example, 5. Then pick a number between 0 and 5, perhaps 2… repeat indefinitely. When we reach 0, the game becomes really boring, so that is a good place to stop. Ignoring the initial non-random number, we got a random non-increasing sequence such as 5, 2, 1, 1, 0. The sum of this one is 9… how are these sums distributed?
Let’s call the initial number A and the sum S. The simplest case is A=1, when S is the number of returns to 1 until the process hits 0. Since each return to 1 has probability 1/2, we get the following geometric distribution
| Sum | Probability |
| --- | --- |
| 0 | 1/2 |
| 1 | 1/4 |
| 2 | 1/8 |
| 3 | 1/16 |
| k | ${1/2^{k+1}}$ |
When starting with A=2, things are already more complicated: for one thing, the probability mass function is no longer decreasing, with P[S=2] being greater than P[S=1]. The histogram shows the counts obtained after 2,000,000 trials with A=2.
The probability mass function is still not too hard to compute: let’s say b is the number of times the process arrives at 2, then the sum is 2b + the result with A=1. So we end up convolving two geometric distributions, one of which is supported on even integers: hence the bias toward even sums.
| Sum | Probability |
| --- | --- |
| 0 | 1/3 |
| 1 | 1/6 |
| 2 | 7/36 |
| 3 | 7/72 |
| k | ${((4/3)^{\lfloor k/2\rfloor+1}-1)/2^k}$ |
For large k, the ratio P[S=k+2]/P[S=k] is asymptotic to (4/3)/4 = 1/3, which means that the tail of the distribution is approximately geometric with ratio ${1/\sqrt{3}}$.
I did not feel like computing exact distribution for larger A, resorting to simulations. Here is A=10 (ignore the little bump at the end, an artifact of truncation):
There are three distinct features: P[S=0] is much higher than the rest; the distribution is flat (with a bias toward even, which is diminishing) until about S=A, and after that it looks geometric. Let’s see what we can say for a general starting value A.
Perhaps surprisingly, the expected value E[S] is exactly A. To see this, consider that we are dealing with a Markov chain with states 0,1,…,A. The transition probabilities from n to any number 0,…,n are 1/(n+1). Ignoring the terminal state 0, which does not contribute to the sum, we get the following kind of transition matrix (the case A=4 shown):
${\displaystyle M = \begin{pmatrix}1/2 & 0 & 0 & 0 \\ 1/3 & 1/3 & 0 & 0 \\ 1/4 & 1/4 & 1/4 & 0 \\ 1/5 & 1/5 & 1/5 & 1/5\end{pmatrix} }$
The initial state is a vector such as ${v = (0,0,0,1)}$. So ${vM^j}$ is the state after j steps. The expected value contributed by the j-th step is ${vM^jw}$ where ${w = (1,2,3,4)^T}$ is the weight vector. So, the expected value of the sum is
${\displaystyle \sum_{j=1}^\infty vM^jw = v\left(\sum_{j=1}^\infty M^j\right)w = vM(I-M)^{-1}w}$
It turns out that the matrix ${M(I-M)^{-1}}$ has a simple form, strongly resembling M itself.
${\displaystyle M(I-M)^{-1} = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 1 & 1/2 & 0 & 0 \\ 1 & 1/2 & 1/3 & 0 \\ 1 & 1/2 & 1/3 & 1/4 \end{pmatrix} }$
Left multiplication by v extracts the bottom row of this matrix, and we are left with a dot product of the form ${(1,1/2,1/3,1/4)\cdot (1,2,3,4) = 1 + 1 + 1 + 1 = 4 }$. Neat.
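A direct simulation confirms this; a throwaway sketch, vectorized over trials (names mine):

import numpy as np

def sample_sums(A, trials):
    # Repeatedly replace n by a uniform random integer in {0, ..., n}; sum the picks.
    rng = np.random.default_rng()
    sums = np.zeros(trials, dtype=np.int64)
    n = np.full(trials, A, dtype=np.int64)
    while n.any():
        n = rng.integers(0, n + 1)  # elementwise uniform on {0, ..., n}; high is exclusive
        sums += n
    return sums

print(sample_sums(10, 1_000_000).mean())  # ≈ 10.0, matching E[S] = A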
What else can we say? The median is less than A, which is no surprise given the long tail on the right. Also, P[S=0] = 1/(A+1) since the only way to have zero sum is to hit 0 at once. A more interesting question is: what is the limit of the distribution of T = S/A as A tends to infinity? Here is the histogram of 2,000,000 trials with A=50.
It looks like the distribution of T tends to a limit, which has constant density until 1 (so, until A before rescaling) and decays exponentially after that. Writing the supposed probability density function as ${f(t) = c}$ for ${0\le t\le 1}$, ${f(t) = c\exp(k(1-t))}$ for ${t > 1}$, and using the fact that the expected value of T is 1, we arrive at ${c = 2-\sqrt{2} \approx 0.586}$ and ${k=\sqrt{2}}$. This is a pretty good approximation in some aspects: the median of this distribution is ${1/(2c)}$, suggesting that the median of S is around ${A/(4-2\sqrt{2})}$ which is in reasonable agreement with experiment. But the histogram for A=1000 still has a significant deviation from the exponential curve, indicating that the supposedly geometric part of T isn’t really geometric:
One can express S as a sum of several independent geometric random variables, but the number of summands grows quadratically in A, and I didn’t get any useful asymptotics from this. What is the true limiting distribution of S/A, if it’s not the red curve above?
Very fractional geometric progressions with an integer ratio
The geometric progression 1/3, 2/3, 4/3, 8/3, 16/3,… is notable for being dyadic (ratio 2) and staying away from integers as much as possible (distance 1/3 between this progression and the set of integers; no other dyadic progression stays further away from integers). This property is occasionally useful: by taking the union of the dyadic partition of an interval with its shift by 1/3, one gets a system of intervals that comfortably covers every point: for every point x and every (small) radius r there is an interval of size r, in which x is near the middle.
It’s easy to see that for any real number x, the distance between the progression {x, 2x, 4x, 8x, …} and the set of integers cannot be greater than 1/3. Indeed, since the integer part of x does not matter, it suffices to consider x between 0 and 1. The values between 0 and 1/3 lose immediately; the values between 1/3 and 1/2 lose after being multiplied by 2. And since x and 1-x yield the same distance, we are done.
Let’s find the most fractional progressions with other integer ratios r. When r is odd, the solution is obvious: starting with 1/2 keeps all the terms at half-integers, so the distance 1/2 is achieved. When r is even, say r = 2k, the best starting value is x = k/(2k+1), which achieves the distance x since rx = k-x. The values between 0 and k/(2k+1) are obviously worse, and those between k/(2k+1) and 1/2 become worse after being multiplied by r: they are mapped to the interval between k-x and k.
The problem is solved! But what if… x is irrational?
Returning to ratio r=2, it is clear that 1/3 is no longer attainable. The base-2 expansion of x cannot be 010101… as that would be periodic. So it must contain either 00 or 11 somewhere. Either of those will bring a dyadic multiple of x within distance less than 0.001111… (base 2) of an integer, that is distance 1/4.
The goal is to construct x so that its binary expansion is as balanced between 0 and 1 as possible, but without being periodic. The Thue-Morse constant does exactly this. It’s constructed by starting with 0 and then adding the complement of the sequence constructed so far: x = .0 1 10 1001 10010110 … which is approximately 0.412. The closest the dyadic geometric progression starting with x comes to an integer is 2x, which has distance about 0.175. The Wikipedia article links to the survey The ubiquitous Prouhet-Thue-Morse sequence by Allouche and Shallit, in which Corollary 2 implies that no other irrational number has a dyadic progression with a greater distance from integers, provided that this distance is attained. I have not been able to sort out the case in which the distance from a progression to the integers is not attained, but it seems very likely that Thue-Morse remains on top.
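Both the 0.412 and 0.175 claims are easy to confirm numerically; a small sketch (all names mine, using exact rationals to avoid rounding in the shifts):

from fractions import Fraction

def thue_morse_constant(bits):
    # t_k = parity of the binary digit sum of k; x = sum of t_k / 2^(k+1)
    return sum(Fraction(bin(k).count("1") % 2, 2 ** (k + 1)) for k in range(bits))

x = thue_morse_constant(200)
dists = []
for k in range(60):
    f = (2 ** k * x) % 1  # fractional part of the k-th term of x, 2x, 4x, ...
    dists.append(min(f, 1 - f))
print(float(x), float(min(dists)))  # ≈ 0.4124 and ≈ 0.1751 (attained at 2x)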
What about other ratios? When the ratio r is even, the situation is essentially the same as for r=2, for the following reason. In base r there are two digits nearest (r-1)/2, for example 4 and 5 in base 10. Using these digits in the Thue-Morse sequence, we get a strong candidate for the most fractional progression with ratio r: for example, 0.455454455445… in base 10, with the distance of about 0.445. Using any other digit loses the game at once: for example, having 3 in the decimal expansion implies that some term of the progression is within less than 0.39999… = 0.4 of an integer.
When the ratio is odd, there are three digits that could conceivably be used in the extremal x: namely, (r-1)/2 and its two neighbors. If the central digit (r-1)/2 is never used, we are back to the Thue-Morse pattern, such as x = 0.0220200220020220… in base 3 (an element of the standard Cantor set, by the way). But this is an unspectacular achievement, with the distance of about 0.0852. One can do better by starting with 1/2 = 0.1111111… and sprinkling this ternary expansions with 0s or 2s in some aperiodic way, doing so very infrequently. By making the runs of 1s extremely long, we get the distance arbitrarily close to 1 – 0.2111111… base 3, which is simply 1/2 – 1/3 = 1/6.
So it seems that for irrational geometric progressions with an odd ratio r, the distance to integers can be arbitrarily close to the number 1/2 – 1/r, but there is no progression achieving this value.
The Kolakoski-Cantor set
A 0-1 sequence can be interpreted as a point in the interval [0,1]. But this makes the long-term behavior of the sequence practically invisible due to limited resolution of our screens (and eyes). To make it visible, we can also plot the points obtained by shifting the binary sequence to the left (Bernoulli shift, which also goes by many other names). The resulting orbit is often dense in the interval, which doesn’t really help us visualize any patterns. But sometimes we get an interesting complex structure.
The vertical axis here is the time parameter, the number of dyadic shifts. The 0-1 sequence being visualized is the Kolakoski sequence in its binary form, with 0 and 1 instead of 1 and 2. By definition, the n-th run of equal digits in this sequence has length ${x_n+1}$. In particular, 000 and 111 never occur, which contributes to the blank spots near 0 and 1.
Although the sequence is not periodic, the set is quite stable in time; it does not make a visible difference whether one plots the first 10,000 shifts, or 10,000,000. The apparent symmetry about 1/2 is related to the open problem of whether the Kolakoski sequence is mirror invariant, meaning that together with any finite word (such as 0010) it also contains its complement (that would be 1101).
There are infinitely many forbidden words apart from 000 and 111 (and the words containing those). For example, 01010 cannot occur because it has 3 consecutive runs of length 1, which implies having 000 elsewhere in the sequence. For the same reason, 001100 is forbidden. This goes on forever: 00100100 is forbidden because it implies having 10101, etc.
The number of distinct words of length n in the Kolakoski sequence is bounded by a power of n (see F. M. Dekking, What is the long range order in the Kolakoski sequence?). Hence, the set pictured above is covered by ${O(n^p)}$ intervals of length ${2^{-n}}$, which implies it (and even its closure) is zero-dimensional in any fractal sense (has Minkowski dimension 0).
The set KC apparently does not have any isolated points; this is also an open problem, of recurrence (whether every word that appears in the sequence has to appear infinitely many times). Assuming this is so, the closure of the orbit is a totally disconnected compact set without isolated points, i.e., a Cantor set. It is not self-similar (not surprising, given it’s zero-dimensional), but its relation to the Bernoulli shift implies a structure resembling self-similarity:
Applying the transformations ${x\mapsto x/2}$ and ${x\mapsto (1+x)/2}$ yields two disjoint smaller copies that cover the original set, but with some spare parts left. The leftover bits exist because not every word in the sequence can be preceded by both 0 and 1.
Applying the transformations ${x\mapsto 2x}$ and ${x\mapsto 2x-1}$ yields two larger copies that cover the original set. There are no extra parts within the interval [0,1] but there is an overlap between the two copies.
The number ${c = \inf KC\approx 0.146778684766479}$ appears several times in the structure of the set: for instance, the central gap is ${((1-c)/2, (1+c)/2)}$, the second-largest gap on the left has the left endpoint ${(1-c)/4}$, etc. The Inverse Symbolic Calculator has not found anything about this number. Its binary expansion begins with 0.001 001 011 001 001 101 001 001 101 100… which one can recognize as the smallest binary number that can be written without doing anything three times in a row. (Can’t have 000; also can’t have 001 three times in a row; and 001 010 is not allowed because it contains 01010, three runs of length 1. Hence, the number begins with 001 001 011.) This number is obviously irrational, but other than that…
In conclusion, the Python code used to plot KC.
import numpy as np
import matplotlib.pyplot as plt
n = 1000000
a = np.zeros(n, dtype=int)  # the Kolakoski sequence in 0/1 form
j = 0                       # read pointer: a[j] controls the length of the current run
same = False                # True means the current run gets a second element
for i in range(1, n):
    if same:
        a[i] = a[i-1]       # extend the run to length 2
        same = False
    else:
        a[i] = 1 - a[i-1]   # start a new run
        j += 1
        same = bool(a[j])   # a[j] == 1 means this new run has length 2
v = np.array([1/2**k for k in range(60, 0, -1)])
b = np.convolve(a, v, mode='valid')
plt.plot(b, np.arange(np.size(b)), '.', ms=2)
plt.show()
Pisot constant beyond 0.843
In a 1946 paper Charles Pisot proved a theorem involving a curious constant ${\gamma_0= 0.843\dots}$. It can be defined as follows:
${\gamma_0= \sup\{r \colon \exists }$ monic polynomial ${p}$ such that ${|p(e^z)| \le 1}$ whenever ${|z|\le r \}}$
Equivalently, ${\gamma_0}$ is determined by the requirement that the set ${\{e^z\colon |z|\le \gamma_0\}}$ have logarithmic capacity 1; this won’t be used here. The theorem is stated below, although this post is really about the constant.
Theorem: If an entire function takes integer values at nonnegative integers and is ${O(e^{\gamma |z|})}$ for some ${\gamma < \gamma_0}$, then it is a finite linear combination of terms of the form ${z^n \alpha^z}$, where each ${\alpha }$ is an algebraic integer.
The value of ${\gamma_0}$ is best possible; thus, in some sense Pisot’s theorem completed a line of investigation that began with a 1915 theorem by Pólya which had ${\log 2}$ in place of ${\gamma_0}$, and where the conclusion was that ${f}$ is a polynomial. (Informally speaking, Pólya proved that ${2^z}$ is the “smallest” entire-function that is integer-valued on nonnegative integers.)
Although the constant ${\gamma_0}$ was mentioned in later literature (here, here, and here), no further digits of it have been stated anywhere, as far as I know. So, let it be known that the decimal expansion of ${\gamma_0}$ begins with 0.84383.
A lower bound on ${\gamma_0}$ can be obtained by constructing a monic polynomial that is bounded by 1 on the set ${E(r) = \{e^z \colon |z|\le r \}}$. Here is E(0.843):
It looks pretty round, except for that flat part on the left. In fact, E(0.82) is covered by a disk of unit radius centered at 1.3, which means that the choice ${p(z) = z-1.3}$ shows ${\gamma_0 > 0.82}$.
How to get an upper bound on ${\gamma_0}$? Turns out, it suffices to exhibit a monic polynomial ${q}$ that has all zeros in ${E(r)}$ and satisfies ${|q|>1}$ on the boundary of ${E(r)}$. The existence of such ${q}$ shows ${\gamma_0 < r}$. Indeed, suppose that ${p}$ is monic and ${|p|\le 1}$ on ${E(r)}$. Consider the function ${\displaystyle u(z) = \frac{\log|p(z)|}{\deg p} - \frac{\log|q(z)|}{\deg q}}$. By construction ${u<0}$ on the boundary of ${E(r)}$. Also, ${u}$ is subharmonic in its complement, including ${\infty}$, where the singularities of both logarithms cancel out, leaving ${u(\infty)=0}$. This contradicts the maximum principle for subharmonic functions, according to which ${u(\infty)}$ cannot exceed the maximum of ${u}$ on the boundary.
The choice of ${q(z) = z-1.42}$ works for ${r=0.89}$.
So we have ${\gamma_0}$ boxed between 0.82 and 0.89; how to get more precise bounds? I don’t know how Pisot achieved the precision of 0.843… it’s possible that he strategically picked some linear and quadratic factors, raised them to variable integer powers and optimized the latter. Today it is too tempting to throw some optimization routine on the problem and let it run for a while.
But what to optimize? The straightforward approach is to minimize the maximum of ${|p(e^z)|}$ on the circle ${|z|=r}$, approximated by sampling the function at a sufficiently fine uniform grid ${\{z_k\}}$ and picking the maximal value. This works… unspectacularly. One problem is that the objective function is non-differentiable. Another is that taking maximum throws out a lot of information: we are not using the values at other sample points to better direct the search. After running optimization for days, trying different optimization methods, tolerance options, degrees of the polynomial, and starting values, I was not happy with the results…
Turns out, the optimization is much more effective if one minimizes the variance of the set ${\{|p(\exp(z_k))|^2\}}$. Now we are minimizing a polynomial function of ${p(\exp(z_k))}$, which pushes them toward having the same absolute value — the behavior that we want the polynomial to have. It took from seconds to minutes to produce the polynomials shown below, using the BFGS method as implemented in SciPy.
As the arguments for optimization function I took the real and imaginary parts of the zeros of the polynomial. The symmetry about the real axis was enforced automatically: the polynomial was the product of quadratic terms ${(z-x_k-iy_k) (z-x_k+iy_k)}$. This eliminated the potentially useful option of having real zeros of odd order, but I did not feel like special-casing those.
Three digits
Real part: 0.916, 1.186, 1.54, 1.783
Imaginary part: 0.399, 0.572, 0.502, 0.199
Here and below, only the zeros with positive imaginary part are listed (in the left-to-right order), the others being their conjugates.
Real part: 0.878, 1.0673, 1.3626, 1.6514, 1.8277
Imaginary part: 0.3661, 0.5602, 0.6005, 0.4584, 0.171
Four digits
Real part: 0.8398, 0.9358, 1.1231, 1.357, 1.5899, 1.776, 1.8788
Imaginary part: 0.3135, 0.4999 ,0.6163, 0.637, 0.553, 0.3751, 0.1326
Real part: 0.8397, 0.9358, 1.1231, 1.3571, 1.5901, 1.7762, 1.879
Imaginary part: 0.3136, 0.5, 0.6164, 0.6372, 0.5531, 0.3751, 0.1326
No, I didn’t post the same picture twice. The polynomials are just that similar. But as the list of zeros shows, there are tiny differences…
Five digits
Real part: 0.81527, 0.8553, 0.96028, 1.1082, 1.28274, 1.46689, 1.63723, 1.76302, 1.82066, 1.86273
Imaginary part: 0.2686, 0.42952, 0.556, 0.63835, 0.66857, 0.63906, 0.54572, 0.39701, 0.23637, 0.08842
Real part: 0.81798, 0.85803, 0.95788, 1.09239, 1.25897, 1.44255, 1.61962, 1.76883, 1.86547, 1.89069
Imaginary part: 0.26631, 0.4234, 0.54324, 0.62676, 0.66903, 0.65366, 0.57719, 0.44358, 0.26486, 0.07896
Again, nearly the same polynomial works for upper and lower bounds. The fact that the absolute value of each of these polynomials is below 1 (for lower bounds) or greater than 1 (for upper bounds) can be ascertained by sampling them and using an upper estimate on the derivative; there is enough margin to trust computations with double precision.
Finally, the Python script I used. The function “obj” is getting minimized while function “values” returns the actual values of interest: the minimum and maximum of polynomial. The degree of polynomial is 2n, and the radius under consideration is r. The sample points are collected in array s. To begin with, the roots are chosen randomly. After minimization runs (inevitably, ending in a local minimum of which there are myriads), the new starting point is obtained by randomly perturbing the local minimum found. (The perturbation is smaller if minimization was particularly successful.)
import numpy as np
from scipy.optimize import minimize
def obj(r):
    rc = np.concatenate((r[:n]+1j*r[n:], r[:n]-1j*r[n:])).reshape(-1,1)
    p = np.prod(np.abs(s-rc)**2, axis=0)
    return np.var(p)

def values(r):
    rc = np.concatenate((r[:n]+1j*r[n:], r[:n]-1j*r[n:])).reshape(-1,1)
    p = np.prod(np.abs(s-rc), axis=0)
    return [np.min(p), np.max(p)]

r = 0.84384
n = 10
record = 2
s = np.exp(r * np.exp(1j*np.arange(0, np.pi, 0.01)))
xr = np.random.uniform(0.8, 1.8, size=(n,))
xi = np.random.uniform(0, 0.7, size=(n,))
x0 = np.concatenate((xr, xi))
while True:
    res = minimize(obj, x0, method='BFGS')
    if res['fun'] < record:
        record = res['fun']
        print(repr(res['x']))
        print(values(res['x']))
        x0 = res['x'] + np.random.uniform(-0.001, 0.001, size=x0.shape)
    else:
        x0 = res['x'] + np.random.uniform(-0.05, 0.05, size=x0.shape)
Multipliers preserving series convergence
The Comparison Test shows that if ${\sum a_n}$ is an absolutely convergent series, and ${\{b_n\}}$ is a bounded sequence, then ${\sum a_nb_n}$ converges absolutely. Indeed, ${|a_nb_n|\le M|a_n|}$ where ${M}$ is such that ${|b_n|\le M}$ for all ${n}$.
With a bit more effort one can prove that this property of preserving absolute convergence is equivalent to being a bounded sequence. Indeed, if ${\{b_n\}}$ is unbounded, then for every ${k}$ there is ${n_k}$ such that ${|b_{n_k}|\ge 2^k}$. We can ensure ${n_k > n_{k-1}}$ since there are infinitely many candidates for ${n_k}$. Define ${a_n=2^{-k}}$ if ${n = n_k}$ for some ${k}$, and ${a_n=0}$ otherwise. Then ${\sum a_n}$ converges but ${\sum a_nb_n}$ diverges because its terms do not approach zero.
What if we drop “absolutely”? Let’s say that a sequence ${\{b_n\}}$ preserves convergence of series if for every convergent series ${\sum a_n}$, the series ${\sum a_n b_n}$ also converges. Being bounded doesn’t imply this property: for example, ${b_n=(-1)^n}$ does not preserve convergence of the series ${\sum (-1)^n/n}$.
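A quick numeric illustration of that last example (a throwaway sketch):

import math

N = 100000
s1 = sum((-1)**n / n for n in range(1, N + 1))              # alternating harmonic series: converges to -ln 2
s2 = sum(((-1)**n / n) * (-1)**n for n in range(1, N + 1))  # multiplied by b_n = (-1)^n: the harmonic sum
print(s1, -math.log(2))  # close together
print(s2, math.log(N))   # s2 tracks ln N, i.e. it diverges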
Theorem. A sequence ${\{b_n\}}$ preserves convergence of series if and only if it has bounded variation, meaning ${\sum |b_n-b_{n+1}| }$ converges.
For brevity, let’s say that ${\{b_n\}}$ is BV. Every bounded monotone sequence is BV because the sum ${\sum |b_n-b_{n+1}| }$ telescopes. On the other hand, ${(-1)^n}$ is not BV, and neither is ${(-1)^n/n}$. But ${(-1)^n/n^p}$ is for ${p>1}$. The following lemma describes the structure of BV sequences.
Lemma 1. A sequence ${\{b_n\}}$ is BV if and only if there are two increasing bounded sequences ${\{c_n\}}$ and ${\{d_n\}}$ such that ${b_n=c_n-d_n}$ for all ${n}$.
Proof. If such ${c_n,d_n}$ exist, then by the triangle inequality $\displaystyle \sum_{n=1}^N |b_n-b_{n+1}| \le \sum_{n=1}^N (|c_n-c_{n+1}| + |d_{n+1}-d_n|) = \sum_{n=1}^N (c_{n+1}-c_n) + \sum_{n=1}^N (d_{n+1}-d_n)$ and the latter sums telescope to ${c_{N+1}-c_1 + d_{N+1}-d_1}$, which has a limit as ${N\rightarrow\infty}$ since bounded monotone sequences converge.
Conversely, suppose ${\{b_n\}}$ is BV. Let ${c_n = \sum_{k=1}^{n-1}|b_k-b_{k+1}|}$, understanding that ${c_1=0}$. By construction, the sequence ${\{c_n\}}$ is increasing and bounded. Also let ${d_n=c_n-b_n}$; as a difference of bounded sequences, this is bounded too. Finally,
$\displaystyle d_{n+1}-d_n = c_{n+1} -c_n + b_n - b_{n+1} = |b_n-b_{n+1}|+ b_n - b_{n+1} \ge 0$
which shows that ${\{d_n\}}$ is increasing.
To construct a suitable example where ${\sum a_nb_n}$ diverges, we need another lemma.
Lemma 2. If a series of nonnegative terms ${\sum A_n}$ diverges, then there is a sequence ${c_n\rightarrow 0}$ such that the series ${\sum c_n A_n}$ still diverges.
Proof. Let ${s_n = A_1+\dots+A_n}$ (partial sums); then ${A_n=s_n-s_{n-1}}$. The sequence ${\sqrt{s_n}}$ tends to infinity, but slower than ${s_n}$ itself. Let ${c_n=1/(\sqrt{s_n}+\sqrt{s_{n-1}})}$, so that ${c_nA_n = \sqrt{s_n}-\sqrt{s_{n-1}}}$, and we are done: the partial sums of ${\sum c_nA_n}$ telescope to ${\sqrt{s_n}}$.
Proof of the theorem, Sufficiency part. Suppose ${\{b_n\}}$ is BV. Using Lemma 1, write ${b_n=c_n-d_n}$. Since ${a_nb_n = a_nc_n - a_n d_n}$, it suffices to prove that ${\sum a_nc_n}$ and ${\sum a_nd_n}$ converge. Consider the first one; the proof for the other is the same. Let ${L=\lim c_n}$ and write ${a_nc_n = La_n - a_n(L-c_n)}$. Here ${\sum La_n}$ converges as a constant multiple of ${\sum a_n}$. Also, ${\sum a_n(L-c_n)}$ converges by the Dirichlet test: the partial sums of ${\sum a_n}$ are bounded, and ${L-c_n}$ decreases to zero.
Proof of the theorem, Necessity part. Suppose ${\{b_n\}}$ is not BV. The goal is to find a convergent series ${\sum a_n}$ such that ${\sum a_nb_n}$ diverges. If ${\{b_n\}}$ is not bounded, then we can proceed as in the case of absolute convergence, considered above. So let’s assume ${\{b_n\}}$ is bounded.
Since ${\sum_{n=1}^\infty |b_n-b_{n+1}|}$ diverges, by Lemma 2 there exists ${\{c_n\} }$ such that ${c_n\rightarrow 0}$ and ${\sum_{n=1}^\infty c_n|b_n-b_{n+1}|}$ diverges. Let ${d_n}$ be such that ${d_n(b_n-b_{n+1}) = c_n|b_n-b_{n+1}|}$; that is, ${d_n}$ differs from ${c_n}$ only by sign. In particular, ${d_n\rightarrow 0}$. Summation by parts yields
$\displaystyle { \sum_{n=1}^N d_n(b_n-b_{n+1}) = \sum_{n=2}^N (d_{n}-d_{n-1})b_n + d_1b_1-d_Nb_{N+1} }$
As ${N\rightarrow\infty}$, the left hand side does not have a limit since ${\sum d_n(b_n-b_{n+1})}$ diverges. On the other hand, ${d_1b_1-d_Nb_{N+1}\rightarrow d_1b_1}$ since ${d_N\rightarrow 0}$ while ${b_{N+1}}$ stays bounded. Therefore, ${\lim_{N\rightarrow\infty} \sum_{n=2}^N (d_{n}-d_{n-1})b_n}$ does not exist.
Let ${a_n= d_n-d_{n-1}}$. The series ${\sum a_n}$ converges (by telescoping, since ${\lim_{n\rightarrow\infty} d_n}$ exists) but ${\sum a_nb_n}$ diverges, as shown above.
In terms of functional analysis, the preservation of absolute convergence is essentially the statement that ${(\ell_1)^* = \ell_\infty}$. Notably, the ${\ell_\infty}$ norm of ${\{b_n\}}$, i.e., ${\sup |b_n|}$, controls the ratio of the two sums: one always has ${\sum |a_nb_n| \le \sup|b_n| \cdot \sum |a_n|}$.
I don’t have a similar quantitative statement for the case of convergence. The BV space has a natural norm too, ${\sum |b_n-b_{n-1}|}$ (interpreting ${b_0}$ as ${0}$), but it’s not obvious how to relate this norm to the values of the sums ${\sum a_n}$ and ${\sum a_nb_n}$.
Compact sets in Banach spaces
In a Euclidean space, a set is compact if and only if it is closed and bounded. This fails in all infinite-dimensional Banach spaces (and in particular in Hilbert spaces) where the closed unit ball is not compact. However, one still has a simple description of compact sets:
A subset of a Banach space is compact if and only if it is closed, bounded, and flat.
By definition, a set is flat if for every positive number r it is contained in the r-neighborhood of some finite-dimensional linear subspace.
Notes:
• The r-neighborhood of a set consists of all points whose distance to the set is less than r.
• In a finite-dimensional subspace every subset is vacuously flat.
Necessity: Suppose K is a compact set. Every compact set is closed and bounded; this is true in all metric spaces. Given a positive number r, let F be a finite set such that K is contained in the r-neighborhood of F; the existence of such an F follows by covering K with r-neighborhoods of points and choosing a finite subcover. Then the linear subspace spanned by F is finite-dimensional and demonstrates that K is flat.
Sufficiency: to prove K is compact, we must show it's complete and totally bounded. Completeness follows from being a closed subset of a complete space, so the issue is total boundedness. Given r > 0, let M be a finite-dimensional subspace such that K is contained in the (r/2)-neighborhood of M. For each point of K, pick a point of M at distance less than r/2 from it. Let E be the set of all such points in M. Since K is bounded, so is E. Being a bounded subset of a finite-dimensional linear space, E is totally bounded. Thus, there exists a finite set F such that E is contained in the (r/2)-neighborhood of F. Consequently, K is contained in the r-neighborhood of F, which shows its total boundedness.
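A concrete illustration in $\ell_2$ (the standard Hilbert cube example): the set

$$Q = \{x \in \ell_2 : |x_n| \le 2^{-n} \text{ for all } n\}$$

is closed and bounded, and it is flat, since truncating after the first $N$ coordinates lands in the span of $e_1, \dots, e_N$ and moves a point of $Q$ by at most $\left(\sum_{n>N} 4^{-n}\right)^{1/2} < 2^{-N}$; hence $Q$ is compact. By contrast, the closed unit ball is not flat: for every finite-dimensional subspace $M$ there is a unit vector orthogonal to $M$, which lies in the ball at distance $1$ from $M$.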
It’s worth noting that the equivalence of compactness with “flatness” (existence of finite-dimensional approximations) breaks down for linear operators in Banach spaces. While in a Hilbert space an operator is compact if and only if it is the norm-limit of finite-rank operators, some Banach spaces admit compact operators without a finite-rank approximation; that is, they lack the Approximation Property.
|
# Zero- and Low-Temperature Behavior of the Two-Dimensional ±J Ising Spin Glass
Phys. Rev. Lett., Vol. 107 (2011), 047203, doi:10.1103/physrevlett.107.047203
### Abstract
Scaling arguments and precise simulations are used to study the square lattice ±J Ising spin glass, a prototypical model for glassy systems. Droplet theory explains, and our numerical results show, entropically stabilized long-range spin-glass order at zero temperature, which resembles the energetic stabilization of long-range order in higher-dimensional models at finite temperature. At low temperature, a temperature-dependent crossover length scale is used to predict the power-law dependence on temperature of the heat capacity and clarify the importance of disorder distributions.
|
# Text at bottom of last page
I am working on a template in LO Writer and I'd like to have text (a signature line) at the bottom of the very last page. Above the text is a table which is automatically filled with often long texts that can cause page breaks. The solution should therefore work regardless of the number of pages: the last page might be anywhere from the first page to the 100+th page. The last page will likely still have content (the table) other than the signature line. I cannot use endnotes (those go at the top of the page) or a footer (that's on every page, and I can't use a different style for the last page because it might also be the first page; it's automatically generated) or anchor it to the page (that's just on one page and you have to specify which one), and frankly, I'm at a loss. Any ideas?
You may insert a frame
• anchor to paragraph, properties :: type :: position :: vertical bottom to page text area
For "power users" you can manage a protection of the "last line" like this:
• Insert a section as the very last object in your text; in that way that there is no paragraph behind it. So nobody is able to write below the section.
• Insert the frame as above mentioned
• Protect the section - done
Unfortunately, this does not work for me, it stays on the first page.
( 2018-05-23 18:35:16 +0200 )
The trick is to anchor the frame to a dedicated paragraph. See my answer update.
( 2018-05-23 22:33:02 +0200 )
I tried it again but it did not work. Then I tried anchoring it to character rather than to paragraph and it worked! Everything else in your answer was very helpful and accurate but I'm afraid you must have confused it there but now all is well. I'd appreciate it if you could correct that in your answer so I can mark it as correct. I also noticed that this solution also works with a Textbox. Thank you very much for your help! I couldn't have done it without you.
( 2018-05-24 08:22:40 +0200 )
Anchoring as character instead of to paragraph sometimes gives better results. It depends on circumstances, but it may prove "dangerous" when you edit the paragraph: erasing the anchor location will delete the frame (which does not happen with "to paragraph"). Once again, choosing the right anchor is a matter of experimenting, due to the effective local formatting and context.
( 2018-05-24 13:03:26 +0200 )
Wrt improving my answer: 1) do you suggest I mention character anchor? 2) where is my confusion? Initially I thought only of automating signature insertion. Should I keep only the alignment issue?
( 2018-05-24 13:05:51 +0200 )
Three directions:
• If you really design a template in LO concept, i.e. a file with extension .ott, just put your "signature" paragraph as the last element in the template.
When you "instantiate" the template, begin to type above your "signature" or, if you already have more fixed fixed text in your template (TOC or index placeholder, tables, copyright, …), where the logical start of your document is.
If you don't need the "signature" in the document, delete it (it won't affect the template).
• Define an "AutoText" for your "signature" and trigger its expansion when you need it.
• Use the replacement capabilities of Tools>AutoCorrect>AutoCorrect Options, Replace tab: define a new pattern in Replace and your "signature" in With. After that, when you type the pattern, it is automatically replaced by your "signature". Note you will need to type the final Return to end the paragraph signature.
EDIT to cope with alignment specification
In your template, add an empty paragraph in the very last position. Adjust the spacing properties and font size so that it is as unobtrusive as possible. With the cursor inside, Insert>Frame, anchored To paragraph. Position properties are: Vertical Bottom to Page Text Area, Horizontal as you see fit (Center is quite common for a "signature"). Width in Size should be large enough for your purpose. Do not forget to remove the default border.
Type your signature inside the frame.
The frame is automatically aligned at the bottom of the last page. Tune the properties of the last empty paragraph to avoid some undesired effect such as the "signature" shifted to a new page although there is enough room for it in the preceding page (this is an effect of paragraph spacing).
If this answer helped you, please accept it by clicking the check mark ✔ to the left and, karma permitting, upvote it. If this resolves your problem, close the question, that will help other people with the same question.
Please take into account that I need the signature line at the bottom of the last page, not just on the last page. Even if there's only two or three words on the page (other than the ones with the signature line) the signature line has to be at the very bottom. Also, once I have created the template, I will not be instantiating the template manually, the process will be automated.
( 2018-05-23 18:27:54 +0200 )
@wowza42
I edited my first solution, adding a protected section as the very last part of the text.
How to fix the section as last part:
• Type some letters in the last paragraph, mark the entire last paragraph, then go to menu Insert → Section
If you fail:
• Delete the last paragraph of text by going to the last paragraph of section, then hit CTRL+SHIFT+DEL
The way to break that:
Hit ALT+ENTER (in the last paragraph of the section) and you create a paragraph below, even if the section is protected.
( 2018-05-24 14:06:27 +0200 )
@wowza42 wrote: Also, once I have created the template, I will not be instantiating the template manually, the process will be automated.
I have no clue what to do then and what automation you are going to start...
( 2018-05-24 14:10:29 +0200 )
|
## Sparsity and compressed sensing
When you look at an average webpage, chances are very high that your computer loads one or more JPEG images. An image saved in the JPEG format usually takes between 30 and 150 times less memory than an uncompressed image, without looking much different.
The reason why images (and music and many other things) are so compressible has to do with sparsity. Sparsity is a property of signals in a vector space with a basis, but in contrast to many other properties it is difficult to tackle within the framework of linear algebra. In this post I want to write a bit about signals, vector spaces and bases, about sparsity, and about a very cool technique that I discovered recently.
## Signals and vector spaces
If you want to represent an image in the computer, one possibility is to divide it into small rectangular regions, call these pixels, and then use numbers for each pixel to describe its color. In the case of a grey scale picture, a single number per pixel suffices. We can now write down these numbers one after the other, and end up with a long string of numbers representing an image.
Now for something different: from school you might know the concept of a vector. It is usually represented by an arrow pointing from the origin to some other place, together with $x$ and $y$ axes:
The vector can also be described as a pair of numbers, like this

$$\mathbf{r} = \begin{pmatrix} 2.0 \\ 1.3 \end{pmatrix}$$
If you start out at the origin, the upper and lower numbers tell you how far you have to go in the direction of the $x$ and $y$ axis to arrive at the point the vector is pointing to. The two numbers are called coordinates, the corresponding axes are called coordinate axes. Because there are two of them, one calls this a two-dimensional vector space. But one need not stop there: one could have three coordinate axes, then the arrows would point to places outside the screen plane, and we would need three numbers.
Or we could have several million axes; then it is difficult to imagine an arrow, but we can still work with it by writing down several million coordinate values, just like the image I talked about earlier could be represented by a few million numbers. So we can think of the image representation as a vector in a high-dimensional vector space.
In fact many things can be represented as long strings of numbers, e.g. sounds, time series, and many more. To make general statements about all of these, they are given the common name signal; in this view, a signal is a vector in a (usually high-dimensional) vector space.
## Sparsity and bases
A signal is called sparse if only a few of its coordinates are nonzero, and most of them are zero. Below I show a 2D example of a number of sparse signals, because in 2D one can draw arrows. A sparse signal in 2D is located on one of the coordinate axes. I have shifted the origin to the middle of the plot and suppressed the arrows, otherwise it would be a very crowded plot.
Usually this is not the case with images, because the numbers we use to describe the grey value of a greyscale picture are zero only when the pixel is black. So a black picture with only a few brighter spots would be sparse.
The coordinate values tell us how far we have to go in the direction of the corresponding coordinate axis. So what happens if we just use a different set of coordinate axes?
A set of coordinate axes is called a basis. The same vector has different coordinates when expressed in a different basis. The $x$-$y$ axes we have used until now are called the standard basis. If one knows the coordinates of a vector in one basis, one can calculate the coordinates with respect to another basis (transform the vector). This is easy, but I will not explain it here.
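For illustration, here is a tiny numerical sketch of such a transform (Python; the orthonormal basis chosen here is arbitrary):

import numpy as np

B = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)   # columns are the new basis vectors
v = np.array([2.0, 1.3])                   # coordinates in the standard basis
coords = np.linalg.solve(B, v)             # solve B @ coords == v
print(coords)                              # [2.3335..., 0.4949...]; B @ coords recovers v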
So can we make a signal sparse by changing the basis? The answer is yes. Here is an example of a number of vectors that have nonzero coordinates in the standard basis, but become sparse when we change to the basis given by the two arrows.
So sparseness is a property of a collection of signals expressed in a basis. A signal can be sparse in one basis and non-sparse in another. But for every basis there are signals that are not sparse.
So in the case of images, there could be a basis in which (good approximations to) images are sparse, despite the fact that images are not sparse in the standard basis. However, if we expressed a piece of music or a time series in this basis, it probably would not be sparse. And there are indeed bases in which images are sparse, like the DCT basis or wavelet bases.
Looked at in yet another way, a set of signals is sparse in a basis if the signals are located on the coordinate axes and other low-dimensional subspaces.
Sparsity is difficult for linear algebra, because in linear algebra one is concerned with all possible vectors, and sparsity is a property that can only hold for a subset of all vectors.
## Compression
It is of course great when we know a basis in which a signal is sparse. We then only have to store which coefficients are nonzero and what their values are, which can usually be accomplished with less storage than a non-sparse representation.
Of course, a basis transform into a good basis is only one of the steps performed in modern-day formats like JPEG. There are other techniques like prediction schemes, entropy coding and quantisation that play an important role for the overall compression performance.
## Compressed sensing
A few weeks ago I read a newspaper article about a revolutionary development for digital cameras. It talks about a camera that has only a single pixel sensor and can take blurry pictures, but needs 10 minutes to take a picture, and something about compression. And I did not understand what exactly about this camera is supposed to be the revolution. So I read the original scholarly articles that were linked, and found out that there indeed is a very cool idea behind all that, but the newspaper article completely missed the point. And this camera is only a proof of concept of this really cool idea.
A standard digital camera works roughly like this: The objects you want to photograph emit light, and the lenses make sure that this light hits a CCD sensor array in the back of the camera. This sensor array has a sensor for every pixel of your final image. So in other words, each sensor measures the coordinate of the signal corresponding to its pixel in the standard basis. And the CCD array can do several million measurements in parallel in a tiny fraction of a second.
If we look at the measurement in terms of vector spaces, a measurement corresponds to a scalar product of the signal with a measurement vector $\mathbf{m}_i$, yielding a single number $m_i$. The measurements of a standard digital camera correspond to projections on the coordinate axes:
$$m_1 = \begin{pmatrix} 1 & 0 \end{pmatrix} \begin{pmatrix} 2.0 \\ 1.3 \end{pmatrix} = 2.0$$

$$m_2 = \begin{pmatrix} 0 & 1 \end{pmatrix} \begin{pmatrix} 2.0 \\ 1.3 \end{pmatrix} = 1.3$$
But we know that we only need a few nonzero coordinates to represent a picture, because we know a basis in which the image signal is sparse. So the question is: can we in principle measure the few nonzero coordinates directly, instead of measuring many pixels and afterwards transforming to a sparse representation?
It turns out the answer is yes. One can obtain a picture with many pixels by making far fewer measurements with appropriately chosen measurement vectors, because we know that the signal is sparse in a certain basis. The reconstruction is more difficult and requires more computation, but it is still feasible. This technique is called compressed (or compressive) sensing, and this is indeed a very cool idea.
I looked a bit into the literature, and there is very deep and apparently very beautiful mathematics behind that, and everyone is pretty excited. It is quite funny, because years back at university I gave a course presentation about one of the central algorithms in the reconstruction part. A good overview of compressed sensing is in this paper; the digital camera is described here.
Of course, a digital camera is not the best example for this, because it is very easy to make a few million measurements in parallel, so the measurement part is not the bottleneck. However, with other applications like MRI, the bottleneck is indeed the measurement, and compressive sensing gives impressive improvements.
## A magical compressive sensing game
Maybe you know the game Mastermind. A variation of it might be the following game (let's call it SparsterMind). It is played on a board not unlike the Mastermind board, but with 100 holes. The codemaster places a small number of colored pegs in some of the slots. He can choose how many pegs he places, but not more than 3. Different colors are worth different numbers of points; some colors have negative values. The game then progresses in rounds.
1. The code breaker marks some holes on his side of the board with white pegs.
2. The codemaster indicates the total score of the marked holes.
The game goes on for 20 rounds or so. If the code breaker has not guessed the correct holes and colors of the codemaster, he loses; otherwise he wins.
At first sight this seems very difficult, but with the knowledge about compressed sensing that we have, we can see why it is possible. We think about the pegs in the holes as a high-dimensional, yet sparse, vector; the coordinate value corresponds to the score of the color of the peg. Each round is a measurement. Now we know that with compressed sensing it is possible to recover the signal with far fewer than 100 measurements.
The compressive sensing strategy might not be easy for humans to play, because one has to solve large linear optimisation problems, but a computer can do it very easily. I implemented this game in python and put it on github, so you can be the codemaster and see how the computer figures it out.
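The core of such a strategy can be sketched in a few lines (the sizes, masks and scores below are illustrative, and this is not necessarily how the github implementation works): recover a sparse score vector from a handful of 0/1 "which holes" measurements by L1 minimization, posed as a linear program.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, m = 100, 25                                         # 100 holes, 25 rounds
x_true = np.zeros(n)
x_true[rng.choice(n, 3, replace=False)] = [3, -2, 5]   # 3 pegs with color scores

A = rng.integers(0, 2, size=(m, n)).astype(float)      # white-peg masks per round
b = A @ x_true                                         # announced total scores

# Basis pursuit: minimize sum(u) subject to -u <= x <= u and A x = b,
# using stacked variables z = [x, u].
c = np.concatenate([np.zeros(n), np.ones(n)])
A_ub = np.block([[ np.eye(n), -np.eye(n)],
                 [-np.eye(n), -np.eye(n)]])
b_ub = np.zeros(2 * n)
A_eq = np.hstack([A, np.zeros((m, n))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b, bounds=(None, None))

x_hat = res.x[:n]
print(np.allclose(np.round(x_hat), x_true))            # typically True: 25 rounds suffice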
|
# How to calculate divergence of some special fields
1. Mar 5, 2010
### netheril96
$$\nabla \cdot \frac{\vec e_r}{r^2} = 4\pi \delta(\vec r)$$

This can be seen from

$$\nabla \cdot \frac{\vec e_r}{r^2} = \frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2 \cdot \frac{1}{r^2}\right) = \frac{1}{r^2}\frac{\partial}{\partial r}(1) = 0 \quad (r \ne 0)$$

And from Gauss' Theorem

$$\int_V \left(\nabla \cdot \frac{\vec e_r}{r^2}\right) dV = \oint_S \frac{\vec e_r}{r^2} \cdot d\vec S = 4\pi$$

But if I want to directly use the formula for divergence in spherical coordinates, I can only get

$$\nabla \cdot \frac{\vec e_r}{r^2} = \frac{1}{r^2}\frac{\partial}{\partial r}\left(\frac{r^2}{r^2}\right)$$

And integrating this over a volume cannot give me the result of 4π:

$$\int_V \left(\nabla \cdot \frac{\vec e_r}{r^2}\right) dV = \int_0^\pi \sin\theta\, d\theta \int_0^{2\pi} d\phi \int_0^R \frac{\partial}{\partial r}\left(\frac{r^2}{r^2}\right) dr = 4\pi \int_0^R \frac{\partial}{\partial r}\left(\frac{r^2}{r^2}\right) dr$$
(Here V is a sphere with radius of R)
So how can I connect it with Dirac Delta?
By the way, I post this here because this problem arises in the electrostatic field of a point charge, and I found nothing about such a thing in any book concerning δ(x).
2. Mar 5, 2010
### gabbagabbahey
The problem is that $\frac{r^2}{r^2}$ is not defined at $r=0$ (the field blows up there), so you cannot simply replace it by $1$ before differentiating.
3. Mar 5, 2010
### netheril96
So how can I get $$\int_0^R \frac{\partial}{\partial r}\left(\frac{r^2}{r^2}\right) dr = 1$$
Without integration, you cannot conclude that some function with a singularity is δ(x).
4. Mar 5, 2010
### gabbagabbahey
Other than just using Gauss' Law, I suppose an appropriate limiting procedure can be used. I'd start with your expression for $\mathbf{\nabla}\cdot\left(\frac{\textbf{e}_r}{r^2}\right)$ and calculate the limit of it as $r\to 0$
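One standard way to make that limiting procedure concrete is to smooth the field with a parameter $\epsilon > 0$:

$$\nabla \cdot \frac{\vec r}{(r^2+\epsilon^2)^{3/2}} = \frac{1}{r^2}\frac{\partial}{\partial r}\left(\frac{r^3}{(r^2+\epsilon^2)^{3/2}}\right) = \frac{3\epsilon^2}{(r^2+\epsilon^2)^{5/2}}$$

This is finite everywhere, integrates to exactly $4\pi$ over all of space for every $\epsilon$ (substitute $r = \epsilon\tan\theta$), and concentrates at the origin as $\epsilon \rightarrow 0$, which is precisely the behavior encoded by $4\pi\delta(\vec r)$.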
5. Mar 5, 2010
### clem
As you have seen $$\delta({\vec r})$$ is not easily treated in spherical coordinates.
What is wrong with your first two lines? They constitute one of the definitions of the delta function, which is as 'direct' as you can get.
|
1. ## Bases
Find bases for the eigenspaces of the matrix $\begin{bmatrix}0 & 0 & -2 \\ 1 & 2 & 1 \\ 1 & 0 & 3\end{bmatrix}$. Thank you.
2. Sure- what help do you need?
You forgot the leading [ MATH ] for you Latex- but then you didn't use LaTex code, anyway. Click on $\begin{bmatrix}0 & 0 & -2 \\ 1 & 2 & 1 \\ 1 & 0 & 3\end{bmatrix}$ to see the code.
Now, to find the bases for the eigenspace the first thing you need to do is find the eigenvalues:
can you solve the equation $\left|\begin{array}{ccc}-\lambda & 0 & -2 \\1 & 2- \lambda & 1 \\1 & 0 & 3- \lambda\end{array}\right|= 0$? That reduces to solving a cubic equation. I recommend expanding the determinant on the first row.
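As a cross-check, that determinant expands to $(2-\lambda)(\lambda-1)(\lambda-2)$, so the eigenvalues are $1$ and $2$ (the latter with algebraic multiplicity two). A quick numerical verification:

import numpy as np

A = np.array([[0, 0, -2],
              [1, 2,  1],
              [1, 0,  3]], dtype=float)
print(np.linalg.eigvals(A))   # ~ [2., 1., 2.] up to ordering and rounding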
|
# perturbatr cookbook

In perturbatr: Statistical Analysis of High-Throughput Genetic Perturbation Screens
BiocStyle::markdown()
knitr::opts_chunk$set(echo = TRUE)
options(warn = -1)

library(dplyr)
library(tibble)
library(methods)
library(perturbatr)

data(rnaiscreen)
rnaiscreen <- dataSet(rnaiscreen) %>%
  dplyr::select(Condition, Replicate, GeneSymbol, Perturbation,
                Readout, Control, Design, ScreenType, Screen) %>%
  as.tibble()

# Introduction

perturbatr does stage-wise analysis of large-scale genetic perturbation screens for integrated data sets consisting of multiple screens. For multiple integrated perturbation screens, a hierarchical model that considers the variance between different biological conditions is fitted. That means that we first estimate relative effect sizes for all genes. The resulting hit list is then further extended using a network propagation algorithm to correct for false negatives. Here we show an example data analysis using a pan-pathogenic data set of three RNAi screening studies. The data set consists of two kinome and one druggable-genome-wide RNAi screen and has been published in @reiss2011recruitment (HCV) and @de2015kinome (SARS).

# Data analysis tutorial

This tutorial walks you through the basic functionality of perturbatr.

## Creating a PerturbationData object

You presumably start with something like a data.frame or tibble:

head(rnaiscreen)

In order to start your analysis you need to create a perturbation data set first. For this you only need to call the as method on your data.frame:

rnaiscreen <- methods::as(rnaiscreen, "PerturbationData")

Coercing your data.frame to PerturbationData will automatically warn you if your table is formatted wrongly. You need at least the following column names in order to be able to do analysis of perturbation screens using perturbatr:

• Condition: an identifier that best describes the respective screen. For instance, this can be the name of a virus for pathogen screens, the name of a cell line, organoid or the like. The condition describes a single data set, i.e. if you want to integrate multiple different data sets, make sure to give each a different condition.
• Replicate: an integer representing the replicate number of a screen.
• GeneSymbol: the HUGO identifier, ENTREZ id, etc. as character.
• Perturbation: a siRNA id or gRNA id that describes the knockout/knockdown for the gene.
• Readout: a normalized readout such as a log-fold change for gRNAs, a GFP signal, etc.
• Control: a vector of integers marking perturbations that have been used as negative or positive controls. A negative control is marked with '-1', a positive control with '1' and a normal sample with '0'.

Depending on how you want to model the readout using the hierarchical model, you might want to add additional columns. For the sake of simplicity this suffices, though.

## Working with PerturbationData S4 objects

A PerturbationData object consists of a single slot that stores your data. We bundled your data into an S4 object such that dispatch is easier to handle and to make sure that your data set has the correct columns:

rnaiscreen
dataSet(rnaiscreen)

PerturbationData has some basic filter and rbind functionality. Similar to dplyr::filter you can select rows by some predicate(s). In the example below we extract all rows from the data set that have a positive readout.

perturbatr::filter(rnaiscreen, Readout > 0)

Filtering on multiple rows works by just adding predicates:

perturbatr::filter(rnaiscreen, Readout > 0, Replicate == 2)

If you want to combine data sets you can call rbind, which will automatically dispatch on PerturbationData objects:

dh <- perturbatr::filter(rnaiscreen, Readout > 0, Replicate == 2)
rbind(dh, dh)

## Data analysis using a hierarchical model and network diffusion

Finally, after having set up the data set, we analyse it using a hierarchical model and network diffusion. We expect you already normalized the data sets accordingly. As noted above, if you want to analyse multiple data sets, make sure that every data set corresponds to a unique Condition. First, let's have a rough look at the data set that we are using:

plot(rnaiscreen)

We have roughly the same number of replicates per gene, but the HCV screen has fewer genes than the SARS data set. That is no problem, however, because we automatically filter such that the genes are the same. We also automatically remove positive controls for obvious reasons. Next we rank the genes using a hierarchical model, which requires explicitly modelling the readout of our data set using an R formula. Let's look at the data in more detail first:

dataSet(rnaiscreen) %>% str()

Here, variables like Replicate, Plate, RowIdx/ColIdx should not be associated with a change in the response Readout, as we normalized the data and corrected for batch effects. However, the Readouts should definitely have been different between ScreenTypes:

dataSet(rnaiscreen) %>% pull(ScreenType) %>% unique()

where E/R represents that the screen has measured the effect of a gene knockdown during the entry and replication stages of the viral lifecycle, while A/R represents that the gene knockdown's effect has been measured during the assembly and release stages of the lifecycle. In the life cycle of positive-sense RNA viruses we know that viruses make use of different host factors during their life cycle. That means that while some genes are required during entry and replication, others might play a role in assembly and release of the virions. So we have reason to believe that the stage of the infection also introduces a clustering effect. In that case we would need to add a random effect for the stage of the infection. A model selection using the Bayesian information criterion indeed suggests the following hierarchical random intercept model:

$$y_{cgtp} \mid \gamma_g, \delta_{cg}, \zeta_t, \xi_{ct} \sim \mathcal{N}(x_c \beta + \gamma_g + \delta_{cg} + \zeta_t + \xi_{ct}, \sigma^2),$$

where $y_{cgtp}$ is the readout of virus $c$, gene $g$, stage of the viral lifecycle $t$ (E/R vs A/R), and $p$ is the perturbation (siRNA) used for gene $g$. We estimate the parameters of the model using lme4 [@bates2014lme4]:
frm <- Readout ~ Condition +
(1|GeneSymbol) + (1|Condition:GeneSymbol) +
(1|ScreenType) + (1|Condition:ScreenType)
res.hm <- hm(rnaiscreen, formula = frm)
Note that for your own data different effects might be visible. Thus, before modelling, you need to explore the data to detect possible effects.
Let's take the last result and plot it. This yields a list of multiple plots. The first plot shows the 25 strongest gene effects ranked by their absolute effect sizes. Most of the genes are colored blue, which indicates that the gene knockdown leads to an inhibition of viral growth on a pan-viral level. Bars colored red represent genes for which a knockdown results in increased viral viability. If you are interested in the complete ranking of genes, use geneEffects(res.hm).
pl <- plot(res.hm)
pl[[1]]
The second plot shows the nested gene effects, i.e. the estimated effects of a gene knockdown for a single virus. The genes shown here are the same as in the first plot, so it is possible that there are stronger nested gene effects that are just not plotted. You can get all nested gene effects using nestedGeneEffects(res.hm).
pl[[2]]
Next we might want to smooth the effects from the hierarchical model using network diffusion, thereby possibly reducing the number of false negatives. For that we need to supply a graph as a data.frame and call the diffuse function:
system.file("extdata", "graph_small.rds",package = "perturbatr"))
diffu <- diffuse(res.hm, graph=graph, r=0.3)
If we plot the results we get a list of reranked genes. Note that for the reranking, the network diffusion computes a stationary distribution of a Markov random walk with restarts.
plot(diffu)
Further note that we used a very small network here. You might want to redo this analysis with the full graph which is located in system.file("extdata", "graph_full.rds",package = "perturbatr").
sessionInfo()
|
Advanced search

Authors: Ping Li , Yi-zhi Chen ... Source: [J].Trials(IF 2.206), 2017, Vol.18 (1) Springer Abstract: IgA nephropathy (IgAN) is one of the most common primary glomerular diseases worldwide, but effective therapy remains limited and many patients progress to end-stage renal disease (ESRD). Only angiotensin-converting enzyme inhibitors (ACE-I)/angiotensin-receptor blockers (ARB) sh...

Authors: Ping Li , Zhiwen Xu ... Source: [J].BMC Microbiology(IF 3.104), 2017, Vol.17 (1) Springer Abstract: The complexity of the pathogenic mechanism underlying the host immune response to Actinobacillus pleuropneumonia ( App ) makes the use of preventive measures difficult, and a more global view of the host-pathogen interactions and new insights into this process are urgently needed...

Authors: Ping Li , Jian-Wei Xie ... Source: [J].BMC Surgery(IF 1.973), 2017, Vol.17 (1) Springer Abstract: The presence and the prognostic significance of perigastric tumor deposits (TDs) in primary gastric cancer have not been extensively studied. The aim of this study was to evaluate the prognostic significance of perigastric TDs in primary gastric cancer.

Authors: Ping Li , Hai Wang ... Source: [J].Cell & Bioscience(IF 3), 2017, Vol.7 (1) Springer Abstract: The 14-3-3 family of proteins have been reported to play an important role in development in various mouse models, but the context specific developmental functions of 14-3-3ζ remain to be determined. In this study, we identified a context specific developmental function of 1...

Authors: Ping Li , Jiabao Geng ... Source: [J].Virology Journal(IF 2.092), 2017, Vol.14 (1) Springer Abstract: The amino acid substitution at position 181 of the Hepatitis B virus (HBV) polymerase is a multi-drug resistance affecting both the L-nucleoside and acyclic phosphonate nucleotide groups. Data is limited on the efficacy of entecavir (ETV) rescuing chronic hepatitis B (CHB) patien...

Authors: Ping Li , Bin Lu , Zhanzhou Luo Source: [J].Bulletin of Materials Science(IF 0.584), 2017, Vol.40 (6), pp.1069-1074 Springer Abstract: A facile hydrothermal process was developed to synthesize novel wheatear-shaped ZnO microstructures at a low temperature ( $$85^{\circ }\hbox {C}$$ ) without the assistance of any template agent. X-ray diffraction and field emission scanning electron microscopy were used to ...

Authors: ... Wei-Min He , Ping Li , Feng Wang Source: [J].Radiation Oncology(IF 2.107), 2017, Vol.12 (1) Springer Abstract: Radiation for Graves' ophthalmopathy (GO) has traditionally utilized lateral opposing fields (LOF) or three-dimensional conformal radiotherapy (3DCRT) technique. The current study was conducted to report clinical outcomes and therapeutic effects of intensity modulated radiat...

Authors: Ping Li , Matthew Garratt ... Source: [J].Journal of Intelligent & Robotic Systems(IF 0.827), 2017, Vol.87 (3-4), pp.439-454 Springer Abstract: In this paper, a visual inertial fusion framework is proposed for estimating the metric states of a Micro Aerial Vehicle (MAV) using optic flow (OF) and a homography model. Aided by the attitude estimation from the on-board Inertial Measurement Unit (IMU), the computed homography...

Authors: Ping Li , Bao-Guo Yuan ... Source: [J].Rare Metals(IF 0.493), 2017, Vol.36 (4), pp.242-246 Springer Abstract: Thermohydrogen processing can enhance workability, decrease flow stress and deforming temperature of titanium alloys. In this study, thermohydrogen processing was carried out for metastable β-type TB8 alloy. The microstructures of hydrogenated TB8 alloy were investigated bas...

Authors: ... Lingxia Wang , Ping Li , Shuangcheng Li Source: [J].Rice(IF 2.381), 2017, Vol.10 (1) Springer Abstract: Male fertility is crucial for rice yield, and the improvement of rice yield requires hybrid production that depends on male sterile lines. Although recent studies have revealed several important genes in male reproductive development, our understanding of the mechanisms of rice p...
|
# Properties
Label: 735.2.i.f
Level: 735
Weight: 2
Character orbit: 735.i
Analytic conductor: 5.869
Analytic rank: 0
Dimension: 2
CM: no
Inner twists: 2
# Related objects
## Newspace parameters
Level: $$N = 735 = 3 \cdot 5 \cdot 7^{2}$$
Weight: $$k = 2$$
Character orbit: $$[\chi]$$ = 735.i (of order $$3$$, degree $$2$$, not minimal)
## Newform invariants
Self dual: no
Analytic conductor: $$5.86900454856$$
Analytic rank: $$0$$
Dimension: $$2$$
Coefficient field: $$\Q(\sqrt{-3})$$
Coefficient ring: $$\Z[a_1, a_2, a_3]$$
Coefficient ring index: $$1$$
Twist minimal: no (minimal twist has level 105)
Sato-Tate group: $\mathrm{SU}(2)[C_{3}]$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a primitive root of unity $$\zeta_{6}$$. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q + 2 \zeta_{6} q^{2} + ( 1 - \zeta_{6} ) q^{3} + ( -2 + 2 \zeta_{6} ) q^{4} + \zeta_{6} q^{5} + 2 q^{6} -\zeta_{6} q^{9} +O(q^{10})$$ $$q + 2 \zeta_{6} q^{2} + ( 1 - \zeta_{6} ) q^{3} + ( -2 + 2 \zeta_{6} ) q^{4} + \zeta_{6} q^{5} + 2 q^{6} -\zeta_{6} q^{9} + ( -2 + 2 \zeta_{6} ) q^{10} + ( 6 - 6 \zeta_{6} ) q^{11} + 2 \zeta_{6} q^{12} + 3 q^{13} + q^{15} + 4 \zeta_{6} q^{16} + ( -4 + 4 \zeta_{6} ) q^{17} + ( 2 - 2 \zeta_{6} ) q^{18} + \zeta_{6} q^{19} -2 q^{20} + 12 q^{22} + 4 \zeta_{6} q^{23} + ( -1 + \zeta_{6} ) q^{25} + 6 \zeta_{6} q^{26} - q^{27} -8 q^{29} + 2 \zeta_{6} q^{30} + ( 1 - \zeta_{6} ) q^{31} + ( -8 + 8 \zeta_{6} ) q^{32} -6 \zeta_{6} q^{33} -8 q^{34} + 2 q^{36} -7 \zeta_{6} q^{37} + ( -2 + 2 \zeta_{6} ) q^{38} + ( 3 - 3 \zeta_{6} ) q^{39} + 6 q^{41} + q^{43} + 12 \zeta_{6} q^{44} + ( 1 - \zeta_{6} ) q^{45} + ( -8 + 8 \zeta_{6} ) q^{46} + 2 \zeta_{6} q^{47} + 4 q^{48} -2 q^{50} + 4 \zeta_{6} q^{51} + ( -6 + 6 \zeta_{6} ) q^{52} + ( -4 + 4 \zeta_{6} ) q^{53} -2 \zeta_{6} q^{54} + 6 q^{55} + q^{57} -16 \zeta_{6} q^{58} + ( -8 + 8 \zeta_{6} ) q^{59} + ( -2 + 2 \zeta_{6} ) q^{60} -14 \zeta_{6} q^{61} + 2 q^{62} -8 q^{64} + 3 \zeta_{6} q^{65} + ( 12 - 12 \zeta_{6} ) q^{66} + ( -7 + 7 \zeta_{6} ) q^{67} -8 \zeta_{6} q^{68} + 4 q^{69} + 6 q^{71} + ( 1 - \zeta_{6} ) q^{73} + ( 14 - 14 \zeta_{6} ) q^{74} + \zeta_{6} q^{75} -2 q^{76} + 6 q^{78} + \zeta_{6} q^{79} + ( -4 + 4 \zeta_{6} ) q^{80} + ( -1 + \zeta_{6} ) q^{81} + 12 \zeta_{6} q^{82} -2 q^{83} -4 q^{85} + 2 \zeta_{6} q^{86} + ( -8 + 8 \zeta_{6} ) q^{87} -12 \zeta_{6} q^{89} + 2 q^{90} -8 q^{92} -\zeta_{6} q^{93} + ( -4 + 4 \zeta_{6} ) q^{94} + ( -1 + \zeta_{6} ) q^{95} + 8 \zeta_{6} q^{96} + 6 q^{97} -6 q^{99} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$2q + 2q^{2} + q^{3} - 2q^{4} + q^{5} + 4q^{6} - q^{9} + O(q^{10})$$ $$2q + 2q^{2} + q^{3} - 2q^{4} + q^{5} + 4q^{6} - q^{9} - 2q^{10} + 6q^{11} + 2q^{12} + 6q^{13} + 2q^{15} + 4q^{16} - 4q^{17} + 2q^{18} + q^{19} - 4q^{20} + 24q^{22} + 4q^{23} - q^{25} + 6q^{26} - 2q^{27} - 16q^{29} + 2q^{30} + q^{31} - 8q^{32} - 6q^{33} - 16q^{34} + 4q^{36} - 7q^{37} - 2q^{38} + 3q^{39} + 12q^{41} + 2q^{43} + 12q^{44} + q^{45} - 8q^{46} + 2q^{47} + 8q^{48} - 4q^{50} + 4q^{51} - 6q^{52} - 4q^{53} - 2q^{54} + 12q^{55} + 2q^{57} - 16q^{58} - 8q^{59} - 2q^{60} - 14q^{61} + 4q^{62} - 16q^{64} + 3q^{65} + 12q^{66} - 7q^{67} - 8q^{68} + 8q^{69} + 12q^{71} + q^{73} + 14q^{74} + q^{75} - 4q^{76} + 12q^{78} + q^{79} - 4q^{80} - q^{81} + 12q^{82} - 4q^{83} - 8q^{85} + 2q^{86} - 8q^{87} - 12q^{89} + 4q^{90} - 16q^{92} - q^{93} - 4q^{94} - q^{95} + 8q^{96} + 12q^{97} - 12q^{99} + O(q^{100})$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/735\mathbb{Z}\right)^\times$$.
$$n$$: $$346$$, $$442$$, $$491$$
$$\chi(n)$$: $$-\zeta_{6}$$, $$1$$, $$1$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
Label | $$\iota_m(\nu)$$ | $$a_{2}$$ | $$a_{3}$$ | $$a_{4}$$ | $$a_{5}$$ | $$a_{6}$$ | $$a_{7}$$ | $$a_{8}$$ | $$a_{9}$$ | $$a_{10}$$
226.1 | 0.5 − 0.866025i | 1.00000 − 1.73205i | 0.500000 + 0.866025i | −1.00000 − 1.73205i | 0.500000 − 0.866025i | 2.00000 | 0 | 0 | −0.500000 + 0.866025i | −1.00000 − 1.73205i
361.1 | 0.5 + 0.866025i | 1.00000 + 1.73205i | 0.500000 − 0.866025i | −1.00000 + 1.73205i | 0.500000 + 0.866025i | 2.00000 | 0 | 0 | −0.500000 − 0.866025i | −1.00000 + 1.73205i
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
7.c even 3 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 735.2.i.f 2
7.b odd 2 1 105.2.i.b 2
7.c even 3 1 735.2.a.a 1
7.c even 3 1 inner 735.2.i.f 2
7.d odd 6 1 105.2.i.b 2
7.d odd 6 1 735.2.a.b 1
21.c even 2 1 315.2.j.a 2
21.g even 6 1 315.2.j.a 2
21.g even 6 1 2205.2.a.k 1
21.h odd 6 1 2205.2.a.m 1
28.d even 2 1 1680.2.bg.l 2
28.f even 6 1 1680.2.bg.l 2
35.c odd 2 1 525.2.i.a 2
35.f even 4 2 525.2.r.d 4
35.i odd 6 1 525.2.i.a 2
35.i odd 6 1 3675.2.a.o 1
35.j even 6 1 3675.2.a.p 1
35.k even 12 2 525.2.r.d 4
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
105.2.i.b 2 7.b odd 2 1
105.2.i.b 2 7.d odd 6 1
315.2.j.a 2 21.c even 2 1
315.2.j.a 2 21.g even 6 1
525.2.i.a 2 35.c odd 2 1
525.2.i.a 2 35.i odd 6 1
525.2.r.d 4 35.f even 4 2
525.2.r.d 4 35.k even 12 2
735.2.a.a 1 7.c even 3 1
735.2.a.b 1 7.d odd 6 1
735.2.i.f 2 1.a even 1 1 trivial
735.2.i.f 2 7.c even 3 1 inner
1680.2.bg.l 2 28.d even 2 1
1680.2.bg.l 2 28.f even 6 1
2205.2.a.k 1 21.g even 6 1
2205.2.a.m 1 21.h odd 6 1
3675.2.a.o 1 35.i odd 6 1
3675.2.a.p 1 35.j even 6 1
## Hecke kernels
This newform subspace can be constructed as the intersection of the kernels of the following linear operators acting on $$S_{2}^{\mathrm{new}}(735, [\chi])$$:
$$T_{2}^{2} - 2 T_{2} + 4$$ $$T_{13} - 3$$ $$T_{17}^{2} + 4 T_{17} + 16$$
## Hecke Characteristic Polynomials
$p$ $F_p(T)$
$2$ $$1 - 2 T + 2 T^{2} - 4 T^{3} + 4 T^{4}$$
$3$ $$1 - T + T^{2}$$
$5$ $$1 - T + T^{2}$$
$7$ 1
$11$ $$1 - 6 T + 25 T^{2} - 66 T^{3} + 121 T^{4}$$
$13$ $$( 1 - 3 T + 13 T^{2} )^{2}$$
$17$ $$1 + 4 T - T^{2} + 68 T^{3} + 289 T^{4}$$
$19$ $$( 1 - 8 T + 19 T^{2} )( 1 + 7 T + 19 T^{2} )$$
$23$ $$1 - 4 T - 7 T^{2} - 92 T^{3} + 529 T^{4}$$
$29$ $$( 1 + 8 T + 29 T^{2} )^{2}$$
$31$ $$1 - T - 30 T^{2} - 31 T^{3} + 961 T^{4}$$
$37$ $$1 + 7 T + 12 T^{2} + 259 T^{3} + 1369 T^{4}$$
$41$ $$( 1 - 6 T + 41 T^{2} )^{2}$$
$43$ $$( 1 - T + 43 T^{2} )^{2}$$
$47$ $$1 - 2 T - 43 T^{2} - 94 T^{3} + 2209 T^{4}$$
$53$ $$1 + 4 T - 37 T^{2} + 212 T^{3} + 2809 T^{4}$$
$59$ $$1 + 8 T + 5 T^{2} + 472 T^{3} + 3481 T^{4}$$
$61$ $$( 1 + T + 61 T^{2} )( 1 + 13 T + 61 T^{2} )$$
$67$ $$1 + 7 T - 18 T^{2} + 469 T^{3} + 4489 T^{4}$$
$71$ $$( 1 - 6 T + 71 T^{2} )^{2}$$
$73$ $$1 - T - 72 T^{2} - 73 T^{3} + 5329 T^{4}$$
$79$ $$1 - T - 78 T^{2} - 79 T^{3} + 6241 T^{4}$$
$83$ $$( 1 + 2 T + 83 T^{2} )^{2}$$
$89$ $$1 + 12 T + 55 T^{2} + 1068 T^{3} + 7921 T^{4}$$
$97$ $$( 1 - 6 T + 97 T^{2} )^{2}$$
|
# Reviewing Indonesian Teachers Training
There are two types of teacher training conducted by the 12 P4TKs and supported by 30 LPMPs, namely face-to-face training and e-training. The allocated budget for teacher training in the 12 P4TKs is about 180 billion rupiahs (Ditjen PMPTK, 2009). According to the national standard of budgeting for a 100-hour training, the face-to-face training unit cost is about 2 million rupiahs per teacher per training, and for e-training it is about 3.5 million rupiahs per teacher per training. Thus every year the Ditjen PMPTK can conduct teacher training for about 100,000 teachers, so it is believed that it will take more than ten years to disseminate a new curriculum standard or teaching methodology to more than a million teachers.
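A rough back-of-the-envelope check of these figures, using only the unit costs quoted above:

budget = 180e9                 # rupiahs per year across the 12 P4TKs
face_to_face = 2.0e6           # rupiahs per teacher per 100-hour training
e_training = 3.5e6             # rupiahs per teacher per e-training

print(budget / face_to_face)                # 90,000 teachers/year if all face-to-face
print(budget / e_training)                  # ~51,400 teachers/year if all e-training
print(1_000_000 / (budget / face_to_face))  # ~11 years to reach a million teachers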
Face-to-face training is conducted either on campus or off campus, involving participants from several provinces. This training has several patterns, such as 50 hours, 100 hours, 200 hours, and 600 hours, where every training group has about 30 participants. For instance, P4TK TK & PLB$^{1}$ is able to conduct a 50-hour on-campus training followed by two groups of 30 participants. On the other hand, e-training is conducted off campus, involving participants from almost all 30 provinces. This training has a specific pattern, which is 100 hours of academic writing guidance. The only P4TK which has been developing and conducting e-training with national coverage is P4TK TK & PLB, involving more than 5000 participants a year.
Besides the unit cost of training and the number of participants, there is an extrinsic factor, namely the government policy on education quality improvement through teacher certification, which gives a profession incentive of up to 100% of the main salary to accredited teachers. Obviously, this policy will increase the national education budget, which has direct implications for the teacher training budget in the future because of limited foreign exchange reserves. It is believed that if the global economic crisis remains for the next five years, there will be a teacher training budget crisis.
This overview of the Indonesian teacher training and budgeting situation encourages us to take a breakthrough action to endorse teachers' quality improvement through training under limited budget conditions. In other words, we should develop a teacher training model that optimizes both the unit cost and the number of participants. We can combine the positive aspects of face-to-face training and e-training, namely the low face-to-face training unit cost and the relatively high number of e-training participants, to establish an optimal training model. The convergence of face-to-face training and e-training, which is called blended training, comes from the blended learning concept.
$^{1}$P4TK TK & PLB is a P4TK that takes care of kindergarten and special-needs school teachers
Ditjen PMPTK (2009). Rencana Kerja Anggaran – Kementrian Lembaga Ditjen PMPTK. Government Budgeting System, Software, Jakarta: Direktorat Jenderal Anggaran.
|
# CellEvaluationFunction or \$PreRead stripping inline cells from text cells
I want to create a textual style that has some CellEvaluationFunction that processes the contents of the cell a certain way. Particularly, it has to do something with the contents of inline cells (formulas in the middle of the text).
The problem is that it seems the CellEvaluationFunction receives an already parsed string (something like what you would get when you "copy as input"), so I lose the inline cells. If the cell is one of those that start with an empty box, boxdata, like in StandardForm or TraditionalForm, then the function does get the box structure, but then the writing is not the same: spaces become boxes, it formats as formulas, and inline cells get formatted as text.
How could I solve this?
Basic example
Cell[TextData[{
 "hello ",
 Cell[BoxData[
  FormBox[
   FractionBox["3", "8"], TraditionalForm]]]
}], "myText", CellEvaluationFunction->myEvalFun, Evaluatable->True]
suppose I want it turned into the string "hello PP\frac{3}{8}PP", and in general, the textual part remaining the same and the inline cells wrapped in PP in TeXForm
-
@MikeHoneychurch yes, I see a red x^2 when I execute the output cell you get after executing the example – Rojo Mar 1 '12 at 0:29
@MikeHoneychurch, you actually are executing "the output cell" right? – Rojo Mar 1 '12 at 0:46
Could you please include a Cell Expression with appearance that you want, with inline boxes, etc.? – Mr.Wizard Mar 1 '12 at 0:56
Rojo, whoops. I've had several windows open and working on a few things at once. the result is lack of attention! – Mike Honeychurch Mar 1 '12 at 1:05
@Rojo If you want to understand how the FE communicates with the kernel, you should learn to use LinkSnooper. I use it all the time to see what's actually being passed back and forth on the links. Look up LinkSnooper in the docs, which links to a Javadoc page that documents it. Given the sorts of things you express an interest in, I think you would find it a highly instructive tool. If you have any questions about using it, post them here and I'll try to keep an eye out for them. – John Fultz Mar 1 '12 at 12:20
You're not going to get this to work on raw TextData cells. The FE evaluates TextData cells using EnterTextPacket, which merely sends a string along. And, so, the contents must be encoded as a string. Which means you're going to lose all of your typesetting structure. So, let's assume that you've embedded the above cell in a typeset cell where we'll have some more choices. E.g.,
Cell[BoxData[Cell[TextData[{
 "hello ",
 Cell[BoxData[
  FormBox[
   FractionBox["3", "8"], TraditionalForm]]]
}]]], "myText",
 Evaluatable->True,
 CellEvaluationFunction->myEvalFun]
Now the FE is going to send an EnterExpressionPacket which maintains the full box structure, including inline cells, and that we can work with. From that starting point, I wrote a version of myEvalFun which works for your sample input. It's not very robust...in particular, it assumes that the cell contains one TextData cell with, at most, one level of BoxData cells inside of it. And that the contents of the TextData cell contain nothing other than strings and BoxData cells (other valid TextData contents include StyleBoxes, ButtonBoxes, and TextData cells). But I think it'll give you a good starting point to work from.
myEvalFun[boxes_, form_] := Module[{inlineExprs, val, topExpr},
inlineExprs =
Cases[boxes,
Cell[val : BoxData[_], ___] :>
"PP" <> ToString[TeXForm[ToExpression[val, form]]] <> "PP",
Infinity];
topExpr = boxes[[1]] /. Cell[_BoxData, ___] :> "";
topExpr =
topExpr /. {Cell[TextData[{val___String}], ___] :> StringJoin[val],
Cell[val_String] :> val};
If[StringQ[topExpr], StringForm[topExpr, Sequence @@ inlineExprs],
"Error"]]
-
|
## MATH443 Algebraic topology课程简介
Topology is the branch of mathematics concerned with the study of properties that are preserved under continuous transformations. One of the main goals of topology is to classify topological spaces up to homeomorphism, which is a continuous transformation that can be reversed.
However, determining whether two topological spaces are homeomorphic can be a difficult task. Algebraic constructions known as invariants are used to help determine whether two differently presented topological spaces are indeed different. These invariants are algebraic objects that can be associated with a given topological space and remain unchanged under homeomorphism. This allows us to distinguish between topological spaces that are not homeomorphic.
Homotopy equivalence is a fundamental concept in algebraic topology. Two topological spaces are homotopy equivalent if they can be continuously deformed into each other. More precisely, if there exists a continuous map between the spaces that is invertible up to homotopy, then the spaces are said to be homotopy equivalent. Homotopy equivalence is an equivalence relation, which means that it is reflexive, symmetric, and transitive.
## PREREQUISITES
Group presentations and homomorphisms are also important concepts in algebraic topology. A group presentation is a way of describing a group using generators and relations. Homomorphisms are maps between groups that preserve their algebraic structure. In algebraic topology, groups can be associated with topological spaces, and homomorphisms between groups can be used to study the properties of the associated spaces.
Covering spaces are another important topic in algebraic topology. A covering space is a space that locally looks like a product of a space and a discrete set. Covering spaces can be used to study the topology of a space by relating it to the topology of a simpler space.
The fundamental group is an algebraic invariant that can be associated with a topological space. The fundamental group is a group that captures the essential features of the topology of the space. It is a measure of the number of ways that loops in the space can be continuously deformed. The fundamental group is an important tool for distinguishing between topological spaces.
Homology theories are another set of algebraic invariants that can be associated with a topological space. Homology theories assign algebraic objects to topological spaces that measure the number of holes in the space of a given dimension. Homology theories are more general than the fundamental group and can be used to study a wider class of topological spaces.
Finally, the cohomology ring of a space is a more advanced concept that can be used to study the topology of a space. The cohomology ring is an algebraic object that is associated with a space and captures information about the ways in which the space can be decomposed into simpler spaces. The cohomology ring is a more powerful invariant than the homology groups and can be used to study more complex topological spaces.
## MATH443 Algebraic topology(EXAM HELP, ONLINE TUTOR)
Lemma 1. Let $f_0, f_1$ and $f_2$ be maps $X \rightarrow Y$. If $f_0 \simeq f_1$ and $f_1 \simeq f_2$ then $f_0 \simeq f_2$.
Proof. Let $F_0: X \times I \rightarrow Y$ be a homotopy between $f_0$ and $f_1$, and $F_1: X \times I \rightarrow Y$ a homotopy between $f_1$ and $f_2$.
Define $F: X \times I \rightarrow Y$ by
$$F(x, t)= \begin{cases} F_0(x, 2t), & t \in [0, 1/2] \\ F_1(x, 2t-1), & t \in [1/2, 1]. \end{cases}$$
If $t=1 / 2$ then $F_0(x, 2 t)=F_0(x, 1)=f_1(x)=F_1(x, 0)=F_1(x, 2 t-1)$, i.e. the map $F$ is well-defined. By the pasting lemma, $F$ is continuous. Since $F(x, 0)=F_0(x, 0)=f_0(x)$ and $F(x, 1)=F_1(x, 1)=f_2(x), F$ is a homotopy between $f_0$ and $f_2$.
To elaborate, the idea behind the proof is to “combine” the two given homotopies $F_0$ and $F_1$ to obtain a homotopy between $f_0$ and $f_2$. This is achieved by defining $F$ to be a combination of $F_0$ and $F_1$, where $F_0$ is used for the first half of the interval $[0,1]$ and $F_1$ is used for the second half. The key observation is that $F$ is well-defined at $t=1/2$ because $f_1 = F_0(\cdot,1) = F_1(\cdot,0)$.
To show that $F$ is continuous, the pasting lemma is used. Specifically, we note that $F$ is continuous on $X \times [0,1/2]$ because it is the restriction of the continuous function $F_0$. Similarly, $F$ is continuous on $X \times [1/2,1]$ because it is the restriction of the continuous function $F_1$. Since the two sets $X \times [0,1/2]$ and $X \times [1/2,1]$ intersect only at the point $(x,1/2)$, and $F$ agrees with both $F_0$ and $F_1$ at this point, the pasting lemma implies that $F$ is continuous on the entire interval $[0,1]$.
Finally, it is clear from the definition of $F$ that $F(x,0) = F_0(x,0) = f_0(x)$ and $F(x,1) = F_1(x,1) = f_2(x)$, so $F$ is a homotopy between $f_0$ and $f_2$.
Lemma 2. If $f_0, f_1: X \rightarrow Y$ are homotopic and $g_0, g_1: Y \rightarrow Z$ are homotopic then $g_0 f_0, g_1 f_1: X \rightarrow$ $Z$ are homotopic.
Proof. Let $F: X \times I \rightarrow Y$ be a homotopy between $f_0$ and $f_1$, and let $G: Y \times I \rightarrow Z$ be a homotopy between $g_0$ and $g_1$.
One proof: Now the composite $g_0 F: X \times I \rightarrow Z$ is a homotopy between $g_0 f_0$ and $g_0 f_1$, and the composite $G\left(f_1 \times \operatorname{id}_I\right): X \times I \rightarrow Z$ is a homotopy between $g_0 f_1$ and $g_1 f_1$. By lemma 1 , $g_0 f_0 \simeq g_1 f_1$.
The proof is correct.
To elaborate, the proof shows that the composite maps $g_0 f_0$ and $g_1 f_1$ are homotopic by constructing homotopies between them. In fact a single combined homotopy also works: define $H: X \times I \rightarrow Z$ by

$$H(x, t) = G(F(x, t), t).$$

As a composite of continuous maps, $H$ is continuous, and

$$H(x, 0) = G(F(x, 0), 0) = g_0(f_0(x)), \qquad H(x, 1) = G(F(x, 1), 1) = g_1(f_1(x)),$$

so $H$ is a homotopy between $g_0 f_0$ and $g_1 f_1$ directly. Intuitively, $H$ simultaneously slides $f_0$ toward $f_1$ along $F$ and $g_0$ toward $g_1$ along $G$.

The two-step argument quoted above achieves the same thing in stages: the map $g_0 F$ deforms $g_0 f_0$ into $g_0 f_1$ (apply $g_0$ to the deformation of $f_0$), and the map $G(f_1 \times \operatorname{id}_I)$ deforms $g_0 f_1$ into $g_1 f_1$ (deform $g_0$ into $g_1$ while holding $f_1$ fixed). The last step of the proof then uses Lemma 1 (transitivity of homotopy) to conclude that $g_0 f_0 \simeq g_1 f_1$.
## Textbooks
• An Introduction to Stochastic Modeling, Fourth Edition by Pinsky and Karlin (freely
available through the university library here)
• Essentials of Stochastic Processes, Third Edition by Durrett (freely available through
the university library here)
To reiterate, the textbooks are freely available through the university library. Note that
you must be connected to the university Wi-Fi or VPN to access the ebooks from the library
links. Furthermore, the library links take some time to populate, so do not be alarmed if
the webpage looks bare for a few seconds.
Statistics-lab™ can provide you with tutoring, assignment-writing and exam-help services for the luc.edu MATH443 Algebraic Topology course! Look for Statistics-lab™. Statistics-lab™ will safeguard your study-abroad journey.
|
# Area between two curves measurable?
How do I show that for $f,g\in C[a,b]$ the set $A=\{(x,y)\in \mathbb{R}^2:a\leq x\leq b, f(x)\leq y\leq g(x)\}$ is Lebesgue-measurable?
• – tilper Dec 22 '16 at 16:36
Since $f$ and $g$ are continuous, $A$ is a closed subset of $\mathbb R^2,$ which gives the assertion: on the closed strip $[a,b]\times\mathbb R$ the functions $(x,y)\mapsto y-f(x)$ and $(x,y)\mapsto g(x)-y$ are continuous, so $A$ is the intersection of their preimages of the closed set $[0,\infty)$, hence closed in $\mathbb R^2$, and every closed set is Lebesgue-measurable.
|
# Palmgren-Miner linear damage model¶
The function palmgren_miner_linear_damage uses the Palmgren-Miner linear damage hypothesis to find the outputs listed below.
Inputs:
• rated_life - an array or list of how long the component will last at a given stress level
• time_at_stress - an array or list of how long the component is subjected to the stress that gives the rated_life
• stress - what stress the component is subjected to. Not used in the calculation but is required for printing the output.
Note
1. Ensure that the time_at_stress and rated_life are in the same units. The answer will also be in those units.
2. The number of items in each input must be the same.
Outputs:
• Fraction of life consumed per load cycle
• Service life of the component
• Fraction of damage caused at each stress level
In the following example, we consider a scenario in which ball bearings fail after 50000 hrs, 6500 hrs, and 1000 hrs, after being subjected to a stress of 1kN, 2kN, and 4kN respectively. If each load cycle involves 40 mins at 1kN, 15 mins at 2kN, and 5 mins at 4kN, how long will the ball bearings last?
from reliability.PoF import palmgren_miner_linear_damage
palmgren_miner_linear_damage(rated_life=[50000,6500,1000], time_at_stress=[40/60, 15/60, 5/60], stress=[1, 2, 4])
'''
Palmgren-Miner Linear Damage Model results:
Each load cycle uses 0.01351 % of the components life.
The service life of the component is 7400.37951 load cycles.
The amount of damage caused at each stress level is:
Stress = 1 , Damage fraction = 9.86717 %.
Stress = 2 , Damage fraction = 28.463 %.
Stress = 4 , Damage fraction = 61.66983 %.
'''
References:
• Probabilistic Physics of Failure Approach to Reliability (2017), by M. Modarres, M. Amiri, and C. Jackson. pp. 33-37
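For readers who want to see the arithmetic behind these outputs, Miner's rule says the damage accumulated per load cycle is $D = \sum_i t_i / L_i$ and the service life is $1/D$ cycles. A minimal hand computation in Python (a sketch, independent of the reliability library):

# Miner's rule by hand: damage per load cycle D = sum(t_i / L_i)
rated_life = [50000, 6500, 1000]            # hours at 1, 2, 4 kN
time_at_stress = [40/60, 15/60, 5/60]       # hours per load cycle
D = sum(t / L for t, L in zip(time_at_stress, rated_life))
print(100 * D)                              # ~0.01351 % of life used per cycle
print(1 / D)                                # ~7400.4 load cycles of service life
for t, L in zip(time_at_stress, rated_life):
    print(100 * (t / L) / D)                # damage fractions ~9.87, 28.46, 61.67 %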
|
# Show that $\sum_{n=1}^{\infty}\frac{\log (1+1/n)}{n}$ converges
I need some help here. I can show that if the Cauchy condensation test holds, then I get two separate series, one which converges by the comparison test, and one that converges by the ratio test. But I don't even know if this is a valid argument, since I'm not sure how to even check that the terms are decreasing. So I don't think this approach works.
I see that the same question has been asked here, but I'm not really satisfied with the answers. Are there simple ways to determine convergence, with something like the comparison test?
• Both Mark Viola's answer and DeepSea's answer in the link provide a solution using the comparison test. Also, limit comparison test with $\frac{1}{n^2}$ is valid. – Sangchul Lee Oct 12 '18 at 4:05
• @Lee But what if I'm in a test-taking scenario and don't know how, or don't have time, to prove that $\log (x+1) < x$ for $x>1$? – Wesley Oct 12 '18 at 4:09
• Then I recommend using limit comparison test together with the knowledge that $\lim_{x\to0} \frac{\log(1+x)}{x} = 1$. Here, the statement of limit comparison test is as follows: Let $(a_n)$ and $(b_n)$ be sequences of positive real numbers such that $a_n/b_n$ converges to a number in $(0, \infty)$. Then $\sum a_n$ converges if and only if $\sum b_n$ converges. – Sangchul Lee Oct 12 '18 at 4:11
• $\displaystyle{\ln\left(1 + 1/n\right) \over n} \sim {1 \over n^{2}}$ as $\displaystyle n \to \infty$. So ?. – Felix Marin Oct 12 '18 at 20:47
By summation by parts
$$\sum_{n=1}^{N}\frac{\log(1+1/n)}{n}=\frac{\log(N+1)}{N}+\sum_{n=1}^{N-1}\frac{\log(n+1)}{n(n+1)}$$ and by the Cauchy-Schwarz inequality $$\log(n+1)\leq \sqrt{n+1}-\frac{1}{\sqrt{n+1}}$$, such that the rearranged/decelerated series $$\sum_{n\geq 1}\frac{\log(n+1)}{n(n+1)}$$ is blatantly absolutely convergent.
By Frullani's theorem we also have the integral representation $$\sum_{n\geq 1}\frac{\log(n+1)}{n(n+1)}=\int_{0}^{+\infty}\frac{(e^{-x}-1)\log(1-e^{-x})}{x}\,dx=\int_{0}^{1}\frac{(1-x)\log(1-x)}{x\log x}\,dx.$$
$$\log(1+x)\le x$$
implies
$$\sum_{n=1}^\infty\frac{\log\left(1+\dfrac1n\right)}{n}\le \sum_{n=1}^\infty\frac1{n^2}$$
is simple and based on the comparison test. (The inequality itself follows from concavity: the graph of $\log(1+x)$ lies below its tangent line $y=x$ at $x=0$.)
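As a quick sanity check (not a proof), the partial sums can be computed numerically; they stabilize, consistent with convergence by comparison with $\sum 1/n^2$:

import math
print(sum(math.log(1 + 1/n) / n for n in range(1, 10**6)))  # partial sums level off near 1.26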
|
# concatenate -- join strings
## Description
concatenate(s,t,...,u) yields the concatenation of the strings s,t,...,u.
The arguments may also be lists or sequences of strings and symbols, in which case they are concatenated recursively. Additionally, an integer may be used to represent a number of spaces, and null will be represented by the empty string.
i1 : concatenate {"a",("s",3,"d",),"f"}

o1 = as   df
• String -- the class of all strings
## Ways to use concatenate :
• "concatenate(BasicList)"
• "concatenate(Nothing)"
• "concatenate(String)"
• "concatenate(Symbol)"
• "concatenate(ZZ)"
## For the programmer
The object concatenate is a compiled function.
|
# Plot with Legends and Markers [duplicate]
Possible Duplicate:
Creating legends for plots with multiple lines?
I must say I have seen a couple of similar questions but they were not addressing my specific problem.
Level one: I'm trying to combine two plots, a ListPlot of "data" say, and a plot of the "theory". Level two: the ListPlot has a legend so I should use ShowLegend[data,theory,....], ok fine. The problem now comes at level three: quite obviously the ListPlot ("data") has markers (disks, boxes, and diamonds which are the Automatic ones). How can I reproduce those exact markers with the ShowLegend directive?
István Zachar's answer to this question explicitly mentions that producing a legend for a plot with markers is more complex.
Any suggestions?
## marked as duplicate by rm -rf♦Oct 17 '12 at 5:44
• Please see this question, where your questions have been addressed. First off, PlotLegends is a terrible package and using it will only result in untold misery and headache. Jens has developed an excellent legending system, which you can get from the linked question. It also handles cases with markers easily. Please give that a try. – rm -rf Oct 17 '12 at 5:46
|
Starting from rest, you ride your bike with a constant acceleration and reach a speed of 15.0 miles/hr in a time of 20.0 s and thereafter you maintain that constant speed. The wheel has a radius R=0.300 m.
(a) Calculate the angular acceleration of the wheel.
(b) Calculate the final angular speed of the wheel (reached at the end of the acceleration period).
(c) How many revolutions does the wheel make in that time?
(d) When you are riding with a constant speed, does a piece of gum which got stuck to the rim of the wheel have a centripetal acceleration? Explain. If yes, calculate this acceleration.
(e) When you are riding with a constant speed, does a piece of gum which got stuck to the rim of the wheel have an angular acceleration? Explain. If yes, calculate this acceleration.
(f) During the acceleration period, what kind of acceleration(s) does the piece of gum have? Explain.
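No worked solution is given here; a quick numeric sketch of parts (a)-(c) in Python, using the stated values v = 15.0 mi/h, t = 20.0 s and R = 0.300 m:

import math

v = 15.0 * 1609.344 / 3600   # final speed in m/s, about 6.71
R, t = 0.300, 20.0
omega = v / R                # (b) final angular speed, about 22.4 rad/s
alpha = omega / t            # (a) angular acceleration, about 1.12 rad/s^2
theta = 0.5 * alpha * t**2   # angle swept during the acceleration period
print(alpha, omega, theta / (2 * math.pi))  # (c) about 35.6 revolutions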
|
Determine if the following are in proportion. (a) 15, 45, 40, 120 (b) 33, 121, 9, 96 (c) 24, 28, 36, 48 (d) 32, 48, 70, 210 (e) 4, 6, 8, 12 (f) 33, 44, 75, 100.
To do:
We have to determine whether the given numbers are in proportion.
Solution:
We know that,
The proportion is defined as the equality of two ratios.
If $p,q,r,s$ are in proportion then,
$\frac{p}{q} = \frac{r}{s}$.
(a) The ratio of the first two numbers $=\frac{15}{45}$
$=\frac{1}{3}$
The ratio of the second two numbers $=\frac{40}{120}$
$=\frac{1}{3}$
Since,
$\frac{15}{45} = \frac{1}{3} = \frac{40}{120}$
The given numbers are in proportion.
(b) The ratio of the first two numbers $=\frac{33}{121}$
$=\frac{3}{11}$
The ratio of the second two numbers $=\frac{9}{96}$
$=\frac{3}{32}$
Since,
$\frac{33}{121} ≠ \frac{9}{96}$
The given numbers are not in proportion.
(c) The ratio of the first two numbers $=\frac{24}{28}$
$=\frac{6}{7}$
The ratio of the second two numbers $=\frac{36}{48}$
$=\frac{3}{4}$
Since,
$\frac{24}{28} ≠ \frac{36}{48}$
The given numbers are not in proportion.
(d) The ratio of the first two numbers $=\frac{32}{48}$
$=\frac{2}{3}$
The ratio of the second two numbers $=\frac{70}{210}$
$=\frac{1}{3}$
Since,
$\frac{32}{48} ≠ \frac{70}{210}$
The given numbers are not in proportion.
(e) The ratio of the first two numbers $=\frac{4}{6}$
$=\frac{2}{3}$
The ratio of the second two numbers $=\frac{8}{12}$
$=\frac{2}{3}$
Since,
$\frac{4}{6}=\frac{2}{3}=\frac{8}{12}$
The given numbers are in proportion.
(f) The ratio of the first two numbers $=\frac{33}{44}$
$=\frac{3}{4}$
The ratio of the second two numbers $=\frac{75}{100}$
$=\frac{3}{4}$
Since,
$\frac{33}{44}=\frac{3}{4}=\frac{75}{100}$
The given numbers are in proportion.
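The rule $\frac{p}{q} = \frac{r}{s}$ is easy to check mechanically; a small Python sketch using exact rational arithmetic (the function name is ours, not from the source):

from fractions import Fraction

def in_proportion(p, q, r, s):
    # p, q, r, s are in proportion exactly when p/q equals r/s
    return Fraction(p, q) == Fraction(r, s)

for quad in [(15, 45, 40, 120), (33, 121, 9, 96), (24, 28, 36, 48),
             (32, 48, 70, 210), (4, 6, 8, 12), (33, 44, 75, 100)]:
    print(quad, in_proportion(*quad))   # True, False, False, False, True, True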
|
Problem 1:
Analyze the data using the methods of this chapter (ANOVA).
· Based on the results, does it appear that there is sufficient evidence to support the claim that the drug lowers pulse rate?
· Are there any serious problems with the design of the experiment?
· Given that only males were involved in the experiment do the results also apply to females?
· The project manager compared the post treatment pulse rates to the mean pulse rate for adult males. Is there a better way to measure the drug’s effectiveness in lowering pulse rates?
· How would you characterize the overall validity of the experiment?
· Based on the available results, should the drug be approved?
· Write a brief report summarizing your findings.
Placebo Group | 10-mg Treatment Group | 20-mg Treatment Group
77 | 67 | 72
61 | 48 | 94
66 | 79 | 57
63 | 67 | 63
81 | 57 | 69
75 | 71 | 59
66 | 66 | 64
79 | 85 | 82
66 | 75 | 34
75 | 77 | 76
48 | 57 | 59
70 | 45 | 53
Solution: (a) We need to test the following hypotheses:
\begin{align} H_0 &: \mu_{P} = \mu_{10\,\text{mg}} = \mu_{20\,\text{mg}} \\ H_A &: \text{not all the means are equal} \end{align}
We perform an ANOVA analysis with the aid of SPSS. The results are shown below:
The ANOVA table shows that the F-statistic is F = 0.287, and the p-value is p = 0.752, which is greater than the significance level 0.05, so we fail to reject the null hypothesis of equal means. This means that we don't have enough evidence to claim that the means differ, at the 0.05 significance level.
(b) The design doesn't seem to have any serious problem, other than it was applied only to men. If the test was meant to be valid for both men and women, then the design is flawed.
In terms of the assumption for ANOVA, the homogeneity of variance is satisfied as shown in the following table:
The p-value of the test is p = 0.457, which means that we fail to reject the null hypothesis of equal variances.
(c) The results don’t apply to women, since only men participated in the experiment.
(d) For this type of experiment, it would have been convenient to add a fourth group of subjects with normal pulse rates. Then, applying ANOVA, we can determine if there is a significant difference between the groups. If there's a significant difference, we can apply a Post Hoc test to determine which group has a different mean.
(e) If the goal of the experiment is to study the effect of the treatments on the pulse rate of men, the validity doesn’t seem to be seriously flawed. Nevertheless, if the purpose of the test is to assess the effect of the treatments on the pulse rate in general, the validity may be low.
(f) The drug shouldn’t be approved because it doesn’t seem to have a significant effect on the pulse. Besides, since the experiment had only male participants, the conclusions are biased towards one gender.
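Although the solution above was produced in SPSS, the same one-way ANOVA can be cross-checked in Python with scipy. This is a sketch, assuming the flattened table lists subjects row by row (so the three columns are the three groups); the printed values should match the reported F = 0.287, p = 0.752 if that reading is right:

from scipy import stats

placebo = [77, 61, 66, 63, 81, 75, 66, 79, 66, 75, 48, 70]
mg10 = [67, 48, 79, 67, 57, 71, 66, 85, 75, 77, 57, 45]
mg20 = [72, 94, 57, 63, 69, 59, 64, 82, 34, 76, 59, 53]

f_stat, p_value = stats.f_oneway(placebo, mg10, mg20)
print(f_stat, p_value)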
Problem 2: The VP of HR at the large software company in your region is concerned that the company is not doing enough to recognize generational differences with their employees. She is concerned that younger generations of employees are less satisfied than older generations and, what’s more, that the age-old strategy of paying employees more to increase their satisfaction isn’t working as well with the younger group.
You have been hired by the VP to dig deeper into these issues of generational differences, job satisfaction, and the satisfaction involved with higher income. Working with the dataset labeled “P 6_generational-job-sat”, analyze and interpret the main effects and interaction effects of generation (age category or ‘age_bin’, indicating this is a binary variable) and income (income category or ‘inccat’) on job satisfaction (‘jobsat’). (*Note: both independent variables are measured as categorical variables for all analysis purposes).
• Statement of what analysis you will use to analyze these data
• Acknowledgment of key assumptions of the analysis you use
• Statement of null and alternative hypotheses
• All test statistics and p-values relevant to hypotheses
• Conclusion in terms of hypotheses
• Graphical illustration of the interaction effect (whether or not it is significant)
• Complete interpretation of results (referring to all effects tested), put into the original context of the VP’s concern
Solution: For the sake of the analysis, we will use Job Satisfaction as an interval variable in spite of the fact that it is defined as an ordinal variable. With that assumption in mind, a Two-Way ANOVA will be performed with JobSat as the dependent variable and Age_bin and IncCat as the factors.
We are interested in testing
\begin{align} H_0 &: \text{Income doesn't have an effect on Job Satisfaction} \\ H_A &: \text{Income has an effect on Job Satisfaction} \end{align}

\begin{align} H_0 &: \text{Age doesn't have an effect on Job Satisfaction} \\ H_A &: \text{Age has an effect on Job Satisfaction} \end{align}

and

\begin{align} H_0 &: \text{The interaction term is not significant} \\ H_A &: \text{The interaction term is significant} \end{align}
The assumption of homogeneity of variances is not met, p = 0.000.
The following ANOVA results are obtained:
Notice that the interaction term is significant, F = 10.516, p =0.000. Also, the main effects are significant. In fact IncCat is significant (F = 10.657, p = 0.000) and Age_bin is also significant (F = 176.498, p = 0.000).
Graphically:
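The SPSS figures did not survive the scrape. For reference, here is a sketch of how the same two-way ANOVA and the interaction plot could be reproduced in Python with statsmodels; the file name and DataFrame columns (jobsat, age_bin, inccat) are assumptions based on the problem statement:

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.graphics.factorplots import interaction_plot

df = pd.read_csv("generational-job-sat.csv")   # hypothetical file name

model = ols("jobsat ~ C(age_bin) * C(inccat)", data=df).fit()
print(sm.stats.anova_lm(model, typ=3))         # Type III tests, as SPSS reports by default

fig = interaction_plot(x=df["inccat"], trace=df["age_bin"], response=df["jobsat"])
fig.savefig("interaction_plot.png")            # profile plot illustrating the interaction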
|
1. ## Logarithmic Differentiation
Hi I am having trouble solving a couple of problems involving logarithmic differentiation.
1.Find if
2.If , find .
3.Let
Determine the derivative at the point .
4. If , find .
Any tips on how to do these? Thanks.
2. Originally Posted by KK88
Hi I am having trouble solving a couple of problems involving logarithmic differentiation.
1.Find if
2.If , find .
3.Let
Determine the derivative at the point .
4. If , find .
Any tips on how to do these? Thanks.
For the first question, use the chain rule by solving:
$\displaystyle \frac{\mbox{d log(u)}}{\mbox{du}} \frac{\mbox{du}}{\mbox{dx}}$ where $\displaystyle u = \sqrt{\frac{4x+8}{5x+7}}$
and differentiate..
For further assistance, show your work on this and the other problems on where you are getting stuck.
3. Originally Posted by KK88
Hi I am having trouble solving a couple of problems involving logarithmic differentiation.
1.Find if
2.If , find .
3.Let
Determine the derivative at the point .
4. If , find .
Any tips on how to do these? Thanks.
2:
The first part of this is easy, but the derivative of $\displaystyle x^x$ is not, so let us go through that
$\displaystyle y = x^x$
$\displaystyle \ln y = x\ln x$
$\displaystyle \frac{1}{y}\, y' = \ln x + 1$
$\displaystyle y' = x^x \ln x + x^x$
Thus,
$\displaystyle F(x) = 4\sin x + 3x^x$
$\displaystyle F'(x) = 4\cos x + 3x^x(\ln x + 1)$
3:
For $\displaystyle y= \ln(x^2 + y^2)$
$\displaystyle y' = \frac{1}{x^2 + y^2}\,(x^2+y^2)' = \frac{1}{x^2 + y^2}\,(2x + 2y\, y')$
Bring $y'$ over to one side and factor it out,
$\displaystyle y'\left[ 1 - \frac{2y}{x^2 + y^2} \right] = \frac{2x}{x^2 + y^2}$
$\displaystyle y' = \frac { \frac{2x}{x^2 + y^2} }{ 1 - \frac{2y}{x^2 + y^2} }$
Sub in the point (1,0) to find the value.
4:
is a repeat of 2
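The $x^x$ derivative worked out above can also be verified symbolically; a short check with SymPy (a sketch):

import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.diff(x**x, x))   # x**x*(log(x) + 1), matching the logarithmic differentiation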
4. Originally Posted by KK88
Hi I am having trouble solving a couple of problems involving logarithmic differentiation.
1.Find if
2.If , find .
3.Let
Determine the derivative at the point .
4. If , find .
Any tips on how to do these? Thanks.
Here is abother way to approach the first problem. Use your log rules to seperate the the experession.
$\displaystyle y = \frac{1}{2}\bigg(\ln{(4x+8)} - \ln{(5x-7)}\bigg)$
$\displaystyle y' = \frac{1}{2}\bigg(\frac{4}{4x+8}-\frac{5}{5x-7}\bigg)$
now just simplify
|
# Math Help - Hey can u differentiate the following equation for me plz...
1. ## Hey can u differentiate the following equation for me plz...
3x^2=y^3 - 2x^3
2. ## Re: Hey can u differentiate the following equation for me plz...
Originally Posted by Chad4087
3x^2=y^3 - 2x^3
Yes, we can
3. ## Re: Hey can u differentiate the following equation for me plz...
Actually
1) This is in the wrong place. It has nothing to do with "differential equations".
2) The question makes no sense. You do not differentiate "equations"; you differentiate functions with respect to a variable. What is the function and what is the variable here?
|
# Quadtree decomposition of Discrete Wavelet Transform using bio4.4/CDF wavelet
My problem is pretty basic but fundamental. It relates to the way the discrete wavelet transform behaves for biorthogonal 4.4 (bior4.4) or CDF wavelets. When using most wavelets (e.g., CDF 9/7, bior4.4, or higher-order Daubechies wavelets) the sizes of the returned approximation and detail matrices are not powers of two. For my application (Embedded Zerotree compression), this presents a problem because I want to construct a quadtree decomposition of the transformed image, which requires all decompositions (LL, LH, HL and HH) to be of size a power of two. For example, consider the Mathematica code:
data = RandomReal[{0, 1}, {16, 16}];
dwd = DiscreteWaveletTransform[data, CDFWavelet[]];
dwd["Dimensions"]
(*output*)
{{0} -> {12, 12}, {1} -> {12, 12}, {2} -> {12, 12}, {3} -> {12,
12}, {0, 0} -> {10, 10}, {0, 1} -> {10, 10}, {0, 2} -> {10,
10}, {0, 3} -> {10, 10}, {0, 0, 0} -> {9, 9}, {0, 0, 1} -> {9,
9}, {0, 0, 2} -> {9, 9}, {0, 0, 3} -> {9, 9}, {0, 0, 0, 0} -> {9,
9}, {0, 0, 0, 1} -> {9, 9}, {0, 0, 0, 2} -> {9, 9}, {0, 0, 0,
3} -> {9, 9}}
Here the dimensions of various decomposition levels is given as rules, e.g., $\{1\}\rightarrow \{12, 12\}$ means that the first LH decomposition matrix is of size $12 \times 12$.
What should I do? Should I simply truncate the matrices to nearest 2's power? or something else.
• Do you have the possibility to test the code library.wolfram.com/infocenter/Demos/447 in 1D and check whether the same phenomenon happens? – Laurent Duval Aug 21 '17 at 14:19
• It would seem from the documentation that the returned decomposition signals (approximation and detail) are of power-of-two sizes. But it would be a lot of work to use these for 2D transforms. Also, it is unclear how to specify filters for the CDFWavelet for the functions given (there is the option of specifying the low-pass signal only; the high-pass is automatically derived from it). After a lot of head-scratching, I think it's best to use MATLAB with its dwtmode set to 'per'. I plan to use MATLink to connect MATLAB with the rest of the code in Mathematica at runtime. – Iconoclast Aug 21 '17 at 15:44
• It wouldn't be such a lot of work, I believe. One DWT level on each row, then one on each column of the results, and so on on the low-pass/low-pass subband for the others levels – Laurent Duval Aug 21 '17 at 16:04
First, for compression, it is advisable neither to truncate the data nor to pad all the way to the next power of two. It is generally better to expand the original image; after all, this is what JPEG does with DCT padding. Second, you can expand the image to the next integer divisible by $2^L$, where $L$ is the number of wavelet levels. For standard images, $L=4,5,6$ is sufficient, and this requires far less expansion than "the next power of two". If you expand the image smoothly (half-sample or whole-sample symmetry or antisymmetry, depending on the image content), the extension gets packed easily into the low-pass part of the wavelets, and does not cost a lot, provided the padded size stays reasonably close to the original image size.
• I do apologize, I probably answered too fast. Are your image $16 \times 16$ ? Such a small size can enter in conflict with filter length: 9/7 is about half the size. – Laurent Duval Aug 20 '17 at 19:31
• My image is of large size $(512 \times 512)$. The smallest image size that the 9/7 filter can handle is $9 \times 9$. The problem is not image size though. I used a small size just to illustrate. The problem is the size of wavelet decompositions. As an example the decompositions for $512 \times 512$ pixel image are $260 \times 260, 134 \times 134, 71 \times 71, 40 \times 40, 24 \times 24, 16 \times 16, 12 \times 12, 10 \times 10$ and $9 \times 9$ How do i construct a zero tree in such a scenario. – Iconoclast Aug 20 '17 at 19:41
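The padding rule suggested in the answer (expand to the next integer divisible by $2^L$) is straightforward to compute; a small Python sketch:

import math

def padded_size(n, levels):
    # smallest size >= n divisible by 2**levels, so each DWT level halves evenly
    block = 2 ** levels
    return block * math.ceil(n / block)

print(padded_size(512, 5))   # 512, already divisible
print(padded_size(500, 5))   # 512
print(padded_size(600, 5))   # 608, far less expansion than the next power of two (1024)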
|
Dragonflies are often confused with damselflies as they are closely related. However, dragonflies have much larger eyes, which take up most of their head. Both have two sets of wings, but the hind wings of a dragonfly are larger and are extended out like aeroplane wings, even when resting. Dragonflies that you might see around Wellesley Woodlands include the Keeled Skimmer and Emperor dragonfly.
Did you know: Dragonflies are thought to have been on the planet for 300 million years, and once had a wingspan of 2 1/2 feet.
|
## Mathematicians Of The Day
### 3rd March
#### Quotation of the day
##### From Paul Halmos
Computers are important, but not to mathematics.
|
## How Much Time Do We Need?
Students consider the amount of time that space travelers need to travel to the four terrestrial planets. Students also think about what kinds of events might occur on Earth while the space travelers are on their journey.
Getting Started
The terrestrial planets are the four innermost planets in the solar system: Mercury, Venus, Earth, and Mars. They are called terrestrial because they have a rocky, compact surface like the Earth's. Jupiter, Saturn, Uranus, and Neptune are known as Jovian, or Jupiter-like, planets because they are gigantic planets when compared with Earth and have a gaseous nature like Jupiter's. Jovian planets are sometimes called the gas giants. Pluto is not a member of either group. Its composition is unknown, but it is probably composed mostly of rock, ice, and frozen gases.
Developing the Activity
Present the following scenario to students:
Since humankind wants to know more about each of our planetary neighbors, we need to plan our travel to the planets. Select one terrestrial planet and one Jovian planet. Plan trips to the two planets and to Pluto. Describe the speed of your spacecraft as well as the time required to reach the planet, stay one Earth year to explore it, and return to Earth. You may assume that advances will be made in the development of spacecraft and that speeds up to 50,000 miles per hour will be possible.
As a class, you will need to determine a Launch Day for all missions. Based upon that Launch Day, on what date will you arrive at the targeted planet? On what date will you return from each mission?
These questions will require that students convert such time intervals as 10.2 years into years and days. When the conversion results in a part of a day, round the value to the nearest day. Students may not be familiar with thinking about a date in the year as having an ordinal value in relation to the year; for example, 1 July is the 183rd day of the year. Locate a reference calendar where the ordinal value is given along with the date. Remind students they may not necessarily be beginning with 1 January, however. Launch could be on any day of the year. Since the year 2008 is a leap year, students need to use 366 days for that year as well as other leap years spent traveling to other planets.
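Teachers who want to check students' calendar arithmetic can lean on a date library, which steps through real calendar dates (including leap years) automatically; a sketch in Python, with a hypothetical launch date and the example trip length from the text:

from datetime import date, timedelta

launch = date(2008, 1, 1)               # hypothetical Launch Day
trip_years = 10.2                       # example trip duration from the text

trip_days = round(trip_years * 365.25)  # 365.25 averages in the leap days
arrival = launch + timedelta(days=trip_days)
print(trip_days, arrival)               # timedelta handles the calendar for us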
Each member of the mission team should write about one of the trips to a planet. The description should include the launch date, the destination, the speed of travel, the time to reach the planet, the date of arrival, and the date the crew returned to Earth.
To this point in the lesson, students have been considering space travel from the perspective of what happens to the space traveler on the journey. But while the space travelers are visiting distant planets, life continues on its usual course at home on planet Earth. Students should be familiar with this aspect of all travel from their previous experiences. While they are away from home, life goes on; on their return, they need time to catch up on all the news and events. Occasionally, an event occurs while they are gone that has a profound effect on them when they return.
Extending the Activity
Return the students to their mission teams. Tell them to imagine that their team was actually sent on a mission to their selected terrestrial planet. They know their launch data and have computed the duration of their trip and the date of their return. Although NASA kept the crew members posted on the news, they have missed many important events, both personal and public.
Each mission team serves as the ground crew for a space-traveling counterpart. The ground crew's task is to debrief the astronauts on their return to Earth.
Each ground crew makes a list of important events that the astronaut crew should know about on its return. Of course, the names and some of the events will be fictitious, but they should be plausible for the time that passed on the journey. Be certain to include the results of regularly occurring events. Such personal events as graduations for family members should be mentioned, too. Sports events, such as the Olympics, the Super Bowl, and the World Series, may be important events for some students.
Closing the Activity
Students present the briefings they have written for the returning astronauts. These reports could take many forms. Some mission teams may make time lines. Others may present their briefing as a newscast. They might use technology to support their presentation. Another team member may make a scrapbook. Do not place limits on their creativity.
All students should take time to reflect on the mathematics of this lesson. The calendar, the time conversions, the distances in space, and the speeds required to complete space travel are all important concepts for students to think about as they construct their understanding of the world and the mathematics that describes it.
• Paper (to be used as a journal)
### Space Shuttle
6-8
Students consider the amount of time that space travelers must spend on their journey. Students improve their concept of time and distance, while at the same time learn more about the solar system.
### Learning Objectives
Students will:
• Calculate the amount of time needed to travel to the four terrestrial planets
• Convert time intervals, such as 10.2 years, into years and days
• Reflect upon events that would occur on Earth while the space travelers are on their journey
### Common Core State Standards – Mathematics
Understand the concept of a unit rate a/b associated with a ratio a:b with b ≠ 0, and use rate language in the context of a ratio relationship. For example, "This recipe has a ratio of 3 cups of flour to 4 cups of sugar, so there is 3/4 cup of flour for each cup of sugar." "We paid $75 for 15 hamburgers, which is a rate of $5 per hamburger."
|
# Math Help - rectangular garden problem
1. ## rectangular garden problem
For some reason I just can't visualize this (and so I can't solve it). But I do know that the answer should come out to 8.
Your neighbor is planning to fence a 10ft by 15ft rectangular plot of ground to use as a garden. She intends to plant a 1ft wide border of flowers along the inside of the entire perimeter. The rectangular section surrounded by this border will be planted with vegetables in 11-foot-long rows parallel to the longer sides. Now, when your neighbor plants the vegetables, she wants the center lines of adjacent rows to be at least 10 inches apart. She also wants the center lines of the outermost rows to be at least 10 inches from the inner edge of the flower border. According to these planting restrictions, what is the maximum number of 11-foot-long rows of vegetables that could be planted within this garden plot?
The vegetables will be planted in an 8ft by 13ft rectangle...OK?
The width of 8 feet = 96 inches...OK?
96 inches accomodates 9 "10 inches" widths...OK?
9 "10 inches" widths accomodates 8 rows...OK?
.................................edge
10
.................................row
10
.................................row
10
.................................row
10
.................................row
10
.................................row
10
.................................row
10
.................................row
10
.................................row
16
.................................edge
Hope that's sufficient for you to visualize this...
3. I'm not sure how you got the 8ft by 13ft dimension? How can we conclude this? I guess I just have trouble still seeing the rows with the 10 inches and the fitting into the original dimensions (minus the 1ft border)--the orientation is confusing still..
4. Hello
I'll convert to inches: 10 * 12 = 120 inches is the full width of the plot.
I would assume that the two outermost veg rows are placed as near as possible to the flower border, and then calculate the distance between their center lines:
120 - (2 * flower-border width = 24) - (2 * minimum distance from the border to a row's center line = 20) = 76 inches.
Dividing 76 by 10 gives 7 full gaps between adjacent rows, so there are 7 + 1 = 8 rows.
I think the previous poster will help you to visualise this much better.
8*13 is what you're left with when you take the flower bed off. The 13 is not relevant to the problem.
6. Ahh that does make sense now, and your sketch too Wilmer. Thanks!
7. Originally Posted by dannyc
i'm not sure how you got the 8ft by 13ft dimension?
Code:
..................15.......................
. 1..............13..................... 1.
. . . .
. . 8. .10
. . . .
. ....................................... .
...........................................
ok?
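A quick arithmetic check of the 8-row answer (a Python sketch of the reasoning in the earlier posts):

inner_width_in = (10 - 2) * 12      # 10 ft plot minus 1 ft border each side: 96 in
usable = inner_width_in - 2 * 10    # 10 in clearance to each border edge: 76 in
rows = usable // 10 + 1             # 7 full 10-inch gaps, plus the first row
print(rows)                         # 8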
|
# POJ 1269 Intersecting Lines (do the lines intersect, run parallel, or coincide? find the intersection point)
Intersecting Lines
Time Limit: 1000MS Memory Limit: 10000K Total Submissions: 8342 Accepted: 3789
Description
We all know that a pair of distinct points on a plane defines a line and that a pair of lines on a plane will intersect in one of three ways: 1) no intersection because they are parallel, 2) intersect in a line because they are on top of one another (i.e. they are the same line), 3) intersect in a point. In this problem you will use your algebraic knowledge to create a program that determines how and where two lines intersect.
Your program will repeatedly read in four points that define two lines in the x-y plane and determine how and where the lines intersect. All numbers required by this problem will be reasonable, say between -1000 and 1000.
Input
The first line contains an integer N between 1 and 10 describing how many pairs of lines are represented. The next N lines will each contain eight integers. These integers represent the coordinates of four points on the plane in the order x1y1x2y2x3y3x4y4. Thus each of these input lines represents two lines on the plane: the line through (x1,y1) and (x2,y2) and the line through (x3,y3) and (x4,y4). The point (x1,y1) is always distinct from (x2,y2). Likewise with (x3,y3) and (x4,y4).
Output
There should be N+2 lines of output. The first line of output should read INTERSECTING LINES OUTPUT. There will then be one line of output for each pair of planar lines represented by a line of input, describing how the lines intersect: none, line, or point. If the intersection is a point then your program should output the x and y coordinates of the point, correct to two decimal places. The final line of output should read "END OF OUTPUT".
Sample Input
5
0 0 4 4 0 4 4 0
5 0 7 6 1 0 2 3
5 0 7 6 3 -6 4 -3
2 0 2 27 1 5 18 5
0 3 4 0 1 2 2 5
Sample Output
INTERSECTING LINES OUTPUT
POINT 2.00 2.00
NONE
LINE
POINT 2.00 5.00
POINT 1.07 2.20
END OF OUTPUT
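For reference, the parameter t computed in operator& below comes from writing the intersection point as $s + t\,(e - s)$ and requiring it to lie on the second line; with $\times$ denoting the 2D cross product implemented by operator^:

$$t = \frac{(s - b_s) \times (b_s - b_e)}{(s - e) \times (b_s - b_e)}.$$

The lines are parallel exactly when the denominator vanishes, and they coincide when additionally $(b_s - s) \times (b_e - s) = 0$, which is how the code distinguishes the three cases.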
#include <iostream>
#include <stdio.h>
#include <string.h>
#include <algorithm>
#include <queue>
#include <map>
#include <vector>
#include <set>
#include <string>
#include <math.h>
using namespace std;
const double eps = 1e-8;
int sgn(double x)
{
if(fabs(x) < eps)return 0;
if(x < 0)return -1;
else return 1;
}
struct Point
{
double x,y;
Point(){}
Point(double _x,double _y)
{
x = _x;y = _y;
}
Point operator -(const Point &b)const
{
return Point(x - b.x,y - b.y);
}
double operator ^(const Point &b)const // 2D cross product
{
return x*b.y - y*b.x;
}
double operator *(const Point &b)const // dot product
{
return x*b.x + y*b.y;
}
};
struct Line
{
Point s,e;
Line(){}
Line(Point _s,Point _e)
{
s = _s;e = _e;
}
pair<Point,int> operator &(const Line &b)const
{
Point res = s;
if(sgn((s-e)^(b.s-b.e)) == 0)
{
if(sgn((b.s-s)^(b.e-s)) == 0)
return make_pair(res,0); // the two lines coincide
else return make_pair(res,1); // the two lines are parallel
}
double t = ((s-b.s)^(b.s-b.e))/((s-e)^(b.s-b.e));
res.x += (e.x - s.x)*t;
res.y += (e.y - s.y)*t;
return make_pair(res,2); // the lines intersect at a single point
}
};
int main()
{
//freopen("in.txt","r",stdin);
//freopen("out.txt","w",stdout);
int T;
scanf("%d",&T);
double x1,y1,x2,y2,x3,y3,x4,y4;
printf("INTERSECTING LINES OUTPUT\n");
while(T--)
{
scanf("%lf%lf%lf%lf%lf%lf%lf%lf",&x1,&y1,&x2,&y2,&x3,&y3,&x4,&y4);
Line line1 = Line(Point(x1,y1),Point(x2,y2));
Line line2 = Line(Point(x3,y3),Point(x4,y4));
pair<Point,int> ans = line1 & line2;
if( ans.second == 2)printf("POINT %.2lf %.2lf\n",ans.first.x,ans.first.y);
else if(ans.second == 0)printf("LINE\n");
else printf("NONE\n");
}
printf("END OF OUTPUT\n");
return 0;
}
|
Seminars
On the boundaries of the Arnold tongues
Kuntal Banerjee (HRI, Allahabad)
3:30 pm, Seminar Hall, 28-01-14

Abstract: For a family of two-parameter analytic circle diffeomorphisms, the space of parameters can be partitioned according to the rotation number of the circle diffeomorphism corresponding to each parameter. An Arnold tongue of rotation number $\theta$ is defined as the collection of parameters for which each circle diffeomorphism with those parameters has rotation number $\theta$. Some results on the shapes and boundaries of these Arnold tongues will be discussed.
|
Search results
Search: All articles in the CJM digital archive with keyword harmonic map
Results 1 - 3 of 3
1. CJM 2012 (vol 65 pp. 879)
Kawabe, Hiroko
A Space of Harmonic Maps from the Sphere into the Complex Projective Space. Guest-Ohnita and Crawford have shown the path-connectedness of the space of harmonic maps from $S^2$ to $\mathbf{C} P^n$ of a fixed degree and energy. It is well-known that the $\partial$ transform is defined on this space. In this paper, we will show that the space is decomposed into mutually disjoint connected subspaces on which $\partial$ is homeomorphic.
Keywords: harmonic maps, harmonic sequences, gluing. Categories: 58E20, 58D15.
2. CJM 1999 (vol 51 pp. 470)
Bshouty, D.; Hengartner, W.
Exterior Univalent Harmonic Mappings With Finite Blaschke Dilatations. In this article we characterize the univalent harmonic mappings from the exterior of the unit disk, $\Delta$, onto a simply connected domain $\Omega$ containing infinity and which are solutions of the system of elliptic partial differential equations $f_{\bar{z}}(z) = a(z)f_z(z)$, where the second dilatation function $a(z)$ is a finite Blaschke product. At the end of this article, we apply our results to nonparametric minimal surfaces having the property that the image of its Gauss map is the upper half-sphere covered once or twice.
Keywords: harmonic mappings, minimal surfaces. Categories: 30C55, 30C62, 49Q05.
3. CJM 1998 (vol 50 pp. 1119)
Anand, Christopher Kumar
Ward's solitons II: exact solutions. In a previous paper, we gave a correspondence between certain exact solutions to a $(2+1)$-dimensional integrable Chiral Model and holomorphic bundles on a compact surface. In this paper, we use algebraic geometry to derive a closed-form expression for those solutions and show by way of examples how the algebraic data which parametrise the solution space dictate the behaviour of the solutions.
Keywords: integrable system, chiral field, sigma model, soliton, monad, uniton, harmonic map. Category: 35Q51.
|
# >> operator in C++
## Recommended Posts
Hello, I am going through Bryan Turner's RoamSimple.cpp class and I've come across this line: // Compute X coordinate of center of Hypotenuse int centerX = (leftX + rightX) >>1; I'm assuming >>1 somehow means divide by 2, but how? Feel free to be as condescending in your response as you'd like if this is something everyone should know. I can take it Wait! It still would be great if you responded.... Thanks for all the help,
##### Share on other sites
it's called a bitwise shift operator.
it shifts all the bits to the right by the specified number of places.
like this:
210 >> 1
Before:
11010010
After:
01101001
Effectively, it divides by 2^n where n is on the right of the >>
##### Share on other sites
>> is the right shift operator.
Since you are working in binary everything is in powers of 2 so a right shift is the same as dividing by 2 just like how if I were to shift 100 to the right once, I would get 100 divided by 10 which is 10 (because our numerical system is base 10 ).
However, there is a flaw in the example you provided. Because you are using the type int, and not unsigned int, the leftmost bit is used for sign. This means that if you had a negative value, then that 1 in the leftmost bit would be shifted and you wouldn't get the number divided by two, you'd actually get the number divided by 2 plus 64.
EDIT: I'm always late!
--------------------
Matthew Calabrese
Realtime 3D Orchestra:
Programmer, Composer,
and 3D Artist/Animator
"I can see the music..."
[edited by - Matt Calabrese on March 19, 2002 7:14:49 PM]
##### Share on other sites
thanks for your help again, oh great fountain of knowledge.
one of these days I am going to create a loop of beautiful female voices singing in Latin and play it while I'm programming to keep the feeling that angels are assisting me.
Thats a bit of an overstatement I guess, but I still want to make the loop.
##### Share on other sites
quote:
Original post by Matt Calabrese
However, there is a flaw in the example you provided. Because you are using the type int, and not unsigned int, the leftmost bit is used for sign. This means that if you had a negative value, then that 1 in the leftmost bit would be shifted and you wouldn't get the number divided by two, you'd actually get the number divided by 2 plus 64.
-8 >> 1 == -4
8 >> 1 == 4
>> works fine on negative numbers. (unless I'm special and it only works for me. )
-scott
##### Share on other sites
hmm, that's weird! Is the right shift operator overloaded for signed datatypes to not shift the sign bit!? If it literally did right shift every bit, the outcome would be what I mentioned, but it must not.
--------------------
Matthew Calabrese
Realtime 3D Orchestra:
Programmer, Composer,
and 3D Artist/Animator
"I can see the music..."
##### Share on other sites
quote:
If it literally did right shift every bit, the outcome would be what I mentioned, but it must not.
Remember that signed values are stored in two's complement.
To represent a negative number, you take the absolute value of that number and invert all the bits, and then add 1 to it.
So, if you had -8 (00001000), you'd invert the bits to get 11110111. Then you'd add one to get 11111000. So, -8 in binary is 11111000, and 11111000 >> 1 would be 11111100 (-4).
[edited by - SilentCoder on March 19, 2002 8:14:07 PM]
##### Share on other sites
Thanks, I haven't really taken any classes so I've been learning C++ gradually over the last 4 months from just little bits and pieces of tutorials everywhere. I remembered reading that the first bit was the sign bit, but I didn't know the rest was inverted. Though that does make a lot of sense now that I think about it -- since it's all inverted that means that operations can be handled bitwise in the the same way wether negative or positive. Sorry for that! *crawls back into his shell*
--
On a side note, I just got an email response from the Yale College Composers' Group Composition Competition saying they recieved my song! Everyone head on over
Here and pick up a copy of my submission! I'll know by the end of March if I won. Very general contest rules -- a song using one or any combination of 1 piano, 1 violin, 1 viola, 1 cello, 1 acoustic guitar, 1 vocal part.
Wish me luck -- this is also one of the songs I'm setting up for Realtime 3D Orchestra as a demo. Imagine watching a 3D piano with keys that go down corresponding to the notes that you can walk around and listen to in realtime. Should be done by the beginning of the summer.
Sorry for being off topic! I'll start another topic in "My announcements" for replies.
--------------------
Matthew Calabrese
Realtime 3D Orchestra:
Programmer, Composer,
and 3D Artist/Animator
"I can see the music..."
[edited by - Matt Calabrese on March 19, 2002 8:28:10 PM]
##### Share on other sites
The >> operator acts differently on signed and unsigned data types. (Strictly speaking, right-shifting a negative signed value is implementation-defined in C and C++; most compilers perform an arithmetic shift that preserves the sign bit, which is what the example below demonstrates.)
#include <stdio.h>

int main(void) {
    /* A named union instance is used so this compiles in standard C;
       the original relied on a compiler extension (anonymous union). */
    union {
        unsigned int unsigned_int;
        signed int signed_int;
    } u;

    printf("unsigned shift:\n");
    u.signed_int = -8;
    u.unsigned_int >>= 1;   /* logical shift: a 0 is shifted into the sign bit */
    printf("%i\n", u.signed_int);

    printf("signed shift:\n");
    u.signed_int = -8;
    u.signed_int >>= 1;     /* arithmetic shift on most compilers: sign preserved */
    printf("%i\n", u.signed_int);

    return 0;
}
|
# Rule with independent random variables and conditional expectations
I want to use a rule for conditional expectation I found in (German) wikipedia, not in my script/textbook of probability theory, I guess it should be simple and follow more or less straight from the general definition (I want have a proof to be sure that I don't build up on a wikipedia mistake)
Let X be independent of Z and of Y (XY integrable and X,Y,Z random variables) $$E(XY|Z) = E(X) E(Y|Z)$$
My idea: Showing that the rhs meets the conditions of the general definition of $E(XY|Z)$, that is (i) it should be $\sigma(Z)$-measurable, check. (ii) $E\left( E(X) E(Y|Z) 1_A \right) \stackrel{!}{=} E(XY 1_A)$ for all $A \in \sigma(Z)$. Now the lhs $= E(X) E(E(Y|Z)1_A) = E(X)E(Y 1_A)$ (according to (ii) of the definition of $E(Y|Z)$, for all $A\in \sigma(Z)$)
$= E(XY1_A)$ as wished (X, Y are independent).
But, in this proof I did not use that X,Z are independent, so it would follow as well $E(XY|Z)=E(Y)E(X|Z)=E(Y)E(X)=E(XY)$ which shouldn't be this way.
Maybe it's all much simpler and I have just the wrong point of view on it. Q: Does anybody see the flaw in my proof? Can anybody hint me to a proof or a reference to a proof?
(@Didier, i try to get Williams book, I have to see if my library can get it for me).
-
The faulty step is when you assert that $E(X)E(Y1_A)=E(XY1_A)$. Here you must not only assume that $X$ is independent of $Y$ (which is not enough to conclude) but that $X$ is independent of $(Y,Z)$. (+1 for showing the steps you tried.) – Did Aug 4 '11 at 8:56
The usual counterexample works: take X and Y i.i.d. centered Bernoulli random variables and Z=XY. Then (X,Y) is independent (by definition), as are (Y,Z) and (Z,X) (easy to check), but (X,Y,Z) is not (for example P(X=Y=1,Z=-1)=0 instead of $\frac18$). And E(XY|Z)=Z although E(X)E(Y|Z)=0. – Did Aug 4 '11 at 12:05
@did I think your first comment serves as a good answer for me already. By (Y,Z) you mean $\sigma(\sigma(Y), \sigma(Z))$? And independence of Y, Z might not be enough to have independence from (Y,Z) (see my new question ) – Johannes L Aug 4 '11 at 12:34
Yes, [X is independent of (Y,Z)] means that the sigma-algebras sigma(X) and sigma(Y,Z) are independent. And sigma(Y,Z) coincides with sigma(sigma(Y),sigma(Z)). – Did Aug 4 '11 at 13:37
So I would have to have (i) X independent from (Y,Z).. I wonder if that follows if I assume Y and Z independent (which would be more natural to my application than to have to introduce the assumption (i)) - i make a new question out of this. – Johannes L Aug 5 '11 at 11:47
The faulty step is when you assert that $E(X)E(Y1_A)=E(XY1_A)$. Here one must not only assume that $X$ is independent of $Y$ but that $X$ is independent of $(Y,Z)$.
To see that there is a difference, the usual example works here: take $X$ and $Y$ i.i.d. centered Bernoulli random variables and $Z=XY$. Then $X$ and $Y$ are independent (by definition), as are $Y$ and $Z$, as are $Z$ and $X$ (easy to check), but $X$, $Y$ and $Z$ are not*. And $E(XY|Z)=Z$ although $E(X)E(Y|Z)=0$.
*For example, $P(X=1,Y=1,Z=-1)=0$ but $P(X=1)P(Y=1)P(Z=-1)=1/8$.
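The counterexample is easy to confirm numerically; a Monte Carlo sketch in Python (variable names are ours):

import numpy as np

rng = np.random.default_rng(0)
n = 10**6
X = rng.choice([-1, 1], size=n)   # centered Bernoulli (Rademacher)
Y = rng.choice([-1, 1], size=n)
Z = X * Y

for z in (-1, 1):
    print(z, (X * Y)[Z == z].mean())   # approx z, so E(XY|Z) = Z
print(X.mean())                        # approx 0, so E(X)E(Y|Z) = 0, not E(XY|Z)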
|
# Calculate Rudder Angle by Bank Angle [closed]
My question: Is rudder angle equal to bank angle? Or, does rudder angle has some relationship with ROT formula?
Below is my steps trying to solve this question.
Step 1: Found below graph from this article showing that vessel's rudder angle (guess it is a close case study) should be in a quadratic equation to the radius like radius = a * rudder_angle^2 + b * rudder_angle + c
Step 2: I check back the ROT equation, ROT (°/sec) = 1091 * tan(bank angle) / speed in knots. Step 1 is somehow make sense to me because it used tan for the angle.
Then, assume all factors are fixed, any suggestion for the next step to prove the relationship or coefficient between rudder angle and bank angle? Thanks.
• An aircraft rudder has very little in common with a ship’s rudder. Jan 14 at 13:43
• Once established in a coordinated turn the rudder angle should be zero. Jan 14 at 17:33
• @MichaelHall not really, you need some – Federico Jan 15 at 11:18
• @Federico, I’ve never found it necessary to hold any rudder pressure once the adverse yaw from the initial roll has been compensated for. Heck, half the time I don’t even bother with rudder for that. (laziness from years of flying a jet with yaw damper I guess…) Jan 15 at 17:52
• Bank angle, as in the angle of roll of the vehicle. Jan 16 at 4:51
Then, assume all factors are fixed, any suggestion for the next step to prove the relationship or coefficient between rudder angle and bank angle? Thanks.
There is absolutely no relationship or coefficient between rudder angle and bank angle in an aircraft turning. Your understanding of aerodynamics is incorrect and your theory is flawed.
An aircraft rudder is only there to control yaw. The effectiveness of the rudder also can’t be quantified by the rudder angle alone because there are too many variables.
A few of the variables include density of the air, speed of the air, size of the rudder, the moment of the rudder relative to Center of Gravity, the airfoil shape of the rudder, size and shape of trim tabs and aerodynamic horns, etc. I am sure there are many more variables.
• Thanks for the direct answer "no relationship or coefficient between rudder angle and bank angle", so that I don't need to stuggle in those angles. Jan 14 at 18:07
You are mixing two completely different effects here. With a boat, the rudder is the primary steering control, while with an airplane, the bank angle is the primary driver of a turn, and bank angle is controlled by varying the roll rate with the ailerons. And naturally, a roll rate of zero is still compatible with banked flight, so we can still be turning even with ailerons and rudder centered. (The subtle yaw and roll trim effects that comprise the basis of lateral "stability" will be left beyond the scope of this brief answer.)
I won't try to assess the accuracy of the article about boats, but I can tell you that, depending on the airspeed, the bank angle, the shape of the aircraft itself (e.g. high aspect ratio sailplane versus medium aspect ratio light plane versus low aspect ratio jet fighter), and whether or not any thrust asymmetry exists (e.g. p-factor), in a constant-bank angle turn, for optimal "coordination" (yaw centered or slip-skid ball centered), the rudder may be need to be kept significantly deflected toward the inside of the turn, or no significant deflection may be needed. (Deflection toward the outside of the turn would be unusual, except to compensate for p-factor.) The rudder required for optimal coordination while actually changing the bank angle (rolling) is another matter; generally the rudder must be deflected toward the descending wingtip, especially with high aspect-ratio aircraft.
But to a very rough first approximation, airplane turning dynamics are all about the bank angle, and the rudder can be considered optional. Many small, fast, low aspect-ratio radio-controlled model airplanes have no rudder at all, and still are very maneuverable and aerobatic, apart from snap rolls, spins, knife-edge flight, and other such sideslip-based maneuvers.
So no, you can't just combine a formula for boats with a formula for airplanes like this.
• Any stuff I should take a look if I want to explore more on the relationship between turning and rudder angle Jan 14 at 13:54
• @PakHoCheung -- you could start by looking at any ASE answers dealing with turn "coordination", "yaw string", "sideforce" etc. I've written more than a few myself. I highly recommend John S Denker's "See How It Flies" website too. Aircraft yaw dynamics are complex, it's not an easy thing to really understand. Google an article "Circling the Holighaus way". But start by understanding the basic relationship between banking and turning. Many small, fast, low aspect-ratio radio-controlled model airplanes have no rudder at all, and still are very maneuverable and aerobatic. Jan 14 at 13:57
• Thanks for your suggestion. I'm trying to learn more about the numbers used in aircraft, so that I can apply more real case for my work as I'm a programmer to design a simple software. Will look at those books and see if having some numbers I can apply:) Jan 14 at 18:06
• @PakHoCheung -- basically I think unless you have a rather sophisticated computer model of the aerodynamics, you should ignore the rudder, i.e. assume no rudder deflection is needed, just start by figuring out the relationship between bank angle and turn rate, which depends on airspeed. Jan 14 at 20:35
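For completeness, the rule of thumb quoted in the question is simple to evaluate; a sketch in Python (note it gives rate of turn from bank angle and true airspeed alone, with no rudder term at all):

import math

def turn_rate_deg_per_sec(bank_deg, tas_knots):
    # ROT = 1091 * tan(bank) / TAS, the rule of thumb from the question
    return 1091 * math.tan(math.radians(bank_deg)) / tas_knots

print(turn_rate_deg_per_sec(30, 120))   # about 5.25 deg/s at 30 deg bank, 120 kt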
The dynamics of a turn involve bank angle AND the weathervaning effect of the vertical fin (the rudder has a secondary function) (plus pitch, but we'll leave that part out to keep it simple).
When you bank, the wing's lift vector is tilted, and the lateral force introduced by the tilt drives the aircraft sideways (it sideslips). Without a weathervaning effect, it would move laterally while remaining pointed in its original direction; just continue sideslipping. This sideslip happens initially (briefly) when bank is first applied, and is critical for dihedral effect to work for roll stability.
The result of the sideslip is a lateral angle of attack acting on the airplane's vertical profile, resulting from the sideways movement created by the bank, mostly acting on the fin, or more correctly acting on the vertical aerodynamic center, the vertical Neutral Point, of the entire aircraft. As long as the Neutral Point is aft of the airplane's C of G (what the fin is there to do), there is a positive weathervaning tendency, or positive static stability in yaw.
So the bank makes the plane slew sideways, the fin develops lateral lift in response (after a slight lag to allow dihedral effect to do its thing in the event you didn't actually want to bank), and rotates the body about the yaw axis to reduce the angle of attack on the fin to near zero.
So the airplane banks, starts to slew sideways, but immediately afterward the weathervaning effect of the fin rotates the body in yaw to keep the body aligned into the airstream and the sideways motion is accompanied by a rotation that makes the airplane follow the arc of the turn.
What the rudder does is allow the camber of the vertical fin to be varied by the pilot (being more or less a wing flap that works in both directions), to apply forces beyond the basic weathervaning tendency, when the weathervaning tendency is insufficient. What makes the weathervaning tendency insufficient is aileron adverse yaw.
So rudder application is used to adjust the camber of the vertical tail, to apply force beyond the basic weathervaning effect, to cancel out the yawing moment created by adverse yaw from the ailerons. If the ailerons are neutral while banked, little to no rudder is required because there is no adverse yaw.
So for a turn:
• Bank creates a lateral thrust component to move the plane sideways.
• The vertical fin responds to the sideways movement by continuously weathervaning the body into the lateral airflow created by the bank. The fin needs to be small enough to allow a slight lag in this reaction to allow dihedral effect to work, where the bank was induced by a bump and the objective is to make the plane return to level flight on its own, but large enough to give a positive weathervaning tendency beyond that when the bank is deliberate.
• The rudder allows the camber of the vertical tail to be varied to increase or decrease the weathervaning force, mostly to cancel out the yaw forces induced by the aileron displacement. The rudder can also be used to take out the small lag in weathervaning effect you normally get when you bank as mentioned previously. Rudder application is normally roughly in proportion to the up aileron displacement. Airplanes with rudder interconnect systems, like the Piper Tripacer, mechanically directly gear rudder movement to aileron movement (using bungee springs) and the pilot doesn't need to move the pedals to maintain a coordinated turn - left aileron gives left rudder and right aileron gives right rudder and neutral aileron gives neutral rudder.
|
# Sect1.1 - Section 1.1
SECTION 1.1

Problem 1
> with(DEtools):
> eqn := diff(y(t), t) = -1 - 2*y(t);
> DEplot(eqn, [t, y], t = -2..2, arrows = THIN, y = -2..2);
[direction field plot omitted]
As t -> infinity the solution y -> -1/2 for all initial conditions.

Problem [number illegible in the scan]
> with(DEtools):
> eqn := diff(y(t), t) = y(t) + 2;
> DEplot(eqn, [t, y], t = -2..2, arrows = THIN, y = -4..0);
[direction field plot omitted]
As t -> infinity the solution y -> infinity, negative infinity, or -2, depending on the initial condition. If y(0) > -2 then y -> infinity as t -> infinity. If y(0) = -2 then y = -2 for all t. If y(0) < -2 then y -> negative infinity as t -> infinity.

Section 1.1, Problem 9
Problem. Write down a differential equation of the form dy/dt = a*y + b whose solutions have the required behavior as t -> infinity: all other solutions diverge from y = 2.
Solution. y = 2 must be an equilibrium solution; when y = 2, dy/dt = 0. Therefore a*(2) + b = 0 and b = -2a (see Example 3). The solutions must diverge, so we want a to be positive; for example, let a = 2. Then one (not the only) equation is dy/dt = 2y - 4.
In Maple, one can check the solution by entering:
> with(DEtools):
> eqn := diff(y(t), t) = 2 * y(t) - 4;
> DEplot(eqn, y(t), t = -2..2, arrows = THIN, y = -2..4);

Section 1.1, Problem 12
Problem. Based on the direction field, determine the behavior of y as t -> infinity. If this behavior depends on the initial value of y at t = 0, describe this dependency:
dy/dt = -y(5 - y)
Solution. To draw the direction field, in Maple enter:
> with(DEtools):
> eqn := diff(y(t), t) = -y(t) * (5 - y(t));
> DEplot(eqn, y(t), t = -2..2, arrows = THIN, y = -2..4);
[direction field plot omitted]
There appear to be two equilibria, one at y = 0 and another at y = 5 (as solving for y' = 0 indicates). If the initial condition is y(0) < 0, then y(t) will increase and approach y = 0. If the initial condition is y(0) > 0 but also y(0) < 5, then y(t) decreases and approaches y = 0. If the initial condition is y(0) > 5, then y(t) grows without bound.

Section 1.1, Problem 16
Problem. A spherical raindrop evaporates at a rate proportional to its surface area. Write a differential equation for the volume of the raindrop as a function of time.
Solution. The most likely best choice for the independent variable in this problem is time, as one is told to write a differential equation for the volume of the raindrop as a function of time. The most likely best choice for the dependent variable is the volume of the raindrop, as that is the quantity whose rate of change one is trying to model. At a given time t its volume would be written as V(t). Given that one is modeling a raindrop, the time t is probably best measured in seconds or minutes; let us take seconds. The volume is probably best measured in cubic centimeters. (The units turn out not to affect this particular question, which is only setting up the model, but they can affect the solution of the differential equation, and selecting the units is worth practicing.)
By assumption, one knows that the rate of change of the volume V(t) of the raindrop is proportional to its surface area A(t), and its volume is decreasing; so one may immediately write dV/dt = -k*A(t) (with it understood that k, the rate of evaporation, is positive).
To complete the model, one needs to know the area A(t) of the raindrop in terms of the dependent variable V(t). As the raindrop is by assumption spherical, one knows that the volume V = (4/3)*pi*r^3 and the surface area A = 4*pi*r^2. One can then solve the first expression for r, and find that r = (3V/(4*pi))^(1/3). Substituting that expression for r in the area expression gives A = 4*pi*(3V/(4*pi))^(2/3), or A = 3^(2/3)*(4*pi)^(1/3)*V^(2/3). Then, finally, that turns the expression for dV/dt into:
dV/dt = -k * 3^(2/3) * (4*pi)^(1/3) * V^(2/3)
(Of course, since 3^(2/3)*(4*pi)^(1/3) is just a number, one could replace the constant k with the constant K = 3^(2/3)*(4*pi)^(1/3)*k, and simplify the equation further to dV/dt = -K*V^(2/3).)

Section 1.1, Problem 17
Problem. A certain drug is being administered intravenously to a hospital patient. Fluid containing 5 mg/cm^3 of the drug enters the patient's bloodstream at a rate of 100 cm^3/hr. The drug is absorbed by body tissues or otherwise leaves the bloodstream at a rate proportional to the amount present, with a rate constant of 0.4 (hr)^-1.
a. Assuming that the drug is always uniformly present throughout the bloodstream, write a differential equation for the amount of the drug that is present in the bloodstream at any time.
Solution. The independent variable is time t, measured in hours (based on the problem's statement). The dependent variable is the amount of drug in the system; call it M(t) and measure it in milligrams. At any given time t, the amount of drug in the system is increasing at the rate of 5 mg/cm^3 * 100 cm^3/hr = 500 mg/hr, and it is decreasing at the rate of 0.4*M(t) mg/hr. A differential equation describing the rate of change is therefore
dM/dt = -0.4*M(t) + 500
b. How much of the drug is present in the bloodstream after a long time?
Solution. The only equilibrium of the system is when -0.4*M + 500 = 0, or M = 1250. The question is then whether solutions converge to, or diverge from, this equilibrium. As the equation has the form dM/dt = r*M + k, with r = -0.4 a negative number, it seems likely solutions will converge to M = 1250 mg. This can be checked; in Maple, enter the commands:
> with(DEtools):
> eqn := diff(M(t), t) = -0.4 * M(t) + 500;
> DEplot(eqn, M(t), t = 0..10, arrows = THIN, M = 0..2000);
So after a long time there will be about 1250 mg in the bloodstream.
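As a quick cross-check of Problem 17, here is a sketch in R rather than Maple (the function name, step size, and end time are arbitrary choices): the exact solution of dM/dt = -0.4 M + 500 is M(t) = 1250 + (M(0) - 1250) e^(-0.4 t), and a crude Euler integration shows every initial condition settling at the 1250 mg equilibrium.

# Euler integration of dM/dt = -0.4*M + 500; dt and t_end are arbitrary.
euler_drug <- function(M0, t_end = 20, dt = 0.01) {
  M <- M0
  for (t in seq(0, t_end, by = dt)) {
    M <- M + dt * (-0.4 * M + 500)   # one Euler step
  }
  M
}
sapply(c(0, 500, 2000), euler_drug)  # all three end near 1250
1250 + (0 - 1250) * exp(-0.4 * 20)   # exact solution at t = 20, from M(0) = 0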
|
# In 1963, Congress approved the Community Mental Health
In 1963, Congress approved the Community Mental Health Centers Act, which outlined
plans to release the mentally ill from institutions, incorporate these individuals into their
communities, and provide outpatient treatment. Leading associations of mental health
professionals overwhelmingly applauded these goals and approved of these plans because,
the experts said, the treatment rather than the institutional environment was the crucial
element for the welfare of these patients. Within twenty years, state authorities succeeded in
discharging 95% of these patients from institutional care. In 1983, however, executives from
these same professional associations said that the plight of the mentally ill was worse than
ever.

Which of the following, if true, best resolves the paradox in the above passage?
A) More people were diagnosed with psychiatric disorders in 1983 than in 1963.
B) Many mental health professionals believe that if their peers had administered the project rather than the state authorities, the results would have been better.
C) The state budget allocation for services to the mentally ill has not increased faster than the rate of inflation.
D) Congress agreed to fund these outpatient services, provided that the money come from cuts in other domestic programs; these cuts, however, never materialized.
E) Many of the released patients had, at some time, been addicted to illegal narcotics.
Magoosh GMAT Instructor
With this one, it helps to be familiar with the history behind it, though that's not strictly necessary.
Discharging them is a great idea, IF they get the outpatient treatment, which could help them more than remaining locked in an institution. So, if the treatment could help them so much, why weren't they helped?
A) More people were diagnosed with psychiatric disorders in 1983 than in 1963.
The passage is about a mass of patients who had been diagnosed and locked in institutions over the course of years, perhaps decades. It's not just about the folks diagnosed in a single calendar year.
B) Many mental health professionals believe that if their peers had administered the project rather than the state authorities, the results would have been better.
The question is: why didn't the outpatient treatment help these patients as expected --- that part would have been administered by mental health professionals, so this statement is somewhat beside the point.
C) The state budget allocation for services to the mentally ill has not increased faster than the rate of inflation.
OK, this is getting there . . maybe the money for the programs wasn't keeping up with inflation, so the dollars allocated didn't go as far as they should have. Not bad.
D) Congress agreed to fund these outpatient services, provided that the money come from cuts in other domestic programs; these cuts, however, never materialized.
In other words, they never came up with the money, so the patients never got the outpatient treatments that theoretically would have been helpful to them. Bingo! That would precisely explain why, 20 years later, the mentally ill were worse off than ever --- they had been kicked out of their state-sponsored institutions, given no help with services, and left to fend for themselves. Historically, this is exactly what happened in California, when then-Governor Reagan signed the bill to close all those state-run institutions, a huge budget cut, and while all kinds of outpatient care were promised, in practice not a dime was allocated toward them. The vast majority of the mentally ill became homeless, and are still so today, thanks to those decisions now almost half-a-century old.
E) Many of the released patients had, at some time, been addicted to illegal narcotics.
An aggravating factor, to be sure, but it doesn't explain the wholesale failure of the promised and promising outpatient treatments.
The best resolution of the paradox is what, in real history, actually happened. The outpatient services, however potentially beneficial, didn't help these released mentally ill folks because they never received those services. Super-helpful treatment programs were promised in political grandstanding, but never supported with any money, and the homeless, kicked out of the state-run institutions, were left on the street with no care.
Here's another paradox question for practice.
http://gmat.magoosh.com/questions/1320
When you submit your answer, the next page will have a complete video explanation.
Does all this make sense? Let me know if you have any questions.
Mike
Hi Mike,
The passage doesn't say that funds were what helped these people while they were in the institutions; in fact, it doesn't mention funding at all. It was supposedly the institutional environment that harmed them.
Doesn't E say that addiction to illegal drugs worsened the situation of those who were released?
Or is it that even if "many" got addicted, the situation wouldn't necessarily be worse than before, so D is more universal because the lack of treatment affects the whole group?
A) More people were diagnosed with psychiatric disorders in 1983 than in 1963. - Irrelevant information - Incorrect
B) Many mental health professionals believe that if their peers had administered the project rather than the state authorities, the results would have been better. - Raises more questions than answers - Incorrect
C) The state budget allocation for services to the mentally ill has not increased faster than the rate of inflation. - Even if the budget has not kept pace with inflation, the existing budget could still have funded treatment; this does not explain why the plight worsened - Incorrect
D) Congress agreed to fund these outpatient services, provided that the money come from cuts in other domestic programs; these cuts, however, never materialized. - The whole idea was to treat people with outpatient services, and the failure to fund these programs worsened their situation - Correct
E) Many of the released patients had, at some time, been addicted to illegal narcotics. - Irrelevant information - Incorrect
Optimus Prep Instructor
The patients were supposed to be better off with treatment outside the institutions than inside them. We therefore need a reason why the patients did not receive the treatment, or why the treatment did not work. The question, then, is: why was the treatment plan unsuccessful?
A) More people were diagnosed with psychiatric disorders in 1983 than in 1963. An increase in diagnoses doesn't provide an explanation.
B) Many mental health professionals believe that if their peers had administered the project rather than the state authorities, the results would have been better. The belief of health professionals doesn't provide an explanation.
C) The state budget allocation for services to the mentally ill has not increased faster than the rate of inflation. Not increasing to match inflation doesn't mean that the services aren't provided.
D) Congress agreed to fund these outpatient services, provided that the money come from cuts in other domestic programs; these cuts, however, never materialized. No money for the treatment means no treatment, which explains its lack of success.
E) Many of the released patients had, at some time, been addicted to illegal narcotics. Narcotics addiction of some patients is not related to the failure of the treatment plan.
|
# Mathematical Sciences Research Institute
# Printer FAQ (Frequently Asked Questions)
Q1: What's the command to send a job to the printer?
A: lpr -Pprinter_name filename (where printer_name = 3rdFloorHP, LibHP, etc...)
Q2: Where can I pick up my output?
A: If you do not specify a printer name with the -P option, output should go to the printer on the same floor as the machine from which the job was submitted. Look here for output locations.
Q3: How can I learn all the available options to the lpr command?
A: type man lpr. This will give you a help page that contains all the available options for lpr. You may find additional help at CUPS Printing & Options Help.
Q4: How can I find out where my job is in the queue?
A: type lpstat -o -p. This will show you all queued jobs on all printers on the network.
Q5: My job has already started to print, but I need to kill it. How can I do this?
A: Turn the printer off (being careful to prevent a paper jam by cutting power between paper-tray accesses). This clears the printer's memory so that the job will not restart itself. Once power is restored, the printer should resume with the next job in the queue. Please do this only if you are sure that it is indeed your print job being printed at the moment.
Q6: The printer is out of paper. Where can I get more paper for the printer?
A: Extra printer paper is located below member mailboxes in the mail/copy room on the first floor.
Q7: How much paper can be put into each paper tray?
A: One ream (the entire contents of one paper package).
Q8: Is it possible to get double-sided output?
A: Yes. All three printers accessible from your office computers print double-sided by default on the MSRI network. You may also specify it on the command line with lpr -o sides=two-sided.
Q9: Is it possible to get double-sided output and print two pages per sheet?
A: Yes. The option to print multiple sheets per page is -o number-up=pagespersheet. To specify both options on the command line, use lpr -o sides=two-sided -o number-up=2 . Be sure to precede every option you want with -o.
Q10: How can I print from the wireless network?
A: You can connect to any of the three printers by following our wireless printing instructions.
Q11: Hey! The 2nd floor lab is locked; how do I get my print job??
A: If you have already sent the print job, you may pick it up the next morning at 8:30, or re-send your job to the third floor printer.
Because our lab must be locked and the alarm must be armed when our administrative staff leaves for the day, you should print to the third floor printer after 5:00pm.
You can do this by either choosing "3rdFloorHP" if your application gives you that option, or by using the -P option with the lpr command:
lpr -P3rdFloorHP file-name.
Q12: Why doesn't dvips print to my default printer?
A: You are probably used to the command dvips file.dvi sending the file directly to your default printer. We currently use Debian Linux, and the Debian developers have changed this default behavior: dvips now writes its output to a PostScript file rather than sending it to your default printer.
Instead of using dvips, you can print dvi files with lpr, as with any other text, PDF, or postscript file.
If you would rather use dvips, you must specify the printer you would like to print to with the -P option, such as:
dvips -P2ndFloorHP file.dvi
Q13: The HP LaserJet 8150 printer is telling me PRINTER ERROR 79.00FE, a red light is blinking, and no print jobs are coming out. What do I do?
A: The quickest resolution may be to simply turn off the printer [the power button is located next to the bottom drawer, on the left side], then turn it back on again.
Our HP LaserJet 8150 printers are somewhat fickle with certain files, particularly certain PDFs from Macs connected to our wireless network. If you notice that you are printing to one of our 8150s and always get this error after printing a particular file, please send the file to [email protected] and we will print it for you.
If you continue to see this error and turning the printer off and on does not help, please let a member of the Computing Department know, either by e-mail or in office 214.
|
# Algebra: Chapter 0
###### Paolo Aluffi
Publisher: American Mathematical Society
Publication Date: 2009
Number of Pages: 713
Format: Hardcover
Price: 89.00
ISBN: 9780821847817
Category: Textbook
[Reviewed by Michael Berg, on 09/17/2009]
An obvious question: why another graduate algebra book? Aren’t there all but too many already? There are the classics someone my age naturally gravitates to: B.L. Van der Waerden’s (once) Modern(e) Algebra, Serge Lang’s occasionally idiosyncratic but hugely influential Algebra (witness e.g. the notorious exercise on homological algebra in the first edition), Mac Lane-Birkhoff’s Algebra (not Birkhoff-MacLane), the marvelous series by Nathan Jacobson, and so forth. Somewhat more recently J.J. Rotman’s encyclopedic Advanced Modern Algebra appeared, and quite recently (and reviewed in this venue) Anthony Knapp produced the even more encyclopedic pair, Basic Algebra, Advanced Algebra (and both Rotman’s and Knapp’s books are also excellent, by the way — to no one’s surprise). Is there anything genuinely novel to be done when it comes to educating fledgling graduate students in this subject, given such an already well-populated and high-quality field?
Well, the answer is yes. And the title of the book under review, Algebra: Chapter 0, is already a clue to what the author, Paolo Aluffi, is up to. In a perhaps Bourbakian sense, the prevailing motivation and objective is to present the subject at hand in a manner that pays proper due to relatively new foundations, making for a rather different orientation and flavor for what ensues. Clearly, when it comes to algebra we can historically identify three such foundational paradigm shifts, so to speak: first, the shift from the original conceptions of Kronecker, Weber, Dedekind, and Hilbert, to the abstrakte Algebra of Emmy Nöther, who, by the way, was apt to give a huge portion of the credit to Dedekind; second, the maneuver of basing not just algebra but nigh on all of mathematics on set theory, a move sometimes associated with Bourbaki in its (or his) heyday; and third, the rather recent incursions made by category theory into, again, nigh on everything, at least in potentio. For the latter it is not wrong to give the lion’s share of historical credit (or blame) to Grothendieck, both for his introduction of a sweeping categorical perspective in this area and for the development of a huge number of attendant techniques. What he did for algebraic geometry has manifestly taken on a largely autonomous character and has come to inform the foundations of any number of mainstream mathematical disciplines at this point in time.
Aluffi’s book accordingly aims at developing algebra, at the usual advanced undergraduate to beginning graduate level, on an explicitly category-theoretical foundation. But Aluffi tempers his revolutionary zeal by also giving set theory its due: his first chapter is titled, “Preliminaries: set theory and categories,” so (for those who still don’t have any time for categories and functors) it could be worse.
His treatment of these preliminaries is thorough as well as eminently accessible: Aluffi writes well, clearly and engagingly. This characterizes all of Algebra: Chapter 0, actually, and makes it easy to recommend the book enthusiastically even aside from the fact that I am a big fan of category theory to begin with. The sequence of subsequent chapters of the book is as follows:
• Ch. II, “Groups, first encounter,” taking one from the basic definitions through group actions, and then appending a section titled “Group objects in categories”;
• Ch. III, “Rings and modules,” including a welcome subsection differentiating between the notions of finite generation and finite type, and capped off by a section on complexes and homology (ending with the snake lemma);
• Ch. IV, “Groups, second encounter,” including e.g. the class formula, Sylow Theory, Jordan-Hölder (and Schreier), the extension problem (i.e. second cohomology, really, although Aluffi restricts himself to semi-direct products), and finite abelian groups;
• Ch. V, “Irreducibility and factorization in integral domains,” starting with chain conditions, ending with a thorough discussion of polynomial rings (and Fermat’s theorem on sums of squares as icing on the cake);
• Ch. VI, “Linear algebra,” covering “just about everything” (including e.g. the Euler characteristic and the Grothendieck group, presentations and resolutions, and sundry canonical forms);
• Ch. VII, “Fields,” including a decent dosage of algebraic geometry (surrounding the Nullstellensatz), geometric impossibilities, and “[a] little Galois theory” — or more than a little, given that the last subsection of this chapter is titled, “Abelian groups as Galois groups over Q”;
• Ch. VIII, “Linear algebra: reprise,” introducing a functorial perspective into the affair, followed by coverage of limits and colimits, tensor products, Tor, Hom, Ext, intermingled with treatments of duality, projective and injective modules, and adjunction;
• Ch. IX, “Homological algebra,” taking the reader from “(the) necessary categorical preliminaries” to, in order, additive and abelian categories, “complexes and homology, again” (replete with the long exact sequence in cohomology), triangles, derived categories, a very thorough discussion of homotopy, derived functors, a return to (e.g. group) cohomology from this new perspective, double complexes, various things acyclic, Tor and Ext again, and finally a brief discussion of derived categories, triangulated categories, and spectral sequences, grouped together under the heading “Further topics.”

It is clear, then, that Aluffi’s grand undertaking (we’re talking about a little over 700 pages!) is indeed a most useful and welcome labor: he has composed a coherent treatment of mainstream graduate algebra tied together with material from homological algebra that not all that long ago was dealt with separately and subsequently. As already suggested, the approach chosen in Algebra: Chapter 0 properly reflects relatively recent changes in the way research in algebra is done: there is a far greater presence of homological algebraic methods early on, and, indeed, categories (even derived, triangular ones) come into the game much as a matter of course. In this sense Algebra: Chapter 0 certainly breaks new ground, and does so with élan.

Finally, Aluffi also possesses the gift of a light touch: the book has a lot of humor in it. For example, his introduction of group theory to the presumably uninitiated starts with “Joke 1.1. Definition: A group is a groupoid with a single object,” and we find on p. 334 the (revealing) phrase, “Against our best efforts, we cannot resist extending these simple observations to more general complexes…” and it’s on to the Euler characteristic and the Grothendieck group. There are also a lot of good exercises and very useful (and pedagogically astute) footnotes. Algebra: Chapter 0 is a very good book that should be used in a huge number of departments across the country (and beyond).

Michael Berg is Professor of Mathematics at Loyola Marymount University in Los Angeles, CA.
|
# Sensor array
A sensor array is a group of sensors, usually deployed in a certain geometric pattern, used for collecting and processing electromagnetic or acoustic signals. The advantage of using a sensor array over a single sensor lies in the fact that an array adds new dimensions to the observation, helping to estimate more parameters and improve the estimation performance. For example, an array of radio antenna elements used for beamforming can increase antenna gain in the direction of the signal while decreasing the gain in other directions, i.e., increasing signal-to-noise ratio (SNR) by amplifying the signal coherently. Another example of sensor array application is to estimate the direction of arrival of impinging electromagnetic waves. The related processing method is called array signal processing. Application examples of array signal processing include radar/sonar, wireless communications, seismology, machine condition monitoring, astronomical observations, fault diagnosis, etc.
Using array signal processing, the temporal and spatial properties (or parameters) of the impinging signals interfered by noise and hidden in the data collected by the sensor array can be estimated and revealed. This is known as parameter estimation.
## Plane wave, time domain beamforming
Figure 1 illustrates a six-element uniform linear array (ULA). In this example, the sensor array is assumed to be in the far-field of the signal source, so that the impinging wavefront can be treated as planar.
Parameter estimation takes advantage of the fact that the distance from the source to each antenna in the array is different, which means that the input data at each antenna will be phase-shifted replicas of each other. Eq. (1) shows the calculation for the extra time it takes to reach each antenna in the array relative to the first one, where c is the velocity of the wave.
${\displaystyle \Delta t_{i}={\frac {(i-1)d\cos \theta }{c}},i=1,2,...,M\ \ (1)}$
Each sensor is associated with a different delay. The delays are small but not trivial. In the frequency domain, they appear as phase shifts among the signals received by the sensors. The delays are closely related to the incident angle and the geometry of the sensor array. Given the geometry of the array, the delays or phase differences can be used to estimate the incident angle. Eq. (1) is the mathematical basis behind array signal processing. Simply summing the signals received by the sensors and calculating the mean value gives the result
${\displaystyle y={\frac {1}{M}}\sum _{i=1}^{M}{\boldsymbol {x}}_{i}(t-\Delta t_{i})}$ .
Because the received signals are out of phase, this mean value does not give an enhanced signal compared with the original source. Heuristically, if we can find the delay of each of the received signals and remove it prior to the summation, the mean value
${\displaystyle y={\frac {1}{M}}\sum _{i=1}^{M}{\boldsymbol {x}}_{i}(t)}$
will result in an enhanced signal. The process of time-shifting signals, using a well-selected set of delays for each channel of the sensor array, so that the signals add constructively is called beamforming. In addition to the delay-and-sum approach described above, a number of spectral-based (non-parametric) and parametric approaches exist which improve various performance metrics. These beamforming algorithms are briefly described as follows.
## Array design
Sensor arrays have different geometrical designs, including linear, circular, planar, cylindrical and spherical arrays. There are sensor arrays with arbitrary array configurations, which require more complex signal processing techniques for parameter estimation. In a uniform linear array (ULA) the phase of the incoming signal ${\displaystyle \omega \tau }$ should be limited to ${\displaystyle \pm \pi }$ to avoid grating lobes. This means that for an angle of arrival ${\displaystyle \theta }$ in the interval ${\displaystyle [-{\frac {\pi }{2}},{\frac {\pi }{2}}]}$ the sensor spacing should be smaller than half the wavelength, ${\displaystyle d\leq \lambda /2}$. However, the width of the main beam, i.e., the resolution or directivity of the array, is determined by the length of the array compared to the wavelength. In order to have decent directional resolution the length of the array should be several times larger than the radio wavelength.
## Types of sensor arrays
### Antenna array
• Antenna array (electromagnetic), a geometrical arrangement of antenna elements with a deliberate relationship between their currents, forming a single antenna usually to achieve a desired radiation pattern
• Directional array, an antenna array optimized for directionality
• Phased array, An antenna array where the phase shifts (and amplitudes) applied to the elements are modified electronically, typically in order to steer the antenna system's directional pattern, without the use of moving parts
• Smart antenna, a phased array in which a signal processor computes phase shifts to optimize reception and/or transmission to a receiver on the fly, such as is performed by cellular telephone towers
• Digital antenna array, this is smart antenna with multi channels digital beamforming, usually by using FFT.
• Interferometric array of radio telescopes or optical telescopes, used to achieve high resolution through interferometric correlation
• Watson-Watt / Adcock antenna array, using the Watson-Watt technique whereby two Adcock antenna pairs are used to perform an amplitude comparison on the incoming signal
## Delay-and-sum beamforming
If a time delay is added to the recorded signal from each microphone that is equal and opposite to the delay caused by the additional travel time, the signals will be perfectly in phase with each other. Summing these in-phase signals results in constructive interference that amplifies the SNR by the number of antennas in the array. This is known as delay-and-sum beamforming. For direction of arrival (DOA) estimation, one can iteratively test time delays for all possible directions. If the guess is wrong, the signals interfere destructively, resulting in a diminished output signal, but a correct guess results in the signal amplification described above.
The problem is: before the incident angle is estimated, how could it be possible to know the time delay that is 'equal' and opposite to the delay caused by the extra travel time? It is impossible. The solution is to try a series of angles ${\displaystyle {\hat {\theta }}\in [0,\pi ]}$ at sufficiently high resolution and calculate the resulting mean output signal of the array using Eq. (3). The trial angle that maximizes the mean output is an estimate of the DOA given by the delay-and-sum beamformer. Adding an opposite delay to the input signals is equivalent to rotating the sensor array physically. Therefore, it is also known as beam steering.
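As an illustration, here is a minimal R sketch of this angle scan (not part of the original article, and narrowband: the time delays of Eq. (1) are applied as phase shifts, as in the frequency-domain view below). The element count, spacing, noise level, snapshot count, and true angle are all illustrative values.

# Narrowband delay-and-sum DOA scan for an M-element ULA; d is in wavelengths.
M <- 6; d <- 0.5
steer <- function(theta) exp(-2i * pi * d * (0:(M - 1)) * cos(theta))

set.seed(1)
N <- 200                                              # number of snapshots
s <- (rnorm(N) + 1i * rnorm(N)) / sqrt(2)             # complex source signal
theta_true <- 60 * pi / 180
noise <- 0.1 * matrix(rnorm(M * N) + 1i * rnorm(M * N), M, N)
X <- outer(steer(theta_true), s) + noise              # received snapshots

angles <- seq(0, pi, length.out = 361)
power <- sapply(angles, function(th) {
  w <- steer(th) / M                                  # phase-align ("un-delay") and average
  mean(Mod(Conj(w) %*% X)^2)
})
angles[which.max(power)] * 180 / pi                   # peaks near the true 60 degrees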
## Spectrum-based beamforming
Delay-and-sum beamforming is a time-domain approach. It is simple to implement, but it may estimate the direction of arrival (DOA) poorly. The solution to this is a frequency-domain approach. The Fourier transform transforms the signal from the time domain to the frequency domain. This converts the time delay between adjacent sensors into a phase shift. Thus, the array output vector at any time t can be denoted as ${\displaystyle {\boldsymbol {x}}(t)=x_{1}(t){\begin{bmatrix}1&e^{-j\omega \Delta t}&\cdots &e^{-j\omega (M-1)\Delta t}\end{bmatrix}}^{T}}$, where ${\displaystyle x_{1}(t)}$ stands for the signal received by the first sensor. Frequency domain beamforming algorithms use the spatial covariance matrix, represented by ${\displaystyle {\boldsymbol {R}}=E\{{\boldsymbol {x}}(t){\boldsymbol {x}}^{T}(t)\}}$. This M by M matrix carries the spatial and spectral information of the incoming signals. Assuming zero-mean Gaussian white noise, the basic model of the spatial covariance matrix is given by
${\displaystyle {\boldsymbol {R}}={\boldsymbol {V}}{\boldsymbol {S}}{\boldsymbol {V}}^{H}+\sigma ^{2}{\boldsymbol {I}}\ \ (4)}$
where ${\displaystyle \sigma ^{2}}$ is the variance of the white noise, ${\displaystyle {\boldsymbol {I}}}$ is the identity matrix and ${\displaystyle {\boldsymbol {V}}}$ is the array manifold vector ${\displaystyle {\boldsymbol {V}}={\begin{bmatrix}{\boldsymbol {v}}_{1}&\cdots &{\boldsymbol {v}}_{k}\end{bmatrix}}^{T}}$ with ${\displaystyle {\boldsymbol {v}}_{i}={\begin{bmatrix}1&e^{-j\omega \Delta t_{i}}&\cdots &e^{-j\omega (M-1)\Delta t_{i}}\end{bmatrix}}^{T}}$. This model is of central importance in frequency domain beamforming algorithms.
Some spectrum-based beamforming approaches are listed below.
### Conventional (Bartlett) beamformer
The Bartlett beamformer is a natural extension of conventional spectral analysis (spectrogram) to the sensor array. Its spectral power is represented by
${\displaystyle {\hat {P}}_{Bartlett}(\theta )={\boldsymbol {v}}^{H}{\boldsymbol {R}}{\boldsymbol {v}}\ \ (5)}$.
The angle that maximizes this power is an estimation of the angle of arrival.
### MVDR (Capon) beamformer
The Minimum Variance Distortionless Response beamformer, also known as the Capon beamforming algorithm,[1] has a power given by
${\displaystyle {\hat {P}}_{Capon}(\theta )={\frac {1}{{\boldsymbol {v}}^{H}{\boldsymbol {R}}^{-1}{\boldsymbol {v}}}}\ \ (6)}$.
Though the MVDR/Capon beamformer can achieve better resolution than the conventional (Bartlett) approach, this algorithm has higher complexity due to the full-rank matrix inversion. Technical advances in GPU computing have begun to narrow this gap and make real-time Capon beamforming possible.[2]
### MUSIC beamformer
The MUSIC (MUltiple SIgnal Classification) beamforming algorithm starts with decomposing the covariance matrix, as given by Eq. (4), into its signal part and its noise part. The eigen-decomposition of ${\displaystyle {\boldsymbol {R}}}$ is represented by
${\displaystyle {\boldsymbol {R}}={\boldsymbol {U}}_{s}{\boldsymbol {\Lambda }}_{s}{\boldsymbol {U}}_{s}^{H}+{\boldsymbol {U}}_{n}{\boldsymbol {\Lambda }}_{n}{\boldsymbol {U}}_{n}^{H}\ \ (7)}$.
MUSIC uses the noise sub-space of the spatial covariance matrix in the denominator of the Capon algorithm
${\displaystyle {\hat {P}}_{MUSIC}(\theta )={\frac {1}{{\boldsymbol {v}}^{H}{\boldsymbol {U}}_{n}{\boldsymbol {U}}_{n}^{H}{\boldsymbol {v}}}}\ \ (8)}$.
Therefore the MUSIC beamformer is also known as a subspace beamformer. Compared to the Capon beamformer, it gives much better DOA estimation.
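A minimal R sketch comparing the Bartlett (Eq. 5), Capon (Eq. 6) and MUSIC (Eq. 8) pseudospectra on simulated data (not from the article; two sources on a half-wavelength six-element ULA, and all parameter values are illustrative):

# Bartlett, Capon and MUSIC pseudospectra for K = 2 sources on an M-element ULA.
M <- 6; d <- 0.5; K <- 2; N <- 500
steer <- function(theta) exp(-2i * pi * d * (0:(M - 1)) * cos(theta))

set.seed(2)
A <- sapply(c(50, 75) * pi / 180, steer)                      # M x K array manifold
S <- matrix(rnorm(K * N) + 1i * rnorm(K * N), K, N)           # uncorrelated sources
X <- A %*% S + 0.2 * matrix(rnorm(M * N) + 1i * rnorm(M * N), M, N)
R <- X %*% Conj(t(X)) / N                                     # sample covariance

En <- eigen(R, symmetric = TRUE)$vectors[, (K + 1):M]         # noise subspace
spectra <- function(th) {
  v <- steer(th)
  c(bartlett = Re(Conj(v) %*% R %*% v),
    capon    = 1 / Re(Conj(v) %*% solve(R) %*% v),
    music    = 1 / sum(Mod(Conj(v) %*% En)^2))
}
grid <- seq(0, pi, length.out = 721)
P <- sapply(grid, spectra)   # 3 x 721; all three peak near 50 and 75 degrees

On such data all three spectra peak near the true angles, with MUSIC giving by far the sharpest peaks, which illustrates the resolution ordering described in the text.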
### SAMV beamformer
The SAMV beamforming algorithm is a sparse signal reconstruction based algorithm which explicitly exploits the time-invariant statistical characteristic of the covariance matrix. It achieves superresolution and is robust to highly correlated signals.
## Parametric beamformers
One of the major advantages of the spectrum-based beamformers is their lower computational complexity, but they may not give accurate DOA estimates if the signals are correlated or coherent. An alternative approach is parametric beamforming, also known as maximum likelihood (ML) beamforming. One example of a maximum likelihood method commonly used in engineering is the least squares method. In the least squares approach, a quadratic penalty function is used. To get the minimum value (or least squared error) of the quadratic penalty function (or objective function), take its derivative (which is linear), set it equal to zero, and solve the resulting system of linear equations.
In ML beamformers a quadratic penalty function is applied to the spatial covariance matrix and the signal model. One example of an ML beamformer penalty function is
${\displaystyle L_{ML}(\theta )=\|{\hat {\boldsymbol {R}}}-{\boldsymbol {R}}\|_{F}^{2}=\|{\hat {\boldsymbol {R}}}-({\boldsymbol {V}}{\boldsymbol {S}}{\boldsymbol {V}}^{H}+\sigma ^{2}{\boldsymbol {I}})\|_{F}^{2}\ \ (9)}$ ,
where ${\displaystyle \|\cdot \|_{F}}$ is the Frobenius norm. It can be seen from Eq. (4) that the penalty function of Eq. (9) is minimized by matching the signal model to the sample covariance matrix as accurately as possible. In other words, the maximum likelihood beamformer seeks the DOA ${\displaystyle \theta }$, the independent variable of the matrix ${\displaystyle {\boldsymbol {V}}}$, that minimizes the penalty function in Eq. (9). In practice, the penalty function may look different, depending on the signal and noise model. For this reason, there are two major categories of maximum likelihood beamformers: deterministic ML beamformers and stochastic ML beamformers, corresponding to a deterministic and a stochastic model, respectively.
Another way to simplify the minimization is to differentiate the penalty function. To simplify the optimization algorithm further, logarithmic operations and the probability density function (PDF) of the observations may be used in some ML beamformers.
The optimization problem is solved by finding the roots of the derivative of the penalty function after equating it with zero. Because the equation is non-linear, a numerical search approach such as the Newton–Raphson method is usually employed. The Newton–Raphson method is an iterative root-finding method with the iteration
${\displaystyle x_{n+1}=x_{n}-{\frac {f(x_{n})}{f'(x_{n})}}\ \ (10)}$.
The search starts from an initial guess ${\displaystyle x_{0}}$. If the Newton–Raphson search method is employed to minimize the beamforming penalty function, the resulting beamformer is called a Newton ML beamformer. Several well-known ML beamformers are described below without providing further details, due to the complexity of the expressions.
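A minimal sketch of the iteration in Eq. (10), applied to a toy scalar equation rather than to an actual beamforming penalty function (the function, tolerance and iteration cap are illustrative choices):

# Generic Newton-Raphson root search, as in Eq. (10).
newton <- function(f, fprime, x0, tol = 1e-10, max_iter = 50) {
  x <- x0
  for (i in seq_len(max_iter)) {
    step <- f(x) / fprime(x)     # f(x_n) / f'(x_n)
    x <- x - step
    if (abs(step) < tol) break   # stop once the update is negligible
  }
  x
}
newton(function(x) x^2 - 2, function(x) 2 * x, x0 = 1)  # converges to sqrt(2)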
Deterministic maximum likelihood beamformer
In the deterministic maximum likelihood beamformer (DML), the noise is modeled as a stationary Gaussian white random process, while the signal waveform is modeled as deterministic (but arbitrary) and unknown.
Stochastic maximum likelihood beamformer
In the stochastic maximum likelihood beamformer (SML), the noise is modeled as a stationary Gaussian white random process (the same as in DML), whereas the signal waveform is modeled as a Gaussian random process.
Method of direction estimation
The method of direction estimation (MODE) is a subspace maximum likelihood beamformer, just as MUSIC is a subspace spectral-based beamformer. Subspace ML beamforming is obtained by eigen-decomposition of the sample covariance matrix.
## References
1. J. Capon, “High–Resolution Frequency–Wavenumber Spectrum Analysis,” Proceedings of the IEEE, 1969, Vol. 57, pp. 1408–1418
2. Asen, Jon Petter; Buskenes, Jo Inge; Nilsen, Carl-Inge Colombo; Austeng, Andreas; Holm, Sverre (2014). "Implementing capon beamforming on a GPU for real-time cardiac ultrasound imaging". IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control. 61: 76. doi:10.1109/TUFFC.2014.6689777.
• H. L. Van Trees, “Optimum array processing – Part IV of detection, estimation, and modulation theory”, John Wiley, 2002
• H. Krim and M. Viberg, “Two decades of array signal processing research”, IEEE Transactions on Signal Processing Magazine, July 1996
• S. Haykin, Ed., “Array Signal Processing”, Eaglewood Cliffs, NJ: Prentice-Hall, 1985
• S. U. Pillai, “Array Signal Processing”, New York: Springer-Verlag, 1989
• P. Stoica and R. Moses, “Introduction to Spectral Analysis", Prentice-Hall, Englewood Cliffs, USA, 1997. available for download.
• J. Li and P. Stoica, “Robust Adaptive Beamforming", John Wiley, 2006.
• J. Cadzow, “Multiple Source Location—The Signal Subspace Approach”, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. 38, No. 7, July 1990
• G. Bienvenu and L. Kopp, “Optimality of high resolution array processing using the eigensystem approach”, IEEE Transactions on Acoustics, Speech and Signal Process, Vol. ASSP-31, pp. 1234–1248, October 1983
• I. Ziskind and M. Wax, “Maximum likelihood localization of multiple sources by alternating projection”, IEEE Transactions on Acoustics, Speech and Signal Process, Vol. ASSP-36, pp. 1553–1560, October 1988
• B. Ottersten, M. Verberg, P. Stoica, and A. Nehorai, “Exact and large sample maximum likelihood techniques for parameter estimation and detection in array processing”, Radar Array Processing, Springer-Verlag, Berlin, pp. 99–151, 1993
• M. Viberg, B. Ottersten, and T. Kailath, “Detection and estimation in sensor arrays using weighted subspace fitting”, IEEE Transactions on Signal Processing, vol. SP-39, pp 2346–2449, November 1991
• M. Feder and E. Weinstein, “Parameter estimation of superimposed signals using the EM algorithm”, IEEE Transactions on Acoustic, Speech and Signal Proceeding, vol ASSP-36, pp. 447–489, April 1988
• Y. Bresler and Macovski, “Exact maximum likelihood parameter estimation of superimposed exponential signals in noise”, IEEE Transactions on Acoustic, Speech and Signal Proceeding, vol ASSP-34, pp. 1081–1089, October 1986
• R. O. Schmidt, “New mathematical tools in direction finding and spectral analysis”, Proceedings of SPIE 27th Annual Symposium, San Diego, California, August 1983
|
# Legendre transformations
The Hamiltonian form is defined in terms of positions and conjugate momenta, whereas the Lagrangian form uses positions and velocities.
We need to be able to transform velocities into momenta, and to do this, we use the Legendre transformation.
### Simple case - two variables
Figure 6.1 - A monotonic function
We first suppose that we have two variables, $v, p$, such that $$p = p(v)$$ is a monotonic function of $v$. We also suppose that $$v = 0 \Leftrightarrow p = 0$$
Since $p$ is monotonic on $v$, the inverse also exists, $$v = v(p)$$
Nb. We're using $v, p$, along with $L, H$ below, for obvious reasons, but this transformation has many applications in both maths and physics.
We now define two functions, \begin{align*}L = L(v) & \text{ where } \frac{\mathrm{d}}{\mathrm{d} v} L(v) = p\\H = H(p) & \text{ where } \frac{\mathrm{d}}{\mathrm{d} p} H(p) = v\end{align*}
Even though we defined $L$ in terms of $v$, and $H$ in terms of $p$, both $L, H$ can be considered as functions of either variable, \begin{align*}L &= L(v) = L\left(v(p)\right) = L(p)\\H &= H(p) = H\left(p(v)\right) = H(v)\end{align*}
Figure 6.2 - H, L as integrals
Now, we can integrate $p$ over $[0, v]$, $$\int_{0}^{v} p \mathrm{d}v = \int_{0}^{v} \frac{\mathrm{d} L}{\mathrm{d} v} \mathrm{d}v = L(v)$$
We can also integrate $v$ over $[0, p]$, $$\int_{0}^{p} v \mathrm{d}p = \int_{0}^{p} \frac{\mathrm{d} H}{\mathrm{d} p} \mathrm{d}p = H(p)$$
From the diagram, however, the sum of the two integrals is just the product of the two variables, $$pv = \int_{0}^{v} p \mathrm{d}v + \int_{0}^{p} v \mathrm{d}p = L + H$$
So, we find a relationship between $H, L$, $$H = pv - L$$
To confirm this relationship holds, we note that \begin{align*}\delta H &= p \delta v + v \delta p - \frac{\partial L}{\partial v} \delta v\\ &= v \delta p + \left( p - \frac{\partial L}{\partial v} \right) \delta v\\ &= v \delta p\\ &\Rightarrow \frac{\mathrm{d} H}{\mathrm{d} p} = v\end{align*} as expected.
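As a quick sanity check (the standard first example, not part of the original notes), take the free-particle Lagrangian $L(v) = \frac{1}{2}mv^2$. Then $$p = \frac{\mathrm{d} L}{\mathrm{d} v} = mv, \qquad H = pv - L = \frac{p^2}{m} - \frac{p^2}{2m} = \frac{p^2}{2m}, \qquad \frac{\mathrm{d} H}{\mathrm{d} p} = \frac{p}{m} = v$$ Note that $p(v) = mv$ is monotonic with $p(0) = 0$, so the hypotheses above are satisfied.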
### Many variables
Suppose now we can consider many variables. In short, we define (in Lagrangian terms), $$\frac{\partial L}{\partial v_i} = p_i$$ and then $H$ becomes $$H = \sum_{i} \left\{p_i v_i \right\} - L$$
Then, \begin{align*}\delta H &= \sum_{i} \left\{ p_i \delta v_i + v_i \delta p_i \right\} - \sum_{i}\frac{\partial L}{\partial v_i} \delta v_i\\&= \sum_{i} \left\{ v_i \delta p_i + \left(p_i - \frac{\partial L}{\partial v_i} \right) \delta v_i \right\}\\&= \sum_{i} v_i \delta p_i\end{align*} and so we get $$\frac{\partial H}{\partial p_i} = v_i$$
|
# High Obesity levels found among fat-tailed distributions
April 11, 2013
By
(This article was first published on Probability and statistics blog » r, and kindly contributed to R-bloggers)
In my never ending quest to find the perfect measure of tail fatness, I ran across this recent paper by Cooke, Nieboer, and Misiewicz. They created a measure called the “Obesity index.” Here’s how it works:
• Step 1: Sample four times from a distribution. The sample points should be independent and identically distributed (did your mind just say “IID”?)
• Step 2: Sort the points from lowest to highest (that’s right, order statistics)
• Step 3: Test whether the sum of the smallest and greatest numbers is larger than the sum of the two middle numbers.
The Obesity index is the probability that the sum of these end points is larger than the sum of the middle numbers. In mathy symbols:
$Ob(X) = P(X_1 + X_4 > X_2 + X_3 \mid X_1 \leq X_2 \leq X_3 \leq X_4), \quad X_i \text{ IID}$
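A minimal R sketch of this estimator (the function name ob_hat and the 4096-trial default are my own choices; pass any sampler such as runif or rcauchy — absolute values are taken, matching the handling of symmetric distributions described below):

ob_hat = function(rdist, trials = 4096) {
    # Each trial: draw 4 IID points, sort them, and test x1 + x4 > x2 + x3
    hits = replicate(trials, {
        x = sort(abs(rdist(4)))
        (x[1] + x[4]) > (x[2] + x[3])
    })
    mean(hits)
}

# Example usage:
# ob_hat(runif)   # should settle near 0.5, the true value for the Uniform
# ob_hat(rcauchy) # noticeably higher, reflecting fat tails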
The graph at the top of this post shows how the Obesity index converges for different distributions. As always, I’ve included my R code at the end of this article, so you can run this simulation for yourself (though, as usual, I forgot to set a random seed so that you can run it exactly like I did).
The dots in the graph represent the mean results from 8, 16, 32, and so on, up to 4096 trials from each of the distributions I tested. Note that each trial involves taking 4 sample points. Confused? Think of it this way: each sample of 4 points gives us one Bernoulli trial from a single distribution, which returns a 0 or 1. Find the average result after doing 4096 of these trials, and you get one of the colored dots at the far right of the graph. For example, the red dots are averages from a Uniform distribution. The more trials you do, the closer results from the Uniform will cluster around 0.5, which is the “true” Obesity value for this distribution. The Uniform distribution is, not coincidentally, symmetric. For symmetric distributions like the Normal, we only consider positive values.
The graph gives a feel for how many trials would be needed to distinguish between different distributions based on their Obesity index. I’ve done it this way as part of my Grand Master Plan to map every possible distribution based on how it performs in a variety of tail indices. Apparently the Obesity index can be used to estimate quantiles; I haven’t done this yet.
My initial impressions of this measure (and these are very initial!) are mixed. With a large enough number of trials, it does a good job of ordering distributions in a way that seems intuitively correct. On the other hand, I’d like to see a greater distance between the Uniform and Beta(0.01, 0.01) distribution, as the latter is an extreme case of small tails.
Note that Obesity is invariant to scaling:
$Ob(X) = Ob(kX)$ for $k > 0$
but not to translations:
$Ob(X) \neq Ob(X+c)$
This could be a bug or a feature, depending on what you want to use the index for.
Extra special karma points to the first person who comes up with a distribution whose Obesity index is between the Uniform and Normal, and that isn’t a variant of one I already tested.
Here’s the code:
# Code by Matt Asher for StatisticsBlog.com
# Feel free to redistribute, but please keep this notice

# Create random variables from the function named in the string
generateFromList = function(n, dist, ...) {
    match.fun(paste('r', dist, sep=''))(n, ...)
}

# Powers of 2 for testAt
testAt = 3:12
testAtSeq = 2^testAt
testsPerLevel = 30

distros = c()
distros[1] = 'generateFromList(4,"norm")'
distros[2] = 'generateFromList(4,"unif")'
distros[3] = 'generateFromList(4,"cauchy")'
distros[4] = 'generateFromList(4,"exp")'
distros[5] = 'generateFromList(4,"chisq",1)'
distros[6] = 'generateFromList(4,"beta",.01,.01)'
distros[7] = 'generateFromList(4,"lnorm")'
distros[8] = 'generateFromList(4,"weibull",1,1)'

# Gotta be a better way to do this.
dWords = c("Normal", "Uniform", "Cauchy", "Exponential",
           "Chisquare", "Beta", "Lognormal", "Weibull")

par(mar=c(4,5,1.5,.5))
plot(0, 0, col="white", xlim=c(min(testAt),max(testAt)), ylim=c(-.5,1),
     xlab="Sample size, expressed in powers of 2",
     ylab="Obesity index measure",
     main="Test of tail fatness using Obesity index")
abline(h=0)

colorList = list()
colorList[[1]]=rgb(0,0,1,.2)
colorList[[2]]=rgb(1,0,0,.2)
colorList[[3]]=rgb(0,1,0,.2)
colorList[[4]]=rgb(1,1,0,.2)
colorList[[5]]=rgb(1,0,1,.2)
colorList[[6]]=rgb(0,1,1,.2)
colorList[[7]]=rgb(0,0,0,.2)
colorList[[8]]=rgb(.5,.5,0,.2)

# Create the legend
for(d in 1:length(distros)) {
    x = abs(rnorm(20,min(testAt),.1))
    y = rep(-d/16,20)
    points(x, y, col=colorList[[d]], pch=20)
    text(min(testAt)+.25, y[1], dWords[d], cex=.7, pos=4)
}

dCounter = 1
for(d in 1:length(distros)) {
    for(l in testAtSeq) {
        for(i in 1:testsPerLevel) {
            count = 0
            for(m in 1:l) {
                # Get the estimate at that level, plot it testsPerLevel times
                x = sort(abs(eval(parse( text=distros[dCounter] ))))
                if ( (x[4]+x[1])>(x[2]+x[3]) ) {
                    count = count + 1
                }
            }
            # Tiny bit of scatter added
            ratio = count/l
            points(log(l, base=2), ( ratio+rnorm(1,0,ratio/100)),
                   col=colorList[[dCounter]], pch=20)
        }
    }
    dCounter = dCounter + 1
}
|
# A Bernoulli differential equation is one of the form $\frac{dy}{dx}+P(x)y=Q(x)y^n$. Observe that, if $n=0$ or $1$, the Bernoulli equation is linear. For other ...
###### Question:
A Bernoulli differential equation is one of the form $$\frac{dy}{dx}+P(x)y=Q(x)y^n.$$ Observe that, if $n=0$ or $1$, the Bernoulli equation is linear. For other values of $n$, the substitution $u=y^{1-n}$ transforms the Bernoulli equation into the linear equation $$\frac{du}{dx}+(1-n)P(x)u=(1-n)Q(x).$$ Use an appropriate substitution to solve the equation $$y'-\frac{9}{x}y=\frac{y^5}{x^9},$$ and find the solution that satisfies $y(1)=1$. $y(x)=$
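A sketch of the solution, assuming the garbled coefficients read $\frac{9}{x}$ and $\frac{y^5}{x^9}$ as reconstructed above: with $n=5$, let $u=y^{-4}$, so $u'=-4y^{-5}y'$. Dividing the equation by $y^5$ and multiplying by $-4$ gives $$u'+\frac{36}{x}u=-\frac{4}{x^9}.$$ The integrating factor is $x^{36}$, so $(x^{36}u)'=-4x^{27}$, hence $$x^{36}u=-\frac{x^{28}}{7}+C.$$ The condition $y(1)=1$ means $u(1)=1$, forcing $C=\frac{8}{7}$, so $u=\frac{8-x^{28}}{7x^{36}}$ and therefore $$y(x)=u^{-1/4}=\left(\frac{7x^{36}}{8-x^{28}}\right)^{1/4}.$$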
|
Usually, $X$ is a test statistic, rather than any of the actual observations. A test statistic is the output of a scalar function of all the observations. This statistic provides a single number, such as the average or the correlation coefficient, that summarizes the characteristics of the data in a way relevant to a particular inquiry. As such, the test statistic follows a distribution determined by the function used to define that test statistic and the distribution of the input observational data.
|
# Missing sketch in K&K Mechanics book
Hello guys,
I'm currently reading through K&K's Intro to Mechanics book, and I'm on page 26, where I encountered a somewhat odd derivation.
The authors say:
...
Using the angle ##\delta \theta## defined in the sketch,
##|\delta A| = 2A \sin{\frac{\delta \theta}{2}}##
I'm rather lost on this part. I don't know which sketch corresponds to this equation.
jtbell
Mentor
The diagram is indeed missing.
The three vectors form an isosceles triangle. The dashed line bisects it into two right triangles. To get the equation, consider one of the right triangles.
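Explicitly: the two equal sides have length ##A## (the magnitude isn't changing), the apex angle between them is ##\delta \theta##, and the base is ##\delta A##. Each right triangle then has hypotenuse ##A##, angle ##\frac{\delta \theta}{2}##, and opposite side ##\frac{|\delta A|}{2}##, so ##\sin \frac{\delta \theta}{2} = \frac{|\delta A| / 2}{A}##, which rearranges to the quoted equation.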
If you take the diagram at the top left of page 26, and fill in the hypotenuse which is ##\vec A (t + \Delta t)##, you get a similar diagram, but with the right angle in a different location. The discrepancy disappears in the limit as ##\Delta \theta \rightarrow 0## and ##\Delta \vec A \rightarrow 0##.
#### Attachments

• [jtbell's sketch of the bisected isosceles triangle, 4.1 KB]
OK, that makes it clear. But it raises another question: why is the vector ##\Delta A## not perpendicular to ##A## in the sketch, when the book says it must be, since ##A## is not changing in magnitude?
jtbell
Mentor
In the limit as Δt goes to zero (which is what you need to do in order to have the instantaneous derivative), ΔA becomes perpendicular to both A(t) and A(t+Δt). And of course A(t) and A(t+Δt) become equal to each other.
Ah yes I didn't think of that carefully. Thanks for the remark and the sketch. :)
|
# What are electrons, protons and neutrons?
Protons and neutrons are massive particles that are conceived to constitute atomic nuclei. Protons have a unit positive charge, and neutrons are neutral. A strong nuclear force operates at nuclear distances; it binds the nuclear particles together and overcomes electrostatic repulsion.
The number of protons in a nucleus is the atomic number, $Z$, and identifies the element: $Z = 1, \text{hydrogen}$; $Z = 2, \text{helium}$; $Z = 3, \text{lithium}$; ...; $Z = 23, \text{vanadium}$; ...
|
# Evaluate: 3x-9 <-18
## Expression: $3x-9 < -18$
Move the constant to the right-hand side and change its sign
$3x < -18+9$
Calculate the sum
$3x < -9$
Divide both sides of the inequality by $3$
\begin{align*}&x < -3 \\&\begin{array} { l }x \in \langle-\infty, -3\rangle\end{array}\end{align*}
|
Diff from to
# File t2/MathVentures/bugs-in-square-mathml.xhtml.wml
<latemp_subject "Bugs in a Square (MathML Enabled Version)" />
+#include "mathjax.wml"
+
<p>
I first encountered this problem in the science journal of a laboratory
building where I used to study physics. It is rather well known, and I found
|
# Estimating effects of a structural break on multiple/panel regressions coefficients and R implementation
I would appreciate some methodology and R implementation help for a thing I'm working on.
I have daily observations of $Y$ for several countries over several years, I will use quite a few independent variables $X$.
My hypothesis is that there was a break at a specific time that has affected the determination process of $Y$ and altered the coefficients of a regression. I’m interested in knowing what the effects of $X$ on $Y$ were before and after the break and whether there was a significant change in these relationships after the break.
My idea is to use multiple regression for each country of the form:
$$Y = a + B_1 X_1 + B_2 X_2 + B_3 X_3 + D + B_4 D X_1 + B_5 D X_2 + B_6 D X_3 + e$$
where $D$ is a dummy variable equal to 1 after the suspected break.
My test for whether the coefficients are different after the break is then simply to test the significance of the coefficients on the interaction terms: $B_4, B_5, B_6$. I know how to run this regression for individual countries.
Q1. Will this tell me what I’m looking for or should I use something else like a Wald or Chow test?
Q2. Is this called a natural experiment?
Q3. If I want to run this as a time fixed effects panel regression, how is this done in R?
The suspected break is the introduction date of a new financial regulation. It’s possible that there was a gradual change over a few months in anticipation of the regulation.
Q4. Could I cut out a few months of the data to remove the effects of a gradual change?
Q5. Should I use some method to look for a break before the suspected break date?
Bonus question: In R, the output of these two regressions is exactly the same; I'm using the lm function:
$$\begin{array}{rcl} Y & = & a + BX + D + BXD \\Y & = & a + BX + BXD \end{array}$$
Both models return a coefficient specific to the dummy variable $D$, even though the second model only uses $D$ in the interaction term. Why is this, and can I prevent it?
## 1 Answer
There is a lot going on in the question. You'll get better responses if you narrow the focus of your question. Further, you'll better understand the question yourself.
Here are some suggestions that will lead you in the right direction:
# Theory:
### Chow Test
You're going to want to do a simple Chow test first around the suspected break date. This is a good start; however, your example suggests that you'd rather be agnostic about the exact break date, which is a good idea.
### Endogenous Testing
The problem with your suggested approach is that the test statistic is not going to follow the textbook distribution. To begin to understand why, imagine you have 100 different potential breaks and you test each of the resulting 100 dummy variables at a 5% significance level. By pure luck, you're going to conclude that the dummy variable is different from zero for about 5 breaks on average, even when there is no break at all.
So what do you do? Fortunately, some very smart people figured out the correct distribution of the test statistic. The testing procedure is as follows: roll a Chow test over your data, then compare each of the resulting test statistics to the critical values obtained by the aforementioned very smart people. The break date which corresponds to the maximum test statistic, provided it is statistically significant, is the most likely break date.
### Causal Inference
Imagine that you find a break date. Good! One way to estimate the effect of the break is to run a model which includes a dummy variable which is zero before the date and 1 after. However, the question now is: is this event causal?
Let's look at a simple example: assume a country deploys some stimulus package on a given date and you want to measure the impact of the stimulus on GDP. Does the coefficient corresponding to the occurrence of the event measure a causal relationship? The answer is, maybe. But probably not. Presumably, the stimulus was dispensed because the country was doing poorly. Therefore, poor GDP could have caused the stimulus, not the other way around. This is known as an endogeneity problem. The stimulus package is not truly a natural experiment.
# Implementation in R
To implement this in R, you're going to want to use the strucchange package. The documentation is pretty good, here is the vignette
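For instance, here is a minimal sketch, assuming a hypothetical data frame df with columns Y, X1, X2, X3 for one country, with rows ordered by time:

library(strucchange)

# Rolling Chow/F statistics over all candidate break dates
fs <- Fstats(Y ~ X1 + X2 + X3, data = df, from = 0.15)

# Quandt/Andrews sup-F test against the correct critical values
sctest(fs, type = "supF")

# Inspect the F-statistic path and date the most likely break(s)
plot(fs)
bp <- breakpoints(Y ~ X1 + X2 + X3, data = df)
summary(bp)

You would run this country by country; for the panel version of your model you would need to combine this with a fixed-effects estimator, which, as far as I know, strucchange does not handle directly.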
• Great answer, thank you. I read up some more and I have a few quick follow-ups: Is rolling a chow test over the data also called Quandt likelihood ratio/sup-Wald? I guess the function Fstats from strucchange is what I need. In the example in the link you provided they feed an ECM model into this function, do I need that or can I just use a normal lm(Y ~ X)? Could I feed a panel regression into the Fstats function to consider all my countries simultaneously? – Mr. T-stat Mar 25 '17 at 15:59
• I like the point you made about endogeneity problems. Are such issues addressed by theoretical reasoning? I have an argument for why the regulation (break-causer) would affect Y but the other way around doesn’t make that much sense, but I will think about it and include it in my paper. – Mr. T-stat Mar 25 '17 at 15:59
• Yes, you're going to want to use Fstats. You don't need to feed it an ECM model; any model will work. However, your data may necessitate that you use an ECM model. For example, if your data follows a stochastic trend (i.e. a unit root), then this will impact your testing. You'll want to read about stochastic trends a bit to see if you think you have such a problem. It will depend on your situation and data. – Jacob H Mar 25 '17 at 21:12
• Endogeneity is a very thorny problem and there are not great ways to test if it is present. Generally, the best way to convince people you don't have such a problem is by arguing it. Therefore including an argument within the paper would be a great idea. However, remember, there are other causes of endogeneity. For example, omitted variable bias. – Jacob H Mar 25 '17 at 21:15
|
Linear operator and the wave equation.
I'm working on the following problem in preparation for an exam.
1. Let $L$ denote the one-dimensional wave operator defined for functions: $u(x,t)$ (with $x∈(0,L)$ and $t>0$) by: $$L(u)=\frac{∂^2 u}{∂t^2}−c^2 \frac{∂^2 u}{∂x^2}$$ where the constant $c>0$ is the speed of sound in the medium.
a) Prove that $L$ is a linear operator. b) If $u_1$, $u_2$ and $f$ are three functions which satisfy $L(u_1 )=f$ and $L(u_2)=2f$, find a solution of the homogeneous wave equation $L(u)=0$.
Here's my solution so far.
Part a was easy enough: for functions $u, v$ and constants $a, b$, $$L(au+bv)=\frac{∂^2 (au+bv)}{∂t^2}−c^2 \frac{∂^2 (au+bv)}{∂x^2} \\ =a\frac{∂^2 u}{∂t^2}−ac^2 \frac{∂^2 u}{∂x^2} +b\frac{∂^2 v}{∂t^2}−bc^2 \frac{∂^2 v}{∂x^2}= a\left(\frac{∂^2 u}{∂t^2}−c^2 \frac{∂^2 u}{∂x^2}\right)+b\left(\frac{∂^2 v}{∂t^2}−c^2 \frac{∂^2 v}{∂x^2}\right)=aL(u)+bL(v)$$
So $L$ is linear.
So my question is regarding part b. My initial thought on how to start the problem was to relate $L(u_1)$ and $L(u_2)$ in the following way. $$L(u_1)=\frac{∂^2 u_1}{∂t^2}−c^2 \frac{∂^2 u_1}{∂x^2}=f \\ L(u_2)=\frac{∂^2 u_2}{∂t^2}−c^2 \frac{∂^2 u_2}{∂x^2} =2f \\ \Rightarrow L(u_2)=2L(u_1) \Rightarrow \frac{∂^2 u_2}{∂t^2}−c^2 \frac{∂^2 u_2}{∂x^2}=2\left [ \frac{∂^2 u_1}{∂t^2}−c^2 \frac{∂^2 u_1}{∂x^2}\right ]$$ Some algebra later, I get that $$\frac{∂^2 (u_2 - 2u_1)}{∂t^2}−c^2 \frac{∂^2 (u_2+2u_1)}{∂x^2} =0$$
What I was expecting to get out of this was to say something like "Let $u=u_2-2u_1$" with the hopes of getting an equation with a single $u$ only. But, it didn't quite work out that way.
I'm not really sure where to go with this, any help is appreciated.
• It seems all right except for the last equation...it should be: $u_2-2u_1$ also in the second term – MattG88 Mar 29 '17 at 20:59
• That's what I was thinking, but the algebra has it as a +. I'll redo that part after I get off work, maybe I dropped a negative. – Kosta Mar 29 '17 at 21:09
We can write: $$\frac{∂^2 u_2}{∂t^2}−c^2 \frac{∂^2 u_2}{∂x^2}=2\left [ \frac{∂^2 u_1}{∂t^2}−c^2 \frac{∂^2 u_1}{∂x^2}\right ]$$ $$\frac{∂^2 u_2}{∂t^2}−c^2 \frac{∂^2 u_2}{∂x^2}-2\left [ \frac{∂^2 u_1}{∂t^2}−c^2 \frac{∂^2 u_1}{∂x^2}\right ] =0$$ $$\frac{∂^2 u_2}{∂t^2}−2\frac{∂^2 u_1}{∂t^2}−c^2\left[\frac{∂^2 u_2}{∂x^2}-2\frac{∂^2 u_1}{∂x^2}\right]=0$$ $$\frac{∂^2 (u_2-2u_1)}{∂t^2}−c^2\frac{∂^2 (u_2-2u_1)}{∂x^2}=0$$ Hence $u = u_2 - 2u_1$ satisfies $L(u)=0$.
|
# inlinedef – Inline expansions within definitions
The package provides a macro \Inline that precedes a \def or \gdef. Within the definition text of an inlined definition, keywords such as \Expand may be used to selectively inline certain expansions at definition-time. This eases the process of redefining macros in terms of the original definition, as well as definitions in which the token that must be expanded is deep within, where \expandafter would be difficult and \edef is not suitable. Another application is as an easier version of \aftergroup, by defining a macro in terms of expanded local variables, then ending the group with \expandafter\endgroup\macro.
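A tiny illustrative sketch (hypothetical usage; the exact keyword syntax — e.g. whether \Expand prefixes a single token — should be checked against the package documentation):

\usepackage{inlinedef}
\def\old{first}
\Inline\def\new{value: \Expand\old}  % \old is expanded *now*, at definition time
\def\old{second}
% \new still expands to "value: first"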
Sources: /macros/latex/contrib/inlinedef
Documentation: README; package documentation
Version: 1.0
Licenses: The LaTeX Project Public License
Copyright: 2008 Stephen D. Hicks
Maintainer: Stephen Hicks
Contained in: TeX Live as inlinedef; MiKTeX as inlinedef
Topics: Defining Macro
|
# Finite Math Examples
Replace with .
Interchange the variables.
Solve for .
Since is on the right side of the equation, switch the sides so it is on the left side of the equation.
Since does not contain the variable to solve for, move it to the right side of the equation by subtracting from both sides.
Divide each term by and simplify.
Divide each term in by .
Simplify the left side of the equation by cancelling the common factors.
Reduce the expression by cancelling the common factors.
Factor out of .
Cancel the common factor.
Rewrite the expression.
Move the negative one from the denominator of .
Simplify the expression.
Multiply by to get .
Rewrite as .
Simplify each term.
Reduce the expression by cancelling the common factors.
Factor out of .
Cancel the common factor.
Rewrite the expression.
Move the negative in front of the fraction.
Simplify .
Multiply by to get .
Multiply by to get .
Move the negative in front of the fraction.
Solve for and replace with .
Replace the with to show the final answer.
Set up the composite result function.
Evaluate by substituting in the value of into .
Simplify each term.
Apply the distributive property.
Simplify .
Write as a fraction with denominator .
Multiply and to get .
Multiply by to get .
Simplify .
Multiply by to get .
Write as a fraction with denominator .
Multiply and to get .
Simplify each term.
Divide by to get .
Reduce the expression by cancelling the common factors.
Cancel the common factor.
Divide by to get .
Simplify the expression.
Remove unnecessary parentheses.
Subtract from to get .
Since , is the inverse of .
## Calculus (3rd Edition)
$$global~max:2,\ \ \ \ global~min: 0$$
Given $$f(x, y)=x+y, \quad 0 \leq x \leq 1, \quad 0 \leq y \leq 1$$ The maximum of $x$ is $1$ and of $y$ is $1$. The maximum of $x+y$ is $1+1$ and the minimum is $0+0$. Hence, the global maximum of $f$ on the given set is $$f (1, 1) = 1 + 1 = 2$$ and the global minimum is $$f (0, 0) = 0 + 0 = 0$$
|
Math 312 Bard College
# Homework 3
Due Date: Friday, September 22
Instructions: Feel free to work together with other students in the class, though you must turn in your own copy of the solutions, and you must acknowledge anyone that you worked with.
1. The following figure shows the image of a square grid under a nonlinear function $$\mathbf{f}\colon\mathbb{R}^2\to\mathbb{R}^2$$. Each square of the original grid had side length $$0.2$$.
1. The point $$\mathbf{f}(1.6,1.4)= (0.8,0.7)$$ is shown in green. Estimate the matrix for $$[D\mathbf{f}(1.6,1.4)]$$ as accurately as you can.
2. Use linear approximations to estimate $$\mathbf{f}(1.62,1.4)$$, $$\mathbf{f}(1.6,1.42)$$, and $$\mathbf{f}(1.63,1.39)$$.
3. The point $$\mathbf{f}(1.8,0.2)= (0.5,0.2)$$ is shown in yellow. Estimate the matrix for $$[D\mathbf{f}(1.8,0.2)]$$ as accurately as you can.
4. Use linear approximations to estimate $$\mathbf{f}(1.82,0.2)$$, $$\mathbf{f}(1.8,0.22)$$, and $$\mathbf{f}(1.83,0.21)$$.
5. Use a linear approximation to estimate the point $$(x_0,y_0)$$ for which $$\mathbf{f}(x_0,y_0) = (0.5,0.22)$$.
2. A function $$\mathbf{f}\colon \mathbb{R}^2\to\mathbb{R}^2$$ satisfies $$\mathbf{f}(2,4) = (5,7)$$, $$\mathbf{f}(2.2,4.1) = (5.5,7.4)$$, and $$\mathbf{f}(2.1,4.2) = (5.1,7.5)$$. Use this information to estimate the matrix for $$[D\mathbf{f}(2,4)]$$.
3. Recall that a $$2\times 2$$ matrix $\begin{bmatrix}x_1 & x_2 \\ x_3 & x_4\end{bmatrix}$ can be viewed as a point $$(x_1,x_2,x_3,x_4)$$ in $$\mathbb{R}^4$$. Let $$\mathbf{f}\colon \mathbb{R}^4\to\mathbb{R}^4$$ be the function that squares a $$2\times 2$$ matrix, i.e. $$\mathbf{f}(A) = A^2$$ for any $$2\times 2$$ matrix $$A$$.
1. Write an explicit formula for $$\mathbf{f}(x_1,x_2,x_3,x_4)$$, and compute the $$4\times 4$$ Jacobian matrix $$[D\mathbf{f}(x_1,x_2,x_3,x_4)]$$.
2. Compute $$[D\mathbf{f}(1,2,3,4)]$$, and use a linear approximation to estimate $$\mathbf{f}(1.01,2.02,3.01,4.03)$$. How does this compare with the actual value of $$\begin{bmatrix}1.01 & 2.02 \\ 3.01 & 4.03\end{bmatrix}^2$$?
3. Use a linear approximation to find a $$2\times 2$$ matrix $$A$$ for which $A^2 \approx \begin{bmatrix}\phantom{1}7.2 & 10.4 \\ 15.2 & 22.6\end{bmatrix}.$ (Feel free to use a calculator or computer for the row reduction.) How close is the square of your answer to the desired value?
|
# Automatic Target Recognition (ATR) in SAR Images
This example shows how to train a Region-based Convolutional Neural Networks (R-CNN) for target recognition in large scene Synthetic Aperture Radar (SAR) images using the Deep Learning Toolbox™ and Parallel Computing Toolbox™.
The Deep Learning Toolbox provides a framework for designing and implementing deep neural networks with algorithms, pretrained models, and apps.
The Parallel Computing Toolbox lets you solve computationally and data-intensive problems using multicore processors, GPUs, and computer clusters. It enables you to use GPUs directly from MATLAB and accelerate the computation capabilities needed in deep learning algorithms.
Neural-network-based algorithms have shown remarkable achievements in diverse areas ranging from natural scene detection to medical imaging, with large improvements over standard detection algorithms. Inspired by these advancements, researchers have worked to apply deep-learning-based solutions to the field of SAR imaging. In this example, such a solution is applied to the problem of target detection and recognition. The R-CNN network employed here not only integrates detection and recognition but also provides an effective and efficient solution that scales to large scene SAR images.
This example demonstrates how to:
• Load and analyze image data.
• Define the network architecture.
• Specify training options.
• Train the network.
• Evaluation of network.
To illustrate this workflow, the Moving and Stationary Target Acquisition and Recognition (MSTAR) clutter dataset published by the Air Force Research Laboratory is used. The dataset is available for download here. Alternatively, a subset of the data used to showcase the workflow is provided. The goal is to develop a model that can detect and recognize the targets.
This example uses a subset of the MSTAR clutter dataset that contains 300 training and 50 testing clutter images with 5 different targets. The data was collected using an X-band sensor in spotlight mode, with a 1-foot resolution. The data contains rural and urban types of clutter. The types of targets used are BTR-60 (armoured car), BRDM-2 (fighting vehicle), ZSU-23/4 (tank), T62 (tank) and SLICY (a static target with multiple simple geometric shapes). The images were captured at a depression angle of 15 degrees. The clutter data is stored in PNG image format, and the corresponding ground truth data is stored in the `groundTruthMSTARClutterDataset.mat` file. The file contains 2-D bounding box information for the five classes SLICY, BTR-60, BRDM-2, ZSU-23/4 and T62, for both the training and testing data. The size of the dataset is 1.6 GB.
Download the dataset from the given URL using the `helperDownloadMSTARClutterData` helper function, defined at the end of this example.
```outputFolder = pwd; dataURL = ('https://ssd.mathworks.com/supportfiles/radar/data/MSTAR_ClutterDataset.tar.gz'); helperDownloadMSTARClutterData(outputFolder,dataURL);```
Depending on your Internet connection, the download process can take some time. The code suspends MATLAB® execution until the download process is complete. Alternatively, download the dataset to your local disk using a web browser and extract the file. When using the alternative approach, change the outputFolder variable in the example to the location of the downloaded file.
Download the pretrained network from the given URL using the `helperDownloadPretrainedSARDetectorNet` helper function, defined at the end of this example. The pretrained model allows you to run the entire example without having to wait for training to complete. To train the network, set the `doTrain` variable to true.
```pretrainedNetURL = ('https://ssd.mathworks.com/supportfiles/radar/data/TrainedSARDetectorNet.tar.gz'); doTrain = false; if ~doTrain helperDownloadPretrainedSARDetectorNet(outputFolder,pretrainedNetURL); end```
Load the ground truth data (training set and test set). These images are generated in such a way that it places target chips at random location on a background clutter image. The clutter image is constructed from the downloaded raw data. The generated target will be used as ground truth targets to train and test the network.
`load('groundTruthMSTARClutterDataset.mat', "trainingData", "testData");`
The ground truth data is stored in a six-column table, where the first column contains the image file paths and the second to the sixth column contains the different target bounding boxes.
```% Display the first few rows of the data set trainingData(1:4,:)```
```ans=4×6 table imageFilename SLICY BTR_60 BRDM_2 ZSU_23_4 T62 ______________________________ __________________ __________________ __________________ ___________________ ___________________ "./TrainingImages/Img0001.png" {[ 285 468 28 28]} {[ 135 331 65 65]} {[ 597 739 65 65]} {[ 810 1107 80 80]} {[1228 1089 87 87]} "./TrainingImages/Img0002.png" {[595 1585 28 28]} {[ 880 162 65 65]} {[308 1683 65 65]} {[1275 1098 80 80]} {[1274 1099 87 87]} "./TrainingImages/Img0003.png" {[200 1140 28 28]} {[961 1055 65 65]} {[306 1256 65 65]} {[ 661 1412 80 80]} {[ 699 886 87 87]} "./TrainingImages/Img0004.png" {[ 623 186 28 28]} {[ 536 946 65 65]} {[ 131 245 65 65]} {[1030 1266 80 80]} {[ 151 924 87 87]} ```
Display one of the training images and box labels to visualize the data.
```img = imread(trainingData.imageFilename(1)); bbox = reshape(cell2mat(trainingData{1,2:end}),[4,5])'; labels = {'SLICY', 'BTR_60', 'BRDM_2', 'ZSU_23_4', 'T62'}; annotatedImage = insertObjectAnnotation(img,'rectangle',bbox,labels,... 'TextBoxOpacity',0.9,'FontSize',50); figure imshow(annotatedImage); title('Sample Training image with bounding boxes and labels')```
### Define Network Architecture
Create an R-CNN object detector for five targets: 'SLICY', 'BTR_60', 'BRDM_2', 'ZSU_23_4', 'T62'.
`objectClasses = {'SLICY', 'BTR_60', 'BRDM_2', 'ZSU_23_4', 'T62'};`
The network must be able to classify the 5 targets specified above plus a background class in order to be trained using `trainRCNNObjectDetector`, available in Deep Learning Toolbox™. `1` is added in the code below to include the background class.
`numClassesPlusBackground = numel(objectClasses) + 1;`
The final fully connected layer of the network defines the number of classes that it can classify. Set the final fully connected layer to have an output size equal to `numClassesPlusBackground`.
```% Define input size inputSize = [128,128,1]; % Define network layers = createNetwork(inputSize,numClassesPlusBackground);```
Now, these network layers can be used to train an R-CNN based 5-class object detector.
### Train Faster R-CNN
Use `trainingOptions` to specify network training options. `trainingOptions` by default uses a GPU if one is available (requires Parallel Computing Toolbox™ and a CUDA® enabled GPU with compute capability 3.0 or higher). Otherwise, it uses a CPU. You can also specify the execution environment by using the `'ExecutionEnvironment'` name-value pair argument of `trainingOptions`. To automatically detect if you have a GPU available, set `ExecutionEnvironment` to '`auto`'. If you do not have a GPU, or do not want to use one for training, set `ExecutionEnvironment` to '`cpu`'. To ensure the use of a GPU for training, set `ExecutionEnvironment` to '`gpu`'.
```% Set training options options = trainingOptions('sgdm', ... 'MiniBatchSize', 128, ... 'InitialLearnRate', 1e-3, ... 'LearnRateSchedule', 'piecewise', ... 'LearnRateDropFactor', 0.1, ... 'LearnRateDropPeriod', 100, ... 'MaxEpochs', 10, ... 'Verbose', true, ... 'CheckpointPath',tempdir,... 'ExecutionEnvironment','auto');```
Use `trainRCNNObjectDetector` to train the R-CNN object detector if `doTrain` is true. Otherwise, load the pretrained network. If training, adjust '`NegativeOverlapRange`' and '`PositiveOverlapRange`' to ensure that training samples tightly overlap with ground truth.
```if doTrain % Train an R-CNN object detector. This will take several minutes detector = trainRCNNObjectDetector(trainingData, layers, options,'PositiveOverlapRange',[0.5 1], 'NegativeOverlapRange', [0.1 0.5]); else % Load a previously trained detector preTrainedMATFile = fullfile(outputFolder,'TrainedSARDetectorNet.mat'); load(preTrainedMATFile); end```
### Evaluate Detector on a Test Image
To get a qualitative idea of how the detector functions, pick a random image from the test set and run it through the detector. The detector is expected to return a collection of bounding boxes where it thinks the detected targets are, along with scores indicating confidence in each detection.
```% Read test image imgIdx = randi(height(testData)); testImage = imread(testData.imageFilename(imgIdx)); % Detect SAR targets in the test image [bboxes,score,label] = detect(detector,testImage,'MiniBatchSize',16);```
To understand the results achieved, overlay the detector's results on the test image. A key parameter is the detection threshold, the score above which the detector reports a target. A higher threshold will result in fewer false positives; however, it will also result in more false negatives.
```scoreThreshold = 0.8; % Display the detection results outputImage = testImage; for idx = 1:length(score) bbox = bboxes(idx, :); thisScore = score(idx); if thisScore > scoreThreshold annotation = sprintf('%s: (Confidence = %0.2f)', label(idx),... round(thisScore,2)); outputImage = insertObjectAnnotation(outputImage, 'rectangle', bbox,... annotation,'TextBoxOpacity',0.9,'FontSize',45,'LineWidth',2); end end f = figure; f.Position(3:4) = [860,740]; imshow(outputImage) title('Predicted boxes and labels on test image')```
### Evaluate Model
By looking at the images sequentially, the detector performance can be understood. To perform more rigorous analysis using the entire test set, run the test set through the detector.
```% Create a table to hold the bounding boxes, scores and labels output by the detector numImages = height(testData); results = table('Size',[numImages 3],... 'VariableTypes',{'cell','cell','cell'},... 'VariableNames',{'Boxes','Scores','Labels'}); % Run detector on each image in the test set and collect results for i = 1:numImages imgFilename = testData.imageFilename{i}; % Read the image I = imread(imgFilename); % Run the detector [bboxes, scores, labels] = detect(detector, I,'MiniBatchSize',16); % Collect the results results.Boxes{i} = bboxes; results.Scores{i} = scores; results.Labels{i} = labels; end```
The possible detections and their bounding boxes for all images in the test set can be used to calculate the detector's Average Precision (AP) for each class. The AP is the average of the detector's precision at different levels of recall, so let us define precision and recall.
• $Precision=\frac{tp}{tp+fp}$
• $Recall=\frac{tp}{tp+fn}$
where
• $tp$ - number of true positives (the detector predicts a target when it is present)
• $fp$ - number of false positives (the detector predicts a target when it is not present)
• $fn$ - number of false negatives (the detector fails to detect a target when it is present)
A detector with a precision of 1 is considered good at detecting targets that are present while a detector with a recall of 1 is good at avoiding false detections. Precision and recall have an inverse relationship.
Plot the relationship between precision and recall for each class. The average value of each curve is the AP. Curves are plotted for a threshold of 0.5.
For more details, see the documentation for `evaluateDetectionPrecision`.
```% Extract expected bounding box locations from test data expectedResults = testData(:, 2:end); threshold = 0.5; % Evaluate the object detector using average precision metric [ap, recall, precision] = evaluateDetectionPrecision(results, expectedResults,threshold); % Plot precision recall curve f = figure; ax = gca; f.Position(3:4) = [860,740]; xlabel('Recall') ylabel('Precision') grid on; hold on; legend('Location', 'southeast'); title('Precision Vs Recall curve for threshold value 0.5 for different classes'); for i = 1:length(ap) % Plot precision/recall curve plot(ax,recall{i},precision{i},'DisplayName',['Average Precision for class ' trainingData.Properties.VariableNames{i+1} ' is ' num2str(round(ap(i),3))]) end```
The AP for most of the classes is more than 0.9. The trained model appears to struggle the most in detecting 'SLICY' targets, but it still achieves an AP of 0.7 for that class.
### Summary
This example demonstrates how to train an R-CNN for target recognition in SAR images. The pretrained network attained an AP of more than 0.9 for most classes.
### Helper Function
The function `createNetwork` takes as input the image size `inputSize` and the number of classes `numClassesPlusBackground`. The function returns a convolutional neural network architecture.
```function layers = createNetwork(inputSize,numClassesPlusBackground) layers = [ imageInputLayer(inputSize) % Input Layer convolution2dLayer(3,32,'Padding','same') % Convolution Layer reluLayer % Relu Layer convolution2dLayer(3,32,'Padding','same') batchNormalizationLayer % Batch normalization Layer reluLayer maxPooling2dLayer(2,'Stride',2) % Max Pooling Layer convolution2dLayer(3,64,'Padding','same') reluLayer convolution2dLayer(3,64,'Padding','same') batchNormalizationLayer reluLayer maxPooling2dLayer(2,'Stride',2) convolution2dLayer(3,128,'Padding','same') reluLayer convolution2dLayer(3,128,'Padding','same') batchNormalizationLayer reluLayer maxPooling2dLayer(2,'Stride',2) convolution2dLayer(3,256,'Padding','same') reluLayer convolution2dLayer(3,256,'Padding','same') batchNormalizationLayer reluLayer maxPooling2dLayer(2,'Stride',2) convolution2dLayer(6,512) reluLayer dropoutLayer(0.5) % Dropout Layer fullyConnectedLayer(512) % Fully connected Layer. reluLayer fullyConnectedLayer(numClassesPlusBackground) softmaxLayer % Softmax Layer classificationLayer % Classification Layer ]; end function helperDownloadMSTARClutterData(outputFolder,DataURL) % Download the data set from the given URL to the output folder. radarDataTarFile = fullfile(outputFolder,'MSTAR_ClutterDataset.tar.gz'); if ~exist(radarDataTarFile,'file') disp('Downloading MSTAR Clutter data (1.6 GB)...'); websave(radarDataTarFile,DataURL); untar(radarDataTarFile,outputFolder); end end function helperDownloadPretrainedSARDetectorNet(outputFolder,pretrainedNetURL) % Download the pretrained network. preTrainedMATFile = fullfile(outputFolder,'TrainedSARDetectorNet.mat'); preTrainedZipFile = fullfile(outputFolder,'TrainedSARDetectorNet.tar.gz'); if ~exist(preTrainedMATFile,'file') if ~exist(preTrainedZipFile,'file') disp('Downloading pretrained detector (29.4 MB)...'); websave(preTrainedZipFile,pretrainedNetURL); end untar(preTrainedZipFile,outputFolder); end end```
#### References
[1] MSTAR Dataset. https://www.sdms.afrl.af.mil/index.php?collection=mstar
|
# Antenna Theory - Isotropic Radiation
In the previous chapter, we went through the radiation pattern. For a better analysis of an antenna's radiation, a reference point is necessary; the radiation of an isotropic antenna fills this role.
## Definition
Isotropic radiation is the radiation from a point source, radiating uniformly in all directions, with same intensity regardless of the direction of measurement.
The radiation pattern of an antenna is always assessed against isotropic radiation as the reference. If the radiation is equal in all directions, then it is known as isotropic radiation.
• The point source is an example of an isotropic radiator. However, true isotropic radiation is practically impossible, because every real antenna radiates its energy with some directivity.
• A practical omni-directional antenna instead has a doughnut-shaped pattern when viewed in 3D and a figure-of-eight pattern when viewed in 2D.
The figures given above show the radiation pattern of an isotropic or Omni-directional pattern. Figure 1 illustrates the doughnut shaped pattern in 3D and Figure 2 illustrates the figure-of-eight pattern in 2D.
### Gain
The isotropic radiator has unity gain, which means a gain factor of 1 in all directions. In terms of dB, this can be called 0 dB gain (zero loss).
According to the standard definition, “The amount of power that an isotropic antenna radiates to produce the peak power density observed in the direction of maximum antenna gain is called the Equivalent Isotropic Radiated Power.”
If the radiated energy of an antenna is made to concentrate on one side or a particular direction, where the radiation is equivalent to that antenna’s isotropic radiated power, such a radiation would be termed as EIRP i.e. Equivalent Isotropic Radiated Power.
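Neglecting feed losses, this is commonly written in decibel form as follows, where $P_t$ is the power delivered to the antenna and $G_t$ is its gain relative to an isotropic radiator:

$$EIRP(dBW) = P_t(dBW) + G_t(dBi)$$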
### Gain
Though isotropic radiation is imaginary, it is the reference against which antenna gain is quoted. Gain expressed in dBi is referenced to an isotropic radiator ('i' stands for isotropic); for example, 3 dBi means a factor of 2 over isotropic, since 3 dB corresponds to a factor of 2.
If the radiation is focused into a certain angle, then the EIRP increases along with the antenna gain. The highest gain is achieved by focusing the antenna's radiation in a particular direction.
$$ERP(dBW) = EIRP(dBW) - 2.15dBi$$
|
A distillation column handling a binary mixture of $A$ and $B$ is operating at total reflux. It has two ideal stages including the reboiler. The mole fraction of the more volatile component in the residue $(x_W)$ is $0.1$. The average relative volatility $\alpha_{AB}$ is $4$. The mole fraction of $A$ in the distillate $(x_D)$ is _________ (round off to $2$ decimal places).
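A sketch of one way to obtain the answer, using the Fenske relation at total reflux with $N = 2$ ideal stages:

$$\frac{x_D}{1-x_D}=\alpha_{AB}^{\,N}\,\frac{x_W}{1-x_W}=4^2\times\frac{0.1}{0.9}\approx 1.78 \quad\Rightarrow\quad x_D \approx 0.64$$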
|
# Symmetric tensors and symmetric tensor rank
Abstract : A symmetric tensor is a higher order generalization of a symmetric matrix. In this paper, we study various properties of symmetric tensors in relation to a decomposition into a symmetric sum of outer product of vectors. A rank-1 order k tensor is the outer product of $k$ non-zero vectors. Any symmetric tensor can be decomposed into a linear combination of rank-1 tensors, each of them being symmetric or not. The rank of a symmetric tensor is the minimal number of rank-1 tensors that is necessary to reconstruct it. The symmetric rank is obtained when the constituting rank-1 tensors are imposed to be themselves symmetric. It is shown that rank and symmetric rank are equal in a number of cases, and that they always exist in an algebraically closed field. We will discuss the notion of the generic symmetric rank, which, due to the work of Alexander and Hirschowitz, is now known for any values of dimension and order. We will also show that the set of symmetric tensors of symmetric rank at most r is not closed, unless r=1.
Document type: Journal article
Published in: SIAM Journal on Matrix Analysis and Applications, Society for Industrial and Applied Mathematics, 2008, 30 (3), pp. 1254-1279
https://hal.archives-ouvertes.fr/hal-00327599
### Identifiers
• HAL Id : hal-00327599, version 1
### Citation
Pierre Comon, Gene Golub, Lek-Heng Lim, Bernard Mourrain. Symmetric tensors and symmetric tensor rank. SIAM Journal on Matrix Analysis and Applications, Society for Industrial and Applied Mathematics, 2008, 30 (3), pp.1254-1279. <hal-00327599>
|
# [OS X TeX] Avoiding page breaks
David Watson dewatson at mac.com
Mon Apr 14 12:06:43 EDT 2008
I think that adding the \par is going to make that happen
sometimes. You might also try replacing the center environment with a
{ \centering ... } declaration instead, which should prevent you from
having to use the \vspace{-2 mm} as well.
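A minimal sketch of the environment rewritten along those lines (untested here; the \nopagebreak is my addition, on the assumption that keeping the heading together with the translation is the goal):

  \newenvironment{versetxt}[2]{%
    {\centering $\sim$ \textsc{#2} $\sim$\par}%
    \nopagebreak
    ({#1}) \begin{large}\bfseries
  }{%
    \end{large}\par
  }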
On Apr 14, 2008, at 10:46 AM, Toke Lindegaard Knudsen wrote:
> Dear all,
>
> I have a perhaps stupid question, but it is something that I have
> not been able to solve, and so I am hoping that one of you will have
> a hint.
>
> I have defined the following:
>
> \newenvironment{versetxt}[2]
> {
> \begin{center}$\sim$ {\sc #2} $\sim$\end{center}\vspace{-2 mm}
> \par ({#1}) \begin{large}\begin{bf}
> }
> {
> \end{bf}\end{large}\\
> }
>
> #1 gives the number of the verse in the Sanskrit text, while #2
> gives a brief description of its contents. Then the translation
> follows.
>
> The problem is this: Sometimes the description ends up at the
> bottom of a page while the translation begins on the next page.
> This looks rather silly and I would like for that to not happen.
>
> I have tried to insert \nopagebreak and \samepage at various place,
> but without success.
>
> Would any of you have an idea about how I can avoid a page break
> between the description and the translation?
>
> With many, many thanks!
>
> Sincerely,
> Toke
>
>
|
# Why is ∑_(k=1)^n k^m a polynomial with degree m+1 in n?
Why is $\sum _{k=1}^{n}{k}^{m}$ a polynomial with degree $m+1$ in $n$?
Solomon Fernandez
Let $V$ be the space of all polynomials $f:\mathbb{N}_{\ge 0}\to F$ (where $F$ is a field of characteristic zero). Define the forward difference operator $\Delta f(n)=f(n+1)-f(n)$. It is not hard to see that the forward difference of a polynomial of degree $d$ is a polynomial of degree $d-1$, hence defines a linear operator $V_d\to V_{d-1}$, where $V_d$ is the space of polynomials of degree at most $d$. Note that $\dim V_d=d+1$.
We want to think of $\Delta$ as a discrete analogue of the derivative, so it is natural to define the corresponding discrete analogue of the integral, $\left(\int f\right)(n)=\sum_{k=0}^{n-1}f(k)$. But of course we need to prove that this actually sends polynomials to polynomials. Since $\left(\int \Delta f\right)(n)=f(n)-f(0)$ (the "fundamental theorem of discrete calculus"), it suffices to show that the forward difference is surjective as a linear operator $V_d\to V_{d-1}$.
But by the "fundamental theorem," the image of the integral is precisely the subspace of $V_d$ of polynomials such that $f(0)=0$, so the forward difference and the integral define an isomorphism between $V_{d-1}$ and this subspace.
More explicitly, you can observe that $\Delta$ is upper triangular in the standard basis, work by induction, or use the Newton basis $1,\,n,\,\binom{n}{2},\,\binom{n}{3},\,\dots$ for the space of polynomials. In this basis we have $\Delta\binom{n}{k}=\binom{n}{k-1}$, and now the result is really obvious.
The method of finite differences provides a fairly clean way to derive a formula for $\sum n^m$ for fixed $m$. In fact, for any polynomial $f(n)$ we have the "discrete Taylor formula"
$$f(n)=\sum_{k\ge 0}\Delta^k f(0)\binom{n}{k},$$
and it's easy to compute the numbers $\Delta^k f(0)$ using a finite difference table and then to replace $\binom{n}{k}$ by $\binom{n}{k+1}$. I wrote a blog post that explains this, but it's getting harder to find.
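A short sketch of that recipe (written here for illustration; the arithmetic is exact integer arithmetic in the Newton basis):

```python
from math import comb

def binomial_basis_sum(m, n):
    """Return sum_{k=0}^{n-1} k**m via the discrete Taylor formula:
    expand f(k) = k**m in the Newton basis using a finite difference
    table, then shift each C(n, j) to C(n, j+1) to 'integrate'."""
    row = [k**m for k in range(m + 2)]  # f(0), ..., f(m+1)
    deltas = []                          # Delta^j f(0) for j = 0..m+1
    while row:
        deltas.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]
    # sum_{k=0}^{n-1} f(k) = sum_j Delta^j f(0) * C(n, j+1)
    return sum(d * comb(n, j + 1) for j, d in enumerate(deltas))

assert binomial_basis_sum(2, 4) == 0 + 1 + 4 + 9   # = 14
assert binomial_basis_sum(3, 10) == sum(k**3 for k in range(10))
```

Note the integration convention: the function returns $\sum_{k=0}^{n-1}k^m$, matching the discrete integral defined above; for $m\ge 1$ the question's $\sum_{k=1}^{n}k^m$ is `binomial_basis_sum(m, n + 1)`.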
Thordiswl
The formula just drops right out if we use the Euler–Maclaurin summation formula.
For $f(x)=x^{m}$ we have
$$\sum_{k=1}^{n}f(k)=\int_{0}^{n}f(x)\,dx+\frac{f(n)-f(0)}{2}+\sum_{j=1}^{\infty}\frac{B_{2j}}{(2j)!}\left(f^{(2j-1)}(n)-f^{(2j-1)}(0)\right),$$
where $B_{j}$ are the Bernoulli numbers and $f^{(j)}(x)$ is the $j$th derivative of $f$.
Since $f(x)$ is a polynomial, the terms in
$$\sum_{j=1}^{\infty}\frac{B_{2j}}{(2j)!}\left(f^{(2j-1)}(n)-f^{(2j-1)}(0)\right)$$
are all zero once $2j-1>m$, and thus we get $\sum_{k=1}^{n}k^{m}$ as a polynomial in $n$ with degree $m+1$ (the leading term comes from $\int_{0}^{n}x^{m}\,dx=\frac{n^{m+1}}{m+1}$).
|
# Tag Info
## Hot answers tagged tree
20
The second-smallest spanning tree differs from the minimum spanning tree by a single edge swap. That is, to get the second-smallest tree, you need to add one edge that's not already in the minimum spanning tree, and then remove the heaviest edge on the cycle that the added edge forms. If you already have the minimum spanning tree, this can all be done in ...
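A brute-force sketch of this characterisation (my illustration; it assumes the networkx library and scans every non-tree edge, rather than using the faster path-maximum machinery the rest of the answer alludes to):

```python
import networkx as nx  # assumed available

def second_best_mst_weight(G):
    """Weight of the second-smallest spanning tree of a weighted graph G,
    found by trying every single edge swap against the MST."""
    T = nx.minimum_spanning_tree(G)
    base = T.size(weight="weight")
    best = float("inf")
    for u, v, w in G.edges(data="weight"):
        if T.has_edge(u, v):
            continue
        # adding (u, v) creates a cycle; drop the heaviest tree edge on it
        path = nx.shortest_path(T, u, v)
        heaviest = max(T[a][b]["weight"] for a, b in zip(path, path[1:]))
        best = min(best, base + w - heaviest)
    return best
```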
9
According to https://www.cse.ust.hk/~golin/pubs/ANALCO_05.pdf there is no closed-form formula known. According to http://arxiv.org/pdf/cond-mat/0004341v1.pdf the number is asymptotic (for $n$ and $m$ both large) to $$\exp (z_{\mathrm{sq}}mn)$$ where $$z_{\mathrm{sq}}=\frac{4}{\pi}\sum_{i=0}^\infty\frac{(-1)^i}{(2i+1)^2}\approx 1.16624$$ but I'm not sure ...
8
Empire colouring is NP-hard for trees. Let $r$ and $s$ be fixed positive integers, and let $G$ be a graph whose vertex set is partitioned into blocks (or empires) each containing exactly $r$ vertices. The $(s, r)$-colouring problem $s$-$\text{COL}_r$ asks for a colouring of the vertices of the graph $G$ that uses at most $s$ colours, never assigns the ...
7
The Travelling Repairman Problem (TRP) is known to be NP-hard on weighted trees. In this problem, which is also sometimes called the Minimum Latency Problem, the goal is to find a tour that visits all the vertices of a graph while minimizing the average latency. The latency of a vertex $u$ is the cost of the tour from the origin until the tour visits $u$. ...
7
2.09 bits per element is practically achievable. See http://cmph.sourceforge.net/: "[Compress, Hash, Displace] can generate MPHFs that can be stored in approximately 2.07 bits per key." 1.44 bits per element is optimal. See "Hash, displace, and compress" "Improved Bounds For Covering Complete Uniform Hypergraphs" Data Structures and Algorithms , Vol. 1: ...
7
The problem is known as the "fringe marked ancestor problem" and indeed has an $O(\log \log n)$ worst-case solution for both operations [1], thus overcoming the lower bound for the generic version of the problem. Their solution is based on an Euler tour of the tree with a union-split-find structure (and fast LCA for trees with unbounded degree). The same paper states that it ...
7
The problem is L-complete. It’s easier to think about it when the edges are written backwards. That is, I will consider the problem formulated as follows: given a directed acyclic graph such that every node has out-degree at most $1$, and vertices $s$ and $t$, determine if $t$ is reachable from $s$. To see that it is in L, just follow the unique path ...
6
A harmonious coloring of a simple graph is a proper vertex coloring such that each pair of colors appears together on at most one edge. The harmonious chromatic number of a graph is the least number of colors in a harmonious coloring of the graph. The problem of finding the harmonious chromatic number was shown to be NP-complete on trees by Edwards and McDiarmid. ...
6
EDIT As noted in comments below, I originally read the question incorrectly. I thought the goal was to determine if removing $k$ edges could increase the MST weight of $G$ above some given threshold $t$. This problem is often known as "$k$ Most Vital Edges (for MST)", simply $k$-MVE (or sometimes $k$-MVE-MST to distinguish from other variations), as cited ...
5
It can be done with a linear number of operations. Suppose you start with an arbitrary given tree $T_0$ over keys $[n]$ and want to reach an arbitrary given $T$ over keys $[n]$ using splay operations. (In case we have to start with the empty tree, just insert $[n]$ in any order.) A result of Cleary [*1] (see also Lucas [*2]) shows that you can get from $...
5
First, after each stage throw away any isolated vertices. With these vertices removed (even when the graph is disconnected) the number of vertices in the next stage will be at most twice the number of edges. Next, use the fact that (with isolated vertices removed, in each stage after the first) $|V|\le 2|E|$ to simplify the time bound in each stage (after ...
4
Could not add a comment due to lack of reputation. As commented by David Eppstein you can find the proof of the fact that the second-smallest spanning tree differs from the minimum spanning tree by a single edge swap in the article "A combinatorial ranking problem", Burns and Haff. Basically, in the article an algorithm is presented for finding an $...
4
The problem is solved in the paper P. Slater. R-domination in graphs. J. ACM, 23(3):446–450, July 1976. It considers an even more general problem using dynamic programming.
4
Let's consider a general model in which $L_n(\mu)$ is the (random) length of an MST on $K_n$, where the weight of each edge is sampled independently from a probability distribution $\mu$. When $\mu$ is uniform on $[0,1]$, Frieze showed that $\lim_{n \to \infty} \mathbb{E}[L_n(\mu)] = \sum_{k\ge1}{\frac{1}{k^3}} = \zeta(3)$. Steele showed that for a $\mu$ ...
4
1.56 bits per key is now possible using "RecSplit: Minimal Perfect Hashing via Recursive Splitting" by Emmanuel Esposito, Thomas Mueller Graf, and Sebastiano Vigna. It is quite expensive: 1,700 times more expensive than 1.79 bits per key!
3
I think that the problem is not hard, because if I understood the problem statement correctly, it can be solved in $O(|V|^2)$ time as follows: We have two $0$-$1$-labeled rooted perfect full binary trees $(A, x)$ and $(B, y)$ with $2^k-1$ nodes. We compute the minimum number of mismatches in any isomorphism between the trees denoted by $F(A, B)$ recursively ...
3
I think the easiest way of enforcing tree shape is the set of conditions: $q_0$ is not in the image of $\delta$; $\delta$ is injective; and $M$ is connected (to avoid isolated cycles). Note that this last condition is global, not local, which may be unavoidable. Then we can prove (by induction) that for any state $q$ there is a unique path from $q_0$ to $q$.
3
A graph $G(V, E)$ is 2-splittable if it is possible to partition its edge set into two subsets such that the induced subgraphs are isomorphic. Deciding whether a given graph is 2-splittable is $NP$-complete even if input is restricted to trees. Formally, the problem is: PARTITIONED GRAPH ISOMORPHISM INSTANCE: A tree $T = (V,E)$ QUESTION: Is there a ...
3
I think the best way is to make a recursive algorithm. You could divide your input in half (approx), and keep out the central element. Then you recursively build a tree with the left subarray which will be the left child of the central element, and equivalently the tree resulting from the right subarray will be the right child. By induction you can prove ...
3
Let me give a side answer to your question. Consider the variant where you only care about edge-minimal spanning trees: for 2 terminals $s$ and $t$, the problem is equivalent to counting simple $s,t$-paths, which is $\# P$-complete; also, the parameterized version where you ask for paths of length $k$ is $\# W[1]$-hard. If you care about arbitrary $T$-...
3
Although not specifically aimed at (rooted) trees, I think the G-trie data structure might perform quite well in your setting. It is an adapation of the trie (for searching sets of strings) to graphs.
3
This problem is known as Decremental Connectivity. In general, decremental connectivity is where you need to support the operations: Connected($u$,$v$) : Check whether vertex $u$ is connected to vertex $v$ Delete($e$): Remove an edge $e$ Given $n$ queries of the first kind and $m$ queries of the second, Even and Shiloach [1] gave an $O(n\log{n} + m)$ ...
3
Here's a nice property of WQOs: If $R$ is a WQO on terms, and $S$ is another transitive relation such that $$R\ \subseteq\ S$$ Then $S$ is a WQO Proof: Let $t_1,\ldots, t_n,\ldots$ be an infinite sequence of terms. Because $R$ is a WQO, there are $i, j$ with $i<j$ such that $t_i\ R\ t_j$. But this implies $t_i\ S\ t_j$, so $S$ is a WQO as well. ...
3
Follow-up work by Holm, Rotenberg and Thorup [1] showed that there exists a reachability oracle for planar graphs of size $O(n)$ and query time $O(1)$. This is optimal also for trees (e.g., if the input is a star graph, then you need to know the orientation of every one of the $n-1$ edges). [1] Holm, Jacob, Eva Rotenberg, and Mikkel Thorup. "Planar ...
2
Although this is not exactly the question you asked, you can also build balanced trees from ordered data in an online manner. That is to say, you could walk your array from left to right building up partial results, and if someone asked you to stop after k items you could finish building the tree in log k time (with the total time being O(k), and the extra ...
2
k-Balanced Partition Problem on graphs, in which one has to partition the $n$ vertices into $k$ connected components of size at most $\lceil\frac{n}{k}\rceil$ each and at the same time minimize the total cost of edges connecting vertices in different sets, called the cut cost. This problem is actually APX-hard even on unweighted trees of constant maximum ...
2
Somehow I missed the Achromatic Number problem in the last answer, but this is one of the most natural problems I know of which are NP-complete on trees. A complete coloring of a graph is a proper coloring such that there is an edge between every pair of color classes. The coloring can be stated in contrast to Harmonious Coloring, as a proper coloring such ...
2
I believe that the answer is as you suggest that no other asymptotics than $\Theta(1)$, $\Theta(\sqrt{n})$ and $\Theta(n)$ are possible. A promising route to prove this could be to apply the techniques from the paper which derives the $\Theta(\sqrt{n})$ asymptotics to the run trees of the regular language. Notice that a tree is accepted if there exists a run ...
2
A zipper is in general a pair of things: it's a structure-with-a-hole, a focus, representing where in the structure you are, together with a path, recording how you got to that focus. (This path is LYAH's trail of breadcrumbs.) The path is how you actually apply changes to the structure: "go down, go left, increment the value". By repeatedly applying "go up"...
2
An alternating tree automaton for arbitrary degree trees has a transition function of the following type: $$\delta:Q\times \Sigma\times D\to {\cal B}(\mathbb{N}\times Q)$$ where ${\cal B}$ is the set of Boolean functions over the given set. This has a limitation that $\delta(q,\sigma,d)$ only outputs values between $1,...,d$ (so it fits the degree of the ...
|
# Custom Gobos
#### DHSLXOP
##### Active Member
Hi everyone
I'm TD for an upcoming show at my high school and the director decided that she wants a gobo to be made of the show's logo to be on the main curtain while the house is entering.
I have found the rosco custom gobo page with all the directions but was wondering if someone could give me some better instructions than what they give:
1) They say email to the local dealer - who would be my local dealer? (I'm in Florida)
2) We are using a Source Four 575 Watt fixture - would a steel gobo be what we need?
3) What is the approximate cost for a custom gobo?
4) In general, once I have compiled all of my information, who do I send this all to?
Thanks so much for your help!
#### ScaredOfHeightsLD
##### Active Member
Hi everyone
I'm TD for an upcoming show at my high school and the director decided that she wants a gobo to be made of the show's logo to be on the main curtain while the house is entering.
I have found the rosco custom gobo page with all the directions but was wondering if someone could give me some better instructions than what they give:
1) They say email to the local dealer - who would be my local dealer? (I'm in Florida)
http://www.rosco.com/us/retail/index.asp will help you find a Rosco dealer. But don't discredit other manufacturers out there. I have had very positive experiences with Apollo custom gobos as well as some other companies out there. This company in particular has been very good in the past. They make their gobos out of a much thicker steel which tends to last longer. They charge about 50-60 dollars apiece and you get three cuts of each gobo. Shipping is usually very fast and reasonable.
2) We are using a Source Four 575 Watt fixture - would a steel gobo be what we need?
If you are just doing a show logo in black/white this should be perfect. Just make sure you have the correct lens. If your show logo has colors which you would like projected as well, you could look at glass gobos or the Rosco Image Pro. If you do go with steel, keep in mind that you need to leave connections in the image for letters like O. Sometimes the company will take care of this, but the artwork needs to be "gobo ready." Also, think about what you want to be lit and what you want to be dark. Make sure this is clear on the artwork.
3) What is the approximate cost for a custom gobo?
I find they can run anywhere from 50 dollars and up for a custom steel gobo.
4) In general, once I have compiled all of my information, who do I send this all to?
Your artwork should meet the specs asked for by whichever company you are having produce the gobo and then it either goes directly to them or to your dealer. Your dealer will have better information in regards to this.
Thanks so much for your help!
Let me know if you have any more questions. Good Luck!
Last edited:
#### derekleffew
##### Resident Curmudgeon
Senior Team
...
1) They say email to the local dealer - who would be my local dealer? (I'm in Florida)
2) We are using a Source Four 575 Watt fixture - would a steel gobo be what we need?
3) What is the approximate cost for a custom gobo?
4) In general, once I have compiled all of my information, who do I send this all to?
Thanks so much for your help!
1) Looks to be about 30 dealers in Florida.
2) Steel would be the least expensive. May need to go to glass if the design is intricate or multi-color
3) Between $50-$150 for steel; higher for glass.
#### icewolf08
##### CBMod
CB Mods
As has been mentioned, be careful of copyright issues. Also, I believe that for all of the major manufacturers you need to make the actual purchase through a dealer. Generally what that means is that the dealer submits an order to the manufacturer, but you deal directly with the manufacturer for the artwork. I have always had my custom templates done through Apollo; they do a great job and they work well with you to get the artwork ready.
Here are some other things to think about. If you are doing text, you should consider EDLT lenses for your projecting unit. Also, it is really a must that you have the template made in "A" size (unless you are not using source 4's/SLs/selecons). The larger image area of an "A" size template helps a lot with readability.
If you can, post the artwork and we may be able to give more pointers.
#### LD4Life
##### Active Member
Since it's a high school show, meaning probably not too many performances and not too much use of the gobo, you could probably create your own out of a pie tin, an exacto (or similar) knife, and some time. You can get a pretty good looking gobo out of those simple ingredients, and chances are the audience wouldn't know the difference.
I would have to agree with DarSax on this one: since it's not going to be a long-run, heavy-use gobo, just cut it out of a pie tin with an exacto knife.
#### bobgaggle
##### Well-Known Member
If your high school has a Tech-ed department with a CAD program, you can design the logo and route it out of a thin sheet of metal. The router talks to the program and produces an exact replica. We did this for a show at my high school and it worked well.
|
# Math Help - HH = H, etc.
1. ## HH = H, etc.
"H is a subgroup of G, a is in H. Show that Ha = H."
Is this as simple as showing that since H is a group, all operations are closed within that group? Or is there more to it than that? If it's the latter, I'm stuck.
2. ## Re: HH = H, etc.
Originally Posted by phys251
"H is a subgroup of G, a is in H. Show that Ha = H."
Is it clear to you that $Ha=\{x*a:~x\in H\}~?$ (of course $*$ is the operation in $G$)
Is it clear that $Ha\subseteq H~?$ Why?
Can you show $H\subseteq Ha~?$
3. ## Re: HH = H, etc.
Originally Posted by Plato
Is it clear to you that $Ha=\{x*a:~x\in H\}~?$ (of course $*$ is the operation in $G$)
Right, that's just the definition of Ha.
Is it clear that $Ha\subseteq H~?$ Why?
Because of closure?
Can you show $H\subseteq Ha~?$
So the trick is to show that the sets contain each other. Is $H\subseteq Ha$ again a direct result of closure, or is there more to it than that?
4. ## Re: HH = H, etc.
Originally Posted by phys251
So the trick is to show that the sets contain each other. Is $H\subseteq Ha$ again a direct result of closure, or is there more to it than that?
If $h\in H$, is it true that $h*a^{-1}\in H$? WHY? So is it true that $(h*a^{-1})*a\in Ha$?
5. ## Re: HH = H, etc.
Originally Posted by Plato
If $h\in H$, is it true that $h*a^{-1}\in H$? WHY? So is it true that $(h*a^{-1})*a\in Ha$?
Wait, this may be simpler than I realize. Since $a^{-1}\in H$ and $H$ is closed, $h*a^{-1}\in H$ for every $h\in H$, so $h=(h*a^{-1})*a$ is always an element of $Ha$. Combined with closure giving $Ha\subseteq H$, that forces $Ha=H$.
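And the $HH$ in the thread title follows from the same identity, applied once for each element of $H$:
$$HH=\bigcup_{a\in H}Ha=\bigcup_{a\in H}H=H.$$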
|
Create an Illustrated Watercolor and Ink Photo Effect in Photoshop
Difficulty: Intermediate | Length: Long
Artists often ask how to apply watercolor and splattered ink effects to their images. While there may be some filters and effects out there that can automate this task, illustrating this by hand will give you the highest quality final result, as well as the most flexibility. In this tutorial, we will show you how to create a watercolor and splattered ink effect, in Photoshop, and apply it to a photo. Let's get started!
Tutorial Assets
To complete the tutorial you will need the following assets. Please download them before you begin. If they are not available, please find alternatives.
1. Sketching
Step 1
Create a New Document and size it so it is relatively landscape format. Also make sure the Background Contents is set to White.
Step 2
Open the photo (stock image) you wish to illustrate and work with. I have chosen this particular Dirt Bike racer because of the nice angle of the bike and figure.
Step 3
Drag the whole image, and drop it onto your New Document.
Step 4
Free Transform to position it into your composition and scale it up a little. Hold the Shift key to keep the image in proportion as you scale it up.
Step 5
Drop the Opacity of the photo layer in the layers window down to 46%.
Step 6
Create a New Layer over your photo layer to use as the sketch over layer, and title the layer: SKETCH.
Step 7
With the default Photoshop Hard Round Brush (it is usually the default brush at the top of the list). Set it to about 2px and begin sketching in the lineart over top of photo with #000000 black. At the moment, it doesn't have to be too clean and perfect. We are going for a slightly rough and sketchy look. Alternate the stroke width for more variation and movement.
Step 8
Shade in little areas with a crosshatching technique. Mostly go off the photo reference, but you can also use your best judgement to add in some extra shadows to define some stronger areas and give the shapes a little more dimension.
Step 1
Use the Crop tool (C) to stretch canvas out a little more and make it a bit wider.
Step 2
Fill (G) the Background layer with a light tan color. I have used #d0b391
Step 3
Open up Texture1 stock image, and drag and drop it over your Background layer.
Step 4
Use Free Transform to scale and stretch it to cover the entire canvas.
Step 5
Set the Blending Mode on the Layers Panel to Linear Burn.
Step 6
Drop the Opacity down to about 42%.
Step 7
Create New Layer just underneath Sketch layer and title it Figure Shading.
Step 8
Command/Ctrl-Click both the Sketch Layer and Shading layer and group them together (Command/Ctrl-G).
Step 9
Title the new group FIGURE LAYERS to keep them organized for when you need to return to them easily.
Step 10
Take the Round Airbrush (Airbrush full), and drop the brush Opacity to about 29%. Lightly paint in some shades on the Figure Shading layer. I used the color #332822
Step 11
Start painting in some dark tones lightly, beginning with the riders clothing which will bring out his form before moving onto the bike.
Step 12
Create a New Layer and title it Shadows
Step 13
Lightly build up the shades to an almost black. I used the color #130f0c.
Step 14
Make stronger shapes such as the wheels and inner gears of the bike.
Step 15
Create a New Layer underneath Figure Shading the layer. Title it Figure Light.
Step 16
Select an off-white color. I have used #f4d2ad.
Step 17
Roughly paint in lighter areas with the brush set at a low Opacity. About 38%
Step 18
Duplicate (Command/Ctrl-J) your SKETCH layer (for more visibility and thickness of the sketch lines)
Step 19
Create a New Layer above the other layers in the FIGURE LAYERS group, and title it Sketch Highlights
Step 20
Using a lighter shade than before and crosshatch some stronger highlights. I used #fff8f1
3. Ground & Texture
Step 1
Create a new group above your Background and Texture Layer. Title it GROUND
Step 2
Open up the next stock image; GrungeTexture1
Step 3
Drag and drop it into your newly created GROUND folder.
Step 4
Set the blending mode to Multiply
Step 5
Free Transform and position the texture to the bottom left and drop the Opacity in the layers window to about 80%.
Step 6
Duplicate the texture layer (Command/Ctrl-J), and drag it to the other corner.
Step 7
Go to Free Transform > Flip Horizontal...
Step 8
Stretch and position the texture layer to cover some more ground.
Step 9
Hold Command/Ctrl-Alt to duplicate that layer again.
Step 10
Move the layer slightly to the left.
Step 11
Free Transform to stretch the layer as it overlaps the texture underneath.
Step 12
Open up BrushStrokes01 texture file.
Step 13
Make a selection with the Rectangular Marquee Tool (M).
Step 14
Hold Command/Ctrl, and drag and drop it into GROUND folder once again.
Step 15
Use Free Transform > Flip Horizontal...
Step 16
Stretch and skew it to fit to the right of the motorbike rider.
Step 17
Set to Multiply and drop the Opacity to 56%
Step 18
Take the same selection from BrushStrokes01 sample again.
Step 19
Drag and drop it onto your painting.
Step 20
Free Transform and position it behind motorbike rider once more.
Step 21
Set to Multiply.
Step 22
Also drop the Opacity to 49%
Step 23
Roughly Erase (E) where it overlaps the motorbike rider.
Step 24
Sample BrushStrokes01 one more time and drag it onto your painting.
Step 25
Use Free Transform and rotate it and position it to the left of the motorbike rider this time.
Step 26
Again, set it to Multiply and drop the Opacity down to 50%
Step 27
Erase away where it overlaps the motorbike rider.
Step 28
Open up SplatterSmear01 paint texture.
Step 29
Desaturate it (Command/Ctrl-Shift-U).
Step 30
Open up Levels (Command/Ctrl-L) and adjust until all the grey is just about solid black.
Step 31
Drag and drop the whole image onto your painting.
Step 32
Free Transform and angle it clockwise to flip it upside down.
Step 33
Stretch horizontally to fit along the ground.
Step 34
Set to Multiply.
Step 35
Bring the Opacity down to 85%
Step 36
Now open up SplatterThickPaint02 asset.
Step 37
Desaturate the image (Command/Ctrl-Shift + U).
Step 38
Adjust the Levels (Command/Ctrl-L) and bring it down to black.
Step 39
Drag and drop it onto your painting.
Step 40
Use Free Transform again.
Step 41
Now Flip Horizontal.
Step 42
Position the layer toward the front wheel of the bike.
Step 43
Set it to Multiply.
Step 44
Drop the Opacity to around 90%
Step 45
Duplicate the layer (Command/Ctrl-J).
Step 46
Move up and position the duplicated layer near the rear wheel.
Step 47
Free Transform and scale it up a little.
Step 48
Duplicate again (Command/Ctrl-J)
Step 49
Free Transform > Flip Horizontal...
Step 50
Position the layer closer to the back wheel this time.
Step 51
Finally, Free Transform one more time, and use Warp to give it some curve.
Step 52
Time to give it some color. Open up the Hue/Saturation menu by going to Image > Adjustments > Hue/Saturation (Command/Ctrl-U).
Step 53
Set it to Colorize in the Hue/Saturation window.
Step 54
With these color settings, give the paint splatter a nice red/brown hue.
Step 55
Drop Opacity down to 85%
Step 56
Finally, erase away where the red overlaps the motorbike rider illustration.
4. Splatter Effects
Step 1
Create a new group over the GROUND folder and title it SPLATTER
Step 2
Open up stock image Splatter01
Step 3
Desaturate again (Command/Ctrl-Shift + U).
Step 4
Open up the Levels menu (Command/Ctrl-L) and adjust accordingly to strengthen the black.
Step 5
Drag and drop the Splatter01 image onto your illustration (within the SPLATTER folder).
Step 6
Set the layer to Multiply.
Step 7
Use Free Transform.
Step 8
Scale it and position behind the back wheel of the bike.
Step 9
Use the Warp Tool and skew it slightly
Step 10
Duplicate the layer (Command/Ctrl-J)
Step 11
Use Free Transform on the new layer. Scale and position it closer to the back wheel.
Step 12
Erase (E) away some areas where the black splatter begins to overlap the illustration again.
Step 13
Rename those two Layers: Splatter Main and Splatter Small.
Step 14
Duplicate the layer Splatter Main (Command/Ctrl-J)
Step 15
Use Free Transform, and rotate to the left (counter clockwise).
Step 16
Position the layer towards the top right hand corner and almost out of the image (so just the scattered bits of paint are showing from the edge).
Step 17
Duplicate (Command/Ctrl-J) your Splatter Main layer once more.
Step 18
Free Transform and scale the new layer and position it down in the bottom left hand corner.
Step 19
Use Free Transform > Flip Horizontal...
Step 20
Rotate it slightly counter clockwise.
Step 21
Warp to give it some curvature.
Step 22
Again, give it some color by going to: Image> Adjustments > Hue/Saturation...
Step 23
Check the Colorize box in the Hue/Saturation window and adjust the colors like so:
Step 24
You can see how all the textured and paint splatter layers are beginning to blend together as one and add a certain amount of movement to the picture already. Now we just need to balance out the composition a bit more.
Step 25
Open up the texture GrungePaint02
Step 26
With the Rectangular Marquee selection tool (M), select the top half of the stock.
Step 27
Drag that over to your painting into the SPLATTER folder.
Step 28
Rename the layer Corner.
Step 29
Set it to Multiply.
Step 30
Free Transform and position it down the bottom left hand corner. Scale it relatively smaller.
Step 31
Drop the Opacity down to 81%
Step 2
Select the Shadows layer and duplicate it (Command/Ctrl-J)
Step 3
Open up the Levels menu (Command/Ctrl-L) on the Shadows copy layer, and strengthen the black slightly.
Step 4
Also select the Sketch layer and duplicate (Command/Ctrl-J) to make the lines a little stronger too.
Step 5
Now open up the texture file BrushStrokes028.jpg
Step 6
Invert the whole image (Command/Ctrl-I)
Step 7
Drag it onto your painting and place it within the GROUND group.
Step 8
Set the blending mode to Screen.
Step 9
Use Free Transform, and position it just behind the motorbike rider.
Step 10
Drop the Opacity down to 42%
Step 11
Use the Eraser (E) set with a soft edged brush setting, and lightly erase away some of the heavier areas of white.
6. Final Effects
Step 1
Create a new group above the SPLATTER group, and title it SPLATTER 2. The final effects will go into this folder.
Step 2
Open up the stock image SplatterThickPaint03
Step 3
Invert the whole image (Command/Ctrl-I)
Step 4
Drag it into your SPLATTER2 group and name the layer White splatter
Step 5
Free Transform and rotate it around clockwise.
Step 6
Set the blending mode to Screen.
Step 7
Use Free Transform and Warp it so it sits a little bit nicer.
Step 8
Erase (E) away the section to the left.
Step 9
Now go to Image > Adjustments > Hue/Saturation (Command/Ctrl-U). Leave the Colorize box unchecked this time. Adjust so the color goes from blue to a softer orange/red.
Step 10
Duplicate the Layer (Command/Ctrl-J) to make it significantly stronger.
Step 11
Open up stock file BrushStrokes01 again.
Step 12
With the Lasso Tool (L), make a selection of the top brush stroke.
Step 13
While holding Command/Ctrl, drag that little selection onto your piece.
Step 14
Adjust the color by going to Image > Adjustments > Hue/Saturation (Command/Ctrl-U)
Step 15
Check the Colorize box, and adjust the Hue and Lightness until it becomes a dark red.
Step 16
Set the blending mode to Multiply.
Step 17
Use Free Transform to position and scale it over the front fender, giving it a literal streak of color.
Step 18
Duplicate the layer (Command/Ctrl-J).
Step 19
Drag the duplicated layer to the rear fender now.
Step 20
Use Free Transform > Flip Horizontal...
Step 21
Scale it to fit over the fender a little better.
Step 22
Erase (E) where it overlaps the riders sleeve and hand.
Step 23
Open up texture file DecalsStain0012.
Step 24
Drag and drop it onto your painting within the SPLATTER2 group.
Step 25
Rename the layer to Rust.
Step 26
Open up the Levels menu (Command/Ctrl-L), and bring up the whites on the scale
Step 27
Set the Rust layer to Multiply.
Step 28
Adjust the Levels (Command/Ctrl-L) bit more to get rid of the last of the dark outline.
Step 29
Free Transform and scale it to sit diagonal to the rider.
Step 30
Drop the Opacity down to 46%
Step 31
Open up stock file BrushStrokes01 once more, and make a selection with the lasso tool, this time on the second brush stroke.
Step 32
Drag that over to your painting.
Step 33
Free Transform and scale/stretch it.
Step 34
Position it in the upper left hand corner, and set the blending mode to Multiply.
Step 35
Drop the Opacity down to 74%
Step 36
Duplicate the layer (Command/Ctrl-J), and move it closer to the upper left corner behind the last brush stroke texture.
Step 1
Finally, create a New Layer above everything else in the layers window, and title that layer Figure Highlights
Step 2
With a Hard Brush (B) set at 2px - 3px, crosshatch in some more highlights with an off white color (#fff0e2). This will bring out the figure and detail a bit more against the textures and effects that were just added.
Congratulations! You're Done.
In this tutorial, I have explained how to create an exciting illustration using various stock elements, which imitate the look of a traditional art piece. This is a fun style to experiment with because you can create lots of interesting and eye-catching images with it. I hope that you have learned something from this tutorial and can use the techniques explained to create some fun projects of your own.
|
# Prospects for strongly coupled atom-photon quantum nodes
## Abstract
We discuss the trapping of cold atoms within microscopic voids drilled perpendicularly through the axis of an optical waveguide. The dimensions of the voids considered are between 1 and 40 optical wavelengths. By simulating light transmission across the voids, we find that appropriate shaping of the voids can substantially reduce the associated loss of optical power. Our results demonstrate that the formation of an optical cavity around such a void could produce strong coupling between the atoms and the guided light. By bringing multiple atoms into a single void and exploiting collective enhancement, cooperativities ~400 or more should be achievable. The simulations are carried out using a finite difference time domain method. Methods for the production of such a void and the trapping of cold atoms within it are also discussed.
## Introduction
The introduction of cold atoms into microscopic holes in optical waveguides allows the integration of atomic components into otherwise purely photonic devices, with potential applications in sensing and quantum information processing1,2. While alternative techniques are available for coupling guided light to cold atoms — for example the use of tapered nanofibres3,4,5 or hollow core fibres6,7,8 — microscopic holes offer a unique set of advantages that make them ideally suited for certain applications. Firstly, the technique of introducing cold atoms via a microscopic hole is just as applicable in a 2D waveguide chip as in a fibre, allowing the direct combination of cold atoms with photonic circuit devices. Secondly, while the overall optical depth of an atom cloud contained in a microscopic hole is likely to be less than that obtained using a nanofibre or hollow core fibre, the optical depth per unit length should be able to match that achievable in free space, which is substantially greater than that typical of hollow core fibre or nanofibre experiments. This may have important implications for spatial resolution in sensing applications. Finally, the spatial separation between the atoms and the solid material of the waveguide can be much larger in a microscopic hole than is typical of a hollow core fibre or possible using a nanofibre. This will be important in precision sensing and spectroscopy experiments, where atom-surface interactions might otherwise adversely affect the results, as well as for any experiments involving Rydberg atoms, which are currently one of the most promising candidates for the implementation of multi-qubit gates9,10,11.
In this article we simulate transmission of light across microscopic holes in optical waveguides. We focus on holes drilled perpendicular to the core of the waveguide, with diameters in the range of 1 to 40 optical wavelengths. Holes of this kind can be fabricated by pulsed laser drilling12,13 — see for example those shown in Fig. 1(d), which were fabricated by Workshop of Photonics14 in 2015. We find that appropriate shaping of the holes can significantly improve the overlap of the transmitted light with the guided mode, thus reducing the losses associated with traversing the hole, and present results for a range of different shapes of hole. We then discuss the implications of our results with respect to the prospects of reaching the strong coupling regime for atoms confined in such a hole. We also examine the intensity distribution of the light inside the hole and use our results to show that both crossed and individual waveguides are capable of forming a full, 3D dipole trap for ultracold atoms using only guided light.
While this article focuses on the specific application of a light-atom interface, the results may also be applicable within other fields. These include fibre-based gas sensors15,16 and the efficient transmission of light through integrated optical or optofluidic elements17,18 in a fibre or waveguide chip. Coupling of light between waveguides and/or optical fibres is also an important area of application.
## Simulation Methods
In order to identify a geometry which allows a gap in the micrometer range with high optical transmission, simulations based on solving Maxwell’s equations were performed using Optiwave software (Optiwave Systems Inc.). For most data we use a three dimensional finite difference time domain (FDTD) method, which is a numerical solution of the full Maxwell’s equations19.
Where applicable, we compare that to the results of a beam propagation method (BPM). While numerically less intensive than the FDTD method, the BPM we employ makes several approximations — most notably the paraxial approximation — that render it less accurate than the FDTD method. We therefore use it only as a guide to the overall trends in the system’s behaviour, which helps to highlight the most interesting areas for investigation with the more time-consuming FDTD simulations.
The key parameters of the FDTD simulation method are the mesh spacing in each dimension Δx, Δy, Δz (with the z axis corresponding to the direction of light propagation along the waveguide), the time step size Δt and the number of time steps for which the simulation was run, Nt. The parameters used for each simulation are given in Supplementary Table 1. The finite size of the spatial and temporal discretisation inevitably leads to a numerical error in the results of the simulations, an estimate of which is shown as error bars. Full details of how we estimate the magnitude of this error are given in the supplementary material. The boundary condition used for the FDTD simulations was the built-in anisotropic, perfectly-matched layers (PML) condition, which accurately approximates a perfect absorber.
In general we consider the overlap of the transmitted light with the fundamental mode of the waveguide (MOTL), rather than the transmission coefficient. The difference between these two is the additional loss of light resulting from reflections at the glass to air/vacuum interfaces. In the case of rectangular holes interference effects are important, and the reflection coefficient varies strongly with the length of the hole. We therefore calculated reflection coefficients for several illustrative cases of rectangular hole. The maximum and minimum (excluding sub-wavelength holes where the reflection coefficient tends to zero as the length tends to zero) were 17.8% and 0.5%, occurring for holes with lengths of 4 and 1.7 micrometers respectively. Reflection coefficients for rectangular holes with lengths of 1, 5, 12, 22 and 30 micrometers were found to be 15.9, 1.6, 14.8, 15.6 and 11.9% respectively.
It is also worth noting that for rectangular holes the reflected light typically overlaps well (~97% intensity overlap) with the guided mode of the waveguide. Therefore, if such a hole were placed inside an optical resonator, it would not be accurate to regard the majority of the reflected light as lost from the system.
However, calculation of accurate reflection/transmission coefficients requires a long simulation time, to allow for multiple reflections within the system. When the surfaces of the hole have even modest curvature, the poor overlap and phase averaging between the light reflected via different pathways means that interference effects are essentially negligible, and the reflection losses at each interface can be treated as independent. This yields reflection losses of ~4% per interface, with little variation according to the parameters of the hole. In these cases the MOTL is therefore the most interesting system property to consider.
With a specific application in mind, i.e. the interaction of photons with Caesium atoms, we consider the case of 852 nm light (resonant with the D2 line in Caesium) in a waveguide whose refractive index profile matches a commercial optical fibre (Thorlabs 780 HP), as a representative example of a typical single-mode optical waveguide. However, the general trends and qualitative behaviours observed are likely to be widely applicable. We also consider a specific case based on the waveguide chip described in1, for which we find agreement between our simulations and the work of the original authors.
## Simulation Results
The first hole shape considered is the simplest geometry — a cylinder, and the results are plotted in Fig. 2(a). The dip in MOTL between 10 and 1.5 μm is due to additional divergence caused by the concave surface curvature, and MOTL eventually tends upwards to 1 as the hole size is reduced to zero. For holes with diameters greater than about three micrometers, the highest achievable MOTL is ~39%. This occurs at a plateau of MOTL as a function of diameter, for hole diameters from 20 to 40 micrometers.
The next case considered was a rectangular hole, as the flat faces of the rectangular hole were expected to eliminate the dip at small radii seen in the cylindrical hole due to concave lensing effects. This was indeed found to be the case, and the results are plotted in Fig. 2(b).
The FDTD and BPM results show a similar trend, where the FDTD result is consistently lower, owing to the remaining inaccuracy of the BPM data due to the paraxial approximation not covering all beam paths in this regime. For holes up to 10 optical wavelengths, a mode overlap >95% is achievable.
For rectangular holes some experimental data is available in the literature1. This was found to yield 65% power transmission across a 16 μm gap between 4 μm square waveguides in silica. Based on the refractive index contrast (0.75%) and laser wavelength used, we simulated this situation using the FDTD method and found an estimated transmission of ~78% (including reflection losses). The theoretical result is an upper bound to what is achievable experimentally and the slightly lower result can be explained by potential imperfections in the polishing of the end facets, a small angle between the end facets or a small amount of contamination. We therefore find that these figures are consistent with our expectations.
From the two initial simulations, it was expected that the use of convex surfaces should enhance the mode overlap for larger holes, as the focal power of these surfaces will compensate for the beam divergence related to the numerical aperture of the fibre core and allow recapture of light into the guided mode.
One relevant case to consider is that of a hole with spherically-curved surfaces on the input and output facets. We simulated this case for a range of radii of curvature, with the closest approach between the input and output surfaces being locked to 30 μm. The results are plotted in Fig. 3(a). It can be seen from the FDTD simulations that for a radius of curvature of ~16 μm the MOTL exceeds 93%, a major improvement over the rectangular hole of 30 μm length where the MOTL was only ~70% (see Fig. 2(b)). As a consistency check, additional simulations were also run for radii of curvature up to 3 mm, well beyond the range of Fig. 3(a). It was confirmed that, as expected, the MOTL gradually drops off as the radius of curvature is increased and ultimately tends to the same value predicted for the 30 μm rectangular hole.
With the practicalities of making a hole of this shape in mind, we also considered the effect of using cylindrically curved convex surfaces instead of spherically curved surfaces. We expect such holes to be easier to make as they have a constant cross-section. The results are shown in Fig. 3(b).
Parabolic surface curvatures were also investigated using the FDTD method, and were found to provide even better mode overlap for the transmitted light. The results are plotted in Fig. 4(a). In particular, for circularly symmetric, convex, parabolic surface curvature of the hole surfaces with $\alpha = \delta z/r^2 = 0.068\,\mu\text{m}^{-1}$ we find $(99.5^{+0.5}_{-1.3})\%$ MOTL for a hole where the distance of closest approach is 20 μm. The convex surfaces also lead to focusing of the light within the hole. The enhancement in peak intensity that results from this could be useful for the production of an optical dipole trap within the hole (see below), or to allow the strong coupling regime to be reached with smaller atomic ensembles, as discussed below. We observe enhancements of up to a factor of 15.5 in peak intensity, with a general trend towards greater surface curvature producing a larger peak intensity enhancement. In the example given above (parabolic curvature with $\alpha = 0.068\,\mu\text{m}^{-1}$) we find that the peak intensity is increased by a factor of ~5.
In order to constitute a quantum memory, or to allow longer interrogation times in sensing applications, it is advantageous to hold cold atoms within the junction, e.g. in an optical dipole trap. Where in other systems the creation of a small stable trap could be challenging, here they can be created using only light guided in the waveguides themselves. The trapping region then also automatically overlaps with the interrogation region of the probe light.
The FDTD method of simulation provides full data on the electric field as a function of position within the junction and can therefore be used to determine the light intensity, and hence the optical dipole potential, as a function of position within the junction. Figure 5(a) shows the dipole potential generated for Cs atoms by 1 mW of light at 1064 nm crossing a 20 μm hole with convex parabolic surface curvature (α = 0.063 μm−1) in a waveguide with parameters matching Thorlabs 780 HP optical fibre. It can be seen from this that already a single beam forms a full 3D trap - a result of focusing of the guided light at the convex interfaces. Figure 5(b) shows the dipole potential generated in a junction formed at the intersection of two 4 μm square silica waveguides with a refractive index contrast of 0.75%. There is assumed to be 1 mW of 1064 nm light in each waveguide, with identical linear polarisations. It is worth noting that the small mode area offers an unusually high trap depth for a given optical power and wavelength.
The damage threshold for waveguides and optical fibres of this type is typically of the order of $10^{10}\,\mathrm{W\,m^{-2}}$ for visible and NIR wavelengths. This would correspond to a power of ~200 mW in each waveguide. As a result, the maximum trap depth that can realistically be achieved in such a junction (with a comparable detuning of the trapping laser from the relevant atomic transition) would be on the order of 10 to 15 mK.
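A quick consistency check on those numbers (a sketch; the 2.5 μm mode radius is the value assumed later in the text, and the Gaussian peak-intensity factor is ignored):

```python
import math

damage_threshold = 1e10   # W m^-2, order of magnitude quoted above
mode_radius = 2.5e-6      # m, the 1/e^2 intensity radius assumed in the text
power = damage_threshold * math.pi * mode_radius**2
print(f"{power * 1e3:.0f} mW")  # ~200 mW, matching the figure above
```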
## Strong Coupling Regime
Reaching the strong coupling regime is of interest as it permits single-photon gate operations and allows the observation and exploitation of quantum electrodynamic effects20,21,22,23,24. In order to reach this regime an optical cavity would be produced around a quantum system with an appropriate optically-addressable transition, in this case via the use of laser-written Bragg gratings20,25 on either side of the hole. Different regimes can then be reached, depending on the cavity length and the choice of hole type. The strong coupling regime is defined as the regime in which the atom-cavity coupling constant g1 significantly exceeds the atomic decay rate γ and the cavity decay rate κ. The Purcell regime, in which the cooperativity $C=g_1^2/(\kappa\gamma)$ is large but g1 < κ, is also of interest, particularly with regard to the production of single photon sources26. For completeness, note that there are a few alternative definitions of C in the literature. The maximum value of the atom-cavity coupling constant is given by2
$$g_1=\zeta \sqrt{\frac{\omega_c}{2\hbar \varepsilon_0 V}}\,\varphi(\vec{r}),$$
(1)
where ζ is the dipole matrix element for the atomic transition being addressed, ωc is the resonant frequency of the cavity and V the volume of the cavity mode27. The mode volume is defined such that $V=\int \varphi^2(\vec{r})\,d^3\vec{r}$.
Due to the numerical intensity of the simulations required, we do not perform FDTD simulations including an optical cavity. Instead, loss of light from the guided mode on traversing a microscopic hole is modelled as the introduction of an additional intra-cavity loss mechanism. In this case the decay rate κ is given by2,4,28:
$$\kappa =\frac{c\left(1-T\sqrt{R_1 R_2}\right)}{2 l_c \sqrt{T}\,(R_1 R_2)^{1/4}},$$
(2)
where lc is the optical path length of the cavity, R1 and R2 are the mirror reflectivities and T is the transmission coefficient past the intra-cavity loss source as determined by the FDTD simulations described above.
We assume a $1/e^2$ intensity radius of 2.5 μm, which is representative of most waveguides of the kind we consider herein, and analyse examples for different cavity lengths. In20 fibre Bragg gratings with a reflectivity of 99.5% were used, and even higher reflectivities are achievable20,21. For now we assume that the mode profile in the hole does not differ significantly from that in the waveguide, although the focal effects of surfaces with convex curvature discussed in the previous section could in principle be used to enhance the coupling strength.
Consider the case of a single Caesium atom addressed on the D2 line (F = 4, mF = 4 → F′ = 5, mF′ = 5), trapped inside a rectangular void with a length of L = 5 μm inside an optical cavity with length lc = 5 mm. This situation was found to yield a MOTL of (98.7 ± 0.3)%, and applying Eqs (1) and (2) therefore gives values of g1 = 187 MHz and κ = (552 ± 100) MHz respectively. Here g1 clearly exceeds the atomic decay rate of γ = 16.4 MHz29 (where γ is equal to half of the spontaneous decay rate Γ, to allow for the atomic population distribution) and the cooperativity equates to C = 3.9 ± 0.8, thus placing this system within the Purcell regime30. Note that for rectangular holes we consider only losses associated with imperfect MOTL. This is because, in the case of rectangular holes, the reflected light was found to overlap well with the mode of the waveguide and the reflection coefficient could be reduced to ~0.5% through appropriate local tuning of the hole length. When considering holes with other shapes, reflection losses are taken into account.
In order to enter the strong coupling regime it is also necessary that g1 > κ. This would be reached for a single Cs atom in a rectangular void of L = 5 μm and a cavity length of lc = 50 mm, again for a cooperativity of C = 3.9 ± 0.8 and a coupling rate of g1 = 59 MHz and κ = (55 ± 10) MHz. For longer rectangular holes, up to a size of L = 8 μm ((96.7 ± 0.3)% MOTL), cooperativities with C = 1.8 ± 0.2 and a cavity length of lc = 300 mm with g1 = 24 MHz and κ = (20.0 ± 1.5) MHz are possible. While introducing a large ensemble of cold atoms into such a space is difficult, introducing a single trapped atom into a hole of this size is plausible, and small rectangular holes may therefore permit strong coupling of single atoms to guided light.
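To illustrate how these figures follow from Eq. (2) (a sketch using the values quoted in the text; g1 is taken from the text rather than recomputed from Eq. (1)):

```python
import math

c = 3.0e8          # speed of light, m/s
R1 = R2 = 0.995    # Bragg grating reflectivities
T = 0.987          # transmission (MOTL) across the 5 um rectangular hole
l_c = 5e-3         # cavity length, m

kappa = c * (1 - T * math.sqrt(R1 * R2)) / (
    2 * l_c * math.sqrt(T) * (R1 * R2) ** 0.25)
print(kappa / 1e6)  # ~540 MHz, consistent with the quoted (552 +/- 100) MHz

g1, gamma = 187e6, 16.4e6         # coupling constant and atomic decay rate
print(g1 ** 2 / (kappa * gamma))  # cooperativity C ~ 3.9
```

The small difference between ~540 MHz here and the quoted 552 MHz presumably reflects the exact constants used in the original calculation; both values fall well within the stated uncertainty.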
Holes with convex surface curvatures may allow the strong coupling regime to be reached for even larger holes, due to a combination of increased MOTL and local field enhancement by the focal effects of the surfaces. For example, we find that convex, parabolic surface curvature with a coefficient of $\delta z/r^2 = 0.068\,\mu\text{m}^{-1}$ allows a cooperativity of 4 to be achieved for a single Cs atom in a 20 μm long hole. See supplementary material for full details.
When multiple atoms are trapped inside a void and therefore confined in such a cavity, there is a collective enhancement of the coupling constant by a factor equal to the square root of the number of atoms present, $g_N=\sqrt{N}\,g_1$31,32, assuming the atoms all couple equally to the optical field. Considering the trapping volumes involved (~1–1000 μm$^3$) and the densities typically achievable in a dipole trap (~1 μm$^{-3}$ without evaporative cooling or ~1000 μm$^{-3}$ with33), atom numbers from 1 to $10^6$ should be achievable, with a corresponding increase in the achievable cooperativity. Calculation of an exact value for the collective cooperativity CN requires specification of both the number and distribution of atoms within the hole. As an example, confinement of 260 Cs atoms within a hole of length 20 μm with convex, parabolic surface curvature with a coefficient of $\delta z/r^2 = 0.068\,\mu\text{m}^{-1}$ could be expected to yield cooperativities on the order of CN = 400, subject to reasonable assumptions about the distribution of the atoms within the hole. See supplementary material for full details.
Other quantum systems such as quantum dots34 or semiconductor vacancy centres35 could be placed into much smaller holes, and potentially even into holes which are then filled with index-matching fluid, as is done in36. Bringing these systems into the strong coupling or Purcell regimes should therefore also be possible in microscopic voids of this kind.
## Outlook
The transmission of light across holes in optical waveguides, with sizes in the range 1 to 30 micrometers, has been studied via numerical simulation. Most attention was given to losses resulting from mode mismatch, since for curved surfaces reflection losses remain roughly constant, at about 8% for two uncoated glass surfaces. The results are found to be consistent with previous experimental results and reproduce the correct limiting behaviour as variables become large or small and calculation of the transmission becomes trivial.
The results show that for a given length of hole the losses due to mode mismatch can be greatly reduced through appropriate hole shaping, with appropriate convex parabolic curvature of the input and output faces of a 20 micrometer long hole reducing losses due to mode-mismatch from ~15% in the case of flat end faces to 0.5(+1.3/−0.5)%. Shaping of holes to maximise power transmission may have applications in fibre-based sensors as well as in quantum optics experiments involving cold atoms.
It is also shown that dipole traps for ultracold atoms can be formed within such holes using only guided light, with maximum depths in the range of ten to fifteen mK for typical silica waveguides and trapping laser detunings of ~200 nm. Furthermore, our calculations suggest that construction of optical cavities around such holes should allow the strong coupling regime to be reached for single atoms trapped in holes up to ~20 μm in length. Exploiting collective enhancement to increase the coupling strength allows the use of larger voids and permits higher cooperativities. This could make holes of this kind an ideal component for interfacing light and cold atoms as part of an integrated quantum information system.
Note that the potential effects of imperfect fabrication are not accounted for in our simulations. Future work will include determining suitable methods for the smoothing and anti-reflection coating of the interior hole surfaces. For example, smoothing is expected to be possible using either ion beam milling or plasma assisted chemical etching37. Additionally, shaping of the waveguide’s refractive index profile on either side of the junction will be investigated. Previous experimental work has demonstrated the plausibility of shaping such waveguides38,39, and the additional dimension this approach adds to the space of free parameters available when designing a waveguide-void interface should allow for extremely high transmission coefficients to be achieved.
## Data Availability
Any relevant data not presented in the manuscript is available from the authors upon reasonable request.
## References
1. Kohnen, M. et al. An array of integrated atom-photon junctions. Nat. Photon. 5(35–38) (2011).
2. Reiserer, A. & Rempe, G. Cavity-based quantum networks with single atoms and optical photons. Rev. Mod. Phys. 87, 1379 (2015).
3. Sorensen, H. et al. Coherent Backscattering of Light Off One-Dimensional Atomic Strings. PRL 117(133604) (2016).
4. Vetsch, E. et al. Optical Interface Created by Laser-Cooled Atoms Trapped in the Evanescent Field Surrounding an Optical Nanofiber. PRL 104(203603) (2010).
5. Daly, M., Truong, V., Phelan, C., Deasy, K. & Nic Chormaic, S. Nanostructured optical nanofibres for atom trapping. N. J. Phys. 16(053052) (2014).
6. Pechkis, J. & Fatemi, F. Cold atom guidance in a capillary using bluedetuned, hollow optical modes. Optics Express 20(13409) (2012).
7. Bajcsy, M. et al. Efficient All-Optical Switching Using Slow Light within a Hollow Fiber. PRL 102(203902) (2009).
8. Christensen, C. et al. Trapping of ultracold atoms in a hollow-core photonic crystal fiber. PRA 78(033429) (2008).
9. Jaksch, D. et al. Fast Quantum Gates for Neutral Atoms. PRL 85(2208) (2000).
10. Urban, E. et al. Observation of Rydberg blockade between two atoms. Nature Physics 5(110) (2009).
11. Petrosyan, D., Motzoi, F., Saffman, M. & Molmer, K. High-fidelity Rydberg quantum gate via a two-atom dark state. PRA 96(042306) (2017).
12. Huang, H., Yang, L. & Liu, J. Micro-hole drilling and cutting using femtosecond fiber laser. Optical engineering 53(051513) (2014).
13. Goya, K., Itoh, T., Seki, A. & Watanabe, K. A Through-hole Array on Optical Fibers Fabricated by 1-kHz/400-nm Femtosecond Laser Pulses for an in-line/pico-Litter Spectrometer Design. Procedia engineering 87(919) (2014).
14. Workshop of photonics website, http://www.wophotonics.com/.
15. Stewart, G., Tandy, C., Moodie, D., Morante, M. & Dong, F. Design of a fibre optic multi-point sensor for gas detection. Sensors and Actuators B: Chemical 51(227) (1998).
16. Jin, W., Ho, H., Cao, Y., Ju, J. & Qi, L. Gas detection with micro- and nano-engineered optical fibers. Optical Fiber Technology 19(741) (2013).
17. Yuan, L. et al. All-in-fiber optofluidic sensor fabricated by femtosecond laser assisted chemical etching. Optics Letters 39(2358) (2014).
18. Martinez, A., Zhou, K., Bennion, I. & Yamashita, S. In-fiber microchannel device filled with a carbon nanotube dispersion for passive mode-lock lasing. Optics Express 16(15425) (2008).
19. Optiwave website, https://optiwave.com/applications/fdtd-application-overview/.
20. Kato, S. & Aoki, T. Strong Coupling between a Trapped Single Atom and an All-Fiber Cavity. Phys. Rev. Lett. 115(093603) (2015).
21. Keloth, J., Nayak, K. & Hakuta, K. Fabrication of a centimeter-long cavity on a nanofiber for cavity quantum electrodynamics. Opt. Lett. 42(1003–1006) (2017).
22. Yalla, R., Sadgrove, M., Nayak, K. & Hakuta, K. Cavity Quantum Electrodynamics on a Nanofiber Using a Composite Photonic Crystal Cavity. Phys. Rev. Lett. 113(143601) (2014).
23. Horak, P. et al. Possibility of single-atom detection on a chip. Phys. Rev. A 67(043806) (2003).
24. Le Kien, F. & Hakuta, K. Cavity-enhanced channeling of emission from an atom into a nanofiber. Phys. Rev. A 80(053826) (2009).
25. Meltz, G., Morey, W. & Glenn, W. Formation of Bragg gratings in optical fibers by a transverse holographic method. Opt. Lett. 14(823) (1989).
26. Zhang, X., Xu, C. & Ren, Z. High fidelity heralded single-photon source using cavity quantum electrodynamics. Scientific Reports 8(3140) (2018).
27. Kimble, H. Strong Interactions of Single Atoms and Photons in Cavity QED. Physica Scripta T76(127) (1998).
28. Saleh, B. & Teich, M. Fundamentals of Photonics. Wiley (2007).
29. Steck, D. Alkali D Line Data. (1998).
30. Gérard, J. et al. Enhanced spontaneous emission by quantum boxes in a monolithic optical microcavity. Phys. Rev. Lett. 81, 1110–1113 (1998).
31. Hernandez, G., Zhang, J. & Zhu, Y. Collective coupling of atoms with cavity mode and free-space field. Optics Express 17(4798) (2009).
32. Guerlin, C., Brion, E., Esslinger, T. & Molmer, K. Cavity quantum electrodynamics with a Rydberg-blocked atomic ensemble. Phys. Rev. A 82(053832) (2010).
33. Chaudhuri, S., Roy, S. & Unnikrishnan, C. Evaporative Cooling of Atoms to Quantum Degeneracy in an Optical Dipole Trap. J. Phys.: Conf. Ser. 80(012036) (2007).
34. Loss, D. & DiVincenzo, D. Quantum computation with quantum dots. Phys. Rev. A 57(120) (1998).
35. Jelezko, F. & Wrachtrup, J. Single defect centres in diamond: A review. Phys. Stat. Sol. A 203(3207) (2006).
36. Lai, Y., Zhou, K. & Bennion, I. Microchannels in conventional single-mode fibers. Opt. Lett. 31(2559) (2006).
37. Zarowin, C. Comparison of the smoothing and shaping of optics by plasma-assisted chemical etching and ion milling using the surface evolution theory. Applied Optics 32(2984) (1993).
38. Pertsch, T. et al. Discrete diffraction in two-dimensional arrays of coupled waveguides in silica. Optics Letters 29(5) (2004).
39. Szameit, A. et al. Discrete nonlinear localization in femtosecond laser written waveguides in fused silica. Optics Express 13(10552) (2005).
## Acknowledgements
This work was supported by the Engineering and Physical Sciences Research Council [grants EP/R024111/1, EP/M013294/1] and by the European Commission [grants 295293, 800942] (“QuILMI” and “ErBeStA”). The authors would like to thank Joerg Goette for useful discussions.
## Author information
### Contributions
L.H. and N.C. provided the initial ideas for the work. C.B. commenced the simulation work, which was then significantly expanded by N.C. and E.D. M.T.G. provided advice on the efficient use of the simulation software. E.D., V.N. and L.H. assisted N.C. with manuscript preparation and review of the relevant literature. All authors reviewed the final manuscript.
### Corresponding author
Correspondence to N. Cooper.
## Ethics declarations
### Competing Interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Cooper, N., Da Ros, E., Briddon, C. et al. Prospects for strongly coupled atom-photon quantum nodes. Sci Rep 9, 7798 (2019). https://doi.org/10.1038/s41598-019-44292-2
|
## Introduction
The design of responsive porous materials, in which the porosity can be modulated externally and non-invasively by light to control adsorption, transport, and release properties, offers fascinating opportunities. Azobenzene (AB) molecular photoswitches1 (PS) undergo light-activated E-Z isomerization and are frequently applied in light-responsive actuators2, membranes3, smart materials4, and single-molecule optical memories5. Pendent AB-switches grafted onto the backbone of porous metal-organic frameworks (MOFs) were demonstrated to reversibly control the separation and release of guest molecules by manipulating the porosity and host-guest interactions via photoswitching6,7,8. However, pendent AB switches occupy pore space, which could be used for guest inclusion, and lack cooperativity that would be highly beneficial for the efficiency and selectivity of adsorption processes9,10. Soft porous crystals11 (SPCs) exhibit cooperative framework deformation dictated by the crystal structure. As a result, SPCs show adsorption phenomena such as gate-opening (pore expansion)12, breathing (pore contraction)13, and negative gas adsorption (NGA, gas release upon pore contraction)14 that show potential for improved diffusion15,16, storage17 and separation15 of gases and gas mixtures. Currently, the contraction and expansion of the porous network of SPCs are primarily guest-induced and energetically driven via adsorption18. To date, chemical modification of the building blocks and framework topology has been the dominant strategy to alter the guest-responsive behavior of SPCs19,20. The use of diarylethene PS in the framework backbone is a promising strategy to manipulate cooperative framework transitions21,22,23,24. However, the observed effects are very small compared to the response due to guest-induced deformations of SPCs, and the initiation of massive framework deformations in SPCs by the application of both light- and guest-interactions is unprecedented. The large geometric change upon E-Z isomerization of AB is expected to result in a much stronger framework deformation when the switch is incorporated in the framework backbone. Until now, photoswitching has been observed either to be suppressed due to framework constraints25,26 or to cause irreversible bond-breaking and degradation of the extended framework27,28,29,30. The fundamental challenge of how to accommodate the large geometric change of AB upon E-Z isomerization and establish photoinduced cooperative transitions, in the absence of framework disintegration, requires uncompromised/robust photoswitching, sufficient mechanical softness, enhanced porosity, and long-range order. Furthermore, it remains unexplored whether geometric constraints on framework-embedded PS might result in alternative photoswitching pathways, unknown for unconstrained molecular PS.
In this work we demonstrate the design and analysis of DUT-163, a MOF with a framework-embedded azobenzene photoswitch. DUT-163 exhibits structural contraction upon combined application of light irradiation and adsorption stress via gas adsorption. Our work is based on a detailed theoretical analysis of the energy landscape of DUT-163 followed by in-depth in situ experimental analysis using a range of spectroscopic and diffraction methods. From these data we derive that, unexpectedly, the contraction mechanism in DUT-163 is based on a buckling process of the ligand, previously unknown for molecular AB photoswitches. This mechanism is further supported by a series of computational simulations that detail the photochemistry and adsorption mechanism. Our analysis highlights the impact of framework constraint on the behavior of framework-embedded photoswitches and postulates framework softening via light irradiation as the underlying mechanism responsible for framework transitions in DUT-163. Furthermore, we show that the light activation can be applied locally, allowing this process to be used in light-responsive nanoscopic pneumatic systems and gas-releasing devices.
## Results and discussion
### Modeling of molecular photoswitch and framework
We selected the 49th MOF material discovered at the Dresden University of Technology (DUT-49)31 as a blueprint for our new photoresponsive SPC design because of its ability to accommodate large changes in ligand configuration and framework structure without disintegration following substantial framework contraction14. The three-dimensional (3D) framework of DUT-49 is based on the linkage of tetra-connective carbazole-based ligands to copper(II) dimers. By using (E)-9,9′-(diazene-1,2-diylbis(4,1-phenylene))bis(9H-carbazole-3,6-dicarboxylic acid) ((E)-H4dacdc) we are able to establish the structurally related framework of DUT-163, which contains an AB functionality in the backbone. We conducted density functional theory (DFT) simulations of (E)-H4dacdc and its methyl ester ((E)-Me4dacdc) to probe the energetics upon buckling32 and E-Z isomerization as a function of the distance between two AB-bridged carbazole-nitrogen atoms (dN-N) and the dihedral angle of the azo-unit (δCNNC) (Fig. 1).
Regardless of the E-Z isomerization mechanism chosen (i.e. rotation or inversion33,34), the energy barrier for E-Z isomerization at the ground state is over five times larger than the barrier for buckling of (E)-Me4dacdc. This result is to be expected, since buckling is a conformational change while E-Z isomerization involves the breaking of the azo π-bond in the ligand backbone. To investigate how the constraints imposed by incorporation in a framework impact the energetics of E-Z isomerization and buckling, we computed the contraction mechanism of DUT-163 as a function of unit cell volume (VUC) for buckling and E-Z isomerization of the ligand by molecular dynamics (MD) simulations (Fig. 2).
Similar to the analysis of the unconstrained Me4dacdc ligand, E-Z isomerization of dacdc in DUT-163 exhibits a much larger energy barrier compared to the buckling transition (Fig. 2). However, the associated contraction mechanisms of the DUT-163 framework follow two very different trajectories. The energy landscape of DUT-163 as a function of VUC exhibits the global minimum at VUC = 120 nm3, corresponding to the open pore (op) state (DUT-163op) (Fig. 2f). A metastable state with buckled ligand in E conformation is observed at VUC = 54 nm3, which is assigned to a contracted pore (cp) state, further denoted as (E)-DUT-163cp (see supplementary videos 1 and 2). This state is very similar to DUT-161cp, which contains a stilbene instead of an AB unit in the ligand backbone35.
To probe the framework energetics upon E-Z isomerization of dacdc, we investigated the evolution of VUC and the framework geometry as a function of δCNNC (see supplementary videos 3 and 4). Interestingly, this energy landscape presents a local minimum at VUC = 74 nm3, which is assigned to a contracted framework with dacdc in Z-configuration, (Z)-DUT-163cp. The energy barrier for contraction (Eop-cp) per unit cell (UC) between DUT-163op and (E)-DUT-163cp (Eop-(E)cp = 1250 kJ molUC−1) is ca. three times smaller compared to the barrier between DUT-163op and (Z)-DUT-163cp (Eop-(Z)cp = 3900 kJ molUC−1) (Fig. 2). Based on these data it can be concluded that DUT-163 is theoretically able to undergo contraction via buckling or E-Z isomerization, with buckling being the energetically more favorable mechanism at the ground state.
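As a quick consistency check on these barriers (values copied from the text; a minimal sketch, not part of the original analysis):

```python
# Contraction barriers per mole of unit cells, in kJ/mol(UC), from the text.
E_op_Ecp = 1250   # DUT-163op -> (E)-DUT-163cp, via ligand buckling
E_op_Zcp = 3900   # DUT-163op -> (Z)-DUT-163cp, via E-Z isomerization

print(f"ratio = {E_op_Zcp / E_op_Ecp:.1f}")  # -> 3.1, i.e. 'ca. three times'
```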
### Photoswitching of the molecular ligand
(E)-H4dacdc was synthesized using an established strategy (see Supplementary Information for details)36. Upon irradiation at 365 nm (295–298 K) we observed changes in the UV-Vis absorption, Raman (Fig. 3) and 1H nuclear magnetic resonance spectra (Supplementary Figs. 7–10) of (E)-H4dacdc and the corresponding n-butyl ester ((E)-nBu4dacdc), typical for light-induced E-Z isomerization33,37.
For both molecules, we observed a photostationary state (PSS) composed of a ca. 1:1 E-Z mixture at 293 K. Upon irradiation at 455 nm the Z-isomer was partially reverted, giving a PSS comprising ca. 25% of the Z-isomer. Thermal Z-E isomerization was observed upon heating above 338 K for over 5 h, and the system showed excellent photochemical and thermal reversibility in solution.
### Synthesis of DUT-163
The solvothermal reaction of H4dacdc with Cu(NO3)2·3H2O in DMF at 80 °C yields DUT-163 as a brown microcrystalline powder with a mean crystal size of 2.6 µm (Supplementary Fig. 28). The single-crystal structure of DUT-163, with cubic Fm$$\bar{3}$$m symmetry, cell dimension a = 49.240(6) Å, and unit cell volume VUC = 119386(41) Å3, was determined by synchrotron-based single-crystal X-ray diffraction (Supplementary Table 8), in line with the in silico optimized op structure. In DUT-163op, dacdc exhibits a linear (E)-configuration in which the AB-unit is disordered due to symmetry restrictions (Supplementary Fig. 24). The porous framework is characterized by a geometric surface area, pore volume, and pore diameters of GSA = 5112 m2 g−1, Vp(sim) = 3.2 cm3 g−1, and dp = 0.9–2.7 nm, respectively, as simulated from the single-crystal structure (Supplementary Fig. 97). Desolvation of DUT-163 was achieved using supercritical carbon dioxide, a protocol previously described for DUT-4936. Permanent porosity was investigated by N2-adsorption experiments at 77 K, from which a Vp of 2.84 cm3 g−1 (at p/p0 = 0.98) was determined. The reduction in pore volume compared to the computed values might arise from crystal-size effects previously observed for DUT-4938.
### Spectroscopic analysis of structural contraction
The light-responsiveness of DUT-163 was investigated by in situ PXRD, diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS), solid-state diffuse reflectance UV-Vis (DRUV-Vis) spectroscopy, and Raman spectroscopy experiments under dry nitrogen atmosphere with 365 nm irradiation at 293 K. These conditions were previously found to promote E-Z isomerization in solutions of both nBu4dacdc and H4dacdc. Interestingly, we observed no significant changes in the Raman and DRIFT spectra or the PXRD patterns of DUT-163 upon prolonged 365 nm irradiation (Fig. 3), indicating the absence of E-Z isomerization of the ligand and of structural contraction of the framework. This is further supported by nitrogen adsorption experiments at 77 K, which showed no change in porosity after 365 nm irradiation (Supplementary Fig. 29). However, upon 365 nm irradiation, we observed a bathochromic shift of the absorption in the DRUV-Vis spectrum of DUT-163 corresponding to the AB-functionality, and a decrease of the signal assigned to the absorption of the Cu2+-dimer at 550–600 nm (Fig. 3a). The original spectra were not restored upon irradiation at 455 nm (Supplementary Fig. 84). Rather than E-Z isomerization, which would cause pronounced changes in the PXRD patterns as well as the Raman and DRIFT spectra, we propose a photoinduced charge transfer from the AB-functionality to the Cu2+ site, as reported for other metal-AB complexes39. The absence of changes upon irradiation in the DRUV-Vis spectrum of DUT-49 (Supplementary Fig. 80), which does not contain an oxidizable AB-unit, supports such a mechanism. Furthermore, spin-flip DFT calculations on Cu2dacdc indicate that the HOMO is located on the AB backbone while the LUMO is localized on the Cu2+-dimer, which would support the feasibility of photoinduced charge transfer (Supplementary Fig. 104).
### Adsorption- and photo-induced structural contraction
Although light-driven contraction via E-Z isomerization was not observed in guest-free DUT-163, we reasoned that additional adsorption interactions might help to stabilize a contracted (Z)-DUT-163cp state and trigger a structural response upon parallel application of light and gas adsorption. In initial experiments we recorded MP physisorption isotherms of two individual samples of DUT-163 in the temperature range of 295–307 K, for which one sample was irradiated at 365 nm throughout the whole experiment while the other was kept under light exclusion. Still, no differences between the adsorption isotherms of irradiated and non-irradiated DUT-163 samples could be detected, either above or below 299 K (Fig. 4a–c and Supplementary Figs. 34–42), indicating the absence of light-induced contraction of the porous material in this experimental setup.
However, the large sample amount (>10 mg) required for accurate gas adsorption experiments causes light scattering and absorption, leading to inhomogeneous illumination of the bulk solid. Consequently, adsorption experiments provide only little information on structural transitions that potentially occur in only a small part of the bulk sample.
In order to explore the light-responsive structural behavior of DUT-163 in parallel to adsorption in more detail, we designed in situ experimental setups that allow exposing small sample amounts (<0.2 mg) to a defined gas pressure while irradiating with light of defined wavelengths under isothermal conditions, tracking structural transitions by synchrotron-based PXRD. Initial experiments used a setup designed for cryogenic temperatures, with a flat sample bed in reflection geometry encapsulated in an insulated sample cell connected to a glass fiber and a gas capillary for irradiation and gas dosing, respectively. This setup was used to analyze methane adsorption at 120 K with and without UV irradiation. In fact, a partial contraction was observed, which reversibly transformed back to the pristine op state. Yet, a more detailed study proved difficult due to insufficient light intensity and penetration depth in the flat sample bed (Supplementary Fig. 45). Consequently, a second setup in which sample-filled capillaries are directly exposed to UV light in parallel to the adsorption process proved more suitable and reliable (Supplementary Fig. 51). In a series of experiments, we probed the adsorption-induced structural transition upon MP adsorption at 296 K and 300 K (Fig. 4f).
At 296 K and a relative pressure of 0.15–0.16, we observed an op→cp transition (Fig. 4f), demonstrating the ability to generate, observe, and identify the nature of adsorption-induced structural changes in DUT-163 with this setup. In a second experiment we raised the temperature to 300 K, beyond the upper temperature limit for adsorption-induced contraction. As expected, no structural contraction is observed (Fig. 4a), in line with the gas adsorption experiments at 299 K (Supplementary Fig. 38). In a third experiment, we used the same conditions (300 K) on the same sample, but this time irradiated the sample, throughout the whole adsorption process, with 365 nm light. Interestingly, at a relative pressure of 0.17–0.18, we observed a strong decrease in diffraction intensity at 2θ = 3.09° and the appearance of new peaks at 2θ = 4.07° and 6.66° (Fig. 4e, f), which we can assign to the formation of (E)-DUT-163cp. Reversible reopening of the structure was not observed in the investigated pressure range but is expected to occur at higher relative pressures, similar to the experiments conducted at 262 K without irradiation and with methane at 120 K under irradiation (Supplementary Fig. 49). Repetition of the experiments at 300 K on three individual samples confirmed the initial observations and the light-responsive behavior (Supplementary Figs. 57–61). In all experiments, the temperature recorded in close proximity to the sample was stable at 300 K, with fluctuations below ±0.2 K. We observed no change in the diffraction patterns of (E)-DUT-163cp upon irradiation with 365 nm and 455 nm light (Supplementary Fig. 54), reflecting the absence of a light-induced op→cp transition by potential Z-E photoisomerization. In one experiment, 365 nm irradiation was applied only in the relative pressure range of 0.16–0.28, starting 1 min before the op-cp transition occurred, demonstrating that prolonged irradiation is not essential and that the light application allows for temporal control of the process.
### Spatial control over light-induced contraction
Because only a 6 mm long section of the sample-filled capillary was irradiated in the experiments described above, we performed an axial PXRD scan along the capillary to determine the spatial phase composition (Fig. 5).
Only DUT-163 powder in the irradiated area exhibited structural contraction, supporting that light is indeed the trigger for the transition and demonstrating the spatial applicability of light-initiated contraction in DUT-163 (Fig. 5a, d). However, in all irradiation experiments, a residual op phase detected by PXRD indicates that only part of the sample undergoes contraction. As the dense packing and high absorptivity of DUT-163 in the range of 200–600 nm can filter the light stimulus, we tested the light penetration depth by analyzing DUT-163-filled quartz capillaries with diameters of 0.3 mm, 0.7 mm, and 1 mm (wall thickness 0.01 mm) (Fig. 5f–h). Based on the change in intensity of the (111) reflection of DUT-163op at 2θ = 3.09°, we estimate that 88% (0.3 mm), 79% (0.7 mm), and 26% (1 mm) of the bulk sample in the detected area underwent contraction. Thus, we evaluate the penetration depth of the applied light-emitting diode (LED) light to be in the range of 0.1–0.15 mm for a non-compressed sample bed of DUT-163 powder. Although a more powerful light source might initiate contraction in a denser or thicker sample bed, the low-power 15 mW LED used for irradiation in these experiments is sufficient to trigger structural contraction in microscopic or nanoscopic single crystals or thin films.
### Modeling of photoexcited state
To postulate a mechanism of how irradiation can promote contraction via buckling, we computed the photoexcitation process of framework-constrained dacdc. Ligand geometries upon buckling were extracted from the MD simulations of the DUT-163 contraction (Fig. 2f). The energy landscapes of the ground state S0 and excited states S1 and S2 for H4dacdc were determined by TD-DFT calculations as a function of dN-N distances and αCNN angles44 (Fig. 6).
To probe the response of DUT-163 upon irradiation we modeled an excited state of DUT-163 (DUT-163*) using established classical potentials that resemble the mechanics of dacdc in a biradical or zwitterionic state (further denoted as dacdc*). This state is also a good mechanical representation of the previously described photooxidized state upon charge transfer between AB and Cu2+. We investigated the free energy landscape of guest-free DUT-163* upon loading with MP using the same MD method applied in the analysis of the DUT-163 ground state. Interestingly, guest-free DUT-163* is found to exhibit a much lower barrier for contraction compared to DUT-163. The breakage of π-conjugation in the ligand backbone of DUT-163* is the origin of this softening, which is also found to occur in chemically modified DUT-49-type frameworks35. It is well reflected by the simulated bulk moduli of 4.8 GPa for guest-free DUT-163 and 4.1 GPa for guest-free DUT-163*. Because DUT-163* is mechanically softer than DUT-163, adsorption stress produces a greater change in volume. In fact, at a loading of 200 molecules MP per UC, (E)-DUT-163*cp was found to be the thermodynamically stable state, with a 42% reduction in the contraction barrier compared to DUT-163 at the same loading. Although in this model of DUT-163* all ligands are simultaneously in the excited state, which might not occur in a real crystal, even partial photoexcitation or -oxidation is expected to soften the framework of DUT-163 significantly. The nature of the mechanism triggering the softening of the structure can hence be hypothesized to be buckling of the chromophore either via an excited-state pathway, as described in Fig. 6a, via photooxidation of the azo group by the Cu2+ clusters (Supplementary Fig. 104), or via a combination of both. Additional adsorption interactions lower the barrier for contraction, initiating a light/adsorption-induced cooperative contraction of the crystal. To further analyze the photooxidation and charge transfer mechanism we propose characterization of thin films or single crystals of DUT-163 by methods such as X-ray photoelectron, X-ray absorption near edge structure, or electron paramagnetic resonance spectroscopy.
This dual-stimulus approach provides several advantages over purely light- or adsorption-induced transitions: photoexcitation allows the framework to respond to lower adsorption-induced stress levels and even to drive contraction of a metastable state beyond the upper temperature limit of adsorption-induced contraction. The observed contraction results in gas release by NGA over an extended temperature range, with a potentially increased magnitude. In addition, it allows NGA to be triggered by a physical stimulus that interacts specifically with the framework. This photostimulation can be applied orthogonally to other non-radiative processes and other chemical or physical stimuli. Finally, it provides the possibility to spatially and temporally control the release of gas via NGA using light as a physical trigger.
In conclusion, we show a cooperative structural transition of a SPC by combined application of light and adsorption stress. Although DUT-163 was initially designed for contraction via E-Z isomerization of the AB-backbone, this process was ruled out by a combination of in situ experiments and extensive computation. Instead, the contraction mechanism is based on a buckling process, previously unknown for molecular ABs, and highlights the impact of framework constraint on the behavior of photoswitches. In DUT-163, photoexcitation causes framework softening, allowing structural contraction to be driven at reduced adsorption stress levels. The effect is reproducible under different conditions and allows for spatial and temporal control over the framework contraction by light. As such, light-responsive gas release by NGA can be locally and temporally activated in DUT-163 for use in nanoscopic pneumatic systems and gas-releasing devices47. The postulated mechanism not only demonstrates a novel switching transition in AB and an unexplored way of initiating structural transitions in SPCs, but also provides a novel strategy to physically alter the mechanical properties of extended molecular frameworks without chemical functionalization, potentially allowing such frameworks to respond to other forms of stimuli such as electric or magnetic fields, temperature, or mechanical pressure, which would result in a novel class of mechanical nanoscopic actuators. Furthermore, we believe the findings of this study go beyond the discovery of a novel mechanism of a light-induced cooperative transition in a SPC. Over the past years, many AB-doped materials were shown to exhibit light-induced changes of their properties upon irradiation23,48. In the vast majority of cases E-Z photoisomerization was postulated as the primary origin of the observed behavior. The present study clearly illustrates that framework- or matrix-constrained photoswitches can exhibit properties and states very different from those of the unrestricted single-molecule analog. We conclude that the photochemical properties of self-assembled systems are also governed by the structure and nature of the assembly, beyond the properties of the molecular building blocks. In-depth analysis of these effects will lead to new design principles and novel properties of smart materials, which may give rise to unexpected responsive behavior.
## Methods
### Chemicals
For the synthesis and characterization the following commercial chemicals were used: 4-Bromoaniline (CAS: 106-40-1, 97%, Sigma Aldrich), Cu(NO3)2·3H2O (CAS: 10031-43-3, 98%, Sigma Aldrich), N,N-Dimethylethylenediamine (CAS: 108-00-9, 95%, Sigma Aldrich), 9H-Carbazole (CAS: 86-74-8, >95%, Sigma Aldrich), Copper(I) iodide (CAS: 7681-65-4, 99%, Riedel-de Haën), N,N-Dimethylformamide (CAS: 68-12-2, 99%, Fisher Scientific). Solvents and stock chemicals were used with purities exceeding 98%.
### Solution/liquid-state NMR
Nuclear magnetic resonance (NMR) spectra were acquired on a Bruker AV III 600 spectrometer (600.16 MHz and 150.91 MHz for 1H and 13C, respectively). All 1H and 13C NMR spectra are reported in parts per million (ppm) downfield of TMS and were measured relative to the residual signals of the solvents at 7.26 ppm (CHCl3) or 2.54 ppm (DMSO). Data for 1H NMR spectra are described as follows: chemical shift (δ (ppm)), multiplicity (s, singlet; d, doublet; t, triplet; q, quartet; m, multiplet; br, broad signal), coupling constant J (Hz), integration. Data for 13C NMR spectra are described in terms of chemical shift (δ (ppm)); functionalities were derived from DEPT spectra.
### Mass spectrometry
Matrix-assisted laser desorption/ionization (MALDI) time of flight (TOF) mass spectrometry analysis was performed on a BRUKER Autoflex Speed MALDI TOF MS using dithranol as matrix.
### Elemental analysis
Elemental analysis was carried out on a VARIO MICRO-cube Elemental Analyzer by Elementar Analysatorsysteme GmbH in CHNS modus. The composition was determined as the average of three individual measurements on three individually prepared samples.
### DRIFT spectroscopy
Diffuse reflectance infrared Fourier transform (DRIFT) spectroscopy was performed on a BRUKER VERTEX 70 with a SPECAC Golden Gate DRIFT setup. Prior to the measurement, 2 mg of sample were mixed with 10–15 mg dry KBr in a mortar and pressed into the DRIFT cell. Assignments of peaks in wavenumber ν (cm−1) are categorized as strong (s), medium (m), or weak (w).
### DRUV-Vis spectroscopy
Diffuse reflectance solid-state UV-Vis (DRUV-Vis) spectra were recorded on a VARIAN CARY 4000. 2 mg of sample were mixed with 20–35 mg dry BaSO4 and pressed into the sample cell. To analyze MOF samples under inert atmosphere and in situ under various concentrations of n-butane, a HARRICK Praying Mantis reaction chamber was equipped with a dome containing UV-Vis-transparent quartz windows.
### Thermogravimetric analysis
Thermal analysis (TGA) was carried out in synthetic dry air using a NETZSCH STA 409 thermal analyser at a heating rate of 5 K min−1. Air sensitive MOF samples were prepared in an Ar-filled glovebox and inserted in the instrument with little exposure to ambient conditions.
### Powder X-ray diffraction
Powder X-ray diffraction (PXRD) patterns were collected in transmission geometry with a STOE STADI P diffractometer operated at 40 kV and 30 mA with monochromatic Cu-Kα1 (λ = 0.15405 nm) radiation, a scan speed of 15–30 s/step and a detector step size of 0.1–2° in 2θ. The samples were placed between non-diffracting adhesive tape or in a glass capillary. “As made” samples were analysed while suspended in DMF. Desolvated samples were prepared under inert atmosphere in an Ar-filled glovebox. Theoretical PXRD patterns were calculated on the basis of crystal structures using the Mercury 4.0 software package.
### SEM analysis of crystal size and morphology
Scanning electron microscopy (SEM) images of DUT-163 were taken with secondary electrons in a HITACHI SU8020 microscope using 1.0 kV acceleration voltage and 10.8 mm working distance. The powdered samples were prepared on a sticky carbon sample holder. To avoid degradation upon exposure to air, the samples were prepared under argon atmosphere. For each sample a series of images was recorded at different magnifications and for each sample three different spots on the sample holder were investigated. The crystal size refers to the edge length of the cubic crystals as they are the easiest to measure. The analysis of the SEM images was performed with ImageJ Software package49. Values for mean crystal size, as well as relative standard deviation (RSD) were obtained by using the ImageJ Analyse-Distribution function.
### Irradiation
For irradiation studies, LEDs from THORLABS (M365FP1 fiber-coupled LED with 365 nm nominal wavelength and M455F3 fiber-coupled LED with 455 nm nominal wavelength) were used. The LEDs were controlled by a THORLABS T-Cube™ LED driver with a maximum current of 1.2 A and a modulation mode of 0–5 kHz. The 365 nm LED was driven at 1.2 A, the 455 nm LED at 1 A. For irradiation in parallel to UV-Vis (solid and solution), IR, and Raman spectroscopy and in situ PXRD, the LEDs were mounted in close proximity to the sample. For in situ NMR studies a Ø400 µm fiber optic was used in a setup that was previously described in more detail50. Irradiation of the sample during gas adsorption with 365 nm was conducted with a CONSORT UV-lamp with 1800 µW/cm².
### Raman spectroscopy
Raman spectra in solution were recorded on a home-built system comprising a sample holder with magnetic stirrer and a 785 nm 400 mW laser (Cobolt, 08-NLDM) guided through a Raman probe and connected to a spectrograph (Andor™ Technology, Kymera 193i) equipped with a CCD camera (Andor™ Technology, iDus-416). Solid samples were packed in quartz capillaries and sealed under dry nitrogen atmosphere in a glovebox. Raman spectra of MOFs were recorded using a fiber-coupled Raman microscope equipped with a 785 nm (50 mW) or 633 nm (300 mW) laser.
|
# Series and their limits
1. Dec 7, 2008
hi :]
a couple of questions:
1) Using epsilon and N, write in a formal manner the following statement:
L is not the limit of the general series {an} when n goes from 1 to infinity.
2) prove the next sentence: if a series an is converging to a finite limit L, then the arithmetic averages of the series organs (terms?) are gathering to the same limit, meaning:
$$\lim_{n \to \infty} a_n = L \;\Longrightarrow\; \lim_{n \to \infty} \frac{a_1 + a_2 + \cdots + a_n}{n} = L$$
excuse my english.. not my strongest side.
I really wish I could write down my attempts to solve the question, but they are all in Hebrew and are too hard to translate since I'm not sure myself that I'm on the right path..
Thanks,
sharon.
2. Dec 7, 2008
### CompuChip
Recall the definition of limit:
$$\lim_{n \to \infty} a_n = L$$ means that $$\forall \epsilon > 0, \cdots$$ ?
Then for 1 negate that statement:
$$\lim_{n \to \infty} a_n \neq L$$ means that $$\neg(\forall \epsilon > 0, \cdots) \Leftrightarrow \exists \epsilon > 0, \cdots$$ ?
For 2, you will somehow need to estimate the arithmetic average (yes, they are called terms, although organs is a nice one as well ). That is, if you know that an comes arbitrarily close to L, then you want to show the same for (a1 + ... + an)/n.
3. Dec 7, 2008
thanks, but I didn't really understand (1) ..
4. Dec 7, 2008
### CompuChip
OK, first step:
what is the definition of
$$\lim_{n \to \infty} a_n = L$$
5. Dec 7, 2008
the limit exists if for each ε > 0 there exists an R such that |f(x) - L| < ε whenever x > R
so the limit does not exist when |f(x) - L| < ε whenever x < R ?
6. Dec 8, 2008
### CompuChip
Right.
No. The limit is not L, if it is not true that for each ε > 0 there exists an R such that |f(x) - L| < ε whenever x > R. In a first mathematics course you must have learned how to rewrite such a statement. Things like: if it is not true that all cows eat grass, then there must exist a cow who does not eat grass. In this case, your answer would start with: "the limit is not L, when there exists an ε > 0, ..."
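For reference, writing that negation out in full gives the statement the exercise is after (a standard rewriting; note the inequality flips from < to ≥):
$$\lim_{n \to \infty} a_n \neq L \;\Longleftrightarrow\; \exists\, \epsilon > 0\ \forall R\ \exists\, n > R:\ |a_n - L| \geq \epsilon$$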
|
## Banach Center Publications
1999 | 46 | 1 | 23-62
Article title
### On problems of databases over a fixed infinite universe
Publication languages
EN
Abstracts
EN
In the relational model of databases a database state is thought of as a finite collection of relations between elements. For many applications it is convenient to pre-fix an infinite domain where the finite relations are going to be defined. Often, we also fix a set of domain functions and/or relations. These functions/relations are infinite by their nature. Some special problems arise if we use such an approach. In the paper we discuss some of the problems. We show that there exists a recursive domain with decidable theory in which (1) there is no recursive syntax for finite queries, and in which (2) the state-safety problem is undecidable. We provide very general conditions on the FO theory of an ordered domain that ensure collapse of order-generic extended FO queries to pure order queries over this domain: the Pseudo-finite Homogeneity Property and a stronger Isolation Property. We further distinguish one broad class of ordered domains satisfying the Isolation Property, the so-called quasi-o-minimal domains. This class includes all o-minimal domains, but also the ordered group of integer numbers and the ordered semigroup of natural numbers, and some other domains. We generalize all the notions to the case of finitely representable database states - as opposed to finite states - and develop a general lifting technique that, essentially, allows us to extend any result of the kind we are interested in, from finite to finitely-representable states. We show, however, that these results cannot be transferred to arbitrary infinite states. We prove that safe $Datalog^{¬,<_z}$-programs do not have any effective syntax.
Pages
23-62
Published
1999
Contributors
author
• Department of Mathematics, Kemerovo State University, Kemerovo, Russia 650043
• Fourth Dimension Software, 555 Twin Dolphin Dr., Redwood City, CA 94404, and UCLA Mathematics Department, Los Angeles, CA 90095
author
• Department of Computer Science, Tver State University, Tver, Russia 170000
|
# Chemical Reactions of Transition Metal Complexes
The most common reaction is the substitution reaction:
$ML_6 + X \rightarrow ML_5X + L$
In solution phase, all reactions are substitution reactions: if there is no "ligand", then the initial complex is a solvate.
$MX_2(s) \xrightarrow{\;H_2O\;} [M(OH_2)_6]^{2+}(aq) + 2\,X^-(aq)$
$MX_2(s) \xrightarrow{\;CH_3CN\;} [M(NCCH_3)_6]^{2+}(sol) + 2\,X^-(sol)$
Water is a good choice for solvent because it dissolves many salts and it is a weak ligand, thereby acting as a good leaving group. Remember: strong ligands will replace weaker ligands to increase the total LFSE.
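As a rough illustration of that rule of thumb, the sketch below encodes a truncated spectrochemical series and predicts whether an incoming ligand will displace a bound one; the series ordering is standard, but the helper function itself is hypothetical:

```python
# Truncated spectrochemical series, weak field -> strong field.
SERIES = ["I-", "Br-", "Cl-", "F-", "OH-", "H2O", "NH3", "en", "CN-", "CO"]

def will_substitute(incoming, bound):
    """True if the incoming ligand is stronger-field than the bound one,
    i.e. substitution increases the total LFSE (thermodynamic rule of thumb)."""
    return SERIES.index(incoming) > SERIES.index(bound)

print(will_substitute("NH3", "H2O"))  # True: ammonia displaces aqua ligands
print(will_substitute("Cl-", "H2O"))  # False: chloride will not displace water
```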
### Thermodynamics
Substitution reactions are equilibria:
$[M(OH_2)_6]^{2+}(aq) + L \rightleftharpoons [M(OH_2)_5L]^{2+}(aq) + H_2O(l)$
$$K_f$$ is called a formation constant. There is a formation constant for each addition of a ligand:
$[M(OH_2)_6]^{2+}(aq) + L \rightleftharpoons [M(OH_2)_5L]^{2+}(aq) + H_2O(l) \qquad K_{f1}$
$[M(OH_2)_5L]^{2+}(aq) + L \rightleftharpoons [M(OH_2)_4L_2]^{2+}(aq) + H_2O(l) \qquad K_{f2}$
$[M(OH_2)_4L_2]^{2+}(aq) + L \rightleftharpoons [M(OH_2)_3L_3]^{2+}(aq) + H_2O(l) \qquad K_{f3}$
etc.
$K_{fi}$ are called step-wise formation constants. The overall equilibrium constant for the formation of a complex is $\beta = K_{f1}K_{f2}K_{f3}\cdots$ (a short numeric sketch follows the list below). The size of $\beta$ depends upon a number of factors:
• the ligand field strength of the ligand
• the charge/size of the metal ion
• the presence of chelation and ring formation
• the degree of substitution
• steric effects
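As referenced above, a minimal numeric sketch of how β is assembled from the step-wise constants (the K values here are hypothetical, chosen only to illustrate the product):

```python
from functools import reduce

# Hypothetical step-wise formation constants K_f1, K_f2, K_f3.
K_f = [1.0e4, 5.0e3, 1.2e3]

# beta is the running product of the step-wise constants.
beta = reduce(lambda acc, k: acc * k, K_f, 1.0)
print(f"beta = {beta:.2e}")  # -> beta = 6.00e+10 for these example values
```

Note how β grows quickly: each successive ligand addition multiplies, rather than adds to, the overall constant.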
### Kinetics
The rates of substitution reactions can vary over many orders of magnitude, from ns to years.
• Complexes that undergo fast substitution reactions are called labile.
• Complexes that undergo slow substitution reactions are called inert.
The cutoff time between the two categories is somewhat arbitrary, but usually taken to be around a minute. Ions with large LFSE (especially d3 and low-spin d6) are inert. d0, high-spin d5, and d10 ions are usually very labile. Other ions fall in between.
There are two common, limiting mechanisms of substitution: Associative and Dissociative.
#### Associative
happens for 4- or 5-coordinate complexes; less often for 6-coordinate.
The substituting ligand adds to an open coordination site, then pushes out the leaving group and replaces it in the complex. The rate law is first order in both the complex and L.
#### Dissociative
common for octahedral complexes but also can be found for lower coordination numbers.
The leaving group exits, then the substituting ligand bonds. The rate law is first order in complex, zero order in ligand.
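A minimal sketch contrasting the two limiting rate laws (the rate constants and concentrations are hypothetical, purely for illustration):

```python
# Associative: rate = k [complex][L]  (first order in both)
def rate_associative(k, c_complex, c_ligand):
    return k * c_complex * c_ligand

# Dissociative: rate = k [complex]    (zero order in L)
def rate_dissociative(k, c_complex, c_ligand):
    return k * c_complex

for c_L in (0.1, 0.2):  # doubling [L]...
    print(rate_associative(2.0, 0.01, c_L),   # ...doubles the associative rate
          rate_dissociative(2.0, 0.01, c_L))  # ...leaves the dissociative rate unchanged
```

Measuring how the observed rate responds to the ligand concentration is therefore a standard way to distinguish the two mechanisms.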
|
# Volcano Watch — Measuring the mountains: Ground deformation of Hawaii's volcanoes
Displacement of benchmarks on the south flank of Kīlauea Volcano, measured by GPS surveys between 1993 and 1996. The direction and length of displacement is shown by the arrows. Note scale of arrows.
(Public domain.)
The ground's surface around the active Hawaiian volcanoes Kīlauea and Mauna Loa is constantly changing. Lava flows laminate their sides during active eruptions. Less obvious, but more widespread, are the subtle movements that occur in response to the movement of magma within the volcano. The distribution and rate of these movements provide clues about processes occurring within the volcano and help us forecast impending eruptions, large earthquakes, or landslides.
Scientists at the U.S. Geological Survey's Hawaiian Volcano Observatory (HVO) monitor ground deformation around the volcanoes of Hawaii by periodically surveying the positions of a large number of bench marks. You may have seen one of our bench marks along a road or on a hill top; they are inscribed metal tablets set in rock or concrete. The accumulated ground movement between surveys is simply the observed change in position of the bench mark.
We recently completed our annual Global Positioning System (GPS) survey of the Big Island. Our surveying equipment and technique allow us to measure position changes to a fraction of an inch. The arrows on the accompanying figure show the average rate and direction that our bench marks moved (horizontally) between 1993 and 1996.
We observe Kīlauea's south flank moving seaward at up to three inches per year. This area experienced a magnitude 7.2 earthquake in 1975; it is also where the most spectacular palis (the Hawaiian word for cliffs) are found. The southeast flank of Mauna Loa is also moving seaward, but at a slower rate. This region of Mauna Loa experienced a magnitude 6.7 earthquake in 1983. Although these motions are a small fraction of those that occurred during the earthquakes, they indicate that the forces that produced the earthquakes and created the palis are still active.
Results of our vertical measurements show continuing inflation of Mauna Loa's summit region. About half of the deflation that occurred during the 1984 eruption has been recovered. We are watching Mauna Loa closely and expect that any impending eruption will be preceded by a recognizable increase in the number of earthquakes near its summit.
The vertical changes also indicate subsidence of Kīlauea's summit region. The deflation of Kīlauea's summit is probably due to more lava being erupted during the ongoing Puu Oo eruption than magma is being supplied to the volcano.
### Volcano Activity Update
The Kīlauea eruption continues unabated, and flows enter the ocean in the Laeapuki region. The level of the lava pond within Puu Oo fluctuates between 275 and 325 feet below the lowest part of the rim. At night, the fluctuating pond level often causes a bright glow to reflect off the fume cloud over the cone.
Since July 16, the HVO seismic network has recorded over 2,400 earthquakes from Lo`ihi Volcano. Forty of the temblors had magnitudes over 4.0, with two earthquakes at 3:25 a.m. on July 23 and at 7:38 a.m. on July 24 registering a magnitude of 4.9.
|
# Forces of a conical pendulum in the reference frame of the ball (of the pendulum)
I'm a bit confused about the forces (specifically their magnitudes) involved in a conical pendulum, in the reference frame of the ball of the pendulum. I'd like to consider this ball to be in contact with the floor.
So the ball will experience the weight, the normal force (touching the ground), the tension force (along the string), and a centrifugal force (radially outwards). Would the horizontal component of the tension force be called the centripetal force, or would you call the tension force itself the centripetal force?
Would this centrifugal force have a magnitude of $m\omega^2 r$? Or would this be the magnitude of the centripetal force? Saying that the centrifugal force is equal to $m\omega^2 r$ seems to give the same answer as the regular conical pendulum problem.
Any clarifications would be appreciated, and thank you for your time!
First, I would like to quote the definition of centripetal force given on Wikipedia:
A centripetal force is a force that makes a body follow a curved path. Its direction is always orthogonal to the motion of the body and towards the fixed point of the instantaneous center of curvature of the path
Now, in the given case, looking from the ground frame the ball is moving in a circle, so there must be a centre-seeking force on it. Here only a component of the tension points towards the centre (the normal force from the ground, the ball's weight, and the tension's other component are all perpendicular to the plane of the circle, and these are the only forces on it).
Talking about the centrifugal force: it is not a real force but a pseudo-force. The ball has variable velocity (the direction is changing), implying it has an acceleration (the centripetal acceleration). So when we look in the ball's frame of reference, we are in a non-inertial frame of reference and need to add a pseudo-force, which we call the centrifugal force.
Now coming to magnitude: $$\mathit m\omega^2r$$ is the magnitude of the centripetal force, which in this case is the component of tension directed towards the circle's centre. When you change frame from the ground to the ball's, the pseudo-force, the centrifugal force, has magnitude equal to the mass times the ball's acceleration, which is precisely equal to $$\mathit m\omega^2r$$, and is directed exactly opposite to the centripetal force.
Hope this helps.
Assuming that in the frame of the laboratory the mass is undergoing circular motion at constant speed in the horizontal plane, the horizontal component of the tension, pointing inwards towards the centre of the horizontal circle, produces the centripetal acceleration of the mass.
The horizontal component of the tension force is $$mr\omega^2$$ where $$m$$ is the mass of the body, $$r$$ the radius of the horizontal circle and $$\omega$$ the angular speed of the mass.
In the frame of the mass, the mass is not moving and certainly not accelerating, yet the forces acting on the mass have a net component equal to the horizontal component of the tension towards the centre of the horizontal circle as seen in the laboratory frame.
In order to be able to use Newton’s laws in the frame of the mass, an extra force is added, in effect converting the situation into a statics problem with no net force on the mass.
This fictitious/pseudo force, which you have called the centrifugal force, has a magnitude equal to the horizontal component of the tension, $$mr\omega^2$$, and is directed away from the centre of the horizontal circle.
Adding this force to the mass means that now the net force on the mass is zero.
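For concreteness, here is a minimal force balance in the rotating frame of the ball, assuming the string makes an angle $\theta$ with the vertical and the floor pushes up with normal force $N$ (the symbols $\theta$ and $N$ are my additions, not the original poster's):

$$T\sin\theta = mr\omega^2 \qquad T\cos\theta + N = mg$$

The first equation says the horizontal component of the tension exactly cancels the centrifugal pseudo-force $mr\omega^2$; seen from the laboratory frame, that same component is what supplies the centripetal force.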
|
# Theta Notation
The symbol theta is often used as a variable to represent an angle in illustrations, functions, and equations.
## Usage
Angle
An angle is defined as the amount of rotation between two rays. Angles are measured using degrees and radians. A full rotation in degrees is 360°. A full rotation in radians is approximately 6.283 radians, i.e. τ (tau) or 2π radians.
Right Triangle
A right triangle is a triangle where one of the three angles is a perpendicular angle. There are three sides of the right triangle: the adjacent, opposite, and hypotenuse sides.
Unit Circle
The unit circle is a circle of radius one placed at the origin of the coordinate system. This article discusses how the unit circle represents the output of the trigonometric functions for all real numbers.
|
## Propagation
### How radio waves propagate?
In the article, How antennas radiate?, we established that radio waves are produced from an energy source consisting of oscillating magnetic and electric fields. These radio waves radiate outwards (similar to the ripples in a pond) at the speed of light (300,000,000 m/s in a vacuum).
Many people assume that radio waves travel a fixed distance after which they are no longer receivable. While it's true that radio waves become weaker the further they travel, as long as they are still distinguishable from the noise, they can still be received. For example, the Voyager spacecraft launched in 1977 are now many billions of kilometres away, but thanks to the Deep Space Network, communication is possible with extremely large parabolic “dish” antennas with very high gain.
Predicting how the radio wave travels can be difficult as the environment has a large influence. So to assist the engineer during the design of the radio system, this article introduces Circuit Design’s various calculation tools for radio wave propagation.
### Factors that limit radio wave propagation
#### Transmit power
Increasing the radiated power will improve the chances of signal reception at the receiver. Even so, it may introduce other problems, such as radio signals ending up where they are not supposed to and causing interference to other users. Radio regulations cover many parameters, one of which is EIRP, which limits radiated power. (Refer to Gain (EIRP and ERP) for more information.)
Also, higher transmit power generally means higher current consumption, which may be a problem if batteries are used.
#### Frequencies used
Generally speaking, lower frequency waves travel further than higher frequency waves. For example, people ask why AM radio can be heard from many hundreds of kilometres away while FM radio is only heard locally, and assume there is something characteristic about AM itself. The actual reason is that AM historically used lower frequencies (in the LF* to HF* range), and there are certain propagation mechanisms (such as atmospheric skip) that enable low frequencies to travel around the globe.
So a higher frequency such as 2.4 GHz might travel only a few hundred metres, compared to several hundred metres at 434 MHz.
* LF: Low Frequency, HF: High Frequency
#### Antenna positioning
Buildings and trees can block the signal so if possible the antennas should be placed high up to clear them. In addition antennas must maintain a zone of clearance between them called the Fresnel Zone (described later in this article). The Fresnel clearance becomes smaller at higher frequencies so it will be antenna placement and not necessarily height that optimises communication.
#### Noise and interference susceptibility
In the environment there exist many sources of electrical noise (e.g. lights, ignition systems, motors). AM and ASK signals are the most susceptible, since noise adds directly to the amplitude variations. On the other hand, FM and FSK signals are more robust, as the received signal depends only on frequency variations.
In digital communications with many users sharing the same frequency band, various encoding schemes such as frequency hopping and direct sequence spread spectrum allow reliable communication where interference is high.
#### Obstructions
The places with few or no obstructions where communication can happen effectively include open waters, ground to aircraft and satellite communications. In most cases however, communication takes place in and around buildings, around trees and mountains etc. There are several ways radio waves can interact with objects as shown.
Various propagation paths
##### Absorption and reflection
When a radio wave strikes a flat surface, part of the wave will be reflected, part absorbed, and part will pass through the material. The amount of each depends on the composition of the obstruction. A perfect conductor (e.g. metal, sea water) reflects all radio waves.
Some materials are able to absorb almost all radio waves. An example found in anechoic chambers is RF-absorbing foam impregnated with conductive or magnetic materials such as carbon, iron, and ferrite.
Radio wave absorbing material used on walls and ceiling of the anechoic chamber
##### Diffraction
According to Huygens’ principle, every point on a wave front acts as a source of secondary wavelets which combine to produce a new wave front in the direction of propagation. When these wavelets are allowed to enter the shadowed region created by the obstruction, the result is diffraction, which allows the radio wave to “curve” around obstacles. The amount of this “curving” depends on the wavelength. Lower frequencies such as the AM broadcast band can travel around mountains easily (due to their larger wavelengths), allowing waves to travel over the ground for long distances (called ground waves).
Higher frequencies with their shorter wavelengths are much less diffracted relying more on line of sight.
##### Communication medium
Generally speaking, the denser the material, the more trouble radio waves will have propagating through it. You will have noticed this when the radio or your GPS stops working on entering a tunnel. Even if the surrounding soil is not particularly dense, the sheer amount of it is enough to block the radio waves. Very dense materials such as lead, or the concrete used for x-ray and gamma shielding, will also block radio waves.
Regarding liquids, it is possible for radio waves to travel but seawater is a problem as it is conductive. In these conditions, MHz frequencies cannot be used so submarines use 3 – 30 kHz (Very Low Frequency) for communication.
#### Fading
Due to the various propagation paths mentioned, the receiver sees radio waves that are phase shifted from each other. As these waves combine, the receiver encounters both high and low level reception spots. This is known as fading.
#### Doppler effect
In mobile communications, if the transmitter and receiver are moving away from or approaching each other, the result is a small frequency drift from the carrier frequency. This is similar to what occurs with sound waves, where the sound from a passing ambulance with a siren seems to change in pitch. The severity of the frequency drift depends on the wavelength and the relative speed between the transmitter and receiver. In everyday situations, relative speeds are tiny compared to the speed of the radio wave, so any drift is imperceptible. Any fluctuations in signal level are instead more likely to come from e.g. signal reflection.
### Propagation models
We established that radio waves have various modes of propagation. To realise this mathematically, there are standard models that can be used as a basis when designing your radio system.
#### Free space model
The simplest model is propagation in free space.
The level seen at the receiver decays as a function of the transmitter-receiver separation distance d (metres) and the wavelength λ (metres). Mathematically, it can be expressed as:
$$\text{Path Loss (dB)} = 20\log_{10}\!\left(\frac{4\pi d}{\lambda}\right)$$
Because the loss follows an inverse square law, doubling the distance leaves the receiver with only a quarter of the previously received power (an extra 6 dB of path loss).
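As a quick sanity check, here is a minimal Python sketch of the formula above (the function name and example values are illustrative, not part of Circuit Design's tools):

```python
import math

def free_space_path_loss_db(distance_m: float, freq_hz: float) -> float:
    """Free space path loss in dB: 20*log10(4*pi*d/lambda)."""
    wavelength_m = 3.0e8 / freq_hz                  # lambda = c / f
    return 20 * math.log10(4 * math.pi * distance_m / wavelength_m)

# Doubling the distance adds about 6 dB of loss:
print(free_space_path_loss_db(200, 434e6))  # ~71.2 dB
print(free_space_path_loss_db(400, 434e6))  # ~77.2 dB
```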
#### The 2 path model
If a reflective surface exists, the radio wave will follow 2 paths – a direct path and a reflected path. There will be a length difference between both paths with the receiver seeing another identical wave, but delayed. This delayed signal can add to or subtract from the direct signal depending on the phase difference.
If we assume constant frequency, this phase difference depends on the path difference, which in turn depends on the antenna heights and the separation between the transmitter and receiver. If we take an example where the transmitter is a base station and the receiver is a mobile station, then as we move further from the base station, the interference between the 2 waves will alternately produce a larger or a smaller signal.
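A minimal sketch of the 2 path idea, assuming a flat, perfectly reflecting ground (reflection coefficient −1); the parameter names and values are illustrative only:

```python
import numpy as np

def two_ray_field_db(d, h_tx, h_rx, wavelength):
    """Relative field strength (dB) of the direct plus ground-reflected ray."""
    direct = np.hypot(d, h_tx - h_rx)        # direct path length
    reflected = np.hypot(d, h_tx + h_rx)     # reflected path length
    k = 2 * np.pi / wavelength               # wavenumber
    # Sum the two rays as complex phasors; the reflection flips the phase.
    field = np.exp(-1j * k * direct) / direct - np.exp(-1j * k * reflected) / reflected
    return 20 * np.log10(np.abs(field))

d = np.linspace(10, 200, 1000)               # distances in metres
ripple = two_ray_field_db(d, 5.0, 5.0, 3e8 / 434e6)
# Plotting `ripple` against `d` reproduces the characteristic
# alternating peaks and drop-offs of the 2 path model.
```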
##### Wave propagation calculation tool example
Circuit Design has a calculation tool to demonstrate the difference between free space and the 2 path model. In practical communications, there are other losses in addition to the free space loss. If obstructions are not too severe, one can normally use the 2-path model as an approximation.
Let us use an example signal at 434 MHz, 10 mW RF power, using 2.14 dBi antennas for transmitter and receiver. Let the antenna height be 5 m and the distance between them 200 m. For the 2 path model, the propagation behaviour is erratic, with interference between the 2 waves causing many signal drop-offs, especially at close range. Susceptibility to drop-offs is dependent on wavelength, meaning that at higher frequencies drop-offs become much more frequent.
Free space (left) and 2 path model (right)
#### Fresnel zone
Line of sight communication does not only mean the antennas can see each other; it also requires a certain amount of clearance from objects. This zone of clearance is called the Fresnel zone and must be maintained to avoid signal loss.
When radio waves radiate, every point on the wave produces secondary wave fronts. This means that the receiver does not see all the radio wave travelling in a single plane, but also slightly above and below it with the direct path contributing the most energy. Any obstruction that blocks these wave fronts will attenuate the signal. Therefore it is important to keep this area clear of any obstructions.
1st Fresnel zone – obstructed and clearing the obstruction by raising antenna height
##### Fresnel zone calculation tool
Circuit Design includes a calculation tool which calculates the minimum antenna height off the ground so that the Fresnel zone is not obstructed. If the antennas are placed so that at least 60% of the Fresnel radius is kept clear, the signal level will not be significantly affected.
Note that the Fresnel zone becomes smaller with higher frequency and/or shorter transmitter-receiver distance. If it is difficult to get line of sight, then the antennas need to be installed in the optimum location. Regarding the installation height of the antenna, it may be necessary to consider other factors such as the height pattern as well as the Fresnel zone.
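For reference, the first Fresnel zone radius at a point between the antennas follows a standard formula; the sketch below is a textbook approximation, not Circuit Design's tool, and its names are illustrative:

```python
import math

def first_fresnel_radius_m(d1_m, d2_m, freq_hz):
    """Radius of the first Fresnel zone at a point d1 from one antenna, d2 from the other."""
    wavelength_m = 3.0e8 / freq_hz
    return math.sqrt(wavelength_m * d1_m * d2_m / (d1_m + d2_m))

# Midpoint of a 200 m link at 434 MHz; keep at least 60% of this clear.
r = first_fresnel_radius_m(100, 100, 434e6)
print(r, 0.6 * r)   # ~5.9 m and ~3.5 m
```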
#### Antenna height pattern
Once the antenna location is decided, the exact height of the receiving antenna needs to be adjusted in order that the signals arrive at the antenna in phase. This can be achieved by moving the antenna up and down while monitoring the received signal level. To assist the engineer, Circuit Design provides a calculation tool that gives the height pattern pitch in metres. You should be able to find the signal peak in this range.
Height pattern pitch
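For the simple 2 path geometry, the pitch can be estimated from the two-ray phase difference. Assuming transmitter antenna height $h_t$, link distance $d$ and wavelength $λ$ (a textbook approximation that may differ from the exact formula used by the tool):

$$\text{pitch} \approx \frac{λd}{2h_t}$$

For example, at 434 MHz ($λ \approx 0.69$ m) with $d$ = 200 m and $h_t$ = 5 m, the received maxima repeat roughly every 14 m of receiver height, so at such short ranges the height pattern is very coarse.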
#### Okumura Hata model
The Okumura Hata model predicts path loss for mature cellular and land mobile communication systems in various environments: open land, suburbs, medium cities and large cities. It is designed for distances of 1 km to 20 km; below this range the values are not as reliable. Also, the actual environment can differ from the one in which the original measurements were taken, so this tool should be used as a basic predictor of reception. The conditions applied to this model and the following calculation tool are as follows:
Frequency f (MHz): 150 MHz to 1.5 GHz
Communication distance d (km): 1 km to 20 km
Base station antenna height hb (m): 30 m to 200 m
Mobile antenna height hm (m): 1 m to 10 m
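As an illustration, here is a minimal Python sketch of the standard Okumura-Hata median path loss formula for medium-small cities under the conditions above; treat it as a textbook approximation rather than a reproduction of Circuit Design's calculation tool:

```python
import math

def hata_path_loss_db(f_mhz, d_km, hb_m, hm_m):
    """Okumura-Hata median path loss (urban, medium-small city), in dB."""
    # Mobile antenna height correction factor a(hm).
    a_hm = (1.1 * math.log10(f_mhz) - 0.7) * hm_m - (1.56 * math.log10(f_mhz) - 0.8)
    return (69.55 + 26.16 * math.log10(f_mhz) - 13.82 * math.log10(hb_m)
            - a_hm + (44.9 - 6.55 * math.log10(hb_m)) * math.log10(d_km))

# Example: 434 MHz, 5 km, 30 m base station, 1.5 m mobile antenna.
print(hata_path_loss_db(434, 5, 30, 1.5))   # roughly 143 dB
```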
##### Okumura-Hata curve calculation tool
To use the calculation tool, click here to visit the page.
Okumura Hata curves calculated based on the parameters entered
Input the frequency in MHz, the transmitter output power, antenna heights, gains, and distance. The graph will show either the electric field strength up to the specified distance or the corresponding output power from the receiver antenna.
#### Conclusion
The topic of propagation is extremely large, with much of it beyond the scope of this article. This article is meant as a guide for the engineer when designing his/her radio system, without touching too much on the mathematical concepts. For reference purposes, however, the formulas used for calculation are included on the calculation tool pages.
|
# How to solve system's general stability from transfer function?
I have a homework problem that I should solve. My problem is that the questions seem really simple. Or should I think outside the box and use, say, a Bode diagram or a Nyquist plot? And are my answers correct?
Thanks.
### Question-1
$$G(s) = K\dfrac{As+1}{Bs+1}$$
For which values $K, A$ and $B$ is the system always stable? Should I look directly to the pole of the system?
• $Bs+1=0$
• $s=-1/B$, so we must have $B>0$
Is it enough? Or, anything else? What about K, A?
### Question-2
$$G(s) = K\dfrac{As+1}{(Bs+1)(Cs+1)}$$
For which values $K, A, B$ and $C$ is the system always stable? Should I look directly to the pole of the system or anything else?
$$Bs+1=0 \wedge Cs+1=0$$ $$s=-1/B \wedge s=-1/C$$ $$\implies B>0 \wedge C>0$$
Is it enough? Or, anything else? What about $K$ and $A$?
So question 1 is pretty straightforward and you already got it right. If there's no right half plane (RHP) pole, then it doesn't matter what gain you choose. Even for $A = B$, $G(s) = K$ yields a finite response.
For Question 2 have a look at the Routh Hurwitz Array
\begin{array} {|r|r|} \hline s^2 & B \cdot C & 1 \\ \hline s^1 & B+C & 0\\ \hline s^0 & 1 & \\ \hline \end{array}
In order for the system to be stable there must not be any sign changes in the first column, hence
$$BC > 0 \quad \land \quad B+C > 0$$
From $BC > 0$ we derive that B and C must have the same sign. $B+C > 0$ yields that the sign has to be positive.
As you see neither $A$ nor $K$ are involved in that.
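A quick numeric check of this result, assuming NumPy is available (the sample values of $B$ and $C$ are arbitrary):

```python
import numpy as np

# Denominator of G(s) = K(As+1)/((Bs+1)(Cs+1)) is B*C*s^2 + (B+C)*s + 1.
for B, C in [(2.0, 3.0), (-2.0, -3.0), (2.0, -3.0)]:
    poles = np.roots([B * C, B + C, 1.0])
    print(f"B={B}, C={C}: poles={poles}, stable={bool(np.all(poles.real < 0))}")

# Only B > 0 and C > 0 places both poles in the left half plane,
# independent of K and A.
```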
If you want to explore other methods like root locus, Bode, ... keep in mind that you have variables ($A$, $B$, $C$) in there. I know that you can see the gain margins for root loci in Python, Matlab, etc., but I think that's it. I don't think (but I stand to be corrected) that you can derive the values for $A$, $B$, $C$ that way. With Bode plots this may work; however, as you've seen, it's much easier to solve with Hurwitz or by just looking at the poles.
You did answer both questions correctly. You could also consider solving both questions using the Hurwitz criterion. The first one is directly solvable with the Hurwitz criterion; the second one is a little bit involved :).
|
## Why is it so hard to prove that e+pi or e*pi is irrational/rational?
The reason why it is so hard to prove is actually very easy to answer. These constants, identities, and variations referred to in this post, and others like it, all lie embedded in a far deeper substrate than current mathematics has yet explored.
Mathematics has been, and always shall be my ‘first love’, and it has provided for me all of these years. I am not criticising mathematics in any way. It is my firm belief that mathematics will overcome this current situation and eventually be quite able to examine these kinds of questions in a much more expansive and deeper way.
We need to extend our examination of mathematical knowledge, both in depth and in scope, out farther and in deeper than numbers (sets and categories as well – even more below) have yet done. I’ll introduce you to a pattern you may have already noticed in the current stage of our mathematical endeavour.
We all know there are numbers which lie outside of Q, which we call Irrational numbers. There are also numbers which lie outside of R, which we call Imaginary numbers. They have both been found because the domain of questioning exceeded the range of answers available within the properties of each of those number systems. This pattern continues in other ways, as well.
We also know there are abstractions and/or extensions of Complex numbers where the ‘air starts to get thin’ and mathematical properties start to ‘fade away’: Quaternions, Octonions, Sedenions,…
This pattern continues in other ways: Holors, for example, which extend and include mathematical entities such as Complex numbers, scalars, vectors, matrices, tensors, Quaternions, and other hypercomplex numbers, yet are still capable of providing a different algebra which is consistent with real algebra.
The framing of our answers to mathematical questions is also evolving. Logic, for example, was limited to quite sophisticated methods that were all restricted to a Boolean context. Then we found other questions which led to boundary, multi-valued, fuzzy, and fractal logics, among a few others I haven't mentioned yet.
Even our validity claims are evolving. We are beginning to ask questions which require answers which transcend relationship properties such as causality, equivalence, and inference in all of their forms. Even the idea of a binary relationship is being transcended into finitary versions (which I use in my work). There are many more of these various patterns which I may write about in the future.
They all have at least one thing in common: each time we extend our reach in terms of scope or depth, we find new ways of seeing things which we saw before and/or see new things which were before not seen.
There are many ‘voices’ in this ‘mathematical fugue’ which ‘weaves’ everything together: they are the constants, variations, identities, and the relationships they share with each other.
The constants e, π, i, ϕ, c, g, h all denote or involve ‘special’ relationships of some kind. Special in the sense that they are completely unique.
For example:
• e is the identity of change (some would say proportion, but that’s not entirely correct).
• π is the identity of periodicity. There’s much more going on with $\pi$ than simply being a component of arc or, in a completely different context, a component of area.
These relationships actually transcend mathematics. Mathematics ‘consumes’ their utility (making use of those relationships), but they cannot be ‘corralled in’ as if they were ‘horses on the farm’ of mathematics. Their uniqueness cannot be completely understood via equivalence classes alone.
• They are ubiquitous and therefore not algebraic.
• They are pre-nascent to number, equivalence classes, and validity claims and are therefore not rational.
These are not the only reasons.
It’s also about WHERE they are embedded in the knowledge substrate compared to the concept of number, set, category…. They lay more deeply embedded in that substrate.
The reason why your question is so hard for mathematics to answer is because our current mathematics is, as yet, unable to decide. We need to ‘see’ these problems with a more complete set of ‘optics’ that will yield them to mathematical scrutiny.
Question on Quora
## Getting Hypertension About Hyperreals
(Links below)
This system is quite interesting if we allow ourselves to talk about the qualities of infinite sets as if we can know their character completely. The problem is, any discussion of an infinite set includes its definition, which MAY NOT be the same as any characterisation it may actually have.
Also, and more importantly, interiority as well as exteriority are accessible without the use of this system. These ‘Hyperreals’ are an ontological approach to epistemology via characteristics/properties we cannot really know. No validity claim in this system can be both true and verifiable.
## Knowledge Representation – Fractal Torus 1
Fractal Torus 1 by Ryan Cameron on YouTube
## Lateral Numbers – How ‘Imaginary Numbers’ May Be Understood
First, allow me to rename these numbers, for the remainder of this post, to lateral numbers, in accordance with the naming convention recommended by Gauss. I have a special reason for using this naming convention. It will later become apparent why I’ve done this.
If we examine lateral numbers algebraically, a pattern emerges:
### $i^1 = i,\quad i^2 = -1,\quad i^3 = -i,\quad i^4 = 1$
### $i^5 = i,\quad i^6 = -1,\quad i^7 = -i,\quad i^8 = i^4 \cdot i^4 = (1)(1) = 1$
When we raise lateral numbers to higher powers, the answers do not get higher and higher in value like other numbers do. Instead, a pattern emerges after every 4th multiplication. This pattern never ceases.
All other numbers, besides laterals, have a place on what currently is called the ‘Real number line’.
I qualify the naming of the Real Numbers, because even their conceptualisation has come into question by some very incisive modern mathematicians. That is a very ‘volatile’ subject for conventional mathematicians and would take us off on a different tangent, so I’ll leave that idea for a different post.
If we look for laterals on any conventional Real number line, we will never ‘locate’ them. They are found there, but we need to look at numbers differently in order to ‘see’ them.
Lateral numbers solve one problem in particular: to find a number which, when multiplied by itself, yields a negative number.
Lateral numbers unify the number line with the algebraic pattern shown above.
2 is positive and, when multiplied by itself, yields a positive number. It maintains direction on the number line.
When one of the two numbers being multiplied is negative (leaving squaring aside briefly), the multiplication yields a negative number. The direction ‘flips’ 180° into the opposite direction.
Multiplying -2 by -2 brings us back to the positive direction, because multiplying by a negative number always flips our direction on the number line.
So, it appears as if there’s no way of landing on a negative number, right? We need a number that only rotates 90°, instead of the 180° when using negative numbers. This is where lateral numbers come into play.
If we place another lateral axis perpendicular to our ‘Real’ number line, we obtain the desired fit of geometry with our algebra.
When we multiply our ‘Real’ number 1 by i, we get i algebraically, which geometrically corresponds to a 90° rotation from 1 to i.
Now, multiplying by i again results in i squared, which is -1. This additional 90° rotation equals the customary 180° rotation when multiplying by -1 (above).
We may even look at this point as if we were viewing it down a perpendicular axis of the origin itself (moving in towards the origin from our vantage point, through the origin, and then out the back of our screen).
###### [If we allow this interpretation, we can identify the ‘spin’ of a point around the axis of its own origin! The amount of spin is determined by how much the point moves laterally in terms of i. We may even determine in which direction the rotation is made. I’ll add how this is done to this post soon.]
Each time we increase our rotation by multiplying by a factor of i, we increase our rotation another 90°, as seen here:
and,
The cycle repeats itself on every 4th power of i.
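You can watch this four-step cycle directly in Python, where the built-in constant 1j plays the role of i (a small illustrative check, not part of the original post):

```python
# Powers of i cycle with period 4: i, -1, -i, 1, i, -1, -i, 1
for n in range(1, 9):
    print(f"i^{n} = {1j ** n}")

# Multiplying by i rotates a point 90 degrees about the origin:
point = 1 + 0j
for _ in range(4):
    point *= 1j
    print(point)   # 1j, then -1, then -1j, then back to 1
```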
We could even add additional lateral numbers to any arbitrary point. This is what I do in my knowledge representations of holons. For example a point at say 5 may be expressed as any number of laterals i, j, k,… simply by adding or subtracting some amount of i, j, k,…:
5 + i + j +k +…
Or better as:
[5, i, j, k,…]
Seeing numbers in this fashion makes a point n-dimensional.
## Are sets, in an abstract sense, one of the most fundamental objects in contemporary mathematics?
Yes and no.
The equivalence relation lies deeper within the knowledge representation and its foundation.
There are other knowledge prerequisites which lie even deeper within the knowledge substrate than the equivalence relation.
The concepts of a boundary, of quantity, membership, reflexivity, symmetry, transitivity, and relation are some examples.
http://bit.ly/2wPV7RN
## Universal Constants, Variations, and Identities #19 (Inverse Awareness)
Universal Constants, Variations, and Identities
#19 The Inverse Awareness Relation
The Inverse Awareness Relation establishes a fundamental relationship in our universe:
$$\text{Micro Awareness} = \dfrac{1}{\text{scope}} \qquad \text{Macro Awareness} = \dfrac{1}{\text{depth}}$$

or

$$\dfrac{\text{Micro Awareness}}{\text{Macro Awareness}} = \dfrac{\text{depth}}{\text{scope}}$$

Which essentially states:

The closer awareness is in some way to an entity, the more depth and the less scope it discerns. The farther awareness is in some way from an entity, the more scope and the less depth it discerns.

(Be careful: this idea of closeness is not the same as distance.)
Is Real World Knowledge More Valuable Than Fictional Knowledge? No. Here an excerpt from a short summary of a paper I am writing that provides some context to answer this question: What Knowledge is not: Knowledge is not very well understood so I’ll briefly point out some of the reasons why we’ve been unable to precisely define what knowledge is thus far. Humanity has made numerous attempts at defining knowledge. Plato taught that justified truth and belief are required for something to be considered knowledge. Throughout the history of the theory of knowledge (epistemology), others have done their best to add to Plato’s work or create new or more comprehensive definitions in their attempts to ‘contain’ the meaning of meaning (knowledge). All of these efforts have failed for one reason or another. Using truth value and ‘justification’ as a basis for knowledge or introducing broader definitions or finer classifications can only fail. I will now provide a small set of examples of why this is so. Truth value is only a value that knowledge may attend. Knowledge can be true or false, justified or unjustified, because knowledge is the meaning of meaning What about false or fictitious knowledge? [Here’s the reason why I say no.] Their perfectly valid structure and dynamics are ignored by classifying them as something else than what they are. Differences in culture or language even make no difference, because the objects being referred to have meaning that transcends language barriers. Another problem is that knowledge is often thought to be primarily semantics or even ontology based. Both of these cannot be true for many reasons. In the first case (semantics): There already exists knowledge structure and dynamics for objects we cannot or will not yet know. The same is true for objects to which meaning has not yet been assigned, such as ideas, connections and perspectives that we’re not yet aware of or have forgotten. Their meaning is never clear until we’ve become aware of or remember them. In the second case (ontology): collations that are fed ontological framing are necessarily bound to memory, initial conditions of some kind and/or association in terms of space, time, order, context, relation,… We build whole catalogues, dictionaries and theories about them: Triads, diads, quints, ontology charts, neural networks, semiotics and even the current research in linguistics are examples. Even if an ontology or set of them attempts to represent intrinsic meaning, it can only do so in a descriptive ‘extrinsic’ way. An ontology, no matter how sophisticated, is incapable of generating the purpose of even its own inception, not to mention the purpose of the objects to which it corresponds. The knowledge is not coming from the data itself, it is always coming from the observer of the data, even if that observer is an algorithm. Therefore ontology-based semantic analysis can only produce the artefacts of knowledge, such as search results, association to other objects, ‘knowledge graphs’ like Cayley,… Real knowledge precedes, transcends and includes our conceptions, cognitive processes, perception, communication, reasoning and is more than simply related to our capacity of acknowledgement. In fact knowledge cannot even be completely systematised; it can only be interacted with using ever increasing precision. 
[For those interested, my summary is found at: A Precise Definition of Knowledge – Knowledge Representation as a Means to Define the Meaning of Meaning Precisely: http://bit.ly/2pA8Y8Y May 11, 2017 | Categories: Consciousness, Insight, Knowledge, Language, Learning, Linguistics, Mathesis Generalis, Mathesis Universalis, Metamathematics, Metaphysics, Philosophy, Philosophy of Language, Philosophy of Learning, Philosophy Of Mind, Semantic Web, Semantics, Understanding, Wisdom | Tags: Big Data, Characteristica Universalis, insight, knowledge, Knowledge Representation, Language, learning, Linguistics, Logica Universalis, Mathematica Universalis, Mathesis Universalis, Metaphysica Universalis, Metaphysics, Philosophia Universalis, Philosophy, Philosophy of Language, Philosophy of Learning, Philosophy Of Mind, Scientia Universalis, Semantic Web, Semantics, understanding, wisdom | Leave a comment Does Knowledge Become More Accurate Over Time? Change lies deeper in the knowledge substrate than time. Knowledge is not necessarily coupled with time, but it can be influenced by it. It can be influenced by change of any kind: not only time. Knowledge may exist in a moment and vanish. The incipient perspective(s) it contains may change. Or the perspective(s) that it comprises may resist change. Also, knowledge changes with reality and vice versa. Time requires events to influence this relationship between knowledge and reality. Knowledge cannot be relied upon to be a more accurate expression of reality, whether time is involved or not, because the relationship between knowledge and reality is not necessarily dependent upon time, nor is there necessarily a coupling of the relationship between knowledge and reality. The relationships of ‘more’ and ‘accurate’ are also not necessarily coupled with time. Example: Eratosthenes calculated the circumference of the Earth long before Copernicus published. The ‘common knowledge’ of the time (Copernicus knew about Eratosthenes, but the culture did not) was that the Earth was flat. May 10, 2017 | Categories: change, Consciousness, Insight, Knowledge, Knowledge Representation, Learning, Mathesis Universalis, Metamathematics, Metaphysics, Philosophy, Philosophy of Language, Philosophy Of Mind, Semantic Web, Semantics, Understanding, Wisdom | Tags: Awareness, Characteristica Generalis, Characteristica Universalis, Discernment, insight, knowledge, Knowledge Representation, learning, Logica Generalis, Logica Universalis, Mathematica Generalis, Mathematica Universalis, Mathesis Generalis, Mathesis Universalis, Metaphysica Generalis, Metaphysica Universalis, Metaphysics, Philosophia Generalis, Philosophia Universalis, Philosophy of Language, Philosophy of Learning, Philosophy Of Mind, Scientia Generalis, Scientia Universalis, understanding, wisdom | Leave a comment What About Tacit Knowledge? A knowledge representation system is required. I’m building one right now. Mathesis Universalis. There are other tools which are useful, such as TheBrain Mind Mapping Software, Brainstorming, GTD and Knowledgebase Software Products and technologies like TheBrain, knowledge graphs, taxonomies, and thesauri can only manage references to and types of knowledge (ontologies). A true knowledge representation would contain vector components which describe the answers to “Why?” and “How does one know?” or “When is ‘enough’, enough?” (epistemology). It is only through additional epistemological representation that tacit knowledge can be stored and referenced. 
May 5, 2017 | Categories: Knowledge, Knowledge Representation, Language, Learning, Linguistics, Mathesis Generalis, Mathesis Universalis, Metamathematics, Wisdom | Tags: Big Data, Characteristica Generalis, Characteristica Universalis, insight, knowledge, Knowledge Representation, learning, Linked Data, Logica Generalis, Logica Universalis, Mathesis Generalis, Mathesis Universalis, Metaphysica Generalis, Metaphysica Universalis, Philosophia Generalis, Philosophia Universalis, Scientia Generalis, Scientia Universalis, Semantic Web, Smart Data, Tacit Knowledge, understanding, wisdom | Leave a comment Universal Constants, Variations, and Identities #18 (Dimension) Universal Constants, Variations, and Identities (Dimension) #18 Dimension is a spectrum or domain of awareness: they essentially build an additional point of view or perspective. We live in a universe of potentially infinite dimension. Also, there are more spatial dimensions than three and more temporal dimensions than time (the only one science seems to recognize). Yes, I’m aware of what temporal means; Temporal is a derived attribute of a much more fundamental concept: Change. One important caveat: please bear in mind that my little essay here is not a complete one. The complete version will come when I publish my work. The idea of dimension is not at all well understood. The fact is, science doesn’t really know what dimension is; rather, only how they may be used! Science and technology ‘consume’ their utility without understanding their richness. Otherwise they would have clarified them for us by now. Those who may have clarified what they are get ignored and/or ridiculed, because understanding them requires a larger mental ‘vocabulary’ than Physicalism, Reductionism, and Ontology can provide. Our present science and technology is so entrenched in dogma, collectivism, and special interest, that they no longer function as they once did. The globalist parasites running our science and technology try their best to keep us ‘on the farm’ by restricting dimension, like everything else, to the purely physical. It’s all they can imagine. That’s why many of us feel an irritation without being able to place our finger on it when we get introduced to dimension. We seem to ‘know’ that something just doesn’t ‘rhyme’ with their version. Time and space may be assigned dimensionality, in a purely physical sense if necessary, but there are always underlying entities much deeper in meaning involved that are overlooked and/or remain unknown which provide those properties with their meaning. This is why the more sensitive among us sense something is wrong or that something’s missing. Let us temporarily divorce ourselves from the standard ‘spatial’ and ‘temporal’ kinds of ‚dimension’ for a time and observe dimension in its essence. Definitions are made from them: in fact, dimensions function for definitions just as organs do for the body. In turn, dimension has its own set of ‘organs’ as well! I will talk about those ‘organs’ below. Dimension may appear different to us depending upon our own state of mind, level of development, kind of reasoning we choose, orientation we prefer, expectations we may have,… but down deep… Everything, even attributes of all kinds, involve dimension. We must also not forget partial dimension such as fractals over complex domains and other metaphysical entities like mind and awareness which may or may not occupy dimension. Qualia (water is ‘wet’, angry feels like ‘this’, the burden is ‘heavy’) are also dimensional. 
Dimensions are ‘compasses’ for navigating conceptual landscapes. We already think in multiple dimension without even being aware of it! Here’s is an example of how that is: [BTW: This is simply an example to show how dimension can be ‘stacked’ or accrued. The items below were chosen arbitrarily and could be replaced by any other aspects.] ♦ Imagine a point in space (we are already at 3d [x,y,z]) – actually at this level there are even more dimensions involved, but I will keep this simple for now. ♦ it moves in space and occupies a specific place in time (now 4d) 3d + 1 time dimension ♦ say it changes colour at any particular time or place (5d) ♦ let it now grow and shrink in diameter (6d) ♦ if it accelerates or slows its movement (7d) ♦ if it is rotating (8d) ♦ if it is broadcasting a frequency (9d) ♦ what if it is aware of other objects or not (10d) ♦ say it is actively seeking contact (connection) with other objects around it (11d) ♦ … (the list may go on and on) As you can see above, dimensions function like aspects to any object of thought. Dimensionality becomes much clearer when we free ourselves from the yoke of all that Physicalism, Reductionism, and Ontology. Let’s now look at some of their ‘organs’ as mentioned above as well as other properties they have in common: They precede all entities except awareness. Awareness congeals into them. They form a first distinction. They have extent. They are integrally distributed. They have an axial component. They spin. They vibrate. They oscillate. They resonate. They may appear as scalar fields. Their references form fibrations. They are ‘aware’ of self/other. Their structural/dynamic/harmonic signature is unique. They provide reference which awareness uses to create perspective meaning. Holons are built from them. http://mathesis-universalis.com Sacred Geometry 29 by Endre @ RedBubble: http://www.redbubble.com/people/endre/works/6920405-sacred-geometry-29?p=poster Sep 7, 2016 | Categories: Constants, Holons, Holors, Knowledge, Knowledge Representation, Language, Learning, Linguistics, Mathematics, Mathesis Generalis, Mathesis Universalis, Meta Logic, Metamathematics, Metaphysics, Perspective, Philosophy, Scalars, Semantics, Understanding, Variations, Wisdom | Tags: BigData, First Distinction, insight, knowledge, Knowledge Representation, learning, Logica Universalis, Mathesis Universalis, Metalogic, Metaphysics, Philosophia Universalis, Scalar Field, Scalars, Scientia Universalis, Semantics, understanding, Universal Constants, Variances, wisdom | Leave a comment Universal Constants, Variations, and Identities – #17 (Representation) #17 Interiority and Exteriority arise together. (Representation) For every interior representation there is always an exterior representation that compliments it. For every exterior representation there is always a corresponding interior one. Sep 6, 2016 | Categories: Constants, Identities, Mathesis Generalis, Mathesis Universalis, Meta Logic, Metamathematics, Semantics, Understanding, Variations, Wisdom | Tags: insight, knowledge, Knowledge Representation, learning, Logica Universalis, Mathesis Universalis, Metalogic, Philosophia Universalis, Philosophy, Scientia Universalis, understanding, Universal Constants, Universalis, Variances | Leave a comment HUD Fly-by Test Link to video. Don’t take this as an actual knowledge representation; rather, simply a simulation of one. I’m working out the colour, transparent/translucent, camera movements, and other technical issues. In any case you may find it interesting. 
The real representations are coming soon. Aug 21, 2016 | Categories: Big Data, Holons, Holors, Hyperbolic Geometry, Knowledge, Knowledge Representation, Language, Learning, Linguistics, Logic, Long Data, Mathesis Generalis, Mathesis Universalis, Meta Logic, Metamathematics, Metaphysics, Philosophy, Understanding, Wisdom | Tags: BigData, Constants, Hyperbolic Geometry, insight, knowledge, Knowledge Representation, learning, Logica Generalis, Logica Universalis, Mathesis Generalis, Mathesis Universalis, Metalogic, Metamathematics, Metaphysics, Philosophia Generalis, Philosophia Universalis, Philosophy, Philosophy of Language, Philosophy of Learning, Philosopohy of Mind, Scientia Universalis, understanding, Universal Constants, Universalis, wisdom | Leave a comment Obfuscation In A ‘Nut’ Shell Obfuscation In A ‘Nut’ Shell Distinctions that are no differences, are incomplete, or are in discord. In knowledge representation these ‘impurities’ (artificiality) and their influence are made easy to see. In groks you will see them as obfuscation fields. That means darkening and/or inversion dynamics. The term refers to the visual representation of an obfuscated field, and can also be represented as dark and/or inverted movements of a field or group. I concentrate more on the dark versions here and will consider the inversions (examples of lying) in a future post. They bring dynamics that are manipulative, artificial, or non-relevant into the knowledge representation. Their dynamic signatures make them stand out out like a sore thumb. Cymatic images reveal these dynamics too. There are multiple vortexes, each with their own semantic contribution to the overall meaning to a knowledge molecule or group. Here is an example of a snow flake (seen below) https://www.flickr.com/photos/13084997@N03/12642300973/in/album-72157625678493236/ From Linden Gledhill. Note that not all vortexes are continuous through the ‘bodies’ of the molecules they participate in. Also, in order to correctly visualize what I’m saying, one must realize that the cymatic images are split expressions. That means to see the relationship, you must add the missing elements which are hinted at by the image. Every cymatic image is a cut through the dynamics it represents. We are in effect seeing portions of something whole. Whole parts are dissected necessarily, because the surface of expression is limited to a ‘slice’ through the complete molecule. (Only the two images marked ‘heurist.com’ are my own! The other images are only meant as approximations to aid in the understanding of my work.) Apr 28, 2016 | Categories: Big Data, BigData, Holons, Holors, Insight, Knowledge, Knowledge Representation, Language, Learning, Linguistics, Mathematics, Mathesis Generalis, Mathesis Universalis, Metamathematics, Semantics, Wisdom | Tags: insight, knowledge, learning, Logica Universalis, Mathesis Universalis, Philosophia Universalis, understanding, wisdom | Leave a comment Men And Their Semantics – Turning Meaning into Legos Semantically speaking: Does meaning structure unite languages? This work is a dead end waiting to happen. Of course it will attract much interest, money, and perhaps even yield new insights into the commonality of language, but there’s better ways to get there. What’s even more sad is that they, who should know better, will see my intentions in making this clear as destructive criticism instead of a siren warning regarding research governed/originating through a false paradigm. 
These people cannot see or overlook the costs humanity pays for the misunderstandings research like this causes and is based upon. It’s even worse in the field of genetic engineering with their chimera research. The people wasting public money funding this research need to be gotten under control again. I don’t want to criticize the researcher’s intentions. It’s their framing and methodology that I see as primitive, naive, and incomplete. I’m not judging who they are nor their ends; rather, their means of getting there. “Quantification” is exactly the wrong way to ‘measure/compare semantics; not to mention “partitioning” them! 1) The value in this investigation that they propose is to extrapolate and interpolate ontology. Semantics are more than ontology. They possess a complete metaphysics which includes their epistemology. 2) You cannot quantify qualities, because you reduce the investigation to measurement; which itself imposes meaning upon the meaning you wish to measure. Semantics, in their true form, are relations and are non-physical and non-reducible. 3) Notice also, partitioning is imposed upon the semantics (to make them ‘measurable/comparable’). If you compare semantics in such a way then you only get answers in terms of your investigation/ontology. 4) The better way is to leave the semantics as they are! Don’t classify them! Learn how they are related. Then you will know how they are compared. There’s more to say, but I think you get the idea… ask me if you want clarification… Feb 5, 2016 | Categories: Artificial Intelligence, Bad Logic, Bad Science, Big Data, Consciousness, Education, Humanity, Insight, Knowledge, Knowledge Representation, Language, Learning, Linguistics, Logic, Mathesis Generalis, Mathesis Universalis, Philosophy, Psyence, Semantics, Understanding, Wisdom | Tags: BadScience, insight, knowledge, learning, Mathesis Universalis, understanding, wisdom | Leave a comment Typical Knowledge Acquisitions Node Knowledge Representation A typical knowledge acquisition node showing two layers of abstraction. Note how some of the acquisition field detection moves with the observer’s perspective. You can tell, due to the varying visual aspects of the fields and their conjunctions that it has already been primed and in use. This node may be one of thousands/millions/billions which form when acquiring the semantics of any particular signal set. Their purpose is to encode a waveform of meaning. Basically it is these ‘guys’ which do the work of ‘digesting’ the knowledge contained within any given signal; sort of like what enzymes do in our cells. The size, colour (although not here represented), orientation, quantity, sequence, and other attributes of the constituent field representations all contribute to a unique representation of those semantics the given node has encountered along its travel through any particular set of signal. The knowledge representation (not seen here) is comprised of the results of what these nodes do. This node represents a unique cumulative ‘imprint’ or signature derived from the group of knowledge molecules it has processed during its life time in the collation similar to what a checksum does in a more or less primitive fashion for numerical values in IT applications. I have randomized/obfuscated a bit here (in a few different ways), as usual, so that I can protect my work and release it in a prescribed and measured way over time. In April I will be entering the 7th year of working on this phase of my work. 
I didn’t intentionally plan it this way, but the number 7 does seem to be a ‘number of completion’ for me as well. The shape of the model was not intended in itself. It ‘acquired’ this shape during the course of its work. It could have just as well been of a different type (which I’m going to show here soon). Important is the ‘complementarity’ of the two shapes as they are capable of encoding differing levels of abstraction. The inner model is more influenced by the observer than the outer one, for example. The outer shape contains a sort of ‘summary’ of what the inner shape has processed. Jan 4, 2016 | Categories: Big Data, BigData, Consciousness, Fields, Holons, Holors, Knowledge, Knowledge Representation, Language, Learning, Linguistics, Mathesis Universalis, Semantics, Wisdom | Tags: insight, knowledge, learning, Mathesis Universalis, Metaphysics, understanding, wisdom | Leave a comment Really! Nothing Is ‘Real’ Another example of the ‘neo-snake-oil salesmen’ peddling you trendy pabulum and neo-Babylon confusion. My current project Mathesis Universalis http://mathesis-universalis.com will bring an end to this menagerie of nonsense and subtle programming. I could write a book on this. Don’t believe everything put forward in this… set of perspectives. This is a work in process so stay tuned… updates are coming very shortly. I’m happy that he allows for more than 5 senses as this is a common error made by science and philosophy up to this time. I’ve taken issue with it elsewhere numerous times. Also I’m pleased that he is allowing for Neuroplasticity (Dr. Jeffrey M. Schwartz http://www.jeffreymschwartz.com/ has been leading this new model for over 10 years.) Up to @04:27 I take issue with two important assumptions he makes: 1) That sensory information is the only way we ‘register’ reality. 2) He is a physicalist pure through. If he can’t measure and quantify it, then it doesn’t exist for him… This leads to what is known as causal ambiguity (among other things). http://psychologydictionary.org/causal-ambiguity/ @04:57– He says that memory is stored all over the brain. This is incorrect. The effects of the phenomena of memory are manifested in various areas of the brain. There is no sufficient and necessary proof that memory is stored there! They PRESUME it to be stored there, because they can not allow or imagine anything non-physical being able to store any kind of knowledge. @05:09– “How many memories can you fit inside your head? What is the storage capacity of the human brain?” he asks. In addition to the presumption that memories are stored there, he then ignores the capacity of other areas of the body to imprint the effects of memory: the digestive tract, the endocrine and immune ‘systems’,… even to cell membranes (in cases of addiction, for example)!!! @05:23– “But given the amount of neurons in the human brain involved with memory…” (the first presumption that memories are stored there) “and the number of connections a single neuron can make…” (he’s turning this whole perspective on memory into a numerical problem!) which is reductionism. @05:27– He then refers to the work of Paul Reber, professor of psychology at Northwestern University who explained his ‘research’ into answering that question. here’s the link. I will break that further stream of presumptions down next. 
http://www.scientificamerican.com/article/what-is-the-memory-capacity/ (the question is asked about middle of the 1st page of the article which contains 2 pages) Paul Reber makes a joke and then says: “The human brain consists of about one billion neurons. Each neuron forms about 1,000 connections to other neurons, amounting to more than a trillion connections. If each neuron could only help store a single memory, running out of space would be a problem. You might have only a few gigabytes of storage space, similar to the space in an iPod or a USB flash drive.” “Yet neurons combine so that each one helps with many memories at a time, exponentially increasing the brain’s memory storage capacity to something closer to around 2.5 petabytes (or a million gigabytes). For comparison, if your brain worked like a digital video recorder in a television, 2.5 petabytes would be enough to hold three million hours of TV shows. You would have to leave the TV running continuously for more than 300 years to use up all that storage.” These presumptions and observations are full of ambiguity and guesswork. Given that we are not reading a thesis on the subject, we can allow him a little slack, but even the conclusions he has arrived at are nothing substantial. More below as he reveals his lack of knowledge next. “The brain’s exact storage capacity for memories is difficult to calculate. First, we do not know how to measure the size of a memory. Second, certain memories involve more details and thus take up more space; other memories are forgotten and thus free up space. Additionally, some information is just not worth remembering in the first place.” He not only doesn’t know to measure memories (which he admits), he cannot even tell you what they are precisely! He offers here also no reason for us to believe that memory is reducible to information! @05:50– “The world is real… right?” (I almost don’t want to know what’s coming next!) And then it really gets wild… @05:59– With his: “How do you know?” question he begins to question the existence of rocket scientists. He moves to Sun centric ideas (we’ve heard this one before) to show how wrong humanity has been in the past. He seems to ignore or not be aware of the fact that that many pre-science explorers as far back as ancient Alexandria knew better and had documented this idea as being false. This ‘error’ of humanity reveals more about dogma of a church/religion/tradition than of humanity/reality as it truly is. @06:29– “Do we… or will we ever know true reality?” is for him the next question to ask and then offers us to accept the possibility that we may only know what is approximately true. @06:37 “Discovering more and more useful theories every day, but never actually reaching true objective actual reality.” This question is based upon so much imprecision, ignorance, and arrogance that it isn’t even useful! First of all: we cannot know “true objective actual reality” in all of its ‘essence’, because we must form a perspective around that which we observe in order to ‘see’ anything meaningful. As soon as a perspective comes into ‘being’, we lose objectivity. (ignorance, assumption) He doesn’t define what ‘reality’ for him is. (imprecision) He doesn’t explain what the difference between ‘true’ and ‘actual’ might be. (imprecision, assumption) Theories are NOT discovered, rather created (implicit arrogance). They can only be discovered if they were already known/formulated at some time. 
Also; theories do not stand on their own; rather, they depend upon continued affirmation by being questioned for as long as they exist. We DO NOT store knowledge in our answers; rather, in our questions. [continued…] Oct 6, 2015 | Categories: Mathesis Universalis, Social Engineering | Tags: Confusion, insight, knowledge, learning, Materialism, Mathesis Generalis, Mathesis Universalis, Neo Babylon, Physicalism, Psyence, Reductionism, social engineering, Sophistry, Techno Babble, trendy, understanding, wisdom | 3 Comments A Holon’s Topology, Morphology, and Dynamics (2a) A Holon’s Topology, Morphology, and Dynamics (2a) This is the second video of a large series and the very first video in a mini-series about holons. In this series I will be building the vocabulary of holons which in turn will be used in my knowledge representations. The video following this one will go into greater detail describing what you see here and will be adding more to the vocabulary. This is the second video of a large series and the very first video in a mini-series about holons. In this series I will be building the vocabulary of holons which in turn will be used in my knowledge representations. Aug 31, 2015 | Categories: Knowledge Representation, Mathesis Universalis | Tags: BigData, Holons, knowledge, Knowledge Representation, learning, Mathesis Generalis, Mathesis Universalis | Leave a comment Ontology: Compelling and ‘Rich’ Ontologies are surfaces… even if ‘rich’. (link) Ontology: Compelling and ‘Rich’ They are only surfaces, but they seem to provide you with depth. This exquisite video shows how the representation of knowledge is ripe for a revolution. I’ve written about this in depth in other places so I won’t bore you with the details here unless you ask me in the comments below. Stay tuned! I’m behind in my schedule (work load), but I’m getting very close just the same. I will publish here and elsewhere. I’m going to use this video (and others like it) to explain why ontologies are not sufficient to represent knowledge. Soon everyone will acknowledge this fact and claim they’ve been saying it all along! (In spite of the many thousands of papers and books obsessively claiming the opposite!!!) They do not know that how dangerous that claim is going to be. Our future will be equipped with the ability to determine if such claims are true or not. That’s some of the reason I do what I do. #KnowledgeRepresentation #BigData #Semantics #Metaphysics #Ontology #Knowledge #Wisdom #Understanding #Insight #Learning #MathesisUniversalis #MathesisGeneralis #PhilosophiaUniversalis #PhilosophiaGeneralis #ScientiaUniversalis #ScientiaGeneralis Aug 6, 2015 | Categories: Big Data, Knowledge Representation, Semantics | Tags: BigData, insight, knowledge, Knowledge Representation, learning, Mathesis Generalis, Mathesis Universalis, Metaphysics, Philosophia Generalis, Philosophia Universalis, Scientia Generalis, Scientia Universalis, Semantics, understanding, wisdom | Leave a comment Nascent Mind, Prescient Knowledge: Instinct And Envisioning It’s at this juncture that concepts begin to coalesce. Within this ‘Holy of Holies’ concepts are born and form/generate their associated continuums. It’s like watching the blue wisping stars newly born in the constellation of Pleiades. https://en.wikipedia.org/wiki/Pleiades This ‘event horizon’ is so crucial to understanding and participating in mind; yet those who should know better simply ignore or overlook it. 
Tesla’s statement here rings so true that it simply boggles my mind and confirms that Tesla was ‘tuned into it.’ He clearly exhibited these awarenesses on several occasions. He was able to envision many ideas to their completion before constructing them, and his instinct for somehow ‘knowing’ (flashes of insight) what to do next and where to go with an idea was so profound that it often overwhelmed and incapacitated him. His mind was so fertile that layers of creative impulses were being maintained concurrently. Next to Socrates there are very few who inspire me. Tesla is one of those few.

Jun 26, 2015 | Categories: Tesla

Precursors Of Knowledge

Fractal fields provide a nice framework in which to think about knowledge. They are not all we need for precision, but they are helpful in a generic way. I’ll be posting more on them as the knowledge representations are published, because there are many ‘gaps to fill’ to show how these relate to knowledge. More sources:
https://www.youtube.com/watch?v=2nTLI89vdzg
https://www.youtube.com/watch?v=1ZVNIZGw4X0
https://www.youtube.com/watch?v=Yp4ogF2w13M
https://www.youtube.com/watch?v=8UPD2_gEjvM
https://www.youtube.com/watch?v=ArZLXHVVV5I

May 29, 2015 | Categories: Fields, Fractals, Mathesis Generalis, Mathesis Universalis

Information Visualization Is Not Knowledge Representation (Lynda.com – Overview of Data Visualization)

This great video from Lynda.com shows how the Processing language/interpreter is great for modeling information. With such a multitude of interesting ways to model data, we find it hard to resist the temptation to call this knowledge, but it’s not! All of the wonderful representations here still require us to interpret their meaning! What if there were a way to present knowledge in which our own understanding is not required to interpret it? What if our understanding of what we have presented to us becomes part of the presentation itself and, in fact, influences what we take from that representation? We obviously need knowledge representations that can provide their meaning on their own, for only they can provide a true understanding of their inherent structure and dynamics. You see, real understanding is the personalization of knowledge into your own mind. If your mind cannot dialog with that knowledge, it’s not really yours; and if your mind does all the work, it’s only information.
May 5, 2015 | Categories: Mathesis Universalis

Universal Constants, Variations and Identities – #16 (Creation/Discovery)

Creation and discovery complement each other and are the means by which the Universe fundamentally unfolds and enfolds itself. (Creation/Discovery) We tend not to identify them, because there are so many variations in their harmony. Please do overestimate your thoughts… as you will see they are the beginning of your expression to and of the world. Both Creation and Discovery will work in unison, if we allow them.
Discovery is to recognize/relate what is in your world. Creation is to transform/synthesize it too. Each is alone without the other.
Creation = Right ‘brain’ (right+mind)
Discovery = Left ‘brain’ (left+mind)
Their ‘magick’ (sic.) manifests not when you synchronize them; rather, when you harmonize them. (Please take the time to watch the 4-minute video.)

Feb 24, 2015 | Categories: Constants, Insight, Knowledge, Learning, Understanding, Wisdom

Universal Constants, Variations and Identities – #15 (Change/Time)

Time is a temporally ‘linear’ (directed) form of change that is not limited by dimension. (Change/Time) Time has been arbitrarily and wrongly assigned to dimension. Change is not restricted to any dimension: therefore time is also not limited to it. I know it’s trendy to see time as a dimension, but dimension is something completely different. Stay tuned to find out what and why.
Update: There are many reasons why time needs a proper definition. Here are a few of them: The chemical reactions in the vessel are not really affected by some mysterious thing called time, but by the number of contacts or collisions that take place in the soup of atoms or molecules. That is what the factor ‘T’ really stands for.
1) Eternity may be a somewhat mystical overarching reality outside of the physical universe, but time is not. Nor is time a thing that anybody can do anything to. In other words: it cannot be reified.
2) The universe doesn’t exist in time, but time exists in the universe.
3) The proper definition of time is exactly: the sequence of events in the material universe.

Feb 7, 2015 | Categories: Constants, Insight, Knowledge, Learning, Understanding, Wisdom

Universal Constants, Variations and Identities – #14 (Singular/Plural)

Singular and plural arise together. (Singular/Plural) There is no singular without a plural representation except in the non-dual. See http://mathesis-universalis.com for more information.
#Knowledge #Wisdom #Understanding #Learning #Insight #Constants #Variances #Philosophy #MathesisUniversalis #ScientiaUniversalis #PhilosophicaUniversalis #LogicaUniversalis #MetaMathematics #MetaLogic #MetaScience #MetaPhysics #MetaPhilosophy #Singular #Plural

Feb 3, 2015 | Categories: Constants, Identities, Insight, Knowledge, Learning, Mathesis Universalis, Metamathematics, Metaphysics, Philosophy, Wisdom
### PRODUCING POSSIBILITIES
Tammy Clemons
This dissertation, based on anthropological research between 2015 and 2020, focuses on young people in different yet interconnected social contexts in Central Appalachia and how they envision, construct, and act upon possibilities for themselves and the region through multimodal cultural production processes like visual art, performance, and multisensory media. The research question focusing this project was: How do the social contexts of young Appalachians’ engagement in media consumption and production practices shape the possibilities they...
### Wisdom From the Collard Field
Robert Gorum
This dissertation surveys agrarian literature written by American writers since World War II. It compares the Southern Agrarians of Vanderbilt University and New Agrarians such as Wendell Berry, Wes Jackson, and Gene Logsdon to examine their understanding of place and home. I begin my inquiry with a personal frame story of time I have spent in and around the sustainable agriculture movement. Drawing on various forms of literature, including memoirs, cookbooks, novels, reportage, and other...
### Topics in Quantum Quench and Entanglement
Sinong Liu
The dissertation includes two parts. In Part I, we study non-equilibrium phenomena in various models associated with global quantum quench. It is known that local quantities, when subjected to global quantum quench across or approaching critical points, exhibit a variety of universal scaling behaviors at various quench rates. To investigate if similar scaling holds for non-local quantities, we consider the scaling behavior of circuit complexity under quantum quench across the critical massless point in Majorana...
### Novel Machine Learning and Wearable Sensor Based Solutions for Smart Healthcare Monitoring
Rajdeep Kumar Nath
The advent of IoT has enabled the design of connected and integrated smart health monitoring systems. These health monitoring systems can be utilized for monitoring the mental and physical wellbeing of a person. Stress, anxiety, and hypertension are the major elements responsible for the plethora of physical and mental illnesses. In this context, the older population demands special attention because of the several age-related complications that exacerbate the effects of stress, anxiety, and hypertension. Monitoring...
### PLASMON ENHANCED SINGLE MOLECULE FLUORESCENCE IN ZERO MODE WAVEGUIDES (ZMWS)
Abdullah Masud
Plasmonic nanostructures are an extensive research focus due to their ability to modify the photophysical properties of nearby fluorophores. Surface plasmons (SP), defined as the collective oscillation of delocalized electrons, are the fundamental characteristic primarily responsible for altering those photophysical properties. Studying fluorophores at the single-molecule level has received significant attention since more specific information can be extracted from single molecule-based studies, which otherwise could be obscured in ensemble studies. However, single-molecule studies are inherently...
### \"IT'S ABOUT MORE THAN JUST ANIMALS\"
Dayton D. Starnes
This research explores the influences of diverse environmental politics in shaping zoo-adjacent conservation activities in the United States. Based upon 13 months of multi-sited ethnographic research, conducted with conservation actors across six states, the researcher investigates and documents how conservation professionals—operating in contexts adjacent to zoological institutions—experience and respond to the socio-environmental implications associated with the cascading effects of global environmental change. In the face of current challenges and uncertain environmental futures—shaped by habitat alterations,...
### Solubility of Additive Forms over Local Fields
Drew Duncan
Michael Knapp, in a previous work, conjectured that every additive sextic form over $\mathbb{Q}_2(\sqrt{-1})$ and $\mathbb{Q}_2(\sqrt{-5})$ in seven variables has a nontrivial zero. In this dissertation, I show that this conjecture is true, establishing that $$\Gamma^*(6, \mathbb{Q}_2(\sqrt{-1})) = \Gamma^*(6, \mathbb{Q}_2(\sqrt{-5})) = 7.$$ I then determine the minimal number of variables $\Gamma^*(d, K)$ which guarantees a nontrivial solution for every additive form of degree $d=2m$, $m$ odd, $m \ge 3$ over the six ramified quadratic extensions...
### TWO ESSAYS ON FOOD ENVIRONMENT, NUTRITION, AND FOOD INSECURITY
Suliman Almojel
A healthy food environment is fundamental to good health. It contributes to the reduction of obesity and the development of healthy eating habits. In spite of this, many people in the United States (US) have been hypnotized to become obese due to the current food environment. Recently, the US has consistently ranked high in the world in terms of obesity. The rising rate is symptomatic of consuming unhealthy diets. Besides, the double-edged crisis of the...
### \"HOW SWEET THAT I AM THE ONE TO WHISPER THESE THINGS”
Laura Manning
The motivation for this study came from a need to construct student-centered pedagogical practices to enhance learning and teaching of the Latin language in K-12 schools in the USA. This study aimed to advance a conceptual understanding of how active Latin teaching and learning occurs and to investigate the potential benefits of applying historical pedagogical frameworks for active Latin teaching practiced during the Renaissance, when Latin was both a lingua franca and a dead language....
### THE GESTATION OF HEALTH
Brittany Rice
Diabetes remains a leading cause of death nationwide despite pharmacological advances. Recent etiological investigations of the disease detail the role of perinatal exposure to environmental contaminants, such as polychlorinated biphenyls (PCBs), in enhancing disease susceptibility. Polychlorinated biphenyl 126, a coplanar PCB, elicits its toxic effects through the aryl-hydrocarbon receptor and the disruption of endocrine signaling. The goal of this dissertation was to focus on delineating the differences in the developmental windows of diabetes susceptibility respective...
### INCREASING SOCIAL INCLUSION FOR CHILDREN WITH DISABILITIES IN FAITH-BASED SETTINGS
Valerie Miller
The aim of this dissertation is to increase the body of research in occupational therapy about how to increase the social inclusion of children with disabilities in faith-based settings. Even since the advent of important legislation like the Americans with Disabilities Act, which paved the way for community participation for individuals with disabilities, individuals with disabilities continue to face barriers to participating in society. Decreased inclusion for individuals with disabilities is seen throughout all sectors...
### FACTORS IN THE SUCCESS OF FEMALE COMPUTING MAJORS IN COMMUNITY COLLEGES
Melanie Williamson
Historically, the role of women in computing changes over time as does their presence in the field. In 1985, 37% of computer science bachelor’s degree recipients were women, but in recent years, that number has decreased and currently holds at around 18%. Using a mixed methods approach, the study looked at the success of women enrolled in a computing degree program at a community college and the impact that self-efficacy, involvement in academic support opportunities,...
### Impact of Feeding Foods Containing Industrial Hemp-Derived Cannabidiol on Canine Health and Well-Being
Elizabeth Morris
Anecdotal evidence of beneficial behavioral and health effects of cannabidiol (CBD) use in companion animals has amplified the need to elucidate safety and potential impacts of CBD use. The purpose of this investigation was to determine the impact of industrial hemp-derived CBD administration on canine health and well-being. We hypothesized that CBD would produce beneficial effects on canine behavior without negatively impacting animal health. Dog treats were formulated to include CBD and shown to be...
### LEVERAGING CHEMICAL AND COMPUTATIONAL BIOLOGY TO PROBE THE CELLULOSE SYNTHASE COMPLEX
Kirtley Amos
Cellular expansion in plants is a complex process driven by the constraint of internal cellular turgor pressure by an expansible cell wall. The main structural element of the cell wall is cellulose. Cellulose is vital to plant fitness and the protein complex that creates it is an excellent target for small molecule inhibition to create herbicides. In the following thesis many small molecules (SMs) from a diverse library were screened in search of new cellulose...
### Multi-stream Longitudinal Data Analysis using Deep Learning
Longitudinal healthcare data encompasses all tasks where patients information are collected at multiple follow-up times. Analyzing this data is critical in addressing many real world problems in healthcare such as disease prediction and prevention. In this thesis, technical challenges in analyzing longitudinal administrative claims data are addressed and novel deep learning based models are proposed for multi-stream data analysis and disease prediction tasks. These algorithms and frameworks are assessed mainly on substance use disorders prediction...
### Robust RNA Integrity-to-Neuronal Gene Expression Association in Autopsy Brain Tissue Not Explained by Post-Mortem Variables; and Acute Behavioral Stress Does Not Alter RNA Quality, While Progesterone Protects Against Effects of Stress Exposure
Eleanor Johnson
Transcriptional profiling (TP) is a common tool to determine RNA expression levels. It allows for thousands of genes to be analyzed simultaneously, and determines differences in gene expression levels due to various pathologies. RNA quality also impacts the reported expression level. One of the most common approaches for assessing RNA quality is Agilent Technology’s RNA integrity number (RIN). The use of RINs allowed scientists to standardize the assessment and reporting of RNA quality by predominantly...
### Metabolic and Electrophysiological Effects of Fibroblast Growth Factor 19 in the Dorsal Vagal Complex
Jordan Wean
The dorsal vagal complex (DVC) is an important homeostatic regulatory center located in the hindbrain that alters vagal parasympathetic activity in response to central, viscerosensory, and humoral cues. Within the DVC, second-order sensory neurons in the nucleus tractus solitarius (NTS) integrate ascending vagal sensory input with descending regulatory inputs from higher brain areas and respond to circulating hormones and glucose. In turn, the NTS projects to the dorsal motor nucleus of the vagus (DMV) which...
### THE INFLUENCE OF FORMAL MENTORING ON TEACHER BELIEFS OF K-12 CLASSROOM TECHNOLOGY USE DURING A GLOBAL PANDEMIC
Anthony Arbisi
This dissertation explores the influence and transfer of knowledge related to instructional technology that occurs in the formal teacher mentoring relationship of seven mentoring dyads in a suburban Missouri public school district. This multiple case study was performed during the COVID-19 pandemic during the 2020-2021 school year. The unit of analysis in this study was a mentoring dyad that consisted of an experienced mentor teacher and a novice teacher. A multiple case study method was...
### A Computational Fluid-Structure Interaction Method for Simulating Supersonic Parachute Inflation
Jonathan Boustani
Following the successful landing of the Curiosity rover on the Martian surface in 2012, NASA/JPL conducted the low-density supersonic decelerator (LDSD) missions to develop large diameter parachutes to land the increasingly heavier payloads being sent to the Martian surface. Unexpectedly, both of the tested parachutes failed far below their design loads. It became clear that there was an inability to model and predict loads that occur during supersonic parachute inflation. In this dissertation, a new...
### The Development of Structural Hollow Carbon Fibers from a Multifilament Segmented Arc Spinneret
Elizabeth Morris
Carbon fiber is an ideal material for structural applications requiring high strength and stiffness and low weight. Yet it has seen only incremental improvements in properties over the last few decades. Carbon fibers remain limited in attaining their theoretical tensile strength and modulus, largely due to defects in their structure, some of which stem from the fiber production process itself. Through the mitigation of defect formation as well as approaches to decrease fiber linear density,...
### Metaphor and the Struggle between Populism and Liberal Democracy
Daniel Cole
Populist movements have emerged the world over, appearing even in countries in which it had long been assumed that liberal democracy was unassailable. Scholars have been grappling with the concept of populism for decades, but as populists have won victories close to home, the research has taken on a heightened sense of urgency. Two of the common theses that have appeared in the recent literature are, (a) populism is opposed to liberal democracy, and (b)...
### INVESTIGATION OF THE BIOSYNTHESIS OF THE NUCLEOSIDE ANTIBIOTIC SPHAERIMICIN
Jonathan Overbay
Antibiotic-resistance has become a widespread problem in the United States and across the globe. Meanwhile, new antibiotics are entering the clinic at an alarmingly low rate. Highly-modified nucleosides, a class of natural products often produced by actinobacteria, target MraY bacterial translocase I. MraY is a clinically unexploited enzyme target that is ubiquitous and essential to peptidoglycan cell wall biosynthesis. The nucleoside antibiotics known vary in efficacy and the functionalities contributing to improved activity is poorly...
### Electric Power Systems and Components for Electric Aircraft
Damien Lawhorn
Electric aircraft have gained increasing attention in recent years due to their potential for environmental and economic benefits over conventional airplanes. In order to offer competitive flight times and payload capabilities, electric aircraft power systems (EAPS) must exhibit extremely high efficiencies and power densities. While advancements in enabling technologies have progressed the development of high performance EAPS, further research is required. One challenge in the design of EAPS is determining the best topology to be...
### THE ANTITHESIS OF ‘BUSINESS AS USUAL’
Chelsea Cutright
Youth in Tanzania make up the majority of the current growing population and therefore are increasingly a focus of local and international development concern, specifically as the rates of urban growth and unemployment are also increasing. This research builds upon existing anthropological literature, which largely addresses contemporary and urban African youths as “problems” in dire need of governmental intervention and international solutions. Through explorations of the ways in which Tanzanian youth are actively and creatively...
### Neural Representations of Concepts and Texts for Biomedical Information Retrieval
Jiho Noh
Information retrieval (IR) methods are an indispensable tool in the current landscape of exponentially increasing textual data, especially on the Web. A typical IR task involves fetching and ranking a set of documents (from a large corpus) in terms of relevance to a user's query, which is often expressed as a short phrase. IR methods are the backbone of modern search engines where additional system-level aspects including fault tolerance, scale, user interfaces, and session maintenance...
### #1 BMac Posted 10 May 2018 - 08:58 AM
Official tutorials for modding game data:
(Advanced) How to Edit Assets and Assetbundles (by Fhav6X): https://forums.obsid...d-assetbundles/
Documentation for all game data formats: https://eternity.obs...me-data-formats.
Basic information about modding string tables: https://forums.obsid...ease/?p=1977260
Much of Deadfire's data is placed in easily-readable text files located in Pillars of Eternity II\PillarsOfEternityII_Data\exported. You can modify these files directly, but a better way to make mods that you can easily share is to use the override folder. The override folder is Pillars of Eternity II\PillarsOfEternityII_Data\override.
You should create a subfolder in the override folder for each mod. So my mod might be in the folder Pillars of Eternity II\PillarsOfEternityII_Data\override\bmac-mod.
• *.conversationbundle files must appear at the path specified in the bundle as the "Filename" property. For example, re_si_ship_combat.conversationbundle needs to appear at <your mod>\Conversations\RE_Scripted_Interactions\re_si_ship_combat.conversation.
• *.stringtable files need to be in the same folder hierarchy as the file they're overriding. E.g. to override localized\en\text\game\gui.stringtable, your override file should be in override\[yourmod]\localized\en\text\game\gui.stringtable.
• Other *.bundle files can be anywhere in your mod folder
If your mod causes problems or doesn't function correctly, you might find helpful error messages in the game's output log at "PillarsOfEternityII_Data/output_log.txt".
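To make the rules above concrete, here is one possible layout, reusing the bmac-mod folder name from above (the .gamedatabundle file name is invented for illustration):

PillarsOfEternityII_Data\override\
    bmac-mod\
        tweaks.gamedatabundle                    (bundle files may sit anywhere inside the mod folder)
        localized\en\text\game\gui.stringtable   (string tables must mirror the original folder hierarchy)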
### #2 Xaratas Posted 13 May 2018 - 02:07 PM
@BMac
I took those information bits from the link. In the final release, are there any changes to the modding system compared to the beta?
Can such a file contain only the one changed id? -> Yes
What happens on id collision in multiple files? Alphabetically last file wins? -> Last loaded file will win.
Is the Audio system also overrideable, or does it hard-depend on the wwise packages? -> No override folder; it needs wwise packages.
Can one add images for items, skills and so on?
How to add something to a vendor? Without overriding his whole backpack?
Edited by Xaratas, 13 May 2018 - 03:29 PM.
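As a sketch of the collision rule answered above (mod and file names are invented, and the exact load order is not documented in this thread): if both of these files contain an object with the same ID, only the version from the file that loads last takes effect, so two mods that redefine the same object are effectively mutually exclusive.

override\mod-a\tweaks.gamedatabundle   (defines some object ID with one set of values)
override\mod-b\tweaks.gamedatabundle   (defines the same object ID with different values; whichever loads last wins)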
### #3 BMac Posted 14 May 2018 - 04:22 PM
There weren't any major changes for modding between the last backer beta release and the 1.0 release, just some changes in the structure of a few gamedata types to make them more amenable to overriding.
Adding new icons is still difficult; it's possible a patch could change this.
You do have to fully override one of the store's loot lists to add an item to it. I could make a change in a patch to make it possible to add additional loot lists to stores. I'll make a note of it.
### #4 Finchyy Posted 22 May 2018 - 06:49 AM
Much of Deadfire's data is placed in easily-readable text files located in Pillars of Eternity II\PillarsOfEternityII_Data\exported. You can modify these files directly, but a better way to make mods that you can easily share is to use the override folder. The override folder is Pillars of Eternity II\PillarsOfEternityII_Data\override.
Any file in the override folder will take priority over the files in the exported data folder - so you can add new data objects to the game in these files, or replace existing ones.
You should create a subfolder in the override folder for each mod. So my mod might be in the folder Pillars of Eternity II\PillarsOfEternityII_Data\override\bmac-mod.
Hi, there!
So I've created a mod that replaces numerous audio files. I've verified this works by overwriting the original audio files, but upon seeing this post decided to do it this way as it seems a lot safer.
However, the override folder didn't exist. I created it, created a subfolder for my mod, and placed the files within... yet it does not work. As far as I can tell, it is not overriding the files.
Are the to-be-overridden files supposed to be loose in the subfolder? If so, that didn't work. I also tried keeping them in their original folders, as well as re-creating the original path to the files within the subfolder.
Edit: I just read the answer to this question above "Is the Audio system also overrideable or has it hard depends on the wwise packages? -> No override folder and needs wwise packages." Apologies for my haste!
Edited by Finchyy, 22 May 2018 - 07:13 AM.
### #5 GrimLefourbe Posted 25 May 2018 - 12:40 AM
From what I've seen, it's not possible to partially change objects in the *.*bundle. If it is, then please inform me.
I think it'd be cool if it was possible to change only single variables in objects. I made a mod to change the level cap; it only needs to change a single variable in global.gamedatabundle, but I had to copy the whole GlobalGameSettings object, which is 1200 lines, just to change that one variable. That makes this change incompatible with any other mod that would change the GlobalGameSettings, right?
### #6 peko Posted 29 May 2018 - 08:12 AM
How does it work on mac? Create an override folder inside the app?
### #7 Tarlonniel Posted 30 May 2018 - 05:37 PM
Seconding the above question, someone asked me for a Mac version of my mod, saying the file formats are different. Are they? Can I convert somehow without having a Mac myself?
### #8 BMac Posted 01 June 2018 - 04:57 PM
The process should be relatively similar on Mac. The 'override' folder should go inside the app at "PillarsOfEternityII.app\Contents\override". Nothing should need to be different about the files themselves. If you run into any issues with that, let me know and I'll dig deeper.
From what I've seen, it's not possible to partially change objects in the *.*bundle. If it is, then please inform me.
I think it'd be cool if it was possible to change only single variables in objects. I made a mod to change the level cap; it only needs to change a single variable in global.gamedatabundle, but I had to copy the whole GlobalGameSettings object, which is 1200 lines, just to change that one variable. That makes this change incompatible with any other mod that would change the GlobalGameSettings, right?
That's right. I'll make a note of that and see if we can do something about it in a future version.
### #9 Tarlonniel Posted 02 June 2018 - 01:25 AM
Thank you, I'll pass that along and add instructions to my page.
### #10 Quillon Posted 03 June 2018 - 12:34 PM
From what I've seen, it's not possible to partially change objects in the *.*bundle. If it is, then please inform me.
I think it'd be cool if it was possible to change only single variables in objects. I made a mod to change the level cap; it only needs to change a single variable in global.gamedatabundle, but I had to copy the whole GlobalGameSettings object, which is 1200 lines, just to change that one variable. That makes this change incompatible with any other mod that would change the GlobalGameSettings, right?
I only took the blocks of code (or whatever it's called) that I edited; I didn't need to copy the whole file to make it work. (I took the code related to recovery and reload that I wanted to edit; in the end the mod file was 3,411 lines, for 29 edits, while the original attacks.gamedatabundle file has 240,252 lines.) What didn't work was combining code from another original file with it; I had to make 2 separate mod files from 2 separate origins for 2 separate purposes.
Anyway, so what's changed with 1.1 related to modding? People have been saying that it would become easier somehow...
### #11 BMac Posted 04 June 2018 - 11:24 AM
From what I've seen, it's not possible to partially change objects in the *.*bundle. If it is, then please inform me.
I think it'd be cool if it was possible to change only single variables in objects. I made a mod to change the level cap; it only needs to change a single variable in global.gamedatabundle, but I had to copy the whole GlobalGameSettings object, which is 1200 lines, just to change that one variable. That makes this change incompatible with any other mod that would change the GlobalGameSettings, right?
I only took the blocks of code (or whatever it's called) that I edited; I didn't need to copy the whole file to make it work. (I took the code related to recovery and reload that I wanted to edit; in the end the mod file was 3,411 lines, for 29 edits, while the original attacks.gamedatabundle file has 240,252 lines.) What didn't work was combining code from another original file with it; I had to make 2 separate mod files from 2 separate origins for 2 separate purposes.
That's right, you can override particular objects from a gamedatabundle without overriding the whole bundle; you just can't override particular values on an object without overriding the whole object.
You should be able to put any number of gamedata objects in one .gamedatabundle, regardless of the source. You can't mix objects from different types of bundles, of course (for example, a gamedata object and a global script).
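As a sketch of such a mod-side bundle (schematic only: the ellipsis stands for the object's complete component list, which must be copied over in full because partial objects are not merged; the type and ID shown reuse the figurine example posted later in this thread):

{
"GameDataObjects": [
{
"$type": "Game.GameData.ConsumableGameData, Assembly-CSharp",
"DebugName": "Figurine_Jade_Panther",
"ID": "839ebb7d-da8f-4e42-9c2a-7d550cb306ce",
"Components": [ ... ]
}
]
}

Any number of complete objects, taken from any of the original gamedata bundles, can be listed in the same array.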
Anyway, so what's changed with 1.1 related to modding? People's been saying that it would become easier somehow...
I don't think there have been any large systematic changes. Some specific data is now in gamedata that wasn't, particularly the tables that define how many spellcasts spellcasting classes get. Player voice sets also exist on their own individual gamedata objects instead of as a list on a single object, which made it difficult to have multiple voice set mods.
### #12 Gary1986 Posted 04 June 2018 - 01:47 PM
I've edited crews starting job traits using the Ship GameDataBundle file. If I want to keep that file I've edited through each patch, do I just create an override folder and add the edited Ship file to it?
I also don't want to miss out on new stuff that may get added to the Ship file via future patches, so how do I keep my edited Ship file yet still allow the new things to be added via future game patches?
### #13 Quillon Posted 05 June 2018 - 02:40 AM
I don't think there have been any large systematic changes. Some specific data is now in gamedata that wasn't, particularly the tables that define how many spellcasts spellcasting classes get. Player voice sets also exist on their own individual gamedata objects instead of as a list on a single object, which made it difficult to have multiple voice set mods.
Any chance for attack/ability animation speeds to be in gamedata in the future?
Or speed slider's data if it ain't just time manipulation
### #14 Zap Gun For Hire Posted 07 June 2018 - 11:19 AM
I've edited crews starting job traits using the Ship GameDataBundle file. If I want to keep that file I've edited through each patch, do I just create an override folder and add the edited Ship file to it?
I also don't want to miss out on new stuff that may get added to the Ship file via future patches, so how do I keep my edited Ship file yet still allow the new things to be added via future game patches?
Putting a modded file in the override folder will make it so it doesn't get overwritten by a patch, yes. HOWEVER, as you note, it's generally a good idea to always check a modded file against an updated basefile to see what sort of additions have been made to the basefile that the game is expecting to see. If you don't do this, there's the potential for weirdness or even crashes. And that's not including game balance patches like changing the values of ship hull health and whatnot.
Theoretically it should be possible to put a modded crewmember in its own gamedatabundle file, as long as it has its entire statblock properly formatted. However, I was running into issues with that last night when I was testing the possibility (like you, I have modded a couple of crew members). So perhaps not.
As it is, I would just grit my teeth and re-mod the ships.gamedatabundle file with whatever changes you've made to the crew members (and whatever else) and then plop that into the override folder. Then I would start messing around with a crew member stat block as a solitary gamedatabundle file and see whether or not the game will recognize it on its own.
NOTE: In this case, the patch that dropped today changed A LOT in ships.gamedatabundle, so it is almost assuredly easier just to re-mod the crew in the basefile than to add all of the changes to your original modded file. Though I will note that it doesn't appear to have changed any of the hirable crewmembers, so you should be able to just copypasta those changes over without too much problem.
Edited by Zap Gun For Hire, 07 June 2018 - 11:22 AM.
### #15 peardox Posted 09 June 2018 - 03:33 AM
Examining the *.gamedatabundle files, they all have EF BB BF at the start, followed by a load of JSON.
Any significance to this for MODs?
Just looked this up: it is the UTF-8 byte-order mark (BOM).
Edited by peardox, 09 June 2018 - 03:47 AM.
### #16 peardox Posted 11 June 2018 - 03:34 PM
While BMac is completely correct, I've looked at a load of MODs and it appears most of us on NexusMods are replicating the original structure.
Of course, both will work, but for making things easy for our fellow MOD developers to understand how a MOD is put together, I feel it beneficial if we stick to the original structure.
This approach, whilst COMPLETELY un-required, makes it easier for the community to understand how someone else's MOD works.
I got into MOD development as a side project to another POE2 project I have going on (mapping the DB) and today released my first, 'No Storms'.
My reasoning is that while we pick things up quickly, others may get confused given conflicting info.
If everything is strictly placed there is no chance of misunderstanding for newbs (like me).
I could have placed my MOD in a dir and forgotten about it; I decided to follow the structure.
It's OK saying a *bundle can go anywhere while stringtables etc. have to replicate structure, but that will confuse the novice MODder.
Simply lie and say EVERYTHING has to be where it's meant to be!
This will produce better newbs and an understandable structure for the community.
I see zero problems with my concept
Edited by peardox, 11 June 2018 - 03:39 PM.
### #17 kilay Posted 19 June 2018 - 04:29 PM
Is there a way to change the name of a creature summoned using a .prefab file?
I did that; it's a mod for a panther figurine. It's my first mod, so maybe I got something wrong.
By the way, my issue is about the name of the panther in game.
It calls a prefab file; under that call there is a SummonDisplayStrings value that I've changed, but it isn't recognized.
The game takes the string assigned to the CRE_Panther_Elder in character.gamedatabundle (entry 606 in characters.stringtable).
So is there a way to make another version of it just using the same model? Or some string value that I can add to the mod to show another name of my liking?
Maybe clone the prefab? I've already opened the Unity file and found it, but I don't have any clue about rebuilding those kinds of files.
{
"GameDataObjects": [
{
"$type": "Game.GameData.AttackSummonGameData, Assembly-CSharp", "DebugName": "Figurine_Jade_Panther_Ability_AttackSummon", "ID": "efe48ffd-75e4-4d7f-a4c9-dd86c70a88b6", "Components": [{ "$type": "Game.GameData.AttackBaseComponent, Assembly-CSharp",
"KeywordsIDs": [],
"AttackDistance": 12,
"MinAttackDistance": 0,
"AttackVariationID": "dd5934cf-0e6f-4f4a-8f92-3d3102090e8f",
"UseParentEquippableHand": "false",
"CastSpeedID": "eacb53e3-6eb5-422a-92ca-99cc883ae4a9",
"RecoveryTimeID": "566840d9-1561-4243-8ca7-889df9869847",
"ImpactDelay": 0,
"ForcedTarget": "None",
"AffectedTargetType": "All",
"AffectedTargetConditional": {
"Conditional": {
"Operator": 0,
"Components": []
}
},
"AffectedTargetDeathState": "Alive",
"HostilityOverride": "Default",
"PushDistance": 0,
"FaceTarget": "true",
"AccuracyBonus": 0,
"PenetrationRating": 7,
"DamageData": {
"DamageType": "None",
"AlternateDamageType": "None",
"Minimum": 0,
"Maximum": 0,
"DamageProcs": []
},
"Require****Object": "false",
"StatusEffectKeywordsIDs": [],
"StatusEffectsIDs": [],
"RandomizeStatusEffect": "false",
"CanGraze": "false",
"CanCrit": "true",
"DefendedBy": "None",
"AfflictionsDefendedBy": "None",
"AfflictionApplicationModifier": "None",
"SubstituteHitVisualEffect": "",
"VisualEffects": [],
"AttackOnImpactID": "00000000-0000-0000-0000-000000000000",
"ExtraAttackID": "00000000-0000-0000-0000-000000000000",
"LaunchBone": "RightWeapon",
"HitBone": "Chest",
"OnHitShakeDuration": "None",
"OnHitShakeStrength": "None",
"NoiseLevelID": "15743f94-1026-40b0-8e13-a667b3f66f63",
"AllReactNoise": "false",
"InterruptsOn": "None",
"InterruptType": "Normal",
"TargetAngle": 0,
"ApplyOnceOnly": "false",
"PathsToTarget": "true",
"HideFromCombatLog": "false",
"DoesNotApplyDamage": "false",
"TreatAsWeapon": "false",
"BounceData": {
"Bounces": 0,
"Multiplier": 0.5,
"Range": 10,
"InRangeOrder": "false",
"NoRepeatTargets": "false",
"AlwaysBounceAtEnemies": "false",
"Delay": 0,
"NeverBounce": "false"
}
}, {
"$type": "Game.GameData.AttackSummonComponent, Assembly-CSharp", "SummonType": "Summoned", "SummonFileList": [{ "Filename": "prefabs/characters/animal companions/CRE_Panther_Elder.prefab" } ], "SummonDisplayStrings": [{ "String": 15000 } ], "OnSummonVisualEffect": "prefabs/effects/abilities/summon/fx_summon_yellow.prefab", "OnDesummonVisualEffect": "prefabs/effects/abilities/summon/fx_summon_yellow.prefab", "TeamType": "JoinTeam", "SummonCopyOfSelf": "false", "Duration": 30, "HasLoot": "false" } ] }, { "$type": "Game.GameData.GenericAbilityGameData, Assembly-CSharp",
"ID": "45d3d709-e939-4793-8005-103f734119fb",
"Components": [{
"$type": "Game.GameData.GenericAbilityComponent, Assembly-CSharp", "KeywordsIDs": ["ddf90f19-c9a2-4087-a1a7-a00ee46bc3dd"], "DisplayName": 10000, "Description": -1, "UpgradeDescriptions": [], "UpgradedFromID": "00000000-0000-0000-0000-000000000000", "Vocalization": "NoVocalization", "Icon": "", "UsageType": "None", "UsageValue": 0, "AbilityClass": "None", "AbilityLevel": 1, "IsPassive": "false", "StackingRuleOverride": "Default", "TriggerOnHit": "false", "IsModal": "false", "ModalGroupID": "00000000-0000-0000-0000-000000000000", "IsCombatOnly": "true", "IsNonCombatOnly": "false", "HideFromUI": "false", "HideFromCombatLog": "false", "UniqueSet": "None", "NoiseLevelID": "15743f94-1026-40b0-8e13-a667b3f66f63", "DurationOverride": 0, "OverrideEmpower": "Default", "ClearsOnMovement": "false", "CannotActivateWhileInStealth": "false", "CannotActivateWhileInvisible": "false", "ActivationPrerequisites": { "Conditional": { "Operator": 0, "Components": [] } }, "ApplicationPrerequisites": { "Conditional": { "Operator": 0, "Components": [] } }, "DeactivationPrerequisites": { "Conditional": { "Operator": 0, "Components": [] } }, "PowerLevelScaling": { "BaseLevel": 0, "LevelIncrement": 1, "MaxLevel": 0, "DamageAdjustment": 1, "DurationAdjustment": 1, "BounceCountAdjustment": 0, "ProjectileCountAdjustment": 0, "AccuracyAdjustment": 0, "PenetrationAdjustment": 0 }, "StatusEffectKeywordsIDs": [], "StatusEffectsIDs": [], "VisualEffects": [], "SelfMaterialReplacementID": "00000000-0000-0000-0000-000000000000", "AttackID": "efe48ffd-75e4-4d7f-a4c9-dd86c70a88b6", "AITargetingConditional": { "Conditional": { "Operator": 0, "Components": [] }, "Scripts": [] }, "AudioEventListID": "00000000-0000-0000-0000-000000000000" }, { "$type": "Game.GameData.ProgressionUnlockableComponent, Assembly-CSharp"
}
]
},
{
"$type": "Game.GameData.ConsumableGameData, Assembly-CSharp", "DebugName": "Figurine_Jade_Panther", "ID": "839ebb7d-da8f-4e42-9c2a-7d550cb306ce", "Components": [{ "$type": "Game.GameData.ItemComponent, Assembly-CSharp",
"DisplayName": 10000,
"DescriptionText": 10001,
"FilterType": "Consumables",
"InventoryAudioEventListID": "705deb97-3f84-48c8-a84b-e3c34e2d0e3a",
"IsQuestItem": "false",
"IsIngredient": "false",
"IsCurrency": "false",
"CanSellForFullValue": "false",
"MaxStackSize": 1,
"NeverDropAsLoot": "false",
"CanBePickpocketed": "true",
"IsUnique": "true",
"Value": 1500,
"PencilSketchTexture": "",
"InspectOnUseButton": [],
"IsPlaceholder": "false"
}, {
"$type": "Game.GameData.ConsumableComponent, Assembly-CSharp", "Type": "Figurine", "UsageCount": 1, "UsageType": "PerRest", "AnimationVariation": 100, "AbilityID": "45d3d709-e939-4793-8005-103f734119fb", "PickpocketAbilityID": "00000000-0000-0000-0000-000000000000", "Timer": 0, "SkillRequirement": { "SkillID": "00000000-0000-0000-0000-000000000000", "Value": 0 }, "ShipMoraleBonus": 1 } ] }, { "$type": "Game.GameData.PromotionalItemCollectionGameData, Assembly-CSharp",
"DebugName": "Drizzt_Figurine_Panther",
"ID": "a02eeafa-1f73-41db-b12b-46e57b2f019c",
"Components": [{
"\$type": "Game.GameData.PromotionalItemCollectionComponent, Assembly-CSharp",
"PromotionalItemCollections": {
"PromotionalItemCollection": [{
"ItemReferenceID": "839ebb7d-da8f-4e42-9c2a-7d550cb306ce",
"Quantity": 1
}
]
}
}
]
}
]
}
Edited by kilay, 22 June 2018 - 02:16 PM.
### #18 kilay Posted 19 June 2018 - 04:37 PM
And also, what about the entry count in the stringtable?
A lot of mods add new string entries; isn't that value related to that?
Edited by kilay, 19 June 2018 - 04:38 PM.
### #19 GravitonGamer Posted 19 June 2018 - 06:34 PM
And also, what about the entry count in the stringtable?
A lot of mods add new string entries; isn't that value related to that?
The entry count in the stringtable files isn't used. You can ignore it in your mod.
### #20 Xaratas Posted 22 June 2018 - 01:54 PM
*.*bundle files can be anywhere in your mod folder, but *.stringtable files need to be in the same folder hierarchy as the file they're overriding. E.g. to override localized\en\text\game\gui.stringtable, your override file should be in override\[yourmod]\localized\en\text\game\gui.stringtable.
*note* That also means that you cannot use subfolders in your mod to split texts for different purposes.
For example override\[yourmod]\blueThings\localized\en\text\game\gui.stringtable will not work.
I suggest using such folder names, as I will do for the enhanced ui mod:
override\[yourmod]_blueThings\localized\en\text\game\gui.stringtable
override\[yourmod]_redThings\localized\en\text\game\gui.stringtable
# What is the average runtime of appending items to arrays?
It is that time of the year again in colleges for final exams. I am preparing for mine as of now, and I am finding myself in hot water when it comes to understanding the running times of appending items to arrays in C.
Basically, I know there are different ways of appending items to an array, but please consider this method of appending and its average runtime. I have managed to find its other runtimes, but not its average.
This method to append items was introduced to me in class, and the trick is to double the size of the array when it is full, while keeping track of the next free element.
From my analysis (I hope I am right here..) we see that the cost to append something is O(n) when the array is maxed out, because a new array of twice the size needs to be made and the values in the old array have to be copied over. However, if our array isn't maxed out, then we can just append an item to the end and we get O(1) runtime.
Can someone explain to me what the average case is and what it would be in Big O notation? I am looking for a simple explanation that I can use when the exam comes, because it is a short-answer / multiple-choice exam.
• "Average" isn't really the proper terminology here. This kind of complexity analysis is usually called "amortized". You typically look at a sequence of $n$ insertions (starting from an empty array) and look at how much time these insertions take together. Then that divided by $n$ is the amortized cost of the insertion operation. – Tom van der Zanden Nov 9 '14 at 21:25
• You say you want the average but you described an amortising approach. What do you want to average over? – Raphael Nov 10 '14 at 12:11
• The argument/proof that my algorithms book used for this, and that I found elegant, was the Accounting method. – Guildenstern Nov 10 '14 at 16:28
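To make the doubling strategy concrete, here is a minimal C sketch (the struct layout and names are my own, not from any particular textbook). Appending $n$ items costs $n$ writes plus copy work of $1+2+4+\dots \le 2n$, so about $3n$ operations in total, i.e. $O(1)$ amortized per append, even though any single append can be $O(n)$.

```c
#include <stdlib.h>

typedef struct {
    int *data;
    size_t size;     /* elements currently stored */
    size_t capacity; /* allocated slots */
} DynArray;

/* Append x, doubling capacity when full.
 * Usually O(1); O(n) only when a resize and copy happens,
 * which is rare enough that the amortized cost is O(1). */
int append(DynArray *a, int x) {
    if (a->size == a->capacity) {
        size_t cap = a->capacity ? 2 * a->capacity : 1;
        int *p = realloc(a->data, cap * sizeof *p); /* copies the old elements */
        if (p == NULL) return -1;                   /* allocation failed */
        a->data = p;
        a->capacity = cap;
    }
    a->data[a->size++] = x; /* the common, constant-time path */
    return 0;
}
```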
|
# 3. Configuring Outlook¶
## 3.1. Generic Installation¶
Kopano Groupware Core allows native interfacing with Microsoft Outlook 2013 and above via the ActiveSync protocol. With the optional Kopano OL Extension package, the available feature set is extended with collaborative features which do not exist in the ActiveSync transport implementation available in Microsoft Outlook. The Kopano OL Extension does not interfere at all with the native ActiveSync synchronization provided by Outlook and instead is a native COM plugin which provides extra functionality by extending the available feature set. Normal updates of Outlook distributed by Microsoft do not interfere with the functionality provided by the extension.
Note
It has to be said that while syncing via ActiveSync works well for moderately sized mailboxes, the protocol was not made to be used with very large numbers of folders or multiple gigabytes of synced data. Since we also have a degree of control on the server side, we have implemented the possibility to open folders from shared accounts through Z-Push. While this generally works, problems can arise when working with multiple users in very busy folders. For future versions we are investigating possibilities to enforce certain maximum sizes for mailboxes and numbers of folders.
### 3.1.1. Limitations of KOE¶
Depending on the use case, the Kopano OL Extension can greatly enhance the user experience of Outlook accounts that have been set up to use ActiveSync. Please be aware that it is by design not a replacement for the old Zarafa MAPI connector.
Although we have received reports of Outlook working stable on Inboxes with around 1000 folders and total sizes of up to 10GB, we strongly recommend against the use of ActiveSync for busy shared mailboxes or for mailboxes with large folder structures. For large mailboxes it is possible to enforce a shorter sync period on the server side. Z-Push 2.4 will also implement a webservice to set a shorter sync period per user/device.
In addition to this, for version 2.0 of KOE we want to implement client-side warnings if the synced mailbox is too large or contains too many shared folders.
## 3.2. Installation of the Kopano OL Extension¶
There are two requirements that must be fulfilled prior to installing the Kopano OL Extension: first, Outlook 2013 or higher is needed, and second, the user must exist on the Kopano server.
Other than that, there are no special requirements beyond installing the Kopano OL Extension, whose installation package is called KopanoOLExtension-<version>-combined.exe
Note
It is recommended to have the latest service packs and security patches installed. An installation of Kopano OL Extension on a system with older versions than Outlook 2013 is not supported and will not work.
### 3.2.1. Instructions for Outlook 2013 and higher¶
1. Go to Control Panel > Mail > Show Profiles...
2. Click on Add... and fill in a title, for example Kopano. Click OK.
3. Select the option Manually configure server settings or additional server types and click on Next.
4. Select With Outlook.com or Exchange ActiveSync compatible service and click on Next.
5. Fill in the hostname of the Z-Push server in the “E-Mail-Server” field.
6. Fill in the logon data of the user in the Username and Password fields.
7. Choose the amount of offline data to keep available, either “1 Month” or “All”, and click on Next.
8. Once the account settings test is successful, close the account test window with a click on Close.
9. Click Finish to finalize the profile creation.
Important
We recommend making the Kopano account the default account in the Outlook profile. It is possible to have the Kopano account as a secondary account and use, for example, an IMAP account as the primary one, but this is not recommended practice.
#### 3.2.1.1. Start Outlook¶
1. Start Outlook and make sure the added profile is being used.
2. This can be set in Control panel > Mail > Profiles... on the bottom of the dialog window.
3. The private mailbox of the entered user will appear as a store in Outlook.
By default, the store in Outlook is empty and Outlook instantly starts synchronizing the available data in the store for the selected time period. Depending on the size of the store, the available network connection, and system resources in the backend, this might take some time. Due to the use of the ActiveSync protocol, Outlook behaves very much like a mobile device during synchronization, which allows interruptions. For instance, when Outlook is closed before it is fully in sync, the next start of Outlook will continue synchronization where it stopped.
|
Weak interaction and the Chirality of anti-particles
Consider a weak current of the form
$J^{\mu} = \bar{u}_{\nu}\gamma^{\mu}(1-\gamma^5)u_{e}$
This describes the part of a weak process where a left-handed electron converts into a left-handed neutrino by emitting/absorbing a W boson. Equivalently, it should also describe the same process for a right-handed positron going to a right-handed anti-neutrino. How do you get this second part from the form of $J^{\mu}$, considering that $P_L = 1-\gamma^5$ is by definition the left-handed projector? Whatever antiparticle states are contained in $u$ and $\bar{u}$ should have eigenvalue $-1$ of $\gamma^5$ in order to be included in $J^{\mu}$, so aren't they by definition left-handed?
(note: this is all in the massless approximation so that I can equate chirality and helicity/handedness)
1 Answer
The charged current part of the Lagrangian of the electroweak interaction, for the first generation of leptons, is:
$$L_c = \frac{g}{\sqrt{2}}(\bar \nu_L \gamma^\mu e_L W^+_\mu + \bar e_L \gamma^\mu \nu_L W^-_\mu )$$
The first part corresponds to different versions of the same vertex :
$e_L + W^+ \leftrightarrow \nu_L \tag{1a}$
$(\bar\nu)_R + W^+ \leftrightarrow(\bar e)_R \tag{1b}$
$W^+ \leftrightarrow (\bar e)_R +\nu_L \tag{1c}$
The second part corresponds to different versions of the hermitian conjugate vertex:
$\nu_L + W^- \leftrightarrow e_L \tag{2a}$
$(\bar e)_R + W^- \leftrightarrow(\bar \nu)_R \tag{2b}$
$W^- \leftrightarrow e_L +(\bar \nu)_R \tag{2c}$
Here, $(\bar e)_R$ and $(\bar\nu)_R$ are the anti-particles of $e_L$ and $\nu_L$. Roughly speaking, you can move a particle to the other side of the $\leftrightarrow$ if you replace it with its anti-particle.
Why do the right-handed anti-particles appear? The fundamental reason is that we cannot separate particles and anti-particles; for instance, we cannot separate the creation of a particle from the destruction of an anti-particle.
[EDIT]
(Precisions due to OP comments)
The quantized Dirac field may be written :
$$\psi(x) = \int \frac{d^3p}{(2\pi)^\frac{3}{2} (\frac{E_p}{m})^\frac{1}{2}}~\sum_s(b(p,s) u(p,s)e^{-ip.x} + d^+(p,s) v(p,s)e^{+ip.x} )$$
$$\psi^*(x) = \int \frac{d^3p}{(2\pi)^\frac{3}{2} (\frac{E_p}{m})^\frac{1}{2}}~\sum_s(b^+(p,s) \bar u(p,s)e^{+ip.x} + d(p,s) \bar v(p,s)e^{-ip.x} )$$
Here, the $u$ and $v$ are spinors corresponding to particle and anti-particle, the $b$ and $b^+$ are particle annihilation and creation operators, and the $d$ and $d^+$ are anti-particle annihilation and creation operators.
We see that, in the Fourier modes of the quantized Dirac field, the elementary degree of freedom is (below, $p$ and $s$ are fixed):
$$b(p,s) u(p,s)e^{-ip.x} + d^+(p,s) v(p,s)e^{+ip.x}$$
Now, suppose we are considering massless particles, so that helicity and chirality are the same thing. Suppose that, for the particle (spinor $u(p,s)$), the couple $(s,p)$ corresponds to some helicity. We see that, for the anti-particle ($v$), there is a factor $e^{+ip.x}$ instead of the $e^{-ip.x}$ for the particle. That means that the considered momentum is $-p$ for the anti-particle, while it is $p$ for the particle. The momenta are opposite for the same $s$, so the helicities are opposite.
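To state the upshot compactly in the massless limit (where chirality equals helicity): on the particle spinors $u$ the matrix $\gamma^5$ measures the helicity directly, while on the anti-particle spinors $v$ it measures the opposite of the helicity, precisely because of the momentum sign flip noted above:
$$\gamma^5\, u_{\pm} = \pm\, u_{\pm}, \qquad \gamma^5\, v_{\pm} = \mp\, v_{\pm}.$$
Hence the projector $\tfrac{1}{2}(1-\gamma^5)$ keeps the left-helicity particle states and the right-helicity anti-particle states, which are exactly the $e_L$, $\nu_L$, $(\bar e)_R$, $(\bar\nu)_R$ appearing in the vertices above.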
• I understand why the anti-particles appear, but not why they are necessarily right-handed. You are right that we cannot separate the particle from the antiparticle, but the left projection operator $1-\gamma^5$ stays the same - so why is the anti particle that is involved right handed? – user28400 Aug 18 '13 at 18:13
• @user28400 : I have made an edit to the answer. Hope it helps. – Trimok Aug 19 '13 at 8:16
• Do all the processes (1abc, 2abc) go with the left projector when you compute the vertex for each one? – Vicky Mar 16 at 5:15
|
# Homogeneous and Homothetic Functions

A function $f:\mathbb{R}^n_{++}\to\mathbb{R}$ is homogeneous of degree $\alpha\in\mathbb{R}$ if $f(\lambda x)=\lambda^{\alpha}f(x)$ for all $\lambda>0$ and all $x\in\mathbb{R}^n_{++}$. For a twice-differentiable homogeneous function $f(x)$ of degree $k$, each first derivative is homogeneous of degree $k-1$.

A function $f:\mathbb{R}^n\to\mathbb{R}$ is called homothetic if it is a monotonic transformation of a homogeneous function, that is, if there exist a strictly increasing function $g:\mathbb{R}\to\mathbb{R}$ and a homogeneous function $u:\mathbb{R}^n\to\mathbb{R}$ such that $f=g\circ u$. Equivalently, $r(x)$ is homothetic if and only if $r(x)=h[g(x)]$ where $h$ is strictly monotonic and $g$ is linearly homogeneous.

With homothetic preferences all indifference curves have the same shape: the MRS depends only on the ratio of the amounts consumed of the two goods, so the slope of the MRS is the same along rays through the origin. Cobb-Douglas, perfect substitutes, perfect complements, and CES preferences are all homothetic. For the Cobb-Douglas utility $u(x_1,x_2)=x_1^{a}x_2^{1-a}$ with $a>0$, the demand functions are $$x_1(p,w)=\frac{aw}{p_1},\qquad x_2(p,w)=\frac{(1-a)w}{p_2}.$$ Notice that the ratio of $x_1$ to $x_2$ does not depend on $w$, so the Engel (wealth-expansion) curves are rays through the origin.

A production function is homothetic in this sense if and only if the scale elasticity is constant on each isoquant; consequently, along rays coming from the origin, the slopes of the isoquants are the same. When the degree of homogeneity is $k=1$, the production function exhibits constant returns to scale over the entire range of output. If a homothetic production function has elasticity of substitution $\sigma$, the corresponding cost function has elasticity of substitution $1/\sigma$; in particular, the cost function for a CES production function is itself of the CES form, and there is a theorem that completely classifies the homothetic functions satisfying the constant elasticity of substitution property.
|
# Question cc55e
May 8, 2016
$4Mn^{2+} + 5BiO_3^{+} + H_2O \to 4MnO_4^{-} + 5Bi^{3+} + 2H^{+}$
#### Explanation:
To balance the following redox equation:
$Mn^{2+} + BiO_3^{+} \to MnO_4^{-} + Bi^{3+}$
we can split this reaction into two half equations:
Oxidation: $Mn^{2+} \to MnO_4^{-}$
Reduction: $BiO_3^{+} \to Bi^{3+}$
I am going to use the following rules:
1. Balance the elements other than oxygen and hydrogen
2. Balance oxygen using water ${H}_{2} O$
3. Balance Hydrogen using ${H}^{+}$
4. Balance the charge using electrons ${e}^{-}$
Oxidation: $Mn^{2+} + 4H_2O \to MnO_4^{-} + 8H^{+} + 5e^{-}\quad(\times 4)$
Reduction: $BiO_3^{+} + 6H^{+} + 4e^{-} \to Bi^{3+} + 3H_2O\quad(\times 5)$
Redox: $4Mn^{2+} + 5BiO_3^{+} + H_2O \to 4MnO_4^{-} + 5Bi^{3+} + 2H^{+}$
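As a quick check that the combined equation balances: Mn: $4=4$; Bi: $5=5$; O: $5\times 3+1=16=4\times 4$; H: $2=2$; charge: $4(+2)+5(+1)=+13$ on the left versus $4(-1)+5(+3)+2(+1)=+13$ on the right.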
Here is a video that explains in details the balancing process:
Balancing Redox Reactions | Acidic Medium.
|
2k views
### On a Super-Earth 1.5x the volume and mass of Earth, would our rocket technology allow us to reach orbit? [duplicate]
To try and make parameters clear, can we say we are talking about 50% 'more Earth'? As in, Earth, but 1.5 times as big and heavy? And let us include the atmosphere. If there is 50% more of it by ...
114 views
### At what gravity would the rocket equation mean “cannot reach orbit from the surface”? [duplicate]
I remember reading (a long time ago in a library far, far away...) that, if Earth's gravity were that much stronger, the rocket equation (?) would mean that it were impossible to actually reach orbit ...
83 views
### Escape Velocity [duplicate]
I've read that if the Earth was 50% larger, escape velocity would be so great that we could not build a powerful enough rocket to escape. Is this true and how do I do the maths to prove it?
60 views
### Is it possible for a space rocket to escape a planet's gravity if the gravity was 10 times that of earth? [duplicate]
I know that the scenario isn't realistic but lets assume that we have a planet which is identical to ours except that it's gravity is 10 times bigger. Would it be possible to launch a space rocket ...
43 views
### Is there a maximum gravity limit for conventional rockets? [duplicate]
The "tyranny" of the Tsiolkovsky rocket equation means that more and more fuel is needed to reach higher delta-vs, and that the amount of fuel needed grows in greater than linear way. Does this mean ...
37 views
### How massive can a planet become before it is impossible to escape from using chemical rocket propulsion? [duplicate]
Assuming a vaguely Earth like world but with higher gravity, at what point is it no longer possible to launch a human being from the surface into orbit? Or anything into orbit?
3k views
### Is this a correct understanding of Tsiolkovsky's rocket equation?
When I graph the rocket equation, substituting arbitrary values for v(exhaust) and m1, so m0 because m1 - m0, the graph implies that increasing propellant mass past a certain point does not increase ...
764 views
### Why is it that max-Q doesn't occur in the transonic regime?
Is there any reason why the maximum dynamic pressure should not occur in the transonic regime? It is clear from this answer that the max-Q for various rockets occurs outside the transonic region. Do ...
969 views
### Limiting factors of liquid rocket engine thrust
What are the limitations for the 1st stage liquid fueled rocket engines that are currently in widespread use, what are the factors that limit their total thrust? Why can't you just inject more and ...
400 views
### Is there a theoretical limit to the size of launch vehicles?
Is there a limit to the size (mass, structural issues) of boosters that can be launched either to orbital or escape velocity? I am asking about such constraints as the required thrust to weight ratio ...
270 views
### Can planet Earthtoo put a Tooian in orbit too?
Planet Earthtoo saw that Earth could put a person in orbit, so they wanted to go to space too. The planet Earthtoo is twice the diameter of Earth, with the same internal structure - the average ...
|
# Is dualizability of an object equivalent to tensoring with that object having a left adjoint?
Let $C$ be a closed symmetric monoidal category. There is hence an adjunction $$-\otimes X\colon C\leftrightarrows C\colon Map(X,-)$$ involving the internal Hom $Map(-,-)$ for every object $X$ of $C$.
An object $X$ of $C$ is called dualizable if the canonical map $$X\otimes DX\to Map(X,X)$$ is an isomorphism where $DX=Map(X,1)$. It turns out that this condition is equivalent to the condition that the canonical map $Y\otimes DX\to Map(X,Y)$ is an isomorphism for each $Y$ in $C$. The isomorphism $$Map(Y,Z\otimes X)\cong Map(Y,Z\otimes DDX)\cong Map(Y,Map(DX,Z))\cong Map(Y\otimes DX, Z)$$ shows that there is an adjunction $$-\otimes DX\colon C\leftrightarrows C\colon -\otimes X$$ for a dualizable $X$, so then $-\otimes X$ has not only a right adjoint but also a left adjoint.
Is an object $X$ of $C$ necessarily dualizable, if $-\otimes X$ has a left adjoint and does this left adjoint have to be $-\otimes DX$?
• I don't think I can contribute to an answer. But, your question seems interesting to me. There's just a part I haven't understood. Why is this: $Map(Y,Z\otimes X)\cong Map(Y,Z\otimes DDX)$ ? – frabala Feb 14 '14 at 13:14
• $X\cong DDX$ if $X$ is dualizable. A proof can be found in many articles/books introducing the notion of a dualizable object. – user8463524 Feb 14 '14 at 13:24
• Interesting question. If $C=\mathsf{Mod}(R)$, then the answer is yes, by the Eilenberg-Watts Theorem: The left adjoint of $X \otimes -$ is a cocontinuous functor, hence given by tensoring with some object - this has to be the dual of $X$. For general $C$ this argument doesn't work. – Martin Brandenburg Feb 14 '14 at 14:30
[This answer is copied with very minor modifications from a preprint of mine "The Balanced Tensor Product of Module Categories" joint with Chris Douglas and Chris Schommer-Pries which will appear on the arxiv in the next few weeks.]
Let $\mathcal{R} \cong \mathrm{Vec} \oplus \mathrm{Vec} \cdot X$ be the symmetric monoidal category consisting of pairs of vector spaces, which we write as $V_1+V_2 X$, with tensor product given by
$(V_1+V_2 X) \otimes (W_1+W_2 X) = V_1 \otimes W_1 + (V_1 \otimes W_2 \oplus V_2 \otimes W_1)X.$
Up to equivalence there are unique choices of associator, unitors, and symmetric structure making this a symmetric monoidal category. It is both finite and semisimple, and is a categorification of the ring $k[x]/(x^2)$, but it is not rigid. The object X cannot have a dual as there is no object $Z \in \mathcal{R}$ such that $Z \otimes X$ has a non-zero map to or from the unit object of $\mathcal{R}$.
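Concretely, for any $Z = Z_1 + Z_2 X$ the tensor product rule gives $$Z \otimes X = (Z_1 + Z_2 X)\otimes(0 + kX) = 0 + (Z_1\otimes k)X \cong Z_1\cdot X,$$ which lies entirely in the $X$-summand; since $1 \cong k + 0\cdot X$ lives entirely in the other summand, there are no non-zero maps between $Z\otimes X$ and $1$. In particular $X\otimes X \cong 0$, categorifying the relation $x^2=0$.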
However, it is easy to see that tensoring with $X$ is an exact functor and hence by the adjoint functor theorem has both adjoints. Explicitly, the adjoint to tensoring with $X$ is the functor which sends $X \mapsto 1$ and $1 \mapsto 0$ (since $\mathcal{R}$ is semisimple, this describes a unique functor).
• This is a counterexample to all sorts of things. A fun exercise is to compute its Drinfeld center. – Noah Snyder Feb 27 '14 at 5:46
|
# Chain Break Fraction
Where is chain break fraction in the D-Wave literature? How can chain break fraction be influenced in the Advantage system where there does not seem to be a 'chain strength' parameter?
1 comment
• Hi Richard,
Chain break fraction is the fraction of chains in a sample that are broken. For example, if a sample has 8 chains and one of them is broken, then chain_break_fraction is 0.125. So it is preferred to have a low chain_break_fraction.
Here is a good resource that goes over chains and chain breaks: https://support.dwavesys.com/hc/en-us/community/posts/360016697094-What-is-a-chain-
As the chains are formed during minor-embedding of the logical problem onto physical qubits, Ocean composites, such as EmbeddingComposite, include a chain_strength parameter. So, this parameter is not solver dependent.
I also want to mention that the new release of Ocean SDK now includes a default chain_strength. Rather than using a static default, we now calculate the chain strength using the uniform_torque_compensation function, which takes the RMS of the problem's quadratic biases. This function can also be used to tune the chain strength by adjusting the prefactor.
Note that you need to upgrade to the Ocean 3.0 version to take advantage of this functionality.
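For illustration, here is a minimal Ocean sketch (the toy BQM values are placeholders; check the current Ocean docs for exact signatures) that passes a tunable chain strength and reads back the chain break fraction:

```python
from functools import partial

from dimod import BinaryQuadraticModel
from dwave.system import DWaveSampler, EmbeddingComposite
from dwave.embedding.chain_strength import uniform_torque_compensation

# Toy two-variable problem (placeholder values).
bqm = BinaryQuadraticModel({'a': -1, 'b': -1}, {('a', 'b'): 2}, 0.0, 'SPIN')

sampler = EmbeddingComposite(DWaveSampler())

# Tune the RMS-based default chain strength via the prefactor.
chain_strength = partial(uniform_torque_compensation, prefactor=2)

sampleset = sampler.sample(bqm, chain_strength=chain_strength, num_reads=100)
print(sampleset.record.chain_break_fraction)  # fraction of broken chains per sample
```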
Please let us know if you have any further questions.
|
## Problem link
http://poj.org/problem?id=2689
## Problem
### Description
The branch of mathematics called number theory is about properties of numbers. One of the areas that has captured the interest of number theoreticians for thousands of years is the question of primality. A prime number is a number that is has no proper factors (it is only evenly divisible by 1 and itself). The first prime numbers are 2,3,5,7 but they quickly become less frequent. One of the interesting questions is how dense they are in various ranges. Adjacent primes are two numbers that are both primes, but there are no other prime numbers between the adjacent primes. For example, 2,3 are the only adjacent primes that are also adjacent numbers.
Your program is given 2 numbers: L and U (1<=L< U<=2,147,483,647), and you are to find the two adjacent primes C1 and C2 (L<=C1< C2<=U) that are closest (i.e. C2-C1 is the minimum). If there are other pairs that are the same distance apart, use the first pair. You are also to find the two adjacent primes D1 and D2 (L<=D1< D2<=U) where D1 and D2 are as distant from each other as possible (again choosing the first pair if there is a tie).
### Input
Each line of input will contain two positive integers, L and U, with L < U. The difference between L and U will not exceed 1,000,000.
### Output
For each L and U, the output will either be the statement that there are no adjacent primes (because there are less than two primes between the two given numbers) or a line giving the two pairs of adjacent primes.
## Summary
Given two integers L and R, output the two adjacent primes in the closed interval $[L,R]$ that are farthest apart and the two adjacent primes that are closest together.
## Approach
Note that the upper bound on R is $2^{31}-1$, but $R-L \leq 10^6$. Every composite $x$ must contain a prime factor not exceeding $\sqrt{x}$, so we only need to find all primes in $[2,\sqrt{R}]$; with these we can sieve out all primes in $[L,R]$ and then simply enumerate every pair of adjacent primes.
The time complexity is $O(\sqrt{R}\log \log\sqrt{R}+(R-L)\log\log R)$.
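For example, with $L=2$ and $R=17$ the primes are $2,3,5,7,11,13,17$; the closest adjacent pair is $2,3$ (gap $1$) and the first most distant pair is $7,11$ (gap $4$), so the program prints `2,3 are closest, 7,11 are most distant.`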
## Implementation
#include <cstdio>
#include <cstring>
#include <cmath>
#include <algorithm>
#include <utility> // std::pair

const int maxn = 65536 + 7, maxm = int(2e6) + 7;
int minimal_prime_factor[maxn], prime[maxn], cnt_prime;

// Linear (Euler) sieve: collect all primes up to `upper` and record each
// number's smallest prime factor so every composite is crossed out exactly once.
void primes(int upper) {
    for (int i = 2; i <= upper; i++) {
        if (minimal_prime_factor[i] == 0) {
            minimal_prime_factor[i] = i;
            prime[++cnt_prime] = i;
        }
        int lim = upper / i; // renamed from `tmp` to avoid shadowing the global array
        for (int j = 1; j <= cnt_prime; j++) {
            if (prime[j] > minimal_prime_factor[i] || prime[j] > lim) break;
            minimal_prime_factor[i * prime[j]] = prime[j];
        }
    }
}

bool composite[maxm];
int tmp[maxm], len_tmp, l, r;

int main() {
    primes(50000); // 50000 > sqrt(2^31 - 1), so the base primes cover any query
    while (~scanf("%d%d", &l, &r)) {
        int upper = int(sqrt(r)) + 1;
        memset(composite, 0, sizeof(composite));
        if (l == 1) composite[0] = true; // 1 is not prime
        // Segmented sieve: mark multiples of each base prime inside [l, r].
        for (int i = 1; i <= cnt_prime && prime[i] <= upper; i++) {
            long long j = l / prime[i];
            for (; j * prime[i] <= r; j++) {
                if (j < 2 || j * prime[i] < l) continue; // skip the prime itself and values below l
                composite[j * prime[i] - l] = true;
            }
        }
        // Collect the primes that survive in [l, r].
        len_tmp = 0;
        for (unsigned int i = l, cur = 0; i <= r; i++, cur++)
            if (!composite[cur]) tmp[++len_tmp] = i;
        if (len_tmp > 1) {
            // Scan adjacent prime pairs for the smallest and largest gaps,
            // keeping the first pair in case of ties.
            int max_dist = -1, min_dist = 0x3f3f3f3f;
            std::pair<int, int> max_pair, min_pair;
            for (int i = 2; i <= len_tmp; i++) {
                int diff = tmp[i] - tmp[i - 1];
                if (diff < min_dist) {
                    min_pair.first = tmp[i - 1], min_pair.second = tmp[i];
                    min_dist = diff;
                }
                if (diff > max_dist) {
                    max_pair.first = tmp[i - 1], max_pair.second = tmp[i];
                    max_dist = diff;
                }
            }
            printf("%d,%d are closest, %d,%d are most distant.\n", min_pair.first, min_pair.second, max_pair.first, max_pair.second);
        } else puts("There are no adjacent primes.");
    }
    return 0;
}
|
# Eliminating hazards
Promoting healthy operations and policies is one of the major ways to eliminate the hazard. One important way to reduce the risk of COVID-19 spread is by continuing to provide instruction through online classes and encouraging teleworking and virtual meetings.
## Continuing to work remotely
Continuing to have employees work from home wherever possible is the most effective way to remove the hazard of COVID-19 from their workplace. Employees are to continue working at home unless they are expressly authorized to work on campus.
Please consult with the relevant Roles and Responsibilities section of this document for information pertaining to return to campus.
The following three primary controls are to be implemented to minimize the spread of infectious disease:
1. Physical distancing
2. Hand hygiene and respiratory etiquette
3. Surface decontamination
### Physical distancing
#### Occupancy and workflow
Designating occupancy limits for spaces to accommodate 12.5 m² (a 2 m radius) per person is the most effective way of maintaining physical distancing.
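For example, under this rule a 50 m² meeting room would be assigned an occupancy limit of 50 ÷ 12.5 = 4 people.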
#### Scheduling controls
Option 1: Alternating scheduling. This approach limits employee staffing such that fewer people are in close proximity to each other at any one given time. Consideration must be given to ensure that employees have the tools necessary to perform their work in both on-campus and at-home locations.
Option 2: Staggering start and end times. This approach lessens congestion at the beginning and end of each workday; however, it may not decrease overall population density. Staggering schedules will make it easier to manage traffic in communal areas such as kitchens, washrooms, and lounges.
The University’s Working Alone guideline must be followed when deploying employees in any operation or fieldwork.
#### Workflow controls
Designation of a workflow within a workspace can also have a profound effect on minimizing incidental infringement on personal space. The diagrams below demonstrate traffic-flow designations and occupancy reductions for both office and lab environments.
Figure 3A below depicts a typical office environment and Figure 3B shows a laboratory environment.
Figure 3A: Typical office environment
Figure 3B: Typical lab environment
Legend:
• Red arrows indicate the suggested one-way flow pattern.
• Blue circles with "OL" indicate an occupancy limit for the room.
• Red circles with an "X" indicate suggested reductions in occupancy
• Green rectangles denote shared workstation/equipment locations
In order to manage occupancy load, consider the following factors:
• Workflow
• Equipment usage/sharing
• Equipment requirements and supplies
• Process requirements and supplies
• Creation of workstations to designate work and separate tasks
• Scheduling of specific tasks or equipment
• Non-essential tasks removed from high demand workspaces
• PPE requirements and availability
When designing workspaces for occupancy limits, ensure that physical distancing of 2 m is maintained in the work area. In addition:
• Indicate maximum occupancy on all entrances
• Remove extraneous seating from the workspace
• Designate workstations as single-person use (use tape or other markings)
• Develop workflow patterns for one-way travel
• Document and communicate all changes to all occupants/ employees
• Post signage to promote physical distancing practices
The following circumstances demonstrate instances within the workplace where physical distancing is possible.
1. Room occupancy limits: The employee can maintain a 2m distance from colleagues.
2. The operation of equipment or completion of tasks: Where 2 or more people share the same workspace or equipment, they can maintain a 2m distance while simultaneously operating equipment or completing tasks. If any equipment is shared there must be processes in place for the proper cleaning of this equipment.
In instances where physical distancing may not be possible, the supervisor is encouraged to consider alternate work schedules or contact the Safety Office for guidance.
#### Meetings
To maximize physical distancing measures, the preferred approach for all meetings is to continue to utilize available collaborative tools and hold meetings virtually.
Where absolutely necessary, in-person meetings are to be limited in accordance with established Public Health restrictions and government requirements. Ensure attendance does not exceed revised room capacities and that all attendees maintain a 2 metre distance at all times.
To encourage proper physical distancing practices, move and, if possible, stack excess chairs. Add visual cues to promote and support physical distancing practices between attendees.
#### Communal and public areas
There are a number of communal spaces (meeting rooms, break rooms, washrooms, elevators, stairwells etc.) accessible to employees across campus.
When sharing communal spaces, ensure appropriate physical distancing practices are maintained and adhere to health and safety protocols. For smaller spaces, consider wearing a mask when physical distancing is challenging. If you are able, consider taking the stairs instead of the elevator.
Departments are responsible for limiting occupancy within their own communal spaces to facilitate physical distancing.
#### Access and egress
Each building has designated entrances which will be open. All other entrances will be for egress only. Upon entering the building, occupants are again reminded to verify that they are symptom-free.
#### Elevators, corridors, lobbies and stairs
• Wear a non-medical mask/ face covering when using elevators and walking through populated lobbies and atriums.
• In buildings with elevators, occupancy in the elevators will be limited, which will increase wait times. Wash or disinfect your hands after exiting the elevator.
• Practice physical distancing and avoid touching your face, mouth, and eyes after touching a surface.
• In buildings with four floors or less, use the stairs, if you are able.
• In more populated buildings and floors, follow signage for spacing and paths of travel.
• In the absence of signage, stay to the right of any hallway or stairs while others are passing. Some stairwells will be designated for travel up or down only to help with traffic flow within the building.
#### Kitchens and lounges
• Remove or reduce seating in break rooms and kitchenettes to prevent gathering in communal spaces.
• To maximise physical distancing, develop alternate times to take breaks and lunches to prevent gathering.
• Disinfect surfaces (microwave buttons and handles, fridge handles, lunch table) before each use
• Do not share utensils or dishes
#### Washrooms
• Physical distancing must be maintained in all washrooms
• In some washrooms this means there will need to be a limit of one person in the washroom at a time
• Use automatic door openers where possible using elbow/knuckle
• Wash hands with soap and water after using the washroom
• Consider using a paper towel to open washroom doors
#### Signage
Signage has been posted throughout the campus to help instruct and guide individuals. Signage communicates important information, including instruction on hand hygiene, COVID-19 symptoms, and cough and sneeze etiquette. In addition, decals and directional signage have also been posted to remind individuals of traffic flow changes. A list of required signage can be reviewed on the Physical distancing signs order form.
Additional signage for your department may be ordered online at no cost. Please complete the online order form through Retail Services. A guide to available poster options can be accessed here.
#### Cleaning protocols
In light of the COVID-19 virus, our custodial staff have modified their duties to increase sanitization of high-touch surfaces, including twice-daily cleaning of main doors, elevator buttons, handrails, and washrooms. In your labs and office areas, floors will be cleaned, waste picked up, and door knobs and light switches wiped in accordance with the posted cleaning schedule.
Therefore, all faculty and staff are expected to continue to clean their own equipment including various electronics, keyboards, office equipment, lab equipment, and lunchroom equipment such as fridges, coffee makers, etc.
Cleaning products can be obtained via Plant Operations.
Wall-mounted and portable hand sanitizer dispensers will be placed at all building entrances and elevators.
In cleaning kits, each department will be provided with two bottles of hand sanitizer, disinfectant and disposable cloths and will be expected to order replacements as required. Plant Operations is working on a reliable supply chain as these items remain difficult to procure.
Disinfecting wipe dispensers will be installed in many classrooms to allow students to disinfect their desks and chairs.
#### Cleaning protocols in the event of a confirmed COVID-19 case on campus
When a report of a suspected or confirmed case with potential contamination is received, the affected area must be isolated by the supervisor. If possible, note the location of affected workstations/areas at the entrance to the isolated area.
Environmental Services (and Housing Facilities in Residences) are responsible for cleaning and disinfection of the affected space, using recommended chemicals and personal protective equipment. This includes a final step using an electrostatic machine for thorough disinfection.
### Physical barriers
#### Plexiglas barriers and workplace modifications
As staff and faculty gradually return to campus, maintaining physical distancing will be critical in reducing the risk of infections and ensuring that our faculty, students, and staff feel safe. Barriers or shields are options to consider in open-plan offices or customer service areas where staff and customers may find themselves in close proximity to one another. These barriers can provide protection during interactions while still enabling clear and unobstructed lines of sight. Barrier materials must be easy to clean and sanitize.
As everyone is gearing up for the return to campus, please think about the services that you provide, how their mode of operation can be modified to make them safer, and finally what modifications to your physical space need to be made:
• Can services be provided online?
• Can appointments, deliveries, etc. be scheduled to eliminate lineups?
• Can office layout be changed to enhance physical distancing?
• Do you require plexiglass shields or furniture partition alterations?
• Are stanchions or barriers needed to establish safe spacing for customer services lines?
• Is directional signage needed?
#### How do I request help?
You can submit requests via Plant Operations website.
Plant Operations, Space Planning Office and Procurement have joined forces to provide assistance in keeping you safe as you return to campus:
• Space flow review
• Options for barrier installation (plexi or furniture type)
• Signage, floor indicators & decal order
• Other space modifications (door hardware, mirrors, etc.)
A staff member will meet with you to review your options, priority level, cost, and anticipated delivery time.
Your request will be placed in a priority sequence that includes the following:
1. Laboratories & workplaces included in the early phases of campus re-entry
2. High volume service counters (food services, material distribution centers, etc.)
3. Reception areas if processes cannot be adjusted to minimize contact
4. Research groups if processes cannot be adjusted to minimize contact
5. General office layouts
#### Pricing and cost allocation
Items ordered through Plant Operations will be charged to your department – please code charges related to COVID-19 safety measures according to the applicable Finance Unit4 codes. UW Procurement and Plant Ops are working with our suppliers to source materials quickly and at a reasonable price. Please note that all institutions and companies are doing the same, so delivery time and cost might change.
Pricing examples:
• Free standing base with 32”h x 30”w plexiglass shield – $120-$150
• Fixed permanent installation 32”h x 36”w plexiglass shield – $150-$200
• Hang panels 32”h x 22”w plexiglass shield – $100-$120
### Hand hygiene
Hands are the most common vehicle for the transmission of microorganisms. Hand Hygiene reduces the risk of transmission of microorganisms from person to person, environment to person, or person to environment.
Hand hygiene can be accomplished by hand washing using soap and running water or hand sanitizing using an alcohol-based hand rub (ABHR).
Ensure there are hand hygiene stations appropriate for the type of work being conducted, in or near the workspace. For example:
• If the work will result in dirt and debris soiling hands, a handwashing sink is required (E.g., vehicle shop, laboratories, kitchens, workshops)
• If the work will not cause soiling of hands, hand sanitizing stations are sufficient (E.g., office work).
• Hand sanitizing stations and supplies can be order through Plant Operations
In addition, document and communicate the following guidelines to all occupants/employees.
#### Frequency of hand hygiene
• Hands should be washed or sanitized upon entering and exiting any space (room to room)
• Hands must be washed or sanitized before beginning any procedure, upon completion of any procedure, and whenever removing gloves
• Hands must be washed or sanitized before eating, after removing gloves and after performing any surface decontamination
Refer to these resources from Public Health Ontario:
If you wear rings (or other hand jewelry), removing the ring before washing and replacing when complete is not acceptable. You must either completely decontaminate the ring during each hand wash/sanitize or stop wearing hand jewelry altogether.
### Respiratory etiquette
Infectious diseases can easily spread when an individual coughs and sneezes. Manage this potential using the following etiquette:
• Cover your mouth and nose when you cough or sneeze, and immediately discard the tissue in the trash.
• If a tissue is not available, cough or sneeze into your elbow, not your hands.
• Perform hand hygiene immediately after blowing your nose, coughing, or sneezing.
• If you are experiencing fever, cough, runny nose, or headache, isolate yourself at home or another suitable location and follow the University’s protocol for individual disclosures of COVID-19
Figure 4A: Covering face with tissue
Figure 4B: Covering face with arm
### Surface decontamination
Surface decontamination involves two stages, cleaning then disinfection. Before proceeding with surface decontamination, consider the following:
1. Ensure the disinfectant chosen is appropriate for the surface being disinfected.
2. Ensure there is enough disinfectant to last the workweek.
3. Designate individuals responsible and establish schedules for performing decontamination.
4. All work surfaces should be decontaminated twice daily. In most situations, this means before work, and once work has concluded.
5. All high-touch surfaces should be disinfected twice daily. Designate responsible persons and a schedule for this to be done. High-touch surfaces include:
1. Entry and exit points (doorknobs, push bars, and handles)
2. Cupboard knobs and handles
3. Light switches, power switches, keyboards, etc.
4. Equipment related controls that are accessed in high frequency (several times per day)
5. Devices that come into close contact with the face (phones)
6. Faucets and taps
#### Surface cleaning
Cleaning removes organic materials from the surface, which can inhibit the effectiveness of any disinfectant used. Cleaning involves:
• Wearing nitrile or other similar gloves if required by product instructions
• Removing organic materials with a disposable towel and discard
• Using a cloth and warm soapy water to wipe down surfaces
• Allowing the surface to dry
#### Surface disinfection
The three most important factors to consider when disinfecting a surface are:
• Disinfectant efficacy against whatever you are trying to kill
• Concentration of the disinfectant is strong enough to be effective
• Contact time is long enough to allow the disinfectant to perform its action
Some commonly used disinfectants include:
• Alcohol based (60% - 70% isopropanol or ethanol) for a contact time of 2 minutes
• Hydrogen peroxide at 3% for a contact time of 5 minutes
• Bleach at a 10% dilution for a contact time of 5 minutes
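For example, a 10% bleach dilution corresponds to mixing one part household bleach with nine parts water.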
#### Procedure for using disinfectants
1. Put on gloves if required by the product instructions (nitrile gloves are normally sufficient but check with manufacturer instructions)
2. Clean surface of visible dirt as described above
3. Spray or apply disinfectant onto the clean, dry surface
4. Allow the disinfectant to sit on the surface for the duration of contact time (reapply if disinfectant evaporates prior to required contact time)
5. Allow the surface to dry
6. Wipe off residue with a paper towel and discard
7. Perform hand hygiene
## Protective equipment
Personal protective equipment (PPE) or group protective equipment (GPE) is normally considered the last line of defense. It is a way to control hazards when other more effective options of control are not available.
This section focuses on the following:
• When PPE vs GPE is required
• Glove considerations
• Lab coat considerations
### When is personal protective equipment required? – Healthcare
If physical distancing is maintained and hand hygiene and surface decontamination are both performed adequately, the risk of disease transmission will be low and personal protective equipment (PPE) / group protective equipment (GPE) will not be required. To determine which protective equipment should be used, consider the following:
• PPE: Includes N95 respirators, medical/surgical masks, gowns, face shields, and goggles, which protect the wearer from infectious disease and must be reserved for health care environments. Workers who normally wear PPE will continue to do so.
• GPE: Includes non-medical masks or face coverings, the intent of which is to reduce the potential exposure to COVID-19 by containing the wearer’s respiratory droplets. Non-medical masks / face coverings should be worn in public settings, e.g. entering and exiting buildings, corridors, washrooms, communal areas, etc., where maintaining a 2-meter physical distance is not possible.
Non-medical masks / face coverings can be used when physical distancing is a challenge. Surgical/medical masks, as well as respirators should be reserved for patient care. Respirators cannot be used unless they have been authorized by the Safety Office.
### GPE – Safe use of non-medical masks and face coverings
• Consistent and strict adherence to hand hygiene, physical distancing, and respiratory etiquette.
• Wash hands immediately before putting it on and immediately after taking it off (in addition to practicing good hand hygiene while wearing it).
• Individuals should be careful not to touch their eyes, nose and mouth when removing their mask and wash hands immediately or use hand sanitizer after removing.
• When removing the mask, grasp the ties or ear loops carefully without touching the front of the mask.
• Make sure the mask fits snugly but comfortably against the side of the face
• Do not share the mask with others.
• Avoid touching the mask while using it.
Visit the Public Health Agency of Canada website for instructions on wearing cloth masks.
### PPE – Gloves
Nitrile or latex gloves are used to provide a non-absorbent barrier between a contaminated surface and the skin. Leather or cloth gloves are used in many workplaces to protect hands from mechanical injury (cuts, scrapes), but will not provide adequate protection from infectious disease.
Gloves are not a replacement for hand hygiene, must be changed frequently, and must never be re-used. In general, if gloves were not needed for an operation or process prior to COVID-19, it is likely they are not required now.
### PPE lab and shop coats
Lab coats are used to protect street clothing from contamination, spills, and exposure to hazardous substances in labs and workshops. Lab coats should not be worn outside of the primary work location or in any public area. The type of lab coat (e.g. cotton, polyester) should be selected to protect against the hazards normally present in the work. To protect against COVID-19 transmission, the following precautions should be followed:
• Lab coats should be stored on hooks at the main entrance of the lab or workshop
• There should be separate racks/hooks for lab coats. Street clothes, backpacks, and other common items should not be stored in any lab and are strictly prohibited in BSL2 permitted labs
• Lab coats should be laundered when contamination is suspected or evident
• To launder a lab coat, don gloves, place the lab coat in a plastic bag, and seal it with a twist tie or other secure means; it can then be transported
• Dirty lab coats can be laundered with regular laundry (unless contaminated with hazardous materials) using the highest heat settings possible in the wash and dry cycles
• Perform hand hygiene after handling soiled lab coats
|
Solved
# outputing to pipe-delimited flat file with quoted identifier
Posted on 2011-02-11
Medium Priority
810 Views
Greetings mates,
There is probably something simple I am overlooking here and I have been looking at this now for over 45 minutes.
My eyes hurt.
I am trying to export some data from SQL Server to a .csv file.
The file needs to be pipe-delimited with double-quoted identifier.
Everything seems to work fine, except that each header column and its associated values are not being wrapped in double quotes.
Any ideas where I am going wrong?
According to the command options I looked up, -I (dash, capital i) is supposed to enable double quotes as a text identifier.
However, I don't see the double quotes when viewing the outputted data.
Any ideas where I am overlooking things or screwing them up?
Here is the "mostly working" code.
sqlcmd -S crt3 -i c:\TYLER\ENSA.sql -o c:\inetpub\ftproot\PROD\ENSA.csv -h 8192 -s"|" -w 5000 -W -I
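For reference, the switches used here (per the sqlcmd documentation): -S is the server, -i and -o the input and output files, -h 8192 the number of rows between column headings, -s"|" the column separator, -w 5000 the screen width, -W removes trailing spaces, and -I sets QUOTED_IDENTIFIER to ON for the connection.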
Question by:sammySeltzer
LVL 61
Expert Comment
ID: 34875108
Post a sample from the CSV file,
at least the first two lines.
LVL 61
Expert Comment
ID: 34875151
Why are you using "|"?
What happens if you use -s"<TAB>"?
LVL 29
Author Comment
ID: 34875256
Use of "|" is a user requirement.
LVL 61
Expert Comment
ID: 34875302
If your query is not too long, do this:
select
'"' + col1 + '"' col1,
col2,
'"' + col3 + '"' col3,
...
from myTable(s)
where ...
i.e., put " around your varchar values in SQL...
LVL 61
Expert Comment
ID: 34875548
http://msdn.microsoft.com/en-us/library/ms174393.aspx
-I enables " (quoted identifiers) in SQL
so when -I is used you can run a query like
select "col name" from "mytable"
LVL 29
Author Comment
ID: 34876448
Thanks for all your help, HainKurt.
I will run this shortly and report back to you.
LVL 29
Author Comment
ID: 34888790
Hi HainKurt,
Sorry, I was unable to test as indicated above due to a maintenance window that shut down my PC; it didn't restart till this morning.
I have just finished trying your suggestions, and neither of the two worked completely.
First, I tried this:
select
'"' + col1 + '"' col1,
col2,
'"' + col3 + '"' col3,
...
from myTable(s)
where ...
I get results in the format that I want.
However, for some reason, the header columns are not included.
When I try this example below:
-I is to enable " in sql
so when -I is used you can run a query like
select "col name" from "mytable"
I get an 'Invalid object name' error, referring to the table name.
So, I am sure I am doing something wrong.
Here is the code I attempted to use:
SELECT "[USEDRUG]","[USEAGE]" FROM "[ENSA]" WHERE ID between 559 and 600
This is the inputfile on the sqlcmd code below:
sqlcmd -S crt3 -i c:\TYLER\ENSA.sql -o c:\inetpub\ftproot\PROD\ENSA.csv -h 8192 -s"|" -w 5000 -W -I
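Note: the 'Invalid object name' error most likely comes from mixing the two quoting styles; with -I on, "[ENSA]" names a table whose name literally contains the square brackets. A minimal corrected query, assuming the actual table name is just ENSA:
SELECT "USEDRUG", "USEAGE" FROM "ENSA" WHERE ID between 559 and 600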
LVL 61
Expert Comment
ID: 34889837
Try this.
Look at the second query: the headers and PmtFrequency are wrapped with " in the result...
select * from deals
id dealid PmtFrequency PmtAmount
1 1200 Annual 1000
2 1200 Annual 1250
select
id,
dealid '"Deal ID"',
'"' + PmtFrequency + '"' '"Pmt Frequency"',
PmtAmount '"Pmt Amount"'
from deals
id "Deal ID" "Pmt Frequency" "Pmt Amount"
1 1200 "Annual" 1000
2 1200 "Annual" 1250
LVL 61
Accepted Solution
HainKurt earned 2000 total points
ID: 34889867
and this is the csv file that I generated using
sqlcmd -S NTWBUCKM5217X\SQLEXPRESS -i d:\hk\ee\sql\test.sql -o d:\hk\ee\sql\test.csv -h 8192 -s"|" -w 5000 -W -I
and this is test.sql
select
id,
dealid '"Deal ID"',
'"' + PmtFrequency + '"' '"Pmt Frequency"',
PmtAmount '"Pmt Amount"'
from EE.dbo.deals
id|"Deal ID"|"Pmt Frequency"|"Pmt Amount"
--|---------|---------------|------------
1|1200|"Annual"|1000
2|1200|"Annual"|1250
3|1200|"Monthly"|750
4|1300|"Annual"|825
(4 rows affected)
LVL 29
Author Comment
ID: 34890297
HainKurt,
Your solution worked really well for me as far as the quoted identifiers.
One thing that was missing was quoting the integer and date data types.
A bit more research produced what I call the perfect solution:
I found a function called QUOTENAME().
select QUOTENAME(ID, '"') as '"ID"'
...
...
where
...
and it seems to have satisfied all my requirements.
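For reference, a minimal sketch of how QUOTENAME() can wrap integer and date columns as well, by converting them to varchar first (the StartDate column and the varchar lengths are illustrative assumptions, not from the original query):
select
    QUOTENAME(CONVERT(varchar(12), ID), '"') as '"ID"',                     -- int converted before quoting
    QUOTENAME(USEDRUG, '"') as '"USEDRUG"',                                 -- varchar quoted directly
    QUOTENAME(CONVERT(varchar(10), StartDate, 120), '"') as '"StartDate"'   -- date rendered as yyyy-mm-dd
from ENSA
where ID between 559 and 600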
Thanks a lot for all your help.
|
Lectures and Quick Notes (1.2)
Chapter
Chapter 1
Section
Lectures and Quick Notes (1.2)
Lectures 4 Videos
• A linear system means there are 2 or more equations (typically 2) involving more than 1 unknown.
Section Intro
## Isolating a variable for substitution method
ex From 6a + 5b = 12, isolate the variable a.
Subtracting 5b from both sides gives 6a = 12 - 5b; dividing both sides by 6, therefore, a = 2 - \dfrac{5}{6}b
Isolating Variable when there are Two
Substitution method ex1
Solve for (x, y) when
\displaystyle \begin{cases} x + y = 3 \\ 2x - y = -1 \end{cases}
solution
\displaystyle (x, y) = (\frac{2}{3}, \frac{7}{3})
2.47mins
Substitution method ex1
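A worked sketch of the substitution steps for ex1 (solving the first equation for y, then substituting into the second):
\displaystyle y = 3 - x \;\Rightarrow\; 2x - (3 - x) = -1 \;\Rightarrow\; 3x = 2 \;\Rightarrow\; x = \frac{2}{3},\; y = 3 - \frac{2}{3} = \frac{7}{3}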
Substitution method ex2
Solve for (x, y) when
\displaystyle \begin{cases} 3x + 10y = 12 \\ 2x - 7y = 2 \end{cases}
solution
\displaystyle (x, y) = (\frac{104}{41}, \frac{18}{41})
|
Table 2 Proportion of residues that are mutated in the buried, intermediate and surface sites of proteins whose sequences have diverged by <10% to 50–60%
For six divergence categories, <10% to 50–60%, we give the average proportion of residues in each ASA (accessible surface area) range that have mutations. Thus, for example, for orthologues whose sequences have diverged by 20–30%, 12.4% of the buried residues (ASA=0–20 Å2), 23.7% of the intermediate residues (ASA=20–60 Å2) and 38.2% of the surface residues (ASA=60–∼140 Å2) have mutations.
Proportion (%) of mutated residues in each ASA region
| ASA of the three regions (Å2) | <10% | 10–20% | 20–30% | 30–40% | 40–50% | 50–60% |
| --- | --- | --- | --- | --- | --- | --- |
| 0–20 | 2.4 | 6.7 | 12.4 | 17.7 | 29.3 | 39.6 |
| 20–60 | 5.0 | 13.5 | 23.7 | 33.2 | 44.0 | 55.7 |
| 60–∼140 | 9.8 | 24.3 | 38.2 | 49.8 | 57.5 | 67.8 |
|
This is the multiplication factor for cross-bonded earthing.
When the lengths of the three cable sections are not known, $p_{cb}$ should be set to 1, and $q_{cb}$ to 1.2, giving a value for $f_{cb}$ slightly below 0.004.
For two single core cross-bonded cables, which is not covered in the IEC standard, $f_{cb}$ is set to 0.004.
When all the lengths of the three cable sections are identical, $f_{cb}$ becomes zero.
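As a quick arithmetic check of the defaults above, substituting $p_{\mathrm{cb}} = 1$ and $q_{\mathrm{cb}} = 1.2$ into the three-phase formula gives
$f_{\mathrm{cb}} = \frac{1 - 1.2 - 1 + 1.44 - 1.2 + 1}{\left(1 + 1.2 + 1\right)^{2}} = \frac{0.04}{10.24} \approx 0.0039,$
slightly below 0.004 as stated; and with $p_{\mathrm{cb}} = q_{\mathrm{cb}} = 1$ the numerator vanishes, giving $f_{\mathrm{cb}} = 0$.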
Symbol
$f_{\mathrm{cb}}$
Unit
-
Formulae
Three-phase systems: $\frac{1}{\left(p_{\mathrm{cb}} + q_{\mathrm{cb}} + 1\right)^{2}} \left(p_{\mathrm{cb}}^{2} - p_{\mathrm{cb}} q_{\mathrm{cb}} - p_{\mathrm{cb}} + q_{\mathrm{cb}}^{2} - q_{\mathrm{cb}} + 1\right)$
Two-phase systems: $\frac{p_{\mathrm{cb}}^{2} - p_{\mathrm{cb}} + 1}{\left(p_{\mathrm{cb}} + 1\right)^{2}}$
Single phase: $0$
Related
$a_{\mathrm{S1}}$
$a_{\mathrm{S2}}$
$a_{\mathrm{S3}}$
$p_{\mathrm{cb}}$
$q_{\mathrm{cb}}$
Used in
$\lambda_{\mathrm{1c}}$
|
# M. Malkin
Gagarin Pr. 23, Nizhny Novgorod, 603950 Russia
Department of Mathematics and Mechanics, Nizhni Novgorod State University
## Publications:
Li M., Malkin M. I. Approximation of entropy on hyperbolic sets for one-dimensional maps and their multidimensional perturbations
2010, vol. 15, no. 2-3, pp. 210-221
Abstract: We consider piecewise monotone (not necessarily strictly) piecewise $C^2$ maps on the interval with positive topological entropy. For such a map $f$ we prove that its topological entropy $h_{top}(f)$ can be approximated (to any required accuracy) by its restriction to a compact, strictly $f$-invariant hyperbolic set disjoint from some neighborhood of a prescribed set consisting of periodic attractors, nonhyperbolic intervals and endpoints of monotonicity intervals. Using this result, we generalize the main theorem of [1] on chaotic behavior of multidimensional perturbations of solutions of difference equations which depend on two variables at the nonperturbed value of the parameter.
Keywords: chaotic dynamics, difference equations, one-dimensional maps, topological entropy, hyperbolic orbits
Citation: Li M., Malkin M. I., Approximation of entropy on hyperbolic sets for one-dimensional maps and their multidimensional perturbations, Regular and Chaotic Dynamics, 2010, vol. 15, no. 2-3, pp. 210-221. DOI: 10.1134/S1560354710020097
Du B., Li M., Malkin M. I. Topological horseshoes for Arneodo–Coullet–Tresser maps
2006, vol. 11, no. 2, pp. 181-190
Abstract: In this paper, we study the family of Arneodo–Coullet–Tresser maps $F(x,y,z)=(ax-b(y-z),\; bx+a(y-z),\; cx-dx^k+ez)$ where $a$, $b$, $c$, $d$, $e$ are real parameters with $bd \ne 0$ and $k>1$ is an integer. We find regions of parameters near anti-integrable limits and near singularities for which there exist hyperbolic invariant sets such that the restriction of $F$ to these sets is conjugate to the full shift on two or three symbols.
Keywords: topological horseshoe, full shift, polynomial maps, generalized Hénon maps, nonwandering set, inverse limit, topological entropy
Citation: Du B., Li M., Malkin M. I., Topological horseshoes for Arneodo–Coullet–Tresser maps, Regular and Chaotic Dynamics, 2006, vol. 11, no. 2, pp. 181-190. DOI: 10.1070/RD2006v011n02ABEH000344
|
# $L:\mathbb{R}^2\to\mathbb{R}^2$ given by $L(x,y)=(x,-y)$ which of the following is true?
$L:\mathbb{R}^2\to\mathbb{R}^2$ given by $L(x,y)=(x,-y)$ which of the following is true?
1. differentiable everywhere on $\mathbb{R}^2$
2. differentiable on $(0,0)$ only
3. $DL(0,0)=L$
4. $DL(x,y)=L$ for all $(x,y)$
I have calculated that the derivative matrix is $DL=\begin{pmatrix}1&0\\0&-1\end{pmatrix}$
so $DL$ applied to $(x,y)$ gives $\begin{pmatrix}1&0\\0&-1\end{pmatrix}(x,y)^T=(x,-y)$, i.e. $DL=L$. So $4$ is true, right? And hence $1$ is also true.
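For completeness, a one-line justification that a linear map is its own derivative at every point:
$L\big((x,y)+(h,k)\big) - L(x,y) = (h,-k) = L(h,k),$
so the remainder term in the definition of the derivative is identically zero and $DL(x,y) = L$ for all $(x,y)$.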
Yes, you're correct. $L$ is a linear map so its derivative is itself. – Christopher A. Wong May 22 '13 at 10:18
@ChristopherA.Wong Thank you very much – La Belle Noiseuse May 22 '13 at 10:21
|
The term 'Biocoenosis' was proposed by
$\begin{array}{l}(a)\;\text{Tansley}\\(b)\;\text{Carl Mobius}\\(c)\;\text{Warming}\\(d)\;\text{None of these}\end{array}$
The term 'Biocoenosis' was proposed by Carl Mobius.
Hence (b) is the correct answer.
|