https://proxies-free.com/tag/looping/

## php – how can I make my looped data into one array with simple code
I want to put my looped data into one array without index keys. I solved it with the code below, but I think it is cluttered and not simple.
This is my code:
``````
$data = array();
while($dt = $this->db->fetchAssoc($res_weight)){
    $data[] = array_values($dt);
}
$datas = json_encode($data);
$datas = str_replace("[","",$datas);
$datas = str_replace("]","",$datas);
$datas = '['.$datas.']';
$datas = json_decode($datas);
``````
How can I make this code simpler than it is now?
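For what it's worth, the JSON round-trip with string replaces can be avoided entirely by flattening inside the loop; in PHP that would be something like `$data = array_merge($data, array_values($dt));`. Here is the same flattening idea sketched in Python for illustration (the row data is a stand-in, not the asker's database):

```python
# Flatten a list of rows into one flat list in a single pass,
# instead of serializing to JSON and stripping brackets with string replaces.
rows = [[1, 2], [3, 4], [5]]  # stand-in for the fetched DB rows

flat = [value for row in rows for value in row]
print(flat)  # [1, 2, 3, 4, 5]
```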
## reactjs – React Native NativeBase Looping Tabs issue
I am using Expo React Native and the NativeBase library.
While trying to loop through an array to output Tabs (each user has their own number of tabs):
`````` <Tabs>
{sectionsTabs.map(section => <Tab
style={{backgroundColor: '#0e0e0e'}}
>
<Text>{section_id}</Text>
</Tab>)}
</Tabs>
``````
SectionsTabs is an array.
Below is the error message I am getting:
## python – I tried to make a noob version of Jarvis and I have an issue looping it
## copy paste – Looping through dropdown list with Google script?
What I need to do is loop through a list in A1 of a sheet, copy the results, say from B2 to C13, and paste that information into a Google Doc. I found something that I'm working from, but it's not correct, and I'm unsure of where to go from here.
``````
function loop(){
  for(var i = 0; i < optionsArray.length; i++){
    optionsArray.getRange('A1').setValue(wsWithData[i][0]);
    Utilities.sleep(1000);
  }
  if(wsWithData.getRange('').getValue() == "Yes"){
    output.push([data[i][0]]);
  }
  wsWithData.getRange(2,12,output.length, 1).setValues(output);
}
``````
## unity – Looping world for a top down space shooter?
I am trying to understand how to make a seamless looping world for my top down space shooter.
First thing I tried was using multiple cameras, and teleporting the player when he reaches the edge, but it is not working well, and I don’t like that I have to use 4 cameras.
I wanted to test making the objects in the world teleport around the player, but I will have problems with the fact that each object is a different size, and teleporting them around will for sure create problems with moving objects. (Big obstacles could be teleported away and not collide with, for example, bullets until the bullet gets teleported too but inside the big obstacle)
Are there other ways to accomplish this?
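One widely used alternative to extra cameras or teleporting whole objects is to treat the world as a torus: every position is wrapped modulo the world size, and distance checks use the shortest wrapped offset, so physics (bullet-vs-obstacle checks included) sees the true gap across the seam. A minimal sketch of the math only (plain Python, not the Unity API; names and world size are illustrative):

```python
WORLD_W, WORLD_H = 100.0, 100.0  # illustrative world dimensions

def wrap(x, y):
    """Wrap a position onto the torus [0, WORLD_W) x [0, WORLD_H)."""
    return x % WORLD_W, y % WORLD_H

def shortest_offset(a, b, size):
    """Signed shortest distance from a to b along one wrapped axis.
    Collision and steering code uses this instead of b - a, so objects
    near opposite edges are still seen as close neighbours."""
    d = (b - a) % size
    if d > size / 2:
        d -= size
    return d
```

Rendering then draws each object up to four times near the edges so it appears on both sides at once, while the simulation itself never teleports anything.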
## vue.js – How to create infinite looping by 3 slides at once in BootstrapVue’s Carousel?
The code looks as following:
``````<template lang="pug">
b-carousel.d-none.d-sm-block(
id='categoryRoulette'
controls
no-animation
:interval='0'
@sliding-start="onSlideStart"
@sliding-end="onSlideEnd"
)
b-carousel-slide(
v-for="category in chunkedCatalog"
)
template(v-slot:img)
b-card-group(deck)
b-card(
v-for="(item, index) in category" :key="index"
:img-src="item.image ? item.image : '../assets/images/blank.png'"
img-alt='Image'
img-top
tag='article'
)
b-card-text.d-flex.justify-content-center.align-items-center
h5
a(href="#") {{ item.title }}
</template>
<script lang="ts">
import Vue from 'vue'
import testData from '../data/testData.json'
import { RouletteData } from '../types/roulette'
export default Vue.extend({
data: (): RouletteData => ({
slide: 0,
sliding: false,
catalog: [],
}),
computed: {
chunkedCatalog() {
const chunkedArray = []
for (let i = 0; i < this.catalog.length; i += 3) {
chunkedArray.push(this.catalog.slice(i, i + 3))
}
return chunkedArray
},
},
mounted(): void {
this.catalog = testData.catalog
},
methods: {
onSlideStart(slide: any) {
this.sliding = true
console.log('slide =', slide)
},
onSlideEnd(slide: any) {
this.sliding = false
console.log('slide =', slide)
},
},
})
</script>
<style lang="sass">
#categoryRoulette
margin-bottom: 40px
margin-top: 40px
.carousel-control-prev-icon
background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='16' height='16' fill='%232E86C1' class='bi bi-chevron-left' viewBox='0 0 16 16'%3E%3Cpath fill-rule='evenodd' d='M11.354 1.646a.5.5 0 0 1 0 .708L5.707 8l5.647 5.646a.5.5 0 0 1-.708.708l-6-6a.5.5 0 0 1 0-.708l6-6a.5.5 0 0 1 .708 0z'/%3E%3C/svg%3E") !important
height: 100px !important
margin-left: -300px
width: 100px !important
.carousel-control-next-icon
background-image: url("data:image/svg+xml,%3Csvg xmlns='http://www.w3.org/2000/svg' width='16' height='16' fill='%232E86C1' class='bi bi-chevron-right' viewBox='0 0 16 16'%3E%3Cpath fill-rule='evenodd' d='M4.646 1.646a.5.5 0 0 1 .708 0l6 6a.5.5 0 0 1 0 .708l-6 6a.5.5 0 0 1-.708-.708L10.293 8 4.646 2.354a.5.5 0 0 1 0-.708z'/%3E%3C/svg%3E") !important
height: 100px !important
margin-right: -300px
width: 100px !important
</style>
``````
The need is to create a carousel that displays 3 slides at once, where clicking next/previous shows the next/previous 3 slides. Pay attention: I don't need to move slide by slide (1 by 1), but 3 slides at a time (3 by 3).
I've tried to use an array and make chunks, but that only works until the end of the array. For example, if I have 5 slides, it displays the first 3 (which is fine) and then only 2 (which is not). Clicking next then shows the first 3 slides again, and so on.
How should it work? It should display the first 3 slides of 5, and on the next click it should again display 3 slides: the last 2 of the 5 plus the first one again. On the next click it should display the slides with indexes 2, 3 and 4, and so on.
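One way to get this wrap-around grouping is modular indexing: keep stepping the chunk start by 3, take each index modulo the array length, and stop when the cycle returns to the start. A sketch of the idea in Python (the same modular-index logic ports directly to the `chunkedCatalog` computed property):

```python
def circular_chunks(items, size):
    """Build chunks of `size`, stepping the start by `size` and wrapping
    indices modulo len(items), until the start returns to 0. Every chunk
    is full, so every carousel click shows a complete group of slides."""
    n = len(items)
    chunks, start = [], 0
    while True:
        chunks.append([items[(start + j) % n] for j in range(size)])
        start = (start + size) % n
        if start == 0:  # cycle closed: the next chunk would repeat the first
            break
    return chunks

print(circular_chunks([1, 2, 3, 4, 5], 3))
# [[1, 2, 3], [4, 5, 1], [2, 3, 4], [5, 1, 2], [3, 4, 5]]
```

With 5 slides this yields 5 distinct groups before repeating; when the count divides evenly (e.g. 6 slides), it degenerates to the plain non-wrapping chunking.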
## hard drive – Clonezilla looping through multiple clones
As I attempt to clone a disk to another disk, Clonezilla goes through it once, then twice, then three times, performing the exact same action over and over. Eventually it finishes, but this is not expected behavior. This is a recently updated version of Clonezilla, and the previous version did not take multiple hours like this one does.
## animation – Export infinite looping GIFs by default
``````
frames=Table[Plot[Sin[x(1+a x)],{x,0,6}],{a,0,2,0.2}];
Export["test.gif",frames]
``````
The above code, run in version 9.0, produces an infinitely looping GIF animation, but in recent versions it only loops once.
I know I can specify the option `"AnimationRepetitions" -> Infinity`. Can this option be made the default when exporting GIFs?
I have tried `SetOptions[Export, "AnimationRepetitions" -> Infinity]`, but it doesn't work.
## python – Cannot assign to function call when looping through and converting excel files
With this code:
``````
xls = pd.ExcelFile('test.xlsx')
sn = xls.sheet_names
for i,snlist in list(zip(range(1,13),sn)):
``````
I get this error:
``````
skiprows=range(6))
^ SyntaxError: cannot assign to function call
``````
`df+str(i)` also returns an error.
I want to make the result as:
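For context, `cannot assign to function call` typically means something like `df(str(i)) = pd.read_excel(...)` was written: Python cannot assign to a call expression, and variable names cannot be built that way. The usual pattern is a dict keyed by the generated name. A sketch with stand-in data (no pandas; `pd.read_excel` would replace the placeholder):

```python
# Collect one result per sheet in a dict, instead of trying to
# assign to a computed variable name like df(str(i)).
sheet_names = ["Jan", "Feb", "Mar"]  # stand-in for xls.sheet_names

dfs = {}
for i, name in enumerate(sheet_names, start=1):
    # real code: dfs[f"df{i}"] = pd.read_excel(xls, sheet_name=name, skiprows=range(6))
    dfs[f"df{i}"] = f"<data from sheet {name}>"

print(dfs["df1"])
```

Each frame is then reachable as `dfs["df1"]`, `dfs["df2"]`, and so on, without any dynamic variable names.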
## game loop – Are you supposed to be looping through all PhysicsObjects at every step in a physics engine?
I am currently making a small 2D game and I am trying to implement some basic 2D physics. I currently have a list of around 100 PhysicsObjects which I loop through every frame in order to update and apply forces to that object based on user input. The basic code is as follows:
``````accumulator = 0
dt = 0.01
while running:
for each object in PhysicsObjects:
update object
accumulator += time between last frame and current frame
while accumulator >= dt:
for each object in PhysicsObjects:
resolve forces to get new position of object based on dt
accumulator -= dt
for each object in PhysicsObjects:
render the object
``````
I was going to resolve the forces and calculate the new position in the update method of each object, but I was told the dt was necessary in order to make the movement and physics independent of frame rate, which is why it is in a separate loop. I also can’t update the objects in the physics engine loop as the update method needs to occur every frame.
This already seems incredibly inefficient, and I haven’t even started to test for collisions or anything else more in the physics engine. Is there a better way to structure this where I don’t need to loop through every object again in order to calculate their new position?
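For what it's worth, the structure above is essentially the standard fixed-timestep ("fix your timestep") loop, and iterating all objects per step is normal; engines reduce work with a broad phase (spatial hash, quadtree) for collision pairs rather than by skipping the integration loop. A runnable sketch of the accumulator logic itself, with hypothetical frame times:

```python
DT = 0.01  # fixed physics step, in seconds

def run(frame_times, objects):
    """Fixed-timestep loop: physics integrates zero or more times per
    frame at exactly DT, so movement is independent of frame rate."""
    accumulator = 0.0
    physics_steps = 0
    for frame_dt in frame_times:           # one iteration per rendered frame
        accumulator += frame_dt
        while accumulator >= DT:           # catch physics up to real time
            for obj in objects:
                obj["x"] += obj["vx"] * DT # integrate at the fixed step
            accumulator -= DT
            physics_steps += 1
    return physics_steps

objs = [{"x": 0.0, "vx": 1.0}]
steps = run([0.016, 0.016, 0.005], objs)   # three frames of varying length
print(steps, objs[0]["x"])
```

Note how the short third frame triggers no physics step at all: its time stays in the accumulator for the next frame, which is exactly the frame-rate independence the separate inner loop buys.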
http://s-j.github.io/scikit-learn-in-production/

# On using Scikit-Learn in production
## July 27, 2017
In the previous posts I have mentioned using Scikit-Learn, gRPC, Mesos and Prometheus. In the following, I would like to tell how all these components can be used to build a classification service and my experience with running it in a relatively large production system. For practical reasons I omit most of the actual code, and instead describe the important parts of the server script and give a reference to the external documentation when necessary.
## THE PROBLEM
As a part of our daily operation at Cxense we crawl millions of Web-pages and extract their content including named entities, keywords, annotations, etc. As a part of this process we automatically detect language, page type, sentiment, main topics, etc. Skipping the details, in the following we implement yet another text classifier using Scikit-Learn.
As most of our system is implemented in Java, also including the crawler, we implement this classifier as a micro-service. For some documents therefore, the crawler will call our service, providing page title, url, text, language code and some additional information and in return retrieve a list of class names and their approximate probabilities. We further use an absolute time limit of 100 ms (end-to-end) for the classification task.
## THE SOLUTION
For classification itself we use a simple two-stage pipeline, consisting of a TfidfVectorizer and a OneVsRestClassifier using LinearSVC. A separate model is trained for each of the several supported languages, serialized and distributed on the deployment. In order to communicate with our service, we use gRPC, where we define the protocol in the proto3 format and compiled it for both Java (the client) and Python (the server):
Next we implement a simple servicer, which invokes the classifier for a given language with the remaining request fields and return the classification results (class names and scores) wrapped in a response object:
To measure classification latency and the number of exceptions we further add a number of Prometheus metrics and annotate the classification method:
To log the classification requests and results we add a queue to the servicer and write serialized JSON objects to it on request. We also implement a scheduled thread that drains the queue and writes the strings to disk:
The reason for doing this is that we want to avoid waiting for disk I/O on the classification requests. In fact this trick dramatically improves the observed latency on the machines with heavy I/O load and bursty requests.
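This trick can be sketched with the standard library alone: the request path only enqueues a serialized record, and the scheduled thread drains the queue and performs one batched write. An illustrative sketch, not the service's actual code:

```python
import json
import queue

log_queue = queue.Queue()

def log_request(record):
    """Called on the hot path: enqueue only, never touch the disk."""
    log_queue.put(json.dumps(record))

def drain(write):
    """Called from a scheduled background thread: batch-write everything
    currently queued, so slow I/O never blocks a classification request."""
    batch = []
    while True:
        try:
            batch.append(log_queue.get_nowait())
        except queue.Empty:
            break
    if batch:
        write("\n".join(batch) + "\n")
```

In the real service, `drain` would be invoked every few seconds with `write` appending to the current log file on disk.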
Further we customize the HTTP server used by Prometheus client to return metrics on /metrics path (used for metric collection), “status”:”OK” on /status (used for health checks) and 404 otherwise:
Now, we implement the server itself as a thread taking two ports as arguments. The http port is used for health checks and metric collection and the grpc port is used for classification requests. For the http port we will use the number supplied by Aurora (see below) and for the gprc port we use port 0 to get whatever is available. To know which ports were allocated we write both to a JSON file:
Threading logic herein allows us quite simple unit test usage, for example:
Further, we provide Aurora configuration that consists of the following tasks:
1. Fetch code and data into the sandbox.
2. Activate virtual environment and start the server using thermos.ports[http]
3. Wait for service.json to be written and register clf-health and clf-grpc in the consulate. Use clf-health for httpcheck.
4. On shutdown, deregister the service in consulate.
5. Otherwise, delete request logs that are more than 12 hours old.
Note that here we use four instances per data center, each requiring slightly more than 1 CPU, 4 GB RAM and 3 GB disk. We also restrict our job to have no more than one instance per host.
From here we can start, stop, update and scale our jobs using Aurora’s client commands. Beyond what is mentioned we implement classification service client in Java and embed it into the crawler. Cxense codebase includes code for automatic resolve of clf-grpc to a list of healthy servers, and even scheduling up to 3 retries with 20 ms between and final time-out at 100ms. Here we also use Prometheus to monitor client latency, number of failed requests, etc. Moreover, we configure metric export from both clients and service, set up a number of service alerts (on inactivity, too long latency or high error rate) and Grafana dashboards (one for each DC).
### THE EXPERIENCE
Initially I was quite skeptical about using Python/Scikit-Learn in production. My suspicions “were confirmed” by a few obstacles:
1. The gRPC threads above are bound to one CPU and it is really hard to do anything about that in Python. However, this is not a big deal as we can scale by instances instead of cores. In fact, it is better.
2. Occasionally tasks get assigned to “slow” nodes, which makes the 90+ percentile latency orders of magnitude higher. After some investigation with colleagues, we found that this may happen on I/O-overloaded nodes. The delayed logging demonstrated above gave us a dramatic improvement here, so it wasn’t that much of an issue anymore. Otherwise, we could add a supervisor to restart unlucky jobs.
3. gRPC address lookup makes client-observed latency significantly worse than calling the classification port itself. However, our codebase implements a short-term address cache, and for cached addresses the latency increase is not a big deal. The problem we saw initially was that with a large number of crawlers and a relatively small fraction of classification requests, the chance of a cold cache is quite high. With an increasing number of requests, however, this chance goes down and the latency goes down as well.
4. We have observed that for bursty traffic the latency jitter can be quite high, and the first few requests after a pause are likely to be out of time. For the client, I assume it is because of the cost of loading models back into memory and CPU caches. For the server, I assume it is because of the closed connections and cold address caches. The funny part here is that this issue lessens with an increased number of requests. In fact, we have seen latency (both for the client and the server) go down after doubling the number of requests by adding support for a new language, without increasing the total resource budget (CPU, RAM, disk).
So in total, the experience in prod was quite positive. Apart from the points mentioned above, there were no problems or accidents, and I have not seen any server-side exceptions. The only time I had to find out why the classifier was suddenly inactive was when AWS S3 went AWOL and broke the Internet.
On the final note, here is a dashboard illustrating the performance on production traffic in one of our datacenters (the legend was removed from some of the charts).
https://chat.stackexchange.com/transcript/71?m=48315806

12:03 AM
@DavidZ alrighty then
apparently _x is just a convention and doesn't change anything the interpreter does except if you use something like from xxx import *
but if you do try to import all functions from a module
functions that are prefixed by a single _ are not imported
who decided that? who knows
__ invokes name mangling
__something__ are for magic methods but....if the something is not one of the magic methods...then it does nothing apparently...
lol
Well __something__ is a name reserved for the implementation
what do you mean?
I think you can define your own __whatthefruitmethodnameisthis__() method without problems
Sure, but maybe in the future Python might start using __whatthefruitmethodnameisthis__() to mean something
indeed
then you just don't upgrade python anymore
As I understand it, the "promise" is basically that any methods which are treated specially by the runtime environment will use that naming convention __something__(). So you can freely write your own methods whose names don't follow that pattern, and be confident that they'll never conflict with special methods used by the Python system in the future.
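For the record, both conventions discussed above can be demonstrated in a few lines: a single leading underscore is purely advisory (apart from being skipped by `from module import *`), while a double leading underscore is mangled to `_ClassName__attr` when the class body is compiled:

```python
class Foo:
    def __init__(self):
        self._hint = "convention only"   # still accessible as foo._hint
        self.__secret = "mangled"        # actually stored as foo._Foo__secret

foo = Foo()
print(foo._hint)                 # works fine: the underscore changes nothing here
print(foo._Foo__secret)          # the mangled name the interpreter really used
print(hasattr(foo, "__secret"))  # False: the unmangled name does not exist
```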
Some of us like upgrades :P
12:11 AM
XD
@DavidZ ahh I wasn't aware of the name mangling. I suppose I made up my own convention then
fun times...
welp, time to head home lol
laters
see you later
12:59 AM
@danielunderwood But there are a bunch a issues there.
uh oh
The [][] array will be contiguous (which is generally both good for vectorizing and allows good cache behavior).
The pointer structure might have all the data in one big contiguous block, but the programmer has to make it so.
But that is probably not what you are noticing.
The big issue probably related to the possibility of aliasing.
There are a lot of optimizations that the compiler can not make if the three blocks of memory have any possibility of overlapping.
I'm installing visual studio now
fun beans
I think you need the restrict keyword. Maybe.
Frankly this is not something I have looked into for some time.
There is also a question of loop ordering. If the compiler doesn't fix it for you, then your loops are not in the optimal order and will be smashing the cache more often than necessary.
I think you would be better with for (i...) for (k...) for (j...) ....
By aliasing, do you mean you have a = b * c and there's a chance a, b, and c occupy the same memory to some extent?
1:05 AM
Start by taking a few minutes to convince yourself that the two orderings do the same thing. Then think about why one is better for cache performance than the other.
that's too hard man...now I gotta worry about the order of loops??
2
._.
Then--perhaps--forget about it, because that compiler should be able to fix that for you.
@enumaris It used to be a big deal (and the "right" order depends on your language choice, too; I'm looking at you Fortran).
I've always just made the assumption that the compiler is much smarter than I am
But like I said, the compiler should be able to help these days.
I don't think I ever worried about loop ordering when writing in fortran
since I didn't know about it
1:07 AM
And I've never seen restrict used...interesting
The key issue in loop ordering is array storage sequencing. "Row major" and "column major" are the jargon words that mark a discussion of the issue.
I think fortran uses "row major"
@danielunderwood Yes.
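The loop-ordering point can be illustrated with a matrix product: in i-k-j order (for row-major storage, as in C), the innermost loop walks rows of `b` and `c` contiguously instead of striding down a column of `b`. A plain-Python sketch, with lists of lists standing in for contiguous arrays:

```python
def matmul_ikj(a, b):
    """c = a @ b with i-k-j loop order: the inner j-loop scans one row of
    b and one row of c left to right, the cache-friendly access pattern
    for row-major storage."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for k in range(m):
            aik = a[i][k]  # hoisted: constant over the whole j-loop
            for j in range(p):
                c[i][j] += aik * b[k][j]
    return c

print(matmul_ikj([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19.0, 22.0], [43.0, 50.0]]
```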
what is this pch.h file visual studio is including...
ah I forgot to turn off precompiled headers or something...I remember this in the tutorial...
I don't see that option...hmmm...
If you're using VS, you may want to note that there's a distinction between Visual C++ and C++
Or maybe those are just compiler-specific extensions. I remember it being a thing at one point at least
1:18 AM
uhhh
meh, I'll just leave the #include "pch.h" in there
now I just need some small assignments to start my C++ coding
I don't have something I feel like building right now tho...
well nothing that is small enough that I could conceivably build it anyways
1:41 AM
@enumaris Many of the challenges at Programming Puzzles & Code Golf are both small and entertaining. Not that you would be working them for small codes size, as you want to be practicing good style.
hmmm cool :D
user301074
2:01 AM
hi from 2019 :3
hello
Hello from 2018 again!
2:50 AM
Public service announcement: SMBC is funny. Again. smbc-comics.com/comic/werewolf
3:15 AM
1
It's New Year's Day in Stack Exchange land... A distinguishing characteristic of these sites is how they are moderated: We designed the Stack Exchange network engine to be mostly self-regulating, in that we amortize the overall moderation cost of the system across thousands of teeny-tiny sli...
1 hour later…
4:16 AM
@Blue -2 is insane.Yesterday night it was 11 here and I was shivering bloody hell.....But you know I find it very satisfactory to find warmth when very cold rather than coolness when very hot..........Must've been pretty nice trip!
4:54 AM
it's noon now. Last night afterwards I went out to forage, seeing there were still many people on the street at 1 to 2 am though it's raining persistently. And crazily, Eslite Bookstore was full of people, some of them sleeping there. I wonder what's the meaning of sleeping in a public place during new year's eve/wee hours.
5:32 AM
the most gratifying thing I find on Facebook is there are people replying my physics questions posted there.
because I find physics engagers seem never active on Facebook.
6:24 AM
@JohnRennie you there ?
Happy New year to all.
@Nobodyrecognizeable morning :-)
6:52 AM
@ayc -2°C is totally bearable though, with sufficient clothing. It's not as bad as you think. ;)
I hear that in several habitable places in Canada and Russia, temperatures regularly fall down to -20°C and less. They do manage to survive and work normally. Gotta try it someday. And indeed, trying to stay warm in the cold seems waaay better than trying to stay cool in the heat. :P
surviving in cold weather is easier than studying physics in cold weather.
Interestingly, the Himalayan street dogs seem pretty ferocious and wild. They've developed a thick layer of fur and almost look like wolves. I guess that's their natural adaptation to the cold. Didn't spot any cat there, however.
and I find the main reason of feeling cold in cold weather in washing body (even just hands or feet) with cold water.
7:12 AM
Heehee, the term I was looking for is Himalayan Sheepdog.
> This breed may require obedience training in order to domesticate them. Training this breed may be difficult due to its independent and stubborn nature.
7:34 AM
Sounds like physics students to me ...
3
In other news, we may or may not have our first ever close up pictures of a Kuiper belt object - time will tell!
@JohnRennie That's not far from the truth. :D
@JohnRennie Woohoo! I have no idea of the latest news. Lemme see.
It passed 2 hours ago, but it takes a while to get the data back
> After a 13-year journey, the piano-sized spacecraft has covered a distance of four billion miles to reach Ultima Thule in the Kuiper Belt — a donut-shaped region of ancient, rocky bodies beyond the orbit of Neptune.
> The spacecraft will not be in contact with Earth during close approach but is programmed to send a signal home on the morning of Jan. 1 to indicate its health and whether it recorded all the expected data. The mission team expects the data to be returned over the next 20 months, with an additional year of data analysis and archiving.
Aha. That's some long time. I suppose we'll at least get some images in a few days?
I wonder how long it takes to send and receive (from four billion miles away) and process the images.
Apparently the data rate is about 1000 baud. About the same as the first modem I ever owned :-)
11
I'd like to understand how does New Horizons space craft send its data back to Earth, billions miles away from it. I read in a Time article: "Also, at the distance of Pluto, we can only send data back at a rate that’s comparable with an old 1990s modem. Because of that, during the encounter, we...
@JohnRennie Hehe. That's definitely not bad.....for something which is four billion miles away. :)
7:47 AM
@Blue yes, imagine a modem with a 4 billion mile long cable :-)
Four hours transmission delay according to the top answer, so we'll know around 10:00 UTC if everything worked.
And first pictures by the end of today.
6 hours later…
1:35 PM
,,, mmmmmm3m /4///t
mmmmm hv hy \
191
This is a common scenario when typing: When the family assembled for Sunday dinner, With their minds made up that they wouldn't get thinner On Argentine joint, potato^DR&FTGYB`kuhadrggoy867rt98wouth4bfgdhjlkhdsfghhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhf This happens beca...
2 years old son ^^
2:18 PM
pomegranate vinegar tastes so nice
2:36 PM
@JohnRennie hi, and good morning. I have got simple question . What else in the universe has no effect on gravity ,Want know is there something existing on the universe and has no effect on the gravity (other than potons)
@kartikc.p "potons" is a dangerous typo
On first thought, I'd say massless particles
Hm, that can't be right, since they do have relativistic mass
2:58 PM
@kartikc.p hi Kartik. Everything that has energy creates a gravitational field. So photons do create a gravitational field even though they are massless. It is even (theoretically) possible to create a black hole from photons. This is called a kugelblitz.
In theoretical physics, a kugelblitz (German: "ball lightning") is a concentration of heat, light or radiation so intense that its energy forms an event horizon and becomes self-trapped: according to general relativity and the equivalence of mass and energy, if enough radiation is aimed into a region, the concentration of energy can warp spacetime enough for the region to become a black hole (although this would be a black hole whose original mass-energy had been in the form of radiant energy rather than matter). In simpler terms, a kugelblitz is a black hole formed from radiation as opposed to...
3:11 PM
Happy new year to everyone from the 1% of my brain that's not under assault from a tiny gnome with a very large hammer.
3:41 PM
@ACuriousMind Wonder if we both are being assaulted by the same gnome. :P
@Blue Mine says his name is Al.
Hmm, makes sense --- AI's hammering my left brain too. Can't think straight no more. So 2019 is the takeover year? I guess.
The only things that are going to be hammering my brain this year are the tall infographics¹
[1]: https://xkcd.com/1273/
@Blue Oh, damn non-serif font. That's a small L in my message, not an I
3:53 PM
@ACuriousMind Is that gnome Paul Simon?
4:21 PM
Is it just a giant coincidence that we have quantization procedures?
What do you mean?
Like being able to do canonical or path integral quantization. Like we should have a quantum to classical limit, but there's no reason that we could have a classical to quantum procedure is there?
I agree with the general sentiment.
4:39 PM
@ACuriousMind,@JohnRennie. how can we define energy in brief terms. I can't simply visualize it i mean how it works and how can i imagine it in my brain
@kartikc.p You can't visualize energy because it is just an abstract thing. See physics.stackexchange.com/q/3014/50583, physics.stackexchange.com/q/138972/50583
Also, please just ask any questions to the room in general - there's no need to ping specific people with it unless you think they'd be especially interested in it.
@rob Why did you lock physics.stackexchange.com/q/3014/50583? When did we decide this question is off-topic?
5:28 PM
@kartikc.p the trouble is that the work energy is used in lots of different ways to mean lots of different things. So when you're asking what energy is that's a meaningless question.
5:56 PM
It worked!
New Horizons did photograph Ultima Thule!
6:11 PM
@JohnRennie yes this is actually amazing...
user301074
6:31 PM
In Quantum mechanics an black-hole event horizon's Area is allowed to decrease over time (and thus violate Area's theorem) because the null-energy condition is not (in general) valid in QM?
user301074
Good morning
Do long conversations in comments get retroactively moved to chat? I just saw this physics.stackexchange.com/a/120039/24839
@danielunderwood AFAIR, an objection with canonical quantization is that one can construct a mapping from the classical Hamiltonian to the quantum Hamiltonian by upgrading dynamical variables to operators (i.e., removing certain commutative structures), implying that somehow quantum mechanics can be derived from classical mechanics
6:48 PM
@danielunderwood They get moved to chat whenever a mod chances upon them and thinks they'd belong better in chat
But in your case I'd see not much benefit in preserving the conversation and rather delete it wholesale, leaving only 1 or 2 comments to alert future visitors that the content of the answer is...controversial
@GodotMisogi but is it just a coincidence that we can do that? Unless there's something deeper underlying both CM and QM, should there really be a classical to quantum mapping? I think that may be the idea behind geometric quantization, but I'm not familiar with that (or if it's even widely accepted)
@ACuriousMind yeah I didn't really understand why that one was accepted
7:04 PM
@danielunderwood I would think it's a matter of interpretation. Just because building blocks are building blocks doesn't necessarily mean their larger structures may not have similar representations to their quanta
I also just thought that the underlying interpretation is that quantum mechanics is considered to be more general because it explains certain experimental results that CM can't, and is associated with its "free-er" structure (in the sense of free groups, free algebras, etc.) with its non-commutative structure
I personally don't see a problem with "generalisations" in such an interpretation, like having second-order changes in a path integral describing QM effects and realising that we're working in low-energy limits or whatever with renormalisation, as long as the new mathematical theory conforms to experiment. But that's probably because I'm a noob at this, and don't really know what else to think
Is this question unclear or does it require some prerequisite knowledge I'm not aware of?
https://physics.stackexchange.com/questions/451577/a-question-on-interpretation-of-transitions-from-initial-to-final-states
7:32 PM
@ACuriousMind I have no recollection of my state of mind when I locked that question. Maybe I thought it was too broad for our current standards, but then why lock instead of close? I plead insanity. March-me was having a rough time.
@rob Sooo...you don't object if I just unlock it again? :)
@ACuriousMind Not at all. A superficial reading makes me think it might stay closed as too broad, but I'm fine being overruled there as well.
@danielunderwood I think you should think of quantization more like this: It is not "reversing the limit", it is answering the question "What is an example of a quantum system that has this classical system as its classical limit?"
Note that e.g. ordering ambiguities mean that canonical quantization is not 1-to-1, i.e. indeed picks "examples" rather than a unique quantization
@rob I've unlocked it for now; if anyone wants to VTC it the normal way they can but I don't think any of us should close it unilaterally
@ACuriousMind By ordering ambiguities you mean examples such as normal ordering v.s. Weyl ordering?
8:22 PM
@GodotMisogi yes
9:10 PM
@JohnRennie Now the only problem is that I am impatient. What kind of excuse is a 12 hour ping time and a kilobaud link for having to wait for my pretty pictures, anyway?
Hmmm ... twelve hour pings even leaves realizations of RFCs 1149, 2549, and 6214 in the dust, doesn't it? ::chuckles::
Though I don't imagine that they are actually using IP.
https://math.stackexchange.com/questions/367100/cant-understand-a-simple-divisibility-probelm | # Can't understand a simple divisibility problem
I am reading this book. In Example 1.1 they ask to prove this problem.
problem
Let $x$ and $y$ be integers. Prove that $2x + 3y$ is divisible by $17$ iff $9x + 5y$ is divisible by $17$
the solution they provided is
$$17 \mid (2x + 3y) \implies 17 \mid [13(2x + 3y)]$$ or $$17 \mid (26x + 39y) \implies 17 \mid (9x + 5y)$$ and conversely,
$$17 \mid (9x + 5y) \implies 17 \mid [4(9x + 5y)]$$ or $$17 \mid (36x + 20y) \implies 17 \mid (2x + 3y)$$
I can't understand how they concluded this
$$17 \mid (26x + 39y) \implies 17 \mid (9x + 5y)$$ implication and this
$$17 \mid (36x + 20y) \implies 17 \mid (2x + 3y)$$
the only rule I know is
if $\;a|b\;$ then $\;a|bk$.
where $a$, $b$ and $k$ are integers. We can't deduce the above two implications using only this rule, can we? Is there some other fact needed to see that the two implications are true?
• Split the 26x into 17x + 9x. Do the same for 39y = 34y + 5y. – Scott H. Apr 20 '13 at 5:18
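The step the book skips is an identity: $26x + 39y = 17(x + 2y) + (9x + 5y)$, so $17$ divides $26x + 39y$ exactly when it divides $9x + 5y$; likewise $36x + 20y = 17(2x + y) + (2x + 3y)$ for the converse. As a quick sanity check (a small added sketch, not from the original thread), a brute-force search confirms the equivalence:

```python
# Check that 17 | (2x + 3y) holds exactly when 17 | (9x + 5y),
# over a grid of integer values of x and y.
for x in range(-50, 51):
    for y in range(-50, 51):
        left = (2 * x + 3 * y) % 17 == 0
        right = (9 * x + 5 * y) % 17 == 0
        assert left == right
print("equivalence verified on the test grid")
```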
https://merchantsofbollywood.com.au/uydg0/rx3im50.php?tag=2e892c-specific-weight-to-specific-volume

# specific weight to specific volume
Specific volume is defined as the ratio of a fluid's volume to its mass; it is the reciprocal of density and is an intrinsic property of matter:

$ρ = \frac{m}{V} = \frac{1}{ν}$

Because mass does not depend on gravity, a liter of water has the same mass on the Moon as on Earth, and the specific volume of the original tank is the same as the specific volume in each half.

Specific weight is defined as the weight of a unit volume of the material. In fluid mechanics, specific weight represents the force exerted by gravity on a unit volume of a fluid; for this reason, units are expressed as force per unit volume (e.g., N/m³ or lb/ft³). It can be used as a characteristic property of a fluid: recall that weight is the mass multiplied by the gravitational constant g, so the specific weight is the density multiplied by g. Specific gravity, by contrast, is the dimensionless comparison of a fluid's density to the density of water:

Specific Gravity = Weight of substance / Weight of an equal volume of water

For reference, the average density of human blood is 1060 kg/m³.

Worked example: an unknown fluid in a 1 liter beaker has a mass of 3 kg. Its density is

$ρ = \frac{3~kg}{0.001~m^3} = 3000 \frac{kg}{m^3}$

its specific volume is

$ν = \frac{1}{3000\frac{kg}{m^3}} = 3.33 \times 10^{-4} \frac{m^3}{kg}$

and its specific weight is γ = ρg = 3000 kg/m³ × 9.81 m/s² ≈ 29 430 N/m³.
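The beaker example can be checked with a few lines of code; the following is a minimal sketch (added for illustration), assuming standard gravity g = 9.81 m/s²:

```python
# Density, specific volume, and specific weight for the beaker example:
# an unknown fluid of mass 3 kg in a 1 liter (0.001 m^3) beaker.
g = 9.81            # m/s^2, standard gravity (assumed)
mass = 3.0          # kg
volume = 0.001      # m^3

rho = mass / volume     # density, kg/m^3          (≈ 3000)
nu = 1.0 / rho          # specific volume, m^3/kg  (≈ 3.33e-4)
gamma = rho * g         # specific weight, N/m^3   (≈ 29430)
print(rho, nu, gamma)
```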
By | December 1st, 2020 | Uncategorized | 0 Comments
https://socratic.org/questions/580711c47c01493e9794a68b | # Question #4a68b
Oct 19, 2016
$x = \frac{15}{4}$
#### Explanation:
Note that as we have $4 - x$ under a radical, we must have $x \le 4$ to avoid taking the root of a negative number.
$4 + \sqrt{10 - x} = 6 + \sqrt{4 - x}$
$\implies \sqrt{10 - x} = 2 + \sqrt{4 - x}$
$\implies {\left(\sqrt{10 - x}\right)}^{2} = {\left(2 + \sqrt{4 - x}\right)}^{2}$
$\implies 10 - x = {2}^{2} + 2 \left(2\right) \sqrt{4 - x} + {\left(\sqrt{4 - x}\right)}^{2}$
$\implies 10 - x = 4 + 4 \sqrt{4 - x} + 4 - x$
$\implies 2 = 4 \sqrt{4 - x}$
$\implies \sqrt{4 - x} = \frac{1}{2}$
$\implies {\left(\sqrt{4 - x}\right)}^{2} = {\left(\frac{1}{2}\right)}^{2}$
$\implies 4 - x = \frac{1}{4}$
$\implies x = 4 - \frac{1}{4}$
$\therefore x = \frac{15}{4}$
Checking our result:
$4 + \sqrt{10 - \frac{15}{4}} = 4 + \sqrt{\frac{25}{4}}$
$= 4 + \frac{5}{2}$
$= \frac{13}{2}$
$= 6 + \frac{1}{2}$
$= 6 + \sqrt{\frac{1}{4}}$
$= 6 + \sqrt{4 - \frac{15}{4}}$
as desired
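A short numerical check of the result (an added sketch, not part of the original answer):

```python
import math

# Verify that x = 15/4 satisfies 4 + sqrt(10 - x) = 6 + sqrt(4 - x).
x = 15 / 4
lhs = 4 + math.sqrt(10 - x)   # 4 + sqrt(25/4) = 4 + 5/2
rhs = 6 + math.sqrt(4 - x)    # 6 + sqrt(1/4)  = 6 + 1/2
print(lhs, rhs)  # both equal 6.5
```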
http://www.turkmath.org/beta/seminer.php?id_seminer=1975 | Middle East Technical University General Seminars
Holomorphic Extension of Mappings between Hypersurfaces
Özcan Yazıcı
METU, Turkey
Abstract : Let $M\subset \mathbb C^N, M'\subset \mathbb C^{N'}$ be real analytic hypersurfaces and $F$ be a holomorphic mapping on one side of $M$, continuous up to $M$, with $F(M)\subset M'$. When $N=N'$, assuming that $M$ and $M'$ have some non-degeneracy properties, it is well known that any such mapping $F$ extends holomorphically to the other side of the hypersurface $M$. When $N=N'=1$, this result is known as the Schwarz Reflection Principle. In the case $N'>N$, very little is known about the holomorphic extension of such mappings. This extension problem is also related to the holomorphic extension of meromorphic mappings of hypersurfaces. In this talk, we will review some well known results and mention some recent results about these problems.
Date : 21.02.2019 Time : 15:40 Place : Gündüz İkeda Seminar Room Language : English
https://scriptinghelpers.org/questions/77546/why-does-the-other-player-gain-pins-and-i-dont
# Why does the other player gain pins and I don't?
When I attack the other player, the other player gains pins and I don't. I want the other player to gain pins when he's attacking me and I want to gain pins when I'm attacking him. Are there any problems on this script? Thanks! ~ BlackHatRBX
local CanDamagePlayers = true

script.Parent.Blade.Touched:Connect(function(hit)
    if hit.Parent:FindFirstChild("Humanoid") and CanDamagePlayers == true and hit.Parent.Humanoid.Health > 1 then
        local plr = game.Players:GetPlayerFromCharacter(hit.Parent)
        local pins = plr:WaitForChild("leaderstats").Pins
        pins.Value = pins.Value + 15
        hit.Parent.Humanoid:TakeDamage(script.Parent.Damage.Value)
        script.Parent.Sound:Play()
        hit.Parent.Humanoid.PlatformStand = true
        CanDamagePlayers = false
        wait(0.5)
        CanDamagePlayers = true
        wait(1)
        hit.Parent.Humanoid.PlatformStand = false
    end
end)
Well, I think what's going on is since you're holding the tool it is touching you, which means whoever attacks will also get pins. Try checking if the hit.Name ~= tool.Parent.Parent.Name (which is the player who is holding the weapon). starmaq 435 — 14d

In what line? BlackHatRBX 2 — 14d

It is locating the player it's hitting for the variable par, not the local player. aandmprogameing 38 — 13d

I meant variable plr, sorry. aandmprogameing 38 — 13d

In what line in number though? BlackHatRBX 2 — 13d
local CanDamagePlayers = true

script.Parent.Blade.Touched:Connect(function(hit)
    if hit.Parent:FindFirstChild("Humanoid") and CanDamagePlayers == true and hit.Parent.Humanoid.Health > 1 then
        local plr = game.Players:GetPlayerFromCharacter(hit.Parent)
        local pins = plr:WaitForChild("leaderstats").Pins
        pins.Value = pins.Value + 15
        hit.Parent.Humanoid:TakeDamage(script.Parent.Damage.Value)
        script.Parent.Sound:Play()
        hit.Parent.Humanoid.PlatformStand = true
        CanDamagePlayers = false
        wait(0.5)
        CanDamagePlayers = true
        wait(1)
        hit.Parent.Humanoid.PlatformStand = false
    end
end)
"local plr = game.Players:GetPlayerFromCharacter(hit.Parent)" This line right here gets the player for whatever you hit, not yourself. If you want yourself, then you take the parent of the blade (the character holding the tool) and call :GetPlayerFromCharacter on it. Sorry if you already figured this out, but they didn't reply to your comments.
http://openstudy.com/updates/50ec1c6fe4b07cd2b648d292 | itsjustme_lol: hmm (one year ago)
1. itsjustme_lol

Part 1: Use the quadratic formula to solve x^2 + 5x = –2. Part 2: Using complete sentences, explain the process you used. Part 3: Why is the quadratic formula the best method to use?

2. hba

Well, first of all compare your equation (x^2 + 5x + 2 = 0) with $\huge\ ax^2+bx+c=0$ and find a, b and c.

3. hba

Then use the quadratic formula to determine x, which is $\huge\ x=\frac{ -b \pm \sqrt{b^2-4ac} }{ 2a }$

4. hba

Btw, the quadratic formula is not the best method.

5. geerky42

@hba Why isn't the quadratic formula the best method?
https://socratic.org/questions/how-do-you-graph-absolute-value-equations-on-a-coordinate-plane | # How do you graph absolute value equations on a coordinate plane?
Mar 19, 2015
Let's start with a simple one $y = | x + 2 |$
If $x > - 2$, $x + 2$ is positive, so $y = | x + 2 | = x + 2$
If $x < - 2$, $x + 2$ is negative, but will be 'turned around' by the absolute value signs, so in this domain $y = | x + 2 | = - x - 2$
These two semi-graphs meet at $\left(- 2 , 0\right)$
graph{|x+2| [-10.5, 9.5, -1.08, 8.915]}
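The two branches described above can be spot-checked numerically (an added sketch, not from the original answer):

```python
# Piecewise form of y = |x + 2| used in the explanation above.
def abs_branch(x):
    return x + 2 if x > -2 else -x - 2

# The branch formula agrees with the absolute value at every tested point,
# and the two semi-graphs meet at (-2, 0).
for x in [-6.0, -2.0, -0.5, 3.0]:
    assert abs(x + 2) == abs_branch(x)
print(abs_branch(-2.0))  # 0.0
```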
http://mathematica.stackexchange.com/questions/4654/using-memoization-with-a-mutable-object/4656 | # Using Memoization with a Mutable Object
In looking for a solution to this question, I ran across some old binary tree code by Daniel Lichtblau, reproduced below:
Clear[leftsubtree, rightsubtree, nodevalue, emptyTree, treeInsert]
leftsubtree[{left_, _, _}] := left
rightsubtree[{_, _, right_}] := right
nodevalue[{_, val_, _}] := val
emptyTree = {};
treeInsert[emptyTree, elem_] := {emptyTree, elem, emptyTree}
treeInsert[tree_, elem_] /; SameQ[nodevalue[tree], elem] := tree
treeInsert[tree_, elem_] /; OrderedQ[{nodevalue[tree], elem}] :=
{leftsubtree[tree],
nodevalue[tree], treeInsert[rightsubtree[tree], elem]}
treeInsert[tree_, elem_] := {treeInsert[leftsubtree[tree], elem],
nodevalue[tree], rightsubtree[tree]}
When mapped onto a list, treeInsert gives you a sorted, duplicate-free list. For example,
tr = {};
Scan[(tr = treeInsert[tr, #]) &, RandomInteger[100, 5]];
Flatten@tr
(* {13, 28, 53, 59, 88} *)
On my machine, this takes ~2 s to process RandomInteger[10, 10^5], but this increases to nearly 20 s with RandomInteger[10, 10^6]. There are likely other techniques to speed this up, but I am curious as to how memoization could be adapted to this problem. At issue, though, is that the tree changes with each insertion, and so it cannot be used directly for memoization because the definition depends directly on that form. How would one do this?
Edit: as I discovered in my own testing, Fold works much better than Scan for creating a tree, as follows
tr = Fold[treeInsert, {}, RandomInteger[100, 5]];
Flatten@tr
(* {13, 28, 53, 59, 88} *)
Update: while my question, per se, was not answered directly, the answers themselves indicated that there were better ways to accomplish what I wanted. In the end, I chose the one I did for two reasons: speed and simplicity.
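To make the underlying idea concrete outside Mathematica, here is a rough Python analogue (my addition, not from the thread): the tree is an immutable tuple, so the result of an insertion can be cached keyed on the (tree, element) pair, which is essentially what the answers below do with `treeInsert[...] = ...` definitions:

```python
from functools import lru_cache

# Immutable binary search tree: () is empty, (left, value, right) otherwise.
# Because tuples are hashable, repeated insertions hit the cache.
@lru_cache(maxsize=None)
def tree_insert(tree, elem):
    if not tree:
        return ((), elem, ())
    left, val, right = tree
    if elem == val:
        return tree                                  # duplicate: unchanged
    if elem > val:
        return (left, val, tree_insert(right, elem))
    return (tree_insert(left, elem), val, right)

def flatten(tree):
    """In-order traversal, giving a sorted, duplicate-free list."""
    if not tree:
        return []
    left, val, right = tree
    return flatten(left) + [val] + flatten(right)

tree = ()
for x in [13, 88, 28, 59, 53, 28]:   # note the duplicate 28
    tree = tree_insert(tree, x)
print(flatten(tree))  # [13, 28, 53, 59, 88]
```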
-
I think the Scan[(tr= part of the post did not make it onto the page. – ruebenko Apr 23 '12 at 14:41
@ruebenko no it didn't. Posted it before I finished the thought, apparently. It's fixed, now. – rcollyer Apr 23 '12 at 14:42
This does not use memoization, but have you seen this answer? It is quite fast (being compiled), and along the same lines as what you discuss. – Leonid Shifrin Apr 23 '12 at 18:16
@LeonidShifrin I did see that answer, and I like it a lot. Unfortunately, $\pm\infty$ is not a real number, so it would have to be dealt with separately. However, I was thinking along those lines when it occurred to me that I did not know of a way to adapt memoization to this particular solution. – rcollyer Apr 23 '12 at 18:22
@rcollyer I see. But if the problem is only about infinities, then, since the resulting list will be sorted, they can only be the first and last elements of the resulting list, if present. What I usually do is to replace them temporarily with Min[list]-1 and Max[list]+1, then use Compile, then replace back. This way, you can keep it almost as fast as without infinities. – Leonid Shifrin Apr 23 '12 at 18:24
For such small trees I would memoize those that already have the element...
ClearAll[leftsubtree, rightsubtree, nodevalue, emptyTree, treeInsert]
leftsubtree[{left_, _, _}] := left
rightsubtree[{_, _, right_}] := right
nodevalue[{_, val_, _}] := val
emptyTree = {};
treeInsert[emptyTree, elem_] := {emptyTree, elem, emptyTree}
(*This is the changed line*)
t : treeInsert[tree_, elem_] /; ! FreeQ[tree, elem] := t = tree
treeInsert[tree_, elem_] /;
OrderedQ[{nodevalue[tree], elem}] := {leftsubtree[tree],
nodevalue[tree], treeInsert[rightsubtree[tree], elem]}
treeInsert[tree_, elem_] := {treeInsert[leftsubtree[tree], elem],
nodevalue[tree], rightsubtree[tree]}
-
This gives a 60% increase in speed over Pillsy's similar solution on my machine. Likely because it is the common pattern, and hence higher on the list. +1 – rcollyer Apr 23 '12 at 16:14

As a side note, for testing they're small trees, but that is not guaranteed to always be the case. Although with Andy's test data, you don't expect the number of unique elements to exceed 1000, so I guess small still applies. – rcollyer Apr 23 '12 at 16:17

With further investigation, this solution seems to win big over mine when you miss the cache a lot. – Pillsy Apr 23 '12 at 16:51

As I noted in the update, I am selecting yours for speed and simplicity. – rcollyer Apr 26 '12 at 2:57

Thanks @rcollyer – Rojo Apr 26 '12 at 5:42
Well, the simplest approach I came up with is to just memoize the result after you generate it.
ClearAll[leftSubTree, rightSubTree, nodeValue, emptyTree, treeInsert];
leftSubTree[{left_, _, _}] := left;
rightSubTree[{_, _, right_}] := right;
nodeValue[{_, val_, _}] := val;
emptyTree = {};
treeInsert[emptyTree, elem_] := {emptyTree, elem, emptyTree};
treeInsert[tree_, elem_] /; SameQ[nodeValue@tree, elem] := tree;
treeInsert[tree_, elem_] /; OrderedQ[{nodeValue[tree], elem}] :=
With[{inserted =
{leftSubTree[tree], nodeValue[tree],
treeInsert[rightSubTree[tree], elem]}},
treeInsert[inserted, elem] = inserted];
treeInsert[tree_, elem_] :=
With[{inserted =
{treeInsert[leftSubTree[tree], elem], nodeValue[tree],
rightSubTree[tree]}},
treeInsert[inserted, elem] = inserted];
The way this works means that if you ever try to insert an element into a tree that already contains it, it will return the memoized result immediately, and it will also return the memoized result if you've ever inserted that element into another tree identical to the one you're using now. This lead to a factor of 10 speedup on my machine with that million element list.
It has the advantage of not only working well more with than one tree: it actually will be faster if those trees share structure!
-
+1, that's an interesting idea I hadn't considered. – rcollyer Apr 23 '12 at 15:19

As an interesting side note, repeated applications on the same data set do not give any additional speed-up. Likely, that is the cost of traversing the tree itself. Also, for Andy's question, though, OrderedQ puts $\pm\infty$ at the end, so to use it with those values, Less is better, but not any faster. :P – rcollyer Apr 23 '12 at 16:07
Here is an approach that inserts only if an element is not yet in the tree:
ClearAll[leftsubtree, rightsubtree, nodevalue, emptyTree, treeInsert, \
inTreeQ]
leftsubtree[{left_, _, _}] := left
rightsubtree[{_, _, right_}] := right
nodevalue[{_, val_, _}] := val
inTreeQ[_] = False;
emptyTree = {};
treeInsert[tree_, elem_] /; inTreeQ[elem] := tree
treeInsert[emptyTree,
elem_] := (inTreeQ[elem] = True; {emptyTree, elem, emptyTree})
treeInsert[tree_, elem_] /; SameQ[nodevalue[tree], elem] := tree
treeInsert[tree_, elem_] /; ! inTreeQ[elem] &&
OrderedQ[{nodevalue[tree], elem}] := {leftsubtree[tree],
nodevalue[tree], treeInsert[rightsubtree[tree], elem]}
treeInsert[tree_, elem_] /; ! inTreeQ[elem] := {treeInsert[
leftsubtree[tree], elem], nodevalue[tree], rightsubtree[tree]}
tr = {};
AbsoluteTiming[
Scan[(tr = treeInsert[tr, #]) &, RandomInteger[10, 10^6]];]
Flatten@tr
This works well when there are a few different elements like in RandomInteger[10,...] Have a look and see if this works for you.
Edit: I made another improvement by moving the most common case treeInsert[tree_, elem_] /; inTreeQ[elem] := tree (that the element is in the tree) up the chain
-
It definitely speeds things up (by a very large margin), but what if I need two, or more, trees? – rcollyer Apr 23 '12 at 15:07

hm, how about using an index for each tree, like in inTreeQ[1,_]=False you'd then have to give an index to each tree ) or you could use an Unique["tr"] symbol. – ruebenko Apr 23 '12 at 15:10

The index idea is a good one. How about you make a custom "object": Tree[index, treedata]. Then, have a form of that only accepts an element, treeInsert[elem_], which will "create" the Tree and give it the unique index when used. Then unique trees can be had by all ... :) – rcollyer Apr 23 '12 at 15:17
https://planetmath.org/ClassicalStokesTheorem | classical Stokes’ theorem
Let $M$ be a compact, oriented two-dimensional differentiable manifold (surface) with boundary in $\mathbb{R}^{3}$, and $\mathbf{F}$ be a $C^{2}$-smooth vector field defined on an open set in $\mathbb{R}^{3}$ containing $M$. Then
$\iint_{M}(\nabla\times\mathbf{F})\cdot d\mathbf{A}=\int_{\partial M}\mathbf{F}\cdot d\mathbf{s}\,.$
Here, the boundary of $M$, $\partial M$ (which is a curve) is given the induced orientation from $M$. The symbol $\nabla\times\mathbf{F}$ denotes the curl of $\mathbf{F}$. The symbol $d\mathbf{s}$ denotes the line element $ds$ with a direction parallel to the unit tangent vector $\mathbf{t}$ to $\partial M$, while $d\mathbf{A}$ denotes the area element $dA$ of the surface $M$ with a direction parallel to the unit outward normal $\mathbf{n}$ to $M$. In precise terms:
$d\mathbf{A}=\mathbf{n}\,dA\,,\quad d\mathbf{s}=\mathbf{t}\,ds\,.$
The classical Stokes’ theorem reduces to Green’s theorem on the plane if the surface $M$ is taken to lie in the xy-plane.
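For example, take $\mathbf{F}=(-y,x,0)$ and let $M$ be the closed unit disk in the $xy$-plane with upward unit normal $\mathbf{n}=(0,0,1)$. Then $\nabla\times\mathbf{F}=(0,0,2)$, so

$\iint_{M}(\nabla\times\mathbf{F})\cdot d\mathbf{A}=2\,\operatorname{area}(M)=2\pi\,,$

while, parametrizing $\partial M$ by $(\cos t,\sin t,0)$ for $0\le t\le 2\pi$,

$\int_{\partial M}\mathbf{F}\cdot d\mathbf{s}=\oint_{\partial M}(-y\,dx+x\,dy)=\int_{0}^{2\pi}(\sin^{2}t+\cos^{2}t)\,dt=2\pi\,,$

in agreement with the theorem.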
The classical Stokes’ theorem, and the other “Stokes’ type” theorems are special cases of the general Stokes’ theorem involving differential forms. In fact, in the proof we present below, we appeal to the general Stokes’ theorem.
(To be written.)
Proof using differential forms
The proof becomes a triviality once we express $(\nabla\times\mathbf{F})\cdot d\mathbf{A}$ and $\mathbf{F}\cdot d\mathbf{s}$ in terms of differential forms.
Proof.
Define the differential forms $\eta$ and $\omega$ by
$$\eta_{p}(\mathbf{u},\mathbf{v}) = \langle\operatorname{curl}\mathbf{F}(p),\,\mathbf{u}\times\mathbf{v}\rangle\,,\qquad \omega_{p}(\mathbf{v}) = \langle\mathbf{F}(p),\,\mathbf{v}\rangle\,,$$
for points $p\in\mathbb{R}^{3}$, and tangent vectors $\mathbf{u},\mathbf{v}\in\mathbb{R}^{3}$. The symbol $\langle,\rangle$ denotes the dot product in $\mathbb{R}^{3}$. Clearly, the functions $\eta_{p}$ and $\omega_{p}$ are linear and alternating in $\mathbf{u}$ and $\mathbf{v}$.
We claim
$$\eta = (\nabla\times\mathbf{F})\cdot d\mathbf{A} \quad\text{on } M. \tag{1}$$
$$\omega = \mathbf{F}\cdot d\mathbf{s} \quad\text{on } \partial M. \tag{2}$$
To prove (1), it suffices to check it holds true when we evaluate the left- and right-hand sides on an orthonormal basis $\mathbf{u},\mathbf{v}$ for the tangent space of $M$ corresponding to the orientation of $M$, given by the unit outward normal $\mathbf{n}$. We calculate
$$\begin{aligned}
(\nabla\times\mathbf{F})\cdot d\mathbf{A}(\mathbf{u},\mathbf{v})
&= \langle\operatorname{curl}\mathbf{F},\mathbf{n}\rangle\, dA(\mathbf{u},\mathbf{v}) && \text{definition of } d\mathbf{A}=\mathbf{n}\,dA \\
&= \langle\operatorname{curl}\mathbf{F},\mathbf{n}\rangle && \text{definition of the volume form } dA \\
&= \langle\operatorname{curl}\mathbf{F},\mathbf{u}\times\mathbf{v}\rangle && \text{since } \mathbf{u}\times\mathbf{v}=\mathbf{n} \\
&= \eta(\mathbf{u},\mathbf{v})\,.
\end{aligned}$$
For equation (2), similarly, we only have to check that it holds when both sides are evaluated at $\mathbf{v}=\mathbf{t}$, the unit tangent vector of $\partial M$ with the induced orientation of $\partial M$. We calculate again,
$$\begin{aligned}
\mathbf{F}\cdot d\mathbf{s}(\mathbf{t})
&= \langle\mathbf{F},\mathbf{t}\rangle\, ds(\mathbf{t}) && \text{definition of } d\mathbf{s}=\mathbf{t}\,ds \\
&= \langle\mathbf{F},\mathbf{t}\rangle && \text{definition of the volume form } ds \\
&= \omega(\mathbf{t})\,.
\end{aligned}$$
Furthermore, $d\omega = \eta$. (This can be checked by a calculation in Cartesian coordinates, but in fact this equation is one of the coordinate-free definitions of the curl.)
The classical Stokes’ Theorem now follows from the general Stokes’ Theorem,
$\int_{M}\eta=\int_{M}d\omega=\int_{\partial M}\omega\,.\qed$
References
• 1 Michael Spivak. Calculus on Manifolds. Perseus Books, 1998.
Title: classical Stokes' theorem | Canonical name: ClassicalStokesTheorem | Date of creation: 2013-03-22 15:27:52 | Author: stevecheng (10074) | Type of object: Theorem | MSC classification: 26B20 | Related topics: GeneralStokesTheorem, GaussGreenTheorem, GreensTheorem
https://www.tutorialspoint.com/how-to-hide-the-controlling-corners-of-an-ellipse-using-fabricjs | # How to hide the controlling corners of an Ellipse using FabricJS?
FabricJS · JavaScript · HTML5 Canvas
In this tutorial, we are going to learn how to hide the controlling corners of an Ellipse using FabricJS. Ellipse is one of the various shapes provided by FabricJS. In order to create an ellipse, we have to create an instance of fabric.Ellipse class and add it to the canvas. The controlling corners of an object allow us to increase or decrease its scale, stretch or change its position. We can customize our controlling corners in many ways such as adding a specific color to it, changing its size etc. However, we can also hide them using the hasControls property.
## Syntax
new fabric.Ellipse({ hasControls: Boolean }: Object)
## Parameters
• options (optional) − This parameter is an Object which provides additional customizations to our ellipse. Using this parameter, the color, cursor, stroke width and many other properties of the object can be changed; hasControls is one such property.
## Options Keys
• hasControls − This property accepts a Boolean value that allows us to display or hide the controlling corners of an actively selected object. Its default value is True.
## Example 1
Default appearance of controlling corners
Let's see an example that shows the default appearance of the controlling corners. Since the default value of the hasControls property is "true", the controlling corners will not be hidden.
<!DOCTYPE html>
<html>
<head>
<!-- Adding the Fabric JS Library-->
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/510/fabric.min.js"></script>
</head>
<body>
<h2>How to hide the controlling corners of an Ellipse using FabricJS?</h2>
<p>Select the object to see its controlling corners.</p>
<canvas id="canvas"></canvas>
<script>
// Initiate a canvas instance
var canvas = new fabric.Canvas("canvas");
// Initiate an ellipse instance
var ellipse = new fabric.Ellipse({
left: 100,
top: 100,
fill: "white",
rx: 100,
ry: 60,
stroke: "#c154c1",
strokeWidth: 5,
});
// Adding it to the canvas
canvas.add(ellipse);
canvas.setWidth(document.body.scrollWidth);
canvas.setHeight(250);
</script>
</body>
</html>
## Example 2
Passing hasControls as key and assigning a "false" value to it
In this example, we will see how the controlling corners are hidden by using the hasControls property. We need to assign the hasControls key a "false" value. By doing that, the controlling corners will be hidden.
<!DOCTYPE html>
<html>
<head>
<!-- Adding the Fabric JS Library-->
<script src="https://cdnjs.cloudflare.com/ajax/libs/fabric.js/510/fabric.min.js"></script>
</head>
<body>
<h2>How to hide the controlling corners of an Ellipse using FabricJS?</h2>
<p>Select the object and here you won't be able to see the controlling corners as we have set the <b>hasControls</b> property to False. </p>
<canvas id="canvas"></canvas>
<script>
// Initiate a canvas instance
var canvas = new fabric.Canvas("canvas");
// Initiate an ellipse instance
var ellipse = new fabric.Ellipse({
left: 100,
top: 100,
fill: "white",
rx: 100,
ry: 60,
stroke: "#c154c1",
strokeWidth: 5,
hasControls: false,
});
// Adding it to the canvas
canvas.add(ellipse);
canvas.setWidth(document.body.scrollWidth);
canvas.setHeight(250);
</script>
</body>
</html>
Updated on 24-May-2022 12:36:36
https://eng.libretexts.org/Bookshelves/Computer_Science/Operating_Systems/Book%3A_Think_OS_-_A_Brief_Introduction_to_Operating_Systems_(Downey)/07%3A_Caching/7.06%3A_The_memory_hierarchy | # 7.6: The memory hierarchy
At some point during this chapter, a question like the following might have occurred to you: “If caches are so much faster than main memory, why not make a really big cache and forget about memory?”
Without going too far into computer architecture, there are two reasons: electronics and economics. Caches are fast because they are small and close to the CPU, which minimizes delays due to capacitance and signal propagation. If you make a cache big, it will be slower.
Also, caches take up space on the processor chip, and bigger chips are more expensive. Main memory is usually dynamic random-access memory (DRAM), which uses only one transistor and one capacitor per bit, so it is possible to pack more memory into the same amount of space. But this way of implementing memory is slower than the way caches are implemented.
Also main memory is usually packaged in a dual in-line memory module (DIMM) that includes 16 or more chips. Several small chips are cheaper than one big one.
The trade-off between speed, size, and cost is the fundamental reason for caching. If there were one memory technology that was fast, big, and cheap, we wouldn’t need anything else.
The same principle applies to storage as well as memory. Solid state drives (SSD) are fast, but they are more expensive than hard drives (HDD), so they tend to be smaller. Tape drives are even slower than hard drives, but they can store large amounts of data relatively cheaply.
The following table shows typical access times, sizes, and costs for each of these technologies.
Table 7.6.1: Memory access times, sizes, and costs.

| Device | Access time | Typical size | Cost |
| --- | --- | --- | --- |
| Register | 0.5 ns | 256 B | ? |
| Cache | 1 ns | 2 MiB | ? |
| DRAM | 100 ns | 4 GiB | $10 / GiB |
| SSD | 10 µs | 100 GiB | $1 / GiB |
| HDD | 5 ms | 500 GiB | $0.25 / GiB |
| Tape | minutes | 1–2 TiB | $0.02 / GiB |
The number and size of registers depends on details of the architecture. Current computers have about 32 general-purpose registers, each storing one “word”. On a 32-bit computer, a word is 32 bits or 4 B. On a 64-bit computer, a word is 64 bits or 8 B. So the total size of the register file is 100–300 B.
The cost of registers and caches is hard to quantify. They contribute to the cost of the chips they are on, but consumers don’t see that cost directly.
For the other numbers in the table, I looked at the specifications for typical hardware for sale from online computer hardware stores. By the time you read this, these numbers will be obsolete, but they give you an idea of what the performance and cost gaps looked like at one point in time.
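To get a feel for the gaps, here is a small sketch (using the table's approximate numbers, which are era-specific) that expresses each level's access time as a multiple of a register access:

```python
# Approximate access times from the table above, in nanoseconds.
access_ns = {
    "register": 0.5,
    "cache": 1.0,
    "DRAM": 100.0,
    "SSD": 10_000.0,      # 10 microseconds
    "HDD": 5_000_000.0,   # 5 milliseconds
}

# Slowdown of each level relative to a register access.
slowdown = {dev: ns / access_ns["register"] for dev, ns in access_ns.items()}

for dev, factor in slowdown.items():
    print(f"{dev:10s} {factor:>14,.0f}x")
```

The ratios, not the absolute numbers, are what make caching worthwhile: each step down the hierarchy costs orders of magnitude in latency.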
These technologies make up the "memory hierarchy" (note that this use of "memory" also includes storage). Each level of the hierarchy is bigger and slower than the one above it. And in some sense, each level acts as a cache for the one below it. You can think of main memory as a cache for programs and data that are stored permanently on SSDs and HDDs. And if you are working with very large datasets stored on tape, you could use hard drives to cache one subset of the data at a time.
https://codereview.stackexchange.com/questions/225105/simple-dynamic-tree-in-python | # Simple dynamic tree in Python
I tried to make a dynamic tree in Python. It is simple and I have gotten it to work so far. I created a function in main that generates some example data for the tree, and the main goal was to create a member function that uses recursion to print all the data (display_data()). The only issues I have with this are the recursion depth problem and the speed: the recursion-and-loop pattern adds quite a bit to the overall run time. Also note this is a side project; I need to understand dynamic trees for a tic-tac-toe AI I am attempting (and failing) to write.
tree.py
class Tree():
"""Implement a dynamic tree
1
/ | \
/ | \
2 3 4
/ \ / | \
5 6 7 8 9
/ / | \
10 11 12 13
"""
def __init__(self):
self.children = []
self.data = []
def create_children(self, amount):
for i in range(amount):
self.children.append(Tree())
def create_data(self, data):
for datum in data:
self.data.append(datum)
def display_data(self):
print(self.data, end=' ')
for child in self.children:
child.display_data()
main.py
from tree import Tree
def example_data(root):
"""
['Child', 1] ['nChild', 1] ['nChild', 2]
['Child', 2] ['nChild', 1] ['nChild', 2]
['Child', 3] ['nChild', 1] ['nChild', 2]
"""
root.create_data(["Root"])
root.create_children(3)
counter = 1
for child in root.children:
child.create_data(["Child", counter])
child.create_children(2)
counter += 1
ncounter = 1
for nchild in child.children:
nchild.create_data(["nChild", ncounter])
ncounter += 1
return root
if __name__ == "__main__":
root = example_data(Tree())
root.display_data()
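Not part of the original post, but since the question mentions the recursion depth problem: the same preorder output can be produced with an explicit stack, which is bounded only by available memory rather than Python's recursion limit.

```python
def display_data_iterative(root):
    """Preorder traversal with an explicit stack: same visiting order as
    the recursive display_data(), but immune to Python's recursion limit."""
    out = []
    stack = [root]
    while stack:
        node = stack.pop()
        out.append(node.data)
        # Push children in reverse so the leftmost child is visited first.
        stack.extend(reversed(node.children))
    return out
```

Calling `print(*display_data_iterative(root))` then produces the data in the same order as the recursive version.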
Specific suggestions:
1. It is idiomatic to wrap the stuff after if __name__ == "__main__": in a main function.
2. Rather than the generic data I would suggest figuring out exactly which information you want to attach to each Tree and creating fields for each of them rather than a fully generic list of stuff. This will make it much less painful to work with actual Trees because you can use for example tree.name or tree.counter instead of tree.data[0] and tree.data[1].
3. You can enumerate a list to loop over it without maintaining a separate index variable, as in for index, child in enumerate(root.children):
In general it'll be much easier to see how to improve this code once it's wired into a production use case rather than example code. The problem with writing code to an example "spec" is that the example inevitably doesn't completely fit the production use case - some crucial features will be missing and others will be superfluous. For example, storing the count of children separately. This information is already encoded in the length of the children list, so you are duplicating the information for no obvious reason. This could conceivably be useful if you're dealing with giant amounts of data, but if your application is sufficiently optimized that this is a real concern you probably should look into other languages or frameworks like numpy or pandas.
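The "fields instead of generic data" idea from point 2 could look like this sketch (dataclass-based; the field names `name` and `counter` are illustrative, not from the original code):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tree:
    name: str
    counter: int = 0
    children: List["Tree"] = field(default_factory=list)

    def display(self) -> None:
        print(self.name, self.counter, end=" ")
        for child in self.children:
            child.display()

root = Tree("Root")
root.children = [Tree("Child", i + 1) for i in range(3)]
```

Now call sites read `tree.name` instead of `tree.data[0]`, and the constructor documents exactly which fields a node carries.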
General suggestions:
1. black can automatically format your code to be more idiomatic.
2. isort can group and sort your imports automatically.
3. flake8 with a strict complexity limit will give you more hints to write idiomatic Python:
[flake8]
max-complexity = 4
ignore = W503,E203
(The max complexity limit is not absolute by any means, but it's worth thinking hard whether you can keep it low whenever validation fails. For example, I'm working with a team on an application since a year now, and our complexity limit is up to 7 in only one place. Conversely, on an ugly old piece of code I wrote without static analysis support I recently found the complexity reaches 87!)
4. I would then recommend adding type hints everywhere and validating them using a strict mypy configuration:
[mypy]
check_untyped_defs = true
disallow_untyped_defs = true
ignore_missing_imports = true
no_implicit_optional = true
warn_redundant_casts = true
warn_return_any = true
warn_unused_ignores = true
https://docs.oramavr.com/en/latest/unreal/manual/constructors/use_collider.html | # Use Collider Prefab Constructor¶
The idea behind this prefab is similar to the one explained in section 5.5. This component should be attached to prefabs that themselves contain a collider which, when triggered by specific actors, notifies the Event Manager of the Action's completion. In addition, different configuration is needed depending on the desired usage of the interactable. The difference here is that the actors for the collision are type-unrestricted: anything can be inserted for the collision to be accepted.
| Variable Name | Type | Description |
| --- | --- | --- |
| Stay Time | float | In case of collider trigger set to simple or use with tool, set the time needed for the collider to register a successful collision. |
| Prefabs Used | List of actors | Insert the actors that the collider will await collision with. |
| Hit Times | int | In case of collider trigger set to hit, set the number of successful collisions needed. |
| Hit Force | float | In case of collider trigger set to hit, set the amount of force needed to register a successful collision. |
| Hit Movement Vector | Vector3 | In case of collider trigger set to hit, the offset applied to the primitive component after each successful hit. |
| Proceed Animation on Collider Hit | boolean | If enabled, each successful collision plays the next animation from the Animation Names list. |
| Proceed Animation on Perform | boolean | If enabled, on the action's perform the next animation from the CharacterAnimationController will be played. |
| Animation Names | List of strings (animation names) | The names of the animation assets played on each successful collision hit. These should be referenced in the CharacterAnimationController component. |
| Character Actor Name | string | The name of the actor that has the CharacterAnimationController component. This is usually the actor that has the patient's skeletal mesh component. |
| Promote Collider Components | boolean | If false, all components under the collider being hit (destroyed) will be destroyed as well. Otherwise, the child components take its place when it is destroyed. |
| Collider Trigger | Simple, UseActionCollider, Hit | Specify the behaviour needed from the user in order to register a successful collision. |
## Prefab Creation Requirements
1. Primitive component
2. Overlap Collider(s)
http://clay6.com/qa/51366/which-of-the-following-equations-is-not-correctly-formulated- | Comment
Share
Q)
# Which of the following equations is not correctly formulated ?
$\begin{array}{ll} (A) & Na_2[B_4O_5(OH)_4]\cdot 8H_2O + 2HCl \rightarrow 2NaCl + 4H_3BO_3 + 5H_2O \\ (B) & 2BN + 6H_2O \rightarrow 2H_3BO_3 + 2NH_3 \\ (C) & H_3BO_3 \xrightarrow[-H_2O]{375\,\text{K}} HBO_2 \text{ (metaboric acid)};\quad 4HBO_2 \xrightarrow[-H_2O]{435\,\text{K}} H_2B_4O_7 \text{ (tetraboric acid)};\quad H_2B_4O_7 \xrightarrow[-H_2O]{\text{red heat}} 2B_2O_3 \text{ (boric oxide)} \\ (D) & H_3BO_3 \text{ is a weak monobasic acid as it liberates hydrogen ions: } H_3BO_3 \rightarrow H^+ + H_2BO_3^- \end{array}$
$H_3BO_3$ is a weak monobasic acid. It does not liberate $H^+$ but accepts $OH^-$, i.e., it is a Lewis acid.
http://www.physicsforums.com/showthread.php?s=4d9a17368aff2d04aaee2562e5b9d2a3&p=4203947 | ## An expression for this figure
Help needed: I need an expression for the figure in the picture. It is like a cone, but the tip is not a point, rather a short 1D segment (it's not a truncated cone). I accept anything ;-) skewed conics, spherical harmonics approximations, cubic surfaces..... any suggestion?
ellipseconic and for the object an ellipsecone (I thought first ellipsone but that's already a word for winds that move in elliptical paths) alternatively you could use ovalconic and ovalcone or lineconic and linecone ** (I didn't know if you were try to name something new or wanted the correct mathematical name. My suggestions are inventions only)
How about elliptical wedge? Or do you need to model it? You take the formula for two ellipses and interpolate between them. Set the width of the upper one to zero. Something like this: $$X(t)=\left(a_1 (1-z/c) + a_2(z/c)\right)\cos(t)$$ $$Y(t)=b(1-z/c)\sin(t)$$
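A quick numerical check of that interpolation (a sketch; the values of a1, a2, b, c are arbitrary): at z = c the y-extent vanishes and the cross-section degenerates to the segment [-a2, a2] on the x-axis, which is exactly the "tip is a short 1D segment" behaviour asked for.

```python
import numpy as np

a1, a2, b, c = 2.0, 0.5, 1.0, 3.0   # base semi-axes, tip half-length, height

def cross_section(z, t):
    """Point on the cross-section curve at height z, parameter t."""
    x = (a1 * (1.0 - z / c) + a2 * (z / c)) * np.cos(t)
    y = b * (1.0 - z / c) * np.sin(t)
    return x, y

t = np.linspace(0.0, 2.0 * np.pi, 5)
x_top, y_top = cross_section(c, t)   # at the tip: y == 0, x in [-a2, a2]
x_base, y_base = cross_section(0.0, t)  # at the base: a full ellipse a1 x b
```

Sweeping z from 0 to c and plotting each cross-section traces out the elliptical wedge.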
a wedgeonic or wedgeone
http://mathoverflow.net/questions/78967/arithmetic-progressions-in-power-sequences?sort=votes | # Arithmetic progressions in power sequences
In connection with this MO post (and without any applications / motivation whatsoever), here is an apparently difficult - but nice - problem.
For a non-zero real number $s$, consider the infinite sequence $$P_s := \{ 1^s, 2^s, 3^s, \ldots \} .$$ What is the number of terms, say $l(s)$, of the longest arithmetic progression contained in this sequence?
For instance, since there exist three-term arithmetic progressions in squares, but no such four-term progressions, we have $l(2)=3$.
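The squares claim is easy to check by brute force (an illustrative search, not part of the original post): enumerate pairs of squares and test whether the third term of the progression is again a square, then check that no progression extends to four terms within the search range.

```python
from itertools import combinations

squares = [n * n for n in range(1, 50)]
square_set = set(squares)

# Three-term APs a < b < c among squares: c = 2b - a must also be a square.
three_term = [(a, b, 2 * b - a)
              for a, b in combinations(squares, 2)
              if 2 * b - a in square_set and 2 * b - a > b]

print(three_term[:3])  # e.g. (1, 25, 49), an AP of squares with common difference 24
```

Consistent with the claim, none of the progressions found extends to a fourth square term.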
It is easy to see that if $s$ is the reciprocal of a positive integer, then $P_s$ contains an infinite arithmetic progression; hence we can write something like $P(1/q)=\infty$. It can be shown that this is actually the only case where $P_s$ contains an infinite progression. (This is certainly non-trivial, but not that difficult either - in fact, in a different form this was once posed as a problem on a Moscow State University math competition).
If now $s$ is the reciprocal of a negative integer, then $P_s$ contains an arithmetic progression of any preassigned length; this is a simple exercise. Are there any other values of $s$ for which $l(s)$ is infinite?
Conjecture. For any real $s\ne 0$ which is not the reciprocal of an integer, the quantity $l(s)$ is finite; that is, there exists an integer $L>1$ (depending on $s$) such that $P_s$ does not contain $L$-term arithmetic progressions.
Three-term progressions are not rare; say, for any integer $1<a<b<c$ with $b>\sqrt{ac}$ there exists $s>0$ such that $\{a^s,b^s,c^s\}$ is an arithmetic progression. However, I don't have any single example of a four-term progression in a power sequence (save for the case where the exponent is a reciprocal of an integer).
Is it true that $l(s)\le 3$ for any $s\ne 0$ which is not the reciprocal of an integer?
Indeed, excepting the cases mentioned above and their immediate modifications, I do not know of any $s$ such that $P_s$ contains two distinct three-term progressions.
Is it true that if $s\ne p/q$ with integer $q\ge 1$ and $p\in\{\pm1,\pm2\}$, then $P_s$ contains at most one three-term arithmetic progression?
As a PS: I was once told that using relatively recent (post-Faltings) results in algebraic number theory, one can determine $l(s)$ for $s$ rational. Can anybody with the appropriate background confirm this?
Regarding $P_s$ containing at most one $3$-term AP, I guess you want to assume $a,b,c$ are coprime (otherwise you could take any AP and multiply it with $k^s$). – François Brunault Oct 24 '11 at 14:17
@Francois: absolutely. – Seva Oct 24 '11 at 14:31
This question is similar to this one: mathoverflow.net/questions/59471/… – Kevin O'Bryant Oct 24 '11 at 17:14
In the case $s=p/q$ with $|p| \geq 3$, there is no $3$-term AP in $P_s$. The proof is by reducing to the case $s=p$, as follows.
Let $A=a^p$, $B=b^p$, $C=c^p$ such that $A^{1/q}+C^{1/q}=2B^{1/q} \quad (*) \quad$ and $(A,B,C)=1$. Let $K=\mathbf{Q}(\zeta_q)$ be the $q$-th cyclotomic field and $L=K(A^{1/q},B^{1/q},C^{1/q})$. Then $L/K$ is a finite abelian extension of exponent dividing $q$ and by Kummer theory, such extensions are in natural bijection with the finite subgroups of $K^{\times}/(K^{\times})^q$. The extension $L/K$ corresponds to the subgroup generated by the classes $\overline{A},\overline{B}, \overline{C}$ of $A,B,C$ in $K^{\times}/(K^{\times})^q$. In view of the following lemma, it suffices to prove $L=K$.
Lemma 1. If $n^p$ is a $q$-th power in $K$, then $n$ is a $q$-th power in $\mathbf{Z}$.
Proof. Assume $n^p=\alpha^q$ with $\alpha \in K$, then taking the norm we get $n^{p(q-1)}=N_{K/\mathbf{Q}}(\alpha)^q$. Since $\alpha$ is an algebraic integer, we get that $n^{p(q-1)}$ is a $q$-th power in $\mathbf{Z}$, and since $p(q-1)$ and $q$ are coprime, we get the result.
Lemma 2. The integer $B$ is relatively prime to $A$ and to $C$.
Proof. By symmetry, it suffices to prove $(A,B)=1$. Let $\ell$ be a prime number dividing $A$ and $B$. Then $\ell^{1/q}$ divides $A^{1/q}$ and $B^{1/q}$ in the ring $\overline{\mathbf{Z}}$ of all algebraic integers. By $(*)$ it follows that $\ell^{1/q} | C^{1/q}$. Thus $C/\ell \in \mathbf{Q} \cap \overline{\mathbf{Z}} = \mathbf{Z}$ which contradicts $(A,B,C)=1$. This proves Lemma 2.
By equation $(*)$, we have $K(B^{1/q}) \subset K(A^{1/q},C^{1/q})$ which reads $\overline{B} \in \langle \overline{A},\overline{C} \rangle$ in $K^{\times}/(K^{\times})^q$. We can thus write $B \equiv A^{\alpha} C^{\gamma} \pmod{(K^{\times})^q}$ for some $\alpha,\gamma \geq 0$. By a reasoning similar to Lemma 1, we deduce that $B/(A^{\alpha} C^{\gamma})$ is a $q$-th power in $\mathbf{Q}$ but since this fraction is in lowest terms (Lemma 2), we get that $B$ is a $q$-th power in $\mathbf{Z}$.
Now let $\sigma$ be an arbitrary element in $\mathrm{Gal}(L/K)$. We have $\sigma(A^{1/q}) = \zeta \cdot A^{1/q}$ and $\sigma(C^{1/q})=\zeta' \cdot C^{1/q}$ for some $q$-th roots of unity $\zeta$ and $\zeta'$. Considering the real parts of both sides of $\sigma(*)$, we see that necessarily $\zeta=\zeta'=1$. This shows that $L=K$ as requested.
Thanks! We thus have $$l(p/q) = \begin{cases} \infty &\text{if}\ |p|=1, \\ 3 &\text{if}\ |p|=2, \\ 2 &\text{if}\ |p|\ge 3. \end{cases}$$ – Seva Oct 25 '11 at 8:35
When s is an integer, you are asking about three-term arithmetic progressions among nth powers. There are none when n > 2! This was a 1952 conjecture of Denes and is now a theorem of Darmon and Merel, part of the wave of Diophantine results that followed in the wake of Wiles's work on modularity. (Indeed, the existence of a 3-term AP among nth powers is tantamount to a solution to a^n - 2 b^n = c^n, which is manifestly in the same ballpark as Fermat's equation.)
Thanks for the reply - but, frankly, I was aware of that paper of Darmon and Merel. Perhaps, I should have mentioned it in my original post, but somehow I feel it is already somewhat longish... What I do not know is whether Darmon-Merel extends onto the rational case. (I was once told it does, but would be happy if someone could confirm this.) Maybe, on this occasion: another fact I have not mentioned is that $l(-s)=l(s)$; and so one can confine to the case $s>0$. – Seva Oct 24 '11 at 15:15
@Seva : I think that one can reduce the rational case to the integral case using some Galois theory of Kummer extensions. The idea is that if $a^{p/q}+c^{p/q}=2 b^{p/q}$ then we get other identities by applying elements of the Galois group. From this it should be possible to prove that $a,b,c$ are $q$-th powers. Well, this is very rough, so one should check the details. – François Brunault Oct 24 '11 at 15:42
https://mathoverflow.net/questions/291590/finding-most-representative-sample-in-pair-statistics

# Finding most Representative Sample in "Pair Statistics"
By "Pair Statistics" I understand statics that are based on values $\varphi:\mathcal{P}\times\mathcal{P}\ni(p,q)\mapsto y\in\mathbb{R}$ that can be observed for every pair $(p,q)$ of individuals of a population $\mathcal{P}$, where $\varphi(p,q)=\varphi(q,p)$
Question:
how can a discrete subset $P\subset\mathcal{P}$ of individuals be determined that represents the best "approximation" of the statistical properties of $\mathcal{P}$?
Background of my question is the idea to interpret point-distribution problems (e.g. on a manifold) as the problem of determining the set of $n$ points, that most closely resembles the distance statistics of the manifold.
The concrete idea would be to determine a set of statistical parameters, say, mean and standard deviation $\left(\mu(p),\sigma(p)\right):=\left(\text{mean}(dist(p,q)),\text{sdev}(dist(p,q))\right)$, for all distances from $p$ to all (other) points $q$ of the manifold.
Now, the same kind of statistical parameters can be calculated for each point $p_\Sigma$ of a sample, where the statistical parameters are however calculated from the distances to the (other) points $q_\Sigma$ of the sample, yielding $\left(\mu(p_\Sigma),\sigma(p_\Sigma)\right)_\Sigma$.
The objective would then be to determine the optimal sample $\mathcal{P}_\Sigma^*\subset\mathcal{P}$ that minimizes some "double norm" of the differences $\|\ \big(\|\left(\mu(p_\Sigma),\sigma(p_\Sigma)\right)_\Sigma\ -\ \left(\mu(p_\Sigma),\sigma(p_\Sigma)\right)\|\big)\ \|$, where the "inner" norm measures the difference between the nominal and actual statistical parameter values of an individual point of the sample, and where the "outer" norm measures the entirety of deviations of the sample point's deviations from the nominal values of the statistical parameters of the individual distance statistics.
The notation $(\cdot,\cdot)_\Sigma$ denotes statistical parameters obtained from distances between sample points.
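As a purely illustrative sketch of the proposed optimization (the toy population, the chord distance, the naive random search, and the Euclidean choice for both the inner and the outer norm are my own assumptions, not part of the question), one can score candidate samples by the "double norm" and keep the best one found:

```python
import math
import random

def stats(p, others, dist):
    # Mean and (population) standard deviation of the distances from p
    d = [dist(p, q) for q in others if q != p]
    mu = sum(d) / len(d)
    sigma = math.sqrt(sum((x - mu) ** 2 for x in d) / len(d))
    return mu, sigma

def objective(sample, targets, dist):
    # Inner norm: Euclidean distance between nominal and sample (mu, sigma);
    # outer norm: Euclidean norm of the per-point deviations
    total = 0.0
    for p in sample:
        mu_s, sigma_s = stats(p, sample, dist)
        mu_t, sigma_t = targets[p]
        total += (mu_s - mu_t) ** 2 + (sigma_s - sigma_t) ** 2
    return math.sqrt(total)

# Toy "population": 40 points on the unit circle, chord-length distance
pop = [2 * math.pi * i / 40 for i in range(40)]
dist = lambda a, b: abs(2 * math.sin((a - b) / 2))
targets = {p: stats(p, pop, dist) for p in pop}

# Crude search: keep the best of many random 8-point samples
random.seed(0)
best = min((random.sample(pop, 8) for _ in range(300)),
           key=lambda s: objective(s, targets, dist))
```

Evenly spread samples score far better than clustered ones under this objective, which matches the intuition that the optimal sample should reproduce the manifold's distance statistics.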
https://plainmath.net/93345/find-the-radius-of-a-circle-with-a-circu

# Find the radius of a circle with a circumference of $45\pi$ centimeters
Sanai Ball 2022-10-09 Answered
Find the radius of a circle with a circumference of $45\pi$ centimeters
beshrewd6g
The formula of a circumference of a circle:
$C=2\pi r$
We have the circumference
Substitute: $45\pi = 2\pi r$, so $r = \frac{45}{2} = 22.5$ centimeters.
http://www.ck12.org/user:Anoka/book/Anoka-Hennepin-Probability-and-Statistics/r56/section/3.4/
# 3.4: Chapter 3 Review
Created by: Heather Haney
The expected value gives us the average result over the long term. We use expected value tables and the simple formula $EV=(\text{Value}_1)(\text{Prob}_1)+(\text{Value}_2)(\text{Prob}_2)+\cdots$ to calculate the expected value. We can put everything together for a full probability analysis of a situation by using our probability calculations and other tools like a tree diagram. Casinos are cognizant of what the expected value is on any of their games and are confident, despite having to occasionally give away some substantial prizes, that their games will make them money in the long run. We cannot ever predict with certainty what is going to happen in a given situation, but we can always run a simulation to approximate what can happen. We will often use a random number generator or a table of random digits to help us run a simulation.
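The formula and the idea of checking it with a simulation can be illustrated in a few lines of Python. The game below is made up for illustration and is not one of the chapter's exercises:

```python
import random

def expected_value(outcomes):
    # outcomes: list of (value, probability) pairs whose probabilities sum to 1
    return sum(value * prob for value, prob in outcomes)

# Hypothetical game: win $10 with probability 0.2, $5 with 0.3, nothing otherwise
game = [(10, 0.2), (5, 0.3), (0, 0.5)]
ev = expected_value(game)              # 10(0.2) + 5(0.3) + 0(0.5) = 3.5

# A simulation approximates the same number over the long run
random.seed(1)
values, weights = zip(*game)
trials = random.choices(values, weights, k=100_000)
approx = sum(trials) / len(trials)     # close to 3.5
```

The simulated average drifts toward the expected value as the number of trials grows, which is exactly the "average result over the long term" idea above.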
### Chapter 3 Review Exercises
1) Ten red marbles and 15 blue marbles are in a bag. A game is played by first paying $5 and then picking two marbles out of the bag without replacement. If both marbles are red, you are paid $10. If both marbles are blue, you are paid $5. If the marbles don't match, you are paid nothing. Analyze this game and determine whether or not it is to your advantage to play.

2) When two dice are rolled, you can get a total of anything between 2 and 12.

a) Use the table of random digits in Appendix A, Part 1 to simulate rolling two dice 36 times. Begin on line 119. Make a chart displaying the different results that you get and how many times you get each result.

b) How close was your simulation to the theoretical probability of what should happen in 36 rolls?

3) A bag contains a $100 bill and two $20 bills. A person plays a game in which a coin is flipped one time. If it is heads, then the player gets to pick two bills out of the bag. If it is tails, the player only gets to pick one bill out of the bag. How much should this game cost to play if it is to be a fair game?

4) Suppose there are 38 kids in your Statistics and Probability class. Devise a system using a random digit table so that the teacher can randomly select 4 students to each do a problem on the board. Use line 137 from the random digit table to carry out your simulation and state the numbers of the four students who are selected.

5) A spinner has three equally sized spaces on it, labeled 1, 2, and 3. A bag contains a $1 bill, a $5 bill, and a $10 bill. A player gets paid the amount they pull out of the bag times the number that they spin. What should this game cost in order to be a fair game?
6) The table below shows the probabilities for how kids get to school in the morning.
| Method | Bus | Walk | Car | Other |
|---|---|---|---|---|
| Probability | 0.31 | 0.14 | 0.39 | ??? |
a) What must the Other category have as a probability?
b) Describe how you would assign digits from a random digit table to set up a simulation for selecting a student to find out how they got to school.
c) Carry out your simulation for a total of 10 students and record your results. Use line 104 from the random digit table.
7) In an archery competition, competitors shoot at a total of 20 targets. The table below shows the probabilities associated with hitting the center of certain numbers of targets. Some shooters are perfect and hit the center of all 20 targets and the poorest shooters still hit the center of 15 targets.
a) What is the most likely number of centers that a shooter will hit?
b) What is the expected number of centers that a shooter will hit?
| # of Centers | 15 | 16 | 17 | 18 | 19 | 20 |
|---|---|---|---|---|---|---|
| Probability | 0.04 | 0.12 | 0.35 | 0.28 | 0.18 | 0.03 |
8) In a game of chance, players pick one card from a well-shuffled deck of 52 cards. If the card is red, they get paid $2. If the card is a spade, they get paid $3. If the card is a face card, they get paid $5, and if the card is an ace, they get paid $10. A player gets paid for all the categories they meet. For example, the King of Spades would be worth $8 because it is a spade and a face card. How much should this game cost in order to be a fair game?
Jun 14, 2011
https://bodheeprep.com/cat-2018-quant-questions/196

# CAT 2018 Quant Questions
Question:
From a rectangle ABCD of area 768 sq cm, a semicircular part with diameter AB and area 72π sq cm is removed. The perimeter of the leftover portion, in cm, is
- 80 + 16π
- 86 + 8π
- 82 + 24π
- 88 + 12π
Area of the semicircle with $AB$ as a diameter $= \frac{1}{2} \times \pi \times \frac{AB^2}{4}$

$\Rightarrow$ $\frac{1}{2} \times \pi \times \frac{AB^2}{4} = 72\pi$

$\Rightarrow$ $AB = 24$ cm
It is also known that the area of the rectangle ABCD = 768 sq. cm
$\Rightarrow$ AB×BC = 768
$\Rightarrow$ BC = 32 cm
Observe that the perimeter of the remaining shape = AD + DC + BC + Arc(AB)
$\Rightarrow$ $32 + 24 + 32 + \pi \times 24/2$
$\Rightarrow$ $88 + 12\pi$
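As a sanity check (my addition, not part of the original solution), the whole computation can be verified numerically:

```python
import math

semicircle_area = 72 * math.pi
# Area = (1/2) * pi * (AB/2)**2, so AB = sqrt(8 * area / pi)
AB = math.sqrt(8 * semicircle_area / math.pi)   # 24.0
BC = 768 / AB                                   # 32.0
# Leftover perimeter = AD + DC + CB + arc(AB)
perimeter = BC + AB + BC + math.pi * AB / 2     # 88 + 12*pi, about 125.7
```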
https://www.yaclass.in/p/english-language-cbse/class-8/poem-2959/the-school-boy-10297/re-24dd8223-c881-449c-a396-6f370897ebe6

### Theory:
"The School Boy" is a poem written by William Blake. It was first published in $$1789$$ in his collection "Songs of Experience". Blake later combined these poems with his "Songs of Innocence" in a book titled "Songs of Innocence and Experience Shewing the Two Contrary States of the Human Soul".
The poem is divided into six stanzas with five lines each. While the original poem contains 30 lines, the prescribed one is short of the final three lines. Hence, the last stanza of the prescribed poem is made up of two lines instead of the actual five lines. You can read the complete poem here.
In the poem, a young boy can be seen expressing his dislike towards going to school. However, it is not the idea of learning that the kid dislikes but rather the limitations of formal education. The downside of a classroom education also becomes the central theme of the poem.
http://math.stackexchange.com/questions/573/varying-definitions-of-cohomology

# Varying definitions of cohomology
So I know that given a chain complex we can define the $n$-th cohomology by taking $\ker d_n/\operatorname{im} d_{n+1}$. But I don't know how this corresponds to the idea of holes in topological spaces (maybe this is homology, I'm a tad confused).
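To make the "holes" idea concrete, here is a small illustration I am adding (not part of the original question): computing the Betti numbers of a hollow triangle (a combinatorial circle) from the ranks of its boundary maps. Here $b_0$ counts connected components and $b_1$ counts one-dimensional holes; for a complex concentrated in degrees 0 and 1, homology and cohomology over the rationals have the same ranks.

```python
from fractions import Fraction

def rank(rows):
    # Matrix rank via Gaussian elimination over the rationals
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# Hollow triangle: vertices a, b, c; edges ab, bc, ca; no 2-cells, so d2 = 0.
# d1 sends an edge to (head - tail); rows are vertices, columns are edges.
d1 = [[-1,  0,  1],    # row a: edges ab, bc, ca
      [ 1, -1,  0],    # row b
      [ 0,  1, -1]]    # row c

b0 = 3 - rank(d1)        # (number of vertices) - rank(d1): components
b1 = (3 - rank(d1)) - 0  # (number of edges) - rank(d1) - rank(d2): holes
```

The result is $b_0 = 1$ (one connected component) and $b_1 = 1$ (one hole), exactly what you expect for a circle.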
One can compute (co)homology of different complexes. In particular, for any topological space one can define it's singular complex (see Eric's answer for an idea how it's done) which in some sense indeed counts holes. But the idea of (co)homology is more general. – Grigory M Jul 24 '10 at 15:28
I couldn't really do better than Eric's answer, and like Grigory says, cohomology is more general. So instead I want to mention a case where cohomology doesn't do this: Sheaf Cohomology of Algebraic Groups classifies dominant Vector Bundles. This is called the Borel-Weil-Bott Theorem and has some nice ramifications for Representation Theory and Algebraic Geometry. – BBischof Jul 24 '10 at 19:48
http://www.heldermann.de/JLT/JLT23/JLT233/jlt23037.htm | Journal Home Page Cumulative Index List of all Volumes Complete Contentsof this Volume Previous Article Journal of Lie Theory 23 (2013), No. 3, 779--794Copyright Heldermann Verlag 2013 Schrödinger Equation on Homogeneous Trees Alaa Jamal Eddine MAPMO, Université d'Orléans, Route de Chartres -- B.P. 6759, 45067 Orléans 2, France [email protected] [Abstract-pdf] \def\T{{\Bbb T}} Let $\T$ be a homogeneous tree and $\cal L$ the Laplace operator on $\T$. We consider the semilinear Schr\"odinger equation associated to $\cal L$ with a power-like nonlinearity $F$ of degree $\gamma$. We first obtain dispersive estimates and Strichartz estimates with no admissibility conditions. We next deduce global well-posedness for small $L^2$ data with no gauge invariance assumption on the nonlinearity $F$. On the other hand if $F$ is gauge invariant, $L^2$ conservation leads to global well-posedness for arbitrary $L^2$ data. Notice that, in contrast with the Euclidean case, these global well-posedness results hold for all finite $\gamma\ge 1$. We finally prove scattering for arbitrary $L^2$ data under the gauge invariance assumption. Keywords: Homogeneous tree, nonlinear Schr\"odinger equation, dispersive estimate, Strichartz estimate, scattering. MSC: 35Q55, 43A90; 22E35, 43A85, 81Q05, 81Q35, 35R02 [ Fulltext-pdf (183 KB)] for subscribers only. 
http://news.investors.com/investing-options/052113-657009-volatility-7-and8211-volatility-summary.htm
# Volatility 7 – Volatility Summary
This is the seventh and final article in a series on volatility. The goal of this series is to clarify the different meanings of the term volatility and to discuss its many possible uses, including describing stock price action, evaluating option prices, choosing option strategies, and forecasting the market. Option traders should strive to gain an accurate understanding of volatility — and its many uses — because volatility affects option prices, trading decisions and risk analysis.
This article summarizes the important points of the previous articles.
The Concept of Volatility
"Volatility" is price change without regard to direction, which can be confusing to traders who think that "up is good and down is bad." With volatility, it is the percentage change that matters, so a 1 percent price rise is equal in volatility terms to a 1 percent price decline.
The volatility of a particular stock's price action is derived from a series of daily closing prices. The daily percentage changes are computed, and then the standard deviation of those percentage changes is calculated. This standard deviation is referred to as the historical volatility of a stock's price. The stated volatility percentage is the annual standard deviation of stock price movement.
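The procedure just described (daily percentage changes, then their standard deviation, then annualization) can be sketched in Python. The factor of 252 trading days per year is a common convention that the article does not state explicitly, and the closing prices below are made up for illustration:

```python
import math
import statistics

def historical_volatility(closes, trading_days_per_year=252):
    # Daily percentage changes from a series of closing prices
    returns = [(b - a) / a for a, b in zip(closes, closes[1:])]
    daily_sd = statistics.stdev(returns)                # sample standard deviation
    return daily_sd * math.sqrt(trading_days_per_year)  # annualized volatility

closes = [100.0, 101.0, 99.5, 100.5, 102.0, 101.0]
vol = historical_volatility(closes)    # roughly 0.21, i.e. about 21% a year
```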
Volatility makes it possible to compare the price fluctuations of the same stock during different time periods, and volatility makes it possible to compare the price fluctuations of two stocks regardless of a difference in price level. Volatility also makes it possible to compare past stock price fluctuations to price fluctuations that are forecast by the options market.
From statistics about normal distributions (bell-shaped curves), approximately 68% of all outcomes occur within one standard deviation of the mean; approximately 95% of all outcomes occur within two standard deviations of the mean; and approximately 99% of outcomes occur within three standard deviations.
Implied Ranges
The stated volatility percentage is the standard deviation of price movement over one year, but price-range probabilities for one year are not useful to short-term traders. So here is a formula that converts the stated volatility — the annual standard deviation — to a period of time chosen by the trader:
Standard Deviation for n days = Stock Price × Volatility × Square Root of Time

where Square Root of Time = √(n days) / √(days per year)
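For a hypothetical example using this formula (again assuming 252 trading days per year, my choice, since the article leaves the day count unspecified), a $100 stock with 25% volatility has roughly an $8.63 one-standard-deviation move over 30 trading days:

```python
import math

def one_sd_move(price, volatility, n_days, days_per_year=252):
    # Dollar size of a one-standard-deviation move over n_days
    return price * volatility * math.sqrt(n_days / days_per_year)

move = one_sd_move(100, 0.25, 30)     # about 8.63 dollars
low, high = 100 - move, 100 + move    # range holding roughly 68% of closes
```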
The statistics of volatility are a good starting point for traders, because the stock price has a 33% chance (approximately) of closing beyond one standard deviation at expiration. Consequently, a stock price change equal to one standard deviation is a realistic target for the stock to reach. A trader still has to get the direction right, and there is still the risk that a particular time period will be one of the two-thirds when the stock price does not close beyond the one-standard-deviation level.
https://math.stackexchange.com/questions/3265964/tail-bound-on-the-maximum-of-i-i-d-geometric-random-variables

# Tail bound on the maximum of i.i.d. geometric random variables
Let $$X_1,\ldots,X_n\sim \mathit{Geo}(p)$$ be independent random variables, and let $$M=\max\{X_1,\ldots,X_n\}$$ denote their maximum.
Given a parameter $$\delta\in(0,1)$$, I'm looking for a bound $$T(n,p,\delta)$$ of the form $$\Pr[M > T(n,p,\delta)]\le \delta.$$
A simple solution can be derived using the union bound. If we demand $(1-p)^T = \Pr [X_i> T] \le \delta/n,$ we have $T=\frac{\ln(\delta/n)}{\ln(1-p)}$, and a union bound over all $X_i$'s shows that this holds.
However, I think that this is a rather loose bound, and some simulations I did seem to agree.
• How can we get a tighter bound (ideally, a closed-form bound that is possible to work with)?
$$M=\max_{1\le i\le n} X_i.$$ This means $M < T$ if and only if $X_i < T$ for all $i$. Hence $$\Pr(M < T)=\prod_{i=1}^{n}\Pr(X_i < T)=\left[1-(1-p)^T\right]^n.$$

You need $$\left[1-(1-p)^T\right]^n=1-\delta \iff 1-(1-\delta)^{1/n}=(1-p)^T \iff T=\dfrac{\log\left(1-(1-\delta)^{1/n}\right)}{\log(1-p)}.$$
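A quick numerical comparison (my addition, not part of the original answer) confirms that the exact threshold is tighter than the union-bound threshold, though only slightly for moderate $n$:

```python
import math

def T_union(n, p, delta):
    # Union-bound threshold: (1-p)**T = delta/n
    return math.log(delta / n) / math.log(1 - p)

def T_exact(n, p, delta):
    # Exact threshold: (1 - (1-p)**T)**n = 1 - delta
    return math.log(1 - (1 - delta) ** (1 / n)) / math.log(1 - p)

n, p, delta = 100, 0.1, 0.05
t_union = T_union(n, p, delta)   # about 72.14
t_exact = T_exact(n, p, delta)   # about 71.90, slightly tighter
```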
https://shelah.logic.at/papers/322b/

# Sh:322b
• Shelah, S., & Usvyatsov, A. Classification over a predicate — the general case II. Preprint.
https://math.stackexchange.com/questions/907360/convergence-in-measure-implies-pointwise-convergence | # Convergence in measure implies pointwise convergence?
In showing that we can replace pointwise convergence with convergence in measure in the Lebesgue Dominated Convergence Theorem, I made the following claim:
1.) $f_n\to f$ in measure $\,\,\Longrightarrow\,\,$ every $f_{n_k}\to f$ in measure $\,\,\Longrightarrow\,\,$ some $f_{n_{k_j}}\to f$ pointwise.
2.) Then since $\{f_n\}$ is a sequence such that every subsequence $\{f_{n_k}\}$ has a further subsubsequence $\{f_{n_{k_j}}\}$ that converges pointwise to $f$, $f_n$ converges pointwise to $f$ as well.
But this seems to prove that convergence in measure implies pointwise convergence, which we know to be false. Consider this example:
1.) Let $\{I_n\}_{n=1}^\infty=\{[0,1], [0,1/2], [1/2,1], [0,1/3], [1/3,2/3], [2/3,1], [0,1/4],\ldots\}$.
2.) Let $f_n(x)=\chi_{I_n}(x)$ for all $x\in[0,1]$. According to my text, $f_n\to 0$ in measure, but there exists no $x \in [0,1]$ such that $f_n(x)\to 0$.
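This "typewriter" counterexample can be probed numerically (a check I am adding, not part of the question): the measure of the set where $f_n \neq 0$ shrinks to zero along the sequence, yet every fixed $x$ lands in at least one interval from every block, so $f_n(x) = 1$ infinitely often.

```python
from fractions import Fraction

# The "typewriter" intervals: for k = 1, 2, ..., the k-th block consists of
# [j/k, (j+1)/k] for j = 0, ..., k-1
def intervals(k_max):
    return [(Fraction(j, k), Fraction(j + 1, k))
            for k in range(1, k_max + 1) for j in range(k)]

seq = intervals(50)
lengths = [b - a for a, b in seq]    # mu({x : f_n(x) != 0}) for each n

# For a fixed x, record the indices n with f_n(x) = 1; every block hits x
x = Fraction(1, 3)
hits = [n for n, (a, b) in enumerate(seq) if a <= x <= b]
```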
QUESTIONS: The only error in the logic of my original proof seems to be assuming that "$f_n\to f$ in measure $\Longrightarrow$ every subsequence $f_{n_k}\to f$ in measure."
1.) Is the flaw in my proof somewhere else?
2.) Does the sequence of functions in the counterexample have some subsequence that does not converge in measure to 0?
3.) If yes, what is it?
4.) If no, can we create a different sequence that converges measure but has some subsequence that does not converge in measure?
• "every subsequence {fnk} has a further subsubsequence {fnkj} that converges pointwise to f, fn converges pointwise to f as well" This is not true. – mathematician Aug 24 '14 at 2:52
• @mathematician That seems true to me - if a sequence $a_i$ doesn't converge to $a$, we can find a subsequence which is always at least $\epsilon$ away from $a$, which is preserved by any subsequence. Perhaps the problem is that you are only given a subsequence which converges almost everywhere, and there are uncountably many such sequences (so the total problem area where we don't converge pointwise need not have measure 0)? (convergence in measure is definitely preserved by taking subsequences) – uncookedfalcon Aug 24 '14 at 3:02
• @mathematician is right. Take functions from $[0,1]$ to $[0,1]$ such that $f_n(x) = 1$ if $x \in [0, 1/n]$ and $0$ else. Then $f_n$ converges to $f=0$ in measure, but $f_n(0)=1$ for all $n$. – Michael Aug 24 '14 at 3:11
• Perhaps you were thinking along the lines of "If random variables $X_n$ converge to $X$ in probability, there is a subsequence that converges with probability 1." – Michael Aug 24 '14 at 3:12
• math.stackexchange.com/questions/173590/… – Laars Helenius Aug 24 '14 at 3:15
I see what you intended now by that web link. Suppose $f_n$ converges to $f$ in measure, and there is a $g$ such that $|f_n|\leq g$ for all $n$, and $\int g < \infty$.
I think you mean this: Your original sequence is $\{\int f_n\}_{n=1}^{\infty}$. Consider an infinite subsequence of this with indices in $\mathcal{N}$. We want to show that there exists a convergent subsequence $\int f_{n_k}$, with $n_k \in \mathcal{N}$ for all $k$ (and where $n_k < n_{k+1}$ for all $k$) that satisfies: $$\int f_{n_k} \rightarrow \int f$$ If this is true, then by that web link, we also know that $\int f_n \rightarrow \int f$.
The good thing is that if a sequence of functions converges to $f$ in measure, then there is indeed a subsequence $f_{n_k}$ that converges to $f$ pointwise almost everywhere. So then we can invoke the usual Lebesgue theorem to ensure $\int f_{n_k} \rightarrow \int f$.
I was indeed confused by your original use of "converging pointwise" when it should really be "converging pointwise almost everywhere," as another person commented.
### Proving that claim

Suppose $h_n$ converges to $h$ in measure. Then there is a subsequence $h_{n_k}$ that converges to $h$ pointwise almost everywhere.
I think this can be proven in the same way as the probability fact that if random variables $X_n$ converge to $X$ in probability, a subsequence converges with prob 1. The main step is to define a subsequence of functions $h_{n_k}$ such that for each $k$: $$\mu(\{x \mbox{ such that } |h_{n_k}(x)-h(x)|>1/k\}) \leq 1/k^2$$
The issue is that you are not taking into consideration that convergence in measure guarantees a subsequence which converges pointwise almost everywhere to $f$ - not everywhere pointwise convergence. Everywhere pointwise convergence is necessary to guarantee that $f_n \to f$ pointwise if every subsequence has a further subsequence which converges to $f$ pointwise.
With that being said, this is an argument that may be used to show that convergence in measure (with a dominating function) gives rise to the dominated convergence theorem. The difference is that convergence in $L^1$ acts on equivalence classes of almost everywhere equal functions.
1. There is nothing wrong with the statement if $f_n\to 0$ in measure, then so does every subsequence $f_{n_k}$. It is wrong, however, to state that if a subsequence $\{f_{n_k}\}$ converges pointwise, then so must $\{f_n\}$.
2. No.
3. (and 4) See above.
http://openstudy.com/updates/4d935c9f41658b0b0950a262

## anonymous 5 years ago A "little" integral problem, see the url for details: http://1337.is/~gaulzi/tex2png/view.php?png=201103301610252648.png
Plug in the parametrization $\int\limits_{0}^{2\pi}$, then take the derivative of $r_1$ with respect to the variable $t$; that gives your $dx$ and $dy$. Turn the $x$ and $y$ in the top equation into $\cos(t)$ and $\sin(t)$, dot that with the $dx$ and $dy$, and do a single line integral from $0$ to $2\pi$. That's the easiest way I can explain it: Green's theorem.
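Since the linked problem image is no longer available, here is a generic numerical sketch of that recipe (the field $P = -y$, $Q = x$ on the unit circle is an invented example, not the original problem): parametrize $x = \cos t$, $y = \sin t$, form $dx$ and $dy$ from derivatives with respect to $t$, and integrate from $0$ to $2\pi$.

```python
import math

def line_integral(P, Q, N=200_000):
    """Approximate the line integral of P dx + Q dy around the unit circle,
    parametrized as x = cos(t), y = sin(t), t in [0, 2*pi]."""
    total, dt = 0.0, 2 * math.pi / N
    for k in range(N):
        t = k * dt
        x, y = math.cos(t), math.sin(t)
        dxdt, dydt = -math.sin(t), math.cos(t)  # derivatives of the parametrization
        total += (P(x, y) * dxdt + Q(x, y) * dydt) * dt
    return total

# Example field P = -y, Q = x.  Green's theorem says the result equals the
# double integral of dQ/dx - dP/dy = 2 over the unit disk, i.e. 2*pi.
I = line_integral(lambda x, y: -y, lambda x, y: x)
```

For this field the integrand reduces to $\sin^2 t + \cos^2 t = 1$, so the sum converges to $2\pi$.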
https://eddiema.ca/2011/10/

# Ed's Big Plans
## Partial Derivatives for Residuals of the Gaussian Function
I needed to get the partial derivatives for the residuals of the Gaussian Function this week. This is needed for a curve fit I’ll use later. I completely forgot about Maxima, which can do this automatically — so I did it by hand (Maxima is like Maple, but it’s free). I’ve included my work in this post for future reference. If you want a quick refresh on calculus or a step-by-step for this particular function, enjoy :D. The math below is rendered with MathJax.
The Gaussian Function is given by …
$$f(x) = ae^{-\frac{(x-b)^2}{2c^2}}$$
• a, b, c are the curve parameters with respect to which we differentiate the residual function
• e is Euler’s number
Given a set of coordinates I’d like to fit $(x_i, y_i)$, $i \in [1, m]$, the residuals are given by …
$$r_i = y_i - ae^{-\frac{(x_i-b)^2}{2c^2}}$$
We want to get …
$$\frac{\partial{r}}{\partial{a}}, \frac{\partial{r}}{\partial{b}}, \frac{\partial{r}}{\partial{c}}$$
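The worked derivatives do not appear in this excerpt, so for reference, carrying out the differentiation gives (my own derivation, easy to re-check in Maxima):

$$\frac{\partial r_i}{\partial a} = -e^{-\frac{(x_i-b)^2}{2c^2}}, \qquad \frac{\partial r_i}{\partial b} = -\frac{a(x_i-b)}{c^2}\,e^{-\frac{(x_i-b)^2}{2c^2}}, \qquad \frac{\partial r_i}{\partial c} = -\frac{a(x_i-b)^2}{c^3}\,e^{-\frac{(x_i-b)^2}{2c^2}}$$

A quick finite-difference sanity check in Python (the sample point is arbitrary):

```python
import math

def r(a, b, c, x, y):
    """Residual of the Gaussian fit at one data point."""
    return y - a * math.exp(-(x - b) ** 2 / (2 * c ** 2))

# Hand-derived partial derivatives of the residual (formulas above)
def dr_da(a, b, c, x):
    return -math.exp(-(x - b) ** 2 / (2 * c ** 2))

def dr_db(a, b, c, x):
    return -a * (x - b) / c ** 2 * math.exp(-(x - b) ** 2 / (2 * c ** 2))

def dr_dc(a, b, c, x):
    return -a * (x - b) ** 2 / c ** 3 * math.exp(-(x - b) ** 2 / (2 * c ** 2))

# Central finite differences at an arbitrary sample point
a, b, c, x, y, h = 2.0, 0.5, 1.3, 1.7, 0.9, 1e-6
fd_da = (r(a + h, b, c, x, y) - r(a - h, b, c, x, y)) / (2 * h)
fd_db = (r(a, b + h, c, x, y) - r(a, b - h, c, x, y)) / (2 * h)
fd_dc = (r(a, b, c + h, x, y) - r(a, b, c - h, x, y)) / (2 * h)
```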
Eddie Ma
October 10th, 2011 at 11:40 am
Posted in Brain, Featured
https://www.physicsforums.com/threads/planck-length-and-quantized-position.757476/

# Planck length and quantized position
1. Jun 10, 2014
### Lit
After reading an article on the Planck length, I began to wonder whether the theoretical limit implies that position is quantized in whole integer multiples of the Planck length.
To demonstrate what my question is asking mathematically I hope you will scrutinize the equations below:
If given two objects $O_1$ and $O_2$ (ignoring uncertainty for the time being) with positions $(x_1, y_1, z_1)$ and $(x_2, y_2, z_2)$, respectively, it seems the equations below would hold true:
$$L_p=\sqrt{\frac{\hbar G}{c^3}}$$
$$\frac{\sqrt{\left(x_2-x_1\right)^2+\left(y_2-y_1\right)^2+\left(z_2-z_1\right)^2}}{L_p}=K$$ where K must be an integer.
If the above equation follows the integer condition, then change in position for an object moving from $(x_1, y_1, z_1)$ to $(x_2, y_2, z_2)$ should also follow the integer rule (one can treat object 1 as the object at its position before the change in position, and object 2 as the object after changing its position). Because of this, the wave function of a particle must take an argument which moves the particle an integer multiple of the Planck length away from its previous position. So:
$$\left\{\frac{\Psi \left(x_2,t_2\right)-\Psi \left(x_1,t_1\right)}{L_P}\right\}\subseteq \mathbb{Z}$$
Please let me know if I have made any mistakes in my understanding of Planck length/logic.
- Lit
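As a side note (not part of the original thread), plugging approximate CODATA values into the formula for $L_p$ above reproduces its familiar numerical value:

```python
import math

# Approximate CODATA values of the physical constants
hbar = 1.0545718e-34   # reduced Planck constant, J*s
G = 6.67430e-11        # Newtonian gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8       # speed of light in vacuum, m/s

L_p = math.sqrt(hbar * G / c**3)
print(f"Planck length ~ {L_p:.3e} m")  # about 1.616e-35 m
```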
2. Jun 10, 2014
### DennisN
3. Jun 10, 2014
### The_Duck
No. The Planck length is simply the length scale at which we expect to need a quantum theory of gravity in order to describe physics properly.
4. Jun 10, 2014
### Lit
http://faculty.washington.edu/smcohen/320/GrainySpace.html
5. Jun 10, 2014
### Lit
In the article it says "You have finally hit rock bottom: a span called the Planck length, the shortest anything can get. According to recent developments in the quest to devise a so-called "theory of everything," space is not an infinitely divisible continuum. It is not smooth but granular, and the Planck length gives the size of its smallest possible grains.", am I misinterpreting this section, or is there a controversy surrounding this claim?
6. Jun 10, 2014
Staff Emeritus
The_Duck is right.
A New York Times article cited in a philosophy class is not a replacement for a physics text.
7. Jun 10, 2014
### bhobba
That is not what recent developments say.
What's going on at the Planck scale is, at the moment, one big mystery. Of course research is ongoing, and hopefully it will eventually be resolved, but as of now we simply do not know.
As Vanadium 50 correctly says, The Duck is spot on.
Thanks
Bill
https://www.shaalaa.com/question-bank-solutions/what-will-be-values-input-b-boolean-expression-digital-electronics-logic-gates_50705

# What will be the values of input A and B for the Boolean expression - Physics
What will be the values of inputs A and B for the Boolean expression $\overline{(A+B)\cdot(A\cdot B)} = 1$?
#### Solution
The truth table for the Boolean expression $\overline{(A+B)\cdot(A\cdot B)}$ is:

| A | B | A+B | A·B | (A+B)·(A·B) | $\overline{(A+B)\cdot(A\cdot B)}$ |
|---|---|-----|-----|-------------|-----------------------------------|
| 1 | 1 | 1   | 1   | 1           | 0 |
| 0 | 0 | 0   | 0   | 0           | 1 |
| 1 | 0 | 1   | 0   | 0           | 1 |
| 0 | 1 | 1   | 0   | 0           | 1 |
As the truth table shows, the output of this Boolean expression is 1 whenever A and B are not both 1, i.e. whenever $A\cdot B = 0$.
The possible input values are therefore $(A, B) = (0, 0)$, $(0, 1)$ or $(1, 0)$.
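The truth table can also be reproduced by brute force; a quick enumeration (illustrative, not part of the original solution):

```python
from itertools import product

def f(A, B):
    """overline((A + B) . (A . B)), with OR = |, AND = &, NOT = 1 - x."""
    return 1 - ((A | B) & (A & B))

# Enumerate all four input pairs, matching the truth table above:
# f is 0 only for A = B = 1, and 1 for the other three input pairs.
table = {(A, B): f(A, B) for A, B in product((0, 1), repeat=2)}
```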
Concept: Digital Electronics and Logic Gates
https://wikimili.com/en/General_ledger

# General ledger
In bookkeeping, a general ledger, also known as a nominal ledger, is a bookkeeping ledger in which accounting data is posted from journals and from subledgers, such as accounts payable, accounts receivable, cash management, fixed assets, purchasing and projects. A ledger account is created for each account in the organization's chart of accounts; accounts are classified into categories, such as income, expense, assets, liabilities and equity, and the collection of all these accounts is known as the general ledger. The general ledger holds financial and non-financial data for an organization. [1] Each account in the general ledger consists of one or more pages. An organization's statement of financial position and its income statement are both derived from the income and expense account categories in the general ledger. [2]
## Terminology
The general ledger contains a page for all accounts in the chart of accounts [3] arranged by account categories. The general ledger is usually divided into at least seven main categories: assets, liabilities, owner's equity, revenue, expenses, gains and losses. [4] The main categories of the general ledger may be further subdivided into subledgers to include additional details of such accounts as cash, accounts receivable, accounts payable, etc.
The extraction of account balances is called a trial balance. The purpose of the trial balance is, at a preliminary stage of the financial statement preparation process, to ensure the equality of the total debits and credits. [5]
## Process
Posting is the process of recording amounts as credits (right side), and amounts as debits (left side), in the pages of the general ledger. Additional columns to the right hold a running activity total (similar to a chequebook). [6]
The general ledger should include the date, description and balance or total amount for each account.
Because each bookkeeping entry debits one account and credits another account in an equal amount, the double-entry bookkeeping system helps ensure that the general ledger is always in balance, thus maintaining the accounting equation:
$$\text{Assets} = \text{Liabilities} + \text{(Shareholders' or Owners' equity)}$$ [7] [3]
The accounting equation is the mathematical structure of the balance sheet. Although a general ledger appears to be fairly simple, in large or complex organizations or organizations with various subsidiaries, the general ledger can grow to be quite large and take several hours or days to audit or balance. [8]
In a manual or non-computerized system, the general ledger may be a large book. Organizations may instead employ one or more spreadsheets for their ledgers, including the general ledger, or may utilize specialized software to automate ledger entry and handling. When a business uses enterprise resource planning (ERP) software, a financial-features module produces subledgers and the general ledger, with entries drawn from a database that is shared with other processes managed through the ERP.
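To make the posting mechanics concrete, here is a minimal sketch in Python (account names and amounts are invented for illustration): every transaction posts an equal debit and credit, so total debits always equal total credits, which is exactly what a trial balance checks.

```python
from collections import defaultdict

# Each ledger account accumulates debit (left side) and credit (right side) postings.
ledger = defaultdict(lambda: {"debit": 0, "credit": 0})

def post(debit_account, credit_account, amount):
    """Record one double-entry transaction: an equal debit and credit."""
    ledger[debit_account]["debit"] += amount
    ledger[credit_account]["credit"] += amount

post("Cash", "Notes Payable", 10_000)  # e.g. proceeds of a bank loan
post("Rent Expense", "Cash", 1_500)    # paying rent

# Trial balance: total debits must equal total credits.
total_debits = sum(acct["debit"] for acct in ledger.values())
total_credits = sum(acct["credit"] for acct in ledger.values())
```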
## References
1. "Accounting Term Concepts" (PDF). Retrieved 12 February 2017.
2. "National Curriculum Statement Accounting Guide Grade 10" (PDF). Retrieved 26 February 2017.
3. "Chapter 9.3 - General Ledger and Charts of Accounts". Accounting Scholar. Retrieved 28 February 2017.
4. "What is a Trial Balance?". Retrieved 5 March 2017.
5. "Posting to general ledger accounts" (PDF). Retrieved 26 February 2017.
6. Meigs and Meigs. Financial Accounting, Fourth Edition. McGraw-Hill, 1983. pp.19-20.
7. Whiteley, John. Moncton Accountant John Whiteley CPA. Retrieved 3 July 2017.
https://deepai.org/publication/fully-quantum-arbitrarily-varying-channels-random-coding-capacity-and-capacity-dichotomy

# Fully Quantum Arbitrarily Varying Channels: Random Coding Capacity and Capacity Dichotomy
We consider a model of communication via a fully quantum jammer channel with quantum jammer, quantum sender and quantum receiver, which we dub quantum arbitrarily varying channel (QAVC). Restricting to finite dimensional user and jammer systems, we show, using permutation symmetry and a de Finetti reduction, how the random coding capacity (classical and quantum) of the QAVC is reduced to the capacity of a naturally associated compound channel, which is obtained by restricting the jammer to i.i.d. input states. Furthermore, we demonstrate that the shared randomness required is at most logarithmic in the block length, using a random matrix tail bound. This implies a dichotomy theorem: either the classical capacity of the QAVC is zero, and then also the quantum capacity is zero, or each capacity equals its random coding variant.
## I Fully quantum AVC and random codes
We consider a simple, fully quantum model of arbitrarily varying channel (QAVC). Namely, we have three agents, Alice (sender), Bob (receiver) and Jamie (jammer), each controlling a quantum system $A$, $B$ and $J$, respectively. The channel is simply a completely positive and trace preserving (cptp) map $\mathcal{N}: A \otimes J \longrightarrow B$, and we assume it to be memoryless on blocks of length $\ell$, i.e. $\mathcal{N}^{\otimes\ell}: A^{\otimes\ell} \otimes J^{\otimes\ell} \longrightarrow B^{\otimes\ell}$, with $A^{\otimes\ell} = A \otimes \cdots \otimes A$ ($\ell$ times), etc. However, crucially, neither Alice's nor Jamie's input states need to be tensor product or even separable states. We shall assume throughout that all three Hilbert spaces $A$, $B$ and $J$ have finite dimension, $|A|, |B|, |J| < \infty$.

The previously introduced AVQC model of Ahlswede and Blinovsky [5], and more generally Ahlswede et al. [4], is obtained by channels that first dephase the jammer input in a fixed basis of $J$, so that the choices of the jammer are effectively reduced to basis states of $J$ and their convex combinations. Note that this generalises the classical AVC, which is simply a channel with input alphabets $\mathcal{X}$ (for the sender) and $\mathcal{S}$ (for the jammer) and output alphabet $\mathcal{Y}$, given by transition probabilities $W(y|x,s)$, and such a channel can always be interpreted as a cptp map. This model has been considered in [19, 20], however in those works principally from the point of view that Jamie is helping Alice and Bob, passively, by providing a suitable input state on $J^{\otimes\ell}$. Contrary to the classical AVC and the AVQC considered in [5, 4], where the jammer effectively always selects a tensor product channel between Alice and Bob, the fact that we allow general quantum inputs on $J^{\otimes\ell}$, including entangled states, permits Jamie to induce non-classical correlations between the different channel systems. These correlations, as was observed in [19, 20], are not only highly nontrivial, but can also have a profound impact on the communication capacity of the channel between Alice and Bob. In the present context, however, Jamie is fundamentally an adversary.
Define a (deterministic) classical code $\mathcal{C}$ for $\mathcal{N}$ of block length $\ell$ as a collection $\{(\rho_m, D_m) : m = 1, \ldots, M\}$ of states $\rho_m$ on $A^{\otimes\ell}$ and POVM elements $D_m$ acting on $B^{\otimes\ell}$, such that $\sum_{m=1}^{M} D_m \leq \openone$. Its rate is defined as $\frac{1}{\ell}\log M$, the number of bits encoded per channel use. Its error probability is defined as the average over uniformly distributed messages, and with respect to a state $\sigma$ on $J^{\otimes\ell}$:

$$P_{\mathrm{err}}(\mathcal{C},\sigma) := \frac{1}{M}\sum_{m=1}^{M} \operatorname{Tr}\left[\mathcal{N}^{\otimes\ell}(\rho_m \otimes \sigma)\,(\openone - D_m)\right].$$
For the transmission of quantum information, define a (deterministic) quantum code $\mathcal{Q} = (\mathcal{E}, \mathcal{D})$ for $\mathcal{N}$ of block length $\ell$ as a pair of cptp maps $\mathcal{E}: \mathcal{L}(\mathbb{C}^L) \to \mathcal{L}(A^{\otimes\ell})$ and $\mathcal{D}: \mathcal{L}(B^{\otimes\ell}) \to \mathcal{L}(\mathbb{C}^L)$. Its rate is $\frac{1}{\ell}\log L$, the number of qubits encoded per channel use, and the error is quantified, with respect to a state $\sigma$ on $J^{\otimes\ell}$, as the “infidelity”

$$\hat{F}(\mathcal{Q},\sigma) := 1 - \operatorname{Tr}\left[\left((\mathrm{id} \otimes \mathcal{D}\circ\mathcal{N}^{\otimes\ell}_{\sigma}\circ\mathcal{E})\,\Phi_L\right)\Phi_L\right],$$

with the maximally entangled state $\Phi_L = \frac{1}{L}\sum_{k,l=1}^{L} |kk\rangle\langle ll|$. Here, we have introduced the channels $\mathcal{N}^{\otimes\ell}_{\sigma} := \mathcal{N}^{\otimes\ell}\left((\cdot) \otimes \sigma\right)$, defined by fixing the jammer's state to $\sigma$ on $J^{\otimes\ell}$.
Note that we use the language of “deterministic” code, although in quantum information this is indistinguishable from stochastic encoders; it is meant to differentiate from “random” codes, which use shared correlation: A random classical [quantum] code for $\mathcal{N}$ of block length $\ell$ consists of a random variable $\lambda$ with a well-defined distribution and a family $\{\mathcal{C}_\lambda\}$ of deterministic classical codes [a family $\{\mathcal{Q}_\lambda\}$ of deterministic quantum codes]. The error probability of the random classical code, always with respect to a state $\sigma$ on $J^{\otimes\ell}$, is simply the expectation over $\lambda$, i.e. $\mathbb{E}_\lambda P_{\mathrm{err}}(\mathcal{C}_\lambda, \sigma)$. The error of the random quantum code is similarly $\mathbb{E}_\lambda \hat{F}(\mathcal{Q}_\lambda, \sigma)$.

The operational interpretation of the random code model is that Alice and Bob share knowledge of the random variable $\lambda$, and use $\mathcal{C}_\lambda$ [$\mathcal{Q}_\lambda$] accordingly, but that Jamie is ignorant of it. This shared randomness is thus a valuable resource, which for random codes is considered freely available, whose amount, however, we would like to control at the same time.
The capacities associated to these code concepts are defined as usual, as the maximum achievable rate as block length goes to infinity and the error goes to zero:
$C_{\det}(\mathcal{N}) := \limsup_{\ell\to\infty} \tfrac{1}{\ell}\log M \ \text{ s.t. }\ \sup_\sigma P_{\mathrm{err}}(\mathcal{C},\sigma)\to 0,$
$C_{\mathrm{rand}}(\mathcal{N}) := \limsup_{\ell\to\infty} \tfrac{1}{\ell}\log M \ \text{ s.t. }\ \sup_\sigma \mathbb{E}_\lambda P_{\mathrm{err}}(\mathcal{C}_\lambda,\sigma)\to 0,$
$Q_{\det}(\mathcal{N}) := \limsup_{\ell\to\infty} \tfrac{1}{\ell}\log L \ \text{ s.t. }\ \sup_\sigma \hat{F}(\mathcal{Q},\sigma)\to 0,$
$Q_{\mathrm{rand}}(\mathcal{N}) := \limsup_{\ell\to\infty} \tfrac{1}{\ell}\log L \ \text{ s.t. }\ \sup_\sigma \mathbb{E}_\lambda \hat{F}(\mathcal{Q}_\lambda,\sigma)\to 0.$
If in the above error maximisations Jamie is restricted to tensor power states , the QAVC model becomes a compound channel: , . Its classical and quantum capacities are denoted and , respectively.
## Ii Random coding capacities: from QAVC to its compound channel
$C_{\det}(\mathcal{N}) \le C_{\mathrm{rand}}(\mathcal{N}) \le C(\{\mathcal{N}_\sigma\}_\sigma), \quad\text{and}\quad Q_{\det}(\mathcal{N}) \le Q_{\mathrm{rand}}(\mathcal{N}) \le Q(\{\mathcal{N}_\sigma\}_\sigma). \tag{1}$
Here, we show that for the random capacity, the rightmost inequalities are identities, by proving bounds in the opposite direction. For the quantum capacity, this was done in [19, Appendix A]. To present the argument, define the permutation operator acting on the tensor power as permuting the subsystems, for a permutation :
$U_\pi\,(|\alpha_1\rangle|\alpha_2\rangle\cdots|\alpha_\ell\rangle) = |\alpha_{\pi^{-1}(1)}\rangle|\alpha_{\pi^{-1}(2)}\rangle\cdots|\alpha_{\pi^{-1}(\ell)}\rangle,$
which extends uniquely by linearity. This is a unitary representation of the symmetric group, which is defined for any Hilbert space. The quantum channel obtained by the conjugation action of is denoted .
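To make the representation concrete, here is a small NumPy sketch (not from the paper — the local dimension and the permutation are arbitrary toy choices) that builds $U_\pi$ as a permutation matrix and checks that conjugation by it permutes tensor factors:

```python
import numpy as np

def permutation_unitary(perm, d):
    """U_pi on (C^d)^{otimes l}: |a_1 ... a_l> -> |a_{pi^-1(1)} ... a_{pi^-1(l)}>.
    perm is 0-indexed, perm[j] = pi(j)."""
    l = len(perm)
    inv = [0] * l
    for j, p in enumerate(perm):
        inv[p] = j                      # inv = pi^{-1}
    shape = (d,) * l
    dim = d ** l
    U = np.zeros((dim, dim))
    for flat in range(dim):
        digits = np.unravel_index(flat, shape)
        out = tuple(digits[inv[j]] for j in range(l))   # slot j receives letter pi^{-1}(j)
        U[np.ravel_multi_index(out, shape), flat] = 1.0
    return U

# a 3-cycle pi: 0 -> 1 -> 2 -> 0, acting on three qubits
U = permutation_unitary([1, 2, 0], 2)
assert np.allclose(U @ U.T, np.eye(8))   # unitary (here a real permutation matrix)

# conjugation moves the operator at slot i to slot pi(i): A(x)B(x)C -> C(x)A(x)B
A = np.diag([1.0, 2.0]); B = np.diag([3.0, 4.0]); C = np.diag([5.0, 6.0])
assert np.allclose(U @ np.kron(np.kron(A, B), C) @ U.T,
                   np.kron(np.kron(C, A), B))
```

The same construction gives the group property $U_\pi U_\tau = U_{\pi\circ\tau}$, which is what makes the averaging over $S_\ell$ below well defined.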
###### Proposition 1
Let be a quantum code for the compound channel at block length of size and with fidelity , i.e. for all ,
Then, the random quantum code with a uniformly distributed random permutation of , defined by
$\mathcal{Q}_\pi = (\,\mathcal{U}_\pi\circ\mathcal{E},\ \mathcal{D}\circ\mathcal{U}_{\pi^{-1}}\,),$
has infidelity for the QAVC .
###### Proposition 2
Let be a code of block length for the compound channel with error probability , i.e. for all ,
$P_{\mathrm{err}}(\mathcal{C},\sigma^{\otimes\ell}) = \frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\bigl(\mathcal{N}_\sigma^{\otimes\ell}(\rho_m)\,(\mathbb{1}-D_m)\bigr) \le \epsilon.$
Then, the random code with a uniformly distributed random permutation of , defined by
$\mathcal{C}_\pi := \bigl\{(U_\pi\rho_m U_\pi^\dagger,\ U_\pi D_m U_\pi^\dagger) : m=1,\ldots,M\bigr\},$
has error probability for the QAVC .
###### Proof.
We only prove Proposition 2, since Proposition 1 has been argued in [19, Appendix A], with analogous proofs. For an arbitrary state on , the error probability of the random code can be written as
$\mathbb{E}_\pi\,P_{\mathrm{err}}(\mathcal{C}_\pi,\zeta) = \frac{1}{M}\sum_{m=1}^{M}\mathbb{E}_\pi\operatorname{Tr}\,U_\pi^\dagger\bigl(\mathcal{N}^{\otimes\ell}(U_\pi\rho_m U_\pi^\dagger\otimes\zeta)\bigr)U_\pi\,(\mathbb{1}-D_m)$
$\phantom{\mathbb{E}_\pi\,P_{\mathrm{err}}(\mathcal{C}_\pi,\zeta)} = \frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\,\mathcal{N}^{\otimes\ell}\bigl(\rho_m\otimes\mathbb{E}_\pi U_\pi\zeta U_\pi^\dagger\bigr)(\mathbb{1}-D_m), \tag{2}$
where in the last line we have exploited the -covariance of the tensor product channel . The crucial feature of the last expression is that it shows that the error probability that the jammer can achieve with is the same as that of the state
$\zeta' = \mathbb{E}_\pi\,U_\pi\zeta U_\pi^\dagger = \frac{1}{\ell!}\sum_{\pi\in S_\ell} U_\pi\zeta U_\pi^\dagger.$
This is, by its construction, a permutation-symmetric state, and we can apply the de Finetti reduction from [13]:
$\zeta' \le (\ell+1)^{|J|^2}\int_{\sigma\in\mathcal{S}(J)}\mu(\mathrm{d}\sigma)\,\sigma^{\otimes\ell} =: (\ell+1)^{|J|^2}\,F,$
with a universal probability measure on the states of , whose detailed structure is given in [13], but which is not going to be important for us.
Indeed, inserting this into the last line of eq. (2), and using complete positivity of , we obtain the upper bound
where in the last step we have used the assumption that for every jammer state of the form , the error probability is bounded by .
To apply this, we need compound channel codes with error decaying faster than any polynomial. This is no problem, as there are several constructions giving even exponentially small error for rates arbitrarily close to the compound channel capacity, both for classical [7, 21] and quantum codes [8].
###### Corollary 3
Let be a QAVC. Its classical random coding capacity is given by
$C_{\mathrm{rand}}(\mathcal{N}) = C(\{\mathcal{N}_\sigma\}_\sigma) = \lim_{\ell\to\infty}\frac{1}{\ell}\ \max_{\{p_x,\,\rho_x^{A^\ell}\}}\ \inf_{\sigma_J}\ I(X{:}B^\ell),$
where is the Holevo information of the ensemble [7, 21].
Similarly, its quantum random coding capacity is
$Q_{\mathrm{rand}}(\mathcal{N}) = Q(\{\mathcal{N}_\sigma\}_\sigma) = \lim_{\ell\to\infty}\frac{1}{\ell}\ \max_{|\phi\rangle^{RA^\ell}}\ \inf_{\sigma_J}\ I(R\rangle B^\ell),$
where is the coherent information of the state [8].
## Iii Capacity dichotomy: Elimination of correlation from random codes
For classical AVCs or AVQCs with classical jammer, the observations of Ahlswede [2] show that the random coding capacity can always be attained using at most bits of shared randomness. This is done by i.i.d. sampling the shared random variable , thus approximating, for each channel state , by an empirical mean over realisations of , except with probability exponentially small in . Then, the union bound can be used because the jammer has “only” exponential in many choices. On the face of it, this strategy looks little promising for QAVCs: the jammer’s choices form a continuum, and even if we realise that we can discretise , any net of states is exponentially large in the dimension [6], i.e. doubly exponentially large in , resulting in a naive bound of for the shared randomness required. However, the linearity of the quantum formalism comes to our rescue.
###### Observation 4
From the point of view of the jammer, the error probability of a classical code is an observable, , with a POVM element depending in a systematic way on the code. Likewise, the infidelity of a quantum code can be written for a POVM element .
###### Proof.
Indeed, using the Heisenberg picture (adjoint map) ,
$P_{\mathrm{err}}(\mathcal{C},\sigma) = \frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}\bigl(\mathcal{N}^{\otimes\ell}(\rho_m\otimes\sigma)\bigr)(\mathbb{1}-D_m) = \operatorname{Tr}\,\sigma\Bigl[\frac{1}{M}\sum_{m=1}^{M}\operatorname{Tr}_{A^\ell}\,(\rho_m\otimes\mathbb{1})\bigl(\mathcal{N}^{*\,\otimes\ell}(\mathbb{1}-D_m)\bigr)\Bigr],$
so that which is manifestly a POVM element, i.e. .
Likewise, for the infidelity,
$\hat{F}(\mathcal{Q},\sigma) = \operatorname{Tr}\bigl((\mathrm{id}\otimes\mathcal{D}\circ\mathcal{N}^{\otimes\ell}\circ\mathcal{E})(\Phi_L\otimes\sigma)\bigr)\cdot(\mathbb{1}-\Phi_L) = \operatorname{Tr}\,(\Phi_L\otimes\sigma)\cdot\bigl((\mathrm{id}\otimes\mathcal{E}^*\circ\mathcal{N}^{*\,\otimes\ell}\circ\mathcal{D}^*)(\mathbb{1}-\Phi_L)\bigr) = \operatorname{Tr}\,\sigma G,$
with
Obviously, for a random classical code , the expected error probability is
$\mathbb{E}_\lambda P_{\mathrm{err}}(\mathcal{C}_\lambda,\sigma) = \operatorname{Tr}\,\sigma\,(\mathbb{E}_\lambda E_\lambda),$
with the POVM elements associated to each code . Likewise for a random quantum code.
For a random classical code, the jammer's goal is to maximise the error probability, choosing $\sigma$ in the worst possible way. But from the present perspective that the error probability is an observable for Jamie, it is clear that $\sup_\sigma \mathbb{E}_\lambda P_{\mathrm{err}}(\mathcal{C}_\lambda,\sigma)$ is simply the maximum eigenvalue of $\mathbb{E}_\lambda E_\lambda$.
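That the worst jammer state picks out the largest eigenvalue is easy to check numerically; here is a toy NumPy sketch (the "error observable" below is a random matrix of our own invention, not derived from any particular code):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 6
# a random "error observable" E with 0 <= E <= 1
A = rng.normal(size=(d, d))
E = A @ A.T                              # positive semidefinite
E = E / (np.linalg.eigvalsh(E)[-1] + 1.0)  # now all eigenvalues in [0, 1)

vals, vecs = np.linalg.eigh(E)
lam_max = vals[-1]

# the maximum of Tr(sigma E) over states sigma is attained on the top eigenvector...
top = np.outer(vecs[:, -1], vecs[:, -1])
assert np.isclose(np.trace(top @ E), lam_max)

# ...and no density matrix does better
for _ in range(1000):
    G = rng.normal(size=(d, d))
    sigma = G @ G.T
    sigma /= np.trace(sigma)             # random mixed state
    assert np.trace(sigma @ E) <= lam_max + 1e-10
```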
We say, following general convention, that a random classical or quantum code or has error (without reference to any specific state of the jammer) if
$\sup_\sigma \mathbb{E}_\lambda P_{\mathrm{err}}(\mathcal{C}_\lambda,\sigma) \le \epsilon \quad\text{or}\quad \sup_\sigma \mathbb{E}_\lambda \hat{F}(\mathcal{Q}_\lambda,\sigma) \le \epsilon,$
respectively. By the above discussion is equivalent to
$\mathbb{E}_\lambda E_\lambda \le \epsilon\,\mathbb{1} \quad\text{or}\quad \mathbb{E}_\lambda G_\lambda \le \epsilon\,\mathbb{1}, \tag{3}$
in the sense of the operator order. This is an extremely useful way of characterising that the random code has a given error.
Our goal now is to select a “small” number of ’s, say , such that
$\frac{1}{n}\sum_{\nu=1}^{n} E_{\lambda_\nu} \le (\epsilon+\delta)\,\mathbb{1}, \tag{4}$
ensuring that the random code , with uniformly distributed , has error probability . This is precisely the situation for which the matrix tail bounds in [3] were developed. Indeed, quoting [3, Thm. 19], for i.i.d. ,
$\Pr\Bigl\{\frac{1}{n}\sum_{\nu=1}^{n} E_{\lambda_\nu} \not\le (\epsilon+\delta)\,\mathbb{1}\Bigr\} \le |J|^{\ell}\cdot\exp\bigl(-n\,D(\epsilon+\delta\,\|\,\epsilon)\bigr),$
with the binary relative entropy , which can be lower bounded by Pinsker’s inequality, . Note that both the logarithm () and the exponential () are understood to base .
Thus, for , the right hand probability bound above is less than , so that there exist with (4). The number of bits needed to be shared between Alice and Bob to achieve this, is , which we may choose to be , which is not zero, but has zero rate as . Exactly the same argument applies to a random quantum code . We record this as a quotable statement.
###### Proposition 5
Let be a random classical code of block length for the QAVC , with error probability . Then for , there exist , with , such that the random code has error probability .
For a random quantum code , with infidelity , we similarly have that the random code has infidelity .
Remark We have discussed here from the beginning the version of the capacity with average probability of error (and arbitrary encodings). Following Ahlswede [2] and the generalisation of his method above, investing another bits of shared randomness, or losing bits from the code, we can convert any code with error into one with maximum error . We omit the details of this argument, as it is exactly as in [2].
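To illustrate the sampling step behind Proposition 5, here is a toy NumPy sketch. The "error observables" are invented rank-one operators with expectation exactly $\epsilon\,\mathbb{1}$ (not derived from any actual code); averaging a few thousand i.i.d. samples pulls the jammer's best achievable error — the top eigenvalue of the empirical mean — down to near $\epsilon$, exactly as the matrix tail bound predicts:

```python
import numpy as np

rng = np.random.default_rng(0)
d, eps, n = 16, 0.05, 5000   # toy jammer dimension, target error, number of sampled codes

def random_error_operator():
    # toy POVM element E_lambda = 0.8 |u><u| with Haar-random |u>,
    # so that E[E_lambda] = (0.8 / d) * I = eps * I exactly
    v = rng.normal(size=d) + 1j * rng.normal(size=d)
    v /= np.linalg.norm(v)
    return 0.8 * np.outer(v, v.conj())

avg = sum(random_error_operator() for _ in range(n)) / n
worst_case_error = np.linalg.eigvalsh(avg)[-1]   # jammer picks the top eigenvector
print(f"single code: error up to 0.8; average of {n} codes: {worst_case_error:.3f}")
```

Against any *single* sampled code the jammer can force error 0.8; against the uniform mixture of all $n$ codes the best it can do is close to $\epsilon = 0.05$.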
Proposition 5 allows us to assess the leftmost inequalities in the capacity order from eq. (1). Because the randomness needed is so little, it can be generated by a channel code losing no rate. Hence, in a certain sense, they are also identities, except in the somewhat singular case that the deterministic classical capacity vanishes:
###### Corollary 6
The classical capacity of a QAVC is either or, if it is positive, it equals the random coding capacity:
$C_{\det}(\mathcal{N}) = \begin{cases} C_{\mathrm{rand}}(\mathcal{N}) & \text{if } C_{\det}(\mathcal{N}) > 0, \\ 0 & \text{otherwise.} \end{cases}$
Similarly, for the quantum capacity:
$Q_{\det}(\mathcal{N}) = \begin{cases} Q_{\mathrm{rand}}(\mathcal{N}) & \text{if } C_{\det}(\mathcal{N}) > 0, \\ 0 & \text{otherwise.} \end{cases} \qquad \Box$
## Iv Discussion and Outlook
We have shown that in a fully quantum jammer channel model (QAVC), the random coding capacity, for both quantum and classical transmission, can be reduced to the capacity of a corresponding compound channel; furthermore, by extension of the “elimination of correlation” technique, that the shared randomness required has zero rate, thus implying dichotomy theorems for the deterministic classical and quantum capacities. Since the derandomisation leaves so little randomness, we can apply the results also to say something about the identification capacity of QAVCs: Either the ID-capacity vanishes, or it equals the random coding capacity .
Our work leaves two important open questions: First, to give necessary and sufficient conditions for vanishing classical capacity. For classical AVCs this is the so-called “symmetrizability” [17, 14]. But what is the analogue of this condition for quantum channels?
Second, both parts of our reasoning relied on the finite dimensionality of the jammer system . It is not so clear how to deal with infinite dimension of , on the other hand. A priori we have a problem already in Proposition 2, since the de Finetti reduction has an upper bound depending on the dimension . However, one can prove the random coding capacity theorem directly from first principles, without recourse to de Finetti reductions.
Then, we have the problem again in the derandomisation step, which requires bounded to apply the matrix tail bound. We need some kind of quantum net argument to be able to go to a finite dimensional subspace that somehow approximates the relevant features of up to error and block length . Classically, the finiteness of the alphabet of channel states is irrelevant, as long as we have finite sender and receiver alphabets. The reason is that for each block length we can choose a subset of channel states of size polynomial in , corresponding to an -net of channels realised by the jammer, for any fixed . Indeed, for the QAVC with classical jammer, which may be described by a state set , the following statements are easily obtained by standard methods.
###### Lemma 7
For every , there exists a set of cardinality , with the property that for every there is an with , where the norm is the diamond norm (aka completely bounded trace norm) on channels [1, 22].
By applying this lemma with , the “telescoping trick” and the triangle inequality to bound for and , we obtain then:
###### Lemma 8
For every and integer , there exists a subset of cardinality , such that
$\sup_{\sigma^\ell\in S'_\ell}\mathbb{E}_\lambda P_{\mathrm{err}}(\mathcal{C},\sigma^\ell)\ \le\ \sup_{s^\ell\in S^\ell}\mathbb{E}_\lambda P_{\mathrm{err}}(\mathcal{C},s^\ell)\ \le\ \sup_{\sigma^\ell\in S'_\ell}\mathbb{E}_\lambda P_{\mathrm{err}}(\mathcal{C},\sigma^\ell)+\eta$
for any random code . Similar for the infidelity of random quantum codes.
Since we need to entangle both the ’s and the ’s, it seems that the most natural approach is to answer the following question.
###### Question 9
Let be a cptp map with finite dimensional and , and . Is it possible to find a subspace of dimension bounded by some polynomial in , with the following property?
For every Hilbert space and state on , there exists another state on such that .
Here, and are channels from to , defined by inserting the respective state into the jammer register:
$\mathcal{N}_\sigma(\rho) := (\mathcal{N}\otimes\mathrm{id}_K)(\rho\otimes\sigma), \qquad \mathcal{N}_{\sigma'}(\rho) := (\mathcal{N}\otimes\mathrm{id}_K)(\rho\otimes\sigma').$
We can reduce this to the more elementary question of approximating the output of the “Choi channel” , with , defined by , mapping each to the Choi state of the channel : Namely, the question is whether for every Hilbert space and state on , does there exist a state on such that
We now show that a positive answer to Question 9, with deviation , could be used to replace the environments of in steps each by a finite dimensional approximation. In this way, we would be able to find, for every state on , another state on , with
$\frac{1}{2}\bigl\|(\mathcal{N}^{\otimes\ell})_\sigma - (\mathcal{N}^{\otimes\ell})_{\sigma'}\bigr\|_\diamond \le \eta. \tag{5}$
###### Proof.
Set ; we shall define a sequence of approximants on (), as follows:
To obtain , we apply Question 9 with (the last of the -systems) to obtain
$\frac{1}{2}\bigl\|\mathcal{N}^{[1]}_{\sigma^{(0)}} - \mathcal{N}^{[1]}_{\sigma^{(1)}}\bigr\|_\diamond \le \frac{\eta}{\ell},$
where the notation indicates application of the channel to the -th system in . Proceeding inductively, assume that we already have constructed a state on , Question 9 applied to (i.e. all the systems and the last of the ’s) gives us a state on such that
$\frac{1}{2}\bigl\|\mathcal{N}^{[i]}_{\sigma^{(i-1)}} - \mathcal{N}^{[i]}_{\sigma^{(i)}}\bigr\|_\diamond \le \frac{\eta}{\ell}.$
Since the diamond norm is contractive under composition with cptp maps, we obtain for all that
$\frac{1}{2}\bigl\|(\mathcal{N}^{\otimes\ell})_{\sigma^{(i-1)}} - (\mathcal{N}^{\otimes\ell})_{\sigma^{(i)}}\bigr\|_\diamond \le \frac{\eta}{\ell},$
and via the triangle inequality we arrive at eq. (5), by letting and recalling .
This would mean that any behaviour that the jammer can effect by choosing states on , can be approximated up to (on block length ) by choices from , analogously to Lemma 8, which actually provides a positive answer to Question 9 in the case of a classical jammer. Since is bounded polynomially in , we could apply now Proposition 5 and incur an additional term of in the shared randomness required, in particular it will still be of zero rate.
A third complex of questions concerns the extension of the present results to other quantum channel capacities. This is easy along the above lines for cases like the entanglement-assisted capacity (cf. [11, 16]), but challenging for others, such as the private capacity [12, 15]. This is interesting because the error criterion (of decodability and privacy) does not seem to correspond to an observable on the jammer system. We leave this and the other open problems for future investigation.
Acknowledgments. HB and CD were supported by the German BMBF through grants 16KIS0118K and 16KIS0117K. JN was supported by the German BMWi and ESF, grant 03EFHSN102. AW was supported by the ERC Advanced Grant IRQUAT, the Spanish MINECO, projects no. FIS2013-
40627-P and FIS2016-86681-P, and the Generalitat de Catalunya, CIRIT project 2014-SGR-966. | 2022-06-25 11:40:01 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8850228190422058, "perplexity": 685.7440046245442}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103034930.3/warc/CC-MAIN-20220625095705-20220625125705-00372.warc.gz"} |
http://daleswanson.blogspot.com/2013/04/ | ## Tuesday, April 30, 2013
### What If We Never Run Out of Oil?
http://www.theatlantic.com/magazine/archive/2013/05/what-if-we-never-run-out-of-oil/309294/?single_page=true
From the beginning, it was evident that the Kern River field was rich with oil, millions upon millions of barrels. (A barrel, the unit of oil measurement, is 42 gallons; depending on the grade, a ton of oil is six to eight barrels.) Wildcatters poured into the area, throwing up derricks, boring wells, and pulling out what they could. In 1949, after 50 years of drilling, analysts estimated that just 47 million barrels remained in reserves—a rounding error in the oil business. Kern River, it seemed, was nearly played out. Instead, oil companies removed 945 million barrels in the next 40 years. In 1989, analysts again estimated Kern reserves: 697 million barrels. By 2009, Kern had produced more than 1.3 billion additional barrels, and reserves were estimated to be almost 600 million barrels.
## Sunday, April 28, 2013
### How Long to Suffocate in Space
I previously looked at how long it would take to freeze in space. In this post, I'm going to look at the other side of the losing life support coin: how long would it take to run out of air in space?
Since the heat being lost is dependent on the temperature, which is changing, we needed a differential equation before (well actually since it's all been worked out before we didn't, but where's the fun in that?). This time, oxygen use is constant, so the math is much simpler. On the other hand, there are still plenty of variables to introduce uncertainty.
To begin, if you are sealed in a room with a normal mix of air you will not run out of oxygen. Rather CO2 will build up to toxic levels, and you will die. It is possible to scrub CO2 from the air. In fact, we currently do it on our spaceships. A major issue Apollo 13 faced was getting circular CO2 filters to work with a square hole.
What this means for us is we need to look at time for CO2 to build up to lethal levels, and for how long it would take to run out of O2 if the CO2 is being removed.
We also need to consider that people consume O2 at different rates, and even at different rates at different times. VO2 is a measure of what rate a person is consuming O2. Unfortunately, it is almost always used in the context of measuring peak VO2 during exercise (as a measure of fitness). It was hard to get good numbers for a resting person, but I settled on about 0.018 cubic meters per hour. For the active rate, I still couldn't just use typical VO2 numbers because they are for the max consumption rate during bursts of exercise. A person moving around attempting to repair a ship wouldn't be using as much air as a person sprinting. I found some good numbers from scuba diving forums and settled on 0.1 cubic meters per hour.
On the subject of the variation of people, different people will be able to tolerate different concentrations of CO2 or levels of O2. It was hard finding a good number for lethal CO2 concentration. Most sites were concerned with long term exposure at a work environment (years), or short term accidental exposure (minutes). I settled on 5% which is probably a bit low.
As for minimum O2 concentration, similar problems apply. Here I settled on 11%, compared to about 21% normally. Ships could use slightly higher O2 concentrations to begin with to help with loss of life support situations, but high O2 levels have their own problems.
Additional issues are things like fires or venting atmosphere reducing the time. Also, the number of people on a ship is harder to estimate. On larger ships it is probably more constant. But on small shuttles it could vary quite a bit.
Working in our favor is the fact that the respiration equation:
"C"_6"H"_12"O"_6 + 6"O"_2 to 6"CO"_2 + 6"H"_2"O"
Has a one to one mole ratio between O2 and CO2. Additonaly, a mole of any gas takes up about 24 liters at normal temperature and pressure. This means we can use the same formula for both O2 consumption and CO2 build up:
$t = \dfrac{V \cdot \Delta r}{n \cdot R}$ Where: $V$ is volume, $\Delta r$ is the change in the ratio of the gas, $n$ is the number of people, and $R$ is the rate that gas is changed. As an example:
t = 36.1 " hours" = {26 "m"^3 cdot 0.05}/{2 cdot 0.018 "m"^3/"hour"}
This is the formula for a 26 cubic meter shuttle craft, with 2 people. The change in concentration is 0.05 because CO2 is effectively 0 normally. 0.018 cubic meters/hour is the resting CO2 production rate.
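The formula is easy to wrap in a small helper; here is a Python sketch using the numbers from above (the worst case is CO2 building to 5% while doing repair work, the best case is O2 falling from 21% to 11% at rest with CO2 scrubbed — results differ from the table below only by rounding):

```python
def survival_time(volume_m3, crew, rate_m3_per_hr, delta_ratio):
    """Hours until a gas fraction shifts by delta_ratio: t = V * dr / (n * R)."""
    return volume_m3 * delta_ratio / (crew * rate_m3_per_hr)

# Type 6 shuttlecraft, 2 crew
worst = survival_time(26, 2, 0.1, 0.05)    # CO2 to 5%, doing work -> 6.5 h
best = survival_time(26, 2, 0.018, 0.10)   # O2 21% -> 11%, at rest -> ~72.2 h
print(f"worst case: {worst:.1f} h ({worst/24:.2f} days), "
      f"best case: {best:.1f} h ({best/24:.2f} days)")
```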
I decided to give a range with worst case and best case scenarios. The best case is resting, and not worrying about CO2 (because it's being scrubbed). The worst case is CO2 build up will working to fix the ship.
As you can see there were some cases where heat loss was faster than the best case scenario. I didn't expect it to even be close.
Name Volume ("m"^3) Crew Heat (days) Oxygen, Resting (days) CO2, Doing Work (days) Death Star II 2,144,000,000,000,000 2,500,000 437,837 218,370,370 17,866,667 Super Star Destroyer 12,645,900,000 300,000 967 10,733 878 Borg Cube 28,000,000,000 130,000 25,585 54,843 4,487 Enterprise-D 5,820,983 1,200 456 1,235 101 Enterprise 211,248 430 161.0 125.1 10.2 Runabout 569 3 11.3 48.3 4.0 Type 6 Shuttlecraft 26 2 12.8 3.3 0.3 TIE Fighter 8 1 0.7 2.0 0.2
## Thursday, April 25, 2013
### Cat in a Shark Suit Riding a Roomba and Chasing a Duck
We might as well close up shop now. This video is clearly the ultimate culmination of the internet.
## Thursday, April 18, 2013
### The Geopolitics of the United States, Part 1: The Inevitable Empire
http://www.stratfor.com/analysis/geopolitics-united-states-part-1-inevitable-empire
This is very long, and nothing in it is revolutionary, but I found it an interesting overview of the United States' expansion.
## Tuesday, April 16, 2013
### The Size of Pizza
Pizza is likely the most important substance ever created. As such, it is crucial that we have adequate information for our various pizza dealings. In civilized society a pizza is 16 inches in diameter. However, one must occasionally deal with the barbarians of the pizza world: Fast food pizza, eg, Pizza Hut, Dominoes, Papa John's.
To begin, they have had the audacity to make pizzas in sizes less than the scientifically proven optimal size. However, they add insult to injury by calling these smaller sizes "large". Since pizza size increases as the square of half the diameter, decreasing the diameter doesn't have an intuitive effect on the actual reduction in pizza.
To help illustrate the atrocity being done here, I've compiled this table of various pizza sizes. I've included the name Pizza Hut (or the others) call them, as well as their name to rational humans. Next, is the diameter and the area. The slices equivalent lets you know how many slices of a normal 16" pizza you would be getting at the other sizes; they likely are all cut into 8 slices. % of a real pizza is exactly what it sounds like.
| Pizza Hut Name | Actual Name | Diameter (in) | Area (in²) | Slices Equivalent | % of a Real Pizza |
|---|---|---|---|---|---|
| Personal | Joke | 6 | 28.3 | 1.13 | 14% |
| Small | ? | 10 | 78.5 | 3.13 | 39% |
| Med | Offensively Small | 12 | 113.1 | 4.50 | 56% |
| Large | Small | 14 | 153.9 | 6.13 | 77% |
| X Large | Pizza | 16 | 201.1 | 8.00 | 100% |
| — | X Large | 18 | 254.5 | 10.13 | 127% |
| — | Wonderful | 20 | 314.2 | 12.50 | 156% |
Edit:
I've made a calculator to help compare prices across pizza sizes.
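In the same spirit, here is a minimal version of such a calculator in Python (the prices are made up purely for illustration):

```python
from math import pi

def area(diameter_in):
    return pi * (diameter_in / 2) ** 2

def slices_equivalent(diameter_in):
    # how many slices of a proper 16" pizza (cut into 8) this pizza amounts to
    return 8 * (diameter_in / 16) ** 2

def price_per_sq_inch(price, diameter_in):
    return price / area(diameter_in)

print(f'14" "large" = {slices_equivalent(14)} real slices')   # 6.125
# hypothetical prices: $10 for a 14" fast-food pie vs $14 for a real 16" pizza
print(f'{price_per_sq_inch(10, 14):.4f} vs {price_per_sq_inch(14, 16):.4f} $/sq in')
```

Even at those made-up prices, the smaller pie can win on dollars per square inch — which is exactly why you have to compute it rather than eyeball it.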
## Tuesday, April 9, 2013
### Leonard v. Pepsico, Inc.
http://en.wikipedia.org/wiki/Leonard_v._Pepsico,_Inc
Leonard v. Pepsico, Inc., 88 F. Supp. 2d 116, (S.D.N.Y. 1999), aff'd 210 F.3d 88 (2d Cir. 2000), more widely known as the Pepsi Points Case, is a contracts case tried in the United States District Court for the Southern District of New York in 1999, in which the plaintiff, John Leonard, sued PepsiCo, Inc. in an effort to enforce an "offer" to redeem 7,000,000 Pepsi Points for an AV-8 Harrier II jump jet, which PepsiCo had shown in a portion of a televised commercial that PepsiCo argued was intended to be humorous. The plaintiff did not collect 7,000,000 Pepsi Points through the purchase of Pepsi products, but instead sent a certified check for $700,008.50 as permitted by the contest rules. Leonard had 15 existing points, paid$0.10 a point for the remaining 6,999,985 points, and a \$10 shipping and handling fee.
Among other claims made, Leonard claimed that a federal judge was incapable of deciding on the matter, and that instead the decision had to be made by a jury consisting of members of the "Pepsi Generation" to whom the advertisement would allegedly constitute an offer.
In justifying its conclusion that the commercial was "evidently done in jest" and that "The notion of traveling to school in a Harrier Jet is an exaggerated adolescent fantasy," the court made several observations regarding the nature and content of the commercial. These included (among others) that:
• "The callow youth featured in the commercial is a highly improbable pilot, one who could barely be trusted with the keys to his parents' car, much less the prize aircraft of the United States Marine Corps."
• "The teenager's comment that flying a Harrier Jet to school 'sure beats the bus' evinces an improbably insouciant attitude toward the relative difficulty and danger of piloting a fighter plane in a residential area."
• "No school would provide landing space for a student's fighter jet, or condone the disruption the jet's use would cause."
## Sunday, April 7, 2013
### So Crates
Invoking Socrates to get out of running a red light
Fighting for his "rights"
### The Labyrinth of Genre
This uses last.fm genre data to show an endless branching of genres. Every genre you click shows the 6 closest related ones (not necessarily sub-genres).
Note: It plays a band from the currently selected genre.
http://static.echonest.com/LabyrinthOfGenre/GenreMaze.html
## Wednesday, April 3, 2013
### The trouble with using police informants in the US
http://www.bbc.co.uk/news/magazine-21939453
Whatever the case, under Florida law Horner now faced a minimum sentence of 25 years, if found guilty.
"My public defender told me, 'They got you dead to rights.' So I thought, 'OK, I guess there's no need taking this to trial.'"
Prosecutors offered a plea bargain of 15 years if Horner accepted a guilty plea.
"I said, 'My youngest daughter will be 25 years old when I get out. I can't do that.'"
That left him with only one option - to become an informant himself.
Under the deal he signed with prosecutors, he agreed to plead guilty. But if he helped make prosecutable cases against five other people on drug-trafficking charges - charges carrying 25-year minimum terms - his own sentence could be reduced from 25 years to 10.
Horner failed to make cases against drug traffickers.
As a result, he was sentenced to the full 25 years in October last year and is now serving his sentence in Liberty Correctional Institution, outside Tallahassee. He will be 72 by the time he is released.
The irony is that if Horner been an experienced drug dealer, he may well now be serving a much shorter term than 25 years.
"What snitching does is it rewards the informed, so the lower you are on the totem pole of criminal activity, the less useful you are to the government," says Natapoff. "The higher up in the hierarchy you are, the more you have to offer."
Court records show that Matt, the person who informed on Horner, had a lengthy record of drug offences. At the point he informed on Horner, he was facing a minimum sentence of 15 years for trafficking. He was ultimately sentenced to just 18 months and is now free.
I believe there's a term for this. | 2017-12-14 14:58:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3634887635707855, "perplexity": 3661.62533400682}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948544677.45/warc/CC-MAIN-20171214144324-20171214164324-00640.warc.gz"} |
https://iwaponline.com/view-large/2942696 | Skip to Main Content
Table 6
Electrical conductivity reduction as a function of HRT
| HRT (hour) | Conductivity in supernatant of aeration tank (μS/cm) | Conductivity in permeate (μS/cm) | Total reduction rate (%) |
|---|---|---|---|
| 12 | 3,890 | 1,822 | 58.1 |
| 15 | 2,434 | 1,565 | 64.0 |
| 20 | 2,311 | 1,392 | 68 |
| 24 | 1,828 | 1,059 | 76 |
https://mathematica.stackexchange.com/questions/157290/summation-bug-in-11-2/157295 | # Summation bug in 11.2
If I sum all the positive-numbered Fourier coefficients of $\cos(x)$, I get the correct answer. If I sum the negative-numbered ones, I get a wrong answer. Splitting the sum into two parts somehow fixes the issue.
Sum[FourierCoefficient[Cos[x], x, k], {k, 1, Infinity}]
Sum[FourierCoefficient[Cos[x], x, -k], {k, 1, Infinity}]
Sum[FourierCoefficient[Cos[x], x, -k], {k, 1, 2}] + Sum[FourierCoefficient[Cos[x], x, -k], {k, 3, Infinity}]
Out:
1/2
0
1/2
• This is the wrong place to report bugs. Please report bugs directly to Wolfram Research: wolfram.com/support/contact/email. – QuantumDot Oct 7 '17 at 15:38
• @QuantumDot is correct that bugs should be reported directly to Wolfram, Inc. Nonetheless, warning StackExchange uses about bugs is a useful service, and I thank you for doing so. Be sure to attached the usual bug header to your question, along with the case number that Wolfram, Inc assigns to your report, in a day or so, after others have had a chance to comment on the problem you identified here. – bbgodfrey Oct 7 '17 at 17:36
The problem is not with Sum:
FourierCoefficient[Cos[x], x, -k]
(* 0 *)
FourierCoefficient[Cos[x], x, k]
The second code is also much faster. It suggests to me that FourierCoefficient calls Integrate in the first case and uses a short-cut in the second. In fact Integrate (from a Trace of FourierCoefficient) gives a result that is only generically correct:
Integrate[E^(I k x) Cos[x], {x, -π, π},
Assumptions -> k ∈ Integers, GenerateConditions -> False]
(* -((2 k Sin[k π])/(-1 + k^2)) *)
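As a cross-check outside Mathematica, a short NumPy sketch (periodic trapezoidal integration, which is essentially exact for trigonometric integrands) confirms the diagnosis: the generic formula agrees with $\int_{-\pi}^{\pi} e^{ikx}\cos x\,dx$ at integers $k \neq \pm 1$, where both vanish, but the true value at $k=-1$ is $\pi$ — giving the coefficient $\pi/(2\pi) = 1/2$ that the buggy sum misses:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4097)
h = x[1] - x[0]

def true_integral(k):
    # integral of e^{ikx} cos(x) over [-pi, pi]; imaginary part vanishes
    # by symmetry, so only the real part cos(kx)cos(x) is integrated
    y = np.cos(k * x) * np.cos(x)
    return float(np.sum((y[:-1] + y[1:]) / 2) * h)   # trapezoid over a full period

def generic_formula(k):
    return -2 * k * np.sin(k * np.pi) / (k**2 - 1)

for k in [-4, -3, -2, 2, 3, 4]:
    assert abs(true_integral(k) - generic_formula(k)) < 1e-9   # both ~ 0
# ...but at k = -1 the formula is 0/0, while the integral is pi:
print(true_integral(-1) / (2 * np.pi))   # ~0.5, the missing FourierCoefficient
```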
• Interesting, so it's a consequence of the fact that the symbolic value of that integral formally breaks down at $k=\pm 1$? I wonder how FourierCoefficient normally avoids this problem. – level1807 Oct 7 '17 at 23:06
• @level1807 It seems for a simple symbol k as opposed to an expression -k, that instead of integration, Cos[x] is expanded as a series in q = Exp[I x] and SeriesCoefficient is used to get the general series coefficient of order k. SeriesCoefficient does a better job than Integrate in this case. – Michael E2 Oct 8 '17 at 1:13
This is not an answer as to why it happens, but the change happened after version 7.

I went back to version 7 to be able to obtain a different result. I tried versions 11, 10, 9, and 8 and they all gave the same result as above. But in version 7:
• That's quite strange... – level1807 Oct 7 '17 at 17:37 | 2020-01-27 22:01:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8005130887031555, "perplexity": 1692.2291176167107}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-05/segments/1579251728207.68/warc/CC-MAIN-20200127205148-20200127235148-00364.warc.gz"} |
https://facwiki.cs.byu.edu/nlp/naive-bayes

### Notation
This section outlines the notation used throughout this document.
• $D_i$ - Document $i$.
• $F_j$ - Feature $j$ (word in a text document).
• $F_{i,j}$ - Feature $j$ in Document $i$.
• $C$ - Class label.
• $C_i$ - Class label for Document $i$.
• $\{c_1, \ldots , c_n\}$ - Values representing specific labels.
• $P(C=c_i| \ldots )$ - This is a notation for a conditional probability.
• $P(c_i| \ldots )$ - This is also a notation for a conditional probability.
• $P(C| \ldots )$ - This is notation for a probability distribution.
• $P(C_i| \ldots )$ - This is also notation for a probability distribution.
### Derivation of Naive Bayes for Classification
First, what we're looking for is $\hat c = argmax_c P(C_i = c| \underline{D_i})$, where $\underline{D_i}$ is the feature vector for document $i$, which is given. In other words, we have a document and its feature vector (that is, the words in the document) and we want to know the probability that the random variable $C$ takes on a specific value or label given this document and its feature vector. In English: what is the probability that this document belongs to class $c$?
Now that we know what we want, here is the derivation:
using Bayes' Theorem:
$\hat c = argmax_c \frac{P(\underline{D_i}|C_i = c)P(C_i = c)}{P(\underline{D_i})}$
Note that with $a$ a constant:
$argmax_x \frac{f(x)}{a} = argmax_x f(x)$
Therefore:
$argmax_c \frac{P(\underline{D_i}|C_i = c)P(C_i = c)}{P(\underline{D_i})} = argmax_c P(\underline{D_i}|C_i = c)P(C_i = c)$
(Note: See below for explanation of Bayesian Networks and the naive bayes assumption, which we make at this point).
By the multiplication rule
$P(\underline{D_i}|C_i = c)P(C_i = c) = P(\underline{D_i},C_i = c)$
Because of the naive bayes assumption,
$P(C_i, F_{i,1}, F_{i,2}, \ldots ,F_{i,n}) = P(C_i)P(F_{i,1}|C_i) \cdots P(F_{i,n}|C_i)$
Now, the second part of the right-hand-side of this last equation can be written in short-hand as
$\prod_{j=1}^n P(F_{i,j}|C_i = c)$
so we now have
$P(C_i = c)\prod_{j=1}^n P(F_{i,j}|C_i = c)$
$\hat c = argmax_c P(C_i = c)\prod_{j=1}^n P(F_{i,j}|C_i = c)$
Because the product of many small probabilities quickly underflows floating-point arithmetic, it is standard to take logarithms; since $\log$ is monotonic, the argmax is unchanged:
$\hat c = argmax_c log(P(C_i = c)) + \sum_{j=1}^n log(P(F_{i,j}|C_i = c))$
The MLE (maximum likelihood estimator) for the first term, $P(C_i = c)$ is computed by taking the number of documents with label $c$ divided by the total number of documents in the training set.
The MLE for the term in the summation …
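As a hedged illustration of the derivation above, here is a minimal sketch of training and classification with the log-space scoring rule $\hat c = argmax_c \log P(c) + \sum_j \log P(F_j|c)$. The toy corpus, the class labels, and the add-one (Laplace) smoothing are illustrative assumptions, not part of the derivation itself.

```python
import math
from collections import Counter, defaultdict

def train(docs):
    """docs: list of (list_of_words, label) pairs. Returns log-priors and
    per-class log-likelihoods with add-one (Laplace) smoothing."""
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)           # word_counts[c][w]
    vocab = set()
    for words, label in docs:
        word_counts[label].update(words)
        vocab.update(words)
    # MLE prior: documents with label c divided by total documents
    log_prior = {c: math.log(n / len(docs)) for c, n in class_counts.items()}
    log_like = {}
    for c in class_counts:
        total = sum(word_counts[c].values()) + len(vocab)
        log_like[c] = {w: math.log((word_counts[c][w] + 1) / total)
                       for w in vocab}
    return log_prior, log_like

def classify(words, log_prior, log_like):
    # argmax_c log P(c) + sum_j log P(F_j | c); words unseen in training
    # are skipped rather than scored
    def score(c):
        return log_prior[c] + sum(log_like[c][w]
                                  for w in words if w in log_like[c])
    return max(log_prior, key=score)

docs = [("ball goal team".split(), "sports"),
        ("vote law senate".split(), "politics")]
lp, ll = train(docs)
print(classify("team goal".split(), lp, ll))  # → sports
```

The smoothing avoids $\log 0$ for words unseen in a class; dropping the "+1" terms recovers the plain MLE counts described above.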
### Probability Theory
Bayes' Theorem relies on some probability theory, which is covered here.
#### Sample Spaces and Events
An experiment is any action or process with uncertain outcomes. The possible outcomes of an experiment are represented by a sample space. Each of the possible outcomes is called an event. Events form a set, making up the sample space for a particular experiment.
##### Sample Space
Consider the experiment of flipping a coin. This experiment is subject to uncertainty, because the outcome could be one of two possibilities. Formally, the sample space $S$ for this experiment is $S = \{H,T\}$; the only possible outcomes are a head or tail facing upward.
##### Events
An event is an outcome from (or a result of) an experiment. In the above experiment of flipping a coin, the sample space, $S = \{H,T\}$, contains two events, $H$ and $T$, heads and tails respectively.
### Conditional Probability
Conditional probability allows us to use prior knowledge about how one random variable affects another random variable. For example, a lecturer, Stan, is late to class 5% of the time when the weather is good. However, when the weather is bad, Stan is late 25% of the time. Let $A$ be the event that Stan is late, and $B$ be the event of inclement weather. Then $P(A|B)$ is read: the probability that Stan will be late given that the weather is bad; or, the probability of $A$ given $B$.
In this case, $P(A|B) = 0.25$ and $P(A|B^{'}) = 0.05$, where $B^{'}$ is the complement of $B$; that is, the weather is not bad.
Formally, conditional probability is defined as follows: :$P(A|B) = \frac{P(A \cap B)}{P(B)} \!$
This formula is interpreted as follows: the probability of $A$ given $B$ equals the probability that $A$ and $B$ both occur, divided by the probability of $B$.
In the classification setting, let $A_i$ be the $i^{th}$ class that we are interested in, and $B$ be a feature of a document.
### The Law of Total Probability
Let $A_1, \ldots , A_k$ be mutually exclusive and exhaustive events. Then for any other event $B$, $P(B) = \sum_{i=1}^k P(B|A_i)P(A_i)$.
### Bayes' Theorem
:$P(A|B) = \frac{P(B | A)\, P(A)}{P(B)} \!$
:$P(A_i|B) = \frac{P(B | A_i)\, P(A_i)}{\sum_j P(B|A_j)\,P(A_j)} \!$
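To make these two formulas concrete, here is a small numeric sketch using the Stan example above (late 5% of the time in good weather, 25% in bad weather); the 30% chance of bad weather is a hypothetical prior added purely for illustration.

```python
# Numbers from the Stan example; P(bad weather) = 0.3 is a
# hypothetical prior added for illustration.
p_late_given_bad = 0.25
p_late_given_good = 0.05
p_bad = 0.30

# Law of total probability: P(late) = sum over weather of P(late|w) P(w)
p_late = p_late_given_bad * p_bad + p_late_given_good * (1 - p_bad)

# Bayes' theorem: P(bad | late) = P(late|bad) P(bad) / P(late)
p_bad_given_late = p_late_given_bad * p_bad / p_late

print(round(p_late, 3))            # → 0.11
print(round(p_bad_given_late, 3))  # → 0.682
```

Note how the denominator of Bayes' theorem is exactly the law-of-total-probability sum over the mutually exclusive weather events.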
https://gamedev.stackexchange.com/questions/70882/how-do-i-implement-breakouts-multiball-powerup/70902

# How do I implement Breakout's “multiball” powerup?
I'm just starting out and making a breakout clone as my first game. I'm implementing all kinds of powerups, but I'm stuck on multiball. The powerup adds additional bouncing balls to the game.
So far, I've implemented powerups by just adding them to the Ball class as states. This is easy for increasing speed, making the ball sticky, changing sprites and such. When the ball is in one state it behaves in only one way.
Now, how do multiple balls fit into this? I suppose having more balls is more of a function of the game itself than the Ball, so it seems logical to make it a game state instead of a ball state. How would I do this?
• Hey, welcome to GD:SE. Glad to have you on board. This could be very open ended as a question and we try to make questions as specific to a problem as possible so others can benefit when searching later. Have you actually tried anything yet? If you have edit the question to let us know, we might be able to help you more. – Tom 'Blue' Piddock Feb 24 '14 at 9:34
I think you are stuck on trying to implement utterly dissimilar powerups using a unified system. That seems like a mistake, and it might become obvious if you thought about the idea of multiple balls outside the context of a power-up: multiple balls are simply more instances of a Ball object.
Handling multiple balls in code would not require a "gamestate", and it would not conflict with or complicate your existing "ballstate" logic. There are just more objects in play.
The only difficulty in this method is that you might need to change your update and draw logic slightly. If you have built your code around a single, global Ball instance, a good first step would be refactoring to make that code operate on a method argument. For example:
/* the old
Ball globalBall;
void Update() {
    globalBall.Move();
    globalBall.Collide();
}
*/

// the new
List<Ball> allBalls;

// this old code is useful for a single ball; leave it mostly intact.
void UpdateBall(Ball ball) {
    ball.Move();
    ball.Collide();
}

// new code to deal with multiple Ball objects
void Update() {
    foreach (var ball in allBalls) {
        UpdateBall(ball);
    }
}
The gist of it is, don't force your previous solution onto this new concept. Although sticky- and multi- might both be triggered by powerups, they are very different concepts. It's ok to create independent code paths to accomplish different game features.
https://math.stackexchange.com/questions/1580102/applying-the-mean-value-theorem-to-inequalities/1580128

# applying the mean value theorem to inequalities
So I'm asked to prove, using the mean value theorem, that $\sin(x) \ge x-\frac{x^3}{6}$ for $x>0$.
I understand that the mean value theorem works because both sides of the equation equal zero when $x=0$.
To start I set $f(x)=\sin(x)$ and $g(x)=x-\frac{x^3}{6}$ and let $F(x)=f(x)-g(x)$ and try to show that $F'(x)>0$ for all $x>0$.
Therefore, we have $F(x)=\frac{F(x)-F(0)}{x-0}(x-0)$
By the mean value theorem, we know there exists a $y>0$ such that $F'(y)x=(\cos(x)-1+\frac{x^2}{2})x$
If we apply MVT again for some $z>0$, we get $F'(z)x=\frac{F'(y)-0}{x}x=(-\sin(x)+2x)x$
It seems pretty obvious that $F'(z)x>0$ but just for safe measure I'll apply it again: There exists a $c>0$ such that $F'(c)x=(-\cos(x)+2)x$
$\therefore$ since $x>0$ and $2-\cos(x)\ge 1$ (by the bounds of cosine), $F'(c)>0$ for $c>0$
$\therefore F(x)>0\equiv f(x)>g(x)$ if $x>0$
I have my final tomorrow so any tips to make this more concrete would be appreciated
You have the right idea. Show in succession, using MVT, that for $x>0$, $x>\sin x$, $\cos x>1-x^2/2$, $\sin x> x-x^3/6$.
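For concreteness, one way the chain suggested in the answer can be written out is as follows. Each line applies the MVT to the stated auxiliary function on $[0,x]$: each function vanishes at $0$, so $F(x) = F'(c)\,x$ for some $c \in (0,x)$, and each derivative is nonnegative by the previous line's conclusion.

```latex
\begin{align*}
F_1(t) &= t - \sin t, & F_1'(t) &= 1 - \cos t \ge 0
  &&\Rightarrow\ \sin x \le x,\\
F_2(t) &= \cos t - 1 + \tfrac{t^2}{2}, & F_2'(t) &= t - \sin t \ge 0
  &&\Rightarrow\ \cos x \ge 1 - \tfrac{x^2}{2},\\
F_3(t) &= \sin t - t + \tfrac{t^3}{6}, & F_3'(t) &= \cos t - 1 + \tfrac{t^2}{2} \ge 0
  &&\Rightarrow\ \sin x \ge x - \tfrac{x^3}{6}.
\end{align*}
```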
https://kar.kent.ac.uk/85998/

# From the creative drive to the musical product : a psychoanalytic account of musical creativity
Dunn, Rosemary (2021) From the creative drive to the musical product : a psychoanalytic account of musical creativity. Doctor of Philosophy (PhD) thesis, University of Kent. (doi:10.22024/UniKent/01.02.85998) (KAR id:85998)
This thesis is the result of a life's work dedicated to re-introducing people of all ages to their inherent musicality which, more often than not, has been denied and invalidated by society's rigid adherence to the reified status of 'creativity'. The main premise sine qua non is that creativity is no more, and no less, than the re-realization of things that already exist, and that it is indeed the ubiquitous mode of Eros itself (libidinal energy). In explaining the means whereby the existents of music per se are imprinted in the minds of us all, and then why only certain people choose to manipulate these existents into musical compositions, we proceed from the universal experience of intra-uterine life. The importance to us all of sound impingement upon the fetus is explained, for it is revealed to be foundational to the genesis of the Self. However, as each one of us has different sound-experiences, the affective reactions to those experiences inform our unconscious attitudes towards music. These are revealed in our projections into the containing space of 'music'. Furthermore, it is posited that, in utero, not only are we initiated through sound-impingements into that which is dissonant to the Self (necessitating integration), but we also acquire three paradigmatic schemes of reference which thereafter inform all that we do. Our aesthetic sense is rooted here too, through tactility and even visibility. Choosing the mode d'emploi of musical composition is first dependent upon extrinsic environmental factors, but the imperative to compose arises intrinsically. The process, though, is one available to us all, as we already possess the necessary mental function. This is explicated by Freud as the dream-work.
The thesis culminates in a three-way synthesis predicated upon the dynamics of the transference and counter-transference, between the work that takes place in psychoanalysis, the tripartite teleology of a musical work from composer to performer and listener, and the musical structure known as sonata form. The first movement of Beethoven's third symphony, the Eroica, is used as exemplar. Appendices are designed to accommodate information pertaining to both disciplines, while comments are to be understood as the opinions of no-one else but myself.
https://su-plus.strathmore.edu/browse?type=subject&value=Homocyclic+%24p-%24groups
• #### Finite rings with homocyclic $p-$groups as Sylow $p$-subgroups of the group of units
(Strathmore University, 2017)
In 1960, Laszlo Fuchs posed, among other problems, the following: characterize the groups which are the groups of all units in a commutative and associative ring with identity. Though this problem still remains open, ...
https://www.risk.net/commodities/1617783/video-qa-gfis-michael-cosgrove-carbon-markets

# Video Q&A: GFI’s Michael Cosgrove on carbon markets
Q. Do you foresee federal legislation for carbon markets this year?
A. Michael Cosgrove: I think the odds of a federal cap-and-trade system this year are virtually nil and I think next year the odds are quite long also. There has been a change in public sentiment, particularly in the past year, but really over the past three years, and I think that is taking quite a lot of the pressure off politicians to actually do something about climate change in North America.
Q. How do you see the development of a national scheme feeding into a global carbon emissions trading system?
A. Clearly the best result that we could have for a cap-and-trade programme or for any kind of carbon mitigation programme would be one where the US is part of a global programme. I think that currently it will be very challenging to enact legislation to create such a programme.
The one thing that may be creating some pressure to do so, however, was the recent announcement by the US Environmental Protection Agency (EPA) that they have deemed CO2 to be a gas that they can essentially enact programmes to mitigate. Clearly no-one wants the EPA to run a national carbon programme and so the terror that that announcement inspired does actually create some motion that otherwise wouldn’t be present, pending legislation in this area.
Q. In the meantime, what future do you see for the development of the regional schemes in the US?
A. Well, the RGGI (Regional Greenhouse Gas Initiative) just had a very large auction [on March 10, 2010 of 40,612,408 allowances] and those allowances are trading at about $2.13 per ton, which is just over the floor of $1.86/ton. I think it’s quite encouraging that a large amount of allowances were soaked up and the price wasn’t immediately offered at the floor [of $1.86/ton]. There are a couple of other initiatives, for example the Chicago Climate Exchange, which Dr Richard Sandor started [in 2000]. There’s also the Climate Action Reserve (CAR) standard for carbon allowances, as well as the Western Climate Initiative and the Mid-West Climate Initiative. At present, it seems the gold standard for carbon credits in North America is CAR credits. Last year they were changing hands at around $7.50/ton and I think they were being originated at between $5/ton and $6/ton. They seem to be valued at around $5/ton now but compared to all of the other schemes, they are by far and away the most richly priced credits – and I think that reflects the expectation that in the event of a federal programme, the CAR protocol is most likely to be adopted.
Q. So until a federal scheme is put in place, will all of the regional schemes work to that protocol?
A. They’re all working to slightly different standards. I think that it will be interesting to see how these develop. At this stage, it’s very hard to determine how these regional programmes could become more fungible and in fact work together. I think it’s too soon to really have a good sense of that.
Q. How would you characterise the current mood among US emissions market players?
A. I think the momentum is fairly poor right now for climate legislation. Clearly the healthcare debate has consumed an enormous amount of energy and time and political capital and it’s still unresolved. After the resolution of the healthcare debate one way or another, it seems most likely to me that the next important consideration will be financial reform. It will leave very little time and energy for climate legislation.
Q. How would that feed into trading activity? Will it have an effect?
A. Yes, I think it will. But it is interesting – in spite of all the terrible news, that there is still quite a lot of interest on the part of large commercial and industrial concerns to, at a minimum, gain expertise in dealing with carbon offset markets. We’re finding that there is still buying interest for greenhouse gas offsets in North America.
https://www.physicsforums.com/threads/momentum-and-kinetic-energy.55314/

# Momentum and Kinetic Energy
1. Dec 4, 2004
### senseandsanity
I need help with this question:
A cardinal of mass 3.60×10-2 kg and a baseball of mass 0.141 kg have the same kinetic energy. What is the ratio of the cardinal's magnitude of momentum to the magnitude of the baseball's momentum (p_c/p_b)?
2. Dec 4, 2004
### Sirus
Consider that
$$|\vec{p}|=m|\vec{v}|$$ and
$$E_{K}=\frac{1}{2}mv^2$$
Can you figure it out from there?
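Since the reply is deliberately a hint, here is only a hedged numeric check of where those two formulas lead: equal kinetic energy $E$ means $v = \sqrt{2E/m}$, so $p = mv = \sqrt{2mE}$, and the energies cancel in the ratio.

```python
import math

# Masses from the problem statement (kg)
m_cardinal = 3.60e-2
m_baseball = 0.141

# With equal kinetic energy E, p = sqrt(2*m*E), so the E's cancel:
# p_c / p_b = sqrt(m_c / m_b)
ratio = math.sqrt(m_cardinal / m_baseball)
print(round(ratio, 3))  # → 0.505
```

So at the same kinetic energy the lighter cardinal carries roughly half the baseball's momentum.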
https://eprint.iacr.org/2022/1464

### Parallel Isogeny Path Finding with Limited Memory
##### Abstract
The security guarantees of most isogeny-based protocols rely on the computational hardness of finding an isogeny between two supersingular isogenous curves defined over a prime field $\mathbb{F}_q$ with $q$ a power of a large prime $p$. In most scenarios, the isogeny is known to be of degree $\ell^e$ for some small prime $\ell$. We call this problem the Supersingular Fixed-Degree Isogeny Path (SIPFD) problem. It is believed that the most general version of SIPFD is not solvable faster than in exponential time by classical as well as quantum attackers. In a classical setting, a meet-in-the-middle algorithm is the fastest known strategy for solving the SIPFD. However, due to its stringent memory requirements, it quickly becomes infeasible for moderately large SIPFD instances. In a practical setting, one has therefore to resort to time-memory trade-offs to instantiate attacks on the SIPFD. This is particularly true for GPU platforms, which are inherently more memory-constrained than CPU architectures. In such a setting, a van Oorschot-Wiener-based collision finding algorithm offers a better asymptotic scaling. Finding the best algorithmic choice for solving instances under a fixed prime size, memory budget and computational platform remains so far an open problem. To answer this question, we present a precise estimation of the costs of both strategies considering most recent algorithmic improvements. As a second main contribution, we substantiate our estimations via optimized software implementations of both algorithms. In this context, we provide the first optimized GPU implementation of the van Oorschot-Wiener approach for solving the SIPFD. Based on practical measurements we extrapolate the running times for solving different-sized instances. Finally, we give estimates of the costs of computing a degree-$2^{88}$ isogeny using our CUDA software library running on an NVIDIA A100 GPU server.
Category
Attacks and cryptanalysis
Publication info
Published elsewhere. INDOCRYPT 2022
Keywords
isogenies, cryptanalysis, GPU, golden collision search, meet-in-the-middle, time-memory trade-offs, implementation
Contact author(s)
emanuele bellini @ tii ae
jorge saab @ tii ae
jesus dominguez @ tii ae
andre esser @ tii ae
sorina ionica @ u-picardie fr
luis zamarripa @ tii ae
francisco rodriguez @ tii ae
monika trimoska @ ru nl
floyd zweydinger @ rub de
History
2022-10-26: approved
Short URL
https://ia.cr/2022/1464
CC BY
BibTeX
@misc{cryptoeprint:2022/1464,
author = {Emanuele Bellini and Jorge Chavez-Saab and Jesús-Javier Chi-Domínguez and Andre Esser and Sorina Ionica and Luis Rivera-Zamarripa and Francisco Rodríguez-Henríquez and Monika Trimoska and Floyd Zweydinger},
title = {Parallel Isogeny Path Finding with Limited Memory},
howpublished = {Cryptology ePrint Archive, Paper 2022/1464},
year = {2022},
note = {\url{https://eprint.iacr.org/2022/1464}},
url = {https://eprint.iacr.org/2022/1464}
}
http://pldml.icm.edu.pl/pldml/element/bwmeta1.element.bwnjournal-article-doi-10_4064-cm101-1-8
• # Article details
## Colloquium Mathematicum
2004 | 101 | 1 | 121-134
## On iterates of strong Feller operators on ordered phase spaces
### Abstract (EN)
Let (X,d) be a metric space where all closed balls are compact, with a fixed σ-finite Borel measure μ. Assume further that X is endowed with a linear order ⪯. Given a Markov (regular) operator P: L¹(μ) → L¹(μ) we discuss the asymptotic behaviour of the iterates Pⁿ. The paper deals with operators P which are Feller and such that the μ-absolutely continuous parts of the transition probabilities ${P(x,·)}_{x∈X}$ are continuous with respect to x. Under some concentration assumptions on the asymptotic transition probabilities $P^{m}(y,·)$, which also satisfy inf(supp Pf₁) ⪯ inf(supp Pf₂) whenever inf(supp f₁) ⪯ inf(supp f₂), we prove that the iterates Pⁿ converge in the weak* operator topology.
121-134
Published
2004
### Authors
author
• Department of Mathematics, Gdańsk University of Technology, Narutowicza 11/12, 80-952 Gdańsk, Poland
https://www.physicsforums.com/threads/derivation-of-formula-for-pump-power.540193/

Derivation of formula for pump power
1. boshank20
3
Hi
I was given the following formula for to calculate the power of a centrifugal pump:
P = ρ * g * Q * H
i.e. Power = Density * acceleration due to gravity * volumetric flow rate * total head
I have found websites that state this formula but I haven't been able to find anywhere that explains how the formula was derived. Could anyone point me in the right direction?
Thanks
2. W R-P
26
Well we've got a vertical outlet, moving the fluid upwards against gravity by a certain height, H(the head).
SO we can say the pump is doing work against gravity
ie W_pump = Force x distance
= weight of fluid x head
= m g H
power is the rate of doing work, so W_pump/t = mgH/t = (m/t) x g x H
= (mass flow rate) x g x H
= (density of fluid x volumetric flow rate) x g x H
Hope this helps.
3. boshank20
3
Ah I didn't realise it came from playing around with mgh. Thanks for the help
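The derivation in the thread can be put into a short script. The water-pump numbers below are made up purely for illustration; all quantities are SI.

```python
def pump_power(density, g, flow_rate, head):
    """Hydraulic pump power P = rho * g * Q * H, in watts.

    density   -- fluid density rho in kg/m^3
    g         -- gravitational acceleration in m/s^2
    flow_rate -- volumetric flow rate Q in m^3/s
    head      -- total head H in metres
    """
    return density * g * flow_rate * head

# Hypothetical example: water (1000 kg/m^3) pumped at 0.05 m^3/s
# against a 20 m head
print(pump_power(1000.0, 9.81, 0.05, 20.0))  # → 9810.0 W
```

This is the hydraulic (useful) power; a real centrifugal pump draws more shaft power because its efficiency is below 100%.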
https://labs.tib.eu/arxiv/?author=M.%20Schuster | • ### The Large Enriched Germanium Experiment for Neutrinoless Double Beta Decay (LEGEND)(1709.01980)
Sept. 6, 2017 hep-ex, nucl-ex, physics.ins-det
The observation of neutrinoless double-beta decay (0${\nu}{\beta}{\beta}$) would show that lepton number is violated, reveal that neutrinos are Majorana particles, and provide information on neutrino mass. A discovery-capable experiment covering the inverted ordering region, with effective Majorana neutrino masses of 15 - 50 meV, will require a tonne-scale experiment with excellent energy resolution and extremely low backgrounds, at the level of $\sim$0.1 count /(FWHM$\cdot$t$\cdot$yr) in the region of the signal. The current generation $^{76}$Ge experiments GERDA and the MAJORANA DEMONSTRATOR utilizing high purity Germanium detectors with an intrinsic energy resolution of 0.12%, have achieved the lowest backgrounds by over an order of magnitude in the 0${\nu}{\beta}{\beta}$ signal region of all 0${\nu}{\beta}{\beta}$ experiments. Building on this success, the LEGEND collaboration has been formed to pursue a tonne-scale $^{76}$Ge experiment. The collaboration aims to develop a phased 0${\nu}{\beta}{\beta}$ experimental program with discovery potential at a half-life approaching or at $10^{28}$ years, using existing resources as appropriate to expedite physics results.
• ### Resonant plasmon scattering by discrete breathers in Josephson junction ladders(cond-mat/0412727)
Dec. 28, 2004 cond-mat.supr-con
We study the resonant scattering of plasmons (linear waves) by discrete breather excitations in Josephson junction ladders. We predict the existence of Fano resonances, and find them by computing the resonant vanishing of the transmission coefficient. We propose an experimental setup of detecting these resonances, and conduct numerical simulations which demonstrate the possibility to observe Fano resonances in the plasmon scattering by discrete breathers in Josephson junction ladders.
• ### Spontaneous creation of discrete breathers in Josephson arrays(cond-mat/0309305)
Sept. 12, 2003 cond-mat
We report on the experimental generation of discrete breather states (intrinsic localized modes) in frustrated Josephson arrays. Our experiments indicate the formation of discrete breathers during the transition from the static to the dynamic (whirling) system state, induced by a uniform external current. Moreover, spatially extended resonant states, driven by a uniform current, are observed to evolve into localized breather states. Experiments were performed on single Josephson plaquettes as well as open-ended Josephson ladders with 10 and 20 cells. We interpret the breather formation as the result of the penetration of vortices into the system.
• ### Incommensurate dynamics of resonant breathers in Josephson junction ladders(cond-mat/0111460)
Nov. 23, 2001 cond-mat.supr-con
We present theoretical and experimental studies of resonant localized resistive states in a Josephson junction ladder. These complex breather states are obtained by tuning the breather frequency into the upper band of linear electromagnetic oscillations of the ladder. Their prominent feature is the appearance of resonant steps in the current-voltage (I-V) characteristics. We have found the resonant breather-like states displaying incommensurate dynamics. Numerical simulations show that these incommensurate resonant breathers persist for very low values of damping. Qualitatively similar incommensurate breather states are observed in experiments performed with Nb-based Josephson ladders. We explain the appearance of these states with the help of resonance-induced hysteresis features in the I-V dependence. | 2021-01-17 21:56:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5002045035362244, "perplexity": 2907.5453858624714}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703513194.17/warc/CC-MAIN-20210117205246-20210117235246-00121.warc.gz"} |
http://mathhelpforum.com/geometry/2703-radius-arc-print.html | • April 26th 2006, 06:19 AM
If the arc length s and the distance h from the centre point of the associated chord to the arc is known, how do you calculate the radius?
Thank You
• April 26th 2006, 09:16 AM
CaptainBlack
Quote:
If the arc length s and the distance h from the centre point of the associated chord to the arc is known, how do you calculate the radius?
Thank You
I don't think you will find a closed form solution for this problem - though
I am prepared to be proven wrong.
RonL
• April 28th 2006, 01:12 PM
ThePerfectHacker
By the theorem of chord in a circle we have that,
$n^2=r(2r-h)$
But, $n=r\sin(s/2r)$
Thus, we have,
$r^2\sin(s/2r)=r(2r-h)$
Thus,
$r\sin(s/2r)=2r-h$
----------------------
If you know calculus
Thus, you need to find the zero's of the function,
$f(x)=x\sin(s/2x)-2x+h$
Use Newton's method
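As the later posts in the thread point out, the chord relation needs correcting, and either way the resulting equation is transcendental, so a root finder is required. An equivalent form of the corrected relations is r(1 - cos(s/(2r))) = h, since the sagitta of an arc of length s and radius r is h = r - r cos(s/(2r)). A minimal sketch in Python, using bisection rather than Newton's method for robustness (the bracket choices are my own assumptions; valid for arcs up to a semicircle, where the left-hand side is monotone in r):

```python
import math

def radius_from_arc(s, h, tol=1e-12):
    # Solve r*(1 - cos(s/(2r))) = h for the radius r.
    # s: arc length, h: distance from chord midpoint to arc (sagitta).
    # Assumes the arc is at most a semicircle, i.e. 0 < h <= s/pi.
    f = lambda r: r * (1.0 - math.cos(s / (2.0 * r))) - h
    lo = s / math.pi              # semicircle limit: s/(2r) = pi/2
    hi = s * s / (8.0 * h) + h    # flat-arc estimate h ~ s^2/(8r), padded
    while hi - lo > tol * hi:     # f decreases in r on this bracket, so bisect
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:          # sagitta at mid too large -> radius too small
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, a quarter-circle arc of radius 5 has s = 5*pi/2 and h = 5*(1 - cos(pi/4)); feeding those values back in recovers r = 5.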
• April 28th 2006, 01:18 PM
CaptainBlack
Quote:
Originally Posted by ThePerfectHacker
By the theorem of chord in a circle we have that,
$n^2=r(2r-h)$
Just as well that I checked my note on this problem :D
The intersection chord theorem would give in this case:
$n^2=h(2r-h)$
surely?
RonL
• April 29th 2006, 06:49 PM
ThePerfectHacker
Quote:
Originally Posted by CaptainBlack
Just as well that I checked my note on this problem :D
The intersection chord theorem would give in this case:
$n^2=h(2r-h)$
surely?
RonL
One thing that confuses me is how an immortal (me) makes such a mistake :mad:
• April 29th 2006, 09:21 PM
earboth
Quote:
Originally Posted by ThePerfectHacker
By the theorem of chord in a circle we have that,
$n^2=r(2r-h)$
But, $n=r\sin(s/2r)$
Thus, we have,
$r^2\sin(s/2r)=r(2r-h)$
...
Hello,
I'm a little bit confused: When you plug in the value of n, shouldn't be there a squared sine value too?:
$r^2\left(\sin(s/2r)\right)^2=r(2r-h)$
Greetings
EB
• April 29th 2006, 09:42 PM
CaptainBlack
Quote:
Originally Posted by earboth
Hello,
I'm a little bit confused: When you plug in the value of n, shouldn't be there a squared sine value too?:
$r^2\left(\sin(s/2r)\right)^2=r(2r-h)$
Greetings
EB
There should be, now how did I miss that (its in my notes)
RonL
• April 30th 2006, 04:36 AM
earboth
Quote:
Originally Posted by ThePerfectHacker
One thing that confuses me is how an immortal (me) makes such a mistake :mad: | 2014-09-18 08:55:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 13, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8817141652107239, "perplexity": 1408.9542495670926}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657126053.45/warc/CC-MAIN-20140914011206-00173-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
http://stackoverflow.com/questions/7643837/matlab-function-solving-an-error/7643871 | # MATLAB Function (Solving an Error)
I have one file with the following code:
function fx=ff(x)
fx=x;
I have another file with the following code:
function g = LaplaceTransform(s,N)
g = ff(x)*exp(-s*x);
a=0;
b=1;
If=0;
h=(b-a)/N;
If=If+g(a)*h/2+g(b)*h/2;
for i=1:(N-1)
If=If+g(a+h*i)*h;
end;
If
Whenever I run the second file, I get the following error:
Undefined function or variable 'x'.
What I am trying to do is integrate the function g between 0 and 1 using trapezoidal approximations. However, I am unsure how to deal with x and that is clearly causing problems as can be seen with the error.
Any help would be great. Thanks.
-
Looks like what you're trying to do is create a function in the variable g. That is, you want the first line to mean,
"Let g(x) be a function that is calculated like this: ff(x)*exp(-s*x)",
rather than
"calculate the value of ff(x)*exp(-s*x) and put the result in g".
### Solution
You can create a subfunction for this
function result = g(x)
result = ff(x) * exp(-s * x);
end
Or you can create an anonymous function
g = @(x) ff(x) * exp(-s * x);
Then you can use g(a), g(b), etc to calculate what you want.
-
Ah, I guess that is probably what he meant. The function call/array access overloading makes it harder to infer intention... – bnaul Oct 4 '11 at 7:12
Also, you want to return If not g. – Nzbuu Oct 4 '11 at 9:28
You can also use the TRAPZ function to perform trapezoidal numerical integration. Here is an example:
%# parameters
a = 0; b = 1;
N = 100; s = 1;
f = @(x) x;
%# integration
X = linspace(a,b,N);
Y = f(X).*exp(-s*X);
If = trapz(X,Y) %# value returned: 0.26423
%# plot
area(X,Y, 'FaceColor',[.5 .8 .9], 'EdgeColor','b', 'LineWidth',2)
grid on, set(gca, 'Layer','top', 'XLim',[a-0.5 b+0.5])
title('$\int_0^1 f(x) e^{-sx} \,dx$', 'Interpreter','latex', 'FontSize',14)
-
The error message here is about as self-explanatory as it gets. You aren't defining a variable called x, so when you reference it on the first line of your function, MATLAB doesn't know what to use. You need to either define it in the function before referencing it, pass it into the function, or define it somewhere further up the stack so that it will be accessible when you call LaplaceTransform.
Since you're trying to numerically integrate with respect to x, I'm guessing you want x to take on values evenly spaced on your domain [0,1]. You could accomplish this using e.g.
x = linspace(a,b,N);
EDIT: There are a couple of other problems here: first, when you define g, you need to use .* instead of * to multiply the elements in the arrays (by default MATLAB interprets multiplication as matrix multiplication). Second, your calls g(a) and g(b) are treating g as a function instead of as an array of function values. This is something that takes some getting used to in MATLAB; instead of g(a), you really want the first element of the vector g, which is given by g(1). Similarly, instead of g(b), you want the last element of g, which is given by g(length(g)) or g(end). If this doesn't make sense, I'd suggest looking at a basic MATLAB tutorial to get a handle on how vectors and functions are used.
- | 2014-12-21 01:37:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6342828869819641, "perplexity": 1138.1130670257573}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802770554.119/warc/CC-MAIN-20141217075250-00073-ip-10-231-17-201.ec2.internal.warc.gz"} |
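Putting the answers' fixes together (g defined as a function rather than a value, endpoints weighted by h/2, interior points weighted by h), the intended computation can be sketched outside MATLAB as well. A hypothetical Python equivalent with the default f(x) = x, which for s = 1 should approach the exact value 1 - 2/e ≈ 0.26424 as N grows:

```python
import math

def laplace_transform(s, N, f=lambda x: x):
    # Composite trapezoid rule for integral_0^1 f(x) * exp(-s*x) dx
    a, b = 0.0, 1.0
    g = lambda x: f(x) * math.exp(-s * x)  # g is a function of x, not a value
    h = (b - a) / N
    total = (g(a) + g(b)) * h / 2.0        # endpoint terms
    for i in range(1, N):                  # interior points
        total += g(a + i * h) * h
    return total

approx = laplace_transform(1.0, 1000)      # -> about 0.2642411
```

This matches the value 0.26423 that the trapz-based answer reports for N = 100; the trapezoid error shrinks like 1/N^2.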
https://math.stackexchange.com/questions/2910038/normal-vector-from-transformation-matrix | # Normal vector from transformation matrix
as you are probably guessing by the title, I'm not a math guy. The program I use for work has a matrix which describes the position of a plane in 3D space. The center point of that card/plane can be easily read from the matrix. But the matrix also contains information regarding rotation (which I am guessing is in radians) and scale. I wondered if you could calculate the normal vector for the center point using this matrix data (maybe from the rotation?) to describe the plane.
The next question would be how to calculate the 3D positions of the 4 corner points of a square in this plane, given a user-set distance from the center.
Thanks for taking the time explaining. picture of matrix here
• But it's a $4\times 4$ matrix.. – Berci Sep 8 '18 at 20:40
• You should clarify how this transformation represents the position of the object. Presumably, it’s a transformation from some standard position. Apply the matrix to the normal of the plane in this standard position. If this standard normal is a unit vector parallel to one of the coordinate axes, then the transformed normal will just be the corresponding column of your matrix. – amd Sep 8 '18 at 21:19 | 2019-10-22 09:20:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5227070450782776, "perplexity": 291.09171978977236}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570987813307.73/warc/CC-MAIN-20191022081307-20191022104807-00015.warc.gz"} |
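Following the second comment: if the matrix uses the common column-vector convention, with translation in the last column and the plane's local z axis as its normal, the normal and corner points fall out directly. Both the convention and the function name below are assumptions for illustration; the actual program's convention may be transposed:

```python
def plane_from_transform(M, half):
    # M: 4x4 nested-list transform, column-vector convention
    # (rotation/scale in the upper-left 3x3, translation in the last column).
    # half: half the square's side length (scale by 1/sqrt(2) if the user's
    # distance is measured center-to-corner instead of center-to-edge).
    col = lambda j: [M[i][j] for i in range(3)]
    norm = lambda v: sum(c * c for c in v) ** 0.5
    unit = lambda v: [c / norm(v) for c in v]
    x, y = unit(col(0)), unit(col(1))   # in-plane axes with scale removed
    # normal = x cross y (unit length, since x and y are orthonormal)
    n = [x[1]*y[2] - x[2]*y[1], x[2]*y[0] - x[0]*y[2], x[0]*y[1] - x[1]*y[0]]
    c = col(3)                          # center point of the plane/card
    corners = [[c[i] + sx * half * x[i] + sy * half * y[i] for i in range(3)]
               for sx in (-1, 1) for sy in (-1, 1)]
    return n, corners
```

Normalizing the first two columns strips out any scale baked into the matrix, so the returned normal is a unit vector regardless of the card's size.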
https://mathematica.stackexchange.com/questions/89488/incorrect-display-on-linux-hidpi | Incorrect Display on Linux (+HiDPI)
I have Linux on a HiDPI laptop, and after first installing Mathematica, the default font size is far too small:
After searching for a few solutions, this can be fixed by either changing the magnification to 200% (in Notebook Options -> Display Options -> Magnification), or alternatively doubling the screen resolution in Formatting Options -> Font Options -> FontProperties -> ScreenResolution. (They seem, as far as I can tell, equivalent?)
Anyway, whichever method I end up choosing, it ends up magnifying another issue: the horizontal width of elements is smaller than the display width of these elements:
As you can see, some elements start running into each other, or alternatively get cropped by other elements. This issue is already present (though to a lesser extent) in the non-magnified display (you can see that 't' of Dataset is cropped).
In this particular picture, it is merely an inconvenience, but I am regularly finding a few instances where it is much more severely impacting the legibility of the output. For example:
Has anyone else experienced this issue? And how can I fix it, or at least improve legibility?
• Have you reported this issue to [email protected]? If not, you should do it. – halirutan Jul 30 '15 at 1:05
• I did not originally, but I have contacted them now. They may answer here, or if they answer by email, I shall post their response here. – JP-Ellis Jul 30 '15 at 1:57
• I don't see the clipping and collisions when using Magnification. Are you certain that you do, and not only when using FontProperties -> ScreenResolution? The latter only affects fonts, if I recall correctly, therefore the bounding areas do not increase and clipping can occur. – Mr.Wizard Jul 30 '15 at 7:08
• @Mr.Wizard, the first picture already shows some of the clipping and collisions, and that screenshot is with the default settings (that is, after removing ~/.mathematica completely). I have tried both ways of magnifying, and both seem equivalent in that they both produce clipping. – JP-Ellis Jul 30 '15 at 10:45
• Any updates on this? Would be very happy if this was improved in version 11. – rgrinberg Aug 15 '16 at 17:19 | 2020-02-26 17:08:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5019435882568359, "perplexity": 1383.4942133828367}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875146414.42/warc/CC-MAIN-20200226150200-20200226180200-00369.warc.gz"} |
http://balaio.com.br/naifa-appraiser-cayckmy/4c4dbd-limit-product-rule-proof | We want to prove that h(x) = f(x)g(x) is differentiable at x and that its derivative, h'(x), is given by f'(x)g(x) + f(x)g'(x). Recall the definition of the derivative:
#lim_(h to 0) (f(x+h)-f(x))/(h) = f^(prime)(x)#
The limit laws are simple formulas that help us evaluate limits precisely:
1) The limit of a sum is equal to the sum of the limits.
2) The limit of a product is equal to the product of the limits.
3) The limit of a quotient is equal to the quotient of the limits, provided the limit of the denominator is not 0.
The proofs of the generic limit laws depend on the definition of the limit: lim x → c f(x) = L means that for every ε > 0 there exists a δ > 0 such that for every x, 0 < |x − c| < δ implies |f(x) − L| < ε.
First, recall that the product fg of the functions f and g is defined as (fg)(x) = f(x)g(x). Suppose that f and g are each differentiable at x. By the definition of the derivative,
#(fg)^(prime)(x) = lim_(h to 0) ((fg)(x+h)-(fg)(x))/(h) = lim_(h to 0) (f(x+h)g(x+h)-f(x)g(x))/(h)#
Adding and subtracting the term f(x+h)g(x) in the numerator (which changes nothing, since f(x+h)g(x)-f(x+h)g(x)=0) permits its factoring, so we can rewrite this as
#lim_(h to 0) 1/h (f(x+h)[g(x+h)-g(x)]+g(x)[f(x+h)-f(x)])#
Using the property that the limit of a sum is the sum of the limits, we get:
#lim_(h to 0) f(x+h)(g(x+h)-g(x))/(h) + lim_(h to 0) g(x)(f(x+h)-f(x))/(h)#
which gives us the product rule
#(fg)^(prime)(x) = f(x)g^(prime)(x)+g(x)f^(prime)(x),#
since:
#lim_(h to 0) f(x+h) = f(x),#
#lim_(h to 0) (g(x+h)-g(x))/(h) = g^(prime)(x),#
#lim_(h to 0) g(x) = g(x),#
#lim_(h to 0) (f(x+h)-f(x))/(h) = f^(prime)(x).#
The key argument here is the next-to-last step, where we have used the fact that both f and g are differentiable, hence f is continuous at x (so lim_(h to 0) f(x+h) = f(x)), and the limit can be distributed across the sum to give the desired equality.
The proof of the quotient rule is very similar to the proof of the product rule, so it is omitted here; it can be obtained by thinking of the quotient f(x)/g(x) as the product f(x)(g(x))^(-1) and using the product rule.
Note that the "rule of product" in probability is a different statement: it is a guideline as to when probabilities can be multiplied to produce another meaningful probability, specifically to find the probability of an intersection of events, and an important requirement is that the events are independent.
References: From Wikibooks, open books for an open world: https://en.wikibooks.org/w/index.php?title=Calculus/Proofs_of_Some_Basic_Limit_Rules&oldid=3654169. This page was last edited on 20 January 2020, at 13:46. Text is available under the Creative Commons Attribution-ShareAlike License.
Subtract constants from limits: in order to prove constants from limits: in order prove! We move on to the sum and di erence rules limit property, we need a time out for babies... Simple like the sum and di erence rules laughing babies the de of. Are simple formulas that help us evaluate limits precisely converging to L 1, L 2 ∈ R,.... ’ ll need to use in computing limits 1 ) the limit of! Sum of the limits probabilities can be multiplied to produce another meaningful probability if you 're a... Using the epsilon-delta definition for a limit in this course, and a and b are sequences to... Erence rules sum is equal limit product rule proof the proof of the product rule finding... Laughing babies for finding derivatives in the domain of f f and g g that to 1... Filter, please make sure that the derivative and rewrite the numerator a little to.! The chain rule limits precisely 're behind a web filter, please make sure that the derivative of constant! Be multiplied to produce another meaningful probability resources on our website proof Suppose. 
| 2021-07-31 08:33:51 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9352283477783203, "perplexity": 675.1078931003299}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154085.58/warc/CC-MAIN-20210731074335-20210731104335-00654.warc.gz"} |
https://www.ocean-sci.net/15/1023/2019/
Ocean Science An interactive open-access journal of the European Geosciences Union
Ocean Sci., 15, 1023–1032, 2019
https://doi.org/10.5194/os-15-1023-2019
Research article 02 Aug 2019
# Using canonical correlation analysis to produce dynamically based and highly efficient statistical observation operators
Eric Jansen1, Sam Pimentel3, Wang-Hung Tse3, Dimitra Denaxa4, Gerasimos Korres4, Isabelle Mirouze2, and Andrea Storto2
• 1Ocean Predictions and Applications (OPA) division, Euro-Mediterranean Center on Climate Change (CMCC), Lecce, Italy
• 2Ocean Modelling and Data Assimilation (ODA) division, Euro-Mediterranean Center on Climate Change (CMCC), Bologna, Italy
• 3Trinity Western University (TWU), Langley, BC, Canada
• 4Hellenic Centre for Marine Research (HCMR), Athens, Greece
Correspondence: Eric Jansen ([email protected])
Abstract
Observation operators (OOs) are a central component of any data assimilation system. As they project the state variables of a numerical model into the space of the observations, they also provide an ideal opportunity to correct for effects that are not described or are insufficiently described by the model. In such cases a dynamical OO, an OO that interfaces to a secondary and more specialised model, often provides the best results. However, given the large number of observations to be assimilated in a typical atmospheric or oceanographic model, the computational resources needed for using a fully dynamical OO mean that this option is usually not feasible. This paper presents a method, based on canonical correlation analysis (CCA), that can be used to generate highly efficient statistical OOs that are based on a dynamical model. These OOs can provide an approximation to the dynamical model at a fraction of the computational cost.
One possible application of such an OO is the modelling of the diurnal cycle of sea surface temperature (SST) in ocean general circulation models (OGCMs). Satellites that measure SST measure the temperature of the thin uppermost layer of the ocean. This layer is strongly affected by atmospheric conditions, and its temperature can differ significantly from the water below. This causes a discrepancy between the SST measurements and the upper layer of the OGCM, which typically has a thickness of around 1 m. The CCA OO method is used to parameterise the diurnal cycle of SST. The CCA OO is based on an input dataset from the General Ocean Turbulence Model (GOTM), a high-resolution water column model that has been specifically tuned for this purpose. The parameterisations of the CCA OO are found to be in good agreement with the results from the GOTM and improve upon existing parameterisations, showing the potential of this method for use in data assimilation systems.
1 Introduction
Data assimilation (DA) strives to improve the forecast skill of a numerical model by combining the model with observations. Observations are incorporated into the model by applying a series of corrections to the internal state of the model. As the state variables of a numerical model are usually not observed directly, this procedure requires an observation operator (OO) to project the model state variables onto the variable that is observed. The difference between the observation and the model prediction, the so-called innovation, forms the basis for calculating the correction to the model state. The accuracy of the OO is paramount in this process: any bias in the projection will lead to a bias in the innovation and therefore result in a biased correction to the model state. For this reason, bias correction procedures have been built considering not only systematic errors in observations but also in observation operators (see e.g. , for satellite radiance data).
Many different types of OO exist. In its simplest form, an OO could just select one of the state variables in a point near the observation or, perhaps, perform an interpolation. More complex OOs may include corrections for processes that influence the observation but are not modelled or are insufficiently modelled. Ultimately, one could even consider a dynamical OO that wraps a second numerical model to locally refine the results of the parent model. The latter solution may very well provide the most accurate results, but the vast number of observations that need to be assimilated in a typical atmospheric or oceanographic model means that this approach would require a prohibitive amount of computing resources. This limits OOs in most practical applications to relatively simple parameterisations in terms of the model state variables. Moreover, variational data assimilation requires observation operators to be linearised around the background within the inner loops (tangent-linear approximation). This translates into a need to construct OOs that can be formally and practically differentiated.
This paper presents a method of parameterising the results of a specialised model in such a way that it can be efficiently used within an OO. The parameterisation is based on canonical correlation analysis (CCA), a well-established mathematical method for finding cross-correlations between datasets. A new pseudo-dynamical OO is generated using the canonical correlation between the inputs and outputs of the specialised model on a large and representative dataset. Once this correlation has been calculated, the application of the pseudo-dynamical OO involves only a matrix multiplication that can be performed at a fraction of the computational cost of the dynamical OO. A similar method has been used previously to build reduced-order OOs in atmospheric data assimilation .
This work is part of the SOSSTA (Statistical-dynamical observation Operator for SST data Assimilation) project, funded by the EU Copernicus Marine Environment Monitoring Service (CMEMS) through the Service Evolution grants. The aim of SOSSTA is to formulate an efficient OO for sea surface temperature (SST) DA that accounts for the diurnal variability of the ocean skin temperature. The results of the project are presented in multiple publications. The modelling of the diurnal cycle of SST is described in , while the current paper focuses on the method for constructing the OO. The project includes pilot studies in the Mediterranean Sea and the Aegean Sea that will be described in forthcoming publications.
The paper is organised as follows: Sect. 2 provides a quick review of CCA; Sect. 3 discusses how CCA can be used to construct the OO matrix; Sect. 4 applies the CCA OO to the modelling of satellite sea surface temperature (SST) measurements in oceanographic models; and Sect. 5 discusses the performance of the method and other possible applications. Conclusions are presented in Sect. 6.
2 The CCA method
CCA is a method to find cross-correlations between two datasets X and Y. The datasets are considered to be matrices structured such that the columns represent different variables and the rows represent the measurements of these variables. CCA then aims to find transformation matrices A and B that transform the anomaly of the variables of X and Y, denoted $\mathbf{X}'$ and $\mathbf{Y}'$, into the set of canonical variables F and G:
$$\mathbf{F} = \mathbf{X}'\mathbf{A} \qquad \mathbf{G} = \mathbf{Y}'\mathbf{B}. \tag{1}$$
The structure of F and G matches that of X and Y. The canonical variables are constructed such that the variable $F_i$ is maximally correlated with the variable $G_i$. At the same time, both $F_i$ and $G_i$ are uncorrelated with $F_j$ and $G_j$ for $i \ne j$; therefore, each additional canonical variable describes the maximal remaining correlation between the two datasets. The number of canonical variables that can be obtained with this procedure is limited to the smallest number of variables in X or Y.
The calculation of the matrices A and B is relatively straightforward using the algorithm of . Writing the requirements outlined above in equation form yields
$$\mathbf{F}^T\mathbf{F} = \mathbf{G}^T\mathbf{G} = \mathbf{I}, \tag{2a}$$
$$\mathbf{F}^T\mathbf{G} = \mathbf{D}, \tag{2b}$$
with I the unit matrix and D a diagonal matrix. The algorithm uses a QR decomposition to decompose both $\mathbf{X}'$ and $\mathbf{Y}'$ into an orthogonal matrix Q and an upper-triangular matrix R:
$$\mathbf{X}' = \mathbf{Q}_x\mathbf{R}_x \qquad \mathbf{Y}' = \mathbf{Q}_y\mathbf{R}_y. \tag{3}$$
The algorithm proceeds by applying a singular value decomposition (SVD) on the product $\mathbf{Q}_x^T\mathbf{Q}_y$:

$$\mathbf{Q}_x^T\mathbf{Q}_y = \mathbf{U}\mathbf{S}\mathbf{V}^T. \tag{4}$$
By trying the ansatz,
$$\mathbf{A} \equiv \mathbf{R}_x^{-1}\mathbf{U} \qquad \mathbf{B} \equiv \mathbf{R}_y^{-1}\mathbf{V}, \tag{5}$$
the orthonormality requirement of Eq. (2a) becomes
$$\begin{aligned}
\mathbf{F}^T\mathbf{F} &= \mathbf{A}^T\mathbf{X}'^T\mathbf{X}'\mathbf{A} \\
&= \left(\mathbf{U}^T(\mathbf{R}_x^{-1})^T\right)\left(\mathbf{R}_x^T\mathbf{Q}_x^T\right)\left(\mathbf{Q}_x\mathbf{R}_x\right)\left(\mathbf{R}_x^{-1}\mathbf{U}\right) \\
&= \mathbf{I},
\end{aligned} \tag{6}$$
and an analogous result follows for GTG.
The orthogonality requirement of Eq. (2b) becomes
$$\begin{aligned}
\mathbf{D} &= \mathbf{F}^T\mathbf{G} = \mathbf{A}^T\mathbf{X}'^T\mathbf{Y}'\mathbf{B} \\
&= \left(\mathbf{U}^T(\mathbf{R}_x^{-1})^T\right)\left(\mathbf{R}_x^T\mathbf{Q}_x^T\right)\left(\mathbf{Q}_y\mathbf{R}_y\right)\left(\mathbf{R}_y^{-1}\mathbf{V}\right) \\
&= \mathbf{U}^T\left(\mathbf{U}\mathbf{S}\mathbf{V}^T\right)\mathbf{V} = \mathbf{S}.
\end{aligned} \tag{7}$$
Therefore, the ansatz of Eq. (5) is a valid solution for the matrices A and B. Moreover, by counting the number of degrees of freedom in these matrices and the number of constraints provided by Eq. (2), it can be shown that all solutions are permutations of Eq. (5) (Press, 2011). The canonical basis is therefore uniquely defined. In the case that X and Y contain different numbers of variables Nx and Ny, the SVD of Eq. (4) selects the N largest correlations, with $N = \min(N_x, N_y)$.
As QR decomposition and SVD are common matrix operations that are efficiently implemented in most numerical libraries, this algorithm is straightforward to implement in most programming languages.
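As a concrete illustration, the QR + SVD procedure of Eqs. (3)–(5) can be sketched in a few lines of NumPy. This is a minimal sketch, not code from the paper; the function name and the choice of NumPy are assumptions of this example:

```python
import numpy as np

def cca(X, Y):
    """Canonical correlation analysis via QR decomposition and SVD.

    X, Y: data matrices with one row per measurement and one column per
    variable. Returns (A, B, s) such that F = Xc @ A and G = Yc @ B
    (Xc, Yc the anomalies) satisfy F^T F = G^T G = I and F^T G = diag(s).
    """
    Xc = X - X.mean(axis=0)          # anomaly X'
    Yc = Y - Y.mean(axis=0)          # anomaly Y'
    Qx, Rx = np.linalg.qr(Xc)        # Eq. (3)
    Qy, Ry = np.linalg.qr(Yc)
    # Eq. (4); full_matrices=False keeps only the min(Nx, Ny) pairs
    U, s, Vt = np.linalg.svd(Qx.T @ Qy, full_matrices=False)
    A = np.linalg.solve(Rx, U)       # A = Rx^{-1} U, Eq. (5)
    B = np.linalg.solve(Ry, Vt.T)    # B = Ry^{-1} V
    return A, B, s
```

The SVD returns the canonical correlations `s` in descending order, so the first pair of canonical variables always carries the strongest correlation between the two datasets.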
3 Using CCA to construct an OO
The CCA method can be used to construct an OO. Let X be a set of (possibly) relevant model state variables and Y the corresponding observation values. Here Y could be obtained from a specialised model but also from a historical dataset of real observations. Applying the algorithm of Sect. 2 yields the matrices A, B, and D. The first two convert the mean-subtracted model states $\mathbf{X}'$ and observation values $\mathbf{Y}'$ into their canonical counterparts F and G. The diagonal matrix D holds for each pair of canonical variables i the best-fit slope of the correlation: $\mathbf{D}_{ii} = \mathrm{d}\mathbf{G}_i/\mathrm{d}\mathbf{F}_i$.
Assuming that $N_x \ge N_y$ – i.e. the number of model state variables is at least equal to the number of observed variables – it is possible to calculate $\mathbf{Y}'$ from $\mathbf{X}'$ by passing through canonical space and applying the fitted slope D,
$$\mathbf{Y}' = \mathbf{X}'\mathbf{A}\mathbf{D}\mathbf{B}^{-1} \equiv \mathbf{X}'\mathbf{M}, \tag{8}$$
defining the CCA OO matrix,
$$\mathbf{M} \equiv \mathbf{A}\mathbf{D}\mathbf{B}^{-1}, \tag{9}$$
of size Nx×Ny. As the CCA considers only the anomaly of X and Y, an additional offset term needs to be considered to accommodate the mean values of X and Y in the input dataset. However, the mean values of X and Y can be combined by applying the matrix M:
$$\begin{aligned}
\mathbf{Y} - \bar{\mathbf{Y}} &= \left(\mathbf{X} - \bar{\mathbf{X}}\right)\mathbf{M} \\
\mathbf{Y} &= \mathbf{X}\mathbf{M} + \boldsymbol{K},
\end{aligned} \tag{10}$$
with
$$\boldsymbol{K} \equiv \bar{\mathbf{Y}} - \bar{\mathbf{X}}\mathbf{M}, \tag{11}$$
a combined offset vector of length Ny.
During the training phase of the CCA OO, the datasets X and Y are used to calculate the matrix M and the offset K. Once computed, they can be used to form an observation operator H that transforms a state x as
$$\mathrm{H}(\boldsymbol{x}) = \boldsymbol{x}\mathbf{M} + \boldsymbol{K}. \tag{12}$$
Furthermore, the tangent-linear approximation used in variational DA schemes requires that
$$\mathrm{H}(\boldsymbol{x}) \sim \mathrm{H}\left(\boldsymbol{x}^\mathrm{b}\right) + \mathbf{H}'\,\mathrm{d}\boldsymbol{x}, \tag{13}$$
where $\mathbf{H}'$ is the tangent-linear version of the OO, $\boldsymbol{x}^\mathrm{b}$ the background state, and $\mathrm{d}\boldsymbol{x}$ the deviation from the background. The CCA OO is straightforward to implement in this scheme, since for $\mathbf{H}'$ and its adjoint $\mathbf{H}'^T$ it follows that
$$\mathbf{H}' = \mathbf{M}^T \qquad \mathbf{H}'^T = \mathbf{M}. \tag{14}$$
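Putting Eqs. (8)–(14) together, training and applying the CCA OO reduces to a handful of matrix operations. The sketch below is illustrative rather than the paper's implementation; it assumes $N_x \ge N_y$ so that B is square and invertible:

```python
import numpy as np

def build_cca_oo(X, Y):
    """Train the CCA observation operator on training data X (model
    states) and Y (observation values); returns (M, K) with Y ~ X@M + K."""
    Xm, Ym = X.mean(axis=0), Y.mean(axis=0)
    Qx, Rx = np.linalg.qr(X - Xm)
    Qy, Ry = np.linalg.qr(Y - Ym)
    U, s, Vt = np.linalg.svd(Qx.T @ Qy, full_matrices=False)
    A = np.linalg.solve(Rx, U)                # Eq. (5)
    B = np.linalg.solve(Ry, Vt.T)
    M = A @ np.diag(s) @ np.linalg.inv(B)     # M = A D B^{-1}, Eq. (9)
    K = Ym - Xm @ M                           # Eq. (11)
    return M, K

def apply_oo(x, M, K):
    """H(x) = x M + K, Eq. (12). The tangent-linear operator is M^T and
    its adjoint is M (Eq. 14), so no extra code is needed for them."""
    return x @ M + K
```

Once M and K are computed in the training phase, applying the operator costs a single matrix–vector product, which is what makes the approach affordable inside a DA system.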
4 Use case: satellite SST
One possible application of the new CCA OO is the assimilation of SST in ocean general circulation models (OGCMs). In recent years OGCMs have seen significant improvements in vertical resolution, particularly near the surface, where the first layer has been reduced to a thickness of the order of 1 m or less. At this resolution, the diurnal cycle of SST should be taken into account. Although diurnal variability is included to some extent , the vertical resolution of OGCMs is still insufficient to fully resolve the variability of the skin and subskin ocean temperature.
This issue becomes particularly evident when assimilating satellite SST observations. The different types of sensors used on satellites probe the ocean temperature at different depths. Infrared (IR) sensors measure the temperature at about 10 µm, a layer that is referred to as the ocean skin. Microwave (MW) sensors, on the other hand, measure the temperature of the layer below that, the subskin, with a depth of about 1 mm. This is much shallower than the vertical resolution of a typical OGCM, while these layers are strongly affected by the atmospheric conditions. The ocean skin cools due to thermodynamic processes at the air–sea interface, while the absorption of solar heat causes a warming of the subskin. At the same time, wind can mix the skin and subskin with the water below, smoothing the temperature variations. During days of low wind and/or high insolation conditions the amplitude of the SST diurnal cycle can be larger than the combined accuracy of the model and observations, causing a straightforward assimilation of SST to degrade the performance of the model . Under favourable conditions this amplitude is typically of the order of a few degrees (see e.g. ), but values as high as 6 °C have been observed .
Representation errors have been extensively discussed within ocean applications and generally include errors due to e.g. limited spatial resolution or unrepresented processes. However, the diurnal variability of skin SST represents a potentially systematic error that requires a proper treatment rather than just increasing the representation component of the observational error.
An important source of SST observational data is the Spinning Enhanced Visible and Infrared Imager (SEVIRI) instrument onboard the Meteosat satellites of the second generation. As these are geostationary satellites, SEVIRI can provide continuous measurements of the same area with a 15 min temporal resolution. Although the IR imager is sensitive to skin temperature, the calibration algorithm of SEVIRI corrects for the cool-skin bias, and the resulting SST products should be considered the subskin temperature . For wind speeds greater than 6 m s−1 the skin temperature may be calculated as $T_\mathrm{skin} = T_\mathrm{subskin} - 0.17$, but this is only an approximation.
This section will discuss how to use the output of a water column model specifically tuned for modelling the diurnal cycle of SST together with the CCA OO to build an observation operator for SST that accounts for the diurnal variability.
## 4.1 General Ocean Turbulence Model
The SST diurnal cycle is modelled using the General Ocean Turbulence Model (GOTM). The GOTM is a one-dimensional water column model that includes multiple turbulence closure schemes . It has been successfully adapted to model the near-surface variability of ocean temperature, including both the diurnal cycle and the cool-skin effect . Recently it has been used to systematically simulate the atmospheric and oceanographic conditions in the Mediterranean Sea . The latter study has resulted in a multi-year dataset modelling the diurnal cycle in the Mediterranean Sea on a grid of 0.75° × 0.75° resolution with hourly time resolution. For this dataset the GOTM is configured with the k-ε turbulent kinetic energy parameterisation with internal waves. The top 75 m of the water column is resolved using 122 vertical layers with fine resolution near the surface and gradually becoming coarser with depth. The uppermost 1 m contains a total of 21 layers, with the highest level at 1.5 cm of depth. This dataset is used in the present paper to build the CCA OO for SST.
The subskin SST represents the temperature at the base of the conductive laminar sub-layer of the ocean surface; for practical purposes it is represented by the temperature of the top model layer of the GOTM (1.5 cm). The conductive sub-layer of the air–sea interface, associated with the cool-skin effect, is parameterised and dynamically computed within the GOTM to produce a modelled skin SST. Further details are provided in .
## 4.2 Operator setup
The aim for the CCA OO is to parameterise the IR and MW satellite SST observations as a function of temperature in the water column below. While the dataset of uses a fine vertical resolution to calculate the SST observations, the CCA OO will consider only the levels of a typical OGCM. Within the SOSSTA project this OGCM is the CMEMS Mediterranean Forecasting System (MFS) , but the parameterisation can be performed for any vertical distribution of levels.
Figure 1The magnitude of the diurnal warming at the subskin level as a function of the time of the day for different wind and insolation categories. The diurnal warming is measured with respect to the SST at local sunrise. The wind categories are represented by the different panels, while the insolation categories are shown as different curves within each panel.
The magnitude of the diurnal signal depends strongly on the atmospheric conditions, most importantly the insolation and wind speed. Insolation causes the ocean skin to heat up during the course of the day, while wind mixes the upper layers of the ocean, leading to the dissipation of the heat. Due to latent heat loss, the ocean skin may even cool down below the bulk temperature. To accommodate a non-linear dependence on the different insolation and wind scenarios in the CCA OO, the GOTM dataset is divided into 12 insolation and 8 wind categories. Insolation and wind are defined in each location as the daily mean value in local mean time (LMT). The category boundaries were chosen to equally divide the dataset. The magnitude of the diurnal warming for the different categories is shown in Fig. 1.
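The equal-population binning described above can be sketched as follows. The bin counts (8 wind and 12 insolation categories) and the use of daily-mean forcing come from the text; the function names and the use of sample quantiles to obtain equally populated bins are assumptions of this example:

```python
import numpy as np

def make_category_edges(wind, insolation, n_wind=8, n_sol=12):
    """Equal-population bin edges derived from the training data via
    quantiles, following the Sect. 4.2 setup (8 x 12 categories)."""
    w_edges = np.quantile(wind, np.linspace(0, 1, n_wind + 1)[1:-1])
    s_edges = np.quantile(insolation, np.linspace(0, 1, n_sol + 1)[1:-1])
    return w_edges, s_edges

def categorize(wind, insolation, w_edges, s_edges):
    """Map each profile's daily-mean wind and insolation to a category
    index pair; one CCA OO is then trained per category and hour."""
    return np.digitize(wind, w_edges), np.digitize(insolation, s_edges)
```

At application time the same edges are reused, so a profile is first routed to its (wind, insolation) category and only then projected with that category's matrix M and offset K.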
The GOTM dataset has been compared to SEVIRI data at the skin level in and was found to be in good agreement over the whole period of 2013 and 2014. However, after dividing the dataset into atmospheric categories, it is found that categories with high diurnal warming may have a warm bias of up to 0.5 °C and categories with low diurnal warming a cold bias of typically 0.1–0.2 °C. This category bias is corrected for by subtracting the mean difference between SEVIRI and GOTM at subskin level for each category.
Figure 2The correlation coefficients between the model variables and observations (a), with the canonical equivalent of these variables (b).
For each category of wind and insolation, and at hourly time resolution, the CCA OO is calculated to project the 10 uppermost levels of the MFS model onto the skin and subskin SST temperatures. The 10 levels extend down to a depth of approximately 40 m, which was chosen to be well below the depth influenced by the diurnal cycle of temperature. Figure 2a shows the correlation between the model temperature at various depths and the two SST observation types. As expected, the SST is strongly correlated with the highest levels and the correlation decreases with depth. It is important to note that in this case the various levels are also strongly correlated with each other. Figure 2b shows the correlation after transforming to canonical coordinates. It can be seen that the strongest correlation has not significantly changed, as the first canonical variable is very similar to the highest model level. The second pair of canonical variables $(F_2, G_2)$, however, describes an additional correlation of around 60 % between model water temperature and SST.
## 4.3 Validation
The CCA OO is validated by comparing its performance to that of the full GOTM. To use the operator effectively in a DA system, it should be able to provide an accurate approximation of the GOTM results. The validation is performed against GOTM profiles that are withheld from the CCA OO calculation. The GOTM dataset is split in two, withholding every other profile in the zonal direction from the calculation. The validation then uses the withheld profiles and extracts the depths corresponding to the MFS levels, mimicking the use of the operator inside a DA system. The CCA OO, based on the atmospheric category and closest time, is subsequently applied to project the model temperature onto the skin and subskin SST. The projected SST values are then compared to the values in the original GOTM profile.
Figure 3Examples of temperature profiles in various conditions and at different times. The GOTM profiles are shown by the red curve, while the filled circles indicate the values used as input to the CCA OO. The output of the CCA OO is shown by the black triangles. (a) Low wind, high insolation, early morning; (b) low wind, high insolation, afternoon; (c) high wind, high insolation, afternoon; (d) high wind, low insolation, afternoon.
Some examples of the validation are shown in Fig. 3. Each panel shows a profile from the GOTM dataset, together with the model levels that were used as input to the CCA OO. The output of the CCA OO is superimposed onto the GOTM profile so that a comparison can be made. Figure 3a shows a temperature profile in the early morning, during a day of low wind and high insolation. At this time, diurnal warming is limited, and due to the clear-sky conditions the skin and subskin temperatures have cooled down slightly below the temperature of the first model level. Figure 3b shows an afternoon profile on a similar day. At this time, diurnal warming is around its maximum, and the skin temperature has increased about 1 °C above the first level of the model. In the case of high wind speed, the increased mixing of the upper layer of the ocean can completely cancel the effect of the high insolation, as shown in Fig. 3c. In this situation the temperature in the upper 10 m of the ocean is almost constant. When high wind conditions coincide with low insolation, the surface can also cool quite significantly, as shown in Fig. 3d. The CCA OO is able to correctly reproduce the GOTM skin and subskin temperature under different atmospheric conditions. The atmospheric categories with strong diurnal warming have a root mean square error (RMSE) of up to 0.4 °C; for all other categories the RMSE is around 0.1 °C. The bias of the CCA OO compared to the GOTM was found to be negligible.
Figure 4Skill score of the CCA OO compared to the OGCM upper layer for all wind and insolation categories at midnight (a) and in the afternoon (b).
5 Performance and discussion
The performance of the GOTM-based CCA OO for SST is compared to other commonly used methods. For this comparison the GOTM dataset is again split along the zonal direction using every other profile to calculate the CCA OO. The remaining profiles are matched to SEVIRI subskin retrievals using only profiles matched to a measurement with an acceptable (4) or good (5) quality control level. The performance can be conveniently expressed in terms of the skill score (SS), defined by as
$$\mathrm{SS} = 1 - \frac{\mathrm{MSE}_\mathrm{model}}{\mathrm{MSE}_\mathrm{reference}}. \tag{15}$$
The skill score is based on the mean square error (MSE) of the model under testing and of a reference model. Specifically, it expresses the difference in MSE as a fraction of the reference MSE. The skill score is straightforward to interpret: a perfect model (MSE=0) results in a skill score of 1, while a model that shows no improvement over the reference model receives a skill score of 0. Negative skill scores indicate that the model performs worse and its MSE has increased with respect to the reference. In this case the CCA OO will be used as the model and the reference will be another commonly used OO. The MSE is calculated with respect to the SEVIRI subskin temperature.
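A minimal implementation of the skill score of Eq. (15), with the sign conventions described above (the function name is illustrative):

```python
import numpy as np

def skill_score(pred_model, pred_reference, truth):
    """SS = 1 - MSE_model / MSE_reference, Eq. (15).

    1 for a perfect model, 0 for no improvement over the reference,
    negative when the model performs worse than the reference.
    """
    mse_model = np.mean((pred_model - truth) ** 2)
    mse_reference = np.mean((pred_reference - truth) ** 2)
    return 1.0 - mse_model / mse_reference
```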
Figure 5Skill score of the CCA OO compared to the parameterisation of in the afternoon (a) and early evening (b).
The simplest method of assimilating satellite SST observations in a model that insufficiently describes the diurnal cycle of SST is to assimilate only at night or during high wind; see, for example, . During the night the cycle of SST is close to its minimum value and the temperature of the upper layer of an OGCM forms a reasonable approximation for the skin temperature. In this situation the assimilation is performed without additional corrections. Figure 4a shows the skill score of the CCA OO at midnight local time using the temperature of the OGCM upper layer as a reference method. Figure 4b shows the same situation, but in the afternoon. For high wind and low insolation the CCA OO performs, as expected, similarly to using the upper OGCM layer. However, for low wind speeds and high insolation the CCA OO shows a clear improvement, even at midnight. This can be explained by the fact that at midnight some diurnal signal still remains and, even using the wind and insolation values of the next day, this is correctly modelled by the CCA OO.
A more advanced solution is the parameterisation of , which estimates the diurnal signal as a function of wind, insolation, and time. This is a commonly used parameterisation; for example, it is included with the NEMO ocean model . Figure 5 shows the skill score for the CCA OO compared to the parameterisation of at the peak of the diurnal cycle (a) and in the early evening (b). It can be seen that for high insolation and low wind, conditions for which the diurnal warming is largest, both methods perform similarly. However, the CCA OO is better at accommodating different atmospheric conditions and shows significant improvements for the intermediate insolation and wind categories. Moreover, Fig. 5b shows that the CCA OO is able to better parameterise the cooling of the subskin in the late afternoon–evening after the peak of the diurnal warming has passed.
Using the CCA OO to improve the description of SST has many potential applications. For example, the CCA OO could be used as a parameterisation of diurnally varying skin SST within an OGCM as part of the air–sea flux calculations. The skin SST is the true interface temperature for air–sea fluxes, so this approach should result in improved air–sea heat transfer in OGCMs and coupled ocean–atmosphere models. See, for example, . Another possibility would be the use of the CCA OO as a parameterisation of diurnally varying SST within a climate model. The diurnal cycle is a fundamental signal of the climate system, yet for climate models the lack of vertical structure (and temporal resolution) is even more critical. See, for example, .
Due to the way in which it is constructed, the CCA OO is an inherently linear operator. This makes it straightforward to implement in DA schemes that require linearised and differentiable OOs. However, non-linear effects can be accommodated to some extent by constructing a series of CCA OOs conditioned on such a non-linear dependency. For example, in the case of SST, this method has been used to condition the CCA OO on insolation, wind, and time. The only requirement in this case is that the datasets X and Y of Sect. 3 are sufficiently large to divide them by such a dependent variable.
The minimum size of the input dataset required ultimately depends on the number of model variables used (Nx) and the number of observation variables to predict (Ny). The number of free parameters in the CCA OO matrix M and the offset K equals (Nx+1)Ny. As each entry in the input dataset also provides Ny observation values, Eq. (4) requires a minimum of Nx+1 entries to be mathematically solvable. However, at this point the CCA OO will be overfitted. It will simply be able to memorise the input datasets rather than being based on general characteristics of the data. Care has to be taken to avoid this situation, making sure the input dataset contains a number of entries n with n>>Nx. Whether a given size n is sufficient should be tested using independent data. One possible method for this test is to withhold part of the input dataset from the CCA OO calculation and then use this subset to calculate the CCA OO performance.
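The parameter-counting argument above can be sketched as follows (the function names are illustrative, not part of the paper):

```python
def cca_free_parameters(n_x, n_y):
    # Free parameters of the CCA OO: the matrix M has n_x * n_y entries
    # and the offset K has n_y entries, giving (n_x + 1) * n_y in total.
    return (n_x + 1) * n_y

def min_input_entries(n_x):
    # Each input entry supplies n_y observation values, so Eq. (4) needs
    # at least n_x + 1 entries to be mathematically solvable; in practice
    # n should be much larger than n_x to avoid overfitting.
    return n_x + 1
```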
6 Conclusions
Observation operators (OOs) form a central component in any data assimilation (DA) system, as they transform the state variables of a numerical model into real-world observable variables. Often, an OO also needs to correct for processes that are not fully described by the parent model. Such processes may be best modelled by interfacing the OO to a specialised model, but this is generally not feasible due to computational constraints.
The assimilation of satellite sea surface temperature (SST) in ocean general circulation models (OGCMs) is a prime example of a situation in which insufficiently modelled processes play an important role. The diurnal cycle of SST causes a discrepancy in the temperature of the very thin upper layer measured by a satellite and the rather coarse upper layer in a typical OGCM. On a clear summer day with low wind, this discrepancy can amount to as much as 2 °C or more.
The current paper presented a method, based on canonical correlation analysis (CCA), to build parameterisations based on an output dataset of a specialised model. These parameterisations, referred to as the CCA OO, can provide an efficient approximation to the results of the specialised model and are therefore well-suited for use in DA systems.
The case of SST assimilation has been used to demonstrate the new CCA OO. Using an output dataset of the General Ocean Turbulence Model (GOTM), a high-resolution water column model specifically tuned for modelling the diurnal cycle of SST, a new CCA OO has been derived. Subsequently, the operator has been applied to reduced-resolution temperature profiles from the GOTM to simulate its use in a DA system. The approximations provided by the CCA OO are found to be in good agreement with the GOTM at various times of the day and across all atmospheric conditions. The results indicate that the CCA OO could be used to enable the assimilation of SST in conditions under which this was previously not possible. Moreover, the atmospheric categories that were introduced in the construction of the CCA OO for SST show that the linear assumption implicit in CCA can be partially relaxed. This makes the CCA OO versatile for any condition. Compared to commonly used methods for SST assimilation, the CCA OO can provide substantial improvements. This is especially true for measurements of the skin SST, since the CCA OO profits from the modelling of the cool-skin effect that is included in the GOTM.
The ability of the CCA OO to handle complicated physical models in a relatively simple way is attractive for a large number of problems in DA, for which reduced-order OOs are desirable due to computational constraints. Remotely sensed data are the obvious target given the complexity of their relationships with state variables. Observations in coupled assimilations (e.g. ocean–atmosphere, ocean–sea ice, or ocean–biogeochemistry) are examples of challenging problems that could be investigated in the future with the CCA OO.
Data availability
The GOTM dataset used in Sects. 4 and 5 is available as described in Pimentel et al. (2019). The code for calculating the CCA OO is available from the authors upon request.
Author contributions
EJ designed and implemented the CCA OO software. SP and WHT performed the modelling of the diurnal cycle. DD, GK, and IM evaluated the OO in different DA systems and provided feedback on the modelling and the software. AS was the PI of the project and coordinated the work. EJ prepared the paper with input from all co-authors.
Competing interests
The authors declare that they have no conflict of interest.
Special issue statement
Acknowledgements
This work forms part of the SOSSTA project, which has been funded by the EU Copernicus Marine Environment Monitoring Service (CMEMS) through the Service Evolution grants.
Review statement
This paper was edited by Pierre-Yves Le Traon and reviewed by Salvatore Marullo and one anonymous referee.
References
Bernie, D. J., Guilyardi, E., Madec, G., Slingo, J. M., and Woolnough, S. J.: Impact of resolving the diurnal cycle in an ocean–atmosphere GCM. Part 1: a diurnally forced OGCM, Clim. Dynam., 29, 575–590, https://doi.org/10.1007/s00382-007-0249-6, 2007.
Björck, Å. and Golub, G. H.: Numerical Methods for Computing Angles Between Linear Subspaces, Math. Comput., 27, 579–594, https://doi.org/10.2307/2005662, 1973.
Burchard, H., Bolding, K., and Ruiz-Villarreal, M.: GOTM, a general ocean turbulence model. Theory, implementation and test cases, Tech. Rep. EUR 18745 EN, European Commission, Brussels, Belgium, 1999.
Donlon, C. J., Minnett, P. J., Gentemann, C., Nightingale, T. J., Barton, I. J., Ward, B., and Murray, M. J.: Toward Improved Validation of Satellite Sea Surface Skin Temperature Measurements for Climate Research, J. Climate, 15, 353–369, https://doi.org/10.1175/1520-0442(2002)015<0353:TIVOSS>2.0.CO;2, 2002.
Flament, P., Firing, J., Sawyer, M., and Trefois, C.: Amplitude and Horizontal Structure of a Large Diurnal Sea Surface Warming Event during the Coastal Ocean Dynamics Experiment, J. Phys. Oceanogr., 24, 124–139, https://doi.org/10.1175/1520-0485(1994)024<0124:AAHSOA>2.0.CO;2, 1994.
Haddad, Z. S., Steward, J. L., Tseng, H. C., Vukicevic, T., Chen, S. H., and Hristova-Veleva, S.: A data assimilation technique to account for the nonlinear dependence of scattering microwave observations of precipitation, J. Geophys. Res.-Atmos., 120, 5548–5563, https://doi.org/10.1002/2015JD023107, 2015.
Harris, B. A. and Kelly, G.: A satellite radiance-bias correction scheme for data assimilation, Q. J. Roy. Meteor. Soc., 127, 1453–1468, https://doi.org/10.1002/qj.49712757418, 2001.
Hotelling, H.: Relations Between Two Sets of Variates, Biometrika, 28, 321–377, 1936.
Janjić, T., Bormann, N., Bocquet, M., Carton, J. A., Cohn, S. E., Dance, S. L., Losa, S. N., Nichols, N. K., Potthast, R., Waller, J. A., and Weston, P.: On the representation error in data assimilation, Q. J. Roy. Meteor. Soc., 144, 1257–1278, https://doi.org/10.1002/qj.3130, 2018.
Large, W. G. and Caron, J. M.: Diurnal cycling of sea surface temperature, salinity, and current in the CESM coupled climate model, J. Geophys. Res.-Oceans, 120, 3711–3729, https://doi.org/10.1002/2014JC010691, 2015.
Madec, G., Delecluse, P., Imbard, M., and Lévy, C.: OPA 8.1 Ocean General Circulation Model Reference Model, Tech. Rep. 11, Institut Pierre Simon Laplace des Sciences de l'Environment Global, 1998.
Marullo, S., Santoleri, R., Ciani, D., Borgne, P. L., Péré, S., Pinardi, N., Tonani, M., and Nardone, G.: Combining model and geostationary satellite data to reconstruct hourly SST field over the Mediterranean Sea, Remote Sens. Environ., 146, 11–23, https://doi.org/10.1016/j.rse.2013.11.001, 2014.
Marullo, S., Minnett, P. J., Santoleri, R., and Tonani, M.: The diurnal cycle of sea-surface temperature and estimation of the heat budget of the Mediterranean Sea, J. Geophys. Res.-Oceans, 121, 8351–8367, https://doi.org/10.1002/2016JC012192, 2016.
Merchant, C. J., Filipiak, M. J., Le Borgne, P., Roquet, H., Autret, E., Piollé, J. F., and Lavender, S.: Diurnal warm-layer events in the western Mediterranean and European shelf seas, Geophys. Res. Lett., 35, L04601, https://doi.org/10.1029/2007GL033071, 2008.
Murphy, A. H.: Skill Scores Based on the Mean Square Error and Their Relationships to the Correlation Coefficient, Mon. Weather Rev., 116, 2417–2424, https://doi.org/10.1175/1520-0493(1988)116<2417:SSBOTM>2.0.CO;2, 1988.
Oke, P. R. and Sakov, P.: Representation Error of Oceanic Observations for Data Assimilation, J. Atmos. Ocean. Tech., 25, 1004–1017, https://doi.org/10.1175/2007JTECHO558.1, 2008.
Pimentel, S., Haines, K., and Nichols, N. K.: Modeling the diurnal variability of sea surface temperatures, J. Geophys. Res.-Oceans, 113, C11004, https://doi.org/10.1029/2007JC004607, 2008a.
Pimentel, S., Haines, K., and Nichols, N. K.: The assimilation of satellite-derived sea surface temperatures into a diurnal cycle model, J. Geophys. Res.-Oceans, 113, C09013, https://doi.org/10.1029/2007JC004608, 2008b.
Pimentel, S., Tse, W.-H., Xu, H., Denaxa, D., Jansen, E., Korres, G., Mirouze, I., and Storto, A.: Modeling the near-surface diurnal cycle of sea surface temperature in the Mediterranean Sea, J. Geophys. Res.-Oceans, 124, 171–183, https://doi.org/10.1029/2018JC014289, 2019.
Press, W. H.: Canonical Correlation Clarified by Singular Value Decomposition, available at: http://numerical.recipes/whp/workingpapers.html (last access: 12 June 2019), 2011.
Saux Picart, S. and Legendre, G.: MSG/SEVIRI Sea Surface Temperature data record Product User Manual, Tech. Rep. OSI-250, EUMETSAT, OSI SAF, https://doi.org/10.15770/EUM_SAF_OSI_0004, 2018.
Simoncelli, S., Fratianni, C., Pinardi, N., Grandi, A., Drudi, M., Oddo, P., and Dobricic, S.: Mediterranean Sea physical reanalysis (MEDREA 1987–2015) (Version 1), Tech. rep., EU Copernicus Marine Service Information, https://doi.org/10.25423/medsea_reanalysis_phys_006_004, 2014.
Umlauf, L., Burchard, H., and Bolding, K.: General Ocean Turbulence Model, Scientific Documentation v3.2., Tech. Rep. 63, Institute for Baltic Sea Research Warnemünde, Rostock-Warnemünde, Germany, 2005.
Waters, J., Lea, D. J., Martin, M. J., Mirouze, I., Weaver, A., and While, J.: Implementing a variational data assimilation system in an operational 1/4 degree global ocean model, Q. J. Roy. Meteor. Soc., 141, 333–349, https://doi.org/10.1002/qj.2388, 2015.
# zbMATH — the first resource for mathematics
Random forests and adaptive nearest neighbors. (English) Zbl 1119.62304
Summary: We study random forests through their connection with a new framework of adaptive nearest-neighbor methods. We introduce a concept of potential nearest neighbors (k-PNNs) and show that random forests can be viewed as adaptively weighted $$k$$-PNN methods. Various aspects of random forests can be studied from this perspective. We study the effect of terminal node sizes on the prediction accuracy of random forests. We further show that random forests with adaptive splitting schemes assign weights to k-PNNs in a desirable way: for the estimation at a given target point, these random forests assign voting weights to the k-PNNs of the target point according to the local importance of different input variables. We propose a new simple splitting scheme that achieves desirable adaptivity in a straightforward fashion. This simple scheme can be combined with existing algorithms. The resulting algorithm is computationally faster and gives comparable results. Other possible aspects of random forests, such as using linear combinations in splitting, are also discussed. Simulations and real datasets are used to illustrate the results.
##### MSC:
62-XX Statistics
65C60 Computational problems in statistics (MSC2010)
Question 17
# What values of x satisfy $$x^{2/3} + x^{1/3} - 2 \le 0$$?
Solution
Try to solve this type of question using the options.

Substitute 0 first => We get -2 <= 0, which is true. Hence, 0 must be in the solution set.

Substitute 8 => 4 + 2 - 2 <= 0 => 4 <= 0, which is false. Hence, 8 must not be in the solution set.
=> Option 1 is the answer. | 2022-08-10 20:23:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7331257462501526, "perplexity": 850.9050397060277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571210.98/warc/CC-MAIN-20220810191850-20220810221850-00300.warc.gz"} |
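The full solution set can also be derived directly: substituting t = x^(1/3) turns the inequality into t^2 + t - 2 <= 0, i.e. (t + 2)(t - 1) <= 0, so -2 <= t <= 1 and hence -8 <= x <= 1 (consistent with the checks of 0 and 8 above; the options themselves are not shown here). A quick sketch verifying this numerically — the helper name is ours:

```python
def satisfies(x):
    # Real cube root: x ** (1/3) gives a complex result for negative x
    # in Python, so take the cube root of |x| and restore the sign.
    t = abs(x) ** (1.0 / 3.0) * (1 if x >= 0 else -1)
    return t * t + t - 2 <= 1e-9  # small tolerance for float error
```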
http://cs-people.bu.edu/lapets/235/ | # Algebraic AlgorithmsIntroductory Number Theory and Abstract Algebra for Computer Science Applications
## [link] 1. Introduction, Background, and Motivation
When many real-world problems are addressed or solved mathematically and computationally, the details of those problems are abstracted away until they can be represented directly as idealized mathematical structures (e.g., numbers, sets, trees, graphs, matrices, and so on). In this course, we will study a collection of such idealized mathematical objects: integers, residues, groups, isomorphisms, and several others. We will see how these structures and their properties can be used for implementing useful computational solutions to problems such as random number generation, prime number generation, error correction, trusted and distributed storage and computation, secure communication, and others.
In covering the material for this course, we will use the standard language and conventions for discussing these mathematical structures that have been developed by the community of mathematicians over the course of history. You will need to become familiar with these conventions in order to find, identify, and use the structures and techniques that have already been developed for representing and solving certain computational problems. At the same time, we will also learn how modern programming languages and programming paradigms can be used to implement these structures and algorithms both accessibly and efficiently.
The development and application of mathematics involves abstraction. A problem can be viewed at multiple levels of abstraction, and in developing mathematics humans have adopted a variety of techniques that allow them to successfully employ abstraction to study natural phenomena and solve problems.
| symbolic | abstract meaning | concrete meaning in application domain |
|---|---|---|
| 2 + 3 | 5 | five objects |
| {(1, 2), (1, 3)} | acyclic graph | file system |
| {(1, 2), (2, 3), (3, 1)} | graph with cycle | network |
| {(0,1), (1,2), (2,0)} | permutation | random number sequence |
The above illustrates the different levels of abstraction that may exist for a given problem. We employ a language of symbols to denote certain abstract structures, which may correspond to actual structures in the world. A string of symbols corresponds to a particular abstract object. Notice that the actual object being modeled and the abstract structure behave the same way, and that this behavior implies certain rules about how we can manipulate the symbols without changing the object that they name. For example, we can represent the same graph using the two strings of symbols "{(1,2), (2,3), (3,1)}" and "{(3,1), (1,2), (2,3)}", or the same number of objects using "2 + 3", "3 + 2", "1 + 4", and so on.
In this course, we will begin by reviewing the terminology and concepts of logic, integer arithmetic, and set theory, which we will use throughout the course. We will then show that the algebraic properties of the integers also apply to congruence classes of integers (i.e., the properties of modular arithmetic operations), and we will derive and utilize theorems that have useful computer science applications (such as for generating random numbers and creating cryptographic protocols). We will then go further and show that some of the algebraic properties that hold in integer and modular arithmetic can also apply to any data structure, and we will study how to recognize and take advantage of these properties.
### [link] 1.2. Informal motivating example: random number generation
Let us informally consider the problem of generating a sequence of random positive integers. Random number generators are needed in many situations and applications, including:
• generating unique identifiers for database records, objects, etc.;
• generating a one-time pad for a simple encryption scheme;
• generating public and private keys for more sophisticated encryption and signature schemes;
• simulation and approximation methods that employ random sampling (Monte-Carlo, and so on).
Different applications will impose different requirements on what is and is not a sufficiently "random" sequence of numbers. Suppose we adopt the following method:
• n0 = a number in the range (inclusive) 0 to 5;
• ni = (2 ⋅ ni-1 + 1) mod 6.
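This first method can be sketched directly in Python; notice that the sequence collapses into a very short repeating cycle after only a few steps:

```python
def sequence(seed, count):
    # n0 = seed (taken mod 6 to land in the range 0..5);
    # ni = (2 * n(i-1) + 1) mod 6
    ns = [seed % 6]
    for _ in range(count - 1):
        ns.append((2 * ns[-1] + 1) % 6)
    return ns
```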
We can consider another method:
• n0 = an initial seed integer n, where 10^4 > n ≥ 10^3;
• ni = the last four digits of (ni-1)^2.
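A sketch of this second method (a variant of the classic middle-square idea; the function names are ours):

```python
def next_value(n):
    # Keep only the last four digits of n squared.
    return (n * n) % 10000

def generate(seed, count):
    ns = [seed]
    for _ in range(count - 1):
        ns.append(next_value(ns[-1]))
    return ns
```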
Frequent repetition of a sequence may or may not be allowed in our given application. Does the above method produce repeating numbers? How often? For how many initial seeds? How do we choose a good seed? We can measure a physical process or component (a clock, a keyboard), but even under these circumstances we need a way to reason about the range of random values the measurement produces, and the range of random values the application requires. How do we begin to approach and formally characterize these aspects of the problem so that we are certain we are meeting the requirements imposed by the application?
One way to model a random number generation process is to view it as a permutation. In fact, there is more than one way to view the process as a permutation. We could simply count up from 0 to m and apply the same permutation to each 0 ≤ n ≤ m in order to produce the nth random number in the sequence. Is there an efficient way (i.e., using no more memory than O(log m)) to compute a random number from each n such that a number never repeats?
In this course we will learn about a variety of mathematical structures and their properties that will allow us to precisely specify the above problem and others like it, to identify what solutions are appropriate for such a problem, and to implement these solutions correctly and, where necessary, efficiently.
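As a small preview of the permutation view (the constants below are purely illustrative, not a recommendation): any affine map n → (a⋅n + c) mod m with gcd(a, m) = 1 is a permutation of {0, ..., m-1}, and it can be evaluated at any index n using only O(log m) memory:

```python
from math import gcd

def permute(n, m, a=5, c=3):
    # A permutation of {0, ..., m-1}, provided gcd(a, m) == 1.
    assert gcd(a, m) == 1
    return (a * n + c) % m
```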
## [link] 2. Review of Logic with Sets, Relations, and Operators
In this section, we will review several abstract structures and associated properties (and the symbolic language used to represent them) that you should have already encountered in past courses. Simultaneously, we will review one way in which these structures can be implemented and manipulated within the modern programming language Python. As with most human languages that have developed organically over time, mathematics has a rich and often redundant vocabulary. We introduce many terms in this section that we will use consistently in this course. However, keep in mind that there are often other synonyms within mathematics and computer science for these structures.
### [link] 2.1. Formulas without quantifiers
Definition: A logical formula or formula is a string of symbols that follow a certain syntax. If the formula is written using a correct syntax, we can ask about its meaning (i.e., is the formula true or false). The symbols or, and, not, implies, and iff are logical operators.
The basic building blocks (a.k.a., base cases) for formulas are true, false, and predicates. When a formula consists of only one of these (and no operators), it is an atomic formula. Like any formula, each atomic formula has a particular meaning (it is either true or it is false). Atomic formulas can be combined using logical operators to build up larger formulas. The table below provides a way to determine the meaning of a formula by breaking it down into its constituent parts.
| formula | meaning | example of one possible Python representation |
|---|---|---|
| true | always true | True |
| false | always false | False |
| f1 and f2 | only true if both f1 and f2 are true | True and False |
| f1 or f2 | true if f1 or f2 (or both) are true | True or (False and True) |
| f1 implies f2 | if f1 is true, then f2 must be true; equivalently, f1 is false or f2 is true, i.e., f1 is "less than or equal to" f2 (if false is 0 and true is 1) | False <= True |
| f1 iff f2 | f1 is true if and only if f2 is true; f1 and f2 are either both true or both false | True == False |
| ¬ f | true if f is false | not (True or (False and True)) |
| ( f ) | true if f is true | (True and (not (False))) |
| predicate | depends on the definition of the predicate | isPrime(7) |
A predicate can have zero or more arguments. Whether a given atomic formula consisting of a predicate that takes at least one argument is true or false depends on the arguments supplied to it. For example, for the predicate isPrime(?) shown above, which takes one argument, the meaning of isPrime(7) should be true, but the meaning of isPrime(4) should be false.
The following table may help with gaining a good intuition for the meaning of the implies operator.
| meaning of left-hand side (premise) | meaning of right-hand side (conclusion) | meaning of entire formula | comments |
|---|---|---|---|
| true | true | true | if the premise is true and the conclusion is true, the claim of implication is true; thus, the whole formula is true |
| true | false | false | if the premise is true but the conclusion is false, the conclusion is not implied by the premise, so the claim of implication is false; thus, the formula is false |
| false | true | true | if the conclusion is true on its own, it doesn't matter that the premise is false, because anything implies an independently true conclusion; thus, the claim of implication is true, and so is the entire formula |
| false | false | true | if we assume that a false premise is true, then "false" itself is "true"; in other words, false implies itself, so the formula is true |
Example: Suppose we have the following formula involving two predicates the sun is visible and it is daytime:
the sun is visible ⇒ it is daytime
This formula might describe a property of our real-world experience of a person that is in a particular fixed location on the surface of the Earth. We could state that the above formula is always true (i.e., it is always an accurate description of the system it describes). For every possible assignment of values to each variable, the above formula is indeed accurate, in that it is true exactly in those situations that might occur on Earth, and false in any situation that cannot occur:
| the sun is visible | it is daytime | meaning | interpretation |
|---|---|---|---|
| true | true | true | a sunny day |
| true | false | false | |
| false | true | true | a cloudy day |
| false | false | true | nighttime |
In particular, only one set of values causes the formula to be false: if the sun is in the sky, but it is not daytime. This is indeed impossible; all the others are possible (it may be day or night, or it may be cloudy during the day). The contrapositive of the formula is true if the formula is true:
¬(it is daytime) ⇒ ¬(the sun is visible)
Notice that the contrapositive of the above is a direct result of the fact that if the sun is visible it is daytime must be true, the rows in the truth table in which it is false must be ignored, and then the only possible row in the truth table in which it is daytime is false is the one in which the sun is visible is also false.
### [link] 2.2. Terms: integers and term operators that take integer inputs
Definition: A term is a string of symbols that represents some kind of mathematical structure. In our case, terms will initially represent integers or sets of integers. Terms may contain term operators. We can view these as functions that take terms as input and return terms as output. The term operators for terms that represent integers with which we will be working are +, -, ⋅, and mod.
| term | what it represents | example of one possible Python representation |
|---|---|---|
| 0 | 0 | 0 |
| 1 | 1 | 1 |
| z1 + z2 | the integer sum of z1 and z2 | 3 + 4 |
| z1 − z2 | the integer difference of z1 and z2 | (1 + 2) - 4 |
| z1 ⋅ z2 | the integer product of z1 and z2 | 3 * 5 |
| z1 mod z2 | the remainder of the integer quotient z1 / z2, i.e., z1 − ⌊z1/z2⌋ ⋅ z2 | 17 % 5 |
| z1^z2 | the product of z2 instances of z1 | 2**3 or pow(2,3) |
### [link] 2.3. Formulas: relational operators and predicates dealing with integers
Definition: A term can only appear in a formula if it is an argument to a predicate. A few common predicates involving integers are represented using relational operators (e.g, ≤, ≥).
| formula | what it represents | example of one possible Python representation |
|---|---|---|
| z1 = z2 | true if z1 and z2 have the same meaning; false otherwise | 1 == 2 |
| z1 < z2 | true if z1 is less than z2; false otherwise | 4 < 3 |
| z1 > z2 | true if z1 is greater than z2; false otherwise | 4 > 3 |
| z1 ≤ z2 | true if z1 is less than or equal to z2; false otherwise | 4 <= 3 |
| z1 ≥ z2 | true if z1 is greater than or equal to z2; false otherwise | 4 >= 3 |
| z1 ≠ z2 | true if z1 is not equal to z2; false otherwise | 4 != 3 |
Example: We can define our own predicates as well. Notice that one way we can represent these in Python is by defining a function that returns a boolean result.
| predicate | definition | example of one possible Python representation |
|---|---|---|
| P(x) | iff x > 0 and x < 2 | def P(x): return x > 0 and x < 2 |
| Q(x) | iff x > 3 | Q = lambda x: x > 3 |

| formula | what it represents | example of one possible Python representation |
|---|---|---|
| P(1) | true | P(1) |
| P(1) or P(2) | true | P(1) or P(2) |
| Q(1) and Q(2) | false | Q(1) and Q(2) |
We will use the following predicates throughout the course.
Definition: For any x, y ∈ ℤ, x | y iff y/x ∈ ℤ. If x | y, we then say that x is a factor of y.

Definition: For any y ∈ ℤ, y is prime iff for any integer x where 2 ≤ x < y, it is not true that x | y. In other words, y is prime if its only factors are 1 and y (itself).
• x | y: y / x ∈ ℤ; equivalently, x divides y, y is divisible by x, y is an integer multiple of x, y mod x = 0, or x is a factor of y
• y is prime: y > 1 and (x | y implies x = 1 or x = y); equivalently, y > 1 and y is divisible only by 1 and itself
Example: We can define the divisibility and primality predicates in Python in the following way:
def divides(x, y):
return y % x == 0 # The remainder of y/x is 0.
def prime(y):
for x in range(2,y):
if divides(x,y):
return False
return True
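As a quick sanity check, we can evaluate these definitions on small inputs (the definitions are restated here so the snippet is self-contained; note that this simple version would also classify 0 and 1 as prime, because the loop body never runs for y < 2):

```python
def divides(x, y):
    return y % x == 0  # The remainder of y/x is 0.

def prime(y):
    # y is prime if no x with 2 <= x < y divides it.
    for x in range(2, y):
        if divides(x, y):
            return False
    return True

print([y for y in range(2, 20) if prime(y)])  # [2, 3, 5, 7, 11, 13, 17, 19]
```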
Example: We can gradually generalize our primality predicate from the previous example to work for any other predicate. Note that we restate the property slightly: a number is prime if no smaller number can divide it evenly, so if we ever find one that doesn't satisfy this property, we immediately return False. This is effectively the implementation of a quantifier, which we introduce further below.
def doesNotDivide(x, y):
return y % x != 0 # The remainder of y/x is nonzero.
def prime(y):
for x in range(2,y):
if not doesNotDivide(x,y):
return False
return True
def checkAll(S, P):
for x in S:
if not P(x):
return False
return True
Given the above, it is now possible to get the same behavior provided by prime() by supplying appropriate arguments:
>>> checkAll(set(range(2,y)), lambda x: doesNotDivide(x,y))
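With a concrete value of y, the call above agrees with the earlier primality test. The definitions are restated below so the snippet runs on its own; the wrapper name isPrime is our own choice for illustration:

```python
def doesNotDivide(x, y):
    return y % x != 0  # The remainder of y/x is nonzero.

def checkAll(S, P):
    for x in S:
        if not P(x):
            return False
    return True

def isPrime(y):
    # Mirrors prime() above: every x in {2,...,y-1} must fail to divide y.
    return checkAll(set(range(2, y)), lambda x: doesNotDivide(x, y))

print(isPrime(17), isPrime(15))  # True False
```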
Definition: For any x, y ∈ ℤ, x is a proper factor of y iff y/x ∈ ℤ and x < y.
### [link] 2.4. Terms: finite sets of integers, term operators that take set inputs, and set comprehensions
Definition: A finite set of integers is an unordered, finite collection of zero or more integers with no duplicates. The following are examples of terms the meaning of which is a finite set of integers (with the exception of the set size terms, the meaning of which is a non-negative integer).
• ∅: a set with no elements in it (Python: set())
• {1,2,3}: {1,2,3} (Python: {1,2,3})
• {2,...,5}: {2,3,4,5} (Python: set(range(2,6)))
• { x | x ∈ {1,2,3,4,5,6}, x > 3 }: {4,5,6} (Python: {x for x in {1,2,3,4,5,6} if x > 3})
• |{1,2,3,4}|: 4 (Python: len({1,2,3,4}))
The following are term operators on terms the meaning of which is a finite set of integers.
• S1 ∪ S2: {z | z ∈ ℤ, z ∈ S1 or z ∈ S2} (Python: {1,2,3}.union({4,5}) or {1,2,3} | {4,5})
• S1 ∩ S2: {z | z ∈ ℤ, z ∈ S1 and z ∈ S2} (Python: {1,2,3}.intersection({2,3,5}) or {1,2,3} & {2,3,5})
• |S|: the number of elements in S (Python: len({1,2,3}))
While the terms below do not represent finite sets of integers, we introduce the following two set terms in order to reference them throughout the notes.
Definition: Let ℤ be the set of all integers, and let ℕ be the set of all non-negative integers (i.e., positive integers and 0).
• ℕ: {0, 1, 2, ...}
• ℤ: {..., -2, -1, 0, 1, 2, ...}
### [link] 2.5. Formulas: quantifiers over finite sets of integers
Definition: Suppose we define the following two Python functions that take predicates (or, more specifically, functions that represent predicates) as input.
def forall(S, P):
for x in S:
if not P(x):
return False
return True
def exists(S, P):
for x in S:
if P(x):
return True
return False
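These quantifier functions behave as expected on small sets, including the empty set (the definitions are restated here so the snippet is self-contained):

```python
def forall(S, P):
    for x in S:
        if not P(x):
            return False
    return True

def exists(S, P):
    for x in S:
        if P(x):
            return True
    return False

print(forall({1, 2, 3}, lambda x: x > 0))  # True
print(exists({1, 2, 3}, lambda x: x > 2))  # True
print(forall(set(), lambda x: False))      # True (vacuously)
print(exists(set(), lambda x: True))       # False
```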
We could redefine the above using comprehensions. We will also introduce a subset() operation on sets.
def forall(X, P):
S = {x for x in X if P(x)}
return len(S) == len(X)
def exists(X, P):
S = {x for x in X if P(x)}
return len(S) > 0
def subset(X,Y):
return forall(X, lambda x: x in Y)
Then we can introduce the following definitions and corresponding Python examples.
• 1 ∈ {1,2,3}: true (Python: 1 in {1,2,3})
• 4 ∈ {1,2,3}: false (Python: 4 in {1,2,3})
• ∀ x ∈ {1,2,3}, x > 0 and x < 4: true (Python: forall({1,2,3}, lambda x: x > 0 and x < 4))
• ∃ x ∈ {1,2,3}, x < 1 or x > 3: false (Python: exists({1,2,3}, lambda x: x < 1 or x > 3))
• ∀ x ∈ ∅, f: true
• ∃ x ∈ ∅, f: false
Notice that when we quantify over an empty set with a universal quantifier ∀, the formula is always true. When we quantify over an empty set with an existential quantifier, the formula is always false (since no element satisfying any formula could exist if no elements exist at all). We can see that the Python functions for these quantifiers are consistent with this interpretation.
Fact: Let X = {x1 , ..., xn} be a finite set and let P be a predicate that applies to a single integer argument. Then we have the following correspondences between quantifiers and logical operators:
∀ x ∈ X, P(x)
iff
P(x1) and P(x2) and P(x3) and ... and P(xn)
∃ x ∈ X, P(x)
iff
P(x1) or P(x2) or P(x3) or ... or P(xn)
Notice that if X is empty, the "base case" for ∀ must be true (since that is the identity of the and logical operator), while the "base case" for ∃ must be false (since that is the identity of the or logical operator).
Exercise: Implement Python functions that correspond to formulas which can be used to define each of the following statements about a set X and a predicate P.
• All the elements of a set X satisfy the predicate P.
# We provide two equivalent implementations.
def all(X, P):
return forall(X, P)
def all(X, P):
S = {x for x in X if P(x)}
return len(S) == len(X)
• None of the elements of a set X satisfy the predicate P.
# We provide two equivalent implementations.
def none(X, P):
return forall(X, lambda x: not P(x))
def none(X, P):
S = {x for x in X if P(x)}
return len(S) == 0
• At most one of the elements of a set X satisfies the predicate P.
def atMostOne(X, P):
S = {x for x in X if P(x)}
return len(S) <= 1
• At least one of the elements of a set X satisfies the predicate P.
# We provide two equivalent implementations.
def atLeastOne(X, P):
return exists(X, P)
def atLeastOne(X, P):
S = {x for x in X if P(x)}
return len(S) >= 1
Exercise: Use quantifiers to implement a Python function corresponding to the predicate p is prime for any integer p.
def prime(p):
return p > 1 and forall(set(range(2, p)), lambda n: p % n != 0)
### [link] 2.6. Formulas: predicates dealing with finite sets of integers
Definition: The following are examples of formulas that contain relational operators dealing with finite sets of integers.
• 3 ∈ {1,2,3}: true (Python: 3 in {1,2,3})
• {1,2} ⊂ {1,2,3}: true (Python: subset({1,2}, {1,2,3}))
• {4,5} ⊂ {1,2,3}: false (Python: subset({4,5}, {1,2,3}))
Below are the general forms of formulas containing relational operators dealing with finite sets of integers.
• z ∈ S: true if z is an element of S; false otherwise
• S1 ⊂ S2: ∀ z ∈ S1, z ∈ S2
• S1 = S2: S1 ⊂ S2 and S2 ⊂ S1
### [link] 2.7. Terms: set products and binary relations
Definition: The product of two sets X and Y is denoted X × Y and is defined to be the set of ordered pairs (x,y) for every possible combination of x ∈ X and y ∈ Y.
Example:
• {1,2} × {5,6,7}: {(1,5),(1,6),(1,7),(2,5),(2,6),(2,7)} (Python: { (x,y) for x in {1,2} for y in {5,6,7} })
Definition: A set R is a relation between the sets X and Y if R ⊂ X × Y. We also say that a set R is a relation on a set X if R ⊂ X × X.
Example: Suppose we have the sets X = {a, b, c} and Y = {D, E, F}. Then one possible relation between X and Y is {(a, D), (c, E)}. One possible relation on X is {(a, a), (a, b), (a, c), (b, b), (c, a)}.
### [link] 2.8. Formulas: predicates dealing with relations
There are several common properties that relations may possess.
• X × Y is the set product of X and Y: X × Y = { (x,y) | x ∈ X, y ∈ Y } !relation({'a','b','c'}, {'x','y','z'}, {('a','x'),('a','y'),('a','z'),('b','x'),('b','y'),('b','z'),('c','x'),('c','y'),('c','z')})
• R is a relation between X and Y: R ⊂ X × Y !relation({'a','b','c'}, {'x','y','z'}, {('a','x'),('b','x'),('b','z'),('c','z')})
• R is a function from X to Y (a many-to-one map from X to Y): R is a relation between X and Y and ∀ x ∈ X, there is at most one y ∈ Y s.t. (x,y) ∈ R !relation({'a','b','c'}, {'x','y','z'}, {('a','x'),('b','x'),('c','z')})
• R is an injection from X to Y: R is a function from X to Y and ∀ y ∈ Y, there is at most one x ∈ X s.t. (x,y) ∈ R !relation({'a','b','c'}, {'x','y','z'}, {('a','x'),('b','y')})
• R is a surjection from X to Y: R is a function from X to Y and ∀ y ∈ Y, there is at least one x ∈ X s.t. (x,y) ∈ R !relation({'a','b','c','d'}, {'x','y','z'}, {('a','x'),('c','y'),('d','z')})
• R is a bijection between X and Y: R is an injection from X to Y and R is a surjection from X to Y !relation({'a','b','c'}, {'x','y','z'}, {('a','y'),('b','z'),('c','x')})
• R is a permutation on X: R ⊂ X × X and R is a bijection between X and X !relation({'a','b','c'}, {('a','b'),('b','c'),('c','a')})
• R is a reflexive relation on X: R ⊂ X × X and ∀ x ∈ X, (x,x) ∈ R !relation({'a','b','c'}, {('a','a'),('b','b'),('c','c')})
• R is a symmetric relation on X: R ⊂ X × X and ∀ x ∈ X, ∀ y ∈ X, (x,y) ∈ R implies (y,x) ∈ R !relation({'a','b','c'}, {('a','b'),('b','a'),('c','c')})
• R is a transitive relation on X: R ⊂ X × X and ∀ x ∈ X, ∀ y ∈ X, ∀ z ∈ X, ((x,y) ∈ R and (y,z) ∈ R) implies (x,z) ∈ R !relation({'a','b','c'}, {('a','b'),('b','c'),('a','c')})
• R is an equivalence relation on X (a congruence relation on X): R ⊂ X × X and R is a reflexive relation on X and R is a symmetric relation on X and R is a transitive relation on X !relation({'a','b','c'}, {('a','a'),('b','b'),('a','b'),('b','a'),('c','c')})
Exercise: Define the set of all even numbers between 0 and 100 (inclusive). There are at least two ways we can do this:
evens = { 2 * x for x in set(range(0,51)) }
evens = { x for x in set(range(0,101)) if x % 2 == 0 }
Exercise: Implement a Python function that computes the set product of two sets X and Y.
def product(X, Y):
return { (x,y) for x in X for y in Y }
Exercise: Implement a Python function that takes a finite set of integers and builds the relation on that set corresponding to the relational operator ≤.
def leq(S):
return { (x, y) for x in S for y in S if x <= y }
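On the set {1,2,3} this yields every pair related by ≤ (the definition is restated here so the snippet is self-contained):

```python
def leq(S):
    # All pairs (x, y) drawn from S with x <= y.
    return {(x, y) for x in S for y in S if x <= y}

print(sorted(leq({1, 2, 3})))
# [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]
```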
Exercise: Implement a Python function that determines whether a relation R is a relation over a set X.
# Using our definition of subset().
def relation(R, X):
return subset(R, product(X, X))
# Using the built-in set implementation.
def relation(R, X):
return R.issubset(product(X, X))
Exercise: One property of relations that is studied in other subject areas within computer science and mathematics is asymmetry. We say that R is an asymmetric relation on a set X if:
∀ x ∈ X, ∀ y ∈ X, (x,y) ∈ R implies ¬((y,x) ∈ R)
One example of an asymmetric relation is the "less than" relation on integers, usually represented using the < relational operator. How can we write a Python function that takes as its input a relation R and a set X and determines whether that relation is asymmetric? Recall that we can represent the implication logical operator using the Python operator <=.
def isAsymmetric(X, R):
return relation(R,X) and forall(X, lambda a: forall(X, lambda b: ((a,b) in R) <= (not ((b,a) in R))))
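We can check this definition against the < relation (which is asymmetric) and the ≤ relation (which is not, since it is reflexive). The helpers below are compact restatements of the earlier definitions so that the snippet is self-contained, using Python's built-in all() in place of the loop-based forall():

```python
def forall(S, P):
    return all(P(x) for x in S)

def relation(R, X):
    return R.issubset({(x, y) for x in X for y in X})

def isAsymmetric(X, R):
    # (a,b) in R implies not (b,a) in R; implication is encoded with <=.
    return relation(R, X) and forall(X, lambda a: forall(X, lambda b: ((a, b) in R) <= (not ((b, a) in R))))

X = {1, 2, 3}
lt = {(x, y) for x in X for y in X if x < y}
le = {(x, y) for x in X for y in X if x <= y}
print(isAsymmetric(X, lt))  # True
print(isAsymmetric(X, le))  # False: (1,1) in R would require (1,1) not in R
```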
### [link] 2.9. Terms: set quotients and quotient maps
Given an equivalence relation on a set, we can partition that set into a collection of distinct subsets, called equivalence classes, such that all the elements of each subset are equivalent to one another.
Definition: For any set X and equivalence relation R on X, let the quotient set of X with respect to R, denoted X/R, be defined as:
X/R
=
{{y | y ∈ X, (x,y) ∈ R} | x ∈ X}
Exercise: Implement a Python function that takes two inputs (a set X and an equivalence relation R on that set), and outputs the quotient set X/R.
def quotient(X,R):
return {frozenset({y for y in X if (x,y) in R}) for x in X}
Below, we evaluate the above function on an example input.
>>> quotient({1,2,3,4}, {(1,1),(2,2),(3,3),(2,3),(3,2),(4,4)})
{frozenset({4}), frozenset({2, 3}), frozenset({1})}
Definition: For a set X and a relation R over X, the relation that relates each x ∈ X to its equivalence class in X under R is called the quotient map. The function is typically denoted using [ ... ]. That is, [x] is the equivalence class of x under R.
Exercise: Implement a Python function that takes two inputs (a set X and an equivalence relation R on that set), and outputs the quotient map taking each element x ∈ X to its corresponding equivalence class [x] ∈ X/R.
def quotientMap(X,R):
return {(x, frozenset({y for y in X if (x,y) in R})) for x in X}
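Evaluating the quotient map on the earlier example input shows each element paired with its equivalence class (the definition is restated here so the snippet runs on its own; since the map is a function, we can wrap the result in a dict for easy lookup):

```python
def quotientMap(X, R):
    return {(x, frozenset({y for y in X if (x, y) in R})) for x in X}

X = {1, 2, 3, 4}
R = {(1, 1), (2, 2), (3, 3), (2, 3), (3, 2), (4, 4)}
qm = dict(quotientMap(X, R))
print(qm[2] == qm[3])  # True: 2 and 3 share an equivalence class
print(qm[1])           # frozenset({1})
```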
Exercise: Determine whether {(x,y) | x ∈ ℤ, y ∈ ℤ, (x + y) mod 2 = 0} is an equivalence relation.
Example: Let X be a set of humans. Let R be the following relation R ⊂ X × X:
R
=
{ (x, y) | x ∈ X, y ∈ X, x is y or x is a relative of y }
Then R is an equivalence relation (we assume everyone is related to themselves, and that if two people are both related to the same person, then they are themselves related). Furthermore, the quotient set X/R is a separation of the humans in X into families of relatives. No one in any equivalence class (a.k.a., a family) in X/R is related to anyone in any other equivalence class, and everyone in each equivalence class in X/R is related to everyone else in that equivalence class. Thus, |X/R| is the number of distinct families of humans in the set X.
More generally, we can view the quotient set X/R as the separation of X into as many groups as possible such that no two relatives are separated into separate groups. We can illustrate this with a Python example. Suppose we have the following relation on the set {'Alice', 'Bob', 'Carl', 'Dan', 'Eve'}:
R = {\
('Alice', 'Alice'), ('Bob', 'Bob'), ('Carl', 'Carl'), ('Dan', 'Dan'), ('Eve', 'Eve'),\
('Alice', 'Carl'), ('Carl', 'Alice'), ('Dan', 'Eve'), ('Eve', 'Dan')\
}
We can then compute the set of families:
families = quotient({'Alice', 'Bob', 'Carl', 'Dan', 'Eve'}, R)
A visualization might look as follows:
#graph({'Alice','Carl','Bob','Dan','Eve'}, {('Alice','Alice'),('Bob','Bob'),('Carl','Carl'),('Dan','Dan'),('Eve','Eve'),('Alice','Carl'),('Carl','Alice'),('Dan', 'Eve'), ('Eve', 'Dan')})
Modular arithmetic can be viewed as a variant of integer arithmetic in which we introduce a congruence (or equivalence) relation on the integers and redefine the integer term operators so that they are defined on these congruence (or equivalence) classes.
### [link] 3.1. Terms: congruence classes in ℤ/mℤ, term operators, and relations
Definition: For any m ∈ ℤ, define:
mℤ
=
{x ⋅ m | x ∈ ℤ}
Definition: For any k ∈ ℤ and m ∈ ℤ, we define the congruence class k + mℤ below:
k + mℤ
=
{k + (x ⋅ m) | x ∈ ℤ}
The word congruence is a synonym for equivalence; in the next definition below, each congruence class k + mℤ is an equivalence class in a particular quotient set in which numbers are grouped by the remainder k that they have when divided by the chosen modulus m.
Exercise: Show that the relation R = {(x,y) | x ∈ ℤ, y ∈ ℤ, x mod 17 = y mod 17} is an equivalence relation.
Definition: For any given m ∈ ℤ, we define the set of all congruence classes modulo m:
ℤ/mℤ
=
ℤ/{(x,y) | x ∈ ℤ, y ∈ ℤ, x mod m = y mod m}
In English, we can read the notation ℤ/mℤ as "ℤ modulo m" or simply "ℤ mod m".
Example: How do we determine whether 7 ∈ 2 + 5ℤ is a true formula? We can expand the notation 2 + 5ℤ into its definition:
7
∈
2 + 5ℤ
7
∈
{ 2 + 5 ⋅ z | z ∈ ℤ }
Thus, if 7 is in the set of elements of the form 2 + 5 ⋅ z, then we must be able to solve the following equation on integers for z:
7
=
2 + 5 ⋅ z
5
=
5 ⋅ z
1
=
z
Since we can solve for z, it is true that 7 ∈ 2 + 5ℤ.
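We can also check membership in a congruence class computationally: n is in k + mℤ exactly when n − k is an integer multiple of m. The function name below is our own choice for illustration:

```python
def inCongruenceClass(n, k, m):
    # n is in k + mZ iff n - k is divisible by m.
    return (n - k) % m == 0

print(inCongruenceClass(7, 2, 5))  # True: 7 is in 2 + 5Z
print(inCongruenceClass(8, 2, 5))  # False
```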
Informally and intuitively, we could think of the structure of the above set as a logical consequence of letting all multiples of m be equivalent to 0. That is, if 0 = m = 2m = ..., then 1 = m + 1 = 2m + 1 = ..., and so on.
• z: the congruence class containing the integer z, i.e., (z mod m) + mℤ
• z + mℤ: {z + (a ⋅ m) | a ∈ ℤ}
• c1 + c2: {(x + y) | x ∈ c1, y ∈ c2}
• c1 − c2: {(x − y) | x ∈ c1, y ∈ c2}
• c1 ⋅ c2: {(x ⋅ y) | x ∈ c1, y ∈ c2}
• c raised to the power z: c ⋅ ... ⋅ c (z instances of c)
• c!: c ⋅ (c-1) ⋅ (c-2) ⋅ ... ⋅ 1
• c1 ≡ c2: true only if c1 ⊂ c2 and c2 ⊂ c1, i.e., set equality applied to the congruence classes c1 and c2; false otherwise
### [link] 3.2. Algebra of congruence classes
We use the familiar symbols +, -, ⋅, and 0, 1, 2, 3, 4, ... to represent operations on congruence classes. When these symbols are used to represent operations on integers, they have certain algebraic properties. This allows us, for example, to solve equations involving integers and variables, such as in the example below (in which we add the same integer to both sides, use associativity of + and commutativity of ⋅, and cancel 2 on both sides of the equation):
2 ⋅ x − 3
=
1
(2 ⋅ x − 3) + 3
=
1 + 3
2 ⋅ x
=
4
2 ⋅ x
=
2 ⋅ 2
x ⋅ 2
=
2 ⋅ 2
x
=
2
Do the operations on congruence classes, represented by the operators +, -, and ⋅, also share the familiar algebraic properties of the corresponding operations on integers? In many cases they do, but in some cases these properties only apply under specific circumstances.
Example: Suppose we write the formula 3 + 4 ≡ 2 where 2, 3, and 4 are congruence classes in ℤ/5ℤ. What is the meaning of this formula? First, note the following equivalence.
{ x + y | x ∈ ℤ, y ∈ ℤ} = {z | z ∈ ℤ }
Now, we expand the definitions of congruence classes and the operation + on congruence classes below.
3 + 4
=
(3 + 5ℤ) + (4 + 5ℤ)
=
{3 + a ⋅ 5 | a ∈ ℤ} + {4 + b ⋅ 5 | b ∈ ℤ}
=
{(x + y) | x ∈ {3 + a ⋅ 5 | a ∈ ℤ}, y ∈ {4 + b ⋅ 5 | b ∈ ℤ}}
=
{(3 + a ⋅ 5) + (4 + b ⋅ 5) | a ∈ ℤ, b ∈ ℤ}
=
{(3 + 4) + (a ⋅ 5) + (b ⋅ 5) | a ∈ ℤ, b ∈ ℤ}
=
{2 + 5 + (a ⋅ 5) + (b ⋅ 5) | a ∈ ℤ, b ∈ ℤ}
=
{2 + (1 + a + b) ⋅ 5 | a ∈ ℤ, b ∈ ℤ}
=
{2 + c ⋅ 5 | c ∈ ℤ}
=
2 + 5ℤ
=
2
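Working with the representatives 0,...,4 of the classes in ℤ/5ℤ, the same computation can be carried out with Python's % operator:

```python
m = 5
print((3 + 4) % m)  # 2
# Closure: the sum of any two representatives reduces to one of the m classes.
print(all((x + y) % m in range(m) for x in range(m) for y in range(m)))  # True
```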
Fact: The set ℤ/mℤ is closed under the operation represented by +.
Fact: It is the case that ℤ/mℤ = {0,...,m-1} where 0,...,m-1 are congruence classes, and thus, |ℤ/mℤ| = m.
Fact: The addition operation on congruence classes in ℤ/mℤ represented by + is commutative, associative, and has the additive identity 0 + mℤ (a.k.a., mℤ, or simply 0).
Fact: The multiplication operation on congruence classes in ℤ/mℤ represented by ⋅ is commutative, associative, and has the multiplicative identity 1 + mℤ (a.k.a., 1).
• ℤ/mℤ is closed under +: ∀ x,y ∈ ℤ/mℤ, x + y ∈ ℤ/mℤ
• + is commutative on ℤ/mℤ: ∀ x,y ∈ ℤ/mℤ, x + y ≡ y + x
• + is associative on ℤ/mℤ: ∀ x,y,z ∈ ℤ/mℤ, (x + y) + z ≡ x + (y + z)
• + has a (left and right) identity 0 in ℤ/mℤ: ∀ x ∈ ℤ/mℤ, 0 + x ≡ x and x + 0 ≡ x
• ℤ/mℤ has inverses with respect to +: ∀ x ∈ ℤ/mℤ, (m - x) + x ≡ 0
• ℤ/mℤ is closed under ⋅: ∀ x,y ∈ ℤ/mℤ, x ⋅ y ∈ ℤ/mℤ
• ⋅ is commutative on ℤ/mℤ: ∀ x,y ∈ ℤ/mℤ, x ⋅ y ≡ y ⋅ x
• ⋅ is associative on ℤ/mℤ: ∀ x,y,z ∈ ℤ/mℤ, (x ⋅ y) ⋅ z ≡ x ⋅ (y ⋅ z)
• ⋅ has a (left and right) identity 1 in ℤ/mℤ: ∀ x ∈ ℤ/mℤ, 1 ⋅ x ≡ x and x ⋅ 1 ≡ x
• ⋅ distributes across + in ℤ/mℤ: ∀ x,y,z ∈ ℤ/mℤ, x ⋅ (y + z) ≡ (x ⋅ y) + (x ⋅ z)
In the rest of this subsection, we derive some familiar algebraic properties for congruence classes. We derive some of these properties from the properties of the divisibility predicate (i.e., for any x, y ∈ ℤ, x | y iff y/x ∈ ℤ). These properties will allow us to use algebra to solve equations involving congruence classes in ℤ/mℤ.
It is worth considering why we choose to work with the set of congruence classes ℤ/mℤ = {0 + mℤ, 1 + mℤ, 2 + mℤ, ..., (m-1) + mℤ} and operations over it rather than simply working with equations involving integer variables and the modulus operator. Modular arithmetic textbooks can be written (and such textbooks exist) in which the techniques covered in these notes are used to solve integer equations of the form f(x) mod m = g(x) mod m for some functions f and g. Some of the reasons for using the set of congruence classes ℤ/mℤ include:
• it is often possible to find the unique solution to an equation over ℤ/mℤ, while equations over ℤ involving the modulus operation may have infinitely many solutions;
• the set ℤ/mℤ is finite, so there is always a finite number of possible solutions to test, even if this is very inefficient, while equations over the integers involving modulus have an infinite range of possible solutions to test;
• the set ℤ/mℤ is a group and is a prototypical example of an algebraic structure, and gaining experience with algebraic structures is one of the purposes of this course, as algebraic structures are ubiquitous in computer science and its areas of application.
Fact: Given an equation involving congruence classes, we are allowed to add the same value to both sides. In other words, for any congruence classes a, b, c ∈ ℤ/mℤ, a ≡ b implies a + c ≡ b + c:
a
≡
b (mod m)
a + c
≡
b + c (mod m)
To see that this is true, we can simply appeal to algebraic facts about integers:
a
=
b
a + c
=
b + c
(a + c) mod m
=
(b + c) mod m
a + c
≡
b + c (mod m)
Thus, the two congruence classes contain the same elements, so they are equivalent.
Fact: For any congruence classes a, b, c ∈ ℤ/mℤ, a ≡ b implies a - c ≡ b - c. We can adjust the argument for + in the following way:
a
=
b
a - c
=
b - c
(a - c) mod m
=
(b - c) mod m
a - c
≡
b - c (mod m)
We saw that we can add and subtract from both sides of an equation involving congruence classes. Can we also "divide" both sides by the same factor (or "cancel" that factor) in such an equation?
Example: Consider the following sequence of equations within ℤ/2ℤ:
4
≡
6
2 ⋅ 2
≡
2 ⋅ 3
2
≡
3
Clearly, 2 ≢ 3 (mod 2) since the left-hand side is even and the right-hand side is odd. Thus, cancelling 2 on both sides of the equation in the above case is not correct. On the other hand, we have the following:
10
≡
6
2 ⋅ 5
≡
2 ⋅ 3
5
≡
3
In the above case, cancelling 2 on both sides led to a true equation.
It seems that we cannot always "divide" by the same factor on both sides, but we can do so under certain conditions. In order to characterize at least some of the cases in which this is possible, we need a few preliminary facts.
Fact: For any a, m ∈ ℤ, a mod m = 0 iff we have that m | a. Thus, the following are all equivalent (i.e., all are true at the same time):
m
|
a
a mod m
=
0
a
≡
0 (mod m)
0
≡
a (mod m)
We can derive the fact that m | a iff a mod m = 0 as follows. If a mod m = 0 then by definition of mod we have:
a - ⌊ a/m ⌋ ⋅ m
=
0
a
=
⌊ a/m ⌋ ⋅ m
a / m
=
⌊ a/m ⌋
a / m
∈
ℤ
m
|
a
If m | a then by definition of m | a we have:
m
|
a
a / m
∈
ℤ
a / m
=
⌊ a/m ⌋
a
=
⌊ a/m ⌋ ⋅ m
a - ⌊ a/m ⌋ ⋅ m
=
0
a mod m
=
0
Fact: For any a, b, c ∈ ℕ, if c | a then c | (a ⋅ b).
Because c | a, it must be that a/c ∈ ℤ. But then we have that:
(a ⋅ b) / c = (a / c) ⋅ b
Since (a / c) ∈ ℤ and b ∈ ℤ, we have (a / c) ⋅ b ∈ ℤ and thus (a ⋅ b) / c ∈ ℤ. Thus, c | (a ⋅ b).
Fact: In ℤ/mℤ, multiplying by the 0 + mℤ congruence class yields the 0 + mℤ congruence class. For any a, b, m ∈ ℕ, if a ≡ 0 (mod m) then a ⋅ b ≡ 0 (mod m).
We can show this as follows:
a
≡
0 (mod m)
m
|
a
m
|
(a ⋅ b)
a ⋅ b
≡
0 (mod m)
Thus, 0 ∈ ℤ/mℤ behaves with respect to multiplication over ℤ/mℤ much the same way that 0 ∈ ℤ behaves with respect to multiplication over ℤ.
Example: For any a, b, m ∈ ℕ, it is not necessarily the case that just because a ⋅ b ≡ 0 (mod m), either a or b must be the congruence class 0 ∈ ℤ/mℤ. For example, let m = 6, a = 4, and b = 9. Then we have:
4 ⋅ 9
≡
0 (mod 6)
(2 ⋅ 2) ⋅ (3 ⋅ 3)
≡
0 (mod 6)
2 ⋅ (2 ⋅ 3) ⋅ 3
≡
0 (mod 6)
2 ⋅ 6 ⋅ 3
≡
0 (mod 6)
However, we have that:
6
∤
4
4
≢
0 (mod 6)
6
∤
9
9
≢
0 (mod 6)
Thus, the congruence class 0 ∈ ℤ/mℤ does not always behave the same way that 0 ∈ ℤ behaves.
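A quick computation confirms the example: the product 4 ⋅ 9 reduces to the zero class modulo 6 even though neither factor does:

```python
m = 6
print((4 * 9) % m)   # 0
print(4 % m, 9 % m)  # 4 3 -- neither factor is congruent to 0 mod 6
```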
Fact (Euclid's lemma): For any a, b, p ∈ ℤ, if p is prime and p | (a ⋅ b), then it must be that p | a or p | b (or both).
Fact: For any congruence classes a, b, c ∈ ℤ/pℤ where p is prime, if c is not divisible by p then a ⋅ c ≡ b ⋅ c implies a ≡ b.
We can derive the above fact by using the following steps:
a ⋅ c
≡
b ⋅ c
(a ⋅ c) - (b ⋅ c)
≡
0
((a ⋅ c) - (b ⋅ c)) mod p
=
0
((a - b) ⋅ c) mod p
=
0
p
|
((a - b) ⋅ c)
By Euclid's lemma, the fact that c is not divisible by p requires that a - b must be divisible by p. Thus:
p
|
(a - b)
(a - b) mod p
=
0
a - b
≡
0
a
≡
b
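We can corroborate this cancellation fact by brute force for a small prime, checking every triple of representatives, and observe that the analogous check fails for a composite modulus such as 6, where 2 ⋅ 2 ≡ 2 ⋅ 5 (mod 6) but 2 ≢ 5:

```python
def cancellation_holds(m):
    # For every nonzero c, does a*c = b*c (mod m) force a = b among
    # the representatives {0,...,m-1}?
    return all(a == b
               for c in range(1, m)
               for a in range(m)
               for b in range(m)
               if (a * c) % m == (b * c) % m)

print(cancellation_holds(7))  # True: 7 is prime
print(cancellation_holds(6))  # False: e.g., 2*2 and 2*5 agree mod 6
```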
Example: Solve the following equation for all possible congruence classes x ∈ ℤ/3ℤ:
6 ⋅ x
≡
0 (mod 3)
Since 6 ≡ 0 (mod 3), we can rewrite the equation as follows:
0 ⋅ x
≡
0 (mod 3)
Thus, any congruence class x ∈ {0 + 3ℤ, 1 + 3ℤ, 2 + 3ℤ} is a solution to the equation.
Example: Solve the following equation for all possible congruence classes x ∈ ℤ/5ℤ:
2 ⋅ x
≡
0 (mod 5)
We know that 2 ⋅ 0 ≡ 0 (mod 5), so we can rewrite the above by substituting the right-hand side of the equation:
2 ⋅ x
≡
2 ⋅ 0 (mod 5)
We can now cancel 2 on both sides of the equation using the cancellation law because 5 is prime and 2 ≢ 0 (mod 5):
x
≡
0 (mod 5)
Thus, the only solution is the single congruence class x = 0 + 5ℤ.
Example: Solve the following equation for all possible congruence classes x ∈ ℤ/11ℤ:
3 ⋅ x + 5
≡
6 (mod 11)
We can begin by subtracting 5 from both sides:
3 ⋅ x
≡
1 (mod 11)
We can then see that 12 ≡ 1 (mod 11), so we can substitute 1 with 12 on the right-hand side:
3 ⋅ x
≡
12 (mod 11)
We can then rewrite 12 as 3 ⋅ 4:
3 ⋅ x
≡
3 ⋅ 4 (mod 11)
Since 11 is prime and 3 ≢ 0 (mod 11), we can cancel the 3 on both sides to solve the problem:
x
≡
4 (mod 11)
Thus, the only solution is the single congruence class x = 4 + 11ℤ.
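Since ℤ/11ℤ is finite, we can also confirm the answer by enumerating all eleven representatives:

```python
# Keep every representative x of Z/11Z satisfying 3*x + 5 = 6 (mod 11).
solutions = [x for x in range(11) if (3 * x + 5) % 11 == 6]
print(solutions)  # [4]
```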
Example: Let a ∈ ℤ be any integer. Solve the following equation for all possible congruence classes x ∈ ℤ/7ℤ:
a + 3 ⋅ x
≡
6 - 6 ⋅ a (mod 7)
We can begin by adding 6 ⋅ a to both sides:
(a + 6 ⋅ a) + 3 ⋅ x
≡
6 (mod 7)
Now we can add a + 6 ⋅ a to obtain 7 ⋅ a:
7 ⋅ a + 3 ⋅ x
≡
6 (mod 7)
We know that 7 ≡ 0 (mod 7), so we know that for any a ∈ ℤ, 7 ⋅ a ≡ 0 (mod 7). Thus, we can substitute the term 7 ⋅ a with 0:
0 + 3 ⋅ x
≡
6 (mod 7)
3 ⋅ x
≡
6 (mod 7)
3 ⋅ x
≡
3 ⋅ 2 (mod 7)
Since 7 is prime and 3 ≢ 0 (mod 7), we can cancel 3 on both sides:
x
≡
2 (mod 7)
Thus, the only solution is the single congruence class x = 2 + 7ℤ.
Exercise: Solve the following equation for all possible congruence classes x ∈ ℤ/13ℤ:
4 ⋅ x − 2
≡
10 (mod 13)
Exercise: Let a ∈ ℤ be any integer. Solve the following equation for all possible congruence classes x ∈ ℤ/19ℤ. Hint: notice that 17 + 19 = 36.
6 ⋅ x − 11
≡
6 (mod 19)
### [link] 3.3. Multiplication by a congruence class as a permutation
We have seen that in certain situations, it is possible to cancel on both sides of an equation involving congruence classes. While Euclid's lemma made this possible, we might be interested in finding other ways to understand why cancelling is possible in this particular situation. In fact, the alternative explanation is useful in its own right because it can be applied to the practical problem of generating random numbers.
Let us consider the situations in which we can cancel on both sides in an equation involving integers. Suppose a, b, c ∈ ℤ, and:
a ⋅ c
=
b ⋅ c
It is possible to cancel in the above equation exactly when the operation of multiplication by c is invertible. In particular, if c is 0, then a ⋅ c = 0, and all information about a is lost (likewise for b ⋅ c = 0). So, if c = 0, the operation of multiplication by c is not invertible (i.e., multiple inputs map to the same output, namely 0, so multiplication by c = 0 is not a bijection), and it is not possible to cancel c on both sides. In all other situations where c ≠ 0, the operation is invertible (we can simply perform integer division by c on a ⋅ c and b ⋅ c). This raises a natural question: does the ability to cancel congruence classes on both sides of a congruence class equation also imply that the operation of multiplying by the congruence class that can be cancelled on both sides is an invertible operation? The answer is "yes".
For a prime p, multiplication by a congruence class in ℤ/pℤ corresponds to an invertible relation, also known as a bijection or a permutation.
Fact: For any p ∈ ℕ, for any a ∈ {1,...,p-1}, if p is prime then the following relation R is a permutation from {0, 1,...,p-1} to ℤ/pℤ:
R
=
{ (0, (0 ⋅ a) mod p), (1, (1 ⋅ a) mod p), (2, (2 ⋅ a) mod p), ..., (p-1, ((p-1) ⋅ a) mod p) }
=
{ (i, (i ⋅ a) mod p) | i ∈ {0,...,p-1} }
Recall that R is a permutation if R is a bijection. In order to be a bijection, R must be both an injection and a surjection.
To show that R is an injection, suppose that it is not. We will derive a contradiction from this assumption, which will tell us that the assumption must be false.
If it is not injective, then there exist distinct i ∈ {0,...,p-1} and j ∈ {0,...,p-1} where without loss of generality j < i such that:
i
≠
j
(i ⋅ a) mod p
=
(j ⋅ a) mod p
But the above implies the following:
(i ⋅ a) mod p
=
(j ⋅ a) mod p
((i ⋅ a) - (j ⋅ a)) mod p
=
0 mod p
((i - j) ⋅ a) mod p
=
0 mod p
p
|
(i - j) ⋅ a
By Euclid's lemma, the above implies that p must divide either (i − j) or a. But we also know that:
• because 1 ≤ a < p, p does not divide a;
• because p > i - j > 0, p cannot divide (i − j).
Alternatively, notice that in (ia) mod p = (ja) mod p , we should be able to simply divide both sides of the equation by a because p is prime; however, this contradicts our initial assumption!
Since assuming that distinct i and j can be mapped to the same element when they are multiplied by a leads to a contradiction, it must be that this is not possible. Thus, no two distinct i and j map to the same result, so R is an injection from {0,...,p-1} to ℤ/pℤ and we have that:
|{0,...,p-1}|
=
|ℤ/pℤ|
Thus, since R maps to at least p distinct elements, and |ℤ/pℤ| has at most p elements, R must map to every element in ℤ/pℤ, so it is also a surjection by the Pigeonhole principle.
Since R is both an injection and a surjection from {0,...,p-1} to ℤ/pℤ, it must be a bijection, and thus a permutation.
Example: Consider 2 ∈ ℤ/5ℤ. We can write out the results of multiplying all the congruence classes in ℤ/5ℤ by the congruence class 2:
2 ⋅ 0
≡
0 (mod 5)
2 ⋅ 1
≡
2 (mod 5)
2 ⋅ 2
≡
4 (mod 5)
2 ⋅ 3
≡
1 (mod 5)
2 ⋅ 4
≡
3 (mod 5)
Notice that each congruence class in ℤ/5ℤ appears exactly once as a result.
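The same computation in Python makes the permutation visible:

```python
p, a = 5, 2
results = [(a * x) % p for x in range(p)]
print(results)          # [0, 2, 4, 1, 3]
print(sorted(results))  # [0, 1, 2, 3, 4] -- each class appears exactly once
```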
### [link] 3.4. Generating random numbers
Suppose we wish to automatically generate a sequence of "random" numbers using an algorithm. Before we can implement an algorithm and determine whether it solves our problem, we must first determine what constitutes an acceptable "random" sequence.
Example: Suppose we want to find a way to generate a "random" sequence v of positive integers. Assume we have only one requirement.
Requirement 1: The sequence v consists of m distinct integers between 0 and m-1, where vi is the ith element in the sequence.
In this case, a relation R ⊂ ℕ × ℤ/mℤ that is a permutation would be sufficient. One such relation is:
v0
=
0 mod m
vi
=
(vi-1 + 1) mod m
R0
=
{(i, vi) | i ∈ {0,...,m-1}}
Notice that the second term in (x, x mod m) is in this case the congruence class modulo m that corresponds to x. The relation R0 is indeed a permutation, but it does not satisfy our intuitive notion of a random sequence because it simply counts from 0 to m − 1, so we impose another requirement.
Requirement 2: The sequence v must not be the trivial sequence (0,...,m-1).
Suppose we propose the following relation:
v0
=
0
vi
=
(vi-1 + 2) mod m
R1
=
{(i, vi) | i ∈ {0,...,m-1}}
Notice that we can redefine R1 above more concisely:
R1
=
{(i, (0 + 2 ⋅ i) mod m) | i ∈ {0,...,m-1}}
Does R1 always satisfy both requirements? Suppose that m is even. Then there exists j ∈ {0,...,m-1} such that 2 ⋅ j = m. But this means that 2 ⋅ j ≡ 0, so 2 ⋅ (j+1) ≡ 2 ⋅ j + 2 ⋅ 1 ≡ 2 ⋅ 1 ≡ 2 and so on. This means that R1 is not injective, so the first requirement is not met when m is even. Suppose we define R2 to be a variant of R1 parameterized by some b ∈ {0,...,m-1}:
R2
=
{(i, (0 + b ⋅ i) mod m) | i ∈ {0,...,m-1}}
What conditions can we impose on b and m so that they satisfy both requirements?
After examining the permutation we can obtain by multiplying all the congruence classes in some set ℤ/pℤ by a particular a ∈ ℤ/pℤ, we might wonder if we can use this fact to implement a random number generator. One immediate benefit is that this approach would satisfy several conditions that we might associate with a "good" algorithm for generating random numbers:
• the "state" of the algorithm is easy to store: it consists of a single congruence class in ℤ/pℤ, which can be represented using an integer;
• it is possible to compute the ith random number in the sequence efficiently (i.e., with a single multiplication followed by a single modulus operation);
• the sequence that is generated will contain exactly one instance of all the numbers in the chosen range {0,...,p-1};
• the sequence that is generated can, at least in some cases, be a non-trivial sequence that might appear "random".
Fact: If m is prime and b ∈ {2,...,m-1}, then R2 satisfies both requirements.
We know this is true because in this case R2 is a permutation, so it satisfies Requirement 1. Furthermore, the element v1 = b ≥ 2 differs from the element 1 at the same position in the trivial sequence, so v is never the trivial sequence. Thus, Requirement 2 is satisfied.
Algorithm: The following is one possible implementation of a simple random number generation algorithm.
1. inputs: upper bound (prime) p ℕ, seed a {0,...,p-1}, index i {0,...,p-1}
1. return (a ⋅ i) mod p
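The algorithm above can be sketched directly in Python (the function name `rand_perm` and the example parameters p = 7, a = 3 are illustrative assumptions, not values from the text):

```python
def rand_perm(p, a, i):
    # i-th element of the sequence determined by the prime upper
    # bound p and seed a: (a * i) mod p.
    return (a * i) % p

# With p = 7 and a = 3, indices 0..6 map to a permutation of {0,...,6}.
sequence = [rand_perm(7, 3, i) for i in range(7)]  # [0, 3, 6, 2, 5, 1, 4]
```

Because 3 and 7 are coprime, the sequence visits every element of ℤ/7ℤ exactly once.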
Exercise: What are some drawbacks (or unresolved issues) with building random sequences by choosing a prime m and some a {2,...,m-1}?
### [link] 3.5. Greatest common divisor and related facts
It is actually possible to generalize Euclid's lemma so that it does not rely on prime numbers existing at all. In order to do so, however, we must first introduce concepts that make it possible to reason about a particular relationship between numbers that is similar to the property of primality, but is less restrictive.
Definition: For any two x, y ℤ, we define the greatest common divisor, denoted gcd(x,y), as the greatest integer z ℤ such that z | x and z | y. Equivalently, we can define it as the maximum of a set:
gcd(x,y)
=
max{z | z ∈ ℤ, z | x, z | y}
We can also define it recursively (note that z | 0 for all z ∈ ℤ because 0/z = 0 ∈ ℤ):
gcd(x,0)
=
x
gcd(x,y)
=
gcd(y, x mod y)
To see why the recursive definition of gcd works, consider two cases. If x < y, then the two inputs are simply reversed. This ensures that the first input x is eventually larger than the second input y. If xy and they share a greatest common divisor a, then we have for n = ⌊ x/y ⌋ that:
y
=
y' ⋅ a
x
=
x' ⋅ a
x mod y
=
x - (n ⋅ y)
=
(x' ⋅ a) - (n ⋅ y)
=
x' ⋅ a - ((n ⋅ y') ⋅ a)
=
(x' - n ⋅ y') ⋅ a
Notice that (x' - ny') ⋅ a < x' ⋅ a, but that the new smaller value is still a multiple of a, so the greatest common divisor of this value and y is still a.
Example: Consider the numbers 8 and 9. The factors of 8 are 1, 2, 4, and 8, while the factors of 9 are 1, 3, and 9. Thus, the maximum of the numbers in the intersection {1,2,4,8} ∩ {1,3,9} is 1, so we have that gcd(8, 9) = 1.
Example: We can implement the inefficient algorithm for the greatest common divisor using Python in the following way:
def gcd(x, y):
    return max({z for z in range(1, min(x, y) + 1) if x % z == 0 and y % z == 0})
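The recursive definition of gcd also yields a far more efficient implementation (a sketch; Python's built-in `math.gcd` provides the same behavior):

```python
def gcd(x, y):
    # gcd(x, 0) = x; otherwise gcd(x, y) = gcd(y, x mod y).
    while y != 0:
        x, y = y, x % y
    return x

# gcd(8, 9) == 1, matching the example above; gcd(18, 42) == 6.
```

Each iteration replaces the pair (x, y) by (y, x mod y), so the second argument strictly decreases until it reaches 0.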
Exercise: Consider the following relation:
{ (x, y) | gcd(x,y) ≠ 1 }
Is this an equivalence relation?
Fact: For any x ℤ, y ℤ, x | y iff gcd(x,y) = x.
Definition: For any x ∈ ℤ, y ∈ ℤ, x and y are relatively prime (also called coprime) iff gcd(x,y) = 1.
Fact (Euclid's lemma generalization): For any a, b, c ℕ, if a | (bc) and a and b are relatively prime, then it must be that a | c.
Fact: If m, a ℕ and x, y ℤ/mℤ where gcd(a, m) = 1 (i.e., a and m are coprime), and suppose we have that:
x
≡
y (mod m)
Then it must be that:
a ⋅ x
≡
a ⋅ y (mod m)
Notice that the converse (cancellation) also holds: if a ⋅ x ≡ a ⋅ y (mod m), then m | a ⋅ (x − y), and because a and m are coprime, the generalized Euclid's lemma implies m | (x − y). In other words, when gcd(a, m) = 1 we can cancel a on both sides to obtain x ≡ y (mod m).
Exercise: Solve the following problems using the algebraic facts you know about the gcd operation.
• Find gcd(18,42).
• Find gcd(21000, 2100).
• For a positive even integer a ℤ, find gcd(a/2, a - 1).
• Suppose that for some a ∈ ℤ/mℤ, the set {(i ⋅ a) mod m | i ∈ {1,...,m-1}} contains every number in the set {1,...,m-1}. What is gcd(a, m)?
Exercise: Solve the following equation for all possible congruence classes x ℤ/16ℤ:
9 ⋅ x + 2
≡
4 (mod 16)
Exercise: Solve the following equation for all possible congruence classes x ℤ/15ℤ:
30 ⋅ x
≡
14 (mod 15)
Fact: For any a ∈ ℕ and m ∈ ℕ, if gcd(a,m) = 1, then {(i, (i ⋅ a) mod m) | i ∈ {0,...,m-1}} is a permutation.
The above can be proven in the same way as this fact, in which p was required to be prime: the primality of p was never used in isolation; only the coprime relationship between p and a was required, and gcd(a,m) = 1 supplies exactly that relationship.
Using the generalization of Euclid's lemma, it is now possible to address the drawback we observed in our initial random number generating algorithm. We can now accept any upper bound m, not just a prime upper bound, and there is no need for either the algorithm or the user to find a prime p before generating random numbers. However, we have a new problem: how do we obtain a non-trivial coprime for any given m?
Fact: For any m ℤ where m ≥ 2, gcd(m,m+1) = 1.
We can prove this fact by contradiction. Suppose there exists a factor z > 1 of m and m+1. In other words, gcd(m,m+1) > 1. Then we have that:
z ⋅ a
=
m
z ⋅ b
=
m + 1
(z ⋅ b) - (z ⋅ a)
=
m + 1 − m
z ⋅ (b − a)
=
1
b − a
=
1 / z
If z > 1 then 1/z ∉ ℤ, so (b − a) ∉ ℤ. But b − a ∈ ℤ, so this is a contradiction; no such z > 1 exists, and it must be that gcd(m, m+1) = 1.
Fact: Suppose that m ∈ ℕ is an odd positive integer. Then for any k ∈ ℕ, gcd(m, 2^k) = 1. This is because the only prime factor of 2^k is 2, and if m had any factors of 2, it would be even.
Fact: Suppose that m ∈ ℕ is of the form 2^k ⋅ m' for some odd m' and some k ≥ 1. Then m' ⋅ 2 and m have exactly the same prime factors. Because consecutive integers are coprime, (m' ⋅ 2) − 1 shares no factors with m' ⋅ 2, and thus no factors with m, so gcd(m, (m' ⋅ 2) − 1) = 1.
Algorithm: The following algorithm uses this fact to generate a new coprime. In the worst case, it runs in a linear amount of time in the length of the bit representation of m, and it may in some cases return m − 1 as a result. Note that the operations below (e.g., multiplication ⋅ and subtraction −) are on congruence classes in ℤ/mℤ and not on integers.
1. inputs: positive integer m
1. p := any number in {3,...,m-1}
2. while p − 1 and m are not coprime
1. p := p ⋅ gcd(p − 1, m)
3. return p − 1
Suppose that during the ith iteration of the algorithm, the variable has value pi. This algorithm works by "moving" the factors shared by m and that iteration's quantity pi − 1 into pi+1, thus ensuring that m and pi+1 share that factor in the subsequent iteration (and thus ensuring, by this fact, that m and pi+1 − 1 in the next iteration do not share it).
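The algorithm above can be transcribed as the following sketch (the function name `find_coprime` is an assumption, and the random starting point is replaced by an explicit `start` parameter so the example is deterministic; all arithmetic is performed mod m, as the note specifies):

```python
from math import gcd

def find_coprime(m, start=3):
    # Direct transcription of the algorithm above: while p - 1 and m
    # are not coprime, replace p with p * gcd(p - 1, m), working mod m.
    p = start % m
    while gcd((p - 1) % m, m) != 1:
        p = (p * gcd((p - 1) % m, m)) % m
    return (p - 1) % m
```

For example, find_coprime(12, 5) moves through p = 5 (gcd(4,12) = 4) and p = 8 (gcd(7,12) = 1) and returns 7, which is coprime with 12.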
Fact: Suppose that we have some m ℕ, and that we choose some b {2,...,m-1} such that b > m/2. Then it is guaranteed that:
gcd(m, b)
<
b
To see why, consider that if gcd(m, b) = b, this would mean that there exists some k ≥ 2 such that b ⋅ k = m, and this would mean that:
b ⋅ 2
≤
m
b
≤
m / 2
This contradicts our assumption that b > m/2, so it must be that gcd(m, b) < b. We can then further conclude that:
b / gcd(m, b)
>
1
Thus, if gcd(m, (b / gcd(m, b))) = 1, this provides a way to find a number that is greater than 1 and coprime with m. However, this is not guaranteed to work every time because it may still be that gcd(m, (b / gcd(m, b))) > 1. Under those conditions, the options would be to try a different b, or to use a different technique.
At this point, we can define an improved random number generation algorithm that works for any upper bound.
Algorithm: The following is another variant of a simple random number generation algorithm.
1. inputs: upper bound m ℕ, index i {0,...,m-1}
1. a := number in {2,...,m-1} s.t. a and m are coprime (always the same a for an m)
2. return (a ⋅ i) mod m
This algorithm has a more subtle flaw: poor choices of a (e.g., very small values such as 2) result in a very predictable "random" sequence. It is preferable to choose an a that is coprime with the upper bound m, and that falls somewhere between the middle and the upper quarter of the range {0,...,m-1} (i.e., between 0.5 ⋅ m and 0.75 ⋅ m).
Algorithm: In this variant, the algorithm attempts to find a coprime that is as close as possible to (4/7) ⋅ m. The value 4/7 is chosen in an ad hoc manner in this example. Other values in the range between 1/2 and 3/4 might also produce "nice"-looking results.
1. inputs: upper bound m ℕ, index i {0,...,m-1}
1. b := number in {2,...,m-1} s.t. b and m are coprime
2. for possible powers k in the range 1 to the bit length of m
1. a := the power b^k of b that is as close as possible to ((4/7) ⋅ m)
3. return (a ⋅ i) mod m
However, the above algorithm is not ideal, and common random number generators found in standard libraries (such as the linear congruential generator) use a slightly different fact about permutations that results in sequences that appear yet more "random". So far, we have learned enough to build a simplified version of a linear congruential generator in which the "multiplier" is one modulo the modulus (it happens to be a very simple extension of our existing permutation-based random number generator).
Fact: For any m ∈ ℕ, for any s ∈ ℤ/mℤ, the relation {(i, (i + s) mod m) | i ∈ ℤ/mℤ} is a permutation. Note that this permutation merely "shifts" all the congruence classes up by s (wrapping around through m ≡ 0 any values that exceed m − 1).
Algorithm (simplified linear congruential generator): Below is a simplified version of a linear congruential generator (in which the "multiplier" is one modulo the modulus).
1. inputs: upper bound m ℕ, index i {0,...,m-1}
1. a := number in {2,...,m-1} s.t. a and m are coprime (with the same additional preferences found in this algorithm)
2. s := number in {1,...,m-1} (ideally, this number is different for each input m)
3. return (a ⋅ i + s) mod m
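A minimal sketch of this simplified generator (the function name and the parameter choices m = 10, a = 7, s = 3 are illustrative assumptions):

```python
def simple_lcg(m, a, s, i):
    # i-th element of the shifted permutation sequence: (a*i + s) mod m,
    # where a is coprime with m and s is the shift.
    return (a * i + s) % m

# With m = 10, a = 7 (coprime with 10), and s = 3, the indices 0..9
# produce each value in {0,...,9} exactly once.
seq = [simple_lcg(10, 7, 3, i) for i in range(10)]  # [3, 0, 7, 4, 1, 8, 5, 2, 9, 6]
```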
The fully generalized linear congruential generator has a few drawbacks if we want to construct an implementation that satisfies our desired criteria (i.e., full coverage of the domain ℤ/mℤ, no repetition, and the ability to compute the ith random number in the sequence with a small, constant number of arithmetic operations). In particular, we would need to find an additional "multiplier" k ∈ {2,...,m − 1} such that k − 1 is divisible by every prime factor of m (and by 4 if 4 | m); this is difficult to guarantee without the prime factorization of m.
Algorithm (linear congruential generator): Below is the implementation of a linear congruential generator. Note that it makes a recursive call to itself.
1. inputs to LCG: upper bound m ℕ, index i {0,...,m-1}
1. a := number in {2,...,m-1} s.t. a and m are coprime
2. s := number in {1,...,m-1} s.t. s and m are coprime
3. k := number in {1,...,m-1} s.t. k − 1 is divisible by every prime factor of m (and by 4 if 4 | m)
4. if i = 0 then return s
5. else return (k ⋅ LCG(m, i − 1) + a) mod m
Under circumstances in which the modulus satisfies predetermined criteria (e.g., it is of the form m = 2^t for some t ∈ ℕ), it is easier to obtain an appropriate k. However, to determine the ith number in the sequence we would also need to either (1) maintain a counter in memory to keep track of the index of the current random number in the sequence, (2) compute a fairly large sum, or (3) perform an iterative loop or recursive chain of calls (as in the above implementation).
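The recursive generator above can be sketched as follows; the parameter choices (m = 16, k = 5, a = 7, s = 3) are assumptions chosen here to satisfy the algorithm's conditions, not values from the text:

```python
def lcg(m, a, s, k, i):
    # As in the algorithm above: LCG(m, 0) = s, and
    # LCG(m, i) = (k * LCG(m, i - 1) + a) mod m.
    if i == 0:
        return s
    return (k * lcg(m, a, s, k, i - 1) + a) % m

# m = 16; k = 5 (k - 1 = 4 is divisible by m's prime factor 2 and by 4);
# a = 7 and s = 3. The first 16 values visit every element of Z/16Z.
lcg_seq = [lcg(16, 7, 3, 5, i) for i in range(16)]
```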
### [link] 3.6. Generating prime numbers
Many applications require the generation of new primes. We have already seen a simple example in which generating new random sequences required prime numbers. Another important class of applications with this requirement are cryptographic schemes and protocols. In this section, we consider the problem of generating prime numbers, and in particular, random prime numbers in a particular range.
Algorithm: There exists a simple algorithm that is guaranteed to generate a new prime number distinct from any of its inputs, but it is not efficient.
1. inputs: set of primes {p1 ,... , pn}
1. m := p1 ⋅ ... ⋅ pn + 1
2. F := prime factors of m
3. return any element in F
The above algorithm must return a new prime distinct from any of the primes p1 ,... , pn. To see why, consider the following:
P
=
p1 ⋅ ... ⋅ pn
gcd(P, P + 1)
=
1
There are two possibilities: P+1 is prime, or P+1 is not prime.
• If P+1 is prime, then P + 1 > pi for all i ∈ {1,...,n}, so P+1 is a new prime.
• If P+1 is not prime, it cannot share any factors with P since gcd(P, P + 1) = 1, so no factors of P+1 are in the set {p1 ,... , pn}. But it must have factors, so any of these factors will be different from the primes in the input set {p1 ,... , pn}.
Thus, the algorithm is a guaranteed method for generating new primes. It also constitutes a proof that there are infinitely many primes. Unfortunately, this algorithm is impractical because the new primes produced by it grow exponentially as the set of primes {p1 ,... , pn} is extended with new primes returned by the algorithm.
In practice, most algorithms that need to generate large primes for commercial applications simply choose from a range of numbers (e.g., at random) and filter out non-primes using some efficient algorithm that does not provide an absolute guarantee that the numbers that remain are all prime. As long as it is not too likely that the generated number is not a prime, this may be sufficient.
Example: Suppose we want to generate a d-digit prime number (in decimal representation). The prime number theorem states that for a given N, the number of primes in the range {2,...,N} is about N/(ln(N)). We can roughly estimate the number of primes with d-digit decimal representations using the following formula:
(10^d / ln(10^d)) − (10^(d−1) / ln(10^(d−1)))
For d = 8, this value is about 4,780,406, so we can roughly say that the chances that an 8-digit number chosen at random (here we are ignoring the details of what distribution is used) is prime are about:
4,780,406 / (10^8 − 10^7) ≈ 5.3/100
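We can check this kind of estimate against an exact count for a small digit length (a sketch; the function names are assumptions, and the sieve is only practical for small d):

```python
from math import log

def estimate_digit_primes(d):
    # Prime number theorem estimate of the number of d-digit primes.
    return 10**d / log(10**d) - 10**(d - 1) / log(10**(d - 1))

def count_digit_primes(d):
    # Exact count of d-digit primes via a sieve of Eratosthenes.
    n = 10**d
    sieve = [True] * n
    sieve[0] = sieve[1] = False
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return sum(sieve[10**(d - 1):])

# For d = 4 the estimate is about 941, while the exact count is 1061:
# the same order of magnitude, as we expect from a rough estimate.
```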
We can use our ability to generate random numbers in a specific range to build a generic algorithm template for generating prime numbers (without specifying exactly how we check that each candidate number we consider is prime).
Algorithm: Suppose we defined the following algorithm for generating a prime with a d-digit representation.
1. inputs: d
1. do
1. n := a random number from {10^(d−1), ..., 10^d − 1}
while n is not prime
2. return n
Assuming we were choosing numbers "well" with respect to their distribution (we are being imprecise here), we could optimistically hope that for d = 8, the above algorithm would only need to check for primality about 20 times (since roughly 1 out of every 20 numbers it tries should be a prime).
It remains to define an algorithm for checking whether an arbitrary input m ∈ ℕ is prime. We could check every number k between 2 and ⌊√(m)⌋ to see if it is a factor of m. However, ⌊√(m)⌋ still grows exponentially in the representation size of m. For example, for an n-bit input (an integer m ∈ {0,...,2^n − 1}, which may require a representation size of n bits), we have the following exponential running time:
√(m)
=
√(2^n)
=
2^(n/2)
=
(2^(1/2))^n
≈
1.41^n
If we only consider primes and not any of their multiples (i.e., we apply the Sieve of Eratosthenes to the set {2,...,⌊ √(m) ⌋}), we can decrease the number of times we check the divisibility of m. However, we would need to do a lot of extra work to filter out the multiples of primes. Modern algorithms such as ECPP run in polynomial time, but in practice it is currently difficult to implement a version of these algorithms that runs quickly enough for certain applications (or doesn't consume too much power, such as when it runs on mobile devices).
Algorithm: Given the above considerations, we introduce a modified algorithm.
1. inputs: d
1. do
1. n := a random number from {10^(d−1), ..., 10^d − 1}
while n is not probably prime
2. return n
It remains to define a subroutine for checking whether a number is probably prime (for some appropriate definition of "probably") that is very efficient.
### [link] 3.7. Detecting probable prime numbers
In this subsection, we consider the problem of defining a very efficient algorithm to check whether a positive integer m ∈ ℕ is prime. In fact, the algorithms we consider will be detectors of some, but not all, composite numbers.
Fact: For any n ℕ, n is composite iff n > 1 and it is not the case that n is prime.
That is, the algorithms we consider recognize prime numbers, but with false positives. They only guarantee that there are no false negatives: if the algorithm outputs that its input is composite, then it is indeed composite; otherwise, the input may or may not be prime, and we call it probably prime because we were not able to detect that it is composite. First, consider how an algorithm for checking primality that never has a "false" output behaves:
| algorithm input | algorithm output | meaning | description |
| --- | --- | --- | --- |
| actually a composite number (this is not known at time of input) | composite | the input is composite | true negative |
| actually a prime number (this is not known at time of input) | prime | the input is prime | true positive |
Compare the above table to the following table describing three possible conditions (and one forbidden condition) for an algorithm that detects probable primes.
| algorithm input | algorithm output | meaning | description |
| --- | --- | --- | --- |
| actually a composite number (this is not known at time of input) | composite | the input is definitely composite | true negative |
| actually a composite number (this is not known at time of input) | probably prime | the input is either composite or prime | false positive |
| actually a prime number (this is not known at time of input) | probably prime | the input is either composite or prime | true positive |
| actually a prime number (this is not known at time of input) | composite | impossible | false negative (we will not consider algorithms that return such outputs) |
Below is a comparison of the outputs of four possible probable prime algorithms on inputs in the range {2,...,10} ⊂ ℕ.
| input number | perfect algorithm | perfect probable prime algorithm | less accurate probable prime algorithm | very inaccurate probable prime algorithm |
| --- | --- | --- | --- | --- |
| 2 | prime | probably prime | probably prime | probably prime |
| 3 | prime | probably prime | probably prime | probably prime |
| 4 | composite | composite | probably prime | probably prime |
| 5 | prime | probably prime | probably prime | probably prime |
| 6 | composite | composite | composite | probably prime |
| 7 | prime | probably prime | probably prime | probably prime |
| 8 | composite | composite | probably prime | probably prime |
| 9 | composite | composite | composite | probably prime |
| 10 | composite | composite | probably prime | probably prime |
Algorithm: We now define our first algorithm for testing whether a number is probably prime.
1. inputs: m ℕ, k
1. repeat k times:
1. a := a random number from {2,...,m-1}
2. if a | m then return composite
2. return probably prime
Notice that the above algorithm will never say that a prime number is actually composite. If it does not find a factor of m because it did not run for sufficiently many iterations, then it will indicate that m is probably prime. Thus, it will have no false negatives (i.e., an incorrect output indicating a prime number is composite).
Algorithm: We now define another algorithm for testing whether a number is probably prime.
1. inputs: m ℕ, k
1. repeat k times:
1. a := a random number from {2,...,m-1}
2. if a | m then return composite
3. if gcd(a,m) ≠ 1 then return composite
2. return probably prime
The above algorithm is interesting because, by using the gcd operation, we get more value out of each random number we try. The gcd operation runs in polynomial time, but it tells us whether the two sets of factors (the factors of a and the factors of m) share any element greater than 1. Checking this intersection using the naive approach would take exponential time.
The above algorithm is somewhat problematic if we want to have a good idea of how to set k given our desired level of confidence in the output. For example, how high should k be so that the probability that we detect a composite is more than 1/2? If we require that k ≈ √(m) to be sufficiently confident in the output, we might as well use the brute force method of checking every a {2,..., ⌊ √(m) ⌋}.
To define a more predictable testing approach for our algorithm, we derive a theorem that is frequently used in applications of modular arithmetic (in fact, this fact underlies the prime number generators found in many software applications).
Fact (Fermat's little theorem): For any p ∈ ℕ, for any a ∈ {1,...,p-1}, if p is prime then it is true that:
a^(p−1)
≡
1 (mod p)
We have already shown that if p is a prime then R defined as below is a permutation:
R
=
{ (1, (1 ⋅ a) mod p), (2, (2 ⋅ a) mod p), ..., (p-1, ((p-1) ⋅ a) mod p) }
=
{ (i, (i ⋅ a) mod p) | i ∈ {1,...,p-1} }
Next, to make our notation more concise, note that:
1 ⋅ 2 ⋅ ... ⋅ (p−1)
=
(p − 1)!
(1 ⋅ a) ⋅ (2 ⋅ a) ⋅ ... ⋅ ((p−1) ⋅ a)
=
a^(p−1) ⋅ (p − 1)!
Because R is a permutation of {1,...,p−1}, the product of the second components (i ⋅ a) mod p over all i is congruent modulo p to the product 1 ⋅ 2 ⋅ ... ⋅ (p−1), so a^(p−1) ⋅ (p − 1)! ≡ (p − 1)! (mod p).
Recall that p is prime, so p does not divide (p − 1)! and gcd((p − 1)!, p) = 1. Thus, we can cancel (p − 1)! on both sides of the following congruence:
a^(p−1) ⋅ (p − 1)!
≡
1 ⋅ (p − 1)! (mod p)
a^(p−1)
≡
1
We now have derived the statement of Fermat's little theorem.
Fact: A number p ∈ ℕ is prime iff p > 1 and for all a ∈ {1,...,p-1}, a^(p−1) mod p = 1.
If we negate the statement above, we can define when a number is composite (i.e., when it is not prime) in a way that suggests a straightforward algorithm.
Definition: A number m ∈ ℕ is composite iff m > 1 and there exists a ∈ {1,...,m-1} such that a^(m−1) mod m ≠ 1. In this case, a is a Fermat witness to the compositeness of m.
Definition: If for composite m ∈ ℕ and a ∈ {1,...,m-1}, we have that a^(m−1) mod m = 1, then a is a Fermat liar and m is a pseudoprime with respect to a.
Algorithm (Fermat primality test): We now extend our algorithm. The following algorithm can be used to test whether a number is probably prime.
1. inputs: m ℕ, k
1. repeat k times:
1. a := a random number from {2,...,m-1}
2. if a | m then return composite
3. if gcd(a,m) ≠ 1 then return composite
4. if a^(m−1) mod m ≠ 1 then return composite
2. return probably prime
If m is a prime, the above algorithm will always return probably prime.
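A sketch of the Fermat primality test above (the small-case handling for m < 4 is an assumption added here so the random range is well-defined; Python's three-argument `pow` performs the modular exponentiation efficiently):

```python
from math import gcd
from random import randrange

def fermat_test(m, k):
    # Returns True for "probably prime" and False for "composite".
    if m < 4:
        return m in (2, 3)  # handle small inputs separately
    for _ in range(k):
        a = randrange(2, m)
        if m % a == 0:
            return False  # a is a non-trivial factor of m
        if gcd(a, m) != 1:
            return False  # a shares a non-trivial factor with m
        if pow(a, m - 1, m) != 1:
            return False  # a is a Fermat witness
    return True
```

For a prime input such as 101 this always returns True; for an input like 100, every candidate a fails one of the three checks, so it always returns False.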
For any given candidate a in the above algorithm, if either of the first two tests succeeds (a | m or gcd(a,m) ≠ 1), then a shares a non-trivial factor with m and the algorithm correctly returns composite. Thus, in the worst case, gcd(a,m) = 1 for all k instances of a that we consider, and only the Fermat test can detect compositeness. How many of these k instances must pass the Fermat test before we are confident that m is prime? In fact, for most composite numbers m, k can be very low.
Fact: If for a composite m ℤ there is at least one Fermat witness a {2,...,m-1} such that gcd(a,m) = 1, then at least half of all a such that gcd(a,m) = 1 are Fermat witnesses.
Suppose that a is a Fermat witness with gcd(a,m) = 1 and a1,...,an are distinct Fermat liars. Then for every Fermat liar ai we have that:
(a ⋅ ai)^(m−1)
≡
a^(m−1) ⋅ ai^(m−1) (mod m)
≡
a^(m−1)
But a is a Fermat witness, so a^(m−1) mod m ≠ 1. Thus, (a ⋅ ai)^(m−1) mod m ≠ 1, so (a ⋅ ai) mod m is also a Fermat witness. Furthermore, for any two distinct Fermat liars ai and aj we have, by the generalized Euclid's lemma and the fact that ai, aj, and a are all coprime with m:
ai
≢
aj (mod m)
a ⋅ ai
≢
a ⋅ aj (mod m)
Since each distinct liar ai yields a distinct witness (a ⋅ ai) mod m, there are at least as many witnesses as liars, so at least half of the values coprime with m are witnesses. How many numbers m have at least one Fermat witness? Equivalently, how many numbers have no Fermat witnesses?
Definition: For any m ℤ, if m has no coprime Fermat witnesses, then m is a Carmichael number, also known as a Fermat pseudoprime.
Carmichael numbers occur frequently enough that the Fermat primality test is usually passed over in favor of slightly more complex tests for probable primes. However, those tests follow a similar principle, and the Fermat primality test is still used in some deployed software applications (such as PGP).
| for the chosen a we have... | what it means | probability of this occurring if m is a non-Carmichael composite |
| --- | --- | --- |
| a \| m | a is a non-trivial factor of m, so m is composite | (# integers in {2,...,m-1} that are factors of m) / (m − 2) |
| gcd(a,m) ≠ 1 | m and a have a non-trivial common factor, so m is composite | (# integers in {2,...,m-1} that share factors with m) / (m − 2) |
| a^(m−1) mod m ≠ 1 | a is a Fermat witness that m is composite | at least 1/2 |
We can consider a particular example input for the primality test to see how each successive check in the algorithm can extract valuable information about whether the input is composite. The following table is for m = 15.
| m = 15 and a = ... | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| a \| m | PP | C | PP | C | PP | PP | PP | PP | PP | PP | PP | PP | PP |
| gcd(a,m) ≠ 1 | PP | C | PP | C | C | PP | PP | C | C | PP | C | PP | PP |
| a^(m−1) mod m = ... | 4 | 9 | 1 | 10 | 6 | 4 | 4 | 6 | 10 | 1 | 9 | 4 | 1 |
We can now summarize all the facts and algorithms we have introduced and how their relationships allow us to construct a prime number generator.
(Summary of dependencies, originally a diagram: the generalized Euclid's lemma implies that the multiples of a coprime a in ℤ/mℤ form a permutation, which supports both the random number generator and Fermat's little theorem; facts such as gcd(m, m+1) = 1 support the coprime generator; the greatest common divisor algorithm and Fermat's little theorem support the Fermat primality test, which supports the probable prime detector, which in turn supports the probable prime generator.)
### [link] 3.8. Multiplicative inverses
To better understand multiplicative inverses, we first review the definition of an additive inverse.
Fact: For any m ∈ ℕ, every element in the set ℤ/mℤ has an inverse with respect to addition defined over ℤ/mℤ (i.e., an additive inverse). Consider any x ∈ ℤ/mℤ. Then m − x ∈ ℤ/mℤ and
x + (m − x)
≡
m (mod m)
≡
0
We denote by −x the additive inverse of x.
Example: What is the additive inverse of 2 ∈ ℤ/5ℤ?
The additive inverse is 5 − 2 = 3, since (2 + 3) mod 5 = 0.
There is more than one way to compute multiplicative inverses; in this subsection, we will present facts that will help us build algorithms for computing multiplicative inverses.
Definition: Given a positive integer m ∈ ℕ and a congruence class x ∈ ℤ/mℤ, suppose there exists a congruence class y ∈ ℤ/mℤ such that:
x ⋅ y
≡
1 (mod m)
Then we say that y is the multiplicative inverse of x in ℤ/mℤ. We usually denote the multiplicative inverse of x as x^(−1) (mirroring the notation for multiplicative inverses of rationals, i.e., 2^(−1) = 1/2).
Fact: Let p ∈ ℕ be a prime number, and let a ∈ ℤ/pℤ be non-zero. Then we know by Fermat's little theorem that:
a^(p−1)
≡
1 (mod p)
But we can factor the above to get:
a ⋅ a^(p−2)
≡
1 (mod p)
Thus, the multiplicative inverse of a ∈ ℤ/pℤ is:
a^(−1)
≡
a^(p−2) (mod p)
Note that:
a ⋅ a^(−1)
≡
a ⋅ a^(p−2) (mod p)
≡
a^(p−1)
≡
1
Example: What is the multiplicative inverse of 2 ∈ ℤ/5ℤ? We can compute it as follows:
2^(−1)
≡
2^(5−2) (mod 5)
≡
2^3
≡
8
≡
3
We can check to confirm that this is true:
2 ⋅ 3
≡
6 (mod 5)
≡
1
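The computation above is easy to express with Python's built-in modular exponentiation (a sketch; `mult_inverse` is an assumed name, and p must be prime with a non-zero mod p):

```python
def mult_inverse(a, p):
    # By Fermat's little theorem, a**(p-2) mod p is the inverse of a
    # in Z/pZ when p is prime and a is not a multiple of p.
    return pow(a, p - 2, p)

# As in the example: the inverse of 2 in Z/5Z is 3.
```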
### [link] 3.9. Chinese remainder theorem (CRT) and applications
In previous sections we presented facts that allowed us to solve certain individual equations with solution spaces corresponding to sets of congruence classes such as ℤ/mℤ. It is also possible to solve systems of equations over sets of congruence classes.
Theorem (Chinese remainder theorem): The Chinese remainder theorem (CRT) states that given distinct primes p1,...,pk ∈ ℕ, for any a1,...,ak ∈ ℤ there exists a solution x ∈ ℤ to the system of equations:
x mod p1
=
a1
⋮
x mod pk
=
ak
We can also state the theorem in terms of congruences. Given distinct primes p1,...,pk ∈ ℕ, for any a1 ∈ ℤ/p1ℤ, ..., ak ∈ ℤ/pkℤ there exists a unique solution x ∈ ℤ/(p1 ⋅ ... ⋅ pk)ℤ to the system of equations:
x
≡
a1 (mod p1)
⋮
x
≡
ak (mod pk)
In other words, all the solutions to the first system above are from the same congruence class of ℤ/(p1 ⋅ ... ⋅ pk)ℤ. The theorem applies even if p1,...,pk are not prime but merely pairwise relatively prime (coprime).
Example: Solve the following system of equations for the unique solution x ∈ ℤ/10ℤ:
x
≡
3 (mod 5)
x
≡
0 (mod 2)
We can list the integers corresponding to each congruence class and find the unique integer in {0, ..., 2 ⋅ 5 - 1} that is in both lists:
3 + 5ℤ
=
{..., 3, 8, 13, 18, 23, 28, ...}
0 + 2ℤ
=
{..., 0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, ...}
We can compute the intersection, which should contain all the integers that satisfy both equations:
(3 + 5ℤ) ∩ (0 + 2ℤ)
=
{..., 8, 18, 28, ...}
This appears to be the congruence class 8 + 10ℤ. Thus, we have the unique solution:
x
≡
8 (mod 10)
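A sketch of a CRT solver using the standard constructive proof (the function name `crt` is an assumption; `pow(x, -1, p)` computes a modular inverse and requires Python 3.8+):

```python
def crt(residues, moduli):
    # Solve x ≡ a_i (mod p_i) for pairwise coprime moduli p_i.
    N = 1
    for p in moduli:
        N *= p
    x = 0
    for a, p in zip(residues, moduli):
        Ni = N // p                   # product of the other moduli
        x += a * Ni * pow(Ni, -1, p)  # Ni^(-1) mod p (Python 3.8+)
    return x % N

# As in the example above: x ≡ 3 (mod 5) and x ≡ 0 (mod 2) gives x = 8.
```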
The Chinese remainder theorem has many applications in a variety of contexts. In this section we present the following algorithms, which all rely on the ability to solve systems of equations involving congruence classes.
(Summary of dependencies, originally a diagram: the Chinese remainder theorem supports efficient modular arithmetic and the CRT solver; the CRT solver in turn supports range ambiguity resolution and the Shamir secret sharing protocol.)
Fact: Given m ∈ ℕ and a, b ∈ ℤ, if a + b ∈ {0,...,m-1}, then it is true that
(a mod m) + (b mod m)
≡
a + b (mod m)
Likewise, if a ⋅ b ∈ {0,...,m-1}, then it is true that
(a mod m) ⋅ (b mod m)
≡
a ⋅ b (mod m)
Example (efficient modular arithmetic): Suppose we want to perform a large number of arithmetic operations in sequence. The operations could be specified as a program that operates on a single variable and performs a sequence of variable updates that correspond to arithmetic operations, such as the example below.
x
:=
3
x
:=
x + 6
x
:=
x ⋅ 2
x
:=
x − 17
x
:=
x + 1
Suppose that over the course of the computation, x might become very large (e.g., 0 ≤ x ≤ 2^1000000000). However, we have an additional piece of information: once the sequence of operations ends, we know that 0 ≤ x < 1024.
Given our additional information about the range of the final output, we do not need to store 1000000000 bit numbers in order to perform the computation and get a correct result. It is sufficient to instead perform all the operations in ℤ/1024ℤ:
x
:=
3 mod 1024
x
:=
(x + 6) mod 1024
x
:=
(x ⋅ 2) mod 1024
x
:=
(x − 17) mod 1024
x
:=
(x + 1) mod 1024
The above will produce the same result in ℤ/1024ℤ, and every intermediate result lies in {0,...,1023}, so we will only need 10 bits at any single point in the computation to store each intermediate result.
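The example above can be checked directly (a sketch; the variable names are illustrative):

```python
# The same sequence of updates, once with ordinary integer arithmetic
# and once reducing mod 1024 after every step.
ops = [lambda x: x + 6, lambda x: x * 2, lambda x: x - 17, lambda x: x + 1]

x = 3
for op in ops:
    x = op(x)            # ordinary arithmetic

y = 3 % 1024
for op in ops:
    y = op(y) % 1024     # reduce after every update

# Because the final result lies in {0,...,1023}, x == y.
```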
Example (efficient distributed modular arithmetic): Suppose that, as in the previous example, we want to perform a large number of arithmetic operations in sequence on large integers (e.g., in ℤ/2^70ℤ). However, our resources may be limited. For example, we may only have a collection of processors that can each perform arithmetic on relatively small integers (e.g., in ℤ/2^8ℤ). Is it possible for us to perform this computation using these processors, and is it possible for us to speed up the computation by running the processors in parallel? We may assume that a single arithmetic computation in ℤ/2^nℤ running on a single processor takes n time steps to perform.
Suppose we have ten processors that can perform arithmetic computations in ℤ/2^8ℤ or any smaller space (such as ℤ/2^7ℤ). We can approach this problem by choosing a collection of primes p1,...,p10 such that 2^7 < pi < 2^8, which implies that:
p1 ⋅ ... ⋅ p10
>
2^7 ⋅ ... ⋅ 2^7
=
2^70
We can then perform the sequence of computations modulo each of the primes pi to obtain ten results a1, ..., a10. Once we obtain the results, we can apply the Chinese remainder theorem to obtain x:
x mod p1
=
a1
⋮
x mod p10
=
a10
Since the product of the primes is greater than 2^70, the unique solution x to the above system of equations will be the correct result of the computation. Since the processors were running in parallel, the computation was about 10 times faster than it would have been if we had performed the computation in ℤ/2^70ℤ (or in sequence using a single processor that can perform computations in ℤ/2^8ℤ).
Example (variant of range ambiguity resolution): Suppose we want to build a radar or other sensing device that sends signals out and listens for reflections of those signals in order to detect the distances of obstacles in the environment. The device has a clock that counts up from 0, one integer per second. If the device sends a single signal out that travels at 1 km per second at time 0 and receives a response in 12 seconds at time 12, it knows that the distance to the object and back is 12 km.
However, what if we cannot wait 12 seconds or more? For example, the obstacle may be moving quickly and we want to constantly update our best guess of the distance to that object. We would need to send signals more frequently (for example, every 5 seconds). But then, running in a steady state, if an object is 12 seconds away we would have no way to tell which of the signals we sent corresponds to the reflection we just received.
However, we can obtain some information in this scenario. Suppose we send a signal every 5 seconds, only when the clock's timer is at a multiple of 5. Equivalently, imagine the clock counts up modulo 5 (i.e., 0,1,2,3,4,0,1,2,3,4,0,...) and we only send signals when the clock is at 0. What information can we learn about the object's distance in this scenario? If the distance to the object and back is d, then we would learn d mod 5, because we would get the signal back when the clock is at 0, 1, 2, 3, or 4.
We can use multiple instances of the above device (each device using its own distinct frequency for sending signals) to build a device that can check for obstacles more frequently while not giving up too much accuracy. Pick a collection of primes p1,..., pn such that their product is greater than the distance to any possible obstacle (e.g., if this is a ship or plane, we could derive this by considering the line of sight and the Earth's curvature). Take n instances of the above devices, each with their own clock that counts in cycles through ℤ/piℤ and sends out a signal when the clock is at 0. Running in a steady state, if at any point in time the known offsets are a1,...,an, we would know the following about the distance d to an obstacle:
d ≡ a1 (mod p1)
⋮
d ≡ an (mod pn)
We can then use the Chinese remainder theorem to derive the actual distance d < p1 ⋅ ... ⋅ pn.
Protocol (Shamir secret sharing): Suppose there are N participants and we want to divide some secret information among them into N parts so that any k or more participants can reconstruct the secret information, but no subset of fewer than k participants can reconstruct it. Let s ∈ ℤ be the secret information. Collect a set of randomly chosen pairwise coprime integers M = {m1,...,mN} such that:
• the product of any collection of at least k integers in M is greater than s;
• the product of any collection of k-1 integers in M is less than s.
Give each participant i ∈ {1,...,N} the value s mod mi. Now, any n ≥ k participants can use the Chinese remainder theorem to solve for s.
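A minimal sketch of this threshold scheme (with hypothetical parameters: a 2-of-3 split of s = 100 using pairwise coprime moduli 11, 13, 17, assuming Python 3.8+ for pow(a, -1, m)):

```python
s = 100
moduli = [11, 13, 17]             # any two multiply to more than s;
                                  # any single modulus is less than s
shares = [s % m for m in moduli]  # one share per participant

# Any two participants (here, the first two) reconstruct s by CRT:
(a, m), (b, n) = (shares[0], moduli[0]), (shares[1], moduli[1])
x = (a * n * pow(n, -1, m) + b * m * pow(m, -1, n)) % (m * n)
assert x == s

# A single share reveals only s mod m, not s itself:
assert shares[0] != s
```

The threshold property comes from the size conditions on M: k shares determine s modulo a product larger than s, while k-1 shares only determine s modulo a product smaller than s.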
Note: There are many alternative ways to implement Shamir secret sharing. Consider the following example using curve-fitting. We choose some large prime m ∈ ℤ, and then randomly select integers c1,...,ck-1 ∈ ℤ/mℤ. We then use these integers as coefficients in a polynomial of degree k-1:
f(x) = s + c1 ⋅ x + c2 ⋅ x^2 + ... + ck-1 ⋅ x^(k-1)
Each participant i ∈ {1,...,N} is given f(i). Any k participants can now use curve-fitting techniques or techniques for solving collections of equations (e.g., computing the reduced row echelon form of a matrix) to determine all k coefficients of f and, thus, solve for s.
### [link] 3.10. Solving systems of equations with CRT solutions using multiplicative inverses
The Chinese remainder theorem guarantees that a unique solution exists to particular systems of equations involving congruence classes. But can these solutions be computed automatically and efficiently? In fact, they can. However, computing such solutions requires the ability to compute multiplicative inverses in ℤ/mℤ.
(Diagram: the greatest common divisor algorithm and Bézout's identity lead to the extended Euclidean algorithm, which yields an algorithm for finding multiplicative inverses; Fermat's little theorem and Euler's theorem, via Euler's totient function φ, provide another route to that algorithm; the Chinese remainder theorem yields a CRT solver for two equations, which extends by induction to a CRT solver for n equations.)
How is computing multiplicative inverses related to solving systems of equations that have solutions according to CRT? Consider the following example.
Example: Suppose we want to solve the following system of equations:
x ≡ 1 (mod 5)
x ≡ 0 (mod 4)
The above two equations are constraints on the integers that can be in the congruence class x. One way to state these constraints in English is: "x must be a multiple of 4, and x must be in 1 + 5ℤ". But then we can rewrite the above as a single equation:
4 ⋅ y ≡ 1 (mod 5)
Then, we only need to solve for y, and let x = 4 ⋅ y. What is y? The above equation implies that y is the multiplicative inverse of 4 in ℤ/5ℤ. Thus, we can compute:
y ≡ 4^(-1) (mod 5)
  ≡ 4^(5-2) (mod 5)
  ≡ 4^3
  ≡ 64
  ≡ 4
Thus, the multiplicative inverse of 4 in ℤ/5ℤ is itself. Plugging y = 4 into x = 4 ⋅ y gives us 16. Thus, we know by CRT that we have our unique solution in ℤ/(4 ⋅ 5)ℤ = ℤ/20ℤ:
x ≡ 16 (mod 20)
The above example suggests that we can solve certain pairs of equations with CRT solutions if one of the congruence classes is 0 and the other 1. What if the other congruence class is not 1?
Example: Suppose we want to solve the following system of equations for x ∈ ℤ/15ℤ:
x ≡ 4 (mod 5)
x ≡ 0 (mod 3)
We can observe that we want some x that is a multiple of 3 and is in 4 + 5ℤ. We can set x = 3 ⋅ y for some y, and then we want to solve the following for y ∈ ℤ/5ℤ:
3 ⋅ y ≡ 4 (mod 5)
Using Fermat's little theorem, we can compute the multiplicative inverse of 3 in ℤ/5ℤ:
3^(5-1) ≡ 1 (mod 5)
3^(5-1) ⋅ 3^(-1) ≡ 1 ⋅ 3^(-1)
3^((5-1)-1) ≡ 3^(-1)
3^(5-2) ≡ 3^(-1)
3^3 ≡ 3^(-1)
27 ≡ 3^(-1)
2 ≡ 3^(-1)
Thus, we know that the multiplicative inverse of 3 in ℤ/5ℤ is 2, and we have that 2 ⋅ 3 ≡ 1 (mod 5). Notice that 4 ≡ 4 ⋅ 1 (mod 5):
3 ⋅ y ≡ 4 (mod 5)
3 ⋅ y ≡ 4 ⋅ 1 (mod 5)
Since 1 ≡ 3 ⋅ 2 (mod 5), we can substitute:
3 ⋅ y ≡ 4 ⋅ (3 ⋅ 2) (mod 5)
We can cancel 3 on both sides using Euclid's lemma (since 5 is prime) or Euclid's generalized lemma (since 3 and 5 are coprime):
y ≡ 4 ⋅ 2 (mod 5)
  ≡ 8
  ≡ 3
Since we originally set x = 3 ⋅ y, we can now substitute and solve for x ∈ ℤ/15ℤ:
x ≡ 3 ⋅ y (mod 15)
  ≡ 3 ⋅ 3
  ≡ 9
Thus, x ≡ 9 (mod 15) is a solution to our equation. We can confirm this:
9 ≡ 4 (mod 5)
9 ≡ 0 (mod 3)
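The steps above can be checked with a short script (a sketch; Python's built-in three-argument pow performs the modular exponentiation):

```python
inv3 = pow(3, 5 - 2, 5)   # Fermat: 3^(5-2) ≡ 3^(-1) (mod 5)
assert inv3 == 2

y = (4 * inv3) % 5        # solves 3·y ≡ 4 (mod 5)
assert y == 3

x = (3 * y) % 15          # x = 3·y
assert x == 9
assert x % 5 == 4 and x % 3 == 0
```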
Example: Note that what we actually did in the previous example when we cancelled 3 on both sides is that we multiplied both sides by the multiplicative inverse of 3 in ℤ/5ℤ. Suppose we knew that the multiplicative inverse of 3 in ℤ/5ℤ is 2. We can use this information to help us solve the following equation:
3 ⋅ x ≡ 2 (mod 5)
We multiply both sides by 3^(-1) ≡ 2 (mod 5):
3^(-1) ⋅ 3 ⋅ x ≡ 3^(-1) ⋅ 2 (mod 5)
x ≡ 2 ⋅ 2
  ≡ 4
Notice that we have now reduced the problem of solving an equation with a single coefficient before x into the problem of finding the multiplicative inverse of the coefficient.
Example: Suppose we want to solve the following system of equations:
x ≡ 0 (mod 11)
x ≡ 4 (mod 7)
The above equations require that x ∈ ℤ/77ℤ be divisible by 11, and that x ∈ 4 + 7ℤ. Since x is divisible by 11, it is a multiple of 11, so we want to find x = 11 ⋅ y where:
11 ⋅ y ≡ 4 (mod 7)
To solve the above, it is sufficient to multiply both sides of the equation by 11^(-1) (mod 7). Since 11 ≡ 4 (mod 7), it is sufficient to find 4^(-1) (mod 7):
11^(-1) ≡ 4^(-1) (mod 7)
        ≡ 4^(7-2)
        ≡ 4^5
        ≡ 1024
        ≡ 2
Thus, we can multiply both sides to obtain:
11 ⋅ y ≡ 4 (mod 7)
11^(-1) ⋅ 11 ⋅ y ≡ 11^(-1) ⋅ 4
y ≡ 2 ⋅ 4
  ≡ 8
  ≡ 1
Thus, we have:
x ≡ 11 ⋅ y (mod 77)
  ≡ 11 ⋅ 1
  ≡ 11 (mod 77)
Fact: Suppose we are given two unequal prime numbers p, q ∈ ℕ, and the following two equations:
x ≡ 1 (mod p)
x ≡ 0 (mod q)
This implies x must be a multiple of q, so rewrite x = q ⋅ y. Then we have:
q ⋅ y ≡ 1 (mod p)
Thus, we can solve for q^(-1) by computing:
q^(-1) ≡ q^(p-2) (mod p)
Then we have:
q^(-1) ⋅ q ⋅ y ≡ q^(-1) ⋅ 1 (mod p)
y ≡ q^(-1) (mod p)
x ≡ q ⋅ y (mod (p ⋅ q))
Notice that q ⋅ y is indeed a solution to the original system because:
q ⋅ y ≡ 1 (mod p)
because y ≡ q^(-1) (mod p);
q ⋅ y ≡ 0 (mod q)
because q ⋅ y is a multiple of q.
Fact: Suppose we are given two unequal prime numbers p, q ∈ ℕ, and the following two equations where a ∈ ℤ/pℤ:
x ≡ a (mod p)
x ≡ 0 (mod q)
This implies x must be a multiple of q, so try x = a ⋅ q ⋅ y. Then we have:
a ⋅ q ⋅ y ≡ a (mod p)
As in the previous fact, the above works if y ≡ q^(-1) (mod p), so compute:
y = q^(-1) (mod p)
x ≡ a ⋅ q ⋅ y (mod (p ⋅ q))
Notice that a ⋅ q ⋅ y is indeed a solution to the original system because:
a ⋅ q ⋅ y ≡ a (mod p)
because y ≡ q^(-1) (mod p);
a ⋅ q ⋅ y ≡ 0 (mod q)
because a ⋅ q ⋅ y is a multiple of q.
Fact: Suppose we are given two unequal prime numbers p, q ∈ ℕ, and the following two equations where a ∈ ℤ/pℤ and b ∈ ℤ/qℤ:
x ≡ a (mod p)
x ≡ b (mod q)
Suppose we instead solve the following two systems:
x1 ≡ a (mod p)
x1 ≡ 0 (mod q)
x2 ≡ 0 (mod p)
x2 ≡ b (mod q)
Notice that x1 + x2 is a solution to the original system because:
x1 + x2 ≡ x1 + 0 (mod p)
        ≡ a + 0 (mod p)
        ≡ a (mod p)
x1 + x2 ≡ 0 + x2 (mod q)
        ≡ 0 + b (mod q)
        ≡ b (mod q)
We know how to solve the above two systems separately:
x1 ≡ a ⋅ q ⋅ q^(-1) (mod (p ⋅ q))
x2 ≡ b ⋅ p ⋅ p^(-1) (mod (p ⋅ q))
Thus, we have the solution to the original system:
x ≡ x1 + x2 (mod (p ⋅ q))
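This construction can be written as a short function (a sketch; the inverses q^(-1) mod p and p^(-1) mod q are computed with Fermat's little theorem via Python's pow):

```python
def crt2(a, p, b, q):
    # Solve x ≡ a (mod p), x ≡ b (mod q) for distinct primes p and q.
    x1 = a * q * pow(q, p - 2, p)  # ≡ a (mod p) and ≡ 0 (mod q)
    x2 = b * p * pow(p, q - 2, q)  # ≡ 0 (mod p) and ≡ b (mod q)
    return (x1 + x2) % (p * q)

x = crt2(4, 5, 2, 7)
assert x == 9
assert x % 5 == 4 and x % 7 == 2
```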
We have shown that we can solve a system of equations with a solution according to CRT if the moduli in the equations are both prime. What if the moduli are merely coprime? So far, we only needed a way to compute multiplicative inverses of numbers modulo a prime, and Fermat's little theorem was sufficient for this purpose. However, if the moduli are not prime, we need some other method to compute multiplicative inverses.
Fact: For any m ∈ ℕ, an x ∈ ℤ/mℤ has an inverse with respect to multiplication defined over ℤ/mℤ (i.e., a multiplicative inverse) iff gcd(x,m) = 1.
Fact (Bézout's identity): For any two integers x ∈ ℤ and y ∈ ℤ where x ≠ 0 or y ≠ 0, let z = gcd(x,y). Then there exist a ∈ ℤ and b ∈ ℤ such that:
a ⋅ x + b ⋅ y = z
Fact: For any two integers x ∈ ℤ and y ∈ ℤ where x ≠ 0 or y ≠ 0, and gcd(x,y) = 1, there exist a ∈ ℤ and b ∈ ℤ such that:
a ⋅ x + b ⋅ y = 1
This fact is a special case of Bézout's identity (i.e., the case in which gcd(x,y) = 1).
Example: Suppose we have s, t ∈ ℤ such that:
5 ⋅ s + 3 ⋅ t = 1
We can then do the following:
- 5 ⋅ t + (5 ⋅ s + 3 ⋅ t) + 5 ⋅ t = 1
(5 ⋅ s - 5 ⋅ t) + (3 ⋅ t + 5 ⋅ t) = 1
5 ⋅ (s - t) + 8 ⋅ t = 1
Thus, we have converted an instance of Bézout's identity for 5 and 3 into an instance of Bézout's identity for 5 and 8.
We can repeat the above as many times as we want. Suppose we instead want Bézout's identity for 5 and 13. We can do the following:
- 5 ⋅ 2 ⋅ t + (5 ⋅ s + 3 ⋅ t) + 5 ⋅ 2 ⋅ t = 1
(5 ⋅ s - 5 ⋅ 2 ⋅ t) + (3 ⋅ t + 5 ⋅ 2 ⋅ t) = 1
5 ⋅ (s - 2 ⋅ t) + 13 ⋅ t = 1
Fact: For any integers a, b, s, t ∈ ℤ, suppose we have that:
a ⋅ s + b ⋅ t = 1
Let us assume that a > b and that a mod b = r (in other words, a = b ⋅ k + r for some k). Then we have that:
a ⋅ s + b ⋅ t = 1
- b ⋅ k ⋅ s + (a ⋅ s + b ⋅ t) + b ⋅ k ⋅ s = 1
(a ⋅ s - b ⋅ k ⋅ s) + (b ⋅ (t + k ⋅ s)) = 1
(a - b ⋅ k) ⋅ s + b ⋅ (t + k ⋅ s) = 1
r ⋅ s + b ⋅ (t + k ⋅ s) = 1
(a mod b) ⋅ s + b ⋅ (t + k ⋅ s) = 1
Thus, for any instance of Bézout's identity for a and b where a > b, there must exist an instance of Bézout's identity for a mod b and b.
The above fact suggests that if we want to find the s and t coefficients for an equation a ⋅ s + b ⋅ t = 1 given a > b, we should try finding Bézout's identity for a mod b and b. But notice that:
a mod b < b
The above implies that the problem of finding the coefficients for an instance of Bézout's identity can be reduced to a strictly smaller version of the same problem: finding Bézout's identity for a mod b and b, which can in turn be reduced to finding it for b mod (a mod b) and a mod b. At each step the arguments shrink:
a mod b < b < a
b mod (a mod b) < a mod b < b
Thus, we can use recursion; the recursive algorithm that solves this problem is called the extended Euclidean algorithm, and is a modification of the recursive algorithm that computes the gcd of two numbers.
Algorithm (extended Euclidean algorithm): The collection of equations considered in the Chinese remainder theorem can be solved constructively (i.e., in a way that provides a concrete solution and not just a proof that a solution exists) by applying an extended version of the greatest common divisor algorithm. We provide the definition of the algorithm below.
1. extended Euclidean algorithm: x ∈ ℤ, y ∈ ℤ
1. if y = 0
1. (s,t) := (1, 0)
2. return (s,t)
2. otherwise
1. (s,t) := extended Euclidean algorithm(y, x mod y)
2. return (t, s - (⌊ x/y ⌋ ⋅ t) )
Given two inputs x ∈ ℤ, y ∈ ℤ, the extended Euclidean algorithm returns two integers u, v such that:
u ⋅ x + v ⋅ y = gcd(x,y)
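A direct Python transcription of this recursive algorithm (a sketch; it mirrors the pseudocode above):

```python
def egcd(x, y):
    # Returns (s, t) such that s*x + t*y == gcd(x, y).
    if y == 0:
        return (1, 0)
    s, t = egcd(y, x % y)
    return (t, s - (x // y) * t)

s, t = egcd(100, 49)
assert (s, t) == (-24, 49)      # the instance of Bézout's identity used below
assert s * 100 + t * 49 == 1
```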
Fact: Given coprime m and n, suppose the extended Euclidean algorithm gives us u and v such that u ⋅ m + v ⋅ n = 1. Then x = (u ⋅ m) ⋅ b + (v ⋅ n) ⋅ a is a solution to the system x ≡ a (mod m), x ≡ b (mod n). We can check that the above is indeed a solution to x ≡ a (mod m). Consider the following:
u ⋅ m + v ⋅ n = 1
v ⋅ n = 1 - u ⋅ m
v ⋅ n ≡ 1 (mod m)
Furthermore, we have that:
((u ⋅ m) ⋅ b) mod m = 0
Then, we can conclude:
((u ⋅ m) ⋅ b + (v ⋅ n) ⋅ a) mod m = (0 + (v ⋅ n) ⋅ a) mod m
                                  = (1 ⋅ a) mod m
                                  = a mod m
Using a similar argument, we can show that the solution is also congruent to b modulo n.
Example: Suppose we want to find the multiplicative inverse of 49 in ℤ/100ℤ and the multiplicative inverse of 100 in ℤ/49ℤ. We run the extended Euclidean algorithm on the inputs 49 and 100 to obtain the following instance of Bézout's identity:
(-24) ⋅ 100 + 49 ⋅ 49 = 1
We can use the above to find the multiplicative inverse of 49 in ℤ/100ℤ:
(-24) ⋅ 100 + 49 ⋅ 49 ≡ 1 (mod 100)
49 ⋅ 49 ≡ 1 (mod 100)
Thus, 49^(-1) = 49 in ℤ/100ℤ. We can also find the multiplicative inverse of 100 in ℤ/49ℤ (note that 100 ≡ 2 (mod 49)):
(-24) ⋅ 100 + 49 ⋅ 49 ≡ 1 (mod 49)
-24 ⋅ 100 ≡ 1 (mod 49)
25 ⋅ 100 ≡ 1 (mod 49)
Thus, 100^(-1) = 25 in ℤ/49ℤ.
Example: Suppose we want to solve the following system:
x ≡ 23 (mod 100)
x ≡ 31 (mod 49)
We use the extended Euclidean algorithm to find that:
(-24) ⋅ 100 + 49 ⋅ 49 = 1
This tells us that -24 is the inverse of 100 in ℤ/49ℤ and that 49 is the inverse of 49 in ℤ/100ℤ. Thus, to build 31 in ℤ/49ℤ, we need:
31 ≡ 1 ⋅ 31 (mod 49)
   ≡ (100 ⋅ 100^(-1)) ⋅ 31
   ≡ (100 ⋅ (-24)) ⋅ 31
To build 23 in ℤ/100ℤ, we need:
23 ≡ 1 ⋅ 23 (mod 100)
   ≡ (49 ⋅ 49^(-1)) ⋅ 23
   ≡ (49 ⋅ 49) ⋅ 23
Then the solutions to the system are in the congruence class:
x ≡ (100 ⋅ (-24)) ⋅ 31 + (49 ⋅ 49) ⋅ 23 (mod (100 ⋅ 49))
  ≡ -19177 mod 4900
  ≡ 423
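We can double-check this arithmetic directly (a sketch):

```python
x = ((100 * -24) * 31 + (49 * 49) * 23) % (100 * 49)
assert x == 423
assert x % 100 == 23   # x ≡ 23 (mod 100)
assert x % 49 == 31    # x ≡ 31 (mod 49)
```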
Algorithm: Suppose we are given a collection of equations of the following form such that m1,...,mk are all pairwise coprime:
x ≡ a1 (mod m1)
⋮
x ≡ ak (mod mk)
Let C be the set of these equations, where Ci is the ith equation. The following algorithm can be used to find a solution for this system of equations.
1. solve system of equations: C is a set of constraints x ≡ ai (mod mi)
1. while |C| > 1
1. remove two equations Ci and Cj from C and solve them to obtain a new equation x ≡ c (mod (mi ⋅ mj))
2. add the new equation to C
2. return the one equation left in C
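A sketch of this pairwise-combining loop in Python (assuming Python 3.8+ for pow(a, -1, m); each equation is represented as an (ai, mi) pair):

```python
def solve_system(eqs):
    # eqs: list of (a_i, m_i) pairs with pairwise coprime moduli.
    a, m = eqs[0]
    for b, n in eqs[1:]:
        # Combine x ≡ a (mod m) and x ≡ b (mod n) into a single
        # equation x ≡ c (mod m·n).
        a = (a * n * pow(n, -1, m) + b * m * pow(m, -1, n)) % (m * n)
        m = m * n
    return (a, m)

# x ≡ 0 (mod 2), x ≡ 1 (mod 5), x ≡ 3 (mod 7):
assert solve_system([(0, 2), (1, 5), (3, 7)]) == (66, 70)
```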
### [link] 3.11. More practice with CRT
Example: Solve the following equation for x ∈ ℤ/5ℤ by multiplying both sides by the appropriate multiplicative inverse:
3 ⋅ x ≡ 2 (mod 5)
Example: Solve the following system of equations for x ∈ ℤ/35ℤ by finding multiplicative inverses of 5 in ℤ/7ℤ and 7 in ℤ/5ℤ:
x ≡ 4 (mod 5)
x ≡ 2 (mod 7)
Example: Suppose you know that 7^(-1) ≡ 3 (mod 10). Solve the following system of equations:
x ≡ 0 (mod 2)
x ≡ 1 (mod 5)
x ≡ 3 (mod 7)
Example: Suppose we have a single processor that can perform arithmetic operations (addition, subtraction, multiplication, and modulus) on integers that can be represented with at most 11 bits (2^11 = 2048). On this processor, a single arithmetic operation can be performed in 11 time steps. We also have three other processors that can perform arithmetic on integers that can be represented with at most 4 bits (2^4 = 16). Each of these processors can perform an arithmetic operation on 4-bit integers in 4 time steps.
For example, suppose we want to perform 1000 arithmetic operations on 11-bit integers. Using a single processor, this would require:
1000 ⋅ 11 = 11,000 time steps
If we use three coprime numbers 13, 14, and 15, and we use each of the three 4-bit processors to perform these operations modulo 13, 14, and 15 in parallel, 1000 operations would require:
1000 ⋅ 4 = 4,000 time steps
Note that 13 ⋅ 14 ⋅ 15 = 2730, and that 2730 > 2048, so:
13 ⋅ 14 ⋅ 15 > 2^11
Suppose it takes 1400 time steps to solve a system of three congruence equations of the following form:
x ≡ a (mod 13)
x ≡ b (mod 14)
x ≡ c (mod 15)
If we want to perform the computations as quickly as possible and we can use either the 11-bit processor or the three 4-bit processors, how many operations k would we need to perform before we decided to switch from the 11-bit processor to the 4-bit processors?
Example: Suppose we are using echolocation to measure the distance to a wall that is at most 15 distance units away. We have two devices that emit sounds at two different frequencies. One device emits sound every 3 seconds, while the other device emits a sound every 11 seconds. Suppose we hear the following:
• the device that emits a sound every 3 seconds hears a response 2 seconds after each time it emits a sound;
• the device that emits a sound every 11 seconds hears a response 4 seconds after each time it emits a sound.
If sound travels one distance unit per second, how far away is the wall?
Example: Suppose Alice, Bob, and Eve are using the Shamir secret sharing protocol to store a combination for a lock; all three participants would need to work together to retrieve the secret lock combination in ℤ/60ℤ. They are each given the following equations:
Alice: x ≡ 1 (mod 3)
Bob: x ≡ 3 (mod 4)
Eve: x ≡ 2 (mod 5)
• What is the lock combination?
• The lock only permits anyone to try two incorrect combinations before locking down completely and becoming inaccessible. Suppose Eve has a chance to steal either Bob's secret information or Alice's secret information, but she can only choose one. Whose information should she steal in order to unlock the lock?
Example: Suppose we want to store a number n between 0 and 500,000 on a collection of 5-bit memory regions. However, we want to make sure that if any one of the memory regions is turned off, we can still recover the number exactly, without any missing information or errors. How many memory regions will we need to use? Note that 32^3 = 32,768 and 32^4 = 1,048,576.
Example: Suppose we make the following simplifications: for every t years,
• when the Earth revolves around the sun, it travels a circumference of 1 unit, at a rate of 1 ⋅ t (once per year);
• when the asteroid Ceres revolves around the sun, it travels a circumference of 5 units;
• when the planet Jupiter revolves around the sun, it travels a circumference of 11 units.
Suppose that on June 21st, 2000, the Earth, Ceres, and Jupiter all align (i.e., one can draw a straight line through all three). Next, suppose that it is June 21st of some year between 2000 and 2055. At this time, there is no alignment. However, Jupiter aligned with Earth on June 21st two years ago, and Ceres aligned with Earth on June 21st three years ago. What year is it?
### [link] 3.12. Euler's totient function, Euler's theorem, and applications
Definition: For any input m ∈ ℕ, define Euler's totient function φ by:
φ(m) = |{k | k ∈ {1,...,m}, gcd(k,m) = 1}|
Example: Compute φ(15).
φ(15) = |{k | k ∈ {1,...,15}, gcd(k,15) = 1}|
      = |{1,2,4,7,8,11,13,14}|
      = 8
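A brute-force implementation that follows this definition directly (a sketch; fine for small m, though far too slow for cryptographic sizes):

```python
from math import gcd

def phi(m):
    # Count the k in {1, ..., m} that are coprime with m.
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

assert phi(15) == 8                   # the example above
assert phi(7) == 7 - 1                # φ(p) = p - 1 for prime p
assert phi(3 * 5) == phi(3) * phi(5)  # multiplicative for coprime arguments
```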
Example: Suppose p ∈ ℕ is a prime number. What is φ(p)?
φ(p) = |{k | k ∈ {1,...,p}, gcd(k,p) = 1}|
     = |{1,2,3,...,p-1}|
     = p - 1
Example: What is φ(15)?
φ(15) = |{k | k ∈ {1,...,15}, gcd(k,15) = 1}|
      = 15 - |{k | k ∈ {1,...,15}, gcd(k,15) ≠ 1}|
      = 15 - |{3,6,9,12,15} ∪ {5,10,15}|
      = 15 - |{3,6,9,12}| - |{5,10}| - |{15}|
      = 15 - (5-1) - (3-1) - 1
      = 15 - 5 - 3 + 1 + 1 - 1
      = 15 - 5 - 3 + 1
      = (3 ⋅ 5) - 5 - 3 + 1
      = (3-1) ⋅ (5-1)
      = 2 ⋅ 4
      = 8
Fact: For any x ∈ ℕ and y ∈ ℕ, if gcd(x,y) = 1 then:
φ(x) ⋅ φ(y) = φ(x ⋅ y)
Example: Suppose p ∈ ℕ and q ∈ ℕ are prime numbers. What is φ(p ⋅ q)?
φ(p ⋅ q) = φ(p) ⋅ φ(q)
         = (p-1) ⋅ (q-1)
Fact: For any prime p ∈ ℕ and any k ∈ ℕ, we have that:
φ(p^k) = p^k - p^(k-1)
Fact: For any a ∈ ℕ and m ∈ ℕ, if a^(m-1) mod m = 1 then a and m are coprime. To see this, note that a^(m-1) mod m = 1 implies that for some k:
a^(m-1) = 1 + k ⋅ m
Then we have:
1 = gcd(1 + k ⋅ m, k ⋅ m)
  = gcd(a^(m-1), k ⋅ m)
  = gcd(a, k ⋅ m)
  = gcd(a, m)
Thus, a and m are coprime.
Example: Suppose m ℕ is a Carmichael number. At most how many Fermat liars does m have?
Fact: We can use φ to provide a formula for the probability that the Fermat primality test will detect that a Carmichael number m ℕ is actually composite. It is approximately:
(m - φ(m)) / m
To be more precise (since we do not check 0 or 1 in our actual implementation), it is:
((m - 3) - φ(m)) / (m - 3)
Unfortunately, Euler's totient function does not in general have a better upper bound than f(m) = m.
Example: How many elements of ℤ/mℤ have a multiplicative inverse in ℤ/mℤ? An x ∈ ℤ/mℤ has an inverse iff gcd(x,m) = 1, so the set of such x is exactly the set {x | x ∈ {1,...,m}, gcd(x,m) = 1}. But the size of this set is the definition of φ(m). Thus, there are φ(m) elements in ℤ/mℤ that have a multiplicative inverse.
Theorem (Euler's theorem): For any m ∈ ℕ and a ∈ ℤ/mℤ, if gcd(m,a) = 1 then we have that:
a^φ(m) mod m = 1
Notice that if m is a prime number, then φ(m) = m-1. Then for any a ∈ ℤ/mℤ with a ≢ 0, gcd(a,m) = 1 and a^(m-1) mod m = 1. This is exactly the statement of Fermat's little theorem. Thus, Euler's theorem is a generalization of Fermat's little theorem.
Fact: For any m ∈ ℕ and a ∈ ℤ/mℤ, if gcd(m,a) = 1 then for any i ∈ ℕ such that i ≡ 0 (mod φ(m)) we have that:
a^i mod m = 1
This is because i ≡ 0 (mod φ(m)) implies i = k ⋅ φ(m) for some k, so:
a^(φ(m) ⋅ k) mod m = (a^φ(m))^k mod m
                   = 1^k mod m
                   = 1 mod m
Fact: For any p ∈ ℕ, if p is prime and a ∈ ℤ/pℤ with a ≢ 0 then for any k ∈ ℤ we have that:
a^k mod p = a^(k mod (p-1)) mod p
Fact: For any m ∈ ℕ and a ∈ ℤ/mℤ, if gcd(m,a) = 1 then for any k ∈ ℤ we have that:
a^k mod m = a^(k mod φ(m)) mod m
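We can spot-check this fact for m = 15 and a = 4, where φ(15) = 8 (a sketch using Python's three-argument pow):

```python
from math import gcd

m, a, phi_m = 15, 4, 8        # φ(15) = 8
assert gcd(a, m) == 1
for k in range(0, 50):
    # Reducing the exponent modulo φ(m) never changes the result.
    assert pow(a, k, m) == pow(a, k % phi_m, m)
```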
Example: We can compute the integer value 2^38 mod 7 as follows because 7 is prime:
2^38 ≡ 2^(38 mod (7-1)) (mod 7)
     ≡ 2^(38 mod 6)
     ≡ 2^2
     ≡ 4
Since the final operation in the integer term is a modulus operation, the congruence class 4 is also exactly the integer result of the term.
Example: We can compute 42^10000000 mod 5 as follows because 5 is prime:
42^10000000 ≡ 42^(10000000 mod (5-1)) (mod 5)
            ≡ 42^(10000000 mod 4)
            ≡ 42^0
            ≡ 1
Example: We can compute 4^(8^100 + 3) mod 15 as follows because gcd(4,15) = 1:
4^(8^100 + 3) ≡ 4^((8^100 + 3) mod φ(15)) (mod 15)
              ≡ 4^((8^100 + 3) mod ((5-1) ⋅ (3-1)))
              ≡ 4^((8^100 + 3) mod 8)
              ≡ 4^3
              ≡ 64
              ≡ 4
Example: Compute 5^6603 mod 7.
Fact: For any m ∈ ℕ and a ∈ ℤ/mℤ where gcd(m,a) = 1, we can use Euler's theorem to find the inverse of a. Notice that:
a^φ(m) mod m = 1
(a^(φ(m)-1) ⋅ a) mod m = 1
Thus, a^(φ(m)-1) mod m is the multiplicative inverse of a in ℤ/mℤ.
Example: Find the multiplicative inverse of 5^2 in ℤ/7ℤ.
It is sufficient to notice that 5^6 ≡ 1 (mod 7), so 5^2 ⋅ 5^4 ≡ 1, so 5^4 is the inverse of 5^2 in ℤ/7ℤ.
Example: We can find the multiplicative inverse of 3 in ℤ/22ℤ using the following steps. We first compute φ(22) = 10:
φ(22) = φ(11 ⋅ 2)
      = φ(11) ⋅ φ(2)
      = (11 - 1) ⋅ (2 - 1)
      = 10 ⋅ 1
      = 10
Next, we compute the inverse using Euler's theorem:
3^(-1) ≡ 3^(φ(22) - 1) (mod 22)
       ≡ 3^(10 - 1)
       ≡ 3^9
       ≡ 3^3 ⋅ 3^3 ⋅ 3^3
       ≡ 5 ⋅ 5 ⋅ 5
       ≡ 25 ⋅ 5
       ≡ 3 ⋅ 5
       ≡ 15
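The same computation as a small helper (a sketch; φ(m) must be supplied by the caller, and gcd(a, m) = 1 is assumed):

```python
def inverse(a, m, phi_m):
    # Euler's theorem: a^(φ(m)-1) ≡ a^(-1) (mod m) when gcd(a, m) = 1.
    return pow(a, phi_m - 1, m)

inv = inverse(3, 22, 10)   # φ(22) = 10
assert inv == 15
assert (3 * inv) % 22 == 1
```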
Definition: For m ∈ ℕ, we define (ℤ/mℤ)* to be the following subset of ℤ/mℤ:
(ℤ/mℤ)* = { a | a ∈ ℤ/mℤ, a has an inverse in ℤ/mℤ }
Example: Does 11 have an inverse in ℤ/22ℤ (i.e., is it true that 11 ∈ (ℤ/22ℤ)*)?
Example: Compute |(ℤ/35ℤ)*|.
|(ℤ/35ℤ)*| = |{ a | a ∈ ℤ/35ℤ, a has an inverse in ℤ/35ℤ }|
           = |{ a | a ∈ ℤ/35ℤ, gcd(a,35) = 1 }|
           = φ(35)
           = φ(5 ⋅ 7)
           = φ(5) ⋅ φ(7)
           = 4 ⋅ 6
           = 24
Fact: For any m ∈ ℕ, (ℤ/mℤ)* is closed under multiplication modulo m. That is, for any a ∈ ℤ/mℤ and b ∈ ℤ/mℤ, if the inverses a^(-1) and b^(-1) exist in ℤ/mℤ then (a ⋅ b) has an inverse (a^(-1) ⋅ b^(-1)). We can use the commutativity of multiplication to show this:
(a ⋅ b) ⋅ (a^(-1) ⋅ b^(-1)) ≡ (a ⋅ a^(-1)) ⋅ (b ⋅ b^(-1))
                            ≡ 1 ⋅ 1
                            ≡ 1
## [link] Review 1. Properties, Algorithms, and Applications of Modular Arithmetic
This section contains a comprehensive collection of review problems going over the course material covered until this point. Many of these problems are an accurate representation of the kinds of problems you may see on an exam.
Exercise: For some a ∈ ℕ, suppose that a has an inverse a^(-1) in ℤ/21ℤ, and that it also has an inverse in ℤ/10ℤ. Determine whether or not a has an inverse in ℤ/210ℤ. Explain why or why not. Hint: use gcd.
Exercise: Bob is trying to implement a random number generator. However, he's distracted and keeps making mistakes while building his implementation.
1. Bob begins his algorithm by generating two coprime numbers a and m such that gcd(a,m) = 1. However, he mixes them up and defines the following computation:
[ (i ⋅ m) mod a | i ∈ {1,...,a-1} ]
Is Bob going to get a permutation? Why or why not?
2. Bob notices part of his mistake and tries to fix his algorithm; he ends up with the following:
[ (i ⋅ m) mod m | i ∈ {1,...,m-1} ]
How many distinct elements does the list he gets in his output contain?
3. Bob notices his algorithm isn't returning a permutation, but he mixes up a few theorems and attempts the following fix:
[ (i ⋅ a^(m-1)) mod m | i ∈ {1,...,m-1} ]
Bob tests his algorithm on some m values that are prime numbers. How many elements does the set he gets in his output contain?
4. Bob doesn't like the fact that his permutation doesn't look very random, so he moves the i term to the exponent:
[ a^(i ⋅ (m-1)) mod m | i ∈ {1,...,m-1} ]
Bob tests his algorithm on some m values that are prime numbers. How many elements does the set he gets in his output contain?
Exercise: Suppose you have the following instance of Bézout's identity: 2 ⋅ 3 + (-1) ⋅ 5 = 1. Solve the following system of equations:
x ≡ 2 (mod 3)
x ≡ 3 (mod 5)
Exercise: Solve the following system of equations:
x ≡ 2 (mod 7)
x ≡ 3 (mod 5)
Exercise: Determine the size of the following set:
{x | x ∈ ℤ/(11 ⋅ 13)ℤ, x ≡ 5 mod 11, x ≡ 7 mod 13}
Exercise: For a given y ∈ ℤ/(p ⋅ q)ℤ where p and q are distinct primes, how many solutions does the following system of equations have:
x ≡ y^2 (mod p)
x ≡ y^2 (mod q)
Exercise: Determine the size of the following set:
{x | x ∈ ℤ/(11 ⋅ 13)ℤ, s ∈ ℤ/11ℤ, t ∈ ℤ/13ℤ, x ≡ s mod 11, x ≡ t mod 13 }
Exercise: Suppose that n ℕ is even and n/2 − 1 is odd. Determine the size of the following set:
{i ⋅ (n/2 - 1) mod n | i ∈ {0,...,n-1} }
Exercise: For any n ∈ ℕ, let a ∈ ℤ/nℤ have an inverse a^(-1) ∈ ℤ/nℤ. Determine the size of the following set:
{ (a ⋅ i) mod n | i ∈ ℤ/nℤ }
Exercise: Let p be a prime number. Compute the set size |ℤ/pℤ - (ℤ/pℤ)*|.
Exercise: In a game, you win if you can guess correctly whether a large number n is prime in under a minute (if you are wrong, you win nothing and you lose nothing). You are given a handheld calculator that can only perform addition, subtraction, multiplication, division, exponentiation, and modulus (the calculator can represent arbitrarily large numbers, and can provide quotients to any precision). Describe one strategy you can use to give yourself a high probability of winning.
Exercise: Suppose that n ∈ ℕ. Compute the following:
5^(3^(4 ⋅ n + 1)) mod 11
Exercise: Suppose we make the following simplifications:
• the Earth revolves around the sun once per year;
• the asteroid Ceres revolves around the sun every 5 years;
• the planet Jupiter revolves around the sun every 11 years.
Suppose that on June 21st, 2000, the Earth, Ceres, and Jupiter all align (i.e., one can draw a straight line through all three).
1. Which two of these objects will align again on June 21st, and in which year?
2. How many years will pass before all three align again?
3. Suppose that it is June 21st of some year between 2000 and 2055. At this time, there is no alignment. However, Jupiter aligned with Earth on June 21st four years ago, and Ceres aligned with Earth on June 21st one year ago. What year is it?
Exercise: Suppose there exist two devices, where one can either produce or consume exactly 2 units of power and another can either produce or consume exactly 7 units of power:
• device A: +/− 2 units
• device B: −/+ 7 units
Suppose we want to produce exactly 1 unit of power using a combination of some number of A devices and B devices. Is this possible?
## [link] 4. Computational Complexity of Modular Arithmetic Algorithms
### [link] 4.1. Definition of computational problems and their complexity
Below, we review a small set of definitions and facts from complexity theory. We will only use these facts as they relate to problems in modular arithmetic and abstract algebra. A course on computational complexity theory would go into more detail.
Definition: Informally, for some formula f, we call a statement of the following form a problem:
• "Given x, find y such that f(x, y) is true."
In the above, x can be viewed as the input describing the problem, and y can be viewed as the solution to the problem.
Definition: The computational complexity of a problem refers to the running time of the most efficient algorithm that can solve the problem.
### [link] 4.2. Complexity of algorithms for solving tractable problems
In this subsection we consider the running time of efficient algorithms for performing common arithmetic operations (addition, subtraction, multiplication, exponentiation, and division). We consider the complexity of these arithmetic operations on each of the following domains:
• unbounded positive integers;
• integers modulo 2k;
• integers modulo n for some n ℕ.
All of our arithmetic algorithms will operate on bit string representations of positive integers. A bit string representation such as
ak-1...a0
is defined to represent the integer
2^(k-1) ⋅ ak-1 + ... + 2^0 ⋅ a0
Note that this means representing a positive integer a requires k = ⌊log2(a)⌋ + 1 bits. Below are some specific examples:
111 = 2^2 ⋅ 1 + 2^1 ⋅ 1 + 2^0 ⋅ 1
1101 = 2^3 ⋅ 1 + 2^2 ⋅ 1 + 2^1 ⋅ 0 + 2^0 ⋅ 1
10 = 2^1 ⋅ 1 + 2^0 ⋅ 0
Since the operations we consider usually take two arguments, we will follow these conventions:
• the first (left-hand side) input is x, a k-bit integer;
• the second (right-hand side) input is y, an l-bit integer.
Thus, x ≤ 2^k - 1 and y ≤ 2^l - 1.
Algorithm: There exists an algorithm that can compute the sum of a k-bit integer x and an l-bit integer y in time O(max(k,l)+1). The size of the output is O(max(k,l)+1).
1. addition of unbounded positive integers: k-bit integer x, l-bit integer y
1. r := 0 (a bit vector to store the result)
2. c := 0 (the carry bit)
3. for i from 0 to max(k,l) − 1
1. r[i] := (x[i] xor y[i]) xor c
2. c := (x[i] and y[i]) or (x[i] and c) or (y[i] and c)
4. r[max(k,l)] := c
5. return r
Example: Below is a Python implementation of the bitwise addition algorithm.
from bitlist import bitlist

def add(x, y):
    k = len(x)
    l = len(y)
    r = bitlist(0)
    c = 0
    for i in range(0, max(k, l)): # Upper bound is not inclusive.
        r[i] = (x[i] ^ y[i]) ^ c
        c = (x[i] & y[i]) | (x[i] & c) | (y[i] & c)
    r[max(k, l)] = c
    return r
How can we use the addition algorithm to implement multiplication? One approach for multiplying two positive integers x, y ∈ ℕ is to do repeated addition of y (repeating the addition operation x times). However, if x is a k-bit integer, this would require up to 2^k - 1 addition operations, which would take exponential time in the representation size of the input x.
A more efficient approach is to use the representation of x as a sum of powers of 2, and to apply the distributive property. Suppose x is represented as the binary bit string ak-1...a0. Then we have:
x ⋅ y = (ak-1 ⋅ 2^(k-1) + ... + a2 ⋅ 2^2 + a1 ⋅ 2^1 + a0 ⋅ 2^0) ⋅ y
      = (ak-1 ⋅ 2^(k-1) ⋅ y) + ... + (a2 ⋅ 2^2 ⋅ y) + (a1 ⋅ 2^1 ⋅ y) + (a0 ⋅ 2^0 ⋅ y)
Notice that we have now rewritten multiplication as k - 1 addition operations. The only other problem is how to multiply y by powers of 2. We can do so simply by appending a 0 to the bit string representation of y. Suppose y is represented as the binary bit string bk-1...b0. Then we have:
2 ⋅ y = 2 ⋅ bk-1...b1b0
      = 2 ⋅ (bk-1 ⋅ 2^(k-1) + ... + b1 ⋅ 2^1 + b0 ⋅ 2^0)
      = bk-1 ⋅ 2^k + ... + b1 ⋅ 2^2 + b0 ⋅ 2^1
      = (bk-1 ⋅ 2^k + ... + b1 ⋅ 2^2 + b0 ⋅ 2^1) + 0 ⋅ 2^0
      = bk-1...b1b00
Thus, our algorithm only needs to depend on addition, and on shifting bit strings left by one (a.k.a., appending a 0 to the bit string at the position of the least significant bit).
Algorithm: There exists an algorithm that can compute the product of a k-bit integer x and an l-bit integer y in time O(k ⋅ (max(k,l)+1+k)), i.e., O(max(k,l)^2). The size of the output is O(k+l) (because the shift left for the 2^1 case does not contribute to the final result, the l-bit integer is shifted left at most k-1 times, but there may still be a carried bit on the last addition operation that is performed).
1. multiplication of unbounded positive integers: k-bit integer x, l-bit integer y
1. r := 0 (a bit vector to store the result)
2. for i from 0 to k − 1
1. if x[i] is 1
1. r := r + y (using unbounded integer addition)
2. shift the bits of y left by one bit (i.e., multiply y by 2)
3. return r
Example: Below is a Python implementation of the bitwise multiplication algorithm. Note that it relies on the bitwise addition algorithm implementation.
def mult(x, y):
    k = len(x)
    l = len(y)
    r = bitlist(0)
    for i in range(0, k): # Upper bound is not inclusive.
        if x[i] == 1:
            r = add(r, y)
        y = y << 1
    return r
Algorithm: There exists an algorithm that can compute the exponentiation x^y of a k-bit integer x and an l-bit integer y in time O(k ⋅ 2^l). The size of the output is O(k ⋅ 2^l). Notice that this means that for unbounded integer outputs, the algorithm runs in exponential time because it must build an output whose size is exponentially large in the size of the input.
1. exponentiation of unbounded positive integers: k-bit integer x, l-bit integer y
1. r := 1 (a bit vector to store the result)
2. for i from 0 to l − 1
1. if y[i] is 1
1. r := rx (using unbounded integer multiplication)
2. x := xx (using unbounded integer multiplication)
3. return r
Example: Below is a Python implementation of the bitwise exponentiation algorithm. Note that it relies on the bitwise multiplication algorithm implementation.
def exp(x, y):
k = len(x)
l = len(y)
r = bitlist(1)
for i in range(0, l): # Upper bound is not inclusive.
if y[i] == 1:
r = mult(r, x)
x = mult(x, x)
return r
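Example: The same square-and-multiply strategy can also be sketched with plain Python integers (the name square_mult_exp is an illustrative choice, not from these notes):

```python
def square_mult_exp(x, y):
    # Compute x**y by scanning the bits of y, squaring x at each step,
    # as in the bitwise exponentiation algorithm above.
    r = 1
    while y > 0:
        if y & 1 == 1:   # The current bit of y is 1.
            r = r * x
        x = x * x        # Square x for the next bit position.
        y = y >> 1
    return r
```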
Algorithm: There exists an algorithm that can compute the integer quotient ⌊ x / y ⌋ of a k-bit integer x and an l-bit integer y in time O((k ⋅ k) + (k ⋅ (2 ⋅ k))), or O(k2).
1. integer division of unbounded positive integers: k-bit integer x, l-bit integer y
1. if y > x
1. return 0
2. for i from 0 to k − 1
1. shift y left by one bit
3. t := 0 (a bit vector to store ⌊ x / y ⌋ ⋅ y)
4. q := 0 (a bit vector to store the integer quotient)
5. p := 2k (to keep track of the current power of 2)
6. for i from 0 to k
1. if t + y ≤ x
1. t := t + y (using unbounded integer addition)
2. q := q + p (using unbounded integer addition)
2. shift y right by one bit
3. shift p right by one bit
7. return q
Example: Below is a Python implementation of the bitwise integer division algorithm. Note that it relies on the bitwise addition algorithm implementation.
def div(x, y):
    k = len(x)
    l = len(y)
    if y > x:
        return bitlist(0)
    for i in range(0, k): # Upper bound is not inclusive.
        y = y << 1
    t = bitlist(0)
    q = bitlist(0)
    p = bitlist(2**k)
    for i in range(0, k+1): # Upper bound is not inclusive.
        if add(t, y) <= x: # Assumes the bitwise addition implementation is named add.
            t = add(t, y)
            q = add(q, p)
        y = y >> 1
        p = p >> 1
    return q
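Example: The same algorithm can be sketched with plain Python integers, using only addition, comparison, and shifts (the name shift_div is an illustrative choice):

```python
def shift_div(x, y):
    # Integer quotient of x and y, following the bitwise division
    # algorithm above: greedily add shifted copies of y back up to x.
    if y > x:
        return 0
    k = x.bit_length()
    y = y << k            # y * 2**k, guaranteed to exceed x.
    t = 0                 # Running value of q times the original y.
    q = 0                 # The quotient being assembled.
    p = 1 << k            # The current power of two.
    for _ in range(k + 1):
        if t + y <= x:
            t = t + y     # Accumulate another multiple of the divisor.
            q = q + p     # Record the corresponding power of two.
        y = y >> 1
        p = p >> 1
    return q
```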
Algorithm: There exists an algorithm that can compute x mod y of a k-bit integer x and an l-bit integer y in time O(k2). This is accomplished by first performing an integer division, then an integer multiplication, and then a subtraction. This corresponds to the formula for the modulus operation:
x mod y
=
x - ⌊ x/y ⌋ ⋅ y
When we consider the operations above as operating on integers modulo 2k (with results also in 2k), this corresponds to simply dropping any bits beyond the k least-significant bits when performing the computation.
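Example: A minimal sketch of this composition, using Python's built-in operators in place of the bitwise algorithms (the name mod_via_div is an illustrative choice):

```python
def mod_via_div(x, y):
    # x mod y from the formula x - (x // y) * y: one integer division,
    # one multiplication, and one subtraction.
    return x - (x // y) * y
```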
Fact: There exists an algorithm that can compute the sum of two k-bit integers x and y in time O(k). The size of the output is O(k).
Fact: There exists an algorithm that can compute the product of two k-bit integers x and y in time O(k2). The size of the output is O(k).
Fact: There exists an algorithm that can compute xy for two k-bit integers x and y in time O(k3). The size of the output is O(k).
Fact: The recursive algorithm for gcd (and the extended Euclidean algorithm) makes O(log (max(x,y))) recursive calls on integer inputs x ∈ ℕ and y ∈ ℕ. Notice that this means that the number of recursive calls is linear, or O(max(k,l)), for inputs consisting of a k-bit integer x and an l-bit integer y.
To see the above, consider the following fact: for any a ∈ ℕ, b ∈ ℕ, if b ≤ a then a mod b < (1/2) ⋅ a. Consider the two possibilities for a and b:
• if b ≤ (1/2) ⋅ a, then ⌊ a / b ⌋ > 1, so:
(a mod b)
< b
≤ (1/2) ⋅ a
• if b > (1/2) ⋅ a, then ⌊ a / b ⌋ = 1, so:
a mod b
= a − ⌊ a/b ⌋ ⋅ b
= a − 1 ⋅ b
= a − b
< a − ((1/2) ⋅ a)
= (1/2) ⋅ a
Thus, every time a mod b is computed in the algorithms, the result is less than half of a. Since every other invocation switches the two parameters, both parameters are at least halved after every two invocations. Thus, the number of invocations or iterations for an input m is O(log m).
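Example: The logarithmic bound can be observed directly. Below is a Python sketch of the recursive Euclidean algorithm instrumented with a call counter (the name gcd_calls and the calls parameter are illustrative additions, not part of the standard algorithm):

```python
def gcd_calls(a, b, calls=1):
    # Recursive Euclidean algorithm that also counts its invocations,
    # illustrating the O(log max(a, b)) bound on the number of calls.
    if b == 0:
        return (a, calls)
    return gcd_calls(b, a % b, calls + 1)
```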
Fact: The recursive algorithm for the extended Euclidean algorithm on inputs consisting of a k-bit integer x and an l-bit integer y runs in time O(max(k,l) ⋅ (2 ⋅ max(k,l)2 + max(k,l))), or O(max(k,l)3). The number of recursive calls is about max(k,l), and each recursive call involves an integer division, a multiplication, and a subtraction.
If all inputs and outputs are integers that can be represented with at most k bits, the running time is then O(k3).
Fact: The following problem can be solved in polynomial time: given x ∈ (ℤ/nℤ)*, compute x-1. This can be reduced to running the extended Euclidean algorithm, which has a polynomial running time.
If all inputs and outputs are integers that can be represented with at most k bits, the running time is then O(k3).
Fact: There exists an O(max(k,l)3 + (k+l)2) algorithm that can solve the following system of two equations (for k-bit integers x,x' and l-bit integers y,y') using the Chinese remainder theorem:
s ≡ x' (mod x)
s ≡ y' (mod y)
This algorithm calls the extended Euclidean algorithm on x and y, and then performs four multiplications modulo (xy). If all inputs and outputs are integers that can be represented with at most k bits, the running time is then O(k3).
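Example: Below is a self-contained Python sketch of this approach for a system of two equations, built on a recursive extended Euclidean algorithm (the names egcd and crt2 are illustrative choices, not from these notes):

```python
def egcd(a, b):
    # Extended Euclidean algorithm: returns (g, s, t)
    # such that g = gcd(a, b) = s*a + t*b.
    if b == 0:
        return (a, 1, 0)
    g, s, t = egcd(b, a % b)
    return (g, t, s - (a // b) * t)

def crt2(xp, x, yp, y):
    # Solve s ≡ xp (mod x) and s ≡ yp (mod y) for coprime moduli x and y,
    # using a single call to the extended Euclidean algorithm.
    g, s, t = egcd(x, y)
    assert g == 1, "moduli must be coprime"
    # s*x + t*y = 1, so t*y ≡ 1 (mod x) and s*x ≡ 1 (mod y).
    return (xp * t * y + yp * s * x) % (x * y)
```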
Exercise: Multiply the following two numbers (represented in binary) using the multiplication algorithm presented in lecture: 1101101.
### [link] 4.3. Complexity of (probably) intractable problems
In the previous section we saw that addition, subtraction, multiplication, exponentiation, and division (integer division, modulus, and multiplication by multiplicative inverses) can all be computed efficiently (i.e., in polynomial time), both over integers and over congruence classes. It is also possible to efficiently compute roots and logarithms of integers (we omit proofs of this fact in this course). However, no efficient algorithms are known for computing roots and logarithms of congruence classes.
Definition: A problem can be solved in polynomial time iff there exists for some constant c an algorithm that solves all instances of the problem in time O(nc). The set of all problems that can be solved in polynomial time is called P, and if a problem can be solved in polynomial time, we say that the problem is in P.
Definition: A problem can be solved in exponential time iff there exists an algorithm that solves all instances of the problem in time O(2n).
Definition: There exists a polynomial-time reduction from a problem A to a problem B iff there exists a polynomial-time algorithm that can convert any instance of problem A into an instance of problem B (i.e., convert an input for A into an input for B, and convert the output from B into an output from A).
A polynomial-time reduction from one problem to another can be viewed as two separate polynomial-time algorithms: a conversion algorithm that takes inputs to problem A and invokes a solver for problem B some polynomial number of times, and a conversion algorithm that takes all the outputs obtained from the solver for problem B and assembles and/or converts them into outputs for problem A.
solver forproblem B ⇒⇒⇒ conversionfrom output(s) fromB to output from A ⇑⇑⇑ ⇓ conversionfrom input forA to input(s) for B ⇐ solver forproblem A
We can summarize the above diagram by simply saying that problem A reduces to problem B.
problem B ⇐ problem A
We have already seen examples of such reductions. For example, a CRT solver for two equations makes a single call to the extended Euclidean algorithm. Thus, there exists a polynomial-time reduction from the problem of solving a two-equation system using CRT to the problem of computing multiplicative inverses.
findingmultiplicativeinverses ⇐ solving two-equationsystems using CRT
Fact: If there exists a polynomial-time reduction from problem A to problem B, and problem A is not in P (i.e., there exists no polynomial-time algorithm to solve A), then problem B must not be in P, either.
To see why B cannot be in P, we can present a proof by contradiction. Suppose that there does exist a polynomial-time algorithm to solve problem B. Then the polynomial-time reduction from A to B can invoke a polynomial-time algorithm. But then the reduction and algorithm for B working together will constitute a polynomial-time algorithm to solve A. Then it must be that A is in P. But this contradicts the fact that A is not in P, so no such polynomial-time algorithm for B could exist.
The above fact allows us to make conclusions about the computational complexity of certain problems based on their relationships (in terms of implementation) to other problems.
problem Bpremise:can be solved inpolynomial timeB ∈ P ⇐ problem Aconclusion:can be solved inpolynomial timeA ∈ P problem Bconclusion:cannot be solved inpolynomial timeB ∉ P ⇐ problem Apremise:cannot be solved inpolynomial timeA ∉ P
Intuitively, we can imagine that if problem A is "attached to" (i.e., depends on) problem B, an "easy" problem B will "pull" A down into the set of easily solvable problems P, while a "difficult" problem A will "pull" problem B into the set of hard-to-solve problems.
Conjecture (factoring): The following problem is not in P: given any integer n ∈ ℕ where n = p ⋅ q and p and q are prime, find p and q.
Fact: Suppose that n = p ⋅ q for two primes p ∈ ℕ and q ∈ ℕ. Given only n and φ(n), it is possible to compute p and q. Consider the following:
φ(n)
=
(p − 1) ⋅ (q − 1)
φ(n)
=
p ⋅ q − p − q + 1
φ(n)
=
n - p − q + 1
φ(n) - n
=
- p − q + 1
φ(n) - n - 1
=
− p − q
Thus, it is sufficient to solve the following system of equations for p and q:
n
=
p ⋅ q
φ(n) - n - 1
=
− p − q
Example: Suppose that n = 15 and φ(n) = 8. Factor n.
We can plug n and φ(n) into the system of equations derived in the applicable fact:
15
=
p ⋅ q
8 − 15 − 1
=
− p − q
With two equations and two unknowns, we can now solve for p and q:
8 − 15 − 1
=
− p − q
p
=
15 − 8 + 1 − q
=
8 − q
15
=
(8 − q) ⋅ q
0
=
− q2 + 8q − 15
0
=
q2 − 8q + 15
At this point, we use the quadratic equation:
q
=
1/2 ⋅ (8 ± √(64 − 4(1)(15)))
q
=
1/2 ⋅ (8 ± √(4))
q
=
1/2 ⋅ (8 ± 2)
q ∈ {3, 5}
{p, q}
=
{3, 5}
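Example: The derivation above translates directly into a short computation. Below is a minimal Python sketch (the name factor_from_phi is an illustrative choice) that recovers p and q from n and φ(n) via the quadratic formula:

```python
import math

def factor_from_phi(n, phi):
    # n = p*q and phi = (p-1)*(q-1) imply p + q = n - phi + 1;
    # p and q are then the two roots of z**2 - (p+q)*z + n = 0.
    s = n - phi + 1                  # p + q
    d = math.isqrt(s * s - 4 * n)    # square root of the discriminant
    return ((s - d) // 2, (s + d) // 2)
```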
Conjecture (computing φ): The following problem is not in P: given any integer n ∈ ℕ where n = p ⋅ q and p and q are prime, find φ(n).
If we can compute φ(n), then we can compute p and q. Thus, if computing φ(n) were any easier than factoring n (e.g., if we had a polynomial-time algorithm for computing φ(n)), then our conjecture about the hardness of factoring n would be contradicted. In other words, factoring n can be reduced to computing φ(n).
computing φ(n)conclusion:cannot be solved inpolynomial timecomputing φ(n) ∉ P ⇐ factoring nconjecture:cannot be solved inpolynomial timefactoring ∉ P
The above fact (i.e., that if factoring n is not in P, then neither is computing φ(n)) holds for arbitrary n, not just a product of two primes. However, the proofs in those cases are more sophisticated [Shoup].
Suppose we are given the following equation:
xy
=
z
There are three computational questions we could ask about the above equation:
• given x and y, compute z (this is the exponentiation operation);
• given x and z, compute y (this is the logarithm operation, since we have logx z = y in an equivalent notation);
• given y and z, compute x (this is the yth root operation, since we have y√(z) = x in an equivalent notation).
We have efficient algorithms for computing all three of the above if x, y, and z are all integers or real numbers. Suppose we instead consider the following equation for some n ∈ ℕ:
xy ≡ z (mod n)
In other words, we can interpret the equation as a congruence of equivalence classes in ℤ/nℤ. In this case, we already know that the first operation (exponentiation) has an efficient implementation because exponentiation and modulus are both efficient operations. However, we believe that the other two operations (computation of logarithms and roots of congruence classes) are computationally difficult (no polynomial-time algorithm exists to compute solutions).
Conjecture (RSA problem): The following problem is not in P: compute m given only the following:
n = p ⋅ q for two primes p and q in ℕ
e ∈ ℤ/φ(n)ℤ where e ≥ 3
c = me mod n for an unknown m ∈ ℤ/nℤ
Notice that the RSA problem is analogous to computing the eth root of c in ℤ/nℤ:
e√(c) (mod n)
=
e√(me) (mod n)
=
m (mod n)
Note also that this can be accomplished by first finding φ(n) and then computing the inverse of e, but this is as difficult as factoring n, and we assume that is not in P. Is there another way to compute m? We do not know, but we assume that there is no other faster (i.e., polynomial-time) way to do so.
Conjecture (discrete logarithm assumption): The following problem is not in P: compute e given only the following:
n ∈ ℕ
m ∈ {1,...,n − 1}
c = me mod n for an unknown e ∈ ℕ
Notice that this is analogous to computing the logarithm of a value c in ℤ/nℤ with respect to a known base m:
logm (c)
=
logm (me)
=
e
Note that the RSA problem requires that e ≥ 3, so it does not include the problem of computing square roots. The last intractable problem we will consider in this section is the problem of computing square roots of congruence classes within ℤ/nℤ for some n ∈ ℕ. We examine this problem separately from the RSA problem because it is possible to reduce factoring directly to the problem of computing square roots (while there is currently no known deterministic polynomial-time reduction from the factoring problem to the RSA problem).
Before we formally define the problem of computing square roots in ℤ/nℤ, we must first introduce some concepts and facts. This is because the problem of computing square roots in ℤ/nℤ is different from the problem of computing square roots of integers in ℕ, and this difference is likely what makes it computationally more difficult.
Definition: Given some n ∈ ℕ and some y ∈ ℤ/nℤ, we say that y is a quadratic residue in ℤ/nℤ if there exists x ∈ ℤ/nℤ such that x2 ≡ y.
Example: Let us find the quadratic residues in ℤ/7ℤ:
02 ≡ 0 (mod 7)
12 ≡ 1 (mod 7)
22 ≡ 4 (mod 7)
32 ≡ 2 (mod 7)
42 ≡ 2 (mod 7)
52 ≡ 4 (mod 7)
62 ≡ 1 (mod 7)
The quadratic residues in ℤ/7ℤ are 0, 1, 2, and 4. Notice that 3, 5, and 6 are not quadratic residues in ℤ/7ℤ. Thus, the equations x2 ≡ 3, x2 ≡ 5, and x2 ≡ 6 have no solution in ℤ/7ℤ.
Example: Consider the set of congruence classes ℤ/5ℤ. We have that:
02 ≡ 0 (mod 5)
12 ≡ 1 (mod 5)
22 ≡ 4 (mod 5)
32 ≡ 4 (mod 5)
42 ≡ 1 (mod 5)
Notice that 2 and 3 are not quadratic residues in ℤ/5ℤ. Thus, neither x2 ≡ 2 nor x2 ≡ 3 have solutions in ℤ/5ℤ.
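Example: The two tables above can be reproduced by brute force. Below is a minimal Python sketch (the name quadratic_residues is an illustrative choice):

```python
def quadratic_residues(n):
    # Collect the distinct values of x**2 mod n over all x in Z/nZ.
    return sorted({(x * x) % n for x in range(n)})
```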
Fact: Given some n ∈ ℕ and some y ∈ ℤ/nℤ, if y and n are coprime and y is a non-zero quadratic residue in ℤ/nℤ, then there exist at least two a, b ∈ ℤ/nℤ such that a ≢ b, a2 ≡ y, and b2 ≡ y.
Note that this is analogous to square roots in ℤ (since √(z) ∈ ℤ and −√(z) ∈ ℤ are both square roots of z ∈ ℤ if they exist).
We can prove this fact in the following way: suppose that y is a quadratic residue. Then there exists at least one x ∈ ℤ/nℤ such that:
x2 mod n
=
y
But this means that (n − x) ∈ ℤ/nℤ is such that:
((n − x)2) mod n
=
(n2 − (2 ⋅ n ⋅ x) + x2) mod n
=
x2 mod n
=
y mod n
Thus, x and (n − x) are both roots of y.
Example: It is the case that 4 ∈ ℤ/5ℤ is a quadratic residue in ℤ/5ℤ, with two roots 2 and 3:
22 mod 5
=
4
32 mod 5
=
9 mod 5
=
4
Example: Consider 0 ∈ ℤ/3ℤ. We have that:
02 ≡ 0 (mod 3)
12 ≡ 1 (mod 3)
22 ≡ 1 (mod 3)
Thus, x2 ≡ 0 has exactly one solution in ℤ/3ℤ.
Fact: Let p ∈ ℕ be a prime such that p mod 4 = 3, and suppose that y ∈ ℤ/pℤ. Then y has either 0, 1, or 2 roots in ℤ/pℤ.
Example: Suppose we want to solve the following equation for x ∈ ℤ/7ℤ (recall that 2 is a quadratic residue in ℤ/7ℤ):
x2 ≡ 2 (mod 7)
Suppose we start by squaring both sides:
x4 ≡ 22 (mod 7)
We can then use Euler's theorem to add any multiple of φ(7) to the exponent:
x4 ≡ 22 ⋅ 1 (mod 7)
x4 ≡ 22 ⋅ 2φ(7) (mod 7)
x4 ≡ 22 + φ(7) (mod 7)
Since 7 is prime, φ(7) must be even, so 2 + φ(7) is also even. Thus, we can divide the exponent by 2 on both sides:
x2 ≡ 2(2 + φ(7))/2 (mod 7)
Furthermore, since 7 ≡ 3 (mod 4), we know that 2 + φ(7) is a multiple of 4. Thus, we can actually divide both exponents by 4:
x ≡ 2(2 + φ(7))/4 (mod 7)
Thus, we have found x as a power of the original quadratic residue 2: since φ(7) = 6, we have x ≡ 22 ≡ 4 (mod 7), and indeed 42 ≡ 16 ≡ 2 (mod 7).
Fact: Let p ∈ ℕ be a prime such that p mod 4 = 3, and suppose that y ∈ ℤ/pℤ is a quadratic residue with two roots in ℤ/pℤ. Then we can compute the roots using the following formula:
x ≡ ± y(p+1)/4 (mod p)
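Example: The formula above can be sketched in a few lines of Python (the name sqrt_mod_p is an illustrative choice); when y is not a quadratic residue, squaring the candidate exposes the failure:

```python
def sqrt_mod_p(y, p):
    # Square roots of y in Z/pZ for a prime p with p mod 4 = 3,
    # via x ≡ ± y**((p+1)/4) (mod p); None if y is not a residue.
    assert p % 4 == 3
    x = pow(y, (p + 1) // 4, p)
    if (x * x) % p != y % p:
        return None
    return (x, (p - x) % p)
```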
In fact, if the modulus n is not prime, there may exist more than two roots of a value in ℤ/nℤ.
Example: It is the case that 1, -1, 6, -6 ∈ ℤ/35ℤ are all square roots of 1 ∈ ℤ/35ℤ:
12 mod 35
=
1
(-1)2 mod 35
=
342 mod 35
=
1156 mod 35
=
((33 ⋅ 35)+1) mod 35
=
1 mod 35
62 mod 35
=
36 mod 35
=
1 mod 35
(-6)2 mod 35
=
292 mod 35
=
841 mod 35
=
((24 ⋅ 35)+1) mod 35
=
1 mod 35
Example: Suppose we are given an instance of the congruent squares problem where y = 2 and n = 15. We want to find x ∈ ℤ/15ℤ such that x ≢ ± y but x2 ≡ y2 ≡ 22 ≡ 4. Notice that we have that:
y ≡ 2 (mod 3)
y2 ≡ 22 ≡ 1 (mod 3)
(3-y)2 ≡ 12 ≡ 1 (mod 3)
Notice also that we have that:
y ≡ 2 (mod 5)
y2 ≡ 22 ≡ 4 (mod 5)
(5-y)2 ≡ 32 ≡ 4 (mod 5)
Thus, the square roots of 4 in ℤ/3ℤ are 1 and 2, and the square roots of 4 in ℤ/5ℤ are 2 and 3. We can then apply the Chinese remainder theorem to every pair of combinations:
r1 ≡ 1 (mod 3)
r1 ≡ 2 (mod 5)
r1 ≡ 7 (mod 15)
r2 ≡ 2 (mod 3)
r2 ≡ 2 (mod 5)
r2 ≡ 2 (mod 15)
r3 ≡ 1 (mod 3)
r3 ≡ 3 (mod 5)
r3 ≡ 13 (mod 15)
r4 ≡ 2 (mod 3)
r4 ≡ 3 (mod 5)
r4 ≡ 8 (mod 15)
Thus, x = 8 and x = 7 are solutions to x ≢ ± 2 and x2 ≡ 4.
Fact (Hensel's lemma): Let p ∈ ℕ be a prime number greater than 2, and let k ∈ ℕ be any positive integer (i.e., k ≥ 1). Suppose that x and p are coprime, and that x ∈ ℤ/pkℤ can be squared to obtain some quadratic residue r ∈ ℤ/pkℤ:
x2 ≡ r (mod pk)
We can compute y ∈ ℤ/pk+1ℤ such that:
y2 ≡ r (mod pk+1)
We compute it as follows. First, we compute c using the following formula:
c ≡ x-1 ⋅ 2-1 ⋅ ((r - x2) / pk) (mod p)
Then, we have that:
y = x + c ⋅ pk
To see why Hensel's lemma is true, suppose that we have that:
x2 ≡ r (mod pk)
Notice that if it is possible to "lift" x to a root of r in ℤ/pk+1ℤ, the only possibility is that this new root y has an additional multiple of pk. Thus, it must be that for some integer multiple c, we have:
y = x + c ⋅ pk
We can then substitute:
y2 ≡ r (mod pk+1)
(x + (c ⋅ pk))2 ≡ r (mod pk+1)
But we can simplify the above equation:
x2 + (2 ⋅ x ⋅ c ⋅ pk) + (c2 ⋅ p2k) ≡ r (mod pk+1)
But notice that the third term on the left-hand side in the above equation is equivalent to the congruence class 0 + pk+1ℤ:
c2 ⋅ p2k ≡ 0 (mod pk+1)
Thus, we have:
x2 + (2 ⋅ x ⋅ c ⋅ pk) ≡ r (mod pk+1)
(x2 − r) + (2 ⋅ x ⋅ c ⋅ pk) ≡ 0 (mod pk+1)
The above can be rewritten using the divisibility predicate as:
pk+1 | (x2 − r) + (2 ⋅ x ⋅ c ⋅ pk)
Thus, we can divide both sides of the above relationship by pk to obtain:
p | (x2 − r)/pk + (2 ⋅ x ⋅ c)
We can then rewrite the above as an equation of congruence classes:
(x2 − r)/pk + (2 ⋅ x ⋅ c) ≡ 0 (mod p)
2 ⋅ x ⋅ c ≡ − (x2 − r)/pk (mod p)
c ≡ x-1 ⋅ 2-1 ⋅ (− (x2 − r)/pk) (mod p)
c ≡ x-1 ⋅ 2-1 ⋅ ((r − x2)/pk) (mod p)
Thus, we have derived the formula in Hensel's lemma.
Example: To better understand Hensel's lemma, we can derive the lemma for a particular example. Let us start with the following equation:
42 ≡ 2 (mod 7)
Suppose we want to find y ∈ ℤ/72ℤ such that:
y2 ≡ 2 (mod 49)
We know that the difference between 4 and y must be a multiple of 7, so we write:
y = 4 + 7 ⋅ c
Then we proceed:
y2 ≡ 2 (mod 49)
(4 + 7 ⋅ c)2 ≡ 2 (mod 49)
42 + (2 ⋅ 7 ⋅ c ⋅ 4) + (49 ⋅ c2) ≡ 2 (mod 49)
42 + (2 ⋅ 7 ⋅ c ⋅ 4) ≡ 2 (mod 49)
We simplify further to compute c:
(42 - 2) + (2 ⋅ 7 ⋅ c ⋅ 4) ≡ 0 (mod 49)
14 + (2 ⋅ 7 ⋅ c ⋅ 4) ≡ 0 (mod 49)
The above can be rewritten using the divisibility predicate as:
49 | 14 + (2 ⋅ 7 ⋅ c ⋅ 4)
7 | 2 + (2 ⋅ c ⋅ 4)
We can again rewrite the above as an equation of congruence classes:
2 + (2 ⋅ c ⋅ 4) ≡ 0 (mod 7)
2 ⋅ c ⋅ 4 ≡ − 2 (mod 7)
2 ⋅ c ⋅ 4 ≡ 5 (mod 7)
c ≡ 2-1 ⋅ 4-1 ⋅ 5 (mod 7)
c ≡ 4 ⋅ 2 ⋅ 5 (mod 7)
c ≡ 5 (mod 7)
Thus, we have:
y ≡ 4 + 7 ⋅ 5 (mod 49)
y ≡ 39 (mod 49)
Since 49 − 39 = 10, we have:
y ≡ ± 10 (mod 49)
y2 ≡ 2 (mod 49)
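Example: The lifting step can be sketched in a few lines of Python (the name hensel_lift is an illustrative choice; pow(x, -1, p) for modular inverses requires Python 3.8 or later):

```python
def hensel_lift(x, r, p, k):
    # Given x**2 ≡ r (mod p**k), lift x to a root y of r modulo p**(k+1),
    # using c ≡ x^-1 * 2^-1 * ((r - x**2) / p**k) (mod p) and y = x + c * p**k.
    pk = p ** k
    c = (pow(x, -1, p) * pow(2, -1, p) * ((r - x * x) // pk)) % p
    return x + c * pk
```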
Example: We want to find both solutions y ∈ ℤ/121ℤ to:
y2 ≡ 5 (mod 121)
Since 121 = 112, we have p = 11, k = 1, and r = 5. We begin by finding x ∈ ℤ/11ℤ such that:
x2 ≡ 5 (mod 11)
Since 11 ≡ 3 (mod 4), we can use an explicit formula:
x ≡ ± 5(11+1)/4 ≡ ± 53 ≡ ± 3 ⋅ 5 ≡ ± 4 (mod 11)
Thus, it is sufficient to lift the solution 4 ∈ ℤ/11ℤ to a solution in ℤ/121ℤ using Hensel's lemma. We compute c:
c ≡ x-1 ⋅ 2-1 ⋅ ((r − x2)/pk) (mod p)
≡ 4-1 ⋅ 2-1 ⋅ ((5 − 16)/11) (mod 11)
≡ 4-1 ⋅ 2-1 ⋅ (− 1) (mod 11)
≡ 3 ⋅ 6 ⋅ (− 1) (mod 11)
≡ (− 18) (mod 11)
≡ 4 (mod 11)
Thus, we have:
y ≡ 4 + 4 ⋅ 11 ≡ 4 + 44 ≡ 48 (mod 121)
Thus, we have the solution:
y ≡ ± 48 (mod 121)
Example: The distance traveled by an object that is at rest at time t = 0 and then immediately begins accelerating at 4 meters/second2 (i.e., the speed of the object increases by 4 meters/second every second) can be defined in terms of the time in seconds t as:
d
=
1/2 ⋅ 4 ⋅ t2
We might expect an object to behave this way if it is being pulled by gravity, or if it is using a stable propulsion engine (e.g., a rocket).
Suppose we are using a range ambiguity resolution technique to track the distance the object has traveled. If at a particular moment we know that the distance traveled by the object is in the congruence class 10 + 11ℤ, what can we say about the amount of time t that has elapsed since the object started moving?
Since the distance is in 10 + 11ℤ, we can say:
d ≡ 10 (mod 11)
1/2 ⋅ 4 ⋅ t2 ≡ 10 (mod 11)
2 ⋅ t2 ≡ 10 (mod 11)
We know that 2-1 ≡ 6 (mod 11), so we multiply both sides of the above equation by 2-1 to obtain:
t2 ≡ 60 (mod 11)
t2 ≡ 5 (mod 11)
Thus, we can compute:
t ≡ 5(11+1)/4 ≡ 53 ≡ 3 ⋅ 5 ≡ 4 (mod 11)
Thus, we can say that the amount of time that has elapsed is in 4 + 11ℤ.
Example: Solve the following system of equations for x ∈ ℤ/21ℤ (find all solutions):
x2 ≡ 1 (mod 3)
x2 ≡ 1 (mod 7)
We know that there is exactly one solution y ∈ ℤ/21ℤ to the following system:
y ≡ 1 (mod 3)
y ≡ 1 (mod 7)
The solution is simply y = 1, and since there is only one solution, this is the only possibility. Thus, we are looking for all the solutions to the following equation:
x2 ≡ 1 (mod 21)
Since 3 mod 4 = 7 mod 4 = 3, we know that there are two solutions to each of the following equations:
x2 ≡ 1 (mod 3)
x2 ≡ 1 (mod 7)
The solutions are as follows:
x ≡ 1 (mod 3)
x ≡ 2 (mod 3)
x ≡ 1 (mod 7)
x ≡ 6 (mod 7)
Taking every pair of combinations with one solution from ℤ/3ℤ and one solution from ℤ/7ℤ, we get:
x1 ≡ 1 (mod 3)
x1 ≡ 1 (mod 7)
x1 ≡ 1 (mod 21)
x2 ≡ 2 (mod 3)
x2 ≡ 1 (mod 7)
x2 ≡ 8 (mod 21)
x3 ≡ 1 (mod 3)
x3 ≡ 6 (mod 7)
x3 ≡ 13 (mod 21)
x4 ≡ 2 (mod 3)
x4 ≡ 6 (mod 7)
x4 ≡ 20 (mod 21)
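Example: Solutions like these can be checked by brute force. Below is a minimal Python sketch (the name square_roots is an illustrative choice) that enumerates all square roots of a in ℤ/nℤ:

```python
def square_roots(a, n):
    # Brute-force search for all x in Z/nZ with x**2 ≡ a (mod n).
    return [x for x in range(n) if (x * x) % n == a % n]
```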
Example: How many solutions x ∈ ℤ/(33 ⋅ 35)ℤ does the following system of equations have:
x2 ≡ 4 (mod 33)
x2 ≡ 4 (mod 35)
We know that each of the following equations has two solutions (2 and -2 in the respective sets); notice that 4 mod 3 = 1:
x2 ≡ 1 (mod 3)
x2 ≡ 4 (mod 11)
x2 ≡ 4 (mod 5)
x2 ≡ 4 (mod 7)
Thus, there are two possible choices for each of the variables r1 ∈ {-2,2}, r2 ∈ {-2,2}, r3 ∈ {-2,2}, r4 ∈ {-2,2}, so there are 2 ⋅ 2 ⋅ 2 ⋅ 2 = 16 possible systems of the form:
x ≡ r1 (mod 3)
x ≡ r2 (mod 11)
x ≡ r3 (mod 5)
x ≡ r4 (mod 7)
Each such system has exactly one solution by the Chinese remainder theorem, and distinct tuples (r1, r2, r3, r4) yield distinct solutions, so there are 16 solutions for x in ℤ/(33 ⋅ 35)ℤ. Alternatively, we could break the problem down into two subproblems. First, we solve the following equation:
x2 ≡ 4 (mod 33)
We obtain four distinct solutions r1, r2, r3, r4 in ℤ/33ℤ. Next, we solve the following equation:
x2 ≡ 4 (mod 35)
We then have four distinct solutions s1, s2, s3, s4 ∈ ℤ/35ℤ. Since gcd(33,35) = 1, we can then take any combination of solutions ri and si and set up the system:
x ≡ ri (mod 33)
x ≡ si (mod 35)
There will be exactly one solution to each of the above systems. There are 4 ⋅ 4 = 16 distinct systems, so there will be 16 distinct solutions.
We can summarize everything we know about computing square roots of congruence classes as follows. Suppose we want to find all solutions to the equation x2 ≡ a (mod n) for some n ∈ ℕ and some a ∈ ℤ/nℤ.
• If n is a prime p, the possibilities are:
• a is not a quadratic residue in ℤ/pℤ, so there are no solutions to the equation,
• a ≡ 0, in which case x ≡ 0 is the one and only solution to the equation,
• a is a quadratic residue in ℤ/pℤ, so there are exactly two solutions to the equation, ± x ∈ ℤ/pℤ.
• If n is a prime power pk+1 and a is coprime with p, the possibilities are:
• a is not a quadratic residue in ℤ/pkℤ, so it is not a quadratic residue in ℤ/pk+1ℤ;
• a is a quadratic residue in ℤ/pkℤ, and both square roots of a in ℤ/pkℤ can be "lifted" to ℤ/pk+1ℤ using Hensel's lemma.
• If n is a product of two coprime numbers k and m, then there is a solution in ℤ/nℤ for every possible combination of y and z such that:
y2 ≡ a (mod k)
z2 ≡ a (mod m)
Each combination corresponds to a solution x ∈ ℤ/nℤ defined using CRT as:
x ≡ y (mod k)
x ≡ z (mod m)
Let us consider the problem of finding all of the square roots of a member of ℤ/nℤ. Notice that this problem is analogous to computing all the square roots of y in ℤ/nℤ:
√(y)
=
± x
The main difference is that the number of square roots may be greater than 2. This problem is believed to be computationally difficult (i.e., no algorithm in P exists that can solve the problem). In fact, even finding just one additional square root is believed to be computationally difficult.
Conjecture (congruent squares): The following problem is not in P: given n = p ⋅ q for two primes p and q in ℕ and y ∈ ℤ/nℤ, find an x ∈ ℤ/nℤ such that x2 ≡ y2 but x ≢ ± y.
Factoring can be reduced to finding congruent squares. Suppose we want to factor n. We find x and y such that:
x2 mod n
=
y2 mod n
0 mod n
=
(x2 − y2) mod n
=
((x + y) ⋅ (x − y)) mod n
n
|
(x + y) ⋅ (x − y)
Since n cannot divide (x + y) (because x ≢ − y) and n cannot divide (x − y) (because x ≢ y), but n does divide the product (x + y) ⋅ (x − y), it must be that n shares a non-trivial factor with each of (x + y) and (x − y). Thus, either gcd(n, x + y) or gcd(n, x − y) is a non-trivial factor of n, and gcds can be computed efficiently.
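Example: The reduction above suggests the following Python sketch (the function name is an illustrative choice): given a pair of congruent squares modulo n, a single gcd computation yields a non-trivial factor.

```python
import math

def factor_from_congruent_squares(n, x, y):
    # Requires x**2 ≡ y**2 (mod n) with x not congruent to ±y (mod n);
    # then gcd(n, x + y) or gcd(n, x - y) is a non-trivial factor of n.
    assert (x * x) % n == (y * y) % n
    f = math.gcd(n, x + y)
    if f in (1, n):
        f = math.gcd(n, x - y)
    return f
```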
The following diagram summarizes the relationships between the problems that are conjectured to be intractable (i.e., not in P). Each directed edge represents that there exists a polynomial-time reduction from the source problem to the destination problem. All of the nodes in the graph are conjectured to be not in P.
congruent squares(square roots ofcongruence classes) ⇑⇓ ⇑⇓ computing φ(n)for n = p ⋅ q ⇐⇒ factoringn = p ⋅ q ⇑ ⇑ RSA problem(eth roots ofcongruence classes) discrete logarithm(logarithms ofcongruence classes)
### [link] 4.4. Applications of intractability
The computational intractability of certain problems in modular arithmetic makes it possible to address some practical security issues associated with implementing communication protocols. In particular, it helps address two common problems:
• parties must communicate over a public communications channel, so everything they send is visible both to their receiver and to anyone that may be eavesdropping;
• parties trying to communicate cannot physically meet to agree on shared secret information before communicating.
Protocol (hard-to-forge identification with meeting): Suppose Alice and Bob know that Alice will need to send Bob a single message at some point in the future. However, it is possible that Eve might try to impersonate Alice and send a message to Bob while pretending to be Alice.
In order to help Bob confirm that a message is truly from Alice (or to determine which message is from Alice given multiple messages), Alice and Bob meet in person and agree on a secret identifier s. When Alice decides to send a message m to Bob, she will send (m, s). Bob can then compare s to his own copy of s and confirm the message is from Alice.
Eve's only attack strategy is to try and guess s. As long as Alice and Bob choose s from a very large range of integers, the probability that Eve can guess s correctly is small.
A major flaw in the above identification protocol is that Alice and Bob must first meet in person to agree on a secret. Can Alice and Bob agree on a secret without meeting in person?
Protocol (hard-to-forge identification without meeting): Suppose Alice and Bob know that Alice will need to send Bob a single message at some point in the future. Alice prepares for this by doing the following:
• choose two large primes p and q at random;
• compute n = pq;
• send the public identifier n to Bob over a public/non-secure communication channel.
When Alice is ready to send her message m to Bob, Alice will send (m, (p, q)), where (p, q) is the private identifier. Bob can confirm that pq = n, at which point he will know Alice was the one who sent the message.
Conjecture (forging identification): The following problem is not in P: in the previously defined protocol, given a public identifier n, compute the private identifier (p, q).
If it were possible to quickly (i.e., in polynomial time) compute the private identifier, then it would be easy to forge the identity of a sender after recording their public identifier n. However, such a forging computation could be used as a subprocedure in a factoring algorithm, so an efficient forging algorithm would yield an efficient factoring algorithm. In other words, factoring n can be reduced to forging a private identifier. Thus, the problem of forging a private identifier must not be in P (i.e., the fastest algorithm that exists for forging an identity is not polynomial-time, which means there is no polynomial-time algorithm for forging an identity).
forging privateidentifier for nconclusion:cannot be solved inpolynomial timeforging ∉ P ⇐ factoring nconjecture:cannot be solved inpolynomial timefactoring ∉ P
Note that the reduction also works in the other direction: an algorithm for factoring can be used for forging an identity. However, this proves nothing about the difficulty of forging! Just because forging can be reduced to an inefficient algorithm for factoring does not mean that there does not exist some other algorithm for forging that does not rely on factoring.
alternativeefficientalgorithm ⇐ forging privateidentifier for n ⇒ factoring n
The above identification protocol improves over the previous protocol because it allows Alice and Bob to agree on an identifier for Alice without meeting in person. Its security relies on the fact that it is unlikely that Eve is capable of forging identifiers because her ability to forge would solve a problem that we believe is very difficult to solve.
However, the protocol still has many other flaws. For example, Eve could preempt Alice and send her own public identifier n' to Bob before Alice has a chance to send her identifier. A more thorough examination of such protocols is considered in computer science courses focusing explicitly on the subject of cryptography.
Protocol (Diffie-Hellman key exchange): We introduce the Diffie-Hellman key exchange protocol. This protocol is useful if two parties who cannot meet physically want to agree on a secret value that only they know.
• Public key generation (performed by one party):
1. Randomly choose a public large prime number p ∈ ℕ and an element g ∈ ℤ/pℤ.
• Private key generation (performed by both parties):
1. Party A randomly chooses a secret a ∈ ℤ/φ(p)ℤ.
2. Party B randomly chooses a secret b ∈ ℤ/φ(p)ℤ.
• Protocol:
1. Party A computes (ga mod p) and sends this public value to party B.
2. Party B computes (gb mod p) and sends this public value to party A.
3. Party A computes (gb mod p)a mod p.
4. Party B computes (ga mod p)b mod p.
5. Since multiplication over ℤ/φ(p)ℤ is commutative, both parties now share a secret gab mod p.
This protocol's security only relies on the discrete logarithm assumption.
It is not known whether the discrete logarithm problem and the factoring problem are equivalent; however, factoring can be reduced to the discrete logarithm problem modulo p ⋅ q using a probabilistic approach.
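Example: One run of the exchange can be simulated with toy parameters (the prime 467 and generator 2 below are illustrative; real deployments use very large primes):

```python
import random

def diffie_hellman_demo(p, g):
    # Simulate one Diffie-Hellman exchange; returns the two shared
    # secrets computed independently by party A and party B.
    a = random.randrange(1, p - 1)   # party A's secret exponent
    b = random.randrange(1, p - 1)   # party B's secret exponent
    A = pow(g, a, p)                 # public value sent by A to B
    B = pow(g, b, p)                 # public value sent by B to A
    return pow(B, a, p), pow(A, b, p)
```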
Protocol (RSA protocol): We introduce the RSA public-key cryptographic protocol. This protocol is useful in many scenarios, such as the following:
• a sender wants to send the receiver a secret message over a public channel;
• a receiver wants to allow any number of senders to send him messages over a public channel, and the receiver does not yet know who the senders will be.
This protocol can also be used to prove the identity of the receiver.
• Key generation (performed by the receiver):
1. Randomly choose two secret prime numbers p ∈ ℕ and q ∈ ℕ of similar size.
2. Compute a public key value n = pq.
3. Compute the secret value φ(n) = (p-1) ⋅ (q-1).
4. Choose a public key value e {2,...,φ(n)-1} such that gcd(e, φ(n)) = 1.
5. Compute the secret private key d = e-1 mod φ(n).
• Protocol (encryption and decryption): There are two participants: the sender and the receiver.
1. The sender wants to send a message m ∈ {0,...,n-1} where gcd(m,n) = 1 to the receiver.
2. The receiver reveals the public key (n,e) to the sender.
3. The sender computes the ciphertext (encrypted message) c = me mod n.
4. The sender sends c to the receiver.
5. The receiver can recover the original message by computing m = cd mod n.
The above encryption-decryption process works because for some k ∈ ℤ:
e ⋅ d
≡
1 (mod φ(n))
e ⋅ d
=
1 + φ(n) ⋅ k
(me)d mod n
=
(m1 + φ(n) ⋅ k) mod n
=
(m ⋅ (mφ(n))k) mod n
=
(m ⋅ 1k) mod n (by Euler's theorem, since gcd(m,n) = 1)
=
m mod n
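The full cycle of key generation, encryption, and decryption can be sketched in Python with toy primes; pow(e, -1, phi) computes a modular inverse and requires Python 3.8 or later:

```python
from math import gcd

def rsa_keygen(p, q, e=65537):
    # p and q are the receiver's secret primes (toy sizes here).
    n = p * q
    phi = (p - 1) * (q - 1)
    assert gcd(e, phi) == 1
    d = pow(e, -1, phi)  # private key: inverse of e modulo phi(n)
    return (n, e), d

def encrypt(m, public_key):
    n, e = public_key
    return pow(m, e, n)  # c = m^e mod n

def decrypt(c, d, public_key):
    n, _ = public_key
    return pow(c, d, n)  # m = c^d mod n

public_key, d = rsa_keygen(61, 53, e=17)
ciphertext = encrypt(42, public_key)
assert decrypt(ciphertext, d, public_key) == 42
```

Real deployments use primes that are hundreds of digits long and pad messages before encrypting; this sketch only illustrates the arithmetic.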
Besides the message m, there are three pieces of secret information that an eavesdropper cannot know in order for the encryption to provide any privacy:
• p and q
• φ(n)
• d = e-1
Notice that if an eavesdropper knows p and q where n = pq, the eavesdropper can easily compute φ(n) (which was supposed to be private). If the eavesdropper can compute φ(n), then they can use the extended Euclidean algorithm to compute the inverse d = e-1 of the public key value e. They can then use d to decrypt messages.
Suppose the eavesdropper only knows φ(n). Then the eavesdropper can compute d and decrypt any message. In fact, the eavesdropper can also recover p and q.
Protocol (Rabin cryptosystem): We introduce the Rabin cryptosystem protocol. It is similar to the RSA scheme, but it does not rely on the difficulty of the RSA problem.
• Key generation (performed by the receiver):
1. Randomly choose two secret prime numbers p ∈ ℕ and q ∈ ℕ of similar size.
2. Compute a public key value n = pq.
• Protocol (encryption and decryption): There are two participants: the sender and the receiver.
1. The sender wants to send a message m ∈ {0,...,n-1} to the receiver.
2. The receiver reveals the public key n to the sender.
3. The sender computes the ciphertext (encrypted message) c = m2 mod n.
4. The sender sends c to the receiver.
5. The receiver can recover the original message by computing √(c) in ℤ/pℤ and ℤ/qℤ, and then finding the four solutions to the following system by using the Chinese remainder theorem:
m
≡
√(c) (mod p)
m
≡
√(c) (mod q).
Notice that the receiver must guess which of the square roots corresponds to the original message. Also notice that it is not a good idea to encrypt messages in the ranges {0, ..., √(n)} and {n − √(n), ..., n − 1} because it is easy to decrypt such messages by computing the integer square root √(c) of c and then confirming that √(c)2c (mod n).
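A minimal sketch of Rabin decryption in Python, restricted to the common special case p ≡ q ≡ 3 (mod 4), in which a square root of c modulo p can be computed directly as c^((p+1)/4) mod p (the protocol above does not require this restriction):

```python
def rabin_decrypt(c, p, q):
    # For primes p ≡ q ≡ 3 (mod 4), a square root of c modulo p is
    # c^((p+1)/4) mod p (when c is a quadratic residue modulo p).
    rp = pow(c, (p + 1) // 4, p)
    rq = pow(c, (q + 1) // 4, q)
    n = p * q
    # CRT coefficients: yp ≡ 1 (mod p), 0 (mod q); yq ≡ 0 (mod p), 1 (mod q).
    yp = q * pow(q, -1, p)
    yq = p * pow(p, -1, q)
    # Combine the choices ±rp and ±rq into the four candidate messages.
    return {(sp * yp + sq * yq) % n
            for sp in (rp, p - rp) for sq in (rq, q - rq)}

p, q = 7, 11                    # toy primes, both ≡ 3 (mod 4)
ciphertext = pow(20, 2, p * q)  # sender encrypts m = 20
assert 20 in rabin_decrypt(ciphertext, p, q)
```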
The following diagram summarizes the relationships between algorithms that might break each of the protocols presented in this section, and existing problems that are believed not to be in P. Thus, our conjectures imply that all of the problems below are not in P.
• breaking Rabin encryption ⇐ congruent squares (square roots of congruence classes) ⇔ factoring n = p ⋅ q
• finding the RSA secret key ⇐ computing φ(n) for n = p ⋅ q ⇔ factoring n = p ⋅ q
• decrypting individual RSA messages ⇐ RSA problem (eth roots of congruence classes)
• breaking Diffie-Hellman ⇐ discrete logarithm (logarithms of congruence classes)
Fact: Suppose we have some modulus n ∈ ℕ and some a ∈ (ℤ/nℤ)*. Let r ∈ ℤ/φ(n)ℤ be the smallest r such that ar ≡ 1 (mod n). Then it must be that r | φ(n).
To see why, suppose that φ(n) is not divisible by r. Then there must be some c with 0 < c < r such that:
φ(n)
=
c + k ⋅ r
But if that's true, we have:
aφ(n)
≡
ac + k ⋅ r
≡
ac ⋅ (ar)k
≡
ac ⋅ 1k
≡
ac (mod n)
The above implies ac ≡ aφ(n) ≡ 1 (mod n). But since 0 < c < r, this contradicts our choice of r as the smallest exponent satisfying this identity. Thus, it must be that r | φ(n).
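This fact is easy to check numerically. The sketch below computes multiplicative orders by brute force and confirms that each order divides φ(n); the helper functions are illustrative, not efficient:

```python
from math import gcd

def multiplicative_order(a, n):
    # Smallest r >= 1 with a^r ≡ 1 (mod n); assumes gcd(a, n) == 1.
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def phi(n):
    # Brute-force Euler totient, fine for small n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

n = 15
for a in range(2, n):
    if gcd(a, n) == 1:
        assert phi(n) % multiplicative_order(a, n) == 0  # r | phi(n)
```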
Recall the algorithm we learned before for generating random numbers based on using the multiples of a congruence class. In a setting with an adversary, we could imagine the adversary may want to predict the next random number in a sequence given some partial information. If we are using the original algorithm, an adversary can do this efficiently.
Suppose the adversary knows the modulus n, and knows that some number r is the ith random number in the sequence. The adversary then knows the following equation must hold:
a ⋅ i
≡
r (mod n)
If it happens to be the case that gcd(i, n) = 1, the adversary can then compute the original "seed" a by doing a single inversion followed by a single multiplication:
i-1 ⋅ a ⋅ i
≡
i-1 ⋅ r (mod n)
a
≡
i-1 ⋅ r (mod n)
This would then allow the adversary to predict any random number in the sequence.
Suppose the adversary knows the modulus n, and knows that the "seed" for a random sequence is a. Then given a random number in the sequence r, the adversary can determine which number in the sequence it must be by using the same equation as above and computing a-1 (mod n):
a ⋅ i
≡
r (mod n)
a-1 ⋅ a ⋅ i
≡
a-1 ⋅ r (mod n)
i
≡
a-1 ⋅ r (mod n)
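Both attacks amount to a single modular inversion, as the following sketch illustrates (the modulus, seed, and index below are arbitrary toy values):

```python
n = 101          # public modulus (prime, so every nonzero index is invertible)
a = 37           # secret seed
i = 12           # position in the sequence
r = (a * i) % n  # the i-th "random" number

# Attack 1: knowing (n, i, r), recover the secret seed a.
recovered_seed = (pow(i, -1, n) * r) % n
assert recovered_seed == a

# Attack 2: knowing (n, a, r), recover the index i.
recovered_index = (pow(a, -1, n) * r) % n
assert recovered_index == i
```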
We can use what we have learned about intractable problems to create an algorithm for generating random numbers that is slightly more robust against the two attacks described above.
Algorithm: The following is another possible implementation of a random number generation algorithm.
1. inputs: upper bound n ∈ ℕ, seed a ∈ (ℤ/nℤ)*, index i ∈ {2,...,φ(n) − 1}
2. return (ai) mod n
One downside of this algorithm is that it will never produce a permutation of ℤ/nℤ. Even if n is prime and greater than 2, then φ(n) must be composite (since it is even), which means that the smallest exponent i that solves the identity ai ≡ 1 could be some factor of φ(n). Even if φ(n) happens to be prime, it cannot be close to n (since n cannot then also be prime, which means φ(n) ≠ n − 1). However, this algorithm does have a few benefits if an adversary is involved.
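A minimal sketch of this generator, using Python's built-in modular exponentiation (the parameters are illustrative toy values):

```python
def exp_rng(n, a, i):
    # i-th value of the sequence a^i mod n. Recovering a from (n, i, r)
    # is an RSA-type root problem; recovering i from (n, a, r) is a
    # discrete logarithm. Neither is believed to be efficiently solvable.
    return pow(a, i, n)

values = [exp_rng(101, 3, i) for i in range(2, 8)]
```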
Given some partial information, an adversary would have a more difficult time making predictions about the random sequence. Suppose the adversary knows the modulus n, and knows that some number r is the ith random number in the sequence. The adversary then knows the following equation must hold:
ai
≡
r (mod n)
However, in order to compute a, the adversary would now need to compute the ith root of the congruence class r modulo n. Effectively, this means the adversary could solve the RSA problem if i > 2 (or the congruent squares problem if i = 2). If we believe these problems cannot be solved efficiently, such an adversary cannot exist.
Alternatively, suppose the adversary knows the modulus n, and knows that the "seed" for a random sequence is a. Then given a random number in the sequence r, the adversary could try to determine which number in the sequence it must be by solving the following equation for i:
ai
≡
r (mod n)
If the adversary can do so, then the adversary can solve the discrete logarithm problem, so it is unlikely that such an adversary exists.
Example: Bob decides to create his own online currency BobCoin. Bob knows that in order for BobCoin to be successful, it needs to be possible to make more BobCoins as more and more people start using them. However, he also does not want rapid inflation to occur. Thus, Bob issues BobCoins according to the following protocol:
• every day, Bob chooses two new random primes p and q;
• Bob computes n = pq, and then discards p and q;
• Bob posts n online for everyone to see;
• at any time on that day, anyone can submit a factor f of n;
• if f is a factor of n, Bob issues that person a BobCoin, invalidates n permanently so that no one else can use it, and generates a new n.
1. Why is it okay for Bob to discard p and q?
2. Suppose that Bob always posts numbers n that have 100 digits, and it takes the fastest computer one year to factor a 100-digit number through trial and error. If Alice wants to earn a BobCoin in one day, how many computers will Alice need to run in parallel to earn a BobCoin?
3. Suppose Bob wants to issue a complimentary BobCoin coupon to a group of 100 people. However, he wants to make sure that they can use their BobCoin coupon only if at least 20 out of those 100 people agree that the coupon should be redeemed for a BobCoin. How can Bob accomplish this?
In the previous sections, we studied a specific algebraic structure, ℤ/nℤ, as well as its operations (e.g., addition, multiplication), and its properties (commutativity, associativity, and so on). There exist many other algebraic structures that share some of the properties of ℤ/nℤ. In fact, we can create a hierarchy, or even a web, of algebraic structures by picking which properties of ℤ/nℤ we keep and which we throw away.
Showing that some new algebraic structure is similar or equivalent to another, more familiar structure allows us to make inferences about that new structure based on everything we already know about the familiar structure. In computer science, the ability to compare algebraic structures using their properties is especially relevant because every time a programmer defines a new data structure and operations on that data structure, they are defining an algebraic structure. Which properties that algebraic structure possesses determines what operations can be performed on it, in what order they can be performed, how efficiently they can be performed, and how they can be broken down and reassembled.
Recall that a permutation on a set X is a bijective relation between X and X (i.e., a subset of the set product X × X). Since a permutation is a bijective map (i.e., a function), we can reason about composition of permutations (it is just the composition of functions). Thus, we can study sets of permutations as algebraic structures under the composition operation o.
Notice that for any set X of finite size n, we can relabel the elements of X to be {0,...,n-1} (that is, we can define a bijection between X and {0,...,n-1}). Thus, we can study permutations on {0,...,n-1} without loss of generality. We will adopt the following notation for permutations:
[a1,...,an]
Where a1,...,an is some rearrangement of the integers from 0 to n-1. For example, the identity permutation on n elements would be:
[0,1,2,3,4,5,...,n-1]
Definition: Any permutation that swaps exactly two elements is called a swap. Examples of swaps are [0,3,2,1], [1,0], and [0,6,2,3,4,5,1].
Definition: Any permutation that swaps exactly two adjacent elements is called an adjacent swap. Examples of adjacent swaps are [0,1,3,2], [1,0,2,3,4], and [0,1,3,2,4,5,6].
Definition: Define Sn to be the set of all permutations of the set {0,...,n-1}.
Example: The set of permutations of {0,1} is S2 = {[0,1], [1,0]}.
Example: The set of permutations of {0,1,2} is S3 = {[0,1,2], [0,2,1], [1,0,2], [1,2,0], [2,0,1], [2,1,0]}.
Fact: The set Sn contains n! permutations.
Suppose we want to construct a permutation [a1,...,an] using the elements in {0,...,n-1}, where we are only allowed to take each element in the set once and assign it to an unassigned entry ai. Then for the first slot, we have n possibilities; for the second, we have n-1 possibilities. For the third, we have n-2 possibilities, and so on until we have only one possibility left. Thus, the number of possible permutations we can make is:
n!
=
n ⋅ (n-1) ⋅ (n-2) ⋅ ... ⋅ 2 ⋅ 1
Definition: Define the set Cn to be the set of all cyclic permutations on n elements. Any permutation that performs a circular shift on elements is a cyclic permutation (also known as a cyclic shift permutation, a circular shift permutation, or just a shift permutation). Examples of shifts are [6,7,0,1,2,3,4,5], [2,3,4,0,1], and [4,0,1,2,3].
Definition: Define the set Mn to be the set of all multiplication-induced permutations on n elements. Any permutation on n elements that corresponds to multiplication by some coprime a < n is called a multiplication-induced permutation. Examples of such permutations include [0,2,4,1,3] (corresponding to multiples 2 ⋅ i for ascending i in ℤ/5ℤ).
Example: The set of multiplication-induced permutations on 6 elements (i.e., permutations of {0,1,2,3,4,5}) is the collection of permutations of the form [a ⋅ 0 mod 6, a ⋅ 1 mod 6, a ⋅ 2 mod 6, a ⋅ 3 mod 6, a ⋅ 4 mod 6, a ⋅ 5 mod 6] for each a that is coprime with 6. Thus, a {1,5}, and so we have:
M6 = {[0,1,2,3,4,5], [0,5,4,3,2,1]}.
Example: The set of multiplication-induced permutations on 7 elements (i.e., permutations of {0,1,2,3,4,5,6}) is:
M7 = {[0,1,2,3,4,5,6], [0,2,4,6,1,3,5], [0,3,6,2,5,1,4], [0,4,1,5,2,6,3], [0,5,3,1,6,4,2], [0,6,5,4,3,2,1]}.
Note that |M7| = φ(7) = 6 because there are 6 possible a ℤ/7ℤ that are coprime with 7.
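The sets Mn can be enumerated directly. The sketch below builds each permutation as a list, exactly as in the examples above:

```python
from math import gcd

def mult_permutations(n):
    # M_n: one permutation [a*0 mod n, ..., a*(n-1) mod n] per a coprime with n.
    return [[(a * i) % n for i in range(n)]
            for a in range(1, n) if gcd(a, n) == 1]

assert mult_permutations(6) == [[0, 1, 2, 3, 4, 5], [0, 5, 4, 3, 2, 1]]
assert len(mult_permutations(7)) == 6  # |M_7| = phi(7)
```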
### [link] 5.2. Isomorphisms: Equivalence of Algebraic Structures
An algebraic structure is a set together with a binary operator over that set. All algebraic structures are closed under their binary operation.
Definition: Let S be a set, and let ⊕ be a binary operator. Let closure(S,⊕) be the closure of S under ⊕. We can define the set closure(S,⊕) in the following way:
closure(S, ⊕)
=
S ∪ { x1 ⊕ x2 | x1,x2 ∈ S } ∪ { x1 ⊕ (x2 ⊕ x3) | x1,x2,x3 ∈ S } ∪ { (x1 ⊕ x2) ⊕ x3 | x1,x2,x3 ∈ S } ∪ ...
Alternatively, we could define it in the following way using recursion:
closure0(S, ⊕)
=
S
closuren(S, ⊕)
=
{ x ⊕ y | x,y ∈ closuren-1(S, ⊕) ∪ ... ∪ closure0(S, ⊕) }
closure(S, ⊕)
=
closure0(S, ⊕) ∪ closure1(S, ⊕) ∪ closure2(S, ⊕) ∪ ...
Notice that if a set S is finite, there is a natural way to algorithmically list all elements in closure(S, ⊕) by starting with the elements in S and "building up" all the elements in each of the closurei(S, ⊕) subsets.
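The iterative procedure described above can be sketched as a short fixed-point computation (the function below takes the operation as a parameter; it terminates only when the closure is finite):

```python
def closure(S, op):
    # Repeatedly apply op to all pairs until no new elements appear.
    result = set(S)
    while True:
        new = {op(x, y) for x in result for y in result} - result
        if not new:
            return result
        result |= new

# Closure of {2} under addition modulo 6:
assert closure({2}, lambda x, y: (x + y) % 6) == {0, 2, 4}
```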
The concept of an isomorphism between two algebraic structures captures the fact that two structures are not only the same size, but that the two structures have the same internal "structure" with respect to their respective operations. Isomorphisms are useful because they allow us to learn more about a structure by studying the structure isomorphic to it. They can also be useful because if two structures are isomorphic, we can perform computations in one structure instead of another structure (e.g., because it is more secure, more efficient, and so on) while obtaining the same final result.
Fact: Let A be an algebraic structure with operator ⊕ and let B be an algebraic structure with operator ⊗. We say that A is isomorphic to B, which we denote as (A,⊕) ≅ (B,⊗) or simply AB, if the following conditions hold:
• there exists a bijection (i.e., a bijective relation) between A and B, which we denote using = ;
• for all a, a' ∈ A and b, b' ∈ B, if a = b and a' = b' then a ⊕ a' = b ⊗ b'.
Another way to state the definition is to write it in terms of a bijective map m between A and B:
• there exists a bijective map m between A and B;
• for all a, a' ∈ A, m(a ⊕ a') = m(a) ⊗ m(a').
In other words, an isomorphism is a bijection that preserves (or respects) the binary operations on the two sets: if any true equation involving elements from A and the operator ⊕ is transformed by replacing every element a ∈ A with its corresponding element m(a) ∈ B and by replacing all instances of ⊕ with ⊗, the resulting equation is still true.
Example: Consider the set of permutations on two elements S2 and the set of congruence classes ℤ/2ℤ. It is true that (S2,o) ≅ (ℤ/2ℤ,+), where o is composition of permutations and where + is addition of congruence classes in ℤ/2ℤ. The following table demonstrates the bijection:
S2 ℤ/2ℤ [0,1] 0 [1,0] 1
The following table demonstrates that the bijection above respects the two operations.
S2 ℤ/2ℤ [0,1] o [0,1] = [0,1] 0 + 0 = 0 [0,1] o [1,0] = [1,0] 0 + 1 = 1 [1,0] o [0,1] = [1,0] 1 + 0 = 1 [1,0] o [1,0] = [0,1] 1 + 1 = 0
Another way to demonstrate the above is to show that the "multiplication tables" (though the operation need not be multiplication) for the two operators are exactly the same (i.e., the entries in the multiplication table all correspond according to the bijection).
+o 0[0,1] 1[1,0] 0[0,1] 0[0,1] 1[1,0] 1[1,0] 1[1,0] 0[0,1]
Fact: For any positive integer n ℕ, (ℤ/nℤ,+) ≅ (Cn, o).
Example: Consider the set of cyclic permutations on three elements C3 and the set of congruence classes ℤ/3ℤ. It is true that (C3,o) ≅ (ℤ/3ℤ,+).
C3 ℤ/3ℤ [0,1,2] o [0,1,2] = [0,1,2] 0 + 0 = 0 [0,1,2] o [1,2,0] = [1,2,0] 0 + 1 = 1 [0,1,2] o [2,0,1] = [2,0,1] 0 + 2 = 2 [1,2,0] o [0,1,2] = [1,2,0] 1 + 0 = 1 [1,2,0] o [1,2,0] = [2,0,1] 1 + 1 = 2 [1,2,0] o [2,0,1] = [0,1,2] 1 + 2 = 0 [2,0,1] o [0,1,2] = [2,0,1] 2 + 0 = 2 [2,0,1] o [1,2,0] = [0,1,2] 2 + 1 = 0 [2,0,1] o [2,0,1] = [1,2,0] 2 + 2 = 1
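A candidate bijection can be checked for the isomorphism property mechanically by verifying m(x ⊕ y) = m(x) ⊗ m(y) on all pairs. The sketch below does this for the C3 ≅ ℤ/3ℤ example, representing permutations as tuples:

```python
def respects_operation(m, op_a, op_b):
    # m maps elements of A to elements of B; check m(x ⊕ y) == m(x) ⊗ m(y).
    return all(m[op_a(x, y)] == op_b(m[x], m[y]) for x in m for y in m)

def compose(p, q):
    # Composition of permutations written as tuples: (p o q)(i) = p[q[i]].
    return tuple(p[q[i]] for i in range(len(p)))

# The bijection from the C3 example: [0,1,2] -> 0, [1,2,0] -> 1, [2,0,1] -> 2.
m = {(0, 1, 2): 0, (1, 2, 0): 1, (2, 0, 1): 2}
assert respects_operation(m, compose, lambda a, b: (a + b) % 3)
```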
Example: To compute the composition of two permutations [45,46,...,49,0,1,2,...,44] o [3,4,5,...,49,0,1,2], it is sufficient to recognize that [45,46,...,49,0,1,2,...,44] corresponds to 45 ℤ/50ℤ, and [3,4,5,...,49,0,1,2] corresponds to 3 ℤ/50ℤ. Thus, since 45 + 3 = 48, the result of the composition must be [48,49,0,1,2,...,47].
45 + 3
=
48
[45,46,...,49,0,1,2,...,44] o [3,4,5,...,49,0,1,2]
=
[48,49,0,1,2,...,47]
Fact: For any prime p ℕ, (ℤ/φ(p)ℤ,+) ≅ ((ℤ/pℤ)*, ⋅).
Note that |ℤ/φ(p)ℤ| = φ(p) = |(ℤ/pℤ)*|.
Example: Consider the set ℤ/2ℤ with the addition operation + modulo 2, and the set (ℤ/3ℤ)* together with the multiplication operation ⋅ modulo 3. It is true that (ℤ/2ℤ, +) ≅ ((ℤ/3ℤ)*,⋅).
(ℤ/2ℤ, +) ((ℤ/3ℤ)*, ⋅) 0 + 0 = 0 1 ⋅ 1 = 1 0 + 1 = 1 1 ⋅ 2 = 2 1 + 0 = 1 2 ⋅ 1 = 2 1 + 1 = 0 2 ⋅ 2 = 1
Example: Consider the set ℤ/2ℤ with the addition operation + modulo 2, and the set (ℤ/6ℤ)* together with the multiplication operation ⋅ modulo 6. It is true that (ℤ/2ℤ, +) ≅ ((ℤ/6ℤ)*,⋅). Note that (ℤ/6ℤ)* = {1,5}, because only 1 and 5 in the range {0,...5} are coprime with 6.
(ℤ/2ℤ, +) ((ℤ/6ℤ)*, ⋅) 0 + 0 = 0 1 ⋅ 1 = 1 0 + 1 = 1 1 ⋅ 5 = 5 1 + 0 = 1 5 ⋅ 1 = 5 1 + 1 = 0 5 ⋅ 5 = 1
Isomorphisms need not be defined between different sets. It is possible to define an isomorphism between a set and itself that has non-trivial, interesting, and even useful characteristics.
Fact: For any n ∈ ℕ and any a ∈ ℤ/nℤ where a is coprime with n, (ℤ/nℤ,+) ≅ (ℤ/nℤ,+) under the bijection that relates x ∈ ℤ/nℤ with a ⋅ x ∈ ℤ/nℤ. This is because for any x, y ∈ ℤ/nℤ, we have:
a ⋅ (x + y)
≡
a ⋅ x + a ⋅ y (mod n)
Fact: For any n ∈ ℕ and any e ∈ ℤ/φ(n)ℤ where e is coprime with φ(n), ((ℤ/nℤ)*,⋅) ≅ ((ℤ/nℤ)*,⋅) under the bijection that relates x ∈ (ℤ/nℤ)* with xe ∈ (ℤ/nℤ)*. This is because for any x, y ∈ (ℤ/nℤ)*, we have:
(x ⋅ y)e
≡
xe ⋅ ye (mod n)
Example: Consider the set ℤ/3ℤ with the addition operation + modulo 3, and another instance of the set ℤ/3ℤ with the addition operation + modulo 3. The following is a bijection between (ℤ/3ℤ, +) and (ℤ/3ℤ, +):
ℤ/3ℤ ℤ/3ℤ 0 0 1 2 2 1
Note that the above bijection corresponds to multiplication by 2 modulo 3, since 0 ⋅ 2 ≡ 0, 1 ⋅ 2 ≡ 2, and 2 ⋅ 2 ≡ 1. This bijection is an isomorphism (and is an instance of this fact about isomorphisms):
ℤ/3ℤ ℤ/3ℤ 0 + 0 = 0 0 + 0 = 0 0 + 1 = 1 0 + 2 = 2 0 + 2 = 2 0 + 1 = 1 1 + 0 = 1 2 + 0 = 2 1 + 1 = 2 2 + 2 = 1 1 + 2 = 0 2 + 1 = 0 2 + 0 = 2 1 + 0 = 1 2 + 1 = 0 1 + 2 = 0 2 + 2 = 1 1 + 1 = 2
If we only consider algebraic structures with particular algebraic properties, we can actually show that there is only one algebraic structure of a particular size (i.e., there is only one "isomorphism class" of algebraic structures having that size).
Fact: Suppose we have an algebraic structure (A, ⊕) with two elements in which the elements in the set must have inverses, one of them must be an identity, and ⊕ is associative. Without loss of generality, let's label the two elements a and b, and let a be the label of the identity element. Because a is the identity, we must have:
a ⊕ a
=
a
a ⊕ b
=
b
b ⊕ a
=
b
The identity is its own inverse, so a-1 = a. The only question that remains is to determine what b ⊕ b must be. If we had b ⊕ b = b, then we must ask what the inverse of b could be. It could not be b itself (we would then have b ⊕ b = a), so the only remaining option is a. That would mean that b ⊕ a should be a (since b and its inverse should yield the identity element). But this contradicts the equations we already derived above. So it must be that b is its own inverse:
b ⊕ b
=
a
Thus, there can be only one distinct algebraic structure (in terms of its "multiplication table") having two elements; it's the algebraic structure isomorphic to (A, ⊕), as well as all the other algebraic structures isomorphic to it: (S2, o), (C2, o), (ℤ/2ℤ, +), ((ℤ/3ℤ)*, ⋅), ((ℤ/6ℤ)*, ⋅), and so on.
Example (partial homomorphic encryption supporting addition): Suppose that for some n ℕ, Alice wants to store a large number of congruence classes b1,...,bk ℤ/nℤ in Eve's database (perhaps Alice will generate or collect these over a long period of time). Alice does not have enough memory to store the congruence classes herself, but she does not want to reveal the congruence classes to Eve.
What Alice can do before she stores anything in Eve's database is to pick some secret a ℤ/nℤ that is coprime with n. Then, every time Alice needs to store some b in Eve's database, Alice will instead send Eve the obfuscated value (ab) mod n. Since a is coprime with n, there exists a-1 ℤ/nℤ. Thus, if Alice retrieves some obfuscated data entry c from Eve's database, she can always recover the original value by computing (a-1c) mod n, because:
a-1 ⋅ (a ⋅ b)
≡
b (mod n)
Furthermore, Alice can ask Eve to compute the sum (modulo n) of all the entries in the database (or any subset of them). Suppose that Alice has stored obfuscated versions of b1,...,bk ℤ/nℤ in Eve's database. Then if Eve computes the sum of all the obfuscated entries stored in her database, she will get:
a ⋅ b1 + ... + a ⋅ bk
=
a ⋅ (b1 + ... + bk) (mod n)
Thus, if Alice asks Eve for the sum of all the obfuscated entries in the database, Alice can recover the actual sum of the original entries that she stored in the database because:
a-1 ⋅ (a ⋅ (b1 + ... + bk))
=
b1 + ... + bk (mod n)
In this way, Alice has avoided having to store and add all the database entries, while preventing Eve from finding out the actual entries, or their sum.
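The scheme can be sketched in a few lines (the modulus, secret multiplier, and data values below are illustrative toy values):

```python
n = 1009                    # public modulus (prime, for simplicity)
a = 123                     # Alice's secret multiplier, coprime with n
a_inv = pow(a, -1, n)

data = [55, 200, 17, 964]               # Alice's actual values
stored = [(a * b) % n for b in data]    # obfuscated entries Eve sees

# Eve sums the obfuscated entries; Alice un-obfuscates the result.
obfuscated_sum = sum(stored) % n
recovered_sum = (a_inv * obfuscated_sum) % n
assert recovered_sum == sum(data) % n
```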
Example (partial homomorphic encryption supporting multiplication): Suppose that for some n ℕ, Alice wants to store a large number of congruence classes b1,...,bk (ℤ/nℤ)* in Eve's database. Alice does not have enough memory to store the congruence classes herself, but she does not want to reveal the congruence classes to Eve.
We assume that Alice knows or can easily compute φ(n), while Eve does not know and cannot compute it (perhaps Alice generated n using a method similar to the one in RSA encryption protocol).
What Alice can do before she stores anything in Eve's database is to pick some secret e ℤ/φ(n)ℤ that is coprime with φ(n). Since Alice knows e and φ(n), she can compute e-1 using the extended Euclidean algorithm.
Then, every time Alice needs to store some b in Eve's database, Alice will instead send Eve the encrypted value be mod n. If Alice retrieves some encrypted data entry c from Eve's database, she can always recover the original value by computing (ce-1) mod n, because by Euler's theorem:
(be)e-1
≡
be ⋅ e-1
≡
b (mod n)
Furthermore, Alice can ask Eve to compute the product (modulo n) of all the entries in the database (or any subset of them). Suppose that Alice has stored encrypted versions of b1,...,bk ℤ/nℤ in Eve's database. Then if Eve computes the product of all the encrypted entries stored in her database, she will get:
b1e ⋅ ... ⋅ bke
=
(b1 ⋅ ... ⋅ bk)e (mod n)
Thus, if Alice asks Eve for the product of all the encrypted entries in the database, Alice can recover the actual product of the original entries that she stored in the database because:
((b1 ⋅ ... ⋅ bk)e)e-1
=
b1 ⋅ ... ⋅ bk (mod n)
In this way, Alice has avoided having to store and multiply all the database entries, while preventing Eve from finding out the actual entries, or their product. Furthermore, because it is believed that factoring n, computing φ(n), and solving the RSA problem is computationally difficult, it is highly unlikely that Eve can decrypt the database entries or the result.
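A sketch with toy parameters (real deployments would use primes that are hundreds of digits long):

```python
p, q = 61, 53               # Alice's secret primes (toy sizes)
n = p * q
phi = (p - 1) * (q - 1)
e = 17                      # exponent coprime with phi(n)
d = pow(e, -1, phi)

data = [5, 9, 22]                      # Alice's actual values
stored = [pow(b, e, n) for b in data]  # encrypted entries Eve sees

# Eve multiplies the ciphertexts; Alice decrypts to obtain the real product.
encrypted_product = 1
for c in stored:
    encrypted_product = (encrypted_product * c) % n
recovered_product = pow(encrypted_product, d, n)

expected = 1
for b in data:
    expected = (expected * b) % n
assert recovered_product == expected
```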
We know by Euler's theorem and the algebraic properties of exponents that for any n ∈ ℕ, any b ∈ (ℤ/nℤ)*, and any x, y ∈ ℤ/φ(n)ℤ the following identity must hold:
bx ⋅ by
≡
bx + y (mod n)
≡
b(x + y) mod φ(n) (mod n)
We might naturally ask whether there might be an isomorphism between (ℤ/φ(n)ℤ, +) and ((ℤ/nℤ)*, ⋅). In fact, sometimes there is (although more often it is an isomorphism between a subset of (ℤ/nℤ)* and ℤ/kℤ for k | φ(n)).
Given the above, we might ask whether it might be possible to create a homomorphic encryption protocol using an isomorphism of the form (ℤ/φ(n)ℤ, +) ≅ ((ℤ/nℤ)*, ⋅) in which Alice can encrypt her data x and y by computing bx (mod n) and by (mod n). This should be secure because it is believed that no efficient algorithms for computing discrete logarithms exist. Then, in order to have Eve compute a sum of the data values x and y, Alice can ask Eve to compute the product bxby on her end, which is equivalent to bx + y.
However, there is a flaw in this protocol: Alice has no way to retrieve x + y from bx + y because that requires computing a discrete logarithm, as well. Thus, an isomorphism of the form (ℤ/φ(n)ℤ, +) ≅ ((ℤ/nℤ)*, ⋅) would not necessarily give us a practical homomorphic encryption protocol.
Fact: Given a set of possible data values ℤ/nℤ (e.g., integers within a certain range), any compression algorithm for elements in ℤ/nℤ must be a bijection and a permutation (since it must be invertible in order for decompression to be possible). As a result, it must necessarily expand the representation size of some elements.
Example: Suppose that we are working with elements in ℤ/11ℤ = {0,1,2,3,4,5,6,7,8,9,10}. Suppose we define an algorithm that compresses 10 ℤ/11ℤ into an element in ℤ/11ℤ with a smaller representation size. One example of such an element is 1, since:
10 ⋅ 10
≡
1 (mod 11)
Thus, one possible implementation of a compression algorithm is a function that takes any x ℤ/11ℤ and returns (10 ⋅ x) mod 11. Since 10 has an inverse in ℤ/11ℤ, this function is invertible, so decompression is possible (simply multiply by 10 again). However, this will necessarily expand the representation of at least one value: 1 ℤ/11ℤ:
10 ⋅ 1
≡
10 (mod 11)
Note that this cannot be avoided because multiplying all the elements in ℤ/11ℤ by 10 amounts to a permutation, so at least one compressed element must be 10.
### [link] 5.3. Generators of Algebraic Structures
Because an algebraic structure (A,⊕) often consists of a set of objects that can be "built up" using the binary operator ⊕ from a smaller, possibly finite, collection of generators GA, it is often easier to reason about an algebraic structure by first reasoning about its generators, and then applying structural induction.
Fact: Let W be the set of swap permutations on n elements. Then W is a set of generators for the set of permutations Sn, which can be defined as:
Sn
=
closure(W, o)
Fact: Let A be the set of adjacent swap permutations on n elements. Then A is a set of generators for the set of permutations Sn, which can be defined as:
Sn
=
closure(A, o)
Fact: The set ℤ/nℤ has a single generator 1 ℤ/nℤ with respect to addition + modulo n:
ℤ/nℤ
=
closure({1}, +)
Fact: If a and n are coprime, then a is a generator for ℤ/nℤ with respect to addition + modulo n:
ℤ/nℤ
=
closure({a}, +)
Fact: Suppose we have two algebraic structures A and B where for some operator ⊕:
A
=
closure({a}, ⊕)
B
=
closure({b}, ⊕)
If the generator a can be expressed in terms of b, i.e., a = b ⊕ ... ⊕ b, then it must be that:
A
⊆
B
Furthermore, if in addition to the above, the generator b can be expressed in terms of a, i.e., b = a ⊕ ... ⊕ a, then it must be that:
B
⊆
A
This would then imply (by basic set theory):
A
=
B
### [link] 5.4. Isomorphisms and Linear Equations of Congruence Classes
Fact: Let n be a positive integer. Then if + represents addition modulo n, we have:
ℤ/nℤ
=
closure({1}, +)
In other words, 1 is a generator for ℤ/nℤ with respect to +.
Fact: Let a and n be coprime positive integers. Then if + represents addition modulo n, we have:
ℤ/nℤ
=
closure({a}, +)
In other words, a can be a single generator for ℤ/nℤ with respect to +. This is equivalent to a fact we have already seen.
Fact: Let a and n be any two positive integers. Then if + represents addition modulo n, we have:
ℤ/(n/gcd(n,a))ℤ
≅
closure({a}, +)
=
closure({gcd(n,a)}, +)
Example: Consider n = 6 and a = 4. Then we have g = gcd(4,6) = 2. We have:
closure({4}, +)
=
{4 ⋅ 0, 4 ⋅ 1, 4 ⋅ 2}
=
{0, 4, 2}
=
closure({2}, +)
Note that:
4
≡
2 + 2 (mod 6)
2
≡
4 + 4 (mod 6)
Thus, 2 can be expressed using the generator 4, and 4 can be expressed using the generator 2.
Fact (linear congruence theorem): Suppose that for a positive integer n and two congruence classes a ∈ ℤ/nℤ and b ∈ ℤ/nℤ where g = gcd(a,n), we are given the following equation:
a ⋅ x
≡
b (mod n)
If a and n are not coprime, we cannot solve the above equation simply by multiplying both sides by an inverse of a, because no such inverse exists. Furthermore, if b ∉ closure({a}, +), we know the equation cannot be solved at all. Since closure({a}, +) = closure({g}, +), the equation can only be solved if b ∈ closure({g}, +). In other words, the equation can only be solved if g | b.
Note that if g = gcd(a,n) and b ∈ closure({g}, +), we have:
n
=
n' ⋅ g
a
=
a' ⋅ g
b
=
b' ⋅ g
(a' ⋅ g) ⋅ x
≡
(b' ⋅ g) (mod n' ⋅ g)
We can then rewrite the above as:
(a' ⋅ g) ⋅ x
=
(b' ⋅ g) + k ⋅ (n' ⋅ g)
We can divide both sides of the above equation by g:
a' ⋅ x
=
b' + k ⋅ n'
We can convert the above equation back into an equation of congruence classes:
a' ⋅ x
≡
b' (mod n')
Since a' and n' are now coprime, we can compute a'-1 (mod n') and multiply both sides by it to find our solution x:
a'-1 ⋅ a' ⋅ x
≡
a'-1 ⋅ b' (mod n')
x
≡
a'-1 ⋅ b' (mod n')
Example: Solve the following equation for x ℤ/8ℤ, or explain why no solution exists:
2 ⋅ x
≡
3 (mod 8)
Example: Solve the following equation for x ℤ/24ℤ, or explain why no solution exists:
16 ⋅ x
≡
7 (mod 24)
Example: Solve the following equation for x ℤ/15ℤ, or explain why no solution exists:
9 ⋅ x
≡
6 (mod 15)
Example: Suppose we want to find all the solutions in ℤ/6ℤ to the following equation:
2 ⋅ x
≡
4 (mod 6)
Using the linear congruence theorem, we can find the unique solution modulo 3:
x
≡
2 (mod 3)
The solutions modulo 6 will be those congruence classes in ℤ/6ℤ whose integer representatives are members of 2 + 3ℤ = {..., 2, 5, 8, 11, 14, ...}. These are 2 + 6ℤ and 5 + 6ℤ, since 2 ≡ 2 (mod 3) and 5 ≡ 2 (mod 3). Note that:
2 + 3ℤ
=
(2 + 6ℤ) ∪ (5 + 6ℤ)
{..., 2, 5, 8, 11, 14, 17, 20, ...}
=
{..., 2, 8, 14, 20, ...} ∪ {..., 5, 11, 17, ...}
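The procedure from the linear congruence theorem can be sketched as a small solver that returns every solution modulo n, including the case just worked out:

```python
from math import gcd

def solve_linear_congruence(a, b, n):
    # Solve a*x ≡ b (mod n); returns all solutions in Z/nZ,
    # or [] when g = gcd(a, n) does not divide b.
    g = gcd(a, n)
    if b % g != 0:
        return []
    a2, b2, n2 = a // g, b // g, n // g
    x0 = (pow(a2, -1, n2) * b2) % n2        # the unique solution modulo n'
    return [x0 + k * n2 for k in range(g)]  # all g solutions modulo n

assert solve_linear_congruence(2, 3, 8) == []      # gcd(2,8) = 2 does not divide 3
assert solve_linear_congruence(2, 4, 6) == [2, 5]  # matches the worked example
assert solve_linear_congruence(9, 6, 15) == [4, 9, 14]
```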
### [link] 5.5. Isomorphisms and the Chinese Remainder Theorem
Fact (Chinese remainder theorem isomorphism): Let n and m be coprime positive integers. Let ℤ/nℤ × ℤ/mℤ be the set product of ℤ/nℤ and ℤ/mℤ, and let ⊕ be an operation on ℤ/nℤ × ℤ/mℤ defined as follows:
(a,b) ⊕ (c,d)
=
(a + c,b + d)
Then it is true that (ℤ/nℤ × ℤ/mℤ, ⊕) ≅ (ℤ/(mn)ℤ, +). The bijective relationship in this isomorphism is as follows:
(a mod n, b mod m) ∈ ℤ/nℤ × ℤ/mℤ
corresponds to
(a ⋅ (m ⋅ m-1) + b ⋅ (n ⋅ n-1)) ∈ ℤ/(m ⋅ n)ℤ
In other words, given (a, b), we can map it to a ⋅ (mm-1) + b ⋅ (nn-1), and given some c = a ⋅ (mm-1) + b ⋅ (nn-1) from ℤ/(nm)ℤ, we can map it back to ℤ/nℤ × ℤ/mℤ using (c mod n, c mod m).
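Both directions of the bijection can be sketched directly from the formulas above (here the inverse of m is taken modulo n, and the inverse of n modulo m):

```python
def crt_forward(a, b, n, m):
    # Map (a mod n, b mod m) to the corresponding element of Z/(n*m)Z.
    return (a * m * pow(m, -1, n) + b * n * pow(n, -1, m)) % (n * m)

def crt_backward(c, n, m):
    # Inverse map: an element of Z/(n*m)Z back to the pair of residues.
    return (c % n, c % m)

n, m = 5, 7   # any coprime moduli work
for a in range(n):
    for b in range(m):
        assert crt_backward(crt_forward(a, b, n, m), n, m) == (a, b)

# The bijection respects the operation: componentwise addition on pairs
# corresponds to addition modulo n*m.
lhs = crt_forward((2 + 4) % n, (3 + 6) % m, n, m)
rhs = (crt_forward(2, 3, n, m) + crt_forward(4, 6, n, m)) % (n * m)
assert lhs == rhs
```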
Example (partial homomorphic encryption with validation): Suppose that for some p ℕ, Alice wants to store a large number of congruence classes b1,...,bk (ℤ/pℤ)* in Eve's database. Alice does not have enough memory to store the congruence classes herself, but she does not want to reveal the congruence classes to Eve. Alice also wants Eve to compute the product of the congruence classes for her as in this example. However, because Alice does not actually have the numbers Eve is multiplying, Alice has no way to know that the product Eve returns to her corresponds to the actual product; perhaps Eve is saving money and cheating by returning a random number to Alice.
To address this, Alice first chooses a new prime q (distinct from p). She then computes n = pq and φ(n) = (p − 1) ⋅ (q − 1), finds e ∈ (ℤ/φ(n)ℤ)* and computes d ≡ e-1 (mod φ(n)). This will allow Alice to encrypt things before storing them in Eve's database. However, at this point Alice will not simply encrypt her congruence classes b1, ..., bk.
Instead, Alice will first choose a single random value r ∈ ℤ/qℤ. Then, Alice will map each of her values (bi, r) ∈ ℤ/pℤ × ℤ/qℤ via the CRT isomorphism to some values ci ∈ ℤ/nℤ. Alice will then encrypt the values ci by computing cie (mod n), and will submit these values to Eve's database. Now Alice can ask Eve to compute the following product:
c1^e ⋅ ... ⋅ ck^e ≡ (c1 ⋅ ... ⋅ ck)^e (mod n)
If Eve returns the product (c1 ⋅ ... ⋅ ck)^e to Alice, Alice can decrypt it by computing:
((c1 ⋅ ... ⋅ ck)^e)^d ≡ c1 ⋅ ... ⋅ ck (mod n)
Next, Alice can compute (c1 ⋅ ... ⋅ ck) mod p to retrieve the actual product in ℤ/pℤ. However, Alice also wants to make sure that Eve actually multiplied all the entries. Alice can do so by computing:
(c1 ⋅ ... ⋅ ck) ≡ r ⋅ ... ⋅ r ≡ r^k (mod q)
Alice can quickly compute r^k (mod q) and compare it to (c1 ⋅ ... ⋅ ck) mod q. This gives Alice some confidence (but not total confidence) that Eve actually computed the product, because it ensures that Eve really did multiply k distinct values provided by Alice.
What is one way that Eve can still cheat and save money under these circumstances (if she knows that Alice is using this validation method)? Is there any way Alice can counter this (hint: what if Alice chooses different values r for each entry)?
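The whole protocol can be sketched end-to-end with tiny example parameters. Everything below the comment lines is an illustration, not the text's own code; the concrete values p = 11, q = 7, e = 7, r = 3, and the stored values bs are assumptions chosen to keep the numbers small (and are far too small to be secure):

```python
# Illustrative run of the validated outsourced-product protocol with
# tiny, insecure example parameters (p, q, e, r, bs are assumed values).

p, q = 11, 7
n = p * q                       # 77
phi = (p - 1) * (q - 1)         # 60
e = 7
d = pow(e, -1, phi)             # d = e^-1 mod phi(n)

r = 3                           # Alice's single random value in Z/qZ
bs = [2, 5, 6]                  # Alice's secret values in (Z/pZ)*

def crt(a, b):
    """Map (a mod p, b mod q) to the corresponding element of Z/nZ."""
    return (a * q * pow(q, -1, p) + b * p * pow(p, -1, q)) % n

cs = [crt(b, r) for b in bs]          # CRT-combined plaintexts
cts = [pow(c, e, n) for c in cs]      # what Alice stores with Eve

# Eve multiplies the ciphertexts and returns the product.
eve_product = 1
for ct in cts:
    eve_product = (eve_product * ct) % n

# Alice decrypts, validates, and extracts the real product mod p.
plain = pow(eve_product, d, n)
valid = (plain % q) == pow(r, len(bs), q)   # should equal r^k mod q
product_mod_p = plain % p                   # the actual product in Z/pZ
```

Here `valid` comes out `True` and `product_mod_p` equals 2 ⋅ 5 ⋅ 6 mod 11 = 5.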
Fact: Let n and m be positive integers, and let g = gcd(n,m). Let ℤ/(n/g)ℤ × ℤ/(m/g)ℤ × ℤ/gℤ be a set product, and let ⊕ be an operation on ℤ/(n/g)ℤ × ℤ/(m/g)ℤ × ℤ/gℤ defined as follows:
(a, b, c) ⊕ (x, y, z) = (a + x, b + y, c + z)
Then it is true that (ℤ/(n/g)ℤ × ℤ/(m/g)ℤ × ℤ/gℤ, ⊕) ≅ (ℤ/((mn)/g)ℤ, +).
Example: Consider the set ℤ/2ℤ × ℤ/3ℤ with the operation ⊕, and the set ℤ/6ℤ together with the operation +. It is true that (ℤ/2ℤ × ℤ/3ℤ, ⊕) ≅ (ℤ/6ℤ, +). The bijection is specified below.
ℤ/2ℤ × ℤ/3ℤ    ℤ/6ℤ
(0,0)           0
(0,1)           4
(0,2)           2
(1,0)           3
(1,1)           1
(1,2)           5
The isomorphism is demonstrated below.
ℤ/2ℤ × ℤ/3ℤ                  ℤ/6ℤ
(0,0) ⊕ (0,0) = (0,0)        0 + 0 = 0
(0,0) ⊕ (0,1) = (0,1)        0 + 4 = 4
(0,0) ⊕ (0,2) = (0,2)        0 + 2 = 2
(0,0) ⊕ (1,0) = (1,0)        0 + 3 = 3
(0,0) ⊕ (1,1) = (1,1)        0 + 1 = 1
(0,0) ⊕ (1,2) = (1,2)        0 + 5 = 5
(0,1) ⊕ (0,0) = (0,1)        4 + 0 = 4
(0,1) ⊕ (0,1) = (0,2)        4 + 4 = 2
(0,1) ⊕ (0,2) = (0,0)        4 + 2 = 0
(0,1) ⊕ (1,0) = (1,1)        4 + 3 = 1
(0,1) ⊕ (1,1) = (1,2)        4 + 1 = 5
(0,1) ⊕ (1,2) = (1,0)        4 + 5 = 3
(0,2) ⊕ (0,0) = (0,2)        2 + 0 = 2
(0,2) ⊕ (0,1) = (0,0)        2 + 4 = 0
(0,2) ⊕ (0,2) = (0,1)        2 + 2 = 4
(0,2) ⊕ (1,0) = (1,2)        2 + 3 = 5
(0,2) ⊕ (1,1) = (1,0)        2 + 1 = 3
(0,2) ⊕ (1,2) = (1,1)        2 + 5 = 1
(1,0) ⊕ (0,0) = (1,0)        3 + 0 = 3
(1,0) ⊕ (0,1) = (1,1)        3 + 4 = 1
(1,0) ⊕ (0,2) = (1,2)        3 + 2 = 5
(1,0) ⊕ (1,0) = (0,0)        3 + 3 = 0
(1,0) ⊕ (1,1) = (0,1)        3 + 1 = 4
(1,0) ⊕ (1,2) = (0,2)        3 + 5 = 2
(1,1) ⊕ (0,0) = (1,1)        1 + 0 = 1
(1,1) ⊕ (0,1) = (1,2)        1 + 4 = 5
(1,1) ⊕ (0,2) = (1,0)        1 + 2 = 3
(1,1) ⊕ (1,0) = (0,1)        1 + 3 = 4
(1,1) ⊕ (1,1) = (0,2)        1 + 1 = 2
(1,1) ⊕ (1,2) = (0,0)        1 + 5 = 0
(1,2) ⊕ (0,0) = (1,2)        5 + 0 = 5
(1,2) ⊕ (0,1) = (1,0)        5 + 4 = 3
(1,2) ⊕ (0,2) = (1,1)        5 + 2 = 1
(1,2) ⊕ (1,0) = (0,2)        5 + 3 = 2
(1,2) ⊕ (1,1) = (0,0)        5 + 1 = 0
(1,2) ⊕ (1,2) = (0,1)        5 + 5 = 4
Since 1 and 5 are generators for ℤ/6ℤ with respect to +, the corresponding elements (1,1) and (1,2) are generators for ℤ/2ℤ × ℤ/3ℤ.
Fact: Suppose that for two positive integers n and m where g = gcd(n,m) and two congruence classes a ∈ ℤ/nℤ and b ∈ ℤ/mℤ, we are given the following system of equations:
x ≡ a (mod n)
x ≡ b (mod m)
We then know that:
n = n' ⋅ g
m = m' ⋅ g
But this means that:
x ≡ a (mod (n' ⋅ g))
x ≡ b (mod (m' ⋅ g))
The above equations can be converted into facts about divisibility:
x = a + k ⋅ (n' ⋅ g)
x = b + l ⋅ (m' ⋅ g)
But note that:
x = a + (k ⋅ n') ⋅ g
x = b + (l ⋅ m') ⋅ g
The above implies:
x ≡ a (mod g)
x ≡ b (mod g)
Since both congruences constrain the same x, it must be that:
a ≡ b (mod g)
Thus, a solution x exists for the system of equations only if a ≡ b (mod gcd(n,m)).
Fact: Suppose that for two positive integers n and m where g = gcd(n,m) and two congruence classes a ∈ ℤ/nℤ and b ∈ ℤ/mℤ, we are given the following system of equations:
x ≡ a (mod n)
x ≡ b (mod m)
Note that because g = gcd(n,m), we have:
n = n' ⋅ g
m = m' ⋅ g
To find a solution, first check that a ≡ b (mod g), and compute r = a mod g. Then set:
x = y + r
We can now solve the following system for y:
y + r ≡ a (mod n)
y + r ≡ b (mod m)
We subtract r from both sides of both equations. Note that g | (a − r) and g | (b − r), since r is the remainder when dividing a and b by g:
y ≡ (a − r) (mod n)
y ≡ (b − r) (mod m)
Now set:
y = g ⋅ z
We can now solve the following system for z:
g ⋅ z ≡ (a − r) (mod (n' ⋅ g))
g ⋅ z ≡ (b − r) (mod (m' ⋅ g))
Using the linear congruence theorem on both equations, we get:
z ≡ (a − r)/g (mod n')
z ≡ (b − r)/g (mod m')
We know that n' and m' are coprime, so we can solve for z using the usual method for solving systems of equations with coprime moduli. Once we find z, we can compute a solution x to the original system of equations:
x ≡ g ⋅ z + r (mod ((n ⋅ m) / g))
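The procedure above can be sketched as a small Python function (illustrative code, not from the text; it assumes Python 3.8+ for `pow(x, -1, m)` and relies on the fact that n/g and m/g are coprime):

```python
# Sketch of the general CRT solver for x ≡ a (mod n), x ≡ b (mod m),
# where n and m need not be coprime.
from math import gcd

def crt_general(a, n, b, m):
    """Return (x, lcm(n, m)) solving the system, or None if no solution."""
    g = gcd(n, m)
    if a % g != b % g:
        return None                       # solvable only if a ≡ b (mod g)
    r = a % g
    np, mp = n // g, m // g               # n' and m'; gcd(n', m') = 1
    za = ((a - r) // g) % np              # z ≡ (a - r)/g (mod n')
    zb = ((b - r) // g) % mp              # z ≡ (b - r)/g (mod m')
    # Combine the two coprime congruences for z with the usual CRT formula.
    z = (za * mp * pow(mp, -1, np) + zb * np * pow(np, -1, mp)) % (np * mp)
    L = n * m // g                        # lcm(n, m)
    return (g * z + r) % L, L
```

Running `crt_general(1, 6, 3, 8)` reproduces the worked example that follows: it returns `(19, 24)`.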
Example: Suppose we want to solve the following system of equations:
x ≡ 1 (mod 6)
x ≡ 3 (mod 8)
First, we compute gcd(6,8) = 2. Then we check that 1 ≡ 3 (mod 2). Since this is true, we know we can find a solution. We proceed by subtracting 1 from both sides of both equations:
x − 1 ≡ 0 (mod 6)
x − 1 ≡ 2 (mod 8)
We can now apply the linear congruence theorem to both equations:
(x − 1)/2 ≡ 0 (mod 3)
(x − 1)/2 ≡ 1 (mod 4)
We can now solve the above system of equations using the usual CRT solution computation because the moduli are now coprime:
(x − 1)/2 ≡ 0 ⋅ (4 ⋅ (4⁻¹ mod 3)) + 1 ⋅ (3 ⋅ (3⁻¹ mod 4)) ≡ 0 + 1 ⋅ (3 ⋅ 3) ≡ 9 (mod 12)
We now compute x:
(x − 1)/2 ≡ 9
x − 1 ≡ 18
x ≡ 19
Since the range of unique CRT solutions with coprime moduli is ℤ/((6 ⋅ 8)/gcd(6,8))ℤ = ℤ/24ℤ, the congruence class solution is:
x ≡ 19 (mod 24)
By putting together all the theorems and algorithms we have seen so far, we can now define a general-purpose solver for linear systems of equations involving congruence classes.
[Dependency diagram relating: greatest common divisor algorithm; Fermat's little theorem; Euler's theorem; Bézout's identity ⇐ extended Euclidean algorithm ⇐ algorithm for finding multiplicative inverses ⇒ Euler's totient function φ; Chinese remainder theorem ⇐ CRT solver for two equations; linear congruence theorem ⇐ general CRT solver for two equations; induction ⇐ general CRT solver for n equations.]
We can also assemble a general-purpose algorithm for computing square roots of congruence classes modulo composite numbers (assuming we have the factorization of the modulus).
[Dependency diagram relating: formula for 3+4ℤ primes ⇐ square roots modulo p; Hensel's lemma ⇐ square roots modulo p^k; CRT solver for two equations ⇐ square roots modulo n ⋅ m.]
Example: Suppose we want to solve the following equation for any congruence classes in ℤ/6ℤ that solve it:
4 ⋅ x + 3 ⋅ x² ≡ 2 (mod 6)
One approach is to use the Chinese remainder theorem and split the problem into two equations by factoring 6:
4 ⋅ x + 3 ⋅ x² ≡ 2 (mod 2)
4 ⋅ x + 3 ⋅ x² ≡ 2 (mod 3)
We can now simplify each of the above:
3 ⋅ x² ≡ 0 (mod 2)
4 ⋅ x ≡ 2 (mod 3)
We can simplify each further:
x² ≡ 0 (mod 2)
x ≡ 2 (mod 3)
We know that the only solution to the first is x ≡ 0 (mod 2), so we have now obtained the following system of equations; we can use CRT to find the unique solution modulo 6:
x ≡ 0 (mod 2)
x ≡ 2 (mod 3)
x ≡ 2 (mod 6)
We can check that, indeed, 4 ⋅ 2 + 3 ⋅ 2² ≡ 20 ≡ 2 (mod 6).
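As a sanity check on the CRT derivation above, we can also confirm by brute force (an illustrative one-liner, not from the text) that x = 2 is the only congruence class in ℤ/6ℤ satisfying the equation:

```python
# Brute-force check that x = 2 uniquely solves 4x + 3x^2 ≡ 2 (mod 6).
solutions = [x for x in range(6) if (4 * x + 3 * x * x) % 6 == 2]
```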
Example: Solve the following equation for all congruence classes in ℤ/7ℤ that satisfy it:
x² − 3 ⋅ x ≡ 0 (mod 7)
### [link] 5.6. Groups, Subgroups, and Direct Products
Definition: We call an algebraic structure (A, ⊕) a group under ⊕ if:
• A is closed under ⊕;
• ⊕ is associative on A;
• A contains an identity;
• A has inverses with respect to ⊕ (for all x ∈ A, there exists x⁻¹ ∈ A such that x ⊕ x⁻¹ = I and x⁻¹ ⊕ x = I, where I is the identity element of A).
If (A, ⊕) possesses no other algebraic properties, we call it a free group or we say it is strictly a group.
Example: One example of a group is (ℤ, +), because the integers are closed under addition, addition is associative, 0 is an identity, and every integer x ∈ ℤ has an additive inverse −x ∈ ℤ.
Example: Any vector space V with vector addition is a group.
Example: The set ℤ^(2 × 2) (the set of all 2 × 2 matrices with integer entries) together with matrix addition is a group.
Fact: For any n ∈ ℕ, the algebraic structure (ℤ/nℤ, +) where + is addition modulo n is a group.
Fact: For any n ∈ ℕ, the algebraic structure ((ℤ/nℤ)*, ⋅) where ⋅ is multiplication modulo n is a group.
If we look at subsets of the elements in a group, we might find that certain subsets are closed under the operator for that group. These subsets are called subgroups, and the concept of a subgroup can be very useful when studying and using groups.
Definition: Let A be a group under the operator ⊕. We say that B is a subgroup of A if BA, B is closed under ⊕, and B is a group.
Example: The following are all the subgroups of ℤ/4ℤ under addition + modulo 4:
• {0}, because all terms of the form 0, 0+0, 0+0+0, and so on are equivalent to 0;
• {0,2}, since closure({0,2}, +) = {0,2};
• {0,1,2,3} = ℤ/4ℤ.
The following are all the subgroups of ℤ/6ℤ under addition + modulo 6:
• {0}, because all terms of the form 0, 0+0, 0+0+0, and so on are equivalent to 0;
• {0,2,4}, since closure({2}, +) = closure({0,2,4}, +) = {0,2,4};
• {0,3}, since closure({3}, +) = closure({0,3}, +) = {0,3};
• {0,1,2,3,4,5} = closure({1}, +) = ℤ/6ℤ.
Fact: Given some n ∈ ℕ and some factor f ∈ ℕ such that f|n, then (closure({f}, +), +) is a subgroup of (ℤ/nℤ, +), and it is isomorphic to the group (ℤ/(n/f)ℤ, +).
Example: The following are all the non-trivial subgroups of ℤ/6ℤ under addition + modulo 6, together with their corresponding isomorphic group:
• ({0,2,4}, +) ≅ (ℤ/(6/2)ℤ, +) ≅ (ℤ/3ℤ, +);
• ({0,3}, +) ≅ (ℤ/(6/3)ℤ, +) ≅ (ℤ/2ℤ, +).
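The subgroups in these examples can be enumerated mechanically by computing the closure of each generator (an illustrative sketch, not from the text):

```python
# Sketch: compute closure({f}, +) in Z/nZ, i.e. the cyclic subgroup
# generated by f, and collect the distinct subgroups of Z/6Z.

def closure(f, n):
    """The subgroup of (Z/nZ, +) generated by f."""
    seen, x = set(), 0
    while x not in seen:
        seen.add(x)
        x = (x + f) % n
    return seen

# The distinct subgroups generated by single elements of Z/6Z.
subgroups = {frozenset(closure(f, 6)) for f in range(6)}
```

This reproduces the four subgroups listed above: {0}, {0,3}, {0,2,4}, and all of ℤ/6ℤ.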
The notion of a subgroup allows us to introduce an alternative definition for prime numbers.
Definition: Given an integer p ∈ ℕ where p > 1, we say that p is prime if the only subgroups of (ℤ/pℤ, +) are the trivial subgroups ({0}, +) and (ℤ/pℤ, +).
Conjecture (hidden subgroup problem): The following problem is not in P: given a group (A, ⊕), find a non-trivial subgroup of A (non-trivial means not the subgroup that contains only the identity, ({I}, ⊕), and not the subgroup consisting of the entire group, (A, ⊕)).
Often, we are interested in more restricted versions of this problem, which are also not believed to be in P:
• finding a non-trivial subgroup of (ℤ/nℤ, +);
• finding a non-trivial subgroup of ((ℤ/nℤ)*, ⋅);
• finding the size of the subgroup closure({a}, ⋅) of the group ((ℤ/nℤ)*, ⋅), where a ∈ (ℤ/nℤ)*.
Example: Suppose that for some n ∈ ℕ, we are given a ∈ ℤ/nℤ such that gcd(a, n) = 1. We know from Euler's theorem that a^φ(n) ≡ 1 (mod n). However, φ(n) is not necessarily the smallest exponent of a that will yield 1.
For example, consider 3 ∈ ℤ/8ℤ. Even though φ(8) = 2³ − 2² = 4 and 3⁴ ≡ 1 (mod 8), it is also true that 3² ≡ 9 ≡ 1 (mod 8).
Thus, given some a ∈ ℤ/nℤ such that gcd(a, n) = 1, the problem of determining the smallest r such that a^r ≡ 1 (mod n) amounts to finding the smallest subgroup of ((ℤ/nℤ)*, ⋅) that contains a. Equivalently, this amounts to finding |closure({a}, ⋅)|.
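This quantity, the multiplicative order of a modulo n, can be computed by brute force when n is small (an illustrative sketch, not from the text; for large n no efficient classical algorithm is known without the factorization):

```python
# Sketch: the multiplicative order of a modulo n, i.e. |closure({a}, .)|,
# found by brute-force repeated multiplication.

def order(a, n):
    """Smallest r > 0 with a^r ≡ 1 (mod n); requires gcd(a, n) = 1."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r
```

For the example above, `order(3, 8)` returns 2, which is smaller than φ(8) = 4 (and, by Lagrange's theorem, divides it).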
Algorithm (Shor's algorithm): Shor's algorithm relies on the ability of a quantum computer to find the smallest r > 0 such that a^r ≡ 1 (mod n). It takes an arbitrary integer n as its input and finds a non-trivial factor of n with high probability. The quantum portion of the algorithm is the order-finding step (1.3) below.
1. Shor's algorithm(n):
1. do
1. choose a random a ∈ ℤ/nℤ
2. if gcd(a, n) > 1, return gcd(a, n)
3. otherwise, find the smallest r > 0 such that a^r ≡ 1 (mod n)
until r is even and a^(r/2) ≢ −1 (mod n)
2. since we have exited the loop, r is even and a^(r/2) ≢ −1 (mod n)
3. thus, a^(r/2) is a non-trivial square root of 1 modulo n
4. return gcd(a^(r/2) ± 1, n), which is a non-trivial factor by this fact
Note that when n = p ⋅ q for distinct odd primes p and q, 1 has exactly four square roots modulo n. Since r is the smallest positive exponent with a^r ≡ 1, a^(r/2) cannot be 1. This leaves −1 and two other possibilities.
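The classical skeleton of the algorithm can be sketched in Python, with the quantum order-finding step replaced by brute force (an illustrative sketch, not from the text; feasible only for tiny n):

```python
# Classical simulation of Shor's algorithm for tiny n.
from math import gcd
from random import randrange

def order(a, n):
    """Smallest r > 0 with a^r ≡ 1 (mod n); requires gcd(a, n) = 1.
    This brute-force loop stands in for the quantum order-finding step."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor(n):
    """Return a non-trivial factor of an odd composite n = p * q."""
    while True:
        a = randrange(2, n)
        if gcd(a, n) > 1:
            return gcd(a, n)          # lucky guess shares a factor with n
        r = order(a, n)
        if r % 2 == 0:
            y = pow(a, r // 2, n)     # a square root of 1 modulo n
            if y != n - 1:            # non-trivial (y cannot be 1, r minimal)
                return gcd(y + 1, n)  # a non-trivial factor
```

For example, `shor(15)` returns 3 or 5, matching the worked example below.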
Example: Let us consider the example where n = 15. If we do not know the factors of 15, we could start with a = 2, since 2 ∈ (ℤ/15ℤ)*. We then find that:
r = |closure({2}, ⋅)| = |{2, 4, 8, 1}| = 4
Thus, a^r ≡ 2⁴ ≡ 1 (mod 15). In this case, r = 4 is even and a^(r/2) ≡ 2^(4/2) ≡ 4, where 4 ≢ −1. But this means we have found a square root 4 of 1 where 4 ≢ ±1:
4 ⋅ 4 ≡ 1 ⋅ 1 (mod 15)
Thus, we can now use the reduction from factoring to the congruent squares problem to find a factor of n:
gcd(4 + 1, 15) = 5
gcd(4 − 1, 15) = 3
We now consider a few other examples illustrating how subgroups and isomorphisms between groups and subgroups can be applied.
Example (arithmetic with unbounded error and bounded unreliability): Suppose you need to perform a sequence of k addition operations in ℤ/15ℤ, but all the addition operators ⊕ modulo n available to you are error-prone. To add two numbers a, b modulo n accurately, you must perform the computation a ⊕ b at least n times (because up to ⌈ n/2 ⌉ of those attempts will result in an arbitrarily large error).
This means that to perform k addition operations modulo 15, it will be necessary to perform every operation 15 times, for a total of k ⋅ 15 operations modulo 15. If each addition operation modulo n takes about log2 n steps, this would mean that k operations would take:
k ⋅ 15 ⋅ 4 steps
Assuming that performing CRT to find a solution in ℤ/15ℤ takes 10,000 steps, determine how you can use CRT to speed up the computation of these k addition operations, and for what minimum k this would be advantageous.
Definition: Given two groups (A, ⊕) and (B, ⊗), we define the direct product of these two groups to be the group (A × B, ◊) where the operator ◊ is defined over A × B as follows:
(a, b) ◊ (a', b') = (a ⊕ a', b ⊗ b')
Example: The direct product (ℤ/2ℤ × ℤ/2ℤ, +) where + is component-wise addition is a group, but it is not isomorphic to (ℤ/4ℤ, +). To see why, consider that ℤ/4ℤ has a generator 1 ∈ ℤ/4ℤ. Are any of the four elements (0,0), (0,1), (1,0), or (1,1) generators for all the elements in ℤ/2ℤ × ℤ/2ℤ?
Example: The direct product (ℤ/2ℤ × ℤ/3ℤ, +) where + is component-wise addition is a group, and it is isomorphic to ℤ/6ℤ. Can you find a single generator in ℤ/2ℤ × ℤ/3ℤ that can be used to generate every element in ℤ/2ℤ × ℤ/3ℤ? What is the name of the theorem that states that there is an isomorphism between ℤ/2ℤ × ℤ/3ℤ and ℤ/6ℤ?
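The two questions above can be answered by exhaustively testing each element as a candidate generator (an illustrative sketch, not from the text):

```python
# Sketch: search for single generators of Z/nZ x Z/mZ under
# component-wise addition.

def generates(g, n, m):
    """Does g generate all of Z/nZ x Z/mZ under repeated addition?"""
    seen, x = set(), (0, 0)
    while x not in seen:
        seen.add(x)
        x = ((x[0] + g[0]) % n, (x[1] + g[1]) % m)
    return len(seen) == n * m

gens_2x3 = [(a, b) for a in range(2) for b in range(3) if generates((a, b), 2, 3)]
gens_2x2 = [(a, b) for a in range(2) for b in range(2) if generates((a, b), 2, 2)]
```

The search finds that (1,1) and (1,2) generate ℤ/2ℤ × ℤ/3ℤ, while no single element generates ℤ/2ℤ × ℤ/2ℤ, confirming that only the former is isomorphic to a cyclic group.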
Example: Suppose we consider the set of all possible polynomials of the form ak ⋅ x^k + ak−1 ⋅ x^(k−1) + ... + a2 ⋅ x² + a1 ⋅ x¹ + a0 ⋅ x⁰ where every coefficient ai is from ℤ/2ℤ, and where + represents addition modulo 2. Then we can observe the following:
0² = 0
1² = 1
In other words, for any x ∈ ℤ/2ℤ, x² ≡ x. That means any term of the form x^k can be simplified into x:
x² = x
x³ = x² ⋅ x = x ⋅ x = x
x⁴ = x³ ⋅ x = x ⋅ x = x
This, together with the fact that every coefficient ai can be simplified to either 0 or 1 modulo 2, shows us that there are only four distinct polynomials modulo 2:
{0 ⋅ x + 0, 0 ⋅ x + 1, 1 ⋅ x + 0, 1 ⋅ x + 1} = {0, 1, x, x + 1}
One notation for this set of polynomials is ℤ/2ℤ[x], which can be read as "ℤ/2ℤ extended with x" or "polynomials over ℤ/2ℤ". The set {0, 1, x, x + 1} together with addition modulo 2 is an algebraic structure, and it is isomorphic to ℤ/2ℤ × ℤ/2ℤ, where the two coefficients of the polynomial a1 ⋅ x + a0 correspond to the two components of an element (a1, a0) in ℤ/2ℤ × ℤ/2ℤ.
## [link] Review 2. Algebraic Structures and their Properties
This section contains a comprehensive collection of review problems going over all the course material. Many of these problems are an accurate representation of the kinds of problems you may see on an exam.
Exercise: Suppose we have the following polynomial in the integers (i.e., all operations are arithmetic operations):
6 ⋅ x^1001 + 2 ⋅ x^600 + 1
Prove that the arithmetic expression above is always divisible by 3 if gcd(x,3) = 1.
Exercise: Suppose you have n ∈ ℕ and a congruence class a ∈ ℤ/nℤ such that gcd(a, n) = 1. Compute in terms of n (and only n) the congruence class corresponding to the following term:
0 ⋅ a + 1 ⋅ a + 2 ⋅ a + ... + (n − 1) ⋅ a
Exercise: Find all x ∈ ℤ/29ℤ that satisfy the following:
y² ≡ 16 (mod 29)
x² ≡ y (mod 29)
Exercise: Solve the following problems.
1. Consider the following two circular shift permutations:
[8,9,0,1,2,3,4,5,6,7]
[5,6,7,8,9,0,1,2,3,4]
How many of each would you need to compose to obtain the permutation [9,0,1,2,3,4,5,6,7,8]?
2. Rewrite the permutation [3,4,0,1,2] as a composition of adjacent swap permutations.
Exercise: Suppose we want to perform k exponentiation operations (e.g., if k = 4, we want to compute (((ab)c)d)e) modulo 21. Assume the following:
• a single exponentiation operation modulo 21 takes 21³ = 9261 steps;
• a single exponentiation operation modulo 3 takes 3³ = 27 steps;
• a single exponentiation operation modulo 7 takes 7³ = 343 steps;
• an exponentiation operation modulo 3 and an exponentiation operation modulo 7 together take 343 + 27 = 370 steps;
• solving a two-equation system for two values, a modulo 3 and b modulo 7, takes 8000 steps using CRT;
• we can either compute the exponentiation sequence directly modulo 21, or we can split it into two sequences of computations (one modulo 3, the other modulo 7) and then recombine using CRT at the end.
1. What is the number of steps needed to perform k exponentiations modulo 21?
2. What is the number of steps needed to perform k exponentiations modulo 3 and k exponentiations modulo 7, then to recombine using CRT?
Exercise: Find solutions to the following problems.
1. Explain why the following polynomial has no integer solutions (Hint: you only need to evaluate the polynomial for two possible values of x):
x⁴ + x² + 3 = 0
2. Find at least one solution x ∈ ℤ/10ℤ to the following system of equations (you must use Bézout's identity):
6 ⋅ y + 5 ⋅ x − 1 ≡ 0 (mod 10)
x² ≡ y (mod 10)
Exercise: Find solutions to the following problems.
1. Suppose you want to send some s ∈ ℤ/nℤ to Alice and Bob, but you want to ensure that the only way Alice and Bob can retrieve s is if they work together. What two distinct pairs (s1, p1) and (s2, p2) would you send to Alice and Bob, respectively, so that they would need to work together to recover s?
2. Suppose Bob is generating a public RSA key; he chooses a very large prime p, and then he chooses q = 2. Why is this not secure?
3. Suppose Alice and Bob use Shamir secret sharing to share a password s to a lock that is not protected from brute force attacks (i.e., anyone can keep trying different passwords until they unlock it). Alice holds s mod p and Bob holds s mod q, where s < pq. However, suppose that Bob happens to be using q = 2, and Alice knows this. What can Alice do to quickly break the lock?
Exercise: Suppose that Alice, Bob, Carl, and Dan are sharing a secret s using Shamir secret sharing, where each participant is assigned a distinct modulus n that is coprime to everyone else's modulus. Each participant is holding a part of the secret s mod n, and the secret can be recovered by any two participants. However, Eve has sabotaged the value stored by one of the participants. Below are the values currently stored by everyone; one of them is corrupted.
• Alice: nAlice = 3 and (s mod 3) = 2
• Bob: nBob = 4 and (s mod 4) = 3
• Carl: nCarl = 5 and (s mod 5) = 2
• Dan: nDan = 7 and (s mod 7) = 4
1. Which participant's stored value s mod n has Eve sabotaged?
2. What is the correct secret value s?
3. What's the number of different shared secret values these four participants can store (assuming they use the same moduli, and require that any two members should be able to recover the secret).
4. Suppose you want to store an n-bit number s. You want to store it in a way that makes it possible to recover s even if one of the bits is corrupted. How can you accomplish this using at most approximately 2 ⋅ n bits?
Exercise: Suppose you want to make a two-player game in which each player gets a different element (call them a and b) from the algebraic structure (A, ⊕). Then, they would be given a third element c, and they must work together to use their two elements a and b to create the target element c. It must be impossible for an individual player to make the target element c on their own. Explain why each of the following algebraic structures would or would not work for this game.
• ℤ/3ℤ
• ℤ/4ℤ
• ℤ/2ℤ × ℤ/2ℤ
• ℤ/6ℤ
• ℤ/2ℤ × ℤ/3ℤ
• S3
## [link] Appendix A. Python Reference
The Python programming language will be among the languages we use in this course. This language supports the object-oriented, imperative, and functional programming paradigms, has automatic memory management, and natively supports common high-level data structures such as lists and sets. Python is often used as an interpreted language, but it can also be compiled.
The latest version of Python 3 can be downloaded at: https://www.python.org/downloads/. In this course, we will require the use of Python 3, which has been installed on all the CS Department's undergraduate computing lab machines, as well as on csa2/csa3.
### [link] A.2. Assembling a Python module
The simplest Python program is a single file (called a module) with the file extension .py. For example, suppose the following is contained within a file called example.py:
# This is a comment in "example.py".
# Below is a Python statement.
print("Hello, world.")
Assuming Python is installed on your system, to run the above program from the command line you can use the following (you may need to use python3, python3.2, python3.3, etc. depending on the Python installation you're using). Note that in the examples below %> represents a terminal prompt, which may look different on your system.
%> python example.py
Hello, world.
If you run Python without an argument on the command line, you will enter Python's interactive prompt. You can then evaluate expressions and execute individual statements using this prompt; you can also load and execute a Python module file:
%> python
Python 3.2 ...
Hello, world.
>>> x = "Hello." # Execute an assignment statement.
>>> print(x) # Execute a "print" statement.
Hello.
>>> x # Evaluate a string expression.
'Hello.'
>>> 1 + 2 # Evaluate a numerical expression.
3
### [link] A.3. Common data structures (i.e., Python expressions)
Python provides native support for several data structures that we will use throughout this course: integers, strings, lists, tuples, sets, and dictionaries (also known as finite maps). In this subsection, we present how instances of these data structures are represented in Python, as well as the most common operations and functions that can be applied to these data structure instances.
• Booleans consist of two constants: True and False.
• The usual logical operations are available using the operators and, or, and not.
>>> True # A boolean constant.
True
>>> False # A boolean constant.
False
>>> True and False or True and (not False) # A boolean expression.
True
• Integers are written as in most other programming languages (i.e., as a sequence of digits).
• The usual arithmetic operations are available using the operators +, *, -, and /. The infix operator // represents integer division, and the infix operator ** represents exponentiation. Negative integers are prefixed with the negation operator -.
• The usual relational operators ==, !=, <, >, <=, >= are available.
• The int() function can convert a string that looks like an integer into an integer.
>>> 123 # An integer constant.
123
>>> 1 * (2 + 3) // 4 - 5 # An integer expression.
-4
>>> 4 * 5 >= 19 # A boolean expression involving integers.
True
>>> int("123") # A string being converted into an integer
123
• Strings are delimited by either ' or " characters. Strings can be treated as lists of single-character strings. Another way to look at this is that there is no distinction between a character and a string: all characters are just strings of length 1. Multiline strings can be delimited using """ or ''' (i.e., three quotation mark characters at the beginning and end of the string literal).
• The empty string is denoted using '' or "".
• Two strings can be concatenated using +.
• The function len() returns the length of a string.
• Individual characters in a string can be accessed using the bracketed index notation (e.g., s[i]). These characters are also strings themselves.
>>> 'Example.' # A string.
'Example.'
>>> "Example." # Alternate notation for a string.
'Example.'
>>> len("ABCD") # String length.
4
>>> "ABCD" + "EFG" # String concatenation.
'ABCDEFG'
>>> "ABCD"[2] # Third character in the string.
'C'
• Lists are similar to arrays: they are ordered sequences of objects and/or values. The entries of a list can be of a mixture of different types, and lists containing one or more objects are delimited using [ and ], with the individual list entries separated by commas. Lists cannot be members of sets.
• The empty list is denoted using [].
• Two lists can be concatenated using +.
• The function len() returns the length of a list.
• Individual entries in a list can be accessed using the bracketed index notation (e.g., a[i]).
• To check if a value is in a list, use the in relational operator.
>>> [1,2,"A","B"] # A list.
[1, 2, 'A', 'B']
>>> [1, 2] + ['A','B'] # Concatenating lists.
[1, 2, 'A', 'B']
>>> len([1,2,"A","B"] ) # List length.
4
>>> [1,2,"A","B"][0] # First entry in the list.
1
>>> 1 in [1, 2] # List containment check.
True
• Tuples are similar to lists (they are ordered, and can contain objects of different types), except they are delimited by parentheses ( and ), with entries separated by commas. The main distinction between lists and tuples is that tuples are hashable (i.e., they can be members of sets).
• The empty tuple is denoted using ().
• A tuple containing a single object x is denoted using (x, ).
• Two tuples can be concatenated using +.
• A tuple can be turned into a list using the list() function.
• A list can be turned into a tuple using the tuple() function.
• The function len() returns the length of a tuple.
• Individual entries in a tuple can be accessed using the bracketed index notation (e.g., t[i]).
• To check if a value is in a tuple, use the in relational operator.
>>> (1,2,"A","B") # A tuple.
(1, 2, 'A', 'B')
>>> (1,) # Another tuple.
(1,)
>>> (1, 2) + ('A','B') # Concatenating tuples.
(1, 2, 'A', 'B')
>>> list((1, 2, 'A','B')) # A tuple being converted into a list.
[1, 2, 'A', 'B']
>>> tuple([1, 2, 'A','B']) # A list being converted into a tuple.
(1, 2, 'A', 'B')
>>> len((1,2,"A","B")) # Tuple length.
4
>>> (1,2,"A","B")[0] # First entry in the tuple.
1
>>> 1 in (1, 2) # Tuple containment check.
True
• Sets are unordered sequences that cannot contain duplicates. They are a close approximation of mathematical sets. Sets cannot be members of sets.
• The empty set is denoted using set().
• The methods .union() and .intersection() correspond to the standard set operations.
• A list or tuple can be turned into a set using the set() function.
• A set can be turned into a list or tuple using the list() or tuple() function, respectively.
• The function len() returns the size of a set.
• To access individual entries in a set, it is necessary to turn the set into a list or tuple.
• To check if a value is in a set, use the in relational operator.
>>> {1,2,"A","B"} # A set.
{1, 2, 'A', 'B'}
>>> ({1,2}.union({3,4})).intersection({4,5}) # Set operations.
{4}
>>> set([1, 2]).union(set(('A','B'))) # Converting a list and a tuple to sets.
{'A', 1, 2, 'B'}
>>> len({1,2,"A","B"}) # Set size.
4
>>> 1 in {1,2,"A","B"} # Set containment check.
True
• Frozen sets are like sets, except they can be members of other sets. A set can be turned into a frozen set using the frozenset() function.
>>> frozenset({1,2,3}) # A frozen set.
frozenset({1, 2, 3})
>>> {frozenset({1,2}), frozenset({3,4})} # Set of frozen sets.
{frozenset({3, 4}), frozenset({1, 2})}
In Python, values that are members of a set data structure must be hashable so that it is easy to deduplicate elements in an unordered set (e.g., {1,1,2,2} == {1,2} should be True, and this would take O(n²) steps to compute in the worst case because sets are not ordered). However, values of type set are not hashable because they can change (e.g., it is possible to insert an element into a set and also to remove an element from a set), which means their hashes could change, as well. Values of type frozenset cannot be changed; once their hash value is computed at the time of creation, it never changes. Thus, it is okay to include values of type frozenset inside instances of set without worrying about incurring a quadratic running time when deduplicating (or, e.g., computing a set union).
• Dictionaries are unordered collections of associations between some set of keys and some set of values. Dictionaries are also known as finite maps.
• The empty dictionary is denoted using {}.
• The list of keys that the dictionary associates with values can be obtained using list(d.keys()).
• The list of values that the dictionary contains can be obtained using list(d.values()).
• The function len() returns the number of entries in the dictionary.
• Individual entries in a dictionary can be accessed using the bracketed index notation (e.g., d[key]).
>>> {"A":1, "B":2} # A dictionary.
{'A': 1, 'B': 2}
>>> list({"A":1, "B":2}.keys()) # Dictionary keys.
['A', 'B']
>>> list({"A":1, "B":2}.values()) # Dictionary values.
[1, 2]
>>> len({"A":1, "B":2}) # Dictionary size.
2
>>> {"A":1, "B":2}["A"] # Obtain a dictionary value using a key.
1
### [link] A.4. Function, procedure, and method invocations
Python provides a variety of ways to supply parameter arguments when invoking functions, procedures, and methods.
• Function calls and method/procedure invocations consist of the function, procedure, or method name followed by a parenthesized, comma-delimited list of arguments. For example, suppose a function or procedure example() is defined as follows:
def example(x, y, z):
print("Invoked.")
return x + y + z
To invoke the above definition, we can use one of the following techniques.
• Passing arguments directly involves listing the comma-delimited arguments directly between parentheses.
>>> example(1,2,3)
Invoked.
6
• The argument unpacking operator (also known as the *-operator, the scatter operator, or the splat operator) involves providing a list to the function, preceded by the * symbol; the arguments will be drawn from the elements in the list.
>>> args = [1,2,3]
>>> example(*args)
Invoked.
6
• The keyword argument unpacking operator (also known as the **-operator) involves providing a dictionary to the function, preceded by the ** symbol; each named parameter in the function definition will be looked up in the dictionary, and the value associated with that dictionary key will be used as the argument passed to that parameter.
>>> args = {'z':3, 'x':1, 'y':2}
>>> example(**args)
Invoked.
6
• Default parameter values can be specified in any definition. Suppose the following definition is provided.
def example(x = 1, y = 2, z = 3):
return x + y + z
The behavior is then as follows: if an argument corresponding to a parameter is not supplied, the default value found in the definition is used. If an argument is supplied, the supplied argument value is used.
>>> example(0, 0)
3
>>> example(0)
5
>>> example()
6
### [link] A.5. Comprehensions
Python provides concise notations for defining data structures and performing logical computations. In particular, it supports a comprehension notation that can be used to build lists, tuples, sets, and dictionaries.
• List comprehensions make it possible to construct a list by iterating over one or more other data structure instances (such as a list, tuple, set, or dictionary) and performing some operation on each element or combination of elements. The resulting list will contain the result of evaluating the body for every combination.
>>> [ x for x in [1,2,3] ]
[1, 2, 3]
>>> [ 2 * x for x in {1,2,3} ]
[2, 4, 6]
>>> [ x + y for x in {1,2,3} for y in (1,2,3) ]
[2, 3, 4, 3, 4, 5, 4, 5, 6]
It is also possible to add conditions anywhere after the first for clause. This will filter which combinations are actually used to add a value to the resulting list.
>>> [ x for x in {1,2,3} if x < 3 ]
[1, 2]
>>> [ x + y for x in {1,2,3} for y in (1,2,3) if x > 2 and y > 1 ]
[5, 6]
• Set comprehensions make it possible to construct a set by iterating over one or more other data structure instances (such as a list, tuple, set, or dictionary) and performing some operation on each element or combination of elements. The resulting set will contain the result of evaluating the body for every combination. Notice that the result will contain no duplicates because the result is a set.
>>> { x for x in [1,2,3,1,2,3] }
{1, 2, 3}
• Dictionary comprehensions make it possible to construct a dictionary by iterating over one or more other data structure instances (such as a list, tuple, set, or dictionary) and performing some operation on each element or combination of elements. The resulting dictionary will contain the result of evaluating the body for every combination.
>>> { key : 2 for key in ["A","B","C"] }
{'A': 2, 'C': 2, 'B': 2}
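A dictionary comprehension can also build its entries from another data structure. The hypothetical example below (an illustration, not from the original text) converts a list of (key, value) tuples into a dictionary:

```python
# Build a dictionary from a list of (key, value) tuples.
pairs = [("A", 1), ("B", 2), ("C", 3)]
d = { key : value for (key, value) in pairs }
print(d["B"])  # prints 2
```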
### [link] A.6. Other useful built-in functions
The built-in function type() can be used to determine the type of a value. Below, we provide examples of how to check whether a given expression has one of the common Python types:
>>> type(True) == bool
True
>>> type(123) == int
True
>>> type("ABC") == str
True
>>> type([1,2,3]) == list
True
>>> type(("A",1,{1,2})) == tuple
True
>>> type({1,2,3}) == set
True
>>> type({"A":1, "B":2}) == dict
True
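As a hedged illustration (the helper name describe is hypothetical, not from the original text), type() can be used to make a function behave differently depending on the type of its argument:

```python
# A hypothetical helper that describes its argument's type using type().
def describe(value):
    if type(value) == list:
        return "list of length " + str(len(value))
    elif type(value) == dict:
        return "dictionary with " + str(len(value)) + " keys"
    else:
        return "other"

print(describe([1, 2, 3]))   # prints: list of length 3
print(describe({"A": 1}))    # prints: dictionary with 1 keys
print(describe(5))           # prints: other
```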
### [link] A.7. Common Python definition and control constructs (i.e., Python statements)
A Python program is a sequence of Python statements. Each statement is either a function definition, a variable assignment, a conditional statement (i.e., if, else, and/or elif), an iteration construct (i.e., a for or while loop), a return statement, or a break or continue statement.
• Variable assignments make it possible to assign a value or object to a variable.
x = 10
It is also possible to assign a tuple (or any computation that produces a tuple) to another tuple:
(x, y) = (1, 2)
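Because the right-hand side is evaluated in full before the assignment occurs, tuple assignment provides a concise way to swap two variables (an illustrative example, not from the original text):

```python
x = 1
y = 2
# The tuple (y, x) is built first, then unpacked into (x, y).
(x, y) = (y, x)
print(x, y)  # prints: 2 1
```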
• Function and procedure definitions consist of the def keyword, followed by the name of the function or procedure, and then by one or more arguments (delimited by parentheses and separated by commas).
def example(a, b, c):
return a + b + c
• Conditional statements consist of one or more branches, each with its own boolean expression as the condition (with the exception of else). The body of each branch is an indented sequence of statements.
def fibonacci(n):
# Computes the nth Fibonacci number.
if n <= 0:
return 0
elif n <= 2:
return 1
else:
return fibonacci(n-1) + fibonacci(n-2)
• Iteration constructs make it possible to repeat a sequence of statements over and over. The body of an iteration construct is an indented sequence of statements.
• The while construct has a boolean expression as its condition (much like if). The body is executed over and over until the expression in the condition evaluates to False, or a break statement is encountered.
def example1(n):
# Takes an integer n and returns the sum of
# the integers from 1 to n-1.
i = 0
sum = 0
while i < n:
sum = sum + i
i = i + 1
return sum
def example2(n):
# Takes an integer n and returns the sum of
# the integers from 1 to n-1.
i = 0
sum = 0
while True:
sum = sum + i
i = i + 1
if i == n:
break
return sum
• The for construct makes it possible to repeat a sequence of statements once for every object in a list, tuple, or set, or once for every key in a dictionary.
def example3(n):
# Takes an integer n and returns the sum of
# the integers from 1 to n-1.
sum = 0
for i in range(0,n):
sum = sum + i
return sum
def example4(d):
# Takes a dictionary d that maps keys to
# integers and returns the sum of the integers.
sum = 0
for key in d:
sum = sum + d[key]
return sum | 2017-12-17 15:30:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8321599364280701, "perplexity": 1117.0015933311295}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948596115.72/warc/CC-MAIN-20171217152217-20171217174217-00608.warc.gz"} |
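The iteration constructs above can often be combined with the comprehension notation introduced earlier in this appendix. For instance, example4 could be written more concisely as follows (an alternative sketch, not from the original text; the name example4_alt is ours):

```python
def example4_alt(d):
    # Takes a dictionary d that maps keys to integers and returns the
    # sum of the integers, using a list comprehension together with
    # the built-in sum function.
    return sum([ d[key] for key in d ])

print(example4_alt({"A": 1, "B": 2, "C": 3}))  # prints 6
```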
http://statkat.com/stattest.php?t=39 | # McNemar's test - overview
This page offers a structured overview of McNemar's test.
McNemar's test
Independent variable
2 paired groups
Dependent variable
One categorical with 2 independent groups
Null hypothesis
Let's say that the scores on the dependent variable are scored 0 and 1. Then for each pair of scores, the data allow four options:
1. First score of pair is 0, second score of pair is 0
2. First score of pair is 0, second score of pair is 1 (switched)
3. First score of pair is 1, second score of pair is 0 (switched)
4. First score of pair is 1, second score of pair is 1
The null hypothesis H0 is that for each pair of scores, P(first score of pair is 0 while second score of pair is 1) = P(first score of pair is 1 while second score of pair is 0). That is, the probability that a pair of scores switches from 0 to 1 is the same as the probability that a pair of scores switches from 1 to 0.
Other formulations of the null hypothesis are:
• H0: $\pi_1 = \pi_2$, where $\pi_1$ is the population proportion of ones for the first paired group and $\pi_2$ is the population proportion of ones for the second paired group
• H0: for each pair of scores, P(first score of pair is 1) = P(second score of pair is 1)
Alternative hypothesis
The alternative hypothesis H1 is that for each pair of scores, P(first score of pair is 0 while second score of pair is 1) $\neq$ P(first score of pair is 1 while second score of pair is 0). That is, the probability that a pair of scores switches from 0 to 1 is not the same as the probability that a pair of scores switches from 1 to 0.
Other formulations of the alternative hypothesis are:
• H1: $\pi_1 \neq \pi_2$
• H1: for each pair of scores, P(first score of pair is 1) $\neq$ P(second score of pair is 1)
Assumptions
• Sample of pairs is a simple random sample from the population of pairs. That is, pairs are independent of one another
Test statistic
$X^2 = \dfrac{(b - c)^2}{b + c}$
Here $b$ is the number of pairs in the sample for which the first score is 0 while the second score is 1, and $c$ is the number of pairs in the sample for which the first score is 1 while the second score is 0.
Sampling distribution of $X^2$ if H0 were true
If $b + c$ is large enough (say, > 20), approximately the chi-squared distribution with 1 degree of freedom.
If $b + c$ is small, the Binomial($n$, $P$) distribution should be used, with $n = b + c$ and $P = 0.5$. In that case the test statistic becomes equal to $b$.
Significant?
For test statistic $X^2$:
• Check if $X^2$ observed in sample is equal to or larger than critical value $X^{2*}$ or
• Find $p$ value corresponding to observed $X^2$ and check if it is equal to or smaller than $\alpha$
If $b + c$ is small, the table for the binomial distribution should be used, with as test statistic $b$:
• Check if $b$ observed in sample is in the rejection region or
• Find two sided $p$ value corresponding to observed $b$ and check if it is equal to or smaller than $\alpha$
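As a sketch (not part of the original page; the function name is ours), the test statistic above can be computed directly. For 1 degree of freedom, the chi-squared upper tail probability can be obtained from the complementary error function via the standard identity p = erfc(sqrt(X²/2)):

```python
import math

def mcnemar_statistic(b, c):
    # b: number of pairs switching 0 -> 1; c: pairs switching 1 -> 0.
    x2 = (b - c) ** 2 / (b + c)
    # Upper tail probability of a chi-squared distribution with 1 df.
    p = math.erfc(math.sqrt(x2 / 2))
    return x2, p

x2, p = mcnemar_statistic(15, 5)
print(x2)  # prints 5.0
print(p)
```

As noted above, this chi-squared approximation is only appropriate when b + c is large enough (say, > 20); for small b + c, the exact binomial version should be used instead.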
Example context
Does a tv documentary about spiders change whether people are afraid (yes/no) of spiders?
SPSS
Analyze > Nonparametric Tests > Legacy Dialogs > 2 Related Samples...
• Put the two paired variables in the boxes below Variable 1 and Variable 2
• Under Test Type, select the McNemar test
Jamovi
Frequencies > Paired Samples - McNemar test
• Put one of the two paired variables in the box below Rows and the other paired variable in the box below Columns
http://herbert.the-little-red-haired-girl.org/html/latex2e/Low-level_font_commands.html | LaTeX2e help 1.6
### 2.23.3: Low-level font commands
These commands are primarily intended for writers of macros and packages. The commands listed here are only a subset of the available ones. For full details, you should consult Chapter 7 of The LaTeX Companion.
`\fontencoding{enc}`
Select font encoding. Valid encodings include `OT1` and `T1`.

`\fontfamily{family}`
Select font family. Valid families include:
- `cmr` for Computer Modern Roman
- `cmss` for Computer Modern Sans Serif
- `cmtt` for Computer Modern Typewriter

and numerous others.

`\fontseries{series}`
Select font series. Valid series include:
- `m` Medium (normal)
- `b` Bold
- `c` Condensed
- `bc` Bold condensed
- `bx` Bold extended

and various other combinations.

`\fontshape{shape}`
Select font shape. Valid shapes are:
- `n` Upright (normal)
- `it` Italic
- `sl` Slanted (oblique)
- `sc` Small caps
- `ui` Upright italics
- `ol` Outline

The last two shapes are not available for most font families.

`\fontsize{size}{skip}`
Set font size. The first parameter is the font size to switch to; the second is the `\baselineskip` to use. The unit of both parameters defaults to pt. A rule of thumb is that the baselineskip should be 1.2 times the font size.

`\selectfont`
The changes made by calling the four font commands described above do not come into effect until `\selectfont` is called.

`\usefont{enc}{family}{series}{shape}`
Equivalent to calling `\fontencoding`, `\fontfamily`, `\fontseries` and `\fontshape` with the given parameters, followed by `\selectfont`.
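For instance, the commands above might be combined as follows (an illustrative fragment, not from the original help page):

```latex
% Select T1-encoded Computer Modern Sans Serif, bold extended, upright,
% at 12pt with a 14.4pt baselineskip; nothing takes effect until
% \selectfont is called.
{\fontencoding{T1}\fontfamily{cmss}\fontseries{bx}\fontshape{n}%
 \fontsize{12}{14.4}\selectfont This text uses the newly selected font.}

% The same encoding/family/series/shape selection in one step:
{\usefont{T1}{cmss}{bx}{n} Same selection via \string\usefont.}
```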
| 2017-11-25 02:13:29 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.838833749294281, "perplexity": 3857.020974754774}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934809229.69/warc/CC-MAIN-20171125013040-20171125033040-00128.warc.gz"} |
https://docs.bentley.com/LiveContent/web/STAAD.Pro%20Help-v15/en/GUID-A227199C-EB6F-4AA3-9F76-083112E4748B.html | # GS. Documentation Conventions
A number of typographical conventions are maintained throughout Bentley documentation, which makes it easier to identify and understand the information presented.
Notes, Hints, and Warnings
Items of special note are indicated as follows:
Note: This is an item of general importance.
Tip: This is optional time-saving information.
Warning: This is information about actions that should not be performed under normal operating conditions.
File Path/File Name.extension
A fixed width typeface is used to indicate file names, file paths, and file extensions (e.g., …/STAAD/Staadpro.exe)
Interface Control
A bold typeface is used to indicate user controls, such as ribbon tabs, tool names, and dialog controls. (e.g., File > Save As).
User Input
A bold, fixed width typeface is used to indicate information which must be manually entered. (e.g., Type DEAD LOAD as the title for Load Case 1).
## Terminology
• Click - This refers to the action of pressing a mouse button. When not specified, click means to press the left mouse button.
• Select - Synonymous with Click. Used when referring to an action in a menu, drop-down list, list box, or other control where multiple options are available to you.
• pop-up menu - A pop-up menu is displayed typically with a right-click of the mouse on an item in the interface.
• Window - Describes an on screen element which may be manipulated independently. Multiple windows may be open and interacted with simultaneously.
• Dialog - This is an on screen element which (typically) must be interacted with before returning to the main window.
• Cursor - Various selection tools are referred to as "cursors" in STAAD.Pro. Selecting one of these tools will change the mouse pointer icon to reflect the current selection mode.
## Mathematical Notation
Similar to spelling conventions, American mathematical notation is used throughout the documentation. A serif typeface is typically used to clarify numbers or letters which might otherwise appear similar.
• Numbers greater than 999 are written using a comma (,) to separate every three digits.
For example, the U.S. value of Young's Modulus is taken as 29,000,000 psi.
Warning: Do not use commas or spaces to separate digits within a number in a STAAD input file.
• Numbers with decimal fractions are written with a period to separate whole and fraction parts. For example, a beam with a length of 21.75 feet.
• Multiplication is represented with a raised –or middle– dot (·) or a multiplication symbol (×). For example, P = F·A or P = F×A.
• Operation separators are used in the following order:
1. parentheses ( )
2. square brackets [ ]
3. curly brackets (i.e., braces) { }
For example,
Fa = [1 − (Kl/r)²/(2·Cc²)]Fy / {5/3 + [3(Kl/r)/(8·Cc)] − [(Kl/r)³/(8·Cc³)]}
Which may also be represented as:
$$F_a = \frac{\left[1 - \dfrac{(Kl/r)^2}{2C_c^2}\right] F_y}{\dfrac{5}{3} + \dfrac{3(Kl/r)}{8C_c} - \dfrac{(Kl/r)^3}{8C_c^3}}$$
https://www.physicsforums.com/threads/limits-and-conjugates-problem.352721/ | # Limits and conjugates problem
#### ctran
I cannot seem to compute the limit,
lim (sqrt(16x^(2)+15)-4x)/(4x-1000)
x->-inf
I've tried using L'Hopital's Rule, but it made it more confusing, and I tried using conjugates, but that didn't really work either.
#### holezch
Re: Limits
it goes to 0; after multiplying by the conjugate, you can take the limit of the big expression as x goes to infinity, and the whole big thing in the bottom goes to infinity, so you get 0
#### ctran
Re: Limits
Unfortunately this limit does not tend to zero it goes to -2.
#### lanedance
Homework Helper
Re: Limits
$$\lim_{x\to-\infty} \frac{\sqrt{16x^2-15}-4x}{4x-1000}$$
how do you get -2?
#### holezch
Re: Limits
it's zero; if you aren't sure, try graphing it
#### holezch
Re: Limits
a fixed number in the numerator, with the denominator going to infinity.. of course it is zero
#### lanedance
Homework Helper
Re: Limits
actually the negative infinity makes things interesting; try a variable change u = -x, and then multiply through by (1/u)/(1/u) and see what happens
#### lanedance
Homework Helper
Re: Limits
i get 8/(-4)? hopefully i didn't miss anything...
#### holezch
Re: Limits
huh, how did you get that
I even graphed it and it looks like the limit is 0. By the way, I never saw the -inf, but it didn't matter
#### lanedance
Homework Helper
Re: Limits
i checked on a graph & get -2 as described above... with some real abuse of notation (to avoid doing the whole solution), i think it's because:
$$\lim_{x\to-\infty} \frac{\sqrt{(16x^2-15)}-4x}{4x-1000}$$
with the variable change u = -x
$$\lim_{u\to\infty} \frac{\sqrt{(16u^2-15)}+4u}{-4u-1000} \approx \frac{4(\infty)+ 4u(\infty) }{-4(\infty)}$$
though i agree with you on the positive infinite limit going to 0
#### holezch
Re: Limits
ah, I see. I checked on a graph for positive infinity and assumed that -inf was the same. Thanks
#### Mark44
Mentor
Re: Limits
This is much simpler than it would seem from the replies in this thread. All you need to do is to factor 4x from the numerator and denominator. No conjugates, no L'Hopital's rule.
$$\frac{\sqrt{16x^2 - 15} - 4x}{4x - 1000}~=~\frac{4x(\sqrt{1 - 15/(16x^2)} - 1)}{4x(1 - 1000/(4x))}~=~\frac{\sqrt{1 - 15/(16x^2)} - 1}{1 - 1000/(4x)}$$
As x approaches negative infinity, the numerator approaches 0 and the denominator approaches 1, making the limit 0.
#### lanedance
Homework Helper
Re: Limits
This is much simpler than it would seem from the replies in this thread. All you need to do is to factor 4x from the numerator and denominator. No conjugates, no L'Hopital's rule.
$$\frac{\sqrt{16x^2 - 15} - 4x}{4x - 1000}~=~\frac{4x(\sqrt{1 - 15/(16x^2)} - 1)}{4x(1 - 1000/(4x))}~=~\frac{\sqrt{1 - 15/(16x^2)} - 1}{1 - 1000/(4x)}$$
As x approaches negative infinity, the numerator approaches 0 and the denominator approaches 1, making the limit 0.
i might be missing something, but i'm not convinced about the part factoring 4x out of the square root; i think we might not be preserving the negative and should go something like
$$\lim_{x\to-\infty} \frac{\sqrt{(16x^2-15)}-4x}{4x-1000}$$
$$\lim_{x\to-\infty} (\frac{1/x}{1/x}) \frac{\sqrt{(16x^2-15)}-4x}{4x-1000}$$
$$\lim_{x\to-\infty} \frac{\frac{-1}{\sqrt{x^2}}\sqrt{(16x^2-15)}-4}{4-1000/x}$$
$$\lim_{x\to-\infty} \frac{-\sqrt{(16-15/x^2)}-4}{4-1000/x}$$
this is the way i went with the variable change $x = -u, x^2=u^2$
$$\lim_{x\to-\infty} \frac{\sqrt{(16x^2-15)}-4x}{4x-1000}$$
$$= \lim_{u\to\infty} \frac{\sqrt{(16u^2-15)}+4u}{-4u-1000}$$
$$= \lim_{u\to\infty} (\frac{1/u}{1/u}) \frac{\sqrt{(16u^2-15)}+4u}{-4u-1000}$$
$$= \lim_{u\to\infty} \frac{\sqrt{(16-15/u^2)}+4}{-4-1000/u}= \frac{\sqrt{(16)}+4}{-4} = \frac{8}{-4} = -2$$
#### Bohrok
Re: Limits
i might be missing something, but i'm not convinced about the part factoring 4x out of the square root; i think we might not be preserving the negative and should go something like
$$\lim_{x\to-\infty} \frac{\sqrt{(16x^2-15)}-4x}{4x-1000}$$
$$\lim_{x\to-\infty} (\frac{1/x}{1/x}) \frac{\sqrt{(16x^2-15)}-4x}{4x-1000}$$
$$\lim_{x\to-\infty} \frac{\frac{-1}{\sqrt{x^2}}\sqrt{(16x^2-15)}-4}{4-1000/x}$$
Where did the negative sign in -1/√x² come from in the last line above?
I see that you left it out later in your work after the substitution. I agree that the negative isn't preserved when you take x out of the square root and the substitution you use gives the correct limit of -2 (Wolframalpha agrees with you, and it takes care of the negative in an interesting way). Reminds me of finding the derivative of arcsecant...
#### lanedance
Homework Helper
Re: Limits
Where did the negative sign in -1/√x² come from in the last line above?
from the ether?
but no, being a positive person.... i added it in to preserve the negativity... not 100% sure its legal/rigourous, which is why i prefer the variable change u = -x
So as the limit is heading towards a negative number, when you take that inside the square root, you lose the negativity (ie, 1/x heads to zero from the negative side), so to highlight the fact, i decided to substitute
$$\lim_{x\to-\infty} \frac{1}{x} = \lim_{x\to-\infty} \frac{-1}{\sqrt{x^2}}$$
though as said the 2nd way of the approaching the problem seems more palatable
#### Mark44
Mentor
Re: Limits
lanedance, that's a good point about negative numbers that I overlooked. My error was in replacing x^2 inside the radical by x outside it. The actual identity is $\sqrt{x^2}=|x|$.
My revised work follows, and takes into account that the original problem had 16x^2 + 15 under the radical, not the 16x^2 - 15 that appeared later.
$$\frac{\sqrt{16x^2 + 15} - 4x}{4x - 1000}~=~\frac{4|x|(\sqrt{1 + 15/(16x^2)} + 1)}{-4|x|(1 - 1000/(4x))}~=~\frac{-(\sqrt{1 + 15/(16x^2)} + 1)}{1 - 1000/(4x)}$$
As x approaches negative infinity, the numerator approaches -2 and the denominator approaches 1, so the limit is -2.
A couple of steps above are not obvious, so here's the explanation for them. Since we are taking the limit as x --> -infinity, it's reasonable to assume that x < 0. In that case, x = -|x|, so I can replace -4x by +4|x| in the numerator, and can replace 4x by -4|x| in the denominator. This is why the signs changed in going from the first expression above to the second.
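As a numerical sanity check (added here, not part of the original thread), evaluating the expression at a large negative x in Python agrees with the limit of -2, while a large positive x gives a value near 0:

```python
import math

def f(x):
    # The expression from the original post (with +15 under the radical).
    return (math.sqrt(16 * x**2 + 15) - 4 * x) / (4 * x - 1000)

print(f(-1e8))  # close to -2
print(f(1e8))   # close to 0
```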
http://www.koreascience.or.kr/article/ArticleFullRecord.jsp?cn=SHGHCX_2016_v23n2_105 | BOUNDEDNESS IN THE NONLINEAR PERTURBED DIFFERENTIAL SYSTEMS VIA t-SIMILARITY
• Journal title : The Pure and Applied Mathematics
• Volume 23, Issue 2, 2016, pp.105-117
• Publisher : Korea Society of Mathematical Education
• DOI : 10.7468/jksmeb.2016.23.2.105
Authors
GOO, YOON HOE
Abstract
This paper shows that the solutions to the nonlinear perturbed differential system $y' = f(t,y) + \int_{t_0}^{t} g(s, y(s), T_1 y(s))\,ds + h(t, y(t), T_2 y(t))$ have the bounded property by imposing conditions on the perturbed part $\int_{t_0}^{t} g(s, y(s), T_1 y(s))\,ds$, $h(t, y(t), T_2 y(t))$, and on the fundamental matrix of the unperturbed system $y' = f(t, y)$, using the notion of h-stability.
Keywords
h-stability;t-similarity;bounded;nonlinear nonautonomous system;
Language
English
https://scirate.com/arxiv/math.AT | # Algebraic Topology (math.AT)
• Recent developments have found unexpected connections between non-commutative probability theory and algebraic topology. In particular, Boolean cumulants functionals seem to be important for describing morphisms of homotopy operadic algebras. We provide new elementary examples which clearly resemble such connection, based on spectral graph theory. These observations are important for bringing new ideas from non-commutative probability into TDA and stochastic topology, and in the opposite direction.
• The space of polynomial differential equations of a fixed degree with a center singularity has many irreducible components. We prove that pull back differential equations form an irreducible component of such a space. The method used in this article is inspired by Ilyashenko and Movasati's method. The main concepts are the Picard Lefschetz theory of a polynomial in two variables with complex coefficients, the Dynkin diagram of the polynomial and the iterated integral.
• With $G = \mathbb{Z}/p$, $p$ prime, we calculate the ordinary $G$-cohomology (with Burnside ring coefficients) of $\mathbb{C}P_G^\infty = B_G U(1)$, the complex projective space, a model for the classifying space for $G$-equivariant complex line bundles. The $RO(G)$-graded ordinary cohomology was calculated by Gaunce Lewis, but here we extend to a larger grading in order to capture a more natural set of generators, including the Euler class of the canonical bundle, as well as a significantly simpler set of relations.
• We construct a canonical basis of two-cycles, on a $K3$ surface, in which the intersection form takes the canonical form $2E_8(-1) \oplus 3H$. The basic elements are realized by formal sums of smooth submanifolds.
• We associate invariants such as permutation cycles and local cycles at infinity with $2-$standard consecutive structures (refer Definition $23$) to a line arrangement (refer Definition $2$) which has global cyclicity (refer Definition $19$) over fields with $1-ad$ structure (refer Definition $1$) to describe the gonality structures (refer Definition $8$) in Theorem $11$ when there exists a local permutation chart where the intersections points corresponding to simple transpositions satisfy One Sided Property $11$. While generalizing the features of a finite set of linear inequalities in two variables we compute modified simplicial homology groups of line arrangements over arbitrary fields in Theorem $2$ and ask Question $1$ about associating spaces which describe these invariant homology groups. In this article we could associate regions (refer Definition $5$) describing these invariants over fields with $1-ad$ structure. We construct a graph of isomorphism of classes of line arrangements over fields with $1-ad$ structure using the associated invariants and Elementary Collineation Transformations (ECT) in Theorem $13$ and in Note $11$. Here we prove a representation Theorem $13$ where we represent each isomorphism class with a given set of distinct slopes. We also prove a formulation of Polygonal Jordan Curve Theorem $3$ over fields with $1-ad$ structure and an isomorphism Theorem $14$ for those line arrangement collineation maps which preserve nook points and central pairs for quadrilateral substructures (refer Definition $30$). At the end of the article we ask some open questions on line-folds (refer Definition $31$).
• We investigate under which assumptions a subclass of flat quasi-coherent sheaves on a quasi-compact and semi-separated scheme allows to "mock" both the homotopy category of projective modules and the homotopy category of totally acyclic complexes of projectives. Our methods are based on module theoretic properties of the subclass of flat modules involved as well as their behaviour with respect to Zariski localizations. As a consequence we get that, for such schemes, the derived category of flats is equivalent to the derived category of very flats and the equivalence restricts to the full subcategories of F-totally acyclic complexes. Furthermore, the equivalences are derived from a Quillen equivalence between the corresponding models.
• Aug 22 2017 math.AT arXiv:1708.05871v1
We introduce notions of *upper chernrank* and *even cup length* of a finite connected CW-complex and prove that *upper chernrank* is a homotopy invariant. It turns out that determination of *upper chernrank* of a space $X$ sometimes helps to detect whether a generator of the top cohomology group can be realized as Euler class for some real (orientable) vector bundle over $X$ or not. For a closed connected $d$-dimensional complex manifold we obtain an upper bound of its even cup length. For a finite connected even dimensional CW-complex with its *upper chernrank* equal to its dimension, we provide a method of computing its even cup length. Finally, we compute *upper chernrank* of many interesting spaces.
• A multigraph is a nonsimple graph which is permitted to have multiple edges, that is, edges that have the same end nodes. We introduce the concept of spanning simplicial complexes $\Delta_s(\mathcal{G})$ of multigraphs $\mathcal{G}$, which provides a generalization of spanning simplicial complexes of associated simple graphs. We give first the characterization of all spanning trees of a uni-cyclic multigraph $\mathcal{U}_{n,m}^r$ with $n$ edges including $r$ multiple edges within and outside the cycle of length $m$. Then, we determine the facet ideal $I_\mathcal{F}(\Delta_s(\mathcal{U}_{n,m}^r))$ of spanning simplicial complex $\Delta_s(\mathcal{U}_{n,m}^r)$ and its primary decomposition. The Euler characteristic is a well-known topological and homotopic invariant to classify surfaces. Finally, we device a formula for Euler characteristic of spanning simplicial complex $\Delta_s(\mathcal{U}_{n,m}^r)$.
• The Lusternik-Schnirelmann category $cat(X)$ is a homotopy invariant which is a numerical bound on the number of critical points of a smooth function on a manifold. In this paper we calculate the Lusternik-Schnirelmann category of the configuration space of $2$ distinct points in Complex Projective $n-$space for all $n\geq 1$.
https://www.tensorflow.org/mobile/linking_libs

# Integrating TensorFlow libraries
Once you have made some progress on a model that addresses the problem you’re trying to solve, it’s important to test it out inside your application immediately. There are often unexpected differences between your training data and what users actually encounter in the real world, and getting a clear picture of the gap as soon as possible improves the product experience.
This page talks about how to integrate the TensorFlow libraries into your own mobile applications, once you have already successfully built and deployed the TensorFlow mobile demo apps.
After you've managed to build the examples, you'll probably want to call TensorFlow from one of your existing applications. The very easiest way to do this is to use the Pod installation steps described here, but if you want to build TensorFlow from source (for example to customize which operators are included) you'll need to break out TensorFlow as a framework, include the right header files, and link against the built libraries and dependencies.
### Android
For Android, you just need to link in a Java library contained in a JAR file called libandroid_tensorflow_inference_java.jar. There are three ways to include this functionality in your program:
1. Include the jcenter AAR which contains it, as in this example app
3. Build the JAR file yourself using the instructions in our Android GitHub repo
### iOS
Pulling in the TensorFlow libraries on iOS is a little more complicated. Here is a checklist of what you’ll need to do to your iOS app:
• Link against tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a, usually by adding -L/your/path/tensorflow/contrib/makefile/gen/lib/ and -ltensorflow-core to your linker flags.
• Link against the generated protobuf libraries by adding -L/your/path/tensorflow/contrib/makefile/gen/protobuf_ios/lib and -lprotobuf and -lprotobuf-lite to your command line.
• For the include paths, you need the root of your TensorFlow source folder as the first entry, followed by tensorflow/contrib/makefile/downloads/protobuf/src, tensorflow/contrib/makefile/downloads, tensorflow/contrib/makefile/downloads/eigen, and tensorflow/contrib/makefile/gen/proto.
• Make sure your binary is built with -force_load (or the equivalent on your platform), aimed at the TensorFlow library to ensure that it’s linked correctly. More detail on why this is necessary can be found in the next section, Global constructor magic. On Linux-like platforms, you’ll need different flags, more like -Wl,--allow-multiple-definition -Wl,--whole-archive.
You’ll also need to link in the Accelerate framework, since this is used to speed up some of the operations.
## Global constructor magic
One of the subtlest problems you may run up against is the “No session factory registered for the given session options” error when trying to call TensorFlow from your own application. To understand why this is happening and how to fix it, you need to know a bit about the architecture of TensorFlow.
The framework is designed to be very modular, with a thin core and a large number of specific objects that are independent and can be mixed and matched as needed. To enable this, the coding pattern in C++ had to let modules easily notify the framework about the services they offer, without requiring a central list that has to be updated separately from each implementation. It also had to allow separate libraries to add their own implementations without needing a recompile of the core.
To achieve this capability, TensorFlow uses a registration pattern in a lot of places. In the code, it looks like this:
```cpp
class MulKernel : OpKernel {
  Status Compute(OpKernelContext* context) { … }
};

REGISTER_KERNEL(MulKernel, "Mul");
```
This would be in a standalone .cc file linked into your application, either as part of the main set of kernels or as a separate custom library. The magic part is that the REGISTER_KERNEL() macro is able to inform the core of TensorFlow that it has an implementation of the Mul operation, so that it can be called in any graphs that require it.
From a programming point of view, this setup is very convenient. The implementation and registration code live in the same file, and adding new implementations is as simple as compiling and linking it in. The difficult part comes from the way that the REGISTER_KERNEL() macro is implemented. C++ doesn’t offer a good mechanism for doing this sort of registration, so we have to resort to some tricky code. Under the hood, the macro is implemented so that it produces something like this:
```cpp
class RegisterMul {
 public:
  RegisterMul() {
    global_kernel_registry()->Register("Mul", []() {
      return new MulKernel();
    });
  }
};

RegisterMul g_register_mul;
```
This sets up a class RegisterMul with a constructor that tells the global kernel registry what function to call when somebody asks it how to create a “Mul” kernel. Then there’s a global object of that class, and so the constructor should be called at the start of any program.
While this may sound sensible, the unfortunate part is that the global object that’s defined is not used by any other code, so linkers not designed with this in mind will decide that it can be deleted. As a result, the constructor is never called, and the class is never registered. All sorts of modules use this pattern in TensorFlow, and it happens that Session implementations are the first to be looked for when the code is run, which is why it shows up as the characteristic error when this problem occurs.
The solution is to force the linker to not strip any code from the library, even if it believes it’s unused. On iOS, this step can be accomplished with the -force_load flag, specifying a library path, and on Linux you need --whole-archive. These persuade the linker to not be as aggressive about stripping, and should retain the globals.
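As a sketch, the flags look something like this; all paths here are placeholders for your own build tree:

```shell
# iOS/macOS, using clang:
clang++ main.o -o myapp \
  -force_load /your/path/tensorflow/contrib/makefile/gen/lib/libtensorflow-core.a

# Linux-like platforms: switch --whole-archive off again afterwards,
# otherwise every subsequent static library is also pulled in completely.
g++ main.o -o myapp \
  -Wl,--allow-multiple-definition \
  -Wl,--whole-archive -ltensorflow-core -Wl,--no-whole-archive
```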
The actual implementation of the various REGISTER_* macros is a bit more complicated in practice, but they all suffer the same underlying problem. If you’re interested in how they work, op_kernel.h is a good place to start investigating.
## Protobuf problems
TensorFlow relies on the Protocol Buffer library, commonly known as protobuf. This library takes definitions of data structures and produces serialization and access code for them in a variety of languages. The tricky part is that this generated code needs to be linked against shared libraries for the exact same version of the framework that was used for the generator. This can be an issue when protoc, the tool used to generate the code, is from a different version of protobuf than the libraries in the standard linking and include paths. For example, you might be using a copy of protoc that was built locally in ~/projects/protobuf-3.0.1.a, but you have libraries installed at /usr/local/lib and /usr/local/include that are from 3.0.0.
The symptoms of this issue are errors during the compilation or linking phases with protobufs. Usually, the build tools take care of this, but if you’re using the makefile, make sure you’re building the protobuf library locally and using it, as shown in this Makefile.
Another situation that can cause problems is when protobuf headers and source files need to be generated as part of the build process. This process makes building more complex, since the first phase has to be a pass over the protobuf definitions to create all the needed code files, and only after that can you go ahead and do a build of the library code.
### Multiple versions of protobufs in the same app
Protobufs generate headers that are needed as part of the C++ interface to the overall TensorFlow library. This complicates using the library as a standalone framework.
If your application is already using version 1 of the protocol buffers library, you may have trouble integrating TensorFlow because it requires version 2. If you just try to link both versions into the same binary, you’ll see linking errors because some of the symbols clash. To solve this particular problem, we have an experimental script at rename_protobuf.sh.
You need to run this as part of the makefile build, after you’ve downloaded all the dependencies:
```shell
tensorflow/contrib/makefile/download_dependencies.sh
tensorflow/contrib/makefile/rename_protobuf.sh
```
## Calling the TensorFlow API
Once you have the framework available, you then need to call into it. The usual pattern is that you first load your model, which represents a preset set of numeric computations, and then you run inputs through that model (for example, images from a camera) and receive outputs (for example, predicted labels).
On Android, we provide the Java Inference Library that is focused on just this use case, while on iOS and Raspberry Pi you call directly into the C++ API.
### Android
Here’s what a typical Inference Library sequence looks like on Android:
```java
// Load the model from disk.
TensorFlowInferenceInterface inferenceInterface =
    new TensorFlowInferenceInterface(assetManager, modelFilename);

// Copy the input data into TensorFlow.
inferenceInterface.feed(inputName, floatValues, 1, inputSize, inputSize, 3);

// Run the inference call.
inferenceInterface.run(outputNames, logStats);

// Copy the output Tensor back into the output array.
inferenceInterface.fetch(outputName, outputs);
```
You can find the source of this code in the Android examples.
### iOS and Raspberry Pi
Here’s the equivalent code for iOS and Raspberry Pi:
```cpp
// Load the model.
// Create a session from the model.
tensorflow::Status s = session->Create(tensorflow_graph);
if (!s.ok()) {
  LOG(FATAL) << "Could not create TensorFlow Graph: " << s;
}

// Run the model.
std::string input_layer = "input";
std::string output_layer = "output";
std::vector<tensorflow::Tensor> outputs;
tensorflow::Status run_status = session->Run({{input_layer, image_tensor}},
                                             {output_layer}, {}, &outputs);
if (!run_status.ok()) {
  LOG(FATAL) << "Running model failed: " << run_status;
}

// Access the output data.
tensorflow::Tensor* output = &outputs[0];
```
This is all based on the iOS sample code, but there’s nothing iOS-specific; the same code should be usable on any platform that supports C++.
You can also find specific examples for Raspberry Pi here.
http://mathhelpforum.com/calculus/58804-substitution-method-print.html

# Substitution Method
• Nov 10th 2008, 03:13 PM
algorithm
Substitution Method
Hello
What is the method for integrating this form
$\frac{dx}{(ax^2 + bx + c)(kx + z)}$
Thank you
• Nov 10th 2008, 03:21 PM
Mathstud28
Quote:
Originally Posted by algorithm
Hello
What is the method for integrating this form
$\frac{dx}{(ax^2 + bx + c)(kx + z)}$
Thank you
Use partial fractions decomposition
It is ugly in this form though, it is much nicer if you have actual constants.
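For reference, assuming the intended integrand is $\frac{1}{(ax^2+bx+c)(kx+z)}$, the decomposition takes the form:

```latex
\frac{1}{(ax^2 + bx + c)(kx + z)} = \frac{Ax + B}{ax^2 + bx + c} + \frac{C}{kx + z}
```

Multiplying through by the full denominator and comparing coefficients of $x^2$, $x$, and $1$ determines $A$, $B$, and $C$. The linear term then integrates to a logarithm, and the quadratic term to a logarithm plus, if the quadratic is irreducible, an arctangent.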
https://www.notjustphysics.com/tag/computational/

## Tag: computational
#### Computational Physics Basics: Floating Point Numbers
In a previous contribution, I have shown you that computers are naturally suited to store finite length integer numbers. Most quantities in physics, on the other hand, are real numbers. Computers can store real numbers only with finite precision. Like storing integers, each representation of a real number is stored in a finite number of bits. Two aspects need to be considered. The precision of the stored number is the number of significant decimal places that can be represented. Higher precision means that the error of the representation can be made smaller. But precision is not the only aspect that needs consideration. Often, physical quantities can be very large or very small. The electron charge in SI units, for example, is roughly $1.602\times10^{-19}$C. Using a fixed point decimal format to represent this number would require a large number of unnecessary zeros to be stored. Therefore, the range of numbers that can be represented is also important.
In the decimal system, we already have a notation that can capture very large and very small numbers and I have used it to write down the electron charge in the example above. The scientific notation writes a number as a product of a mantissa and a power of 10. The value of the electron charge (without units) is written as
$$1.602\times10^{-19}.$$
Here 1.602 is the mantissa (or the significand) and -19 is the exponent. The general form is
$$m\times 10^n.$$
The mantissa, $m$, will always be between 1 and 10 and the exponent, $n$, has to be chosen accordingly. This format can straight away be translated into the binary system. Here, any number can be written as
$$m\times2^n,$$
with $1\le m<2$. Both $m$ and $n$ can be stored in binary form.
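This decomposition can be inspected directly, for instance with Python's standard library. `math.frexp` returns the mantissa in $[0.5, 1)$, so one power of two has to be shifted over to obtain the convention used here:

```python
import math

def decompose(x):
    """Return (m, n) with x = m * 2**n and 1 <= m < 2."""
    m, n = math.frexp(x)  # frexp gives 0.5 <= m < 1
    return m * 2, n - 1

m, n = decompose(5.25)
print(m, n)  # 1.3125 2, i.e. 5.25 = 1.0101_2 * 2^2
```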
#### Memory layout of floating-point numbers, the IEEE 754 standard
In most modern computers, numbers are stored using 64 bits but some architectures, like smartphones, might only use 32 bits. For a given number of bits, a decision has to be made on how many bits should be used to store the mantissa and how many bits should be used for the exponent. The IEEE 754 standard sets out the memory layout for various size floating-point representations and almost all hardware supports these specifications. The following table shows the number of bits for mantissa and exponent for some IEEE 754 number formats.
Bits Name Sign bit Mantissa bits, m Exponent bits, p Exponent bias Decimal digits
16 half-precision 1 10 5 15 3.31
32 single precision 1 23 8 127 7.22
64 double precision 1 52 11 1023 15.95
128 quadruple precision 1 112 15 16383 34.02
The layout of the bits is as follows. The first, most significant bit represents the sign of the number. A 0 indicates a positive number and a 1 indicates a negative number. The next $p$ bits store the exponent. The exponent is not stored as a signed integer, but as an unsigned integer with offset. This offset, or bias, is chosen to be $2^{p-1} - 1$ so that a leading zero followed by all ones corresponds to an exponent of 0.
The remaining bits store the mantissa. The mantissa is always between 1 and less than 2. This means that, in binary, the leading bit is always equal to one and doesn’t need to be stored. The $m$ bits, therefore, only store the fractional part of the mantissa. This allows for one extra bit to improve the precision of the number.
Example
The number 5.25 represented by a 32-bit floating-point. In binary, the number is $1.0101\times2^2$. The fractional part of the mantissa is stored in the mantissa bits. The exponent is $127+2$.
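This layout can be verified by reinterpreting the bytes of the number, for example with Python's `struct` module:

```python
import struct

# Reinterpret the 4 bytes of a single-precision 5.25 as an unsigned integer.
bits = struct.unpack('>I', struct.pack('>f', 5.25))[0]
print(f'{bits:032b}')  # sign 0, exponent 10000001, mantissa 0101000...

sign = bits >> 31                # 0, so the number is positive
exponent = (bits >> 23) & 0xFF   # 129, stored with bias 127
fraction = bits & 0x7FFFFF       # fractional part of the mantissa 1.0101
print(sign, exponent - 127, bin(fraction))
```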
#### Infinity and NaN
The IEEE 754 standard defines special numbers that represent infinity and the not-a-number state. Infinity is used to show that a result of a computation has exceeded the allowed range. It can also result from a division by zero. Infinity is represented by the maximum exponent, i.e. all $p$ bits of the exponent are set to 1. In addition, the $m$ bits of the mantissa are set to 0. The sign bit is still used for infinity. This means it is possible to store a +Inf and a -Inf value.
Example
Infinity in 32-bit floating-point representation
The special state NaN is used to store results that are not defined or can’t otherwise be represented. For example, the operation $\sqrt{-1}$ will result in a not-a-number state. Similar to infinity, it is represented by setting the $p$ exponent bits to 1. To distinguish it from infinity, the mantissa can have any non-zero value.
32-bit floating-point representation of NaN
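Both special values are easy to produce from ordinary arithmetic. A quick sketch in Python, which uses 64-bit doubles:

```python
import math

overflowed = 1e308 * 10      # exceeds the range of a double
print(overflowed)            # inf
print(overflowed > 1e308)    # True: +Inf compares greater than everything

nan = float('inf') - float('inf')   # an undefined operation yields NaN
print(math.isnan(nan))       # True
print(nan == nan)            # False: NaN is unequal even to itself
```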
#### Subnormal Numbers
As stated above, all numbers in the regular range will be represented by a mantissa between 1 and 2 so that the leading bit is always 1. Numbers very close to zero will have a small exponent value. Once the exponent is exactly zero, it is better to explicitly store all bits of the mantissa and allow the first bit to be zero. This allows even smaller numbers to be represented than would otherwise be possible. Extending the range in this way comes at the cost of reduced precision of the stored number.
Example
The number $10^{-40}$ represented as a subnormal 32-bit floating-point
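The effect of subnormals is visible in any language that uses IEEE 754 doubles; in Python, for example:

```python
import sys

min_normal = sys.float_info.min  # smallest normal double, about 2.2e-308
tiny = 5e-324                    # smallest subnormal double, 2**-1074
print(0.0 < tiny < min_normal)   # True: subnormals extend the range
print(tiny / 2)                  # 0.0, nothing smaller can be stored
```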
#### Floating Point Numbers in Python, C++, and JavaScript
Both Python and JavaScript exclusively store floating-point numbers using 64-bit precision. In fact, in JavaScript, all numbers are stored as 64-bit floating-point, even integers. This is why integers in JavaScript are exact only up to 53 bits: they are stored in the mantissa of the 64-bit floating-point number.
C++ offers a choice of different precisions
| Type | Alternative Name | Number of Bits |
|------|------------------|----------------|
| float | single precision | usually 32 bits |
| double | double precision | usually 64 bits |
| long double | extended precision | architecture-dependent, not IEEE 754, usually 80 bits |
#### Spherical Blast Wave Simulation
Here is another animation that I made using the Vellamo fluid code. It shows two very simple simulations of spherical blast waves. The first simulation has been carried out in two dimensions. The second one shows a very similar setup but in three dimensions.
You might have seen some youtube videos on blast waves and dimensional analysis on the Sixty Symbols channel or on Numberphile. The criterion for the dimensional analysis given in those videos is true for strong blast waves. The simulation that I carried out, looks at the later stage of these waves when the energy peters out and the strong shock is replaced by a compression wave that travels at a constant velocity. You can still see some of the self-similar behaviour of the Sedov-Taylor solution during the very early stages of the explosion. But after the speed of the shock has slowed down to the sound speed, the compression wave continues to travel at the speed of sound, gradually losing its energy.
The video shows the energy density over time. The energy density includes the thermal energy as well as the kinetic energy of the gas.
For those of you who are interested in the maths and the physics, the code simulates the Euler equations of a compressible fluid. These equations are a model for an ideal adiabatic gas. For more information about the Euler equations check out my previous post.
#### Computational Physics Basics: Integers in C++, Python, and JavaScript
In a previous post, I wrote about the way that the computer stores and processes integers. This description referred to the basic architecture of the processor. In this post, I want to talk about how different programming languages present integers to the developer. Programming languages add a layer of abstraction and in different languages that abstraction may be less or more pronounced. The languages I will be considering here are C++, Python, and JavaScript.
### Integers in C++
C++ is a language that is very close to the machine architecture compared to other, more modern languages. The data that C++ operates on is stored in the machine’s memory and C++ has direct access to this memory. This means that the C++ integer types are exact representations of the integer types determined by the processor architecture.
The following integer datatypes exist in C++
| Type | Alternative Names | Number of Bits | G++ on Intel 64 bit (default) |
|------|-------------------|----------------|-------------------------------|
| char | | at least 8 | 8 |
| short int | short | at least 16 | 16 |
| int | | at least 16 | 32 |
| long int | long | at least 32 | 64 |
| long long int | long long | at least 64 | 64 |
This table does not give the exact size of the datatypes because the C++ standard does not specify the sizes but only lower limits. It is also required that the larger types must not use fewer bits than the smaller types. The exact number of bits used is up to the compiler and may also be changed by compiler options. To find out more about the regular integer types you can look at this reference page.
The reason for not specifying exact sizes for datatypes is the fact that C++ code will be compiled down to machine code. If you compile your code on a 16 bit processor the plain int type will naturally be limited to 16 bits. On a 64 bit processor on the other hand, it would not make sense to have this limitation.
Each of these datatypes is signed by default. It is possible to add the signed qualifier before the type name to make it clear that a signed type is being used. The unsigned qualifier creates an unsigned variant of any of the types. Here are some examples of variable declarations.
```cpp
char c;               // typically 8 bit
unsigned int i = 42;  // an unsigned integer initialised to 42
signed long l;        // the same as "long l" or "long int l"
```
As stated above, the C++ standard does not specify the exact size of the integer types. This can cause bugs when developing code that should be run on different architectures or compiled with different compilers. To overcome these problems, the C++ standard library defines a number of integer types that have a guaranteed size. The table below gives an overview of these types.
| Signed Type | Unsigned Type | Number of Bits |
|-------------|---------------|----------------|
| int8_t | uint8_t | 8 |
| int16_t | uint16_t | 16 |
| int32_t | uint32_t | 32 |
| int64_t | uint64_t | 64 |
More details on these and similar types can be found here.
The code below prints a 64 bit int64_t using the binary notation. As the name suggests, the bitset class interprets the memory of the data passed to it as a bitset. The bitset can be written into an output stream and will show up as binary data.
```cpp
#include <bitset>
#include <cstdint>
#include <iostream>

void printBinaryLong(int64_t num) {
    std::cout << std::bitset<64>(num) << std::endl;
}
```
### Integers in Python
Unlike C++, Python hides the underlying architecture of the machine. In order to discuss integers in Python, we first have to make clear which version of Python we are talking about. Python 2 and Python 3 handle integers in a different way. The Python interpreter itself is written in C which can be regarded in many ways as a subset of C++. In Python 2, the integer type was a direct reflection of the long int type in C. This meant that integers could be either 32 or 64 bit, depending on which machine a program was running on.
This machine dependence was considered bad design and was replaced be a more machine independent datatype in Python 3. Python 3 integers are quite complex data structures that allow storage of arbitrary size numbers but also contain optimizations for smaller numbers.
It is not strictly necessary to understand how Python 3 integers are stored internally to work with Python but in some cases it can be useful to have knowledge about the underlying complexities that are involved. For a small range of integers, ranging from -5 to 256, integer objects are pre-allocated. This means that, an assignment such as
```python
n = 25
```
will not create the number 25 in memory. Instead, the variable n is made to reference a pre-allocated piece of memory that already contained the number 25. Consider now a statement that might appear at some other place in the program.
```python
a = 12
b = a + 13
```
The value of b is clearly 25 but this number is not stored separately. After these lines b will reference the exact same memory address that n was referencing earlier. For numbers outside this range, Python 3 will allocate memory for each integer variable separately.
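This caching behaviour can be observed with the `is` operator, which compares object identity rather than value. Note that this is a CPython implementation detail, not a language guarantee:

```python
# int() is used here so the interpreter cannot fold equal literals together.
a = int("256")
b = int("256")
print(a is b)  # True: both names refer to the pre-allocated object for 256

c = int("257")
d = int("257")
print(c is d)  # False: 257 is outside the cache, so two objects exist
```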
Larger integers are stored in arbitrary length arrays of the C int type. This type can be either 16 or 32 bits long but Python only uses either 15 or 30 bits of each of these "digits". In the following, 32 bit ints are assumed but everything can be easily translated to 16 bit.
Numbers between $-(2^{30} - 1)$ and $2^{30} - 1$ are stored in a single int. Negative numbers are not stored as two's complement. Instead, the sign of the number is stored separately. All mathematical operations on numbers in this range can be carried out in the same way as on regular machine integers. For larger numbers, multiple 30-bit digits are needed. Mathematical operations on these large integers operate digit by digit. In this case, the unused bits in each digit come in handy as carry values.
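The practical consequence is that Python 3 integers never overflow; they simply grow by adding more digits, which also costs more memory:

```python
import sys

big = 2**1000
print(big.bit_length())                       # 1001
print(sys.getsizeof(big) > sys.getsizeof(1))  # True: more digits, more bytes
print(2**100 - (2**100 - 1))                  # 1: arithmetic stays exact
```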
### Integers in JavaScript
Compared to most other high level languages JavaScript stands out in how it deals with integers. At a low level, JavaScript does not store integers at all. Instead, it stores all numbers in floating point format. I will discuss the details of the floating point format in a future post. When using a number in an integer context, JavaScript allows exact integer representation of a number up to 53 bit integer. Any integer larger than 53 bits will suffer from rounding errors because of its internal representation.
```javascript
const a = 25;
const b = a / 2;
```
In this example, a will have a value of 25. Unlike C++, JavaScript does not perform integer divisions. This means the value stored in b will be 12.5.
JavaScript allows bitwise operations only on 32 bit integers. When a bitwise operation is performed on a number JavaScript first converts the floating point number to a 32 bit signed integer using two’s complement. The result of the operation is subsequently converted back to a floating point format before being stored.
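Both limits, 53 bits for exact integers and 32 bits for bitwise operations, can be demonstrated directly:

```javascript
// Exact integers end at 2^53 - 1 (Number.MAX_SAFE_INTEGER).
console.log(Number.MAX_SAFE_INTEGER === 2 ** 53 - 1);  // true
console.log(2 ** 53 + 1 === 2 ** 53);                  // true: precision lost

// Bitwise operators first convert the number to a 32-bit signed integer.
console.log((2 ** 32) | 0);  // 0: 2^32 does not fit in 32 bits
console.log(-1 >>> 0);       // 4294967295: the two's complement bits of -1
```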
#### The SIR Model for the Spread of Infectious Diseases
In the current Coronavirus crisis, everybody is talking about flattening “the curve”. In the news, you will often see graphs of the total number of cases or the total number of deaths over time. So you may be forgiven to think that these are the curves that everybody is trying to flatten. In fact, what epidemiologists mean by the curve is the graph of the number of actively infected people over time. This curve is important because it determines the load that is placed on the healthcare system of a country. The current number of cases determines how many hospital beds, how many ventilators, and how much healthcare personnel are needed.
Mathematics and computer simulations play an important role in estimating how the disease will spread, how many people will be affected, and how much resources are needed. They also allow predicting the effects of different measures to control the spread. For example, the current lockdown in many countries around the world is reducing the number of people that an infected individual can pass the virus on to. It is important to know how effective this measure is. One of the big questions is when it is safe to relax the isolation of people and how much it would affect the spread if individual businesses re-open.
Before continuing, I have to add a disclaimer. I am interested in mathematics but I am not an expert epidemiologist. The models I am showing you here are very simple starting points for simulating the spread of diseases. They can give you some idea on how parameters like the infection rate and recovery rate influence the overall number of infected individuals. But they should not be used to draw any quantitative conclusions.
#### The SIR Model
To get a basic feel for the way infections spread through a population, epidemiologists have developed simple mathematical models. Probably the first model you will hear about in this context is the SIR model. The SIR model is a version of a compartmental model. This means that the total population is divided up into separate compartments. The quantity $S$ denotes the number of susceptible individuals. These are the people that are not infected and also don’t have any immunity to the disease. $I$ is the number of infected individuals and $R$ is the number of individuals that are not infectious but also can’t get the disease. Most scientists denote the $R$ to mean removed as it includes both people who have recovered and are immune but also those that have died. Due to the current sensitivity of the subject, many people prefer to call $R$ the recovered population.
Compartmental models define rates at which individuals move from one compartment to another. The SIR model has two rates, the rate of infection and the rate of recovery. The absolute rate of infection is proportional to the number of infected people. On average, each infected individual will pass the infection to a number of people in a given time interval. This number is usually called $\beta$. However, only contacts with susceptible individuals actually spread the infection; the probability that a given contact is susceptible is $S/N$, where $N$ is the total population, $N=S+I+R$. Putting this together, the absolute rate of infection is
$$\frac{\beta I S}{N}.$$
The rate of recovery is simpler. Each infected individual will recover with some probability $\gamma$ in a given time interval. The absolute rate of recovery is then expressed as
$$\gamma I.$$
The infection rate reduces the number of susceptible individuals $S$ and increases the number of infected individuals $I$. The recovery rate reduces the number of infected individuals $I$ and increases the number of recovered individuals $R$. The complete set of rate equations is then
$$\begin{eqnarray} \frac{dS}{dt} &=& – \frac{\beta I S}{N}, \\ \frac{dI}{dt} &=& \frac{\beta I S}{N} – \gamma I, \\ \frac{dR}{dt} &=& \gamma I. \end{eqnarray}$$
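These rate equations can be integrated numerically. The sketch below uses a simple forward-Euler step with the example values discussed in the text (an infection rate of 1 and a recovery rate of 0.5); the population size of 1000 and the time step are illustrative assumptions, and this is not the code of the app mentioned below.

```python
# Forward-Euler integration of the SIR rate equations.
# beta (infection rate), gamma (recovery rate) and n are illustrative values.
def simulate_sir(beta=1.0, gamma=0.5, n=1000.0, i0=1.0, dt=0.1, steps=1000):
    s, i, r = n - i0, i0, 0.0
    history = [(s, i, r)]
    for _ in range(steps):
        infection = beta * i * s / n   # absolute rate of infection
        recovery = gamma * i           # absolute rate of recovery
        s -= infection * dt
        i += (infection - recovery) * dt
        r += recovery * dt
        history.append((s, i, r))
    return history

history = simulate_sir()
s, i, r = history[-1]
print(round(r / 1000.0, 2))  # fraction of the population that ended up recovered
```

Note that the three update steps cancel exactly, so the total $S+I+R$ is conserved by construction, just as in the continuous equations.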
The ratio of the coefficients $\beta$ and $\gamma$ is known as the basic reproduction ratio,
$$R_0 = \frac{\beta}{\gamma}.$$
$R_0$ is important because it determines whether the infection spreads exponentially or eventually dies out.
I have implemented a little JavaScript app that integrates the SIR equations and shows the development of the populations over time. Feel free to play around with the sliders and explore how the parameters influence the spread.
I encourage you to play around with the parameters to see how the model behaves. For an infection rate of 1 and a recovery rate of 0.5, the populations stabilise when about 80% of the population has been infected and has recovered. The maximum of the infectious population, the $I$ curve, reaches about 16%. If you reduce the infection rate, the $I$ curve flattens, prolonging the time over which the disease is spreading but reducing the maximum number of infected individuals at any one time.
#### The SEIR Model
One of the major assumptions in the SIR model is that an infected individual can immediately spread the infection. A refinement of the model is the addition of a population, $E$, of exposed individuals. These are people that are infected but are not yet infectious. The SEIR model introduces another rate, $a$, at which exposed individuals turn infectious. The quantity $a$ can be understood as the inverse of the average incubation period. The absolute rate at which exposed individuals become infectious is
$$a E.$$
The complete set of equations of the SEIR model is then as follows.
$$\begin{eqnarray} \frac{dS}{dt} &=& – \frac{\beta I S}{N}, \\ \frac{dE}{dt} &=& \frac{\beta I S}{N} – a E, \\ \frac{dI}{dt} &=& a E – \gamma I, \\ \frac{dR}{dt} &=& \gamma I. \end{eqnarray}$$
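As with the SIR model, a simple forward-Euler step can integrate these equations. In the sketch below, the incubation rate a = 0.2 (an average incubation period of 5 days) and the other parameter values are illustrative assumptions.

```python
# Forward-Euler integration of the SEIR rate equations.
# a is the rate at which exposed individuals turn infectious
# (the inverse of the average incubation period); all values are illustrative.
def simulate_seir(beta=1.0, gamma=0.5, a=0.2, n=1000.0, e0=1.0, dt=0.1, steps=2000):
    s, e, i, r = n - e0, e0, 0.0, 0.0
    for _ in range(steps):
        infection = beta * i * s / n
        onset = a * e                  # exposed individuals becoming infectious
        recovery = gamma * i
        s -= infection * dt
        e += (infection - onset) * dt
        i += (onset - recovery) * dt
        r += recovery * dt
    return s, e, i, r

s, e, i, r = simulate_seir()
print(round(s + e + i + r))  # the total population is conserved: 1000
```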
The SEIR model is also implemented in the app above. Simply pick SEIR Model from the dropdown menu and start exploring.
#### The SEIR Model with Delay
The SEIR model above assumes that an individual, once exposed, can immediately turn infectious. The constant rate $a$ implies that the probability of changing from the exposed state to the infectious state is the same on day one of being exposed as it is on day ten. This might not be realistic because diseases typically have some incubation period. Only after some number of days after being exposed will an individual become infectious. One can model this kind of behaviour with a time delay. Let’s say that after a given incubation period $\tau$, every exposed individual will turn infectious. The absolute rate at which exposed individuals become infectious is then given by
$$\frac{\beta I(t-\tau) S(t-\tau)}{N}.$$
Here, $S(t-\tau)$ means taking the value of the susceptible population not at the current time, but at a time in the past with a delay of $\tau$. The complete set of equations of the SEIR model with delay is then as follows.
$$\begin{eqnarray} \frac{dS}{dt} &=& – \frac{\beta I(t) S(t)}{N}, \\ \frac{dE}{dt} &=& \frac{\beta I(t) S(t)}{N} – \frac{\beta I(t-\tau) S(t-\tau)}{N}, \\ \frac{dI}{dt} &=& \frac{\beta I(t-\tau) S(t-\tau)}{N} – \gamma I(t), \\ \frac{dR}{dt} &=& \gamma I(t). \end{eqnarray}$$
I have written the time dependence explicitly for all quantities on the right-hand side to make it clear how the time delay should be applied.
You can choose this model in the app above by selecting SEIR Model with Delay from the dropdown menu.
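One simple way to discretize the delayed equations is to keep a history of the infection rate and read it back with a lag of $\tau$. The forward-Euler sketch below is one possible implementation with illustrative parameters, seeded with a single infectious individual so that the history starts empty; it is not the code of the app.

```python
# Forward-Euler integration of the SEIR model with delay.
# The onset rate at time t re-reads the absolute infection rate from
# tau time units in the past; all parameter values are illustrative.
def simulate_seir_delay(beta=1.0, gamma=0.5, tau=5.0, n=1000.0,
                        i0=1.0, dt=0.1, steps=2500):
    lag = int(round(tau / dt))         # delay measured in time steps
    s, e, i, r = n - i0, 0.0, i0, 0.0  # seed with one infectious individual
    rate_history = []                  # past values of the infection rate
    for step in range(steps):
        infection = beta * i * s / n
        recovery = gamma * i
        rate_history.append(infection)
        # before t = tau nobody has completed the incubation period yet
        onset = rate_history[step - lag] if step >= lag else 0.0
        s -= infection * dt
        e += (infection - onset) * dt
        i += (onset - recovery) * dt
        r += recovery * dt
    return s, e, i, r
```

Because every exposed individual turns infectious exactly $\tau$ time units after exposure, the compartment $E$ in this scheme simply holds the infections of the last $\tau$ time units.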
#### Some Conclusions
The SEIR model and the SEIR model with delay both introduce a population of exposed individuals that are not yet infectious. This draws out the spread of the disease over a longer time and slightly reduces the maximum of the infectious population curve $I$. Introducing a time delay doesn’t change the curves much. But for long incubation periods, the curve of infectious individuals can have multiple maxima. So at some point, it may look like the disease has stopped spreading while in reality the next wave is just about to start. The two versions of the SEIR model are two extremes, and the truth lies somewhere between them.
I have to stress again that I am not an epidemiology expert and that the models presented here are very simple models. For any meaningful prediction of the spread of a real disease, much more complex models are needed. These models must include real data about the number of contacts that different parts of the population have between each other.
The code for the application above is available on GitHub.
### Unsigned Integers
Computers use binary representations to store various types of data. In the context of computational physics, it is important to understand how numerical values are stored. To start, let’s take a look at non-negative integer numbers. These unsigned integers can simply be translated into their binary representation. The binary number format is similar to the familiar decimal format, with the main difference that each digit can take only two values, 0 and 1, instead of ten. Numbers are written in the same way as decimal numbers, only that the place values of the digits are now powers of 2. For example, the following 4-digit numbers show the values of the first four powers of 2.
0 0 0 1 decimal value 2^0 = 1
0 0 1 0 decimal value 2^1 = 2
0 1 0 0 decimal value 2^2 = 4
1 0 0 0 decimal value 2^3 = 8
The binary digits are called bits and in modern computers, the bits are grouped in units of 8. Each unit of 8 bits is called a byte and can contain values between 0 and 2^8 − 1 = 255. Of course, 255 is not a very large number and for most applications, larger numbers are needed. Most modern computer architectures support integers with 32 bits and 64 bits. Unsigned 32-bit integers range from 0 to 2^32 − 1 = 4,294,967,295 ≈ 4.3 × 10^9 and unsigned 64-bit integers range from 0 to 2^64 − 1 = 18,446,744,073,709,551,615 ≈ 1.8 × 10^19. It is worth noting that many GPU architectures currently don’t natively support 64-bit numbers.
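Since Python integers have arbitrary precision, the ranges quoted above can be checked directly:

```python
# Print the value range of unsigned integers for common bit widths.
for bits in (8, 32, 64):
    print(f"unsigned {bits}-bit: 0 .. {2**bits - 1:,}")
# unsigned 8-bit: 0 .. 255
# unsigned 32-bit: 0 .. 4,294,967,295
# unsigned 64-bit: 0 .. 18,446,744,073,709,551,615
```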
The computer’s processor contains registers that can store binary numbers. Thus a 64-bit processor contains 64-bit registers and has machine instructions that perform numerical operations on those registers. As an example, consider the addition operation. In binary, two numbers are added in much the same way as with long addition in decimal. Consider the addition of the two 64-bit integers 7013356221863432502 + 884350303838366524. In binary, this is written as follows.
01100001,01010100,01110010,01010011,01001111,01110010,00010001,00110110
+ 00001100,01000101,11010111,11101010,01110101,01001011,01101011,00111100
---------------------------------------------------------------------------
01101101,10011010,01001010,00111101,11000100,10111101,01111100,01110010
The process of adding two numbers is simple. From right to left, the digits of the two numbers are added. If the result is two or more, there will be a carry-over which is added to the next digit on the left.
You could add integers of any size using this prescription but, of course, in the computer numbers are limited by the number of bits they contain. Consider the following binary addition of (2^64 − 1) and 1.
11111111,11111111,11111111,11111111,11111111,11111111,11111111,11111111
+ 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
---------------------------------------------------------------------------
00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000
If you were dealing with mathematical integers, you would expect to see an extra digit 1 on the left. The computer cannot store that bit in the register containing the result but stores the extra bit in a special carry flag. In many computer languages, this unintentional overflow will go undetected and the programmer has to take care that numerical operations do not lead to unintended results.
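One way to see this behaviour from a high-level language is to emulate a 64-bit register by masking off everything above the lowest 64 bits. Python’s own integers never overflow, so in the sketch below the mask is doing the work of the register; it reproduces both additions shown above.

```python
# Emulate unsigned 64-bit register addition: keep only the low 64 bits,
# discarding the carry out of the register, just as the hardware does.
MASK64 = (1 << 64) - 1

def add_u64(a, b):
    return (a + b) & MASK64

print(add_u64(7013356221863432502, 884350303838366524))  # 7897706525701799026
print(add_u64(2**64 - 1, 1))  # the carry bit is lost: 0
```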
### Signed Integers
The example above shows that adding two non-zero numbers can result in 0. This can be exploited to define negative numbers. In general, given a number a, the negative −a is defined as the number that solves the equation
a + (−a) = 0.
Mathematically, the N-bit integers can be seen as the group of integers modulo 2^N. This means that for any number a ∈ {0, …, 2^N − 1} the number −a can be defined as
−a = 2^N − a ∈ {0, …, 2^N − 1}.
By convention, all numbers whose highest-order binary bit is zero are considered positive. Those numbers whose highest-order bit is one are considered negative. This makes the addition and subtraction of signed integers straightforward, as the processor does not need to implement different algorithms for positive and negative numbers. Signed 32-bit integers range from −2,147,483,648 to 2,147,483,647, and signed 64-bit integers range from −9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
This format of storing negative numbers is called the two’s complement format. The reason for this name becomes obvious when observing how to transform a positive number to its negative.
01100001,01010100,01110010,01010011,01001111,01110010,00010001,00110110 (7013356221863432502)
10011110,10101011,10001101,10101100,10110000,10001101,11101110,11001010 (-7013356221863432502)
To invert a number, first invert all its bits and then add 1. This simple rule of taking the two’s complement can easily be implemented in the processor’s hardware. Because of the simplicity of this prescription, and because adding a negative number follows the same algorithm as adding a positive one, two’s complement is de facto the only format used to store negative integers on modern processors.
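The invert-and-add-one rule is easy to check in a language with arbitrary-precision integers. The sketch below assumes a 64-bit word:

```python
# Two's complement of a 64-bit number: invert all bits, then add one.
BITS = 64
MASK = (1 << BITS) - 1

def twos_complement(a):
    return ((a ^ MASK) + 1) & MASK  # bitwise inversion followed by adding 1

a = 7013356221863432502
neg = twos_complement(a)
print(neg == 2**BITS - a)   # True: the same value as subtracting a from 2**64
print((a + neg) & MASK)     # 0: neg behaves as -a modulo 2**64
```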
### Exercises
1. Show that taking the two’s complement of an N-bit number a does indeed result in the negative −a if the addition of two numbers is defined as addition modulo 2^N.
2. Find out how integers are represented in the programming language of your choice. Does this directly reflect the representation of the underlying architecture? I will be writing another post about this topic soon.
3. Most processors have native commands for multiplying two integers. The result of multiplying the numbers in two N-bit registers is stored in two N-bit result registers representing the high and low bits of the result. Show that the resulting 2N bits will always be enough to store the result.
4. Show how the multiplication of two numbers can be implemented using only the bit-shift operator and conditional addition based on the bit that has been shifted out of the register. The bit-shift operator simply shifts all bits of a register to the left or right. | 2021-07-29 08:10:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.46123963594436646, "perplexity": 424.461003159136}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046153854.42/warc/CC-MAIN-20210729074313-20210729104313-00447.warc.gz"} |
https://www.aimsciences.org/article/doi/10.3934/dcdss.2020089?viewType=html | # American Institute of Mathematical Sciences
May 2020, 13(5): 1567-1587. doi: 10.3934/dcdss.2020089
## Stability and errors analysis of two iterative schemes of fractional steps type associated to a nonlinear reaction-diffusion equation
University "Al. I. Cuza" of Iaşi, 700506, Iaşi, Romania
Received: January 2018. Revised: August 2018. Published: June 2019.
We present the error analysis of two time-stepping schemes of fractional steps type, used in the discretization of a nonlinear reaction-diffusion equation with Neumann boundary conditions, relevant in phase transition and interface problems. We start by investigating the solvability of such boundary value problems in the class $W^{1,2}_p(Q)$, proving the existence, regularity and uniqueness of solutions in the presence of a cubic-type nonlinearity. The convergence and error estimate results (using energy methods) for the iterative schemes of fractional steps type, associated to the nonlinear parabolic equation, are also established. The advantage of such a method consists in simplifying the numerical computation. On the basis of this approach, a conceptual algorithm is formulated in the end. Numerical experiments are presented in order to validate the theoretical results (conditions of numerical stability) and to compare the accuracy of the methods.
Citation: Costică Moroşanu. Stability and errors analysis of two iterative schemes of fractional steps type associated to a nonlinear reaction-diffusion equation. Discrete & Continuous Dynamical Systems - S, 2020, 13 (5) : 1567-1587. doi: 10.3934/dcdss.2020089
Numerical stability: $V^i$ at different levels of time
Errors $\|v_e-V_j^M\|_\infty$ of the Newton, the linearized and the fractional steps methods: (10)-(11)
http://mathhelpforum.com/algebra/60825-proof-question.html | 1. ## a proof question..
for real numbers "a" and "b" prove that:

$\left| \left| a \right| - \left| b \right| \right| \leqslant \left| a - b \right|$
2. $\left| a \right| \leqslant \left| {a - b + b} \right| \leqslant \left| {a - b} \right| + \left| b \right|\; \Rightarrow \;\left| a \right| - \left| b \right| \leqslant \left| {a - b} \right|$
$\begin{gathered}
\left| b \right| \leqslant \left| {b - a + a} \right| \leqslant \left| {b - a} \right| + \left| a \right|\; \Rightarrow \;\left| b \right| - \left| a \right| \leqslant \left| {b - a} \right| = \left| {a - b} \right| \hfill \\
- \left| {a - b} \right| \leqslant \left| a \right| - \left| b \right| \leqslant \left| {a - b} \right| \hfill \\
\end{gathered}$
3. ## how??
first we have
|a-b|>=||a|-|b||
what did you do at the first step in order to transform it into
$\left| a \right| \leqslant \left| {a - b + b} \right| \leqslant \left| {a - b} \right| + \left| b \right|$
4. That is a simple application of the triangle inequality.
In fact, the entire problem depends upon the triangle inequality.
It also depends upon the general fact $
\left| a \right| \leqslant \left| b \right|\text{ if and only if } - \left| b \right| \leqslant a \leqslant \left| b \right|$
.
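To spell that out, here is my own annotated reading of the argument in post 2, with the rule used at each line noted on the right:

```latex
\begin{align*}
|a| = |(a-b)+b| &\le |a-b| + |b|
   && \text{write } a = (a-b)+b,\ \text{then triangle inequality}\\
|a| - |b| &\le |a-b|
   && \text{subtract } |b| \text{ from both sides}\\
|b| = |(b-a)+a| &\le |b-a| + |a|
   && \text{same step with } a \text{ and } b \text{ swapped}\\
-|a-b| &\le |a| - |b|
   && \text{subtract } |a|,\ \text{use } |b-a| = |a-b|\\
\bigl||a| - |b|\bigr| &\le |a-b|
   && \text{since } -s \le t \le s \iff |t| \le s
\end{align*}
```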
can you tell me
what operation you did in each step of this proof?
https://nrich.maths.org/5440/note?nomenu=1 | Copyright © University of Cambridge. All rights reserved.
## 'Pick's Quadratics' printed from http://nrich.maths.org/
To prove Pick's Theorem does not require any advanced mathematics, just careful reasoning. The proof of Pick's Theorem is the challenge in the next problem, which leads you step by step through the proof. This problem is about a generalisation of Pick's Theorem.
Pick's Theorem can be generalised as follows:
'For any planar polygon with vertices at lattice points the quadratic formula $i(k)=Ak^2 - Bk +C$ gives the number of $k$-points inside the polygon and the quadratic formula $g(k)= Ak^2 + Bk +C$ gives the number of $k$-points in the closed polygon (including the boundary and the interior points), where $A$ is the area of the polygon.'
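As an illustration (my own example, not part of the problem page), take the unit square and read "$k$-points" as points of the refined lattice $\tfrac{1}{k}\mathbb{Z}^2$, i.e. lattice points after the grid spacing is scaled by $1/k$:

```latex
% Closed unit square: (k+1) grid lines in each direction;
% interior: (k-1) in each direction.
g(k) = (k+1)^2 = k^2 + 2k + 1 \quad \text{(closed square)}, \qquad
i(k) = (k-1)^2 = k^2 - 2k + 1 \quad \text{(interior)}
```

so $A = 1$ (the area), $B = 2$ and $C = 1$, matching the stated forms $g(k) = Ak^2 + Bk + C$ and $i(k) = Ak^2 - Bk + C$. Setting $k = 1$ recovers the ordinary Pick's Theorem.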
This challenge asks you to verify this generalised form of Pick's Theorem for a particular rectangle.
The proof that the given quadratic formulae hold for all polygons is difficult and requires mathematics beyond school level. However, it is worth noting that this is the form of Pick's Theorem that generalises to 3 and higher dimensions.
https://brilliant.org/problems/an-interesting-problem-227/ | # A number theory problem by Daniel Chiu
Let $$x>0$$ be the answer to this question. If $$k\neq 1$$ is a nonnegative integer, find $\dfrac{x!}{x-k}$
https://ltwork.net/tin-hoc-ra-doi-khi-nao--205 | Tin học ra đời khi nào
Question:
Tin học ra đời khi nào
http://www.standard-form.org/standard-form-of-a-circle.html | # Standard Form of a Circle
## Circle
A circle is the path of a point which moves at a constant distance from a fixed point in a plane. The fixed point is known as center of the circle and the constant distance is known as the radius of the circle.
The radius of a circle is always a positive constant.
Look at the figure below. This is a circle with center O and radius r.
## Standard form of a circle
If the center of the circle is at the point (h, k) and the radius of the circle is r, then the equation of the circle is given by (x - h)² + (y - k)² = r²
This representation of the circle is called the standard form.
Proof
Let C(h, k) be the center of the circle and r be the radius. Let L(x, y) be a point on the circle; then CL = r. Draw perpendiculars CP and LQ from C and L to the x-axis, and draw CM perpendicular to LQ.
We can see that CM = PQ = OQ - OP = x - h
ML = LQ - MQ = LQ - CP = y - k
Δ CML is a right-angled triangle. So by the Pythagorean theorem,
CM² + ML² = CL²
That is, (x - h)² + (y - k)² = r²
Hence proved.
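As a quick numeric illustration of the proof (a sketch I added, not part of the original article): any point constructed at distance r from the center (h, k) satisfies the standard-form equation. The values below are arbitrary test choices.

```python
import math
import random

# Spot-check the derivation: points at distance r from the center (h, k)
# satisfy (x - h)^2 + (y - k)^2 = r^2.
h, k, r = 2.0, 5.0, 3.0
for _ in range(5):
    t = random.uniform(0.0, 2.0 * math.pi)          # random direction
    x, y = h + r * math.cos(t), k + r * math.sin(t) # point on the circle
    print(round((x - h) ** 2 + (y - k) ** 2, 6))    # 9.0 every time
```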
Let's consider a few examples.
Example 1: -
Write the equation of the circle in the standard form, whose center is (2, 5) and radius is 3.
Solution: -
Given center of the circle, (h, k) = (2, 5) and radius of the circle, r = 3.
We know that the standard form of a circle is (x - h)² + (y - k)² = r²
Substituting for h, k and r, we get
(x - 2)² + (y - 5)² = 3²
Simplifying we get,
(x - 2)² + (y - 5)² = 9
So the required equation is (x - 2)² + (y - 5)² = 9.
Example 2: -
Write the equation of the circle in the standard form whose center is (3, -2) and radius is 4.
Solution: -
Given center of the circle, (h, k) = (3, -2) and radius of the circle, r = 4.
We know that the standard form of a circle is (x - h)² + (y - k)² = r²
Substituting for h, k and r, we get
(x - 3)² + (y - (-2))² = 4²
Simplifying we get,
(x - 3)² + (y + 2)² = 16
So the required equation is (x - 3)² + (y + 2)² = 16.
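The pattern in Examples 1 and 2 is mechanical enough to script. Here is a small helper I added as an illustration (the function name and output format are my own) that builds the standard-form equation from a center and radius:

```python
def circle_standard_form(h, k, r):
    """Return the standard form of the circle with center (h, k) and
    radius r, as a string like '(x - 2)^2 + (y - 5)^2 = 9'."""
    def squared_term(var, val):
        if val == 0:
            return f"{var}^2"
        sign = "-" if val > 0 else "+"  # (x - h): a positive h prints as minus
        return f"({var} {sign} {abs(val)})^2"
    return f"{squared_term('x', h)} + {squared_term('y', k)} = {r ** 2}"

print(circle_standard_form(2, 5, 3))   # (x - 2)^2 + (y - 5)^2 = 9   (Example 1)
print(circle_standard_form(3, -2, 4))  # (x - 3)^2 + (y + 2)^2 = 16  (Example 2)
```

The two printed lines reproduce the answers worked out by hand above.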
Example 3: -
Identify the center and radius of the circle (x + 10)² + (y - 4)² = 25.
Solution: -
Given equation is (x + 10)² + (y - 4)² = 25.
We know that the standard form of a circle is (x - h)² + (y - k)² = r²
Comparing the given equation with the standard form, we get
h = -10, k = 4 and r² = 25. Taking square root on both sides of r² = 25, we get r = 5.
So the center of the circle is (-10, 4) and radius is 5.
Example 4: -
Identify the center and radius of the circle (x + 7)² + (y + 12)² = 36.
Solution: -
Given equation is (x + 7)² + (y + 12)² = 36
We know that the standard form of a circle is (x - h)² + (y - k)² = r²
Comparing the given equation with the standard form, we get
h = -7, k = -12 and r² = 36. Taking square root on both sides of r² = 36, we get r = 6
So the center of the circle is (-7, -12) and radius is 6.
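Going the other way, as in Examples 3 and 4, only needs the sign flip and a square root. A short sketch (my own helper; it assumes the equation is already written as (x + p)² + (y + q)² = s):

```python
import math

def center_and_radius(p, q, s):
    """For the circle (x + p)^2 + (y + q)^2 = s, return ((h, k), r).
    Comparing with (x - h)^2 + (y - k)^2 = r^2 gives h = -p, k = -q."""
    return (-p, -q), math.sqrt(s)

print(center_and_radius(10, -4, 25))  # ((-10, 4), 5.0)   -> Example 3
print(center_and_radius(7, 12, 36))   # ((-7, -12), 6.0)  -> Example 4
```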
### Special cases:
1. When the center is at origin and radius is r, the equation of the circle in standard form is given by x² + y² = r²
2. When the radius of a circle is 0, the equation of the circle in standard form becomes (x - h)² + (y - k)² = 0. This can happen only when the circle reduces to a point. So this circle is called a point circle.
Try yourself: -
1. Write the equation of the circle with center (2, 0) and radius 9.
2. Identify the center and radius of the circle (x + 7)² + (y + 16)² = 64.
https://turbomachinery.asmedigitalcollection.asme.org/article.aspx?articleid=1467919 | 0
Research Papers
# Mass/Heat Transfer in Rotating, Smooth, High-Aspect Ratio (4:1) Coolant Channels With Curved Walls
Author and Article Information
Eashwar Sethuraman, Sumanta Acharya
Turbine Innovation and Energy Research (TIER) Center, Mechanical Engineering Department, Louisiana State University, Baton Rouge, LA 70803
Dimitris E. Nikitopoulos¹
Turbine Innovation and Energy Research (TIER) Center, Mechanical Engineering Department, Louisiana State University, Baton Rouge, LA 70803
¹Corresponding author.
J. Turbomach 131(2), 021002 (Jan 22, 2009) (9 pages) doi:10.1115/1.2812327 History: Received July 25, 2006; Revised September 18, 2006; Published January 22, 2009
## Abstract
The paper presents an experimental study of heat/mass transfer coefficient in 4:1 aspect ratio smooth channels with nonuniform cross sections. Curved leading and trailing edges are studied for two curvatures of 9.06 m⁻¹ (0.23 in.⁻¹) and 15.11 m⁻¹ (0.384 in.⁻¹) and for two different curvature configurations. One configuration has curved walls with curvature corresponding to the blade profile (positive curvature on both leading and trailing walls) and the other configuration has leading and trailing walls that curve inward into the coolant passage (negative curvature on the leading surface and positive curvature on the trailing surface). A detailed study at Re=10,000 with rotation numbers in the range of 0–0.07 is undertaken for the two different curvature configurations. All experiments are done for a 90 deg passage orientation with respect to the plane of rotation. The experiments are conducted in a rotating two-pass coolant channel facility using the naphthalene sublimation technique. Only the radially outward flow is considered for the present study. The spanwise mass transfer distributions of fully developed regions of the channel walls are also presented. The mass transfer data from the curved wall channels are compared to those from a smooth 4:1 rectangular duct with similar flow parameters. The local mass transfer data are analyzed mainly for the fully developed region, and area-averaged results are presented to delineate the effect of the rotation number. Heat transfer enhancement especially in the leading wall is seen for the lower curvature channels, and there is a subsequent reduction in the higher curvature channel when compared to the 4:1 rectangular smooth channel. This indicates that an optimal channel wall curvature exists for which heat transfer is the highest.
## Figures
Figure 1
Blade profile and channel cross section (17)
Figure 2
Basic rotation effects
Figure 3
Test rig
Figure 4
Test section: (a) general layout and metering, (b) cross section
Figure 5
Comparison of Ref. 23 with present data
Figure 6
Comparison of fully developed area-averaged plots for 4:1 rectangular channels with those of Murata and Mochizuki (19)
Figure 7
Streamwise averaged curves of 4:1 rectangular channel for different rotation numbers
Figure 8
Fully developed area-averaged plots for (( cross sections at Re=10,000 and R0 varying from 0 to 0.051
Figure 9
Streamwise averaged plot for the different cross sections for (a) low and (b) high rotation numbers
Figure 10
Comparison between stationary ducts for )( cross-sectioned channels with 4:1 cross section
Figure 11
Fully developed area-averaged plots for )( channels at different R0
Figure 12
Comparison of mass∕heat transfer from )( cross section and 4:1 flat channel benchmarked at R0=0.03
Figure 13
Sherwood number distribution in the fully developed region of the trailing and the left-side walls of the )( 0.1 channel for Re=10,000 and R0=0.027
http://mymathforum.com/differential-equations/345892-help-differential-equation.html | My Math Forum Help with differential equation
Differential Equations Ordinary and Partial Differential Equations Math Forum
March 4th, 2019, 12:14 AM #1
Help with differential equation
Hi, I'm new to this forum. Found this forum searching for help with a Math question. I need some help with a differential equation that I can't solve. I hope you guys can help me.
The equation
$\displaystyle \frac{d^2F}{dX^2}=-\frac{2\ast\sigma_s}{E\ast\left[\sqrt{4.65152d^2-26.853\frac{M}{\sigma_s d}}-0.41955d\right]}$
With the following replacements
$\displaystyle M=\frac{P}{2}\ast x$
$\displaystyle a=4.65152d^2$
$\displaystyle b=\frac{26.853}{2}\frac{P}{\sigma_s\ast d}$
$\displaystyle c=2\frac{\sigma_s}{E}$
$\displaystyle e=0.41955d$
We can rewrite the equation as: $\displaystyle \frac{d^2F}{dx^2}=\frac{c}{e-\sqrt{a-bX}}$
Boundary conditions: $\displaystyle x=\frac{L}{2}$, $\displaystyle \frac{dF}{dX}=0$
Solving this:
$\displaystyle dF/dx=\frac{2c\ast\left(e\ast\ln{\left(\sqrt{a-bx}-e\right)}+\sqrt{a-bx}\right)}{b}+C_1$
$\displaystyle C_1=-\frac{2c\ast\left(e\ast\ln{\left(\sqrt{a-b\frac{L}{2}}-e\right)}+\sqrt{a-b\frac{L}{2}}\right)}{b}$
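For what it's worth, here is one way to obtain that first integral (my sketch, using the substitution $u=\sqrt{a-bx}$, so that $x=(a-u^2)/b$ and $dx=-\frac{2u}{b}\,du$):

```latex
\int \frac{c}{e-\sqrt{a-bx}}\,dx
  = \int \frac{c}{e-u}\left(-\frac{2u}{b}\right)du
  = \frac{2c}{b}\int \frac{u}{u-e}\,du
  = \frac{2c}{b}\int \left(1 + \frac{e}{u-e}\right)du
  = \frac{2c}{b}\left[\sqrt{a-bx} + e\ln\!\left(\sqrt{a-bx}-e\right)\right] + C_1
```

The only operations are the substitution and the polynomial division $u/(u-e) = 1 + e/(u-e)$.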
Next we say the boundary conditions are:
$\displaystyle x=x_0$
$\displaystyle F=F_0=\frac{P}{d^4\ast E}\ast(\frac{4}{\pi}L^2\ast\ x_0-\frac{16}{3\pi}\ast\ x_0^3)$
with $\displaystyle x_0=\pi/16\ast\sigma_s/P\ast d^3$
This is the point where I don't know how to solve it anymore. I know what the solution is, but I want to know how they got there.
$\displaystyle F=2\frac{c}{b}\left\{-\frac{2}{3}\frac{\left(a-bX\right)^\frac{3}{2}}{b}-\frac{e}{b}\left[ln\left(\sqrt{a-bX}-e\right)\left(a-bX-e^2\right)+\frac{3}{2}e^2-e\sqrt{a-bX}-\frac{a-bX}{2}\right]\ \right\}+C_1X+C_2$
$\displaystyle C_1=-\frac{2c}{b}\left[\sqrt{a-b\frac{L}{2}}+e*ln\left(\sqrt{a-b\frac{L}{2}}-e\right)\right]$
$\displaystyle C_2=F_0-\frac{2c}{b}\left\{-\frac{2}{3}\frac{\left(a-bX_0\right)^\frac{3}{2}}{b}-\frac{e}{b}\left[\ln\left(\sqrt{a-bX_0}-e\right)\left(a-bX_0-e^2\right)+\frac{3}{2}e^2-e\sqrt{a-bX_0}-\frac{a-bX_0}{2}\right]\right\}-C_1X_0$
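Not an answer to how the antiderivative was found, but a quick way to check that the posted closed form is consistent: differentiate F numerically twice and compare against c/(e − √(a − bx)). The constants below are arbitrary test values I chose so that √(a − bx) > e and the logarithm is defined; they are not values from the original problem.

```python
import math

a, b, c, e = 10.0, 1.0, 1.0, 1.0  # arbitrary test values, sqrt(a - b*x) > e

def F(x):
    """Closed-form solution from the thread; integration constants are
    dropped since they vanish after differentiating twice."""
    u = math.sqrt(a - b * x)
    return (2 * c / b) * (-(2.0 / 3.0) * u ** 3 / b
                          - (e / b) * (math.log(u - e) * (a - b * x - e ** 2)
                                       + 1.5 * e ** 2 - e * u - (a - b * x) / 2))

def rhs(x):
    """Right-hand side of the ODE, c / (e - sqrt(a - b*x))."""
    return c / (e - math.sqrt(a - b * x))

x0, h = 2.0, 1e-4
second_diff = (F(x0 + h) - 2 * F(x0) + F(x0 - h)) / h ** 2  # central difference
print(second_diff, rhs(x0))  # both approximately -0.5469
```

The agreement of the two printed numbers confirms (numerically, at one point) that the posted F satisfies the rewritten equation d²F/dx² = c/(e − √(a − bX)).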
http://www.ibiblio.org/pub/Linux/docs/HOWTO/other-formats/html_single/Linuxdoc-Reference.html | # Linuxdoc Reference
## Uwe Böhme, <[email protected]>
v1.1, 30 January 2000
This article is intended to be a reference for the SGML document type definition linuxdoc, which is coming along with the SGML text formatting system version 1.0. It should also be applicable to future versions which may be found at My Homepage.
## 1.1 Legal stuff
Copyright © 1997-2000 by Uwe Böhme. This document may be distributed under the terms set forth in the Linux Documentation Project License at LDP. Please contact the authors if you are unable to get the license. This is free documentation. It is distributed in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose.
This document is not part of ldp (even if I took their form of license). I'm not yet playing in that league.
## 1.2 Genesis
This document was born trying to learn more about writing texts on my linux system. The one system looking like suitable to my needs was sgml-tools SGML-Tools Organsation an the linuxdoc dtd.
In [SGML-Tools User's Guide 1.0 ($Revision: 1.1.1.1$)] (see section Reference) the overall structure is described nice and easy. Also [Quick SGML Example, v1.0] (see section Reference) was helpful, but:
A lot of features are not mentioned.
On the way to learn more about it, I met [The qwertz Document Type Definition] (see section Reference). It's as detailed as hoped, but it's not made for the linuxdoc dtd (even if linuxdoc is based on qwertz).
I tried a new approach: Look at the dtd
dtd = document type definition
file itself, and try to understand it.
As time went by I noticed that I also forgot about some stuff, or - at least - didn't point it out strong enough. This will change within the next revision.
Any feedback you might have is welcome (especially help with English spelling or grammar) by e-mail at Uwe Böhme.
## 2.Introduction
The principle of any sgml'ed document (linuxdoc, docbook, html) is more or less the same:
Don't write how it should look like, but write what it is.
This is a different approach than the standard "wysiwyg"
What you see is what you (should) get (if you are a very lucky one and your computer wins the war against buggy software)
one
You might want to call it wysiwym, i.e. "What you see is what you mean"
. You do not tell the program that this line should be in a bigger font, to look like a headline. What you do is telling that this line is a headline. You do not try to make your document look like a report, but you tag it to be a report. So you tag the text with the appropriate <tag>.
The big advantages of this approach are:
1. You do not need to mess around with font settings, line gaps or anything directly connected to the layout.
2. You describe your document in a more abstract way, so it is more reusable and can be mapped to different media types.
If you have ever tried to reuse a document written in a specialized wysiwyg layout for html, then you know what I'm talking about.
In addition, in all sgml-style documents you will find named symbols. This is a concept to expand the charset of the document and to avoid inconsistencies in the parser's decision about how to interpret or map certain special characters.
How should the parser know whether a < character starts a tag or should be printed literally? This is solved by the named character lt. If you write &lt; it will result in < in your text. For a list of the named symbols see Named Symbols.
Hint for the new user
It might be a good idea to download this document not only as a dvi or ps document, but also to download the sgml source. This gives you the chance to look into the source whenever you find something within this article which might fit your needs.
## 3.A minimalistic document
In this section you'll find what you need for a minimalistic linuxdoc dtd conform document. It's intended to give a first impression. Skip this section if you already know the principles.
## 3.1 Step By Step
The steps you have to do to create a nice linuxdoc document and map it to the form you need are:
• Take a plain text editor of your choice.
• Create a file and name it (or later save it as) e.g. start.sgml.
• Type the document
• Save the file and close your editor.
• Run the checker by typing sgmlcheck start.sgml.
• If you get errors reported, reopen your document in your editor and try to correct them
The error messages of sgmlcheck will give you a hint about the type of error and also the line and column where it occurred.
. Run the checker again until no more errors occur.
• Now you have to decide what your document is for. Take the appropriate parser/mapper combination and translate your document. To find the mappers available in the SGML-Tools see table SGML-Tools mappers for sgml documents.
| type | to produce |
|------|------------|
| sgml2html start.sgml | Hypertext markup language for web browsers |
| sgml2lyx start.sgml | Lyx or KLyx wysiwym text format |
| sgml2info start.sgml | Info page for UN*X info |
| sgml2latex start.sgml | DVI output |
| sgml2latex --output=tex start.sgml | pure TeX output |
| sgml2latex --output=ps start.sgml | PostScript output |
| sgml2rtf start.sgml | rich text format |
| sgml2txt start.sgml | plain text |
## 3.2 A Startup Document
We start with a simple document (the numbers and colon in the beginning of the line are for explanation, don't type it!):
1: <!doctype linuxdoc system>
2: <notes>
3: <title>A Small Linuxdoc Example</title>
4: <p>Hello <em>world</em>.</p>
5: <p><bf>Here</bf> we are.</p>
6: </notes>
Now we take a look at the single lines:
1. A linuxdoc document has to start, like all SGML conform documents, with the preamble. If you like you can take it as a piece of necessary magic, or you can try to find more information about SGML. The preamble indicates to the SGML parser which dtd (document type definition) it should use for checking the syntax of the document.
2. Open the document class: You have to decide which type of document you want to write. See section Document Classes for a detailed description of the document classes. The necessary header information, which depends on the document class, is also explained there. In our case we place a <notes> tag forming a note, which indicates a simple unstructured document.
3. Even if optional, it's a good idea to give a title to the document. That's done with the <title> tag.
4. A paragraph marked by the <p> tag, containing the word world which is inline emphasized by the <em> tag.
5. Another completely tagged paragraph, with another word inline boldfaced by the <bf> tag.
6. Here we close the open document class tag.
The same example may be written a little bit shorter, by leaving out tags which are placed automatically by the parser, and by using shortened tags:
1: <!doctype linuxdoc system>
2: <notes>
3: <title>A Small Linuxdoc Example
4: <p>Hello <em/world/.
5:
6: <bf/Here/ we are.
7: </notes>
Now we look at the single lines again:
1. The preamble.
2. The document class (also unchanged).
3. The title. It's not closed, because the p tag in the next line implicitly closes it.
4. The paragraph implicitly closes the title. The emphasize tag is noted in short form. You can use the short notation only if your tagged text doesn't contain a literal /. The paragraph is not explicitly closed in this line.
5. The empty line here is the reason why you don't need to close the previous paragraph and don't need to open the next one. An empty line is interpreted as the end of the current paragraph and the start of a new one.
6. Another paragraph (not opened explicitly), with another short inline tag.
7. Closing the open document class tag, which implicitly also closes the still open paragraph.
Maybe now it's a little bit clearer how you have to work with tags.
## 4.Document Classes
<!element linuxdoc o o
(sect | chapt | article | report |
book | letter | telefax | slides | notes | manpage ) >
This describes the overall class of the document, so naturally it has (leaving aside the doctype definition) to be the first tag enclosing your whole document. Some of the tags, namely sect and chapt (see section Sectioning Tags), don't make any sense standalone, only as part of a more completely classed document, so we'll describe them later as part of the other document classes. Decide first which of the document classes mentioned above best fits the type of document you want to write.
To find a detailed description of the document classes see table Document classes.
• Article Tag
• Report Tag
• Book Tag
• Letter Tag
• Telefax Tag
• Slides Tag
• Notes Tag
• Manpage Tag
To me the article class is the most important one. That's the reason why it's described first and in most detail.
## 4.1 Article Tag
<!element article - -
(titlepag, header?,
toc?, lof?, lot?, p*, sect*,
(appendix, sect+)?, biblio?) +(footnote)>
<!attlist article
opts cdata "null">
You can see that the article needs some tags included. They will be explained in sequence.
The options attribute (opts) takes a comma-separated list of the different style (LaTeX .sty) sheets to include within the document.
### Titlepage Tag
<!element titlepag o o (title, author, date?, abstract?)>
The Titlepage Tag (titlepag) is implicitly placed as soon as you start your document class. You don't need to write it explicitly. Anyway you have to note its mandatory tags. Its purpose is to describe the layout and elements of the title pages.
### Title Tag
<!element title - o (%inline, subtitle?) +(newline)>
Each document class which owns a titlepage of course needs a title, which is noted down with a <title> tag. You don't need to close that one. A title may contain a subtitle started by the <subtitle> tag.
If you look at the headerpage of this document you'll find it to be mapped from the tags:
<title>Linuxdoc Reference
<subtitle>An introduction to the linuxdoc dtd
### Author Tag
<!element author - o (name, thanks?, inst?,
(and, name, thanks?, inst?)*)>
Usually you place the (your) name here. People should know who wrote the document, so you place an <author> tag. If you don't note the name tag, it is implicitly placed. The author tag also has optional items which can be tagged within it.
If you want to say thanks to anyone (it might be somebody who provided useful information) you place it within the <thanks> tag. Next, if your writing is done in your position as a staff member of an institution, place the institution within the <inst> tag.
The <and> tag starts the whole story again, as if a second author tag had been started. Clearly this one is for coauthors.
### Date Tag
If you want to mark your document with a date, you can do that with the <date> tag.
It's not checked whether you really place a valid date here, but don't abuse it.
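A small sketch of how the title page tags fit together (the title, name and date text are invented; as with the title tag above, the closing tags can be left out):

```sgml
<title>A Dated Document
<author>Jane Doe
<date>29 April 1998
```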
### Abstract Tag
This tag is intended for an abstract description of your document. Don't mix the <abstract> tag up with an introduction, which is more likely to be placed inside the first section of your document (see section Sectioning).
### Header Tag
<!element header - - (lhead, rhead) >
<!element lhead - o (%inline)>
<!element rhead - o (%inline)>
A <header> tag specifies what should be printed at the top of each page. It consists of a left heading (i.e. <lhead>) and a right heading (i.e. <rhead>). Both elements are required if a heading is used at all, but either may be left empty, so the effect of having only a left or right heading can be achieved easily enough.
As we will see, an initial header can be given after the title page. Afterwards, a new header can be given for each new chapter or section. The header printed on a page is the one which is in effect at the end of the current page, so the header will be that of the last section starting on the page.
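A small sketch of a header, placed after the title page material (the heading texts here are invented; per the dtd above, the lhead and rhead end tags are optional):

```sgml
<header>
<lhead>My Document
<rhead>Chapter Overview
</header>
```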
### Table Of Contents Tag
If you place the <toc> tag, a table of contents will be generated by collecting the section headings and adding references.
In a hyperref document these might be hyperrefs; in a LaTeX document you will come to see the page numbers.
Only the sections major to sect3 will be included.
### List Of Figures Tag
If you place the <lof> tag, a list of figures will be generated by collecting the captions of the figures and adding references.
### List Of Tables Tag
If you place the <lot> tag, a list of tables will be generated by collecting the captions of the tables and adding references.
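As a sketch of where these three tags go, following the element order in the article dtd above: directly after the title page tags and before the body (title, author and section text invented):

```sgml
<!doctype linuxdoc system>
<article>
<title>A Structured Article
<author>Jane Doe
<toc>
<lof>
<lot>
<sect>Introduction
<p>The body starts here.
</article>
```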
### Body
Here you place various sections according to section Sectioning. There is no body tag. The body starts with the first chapter, section or paragraph.
### Appendix Tag
At the end of the article you can place the <appendix> tag
Really you shouldn't think about people (e.g. M.D.s) knifing your belly here.
, which starts an area of appended sections. The appendix tag implies a different section numbering type for the following section tags.
### Bibliography Tag
It's intended to gather all the <cite>s and <ncite>s you used within your document. The <biblio> tag will be replaced by a bibliography according to the mapping type of the document, maybe by hyperrefs, maybe by section numbers or anything which might be useful.
Until now I've not been able to create a .bbl file, so I wasn't able to verify this.
### Footnote Tag
A footnote may be placed anywhere within the article. Exactly the spot in your document where you place the <footnote> tag is the one where the reference to the tagged text will be rendered. It should be used for additional information which is not necessary for understanding the primary purpose of your document but might be useful, interesting, or funny.
Whereas the last one is not always true, even if you try.
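A sketch of an inline footnote (the sentence itself is invented):

```sgml
<p>Linuxdoc is based on qwertz<footnote>See the qwertz dtd
for the common ancestry.</footnote>, but the two are not
identical.
```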
## 4.2 Report Tag
<!element report - -
(titlepag, header?, toc?, lof?, lot?, p*,
chapt*, (appendix, chapt+)?, biblio?) +(footnote)>
The report is a document class with a chapter-oriented approach. So within a document classified by a <report> tag the top level is grouped by the <chapt> tag (see Sectioning). The rest of the structure is identical to the article class Article Tag.
## 4.3 Book Tag
<!element book - -
(titlepag, header?, toc?, lof?, lot?, p*, chapt*,
(appendix, chapt+)?, biblio?) +(footnote) >
You will notice that the book element is identical to the report Report Tag. So anything valid there is also valid if you classify your document with a <book> tag.
## 4.4 Letter Tag
<!entity % addr "(address?, email?, phone?, fax?)" >
<!element letter - -
(from, %addr, to, %addr, cc?, subject?, sref?, rref?,
rdate?, opening, p+, closing, encl?, ps?)>
The purpose of the letter document class should also be quite self-explanatory. Place a <letter> tag if you want to write one.
The letter's tags are described in table Tags in a letter
| tag | mandatory | what's it |
|-----|-----------|-----------|
| from | yes | sender |
| address | no | sender's address |
| email | no | sender's email |
| phone | no | sender's phone |
| fax | no | sender's fax |
| to | yes | receiver |
| address | no | receiver's address |
| email | no | receiver's email |
| phone | no | receiver's phone |
| fax | no | receiver's fax |
| cc | no | carbon copy |
| subject | no | letter's subject |
| sref | no | sender's reference |
| rref | no | receiver's reference |
| rdate | no | received date |
| opening | yes | opening |
| p | yes | paragraphs, see Paragraphs |
| closing | yes | closing |
| encl | no | enclosure |
| ps | no | post scriptum |
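An untested sketch of a minimal letter following the element order in the dtd above (all names, addresses and texts invented):

```sgml
<!doctype linuxdoc system>
<letter>
<from>Jane Doe
<address>Some Street 1, 12345 Sometown</address>
<to>John Smith
<opening>Dear John,
<p>Thank you for your letter.
<closing>Yours sincerely, Jane
</letter>
```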
## 4.5 Telefax Tag
<!element telefax - -
(from, %addr, to, address, email?,
phone?, fax, cc?, subject?,
opening, p+, closing, ps?)>
Overall the structure is the same as for the letter class. The only difference is that with the <telefax> tag the receiver's <fax> tag becomes mandatory.
It should be obvious why.
## 4.6 Slides Tag
<!element slides - - (slide*) >
The slides class is intended for overhead slides and transparencies. So the structure of a document classified by a <slides> tag is a very simple one. It contains single slide(s) started by a <slide> tag. Nothing else. If not explicitly written, the first slide is started implicitly.
### Slide Tag
<!element slide - o (title?, p+) >
A <slide> tag is only allowed within the slides document class. A slide may contain:
A title (see section The Title Tag) and one or more paragraphs (see section Paragraphs). That's all.
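A sketch of a two-slide document (the slide titles and texts are invented; per the dtd above, the slide end tag is optional):

```sgml
<!doctype linuxdoc system>
<slides>
<slide>
<title>First Slide
<p>One thought per slide.
<slide>
<title>Second Slide
<p>Closing remarks.
</slides>
```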
## 4.7 Note Tag
<!element notes - - (title?, p+) >
Intended as a class for personal notes, the structure is even more simplified than the slides document class (see The Slide Tag). After classifying a document with the <notes> tag, only a title (see section The Title Tag) and one or more paragraphs (see section Paragraphs) are allowed.
## 4.8 Manual Page Tag
<!element manpage - - (sect1*)
-(sect2 | f | %mathpar | figure | tabular |
table | %xref | %thrm )>
This document class is intended for writing manual pages fitting the needs of the man program. In a document classified by a <manpage> tag the top-level section tag is the sect1 tag (see section Sectioning), for easy pasting of manual pages into an article or book document class. The exception to the normal sectioning here is that only one subsection level is allowed (sect2).
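A sketch of a manual page skeleton (the program frob and its description are entirely invented):

```sgml
<!doctype linuxdoc system>
<manpage>
<sect1>NAME
<p>frob - frobnicate a file
<sect1>SYNOPSIS
<p><tt/frob file/
<sect1>DESCRIPTION
<p>Reads the file and frobnicates it.
</manpage>
```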
## 5.Inlines
<!entity % inline
" (#pcdata | f| x| %emph; |sq| %xref | %index | file )* " >
Inlines may occur anywhere within the text, and don't have any influence on the text flow or logical structure of the document.
#pcdata
Parsed character data is just normally written text within the flow, which may contain other inlines.
f
Inline mathematical formulas according to the maths.dtd. See The Formula Tag.
x
The external tag, which bypasses the parser. Tagged data goes directly into the mapped file. See chapter The External Tag for detailed information.
%emph;
Emphases of the text. See chapter Emphasizes.
sq
Short quotes within the text flow. See chapter The Short Quote Tag.
%xref
Cross references within the text or external references. See chapter Labels and References.
%index
Again I can't explain this one. If you can, please mail.
file
Again I can't explain this one (I only could guess about picture files in eps). If you can, please mail.
## 6.Sectioning
<!element chapt - o (%sect, sect*) +(footnote)>
<!element sect - o (%sect, sect1*) +(footnote)>
<!element sect1 - o (%sect, sect2*)>
<!element sect2 - o (%sect, sect3*)>
<!element sect3 - o (%sect, sect4*)>
<!element sect4 - o (%sect)>
The sectioning
Also the chapt tag is a sectioning tag.
is done by the corresponding elements, forming the section tree. They bring the various paragraphs within our document into a nice tree. The top-level tag and the allowed depth vary with the document class (see section The Document Class).
The normal hierarchy is
chapt
sect
sect1
sect2
sect3
sect4
Just take a book, look at the table of contents and you will see.
Each of the sectioning tags has nearly the same syntax. All of them own a heading. The heading tag is placed implicitly if you don't note it down. Also, each of the sectioning tags may contain a header tag, changing the current document header (see section The Header Tag).
Within them you may place subordinate sections and paragraphs (see Paragraphs).
Some of the sectioning tags may only appear in special document classes ( Document Classes).
Hint:
It's wise to place a label tag after the text of the section tag, even if you don't want to refer to the section yet (see Labels and references). Later, when your document grows, you might want to.
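A sketch of the sectioning hierarchy, with a label placed as the hint suggests (the headings, texts and the label id are invented):

```sgml
<sect>Overview<label id="sec-overview">
<p>A top level section.
<sect1>Details
<p>A subsection below it.
<sect2>Fine Print
<p>One level deeper still.
```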
## 7.Paragraphs
<!entity % sectpar
" %par; | figure | tabular | table | %mathpar; |
%thrm; | %litprog; ">
<!entity % par
" %list; | comment | lq | quote | tscreen " >
<!entity % litprog " code | verb " >
Each of the tags described here forms a paragraph.
For obvious reasons a paragraph normally
The behaviour of the exceptions figure and tabular is explained there.
starts and ends with a new line.
How else would you notice it's a paragraph?
There are some tags which always form a paragraph, and one way to form a paragraph implicitly. There are various types of paragraphs, because not every type of paragraph is allowed to appear in every document class in every place.
The different types of paragraphs are explained in the next sections. For more details about %litprog; see Literate Programming.
## 7.1 Normal Paragraph
Normal paragraphs can be formed in two ways:
### Paragraph tag
The <p> tag starts a new paragraph. This tag is mandatory if you want to finish a section header without explicitly closing the sect tag. In this case the <p> tag closes the section header automatically.
### Empty Newline
An empty line between two paragraphs implicitly starts a new paragraph. Take care within descriptive lists: there an empty <tag> tag will not be paragraphed by an empty line.
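Both ways of starting a paragraph in one sketch (the heading and sentences are invented):

```sgml
<sect>Paragraph Demo
<p>This paragraph closes the section header explicitly
with a p tag.

This second paragraph was started implicitly by the
empty line above.
```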
## 7.2 List-like Paragraphs
<!entity % list
" list | itemize | enum | descrip " >
These four tags indicate the start of a list-like paragraph. Within each of the lists the single items are separated by an item tag.
<!element item o o ((%inline; | %sectpar;)*, p*) >
As you can see, an item may again contain paragraphs (and therefore may also contain other lists, even of a different type).
### List Tag
<!element list - - (item+)>
The list tag will be mapped to a naked list without bullets, numbers or anything else.
To see it, I place a small example:
<list>
<item>A point
<item>Another one
<item>Last
</list>
Will look (depending on the mapping) like:
• A point
• Another one
• Last
### Itemize Tag
<!element itemize - - (item+)>
The itemize tag will be mapped to a list with bullets, which is usually used for lists where the order of the items is not important.
A small example:
<itemize>
<item>A point
<item>Another one
<item>Last
</itemize>
Will look (depending on the mapping) like:
• A point
• Another one
• Last
### Enum Tag
<!element enum - - (item+)>
The enum tag will be mapped to a list with numbers.
A small example:
<enum>
<item>A point
<item>Another one
<item>Last
</enum>
Will look (depending on the mapping) like:
1. A point
2. Another one
3. Last
### Descrip Tag
<!element descrip - - (tag?, p+)+ >
The descrip tag will be mapped to a descriptive list. The concept here is a little bit different from the other types of lists mentioned above.
Here you place a tag (this time the tag's name is literally tag) which is described later on.
A small example:
<descrip>
<tag/sgml/structured general markup language.
<tag/html - hypertext markup language/
A sgml implementation.
It contains some concepts about linking information together in a very
convenient way.
This made it to be so successful and to become the standard for documents
published by the internet.
<tag/internet/A worldwide connected internet (internet here as a
technical term)
</descrip>
Will look (depending on the mapping) like:
sgml
structured general markup language.
html - hypertext markup language
A sgml implementation. It contains some concepts about linking information together in a very convenient way. This made it to be so successful and to become the standard for documents published by the internet.
internet
A worldwide connected internet (internet here as a technical term)
## 7.3 Figures and Tables
The <figure> and the <table> tags form very special paragraphs. They do not always stay within the normal text flow. Both tags can hold a loc (location) attribute which tells how to handle the flow of this special paragraph.
The value of the loc attribute is a string of up to four letters, where each letter declares a location at which the figure or table may appear, as described in table Table Locations.
| letter | meaning | description |
|--------|---------|-------------|
| h | here | At the same location as in the SGML file |
| t | top | At the top of a page |
| b | bottom | At the bottom of a page |
| p | page | On a separate page with only figures and tables |
The default value of the loc attribute is top.
### Table Tag
<!element table - - (tabular, caption?) >
As you can see, a table consists of the <table> tag itself, including a <tabular> tag and an optional <caption> tag.
The <tabular> tag may also be placed without a <table> tag, so it is described in detail in its own section (see Tabular Tag).
The caption is also used to place the entry for the list of tables if you stated one (see The List Of Tables Tag).
A short example will show how it's working together.
<table loc="ht">
<tabular ca="lcr">
Look|this|table@
Isn't|it|nice@
1.234|mixed|columns
</tabular>
<caption>A sample table
</table>
| Look | this | table |
| Isn't | it | nice |
| 1.234 | mixed | columns |
The caption "A sample table" would be the name in the list of tables.
### Figure Tag
<!element figure - - ((eps | ph ), img*, caption?)>
The usage of the <figure> tag is equivalent to that of the <table> tag. Instead of the <tabular> tag you place either an <eps> or a <ph> tag.
### Encapsulated Postscript™ Tag
<!attlist eps
file cdata #required
height cdata "5cm"
angle cdata "0">
The <eps> tag is intended for including an external file in encapsulated postscript™ format into the document.
The attributes of the <eps> tag are:
file
The file attribute takes the file name of an encapsulated postscript™ file ending with a .ps suffix. The mandatory .ps suffix must not be written.
height
The height of the space the file is zoomed to. If you don't specify it, it defaults to 5cm. Take care that there's no space between the number and the length unit (in, cm).
angle
The angle is given in normal degrees (0-360), and as the number increases the file is rotated counterclockwise.
An example:
<figure loc="here">
<eps file="logo" height="4cm" angle="15">
<img src="logo.gif">
<caption>A included encapsulated postscript™
</figure>
The img tag is ignored by the LaTeX mapping and useful for html, because most browsers don't know about eps.
The caption here would go into the list of figures as described in section The List Of Figures Tag.
### Placeholder Tag
<!attlist ph
vspace cdata #required>
This tag doesn't place anything but keeps a clean space free for good old manual picture pasting. The space kept free is determined by the vspace attribute. Caveat: The numerical argument for the vspace attribute needs a unit directly behind the number. Don't leave a space there (same as for the height attribute in Encapsulated Postscript™ Tag).
<figure loc="ht">
<ph vspace="5cm">
<caption>A blank space.
</figure>
Results to:
At this point you might want to look for your scissors and the glue.
## 7.4 Tabular Tag
<!element tabular - -
(hline?, %tabrow, (rowsep, hline?, %tabrow)*, caption?) >
The <tabular> tag is interpreted as a paragraph of its own if it is written standalone. Together with a <table> tag it becomes part of the paragraph of the <table> tag (see Table tag).
Within the tabular tag you have rows and columns which separate the text. You have to have at least one column and one row.
It wouldn't be very useful otherwise.
The <tabular> tag has a mandatory ca attribute for column alignment. The column alignment holds a single character for each column, in order from left to right. The characters you may place per column are described in table Column alignments
| char | alignment |
|------|-----------|
| l | left |
| c | centered |
| r | right |
In theory you should be able to place a | into the ca attribute to draw a vertical line separating two columns. The problem: It doesn't work. The parser accepts it nicely, but the LaTeX output will map | to {$|$}, which is of course the setting for four columns with an invalid column alignment for all four. I'll try to figure out what to do about it.
The columns within the <tabular> tag are separated by a column separator, the <colsep> tag. The character | is translated to <colsep>, so you can also place that one instead
Less typing, more fun.
.
What's valid for columns is also valid for rows. You separate them by a row separator, the <rowsep> tag. The character @ is translated to <rowsep>.
Optionally you can place a horizontal line with the <hline> tag. Take care with that one: The SGML tools will parse it nicely whether you place it in front of the row you want under the line, or behind the end of the row you want over it. But the only place to write it without causing the parser to shout "error" is directly and without space or newline behind the row separator.
<tabular ca="lcr">
Look|this|table@<hline>
Isn't|it|nice@
1.234|mixed|columns@
</tabular>
Results in table Sample table for tabular tag
| Look | this | table |
| Isn't | it | nice |
| 1.234 | mixed | columns |
Attention:
In the LaTeX mapping everything works nicely if you place a tabular tag without a table tag; only in the other mappings (e.g. html) will it be messed up.
## 7.5 Mathematical Paragraph
<!entity % mathpar " dm | eq " >
A mathematical paragraph consists either of a displayed formula, tagged by <dm>
No, sorry, not for Deutschmark! ;-)
, or an equation, tagged by <eq>. They work very much the same.
Both of these tags contain a mathematical formula. See Mathematical Formulas for the tags valid here.
Note:
Because neither Netscape nor Microsoft has seen any need to add mathematical mappings to their browsers (like demanded and defined by w3c), there is no nice way of mapping, or at least displaying the math stuff in html. So if you view the online version, feel free to wonder what nonsense this man is telling here. Might be you should take a glance at the postscript version.
### Displayed Formula Tag
This tag displays a mathematical formula as a paragraph. The formula is mapped centered as a single line
No guarantee for that. You know: Mapping is a matter of taste.
.
<dm>(a+b)<sup/2/=a<sup/2/+2ab+b<sup/2/</dm>
Is mapped to: (a+b)^2 = a^2 + 2ab + b^2
### Equation Tag
<eq>(a+b)<sup/2/=a<sup/2/+2ab+b<sup/2/</eq>
Is mapped to: (a+b)^2 = a^2 + 2ab + b^2
## 7.6 Theorem Paragraph
<!entity % thrm
" def | prop | lemma | coroll | proof | theorem " >
<!element def - - (thtag?, p+) >
<!element prop - - (thtag?, p+) >
<!element lemma - - (thtag?, p+) >
<!element coroll - - (thtag?, p+) >
<!element proof - - (p+) >
<!element theorem - - (thtag?, p+) >
As you can see, the different types of theorem paragraphs are nearly identical. The only exception which is a little bit different is the proof, which doesn't own a thtag. For all the others the thtag gives the tag of the theorem paragraph.
Just try to use the one which fits the meaning of what you are typing.
<thrm>
<thtag>Alexander's thrm</thtag>
Let <f><fi/G/</f> be a set of non-trivially achievable subgoals
and μ an order on <f><fi/G/</f>. μ is abstractly
indicative if and only if it is a linearization of
<f><lim><op>μ</op><ll><fi/G/</ll><ul>*</ul></lim></f>.
</thrm>
The thrm is replaced by the adequate tag.
Maybe somebody who knows about mathematics would be shocked about my abuse of the types, but I'm lazy so I simply copied the examples:
Definition (def): Alexander's Definition
Let G be a set of nontrivially achievable subgoals and μ an order on G. μ is abstractly indicative if and only if it is a linearization of μ_G^*.
Proposition (prop): Alexander's Proposition
Let G be a set of nontrivially achievable subgoals and μ an order on G. μ is abstractly indicative if and only if it is a linearization of μ_G^*.
Lemma (lemma): Alexander's Lemma
Let G be a set of nontrivially achievable subgoals and μ an order on G. μ is abstractly indicative if and only if it is a linearization of μ_G^*.
Corollary (coroll): Alexander's Corollary
Let G be a set of nontrivially achievable subgoals and μ an order on G. μ is abstractly indicative if and only if it is a linearization of μ_G^*.
Alexander's Theorem
Let G be a set of nontrivially achievable subgoals and μ an order on G. μ is abstractly indicative if and only if it is a linearization of μ_G^*.
The proof is just the same without the thtag:
Let G be a set of nontrivially achievable subgoals and μ an order on G. μ is abstractly indicative if and only if it is a linearization of μ_G^*.
## 7.7 Code and verbatim Paragraphs
Both tags form a paragraph and have very similar behaviour. Inside these tags most special characters don't need their named form as in section Named Symbols. The exceptions are:
1. &etago; -> </ -> end of tag open
Maybe later the list will grow.
In contrast to the normal paragraph mapping, white space and newlines will be mapped literally (as you write them in your source).
Also (with respect to manual layout) the font used for the mapping will be a non-proportional one.
See the difference between IIWW and `IIWW`.
Note:
Again, I'm neither a native speaker nor do I love mathematics a lot. So I just placed some nonsense, which might cause headaches and grey hair for people who want to use this document for learning to formulate mathematical or physical theories.
Feel free to send better examples.
### Code Tag
<!element code - - rcdata>
Use the code tag if you want to write source code examples within your text.
A code sample
<code>
#include <stdio.h>
int main() {
printf("Hello world");
return 0;
}
</code>
### Verbatim Tag
<!element verb - - rcdata>
Use the verbatim tag for anything other than source code (use the Code Tag for that) which needs the good old whitespace padding, like terminal hardcopies, ASCII graphics etc.
A verb sample
<verb>
/////////
| * * |
| | |
| <---> |
\_____/
</verb>
## 8.Inline Tags
Here the abstract inlines are broken down until only true and usable tags will remain. Let's recall:
<!entity % inline
" (#pcdata | f| x| %emph; |sq| %xref | %index | file )* " >
Inlines don't have any influence on paragraphing, sectioning or document classing. They just modify text within its normal flow.
## 8.1 Emphasizes
<!entity % emph
" em|it|bf|sf|sl|tt|cparam " >
The emphasizes gather the tags for emphasizing inline text.
The different types of emphasizes are:
em -> The Emphasize Tag
I hate to be redundant but I have to say: The emphasize tag you place for emphasized text. Normally it's mapped to italic letters. So if you write <em/an emphasized text/ it will be mapped to an emphasized text.
it -> The Italic Tag
The italic tag you place for a cursive mapping. If you write <it/an italic text/ it will be mapped to an italic text.
bf -> The Boldface Tag
The boldface tag you place for a bold mapping. If you write <bf/a bold text/ it will be mapped to a bold text.
sf -> The Swissfont Tag
I know that Tom Gordon from GMD says this is the sans serif tag. My interpretation of the sf is swissfont, which is easier for me to remember. This maps the inlined text to a font out of the helvetica family. So <sf/a swissfont text/ will be mapped to a swissfont text.
sl -> The Slanted Tag
I think I'll skip the explanation. <sl/a slanted text/ will be mapped to a slanted text.
tt -> The Terminaltype Tag
Text tagged with terminaltype will be placed inline, just like all the other text within a paragraph. It will not be included in the source output if you are working as described in section Literate Programming, even if it looks like typed code. <tt/a terminal typed text/ will be mapped to a terminal typed text.
## 8.2 Short-quote Tag
Normally this one could be viewed on the same level as one of the emphasize tags, but the definition of the linuxdoc dtd places it on the same level as the emphasizes, and so do I.
The shortquote tag is an inline quotation, not forming a paragraph of its own. The text <sq/a short quote/ is mapped to "a short quote".
## 8.3 Formula Tag
The formula tag allows us to note down a mathematical formula within the normal text, not appearing on a line of its own. So the text <f>x=y<sup>2</sup></f> will be displayed as x = y^2. See Mathematical Formulas for the tags valid within the formula.
## 8.4 External Tag
The external tag passes the tagged data directly through the parser without modifying it, e.g. to LaTeX.
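A hedged sketch, assuming the LaTeX mapping is the target (the \bigskip command inside the external tag is plain LaTeX, not linuxdoc, and would simply be copied into the .tex output):

```sgml
<p>Some extra vertical space follows in the print version:
<x>\bigskip</x>
```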
## 9.Mathematical Formulas
They can appear within the tags listed in table Places of Mathematical Formulas.
tag  description        see
f    inline formula     The Formula Tag
dm   displayed formula  Mathematical Paragraph
eq   equation           Mathematical Paragraph
If you view this document mapped to html you will notice that html has no nice way of displaying mathematical formulas.
After a little hand parsing, the contents of a mathematical tag look like:
<!element xx - -
(((fr|lim|ar|root) |
(pr|in|sum) |
(#pcdata|mc|(tu|phr)) |
(rf|v|fi) |
(unl|ovl|sup|inf))*)>
The xx stands for f, dm or eq. All of them are the same.
Note:
Because neither Netscape nor Microsoft has seen any need to add mathematical mappings to their browsers (as demanded and defined by the w3c), there is no nice way of mapping, or at least displaying, the math stuff in html. So if you view the online version, feel free to wonder what nonsense this man is telling here. Maybe you should take a glance at the postscript version.
## 9.1 Fraction Tag
<!element fr - - (nu,de) >
<!element nu o o ((%fbutxt;)*) >
<!element de o o ((%fbutxt;)*) >
So what we see from it is that a fraction consists of a numerator and a denominator tag, each of which can again hold a mathematical formula.
I think an example will tell you more:
<dm><fr><nu/7/<de/13/</fr></dm>
results to:
7/13
In case we want to place 1/2 as the numerator, without simplifying, we'll type:
<dm><fr><nu><fr><nu/1/<de/2/</fr></nu><de/13/</fr></dm>
Which results to:
(1/2)/13
## 9.2 Product, Integral and Summation Tag
<!element pr - - (ll,ul,opd?) >
<!element in - - (ll,ul,opd?) >
<!element sum - - (ll,ul,opd?) >
Each of them has a lower limit (ll tag), an upper limit (ul tag) and an optional operand, each of which again may consist of a formula. The tags have the same syntax, as shown in table Tags with upper-, lower limit and operator.
name       rendered result
Product    y = prod_(i=1)^n x_i
Integral   y = int_a^b x^2
Summation  y = sum_(i=1)^n x_i
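Putting the declarations above to use, the source for the summation row might look roughly like this. This is a sketch reconstructed from the element declarations (ll, ul and opd have omissible end-tags), not a verified example from the original document:

```sgml
<dm>y=<sum><ll>i=1<ul>n<opd>x<inf/i/</sum></dm>
```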
## 9.3 Limited Tag
<!element lim - - (op,ll,ul,opd?) >
<!element op o o (%fcstxt;|rf|%fph;) -(tu) >
<!element ll o o ((%fbutxt;)*) >
<!element ul o o ((%fbutxt;)*) >
<!element opd - o ((%fbutxt;)*) >
You can use this one for operators with upper and lower limits other than products, sums or integrals. The operator, which for the other three types is predefined, is determined by the op tag, which again can contain a mathematical formula.
Example (rendered): B_(i=0)^n x_i
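For completeness, here is a sketch of what the source for such an expression might look like, derived from the element declarations above (op holds the operator symbol). Treat it as an illustration, not a verified example:

```sgml
<dm><lim><op/B/<ll>i=0<ul>n<opd>x<inf/i/</lim></dm>
```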
## 9.4 Array Tag
<!element ar - - (row, (arr, row)*) >
<!attlist ar
ca cdata #required >
<!element arr - o empty >
<!element arc - o empty >
<!entity arr "<arr>" >
<!entity arc "<arc>" >
Of course a reasonable mathematical document needs a way to describe arrays and matrices. The array (ar) is noted down equivalently to a tabular (see section The Tabular Tag). The differences in handling are:
• No <hline> tag.
• The ca attribute character | is not allowed.
• Columns are not separated by the colsep tag but by the arc tag (array column).
• Rows are not separated by the rowsep tag but by the arr tag (array row).
Again the characters | and @ are mapped to the adequate separator tag, so you really can note down an array the same way as a tabular.
<dm><ar ca="clcr">
a+b+c | uv <arc> x-y | 27 @
a+b | u+v | z | 134 <arr>
a | 3u+vw | xyz | 2,978
</ar></dm>
Is mapped to:
a+b+c   uv      x-y   27
a+b     u+v     z     134
a       3u+vw   xyz   2,978
## 9.5 Root Tag
<!element root - - ((%fbutxt;)*) >
<!attlist root
n cdata "">
The root is noted down by the root tag, which carries an n attribute holding the value for the "n'th" root.
<dm><root n="3"/x+y/</dm>
is mapped to:
(x+y)^(1/3), i.e. the cube root of x+y
## 9.6 Figure Tag
<!element fi - o (#pcdata) >
With the figure tag you can place mathematical figures. The tagged characters are directly mapped to a mathematical figure. You will find which character is mapped to which figure in section Mathematical Figures.
## 9.7 Realfont Tag
<!element rf - o (#pcdata) >
This tag is placing a real font within a mathematical formula.
I'm really not sure about rf. What should it be?
No formula is allowed within that tag.
<dm><rf/Binom:/ (a+b)<sup/2/=a<sup/2/+2ab+b<sup/2/</dm>
is mapped to:
Binom: (a+b)^2 = a^2 + 2ab + b^2
## 9.8 Other Mathematical Tags
The remaining tags simply modify the tagged formula, without implying any other tag. The effect is shown in table Mathematical tags without included tags
name       tag   result
vector     v     a×b=0, with vector arrows over a and b
overline   ovl   1+1=2, overlined
underline  unl   1+1=2, underlined
superior   sup   e=m×c^2
inferior   inf   x_i := 2x_(i-1)+3
## 10.Labels and References
<!entity % xref
" label|ref|pageref|cite|url|htmlurl|ncite " >
As soon as it's a little bit more sophisticated, a document will need references to other places within the document.
## 10.1 Label Tag
<!element label - o empty>
<!attlist label id cdata #required>
If you want to refer to a spot, chapter or section within your document you place a label tag.
An example could look like:
<sect1>Welcome to the article<label id="intro">
<p>...
## 10.2 Reference Tag
<!element ref - o empty>
<!attlist ref
id cdata #required
name cdata "<@@refnam>">
With this tag you can refer to a place within your document labeled as in Label Tag.
The way the reference is mapped in your document again depends on the mapper. It may result in a hyperlink (HTML) or a section number (LaTeX).
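An example, referring back to the label placed in section Label Tag (the name attribute is optional; this snippet is an illustration following the attribute list above, not taken from the original document):

```sgml
See section <ref id="intro" name="the introduction"> for details.
```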
## 10.3 Page reference Tag
<!element pageref - o empty>
<!attlist pageref
id cdata #required>
An example of a pageref:
<pageref id="intro">
In the HTML mapping there is no use for pageref, because there are no page numbers. In the LaTeX mapping the tag is mapped to the page number of the referred label.
## 10.4 Url Tag
<!element url - o empty>
<!attlist url
url cdata #required
name cdata "<@@urlnam>" >
An example of a url:
<url url="http://www.gnu.org" name="GNU Organization">
GNU Organisation
The mapping to html brings up a hyperlink in your document. The reference is the value of the url attribute; the text shown in the hyperlink is the name attribute's value.
In the LaTeX mapping this one results in the name followed by the url.
## 10.5 Htmlurl Tag
<!element htmlurl - o empty>
<!attlist htmlurl
url cdata #required
name cdata "<@@urlnam>" >
An example of a htmlurl:
<htmlurl url="http://www.gnu.org" name="GNU Organization">
GNU Organisation
The only difference between this tag and the Url Tag is in the LaTeX mapping.
The LaTeX mapping simply drops the url attribute and emphasizes the name.
In all other cases it's absolutely the same as the url tag.
## 10.6 Cite Tag
<!element cite - o empty>
<!attlist cite
id cdata #required>
AFAIK this one needs BibTeX to work nicely. So I'm terribly sorry, but I was not yet able to make use of it. For that reason I'm surely the wrong one to explain it.
## 10.7 Ncite Tag
<!element ncite - o empty>
<!attlist ncite
id cdata #required
note cdata #required>
Same as Cite Tag.
## 11.Indices
<!entity % index "idx|cdx|nidx|ncdx" >
<!element idx - - (#pcdata)>
<!element cdx - - (#pcdata)>
<!element nidx - - (#pcdata)>
<!element ncdx - - (#pcdata)>
tag    my translation
idx    index
cdx    code index (terminaltype index)
nidx   invisible index
ncdx   invisible code index (terminaltype index)
The index tags serve for making an index of your document. They are only useful if you want to do LaTeX mapping. They differ only very slightly, as shown in table Index elements.
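A small illustration of how the tags might be used in running text, following the element declarations above (an assumed example, not taken from the original document):

```sgml
<p>The <idx/label/ tag was explained before.
<nidx/reference/
This sentence is indexed under "reference" without showing the word.
```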
## 11.1 Including a index
There are two ways to include indices into your document. Look at both and decide.
### Manually
1. Set the opts attribute of your document class to contain the package makeidx. You do that by: <article opts="makeidx">.
2. Mark all the words you want to be in the index later with an idx or cdx tag. If the word you want to index to a location in your document is not within the text, you simply write it at the location you want to index with the nidx tag. It's like the normal idx, except that the tagged text will be silently dropped in the normal document.
3. Process your file with sgml2latex -m mydocument.sgml.
This will produce an additional mydocument.idx.
4. Process mydocument.idx with the makeindex command like makeindex mydocument.idx.
This will produce an additional mydocument.ind.
5. To include the now generated index in your document you process your document with sgml2latex -o tex -m mydocument.sgml.
This results in output of mydocument.tex.
6. Edit mydocument.tex with the editor of your choice.
You look for the line \end{document} (should be somewhere close to the end of the file) and insert the text \printindex before this line.
7. Process the modified file with latex mydocument.tex.
This gives you the final mydocument.dvi, which again you might process with dvips to generate a postscript document.
Quite a mess, ain't it?
### Hacked
I'm currently working on a patch to the sgmltools to automate the inclusion and generation of an index. To find out the current state see http://www.bnhof.de/~uwe/lnd/indexpatch/index.html.
## 12.Literate Programming
<!entity % litprog " code | verb " >
This one is a funny thing. It's the idea of not writing some comment text within a program and perhaps later using special tools to extract that text (think of perlpod), but instead writing a big document and later extracting the code from it.
People who don't like to document their code will not appreciate it.
The principle is: all text within verb and code tags will be gathered into a source file.
That's it, because for now I don't remember the name of the tool doing that.
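A minimal sketch of how such a document might look, assuming the code element simply wraps the literal program text (it is declared as rcdata in the dtd below); this example is mine, not from the original document:

```sgml
<p>The answer function is trivial:
<code>
int answer(void) { return 42; }
</code>
```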
## 13.Reference
• The qwertz Document Type Definition
Norman Welsh
• SGML-Tools User's Guide 1.0 ($Revision: 1.1.1.1$)
Matt Welsh and Greg Hankins and Eric S. Raymond
November 1997
• Quick SGML Example, v1.0
Matt Welsh, <[email protected]>
March 1994
## 14.1 Named Characters
This is a slightly modified list taken from [SGML-Tools User's Guide 1.0 ($Revision: 1.1.1.1$)]. If you miss some, don't hesitate to mail. A lot of the named characters shown in table Named Characters are the same as in the html dtd.
AElig Æ  Aacute Á  Acirc Â  Ae Ä  Agrave À  Atilde Ã  Auml Ä  Ccedil Ç
Eacute É  Egrave È  Euml Ë  Iacute Í  Icirc Î  Igrave Ì  Iuml Ï  Ntilde Ñ
Oacute Ó  Ocirc Ô  Oe Ö  Ograve Ò  Oslash Ø  Ouml Ö  Uacute Ú  Ue Ü
Ugrave Ù  Uuml Ü  Yacute Ý  aacute á  acirc â  ae ä  aelig æ  agrave à
amp &  apos '  aring å  arr ↓  ast *  atilde ã  auml ä  bsol \
bull •  ccedil ç  cir ○  circ ^  clubs ♣  colon :  comma ,  commat @
copy ©  darr ↓  deg °  diams ♦  divide ÷  dollar $  dquot "  eacute é
ecirc ê  egrave è  equals =  etago </  half 1/2  hearts ♥  hellip ...  horbar ―
hyphen ‐  iacute í  icirc î  iexcl ¡  igrave ì  iquest ¿  iuml ï  laquo «
larr ←  lcub {  ldquo “  lowbar _  lpar (  lsqb [  lsquo ‘  lt <
mdash —  micro µ  middot ·  mu µ  ndash –  not ¬  ntilde ñ  num #
oacute ó  ocirc ô  oe ö  ograve ò  ohm Ω  ordf ª  ordm º  oslash ø
otilde õ  ouml ö  para ¶  percnt %  period .  plus +  plusmn ±  pound £
quest ?  quot "  raquo »  rarr ->  rcub }  rdquo ”  reg ®  rpar )
rsqb ]  rsquo ’  sect §  semi ;  sol /  spades ♠  sup1 ^1  sup2 ^2
sup3 ^3  sz ß  szlig ß  tilde ~  times ×  trade ™  uacute ú  uarr ↑
ucirc û  ue ü  ugrave ù  uuml ü  verbar |  yacute ý

## 14.2 Named Whitespaces

There is a small number of whatever you want to name them. They look like named characters, but they will not always be printed, or not at all.

thinsp Thin space: d D -> d D
emsp Emphasized space: d D -> d D
ensp Normal space: d D -> d D
nbsp No-break space: a space at which the line is not allowed to be broken. Two words separated by a nbsp will be treated by parser and mapper as a single long one.
shy Suggested hyphen: if the mapper is about to break a word which has the shy tag inside, it will probably break the word at the place of the shy tag and place a hyphen instead. If no word break is necessary the shy expands to nothing at all.
## 15.Mathematical Figures

The special mappings for characters you might use for building up mathematical figures are shown in table Mathematical Figures.

(Table Mathematical Figures maps each of the characters a-z and A-Z to its mathematical figure; the rendered figure glyphs did not survive in this version of the document.)

## 16.Linuxdoc dtd Source

This is the linuxdoc.dtd used to parse this document. The revision log, revision comments and a few redundant lines are taken out to save paper and screen space.

<!-- This is a DTD, but will be read as -*- sgml -*- -->
<!-- ================================================= -->
<!--$Id: lnd.sgml,v 1.1.1.1 2000/03/05 14:40:31 uwe Exp $
This is LINUXDOC96 DTD for SGML-Tools.
This was LINUXDOC.DTD,
a hacked version of QWERTZ.DTD v1.3 by Matt Welsh,
Greg Hankins, Eric Raymond, Marc Baudoin and
Tristan Debeaupuis; modified from QWERTZ.DTD by
Tom Gordon.
-->
<!entity % emph
" em|it|bf|sf|sl|tt|cparam " >
<!entity % index "idx|cdx|nidx|ncdx" >
<!-- url added by HG; htmlurl added by esr -->
<!entity % xref
" label|ref|pageref|cite|url|htmlurl|ncite " >
<!entity % inline
" (#pcdata | f| x| %emph; |sq| %xref | %index | file )* " >
<!entity % list
" list | itemize | enum | descrip " >
<!entity % par
" %list; | comment | lq | quote | tscreen " >
<!entity % mathpar " dm | eq " >
<!entity % thrm
" def | prop | lemma | coroll | proof | theorem " >
<!entity % litprog " code | verb " >
<!entity % sectpar
" %par; | figure | tabular | table | %mathpar; |
%thrm; | %litprog; ">
<!element linuxdoc o o
(sect | chapt | article | report |
book | letter | telefax | slides | notes | manpage ) >
<!-- 'general' entity replaced with ISO entities - kwm -->
<!entity % isoent system "isoent">
%isoent;
<!entity urlnam sdata "urlnam" >
<!entity refnam sdata "refnam" >
<!entity tex sdata "[tex ]" >
<!entity latex sdata "[latex ]" >
<!entity latexe sdata "[latexe]" >
<!entity tm sdata "[trade ]" >
<!entity dquot sdata "[quot ]" >
<!entity ero sdata "[amp ]" >
<!entity etago '</' >
<!entity Ae 'Ä' >
<!entity ae 'ä' >
<!entity Oe 'Ö' >
<!entity oe 'ö' >
<!entity Ue 'Ü' >
<!entity ue 'ü' >
<!entity sz 'ß' >
<!element p o o (( %inline | %sectpar )+) +(newline) >
<!entity ptag '<p>' >
<!entity psplit '</p><p>' >
<!shortref pmap
"&#RS;B" null
"&#RS;B&#RE;" psplit
"&#RS;&#RE;" psplit
-- '"' qtag --
"[" lsqb
"~" nbsp
"_" lowbar
"#" num
"%" percnt
"^" circ
"{" lcub
"}" rcub
"|" verbar >
<!usemap pmap p>
<!element em - - (%inline)>
<!element bf - - (%inline)>
<!element it - - (%inline)>
<!element sf - - (%inline)>
<!element sl - - (%inline)>
<!element tt - - (%inline)>
<!element sq - - (%inline)>
<!element cparam - - (%inline)>
<!entity ftag '<f>' -- formula begin -- >
<!entity qendtag '</sq>'>
<!shortref sqmap
"&#RS;B" null
-- '"' qendtag --
"[" lsqb
"~" nbsp
"_" lowbar
"#" num
"%" percnt
"^" circ
"{" lcub
"}" rcub
"|" verbar >
<!usemap sqmap sq >
<!element lq - - (p*)>
<!element quote - - ((%inline; | %sectpar;)*, p*)+ >
<!element tscreen - - ((%inline; | %sectpar;)*, p*)+ >
<!element itemize - - (item+)>
<!element enum - - (item+)>
<!element list - - (item+)>
<!shortref desmap
"&#RS;B" null
"&#RS;B&#RE;" ptag
"&#RS;&#RE;" ptag
"~" nbsp
"_" lowbar
"#" num
"%" percnt
"^" circ
"[" lsqb
"]" rsqb
"{" lcub
"}" rcub
"|" verbar >
<!element descrip - - (tag?, p+)+ >
<!usemap desmap descrip>
<!element item o o ((%inline; | %sectpar;)*, p*) >
<!element tag - o (%inline)>
<!usemap desmap tag>
<!usemap global (list,itemize,enum)>
<!entity space " ">
<!entity null "">
<!--
<!shortref bodymap
"&#RS;B&#RE;" ptag
"&#RS;&#RE;" ptag
'"' qtag
"[" lsqb
"~" nbsp
"_" lowbar
"#" num
"%" percnt
"^" circ
"{" lcub
"}" rcub
"|" verbar>
-->
<!element figure - - ((eps | ph ), img*, caption?)>
<!attlist figure
loc cdata "tbp"
caption cdata "Caption">
<!-- eps attributes added by mb and td -->
<!element eps - o empty >
<!attlist eps
file cdata #required
height cdata "5cm"
angle cdata "0">
<!element ph - o empty >
<!attlist ph
vspace cdata #required>
<!element img - o empty>
<!attlist img
src cdata #required>
<!element caption - o (%inline)>
<!shortref oneline
"B&#RE;" space
"&#RS;&#RE;" null
"&#RS;B&#RE;" null
-- '"' qtag --
"[" ftag
"~" nbsp
"_" lowbar
"#" num
"%" percnt
"^" circ
"{" lcub
"}" rcub
"|" verbar>
<!usemap oneline tag>
<!usemap oneline caption>
<!entity % tabrow "(%inline, (colsep, %inline)*)" >
<!element tabular - -
(hline?, %tabrow, (rowsep, hline?, %tabrow)*, caption?) >
<!attlist tabular
ca cdata #required>
<!element rowsep - o empty>
<!element colsep - o empty>
<!element hline - o empty>
<!entity rowsep "<rowsep>">
<!entity colsep "<colsep>">
<!shortref tabmap
"&#RE;" null
"&#RS;&#RE;" null
"&#RS;B&#RE;" null
"&#RS;B" null
"B&#RE;" null
"BB" space
"@" rowsep
"|" colsep
"[" ftag
-- '"' qtag --
"_" thinsp
"~" nbsp
"#" num
"%" percnt
"^" circ
"{" lcub
"}" rcub >
<!usemap tabmap tabular>
<!element table - - (tabular, caption?) >
<!attlist table
loc cdata "tbp">
<!element code - - rcdata>
<!element verb - - rcdata>
<!shortref ttmap -- also on one-line --
"B&#RE;" space
"&#RS;&#RE;" null
"&#RS;B&#RE;" null
"&#RS;B" null
'#' num
'%' percnt
'~' tilde
'_' lowbar
'^' circ
'{' lcub
'}' rcub
'|' verbar >
<!usemap ttmap tt>
<!element mc - - cdata >
<!entity % sppos "tu" >
<!entity % fcs "%sppos;|phr" >
<!entity % fcstxt "#pcdata|mc|%fcs;" >
<!entity % fscs "rf|v|fi" >
<!entity % limits "pr|in|sum" >
<!entity % fbu "fr|lim|ar|root" >
<!entity % fph "unl|ovl|sup|inf" >
<!entity % fbutxt "(%fbu;) | (%limits;) |
(%fcstxt;)|(%fscs;)|(%fph;)" >
<!entity % fphtxt "p|#pcdata" >
<!element f - - ((%fbutxt;)*) >
<!entity fendtag '</f>' -- formula end -- >
<!shortref fmap
"&#RS;B" null
"&#RS;B&#RE;" null
"&#RS;&#RE;" null
"_" thinsp
"~" nbsp
"]" rsqb
"#" num
"%" percnt
"^" circ
"{" lcub
"}" rcub
"|" verbar>
<!usemap fmap f >
<!element dm - - ((%fbutxt;)*)>
<!element eq - - ((%fbutxt;)*)>
<!shortref dmmap
"&#RE;" space
"_" thinsp
"~" nbsp
"]" rsqb
"#" num
"%" percnt
"^" circ
"{" lcub
"}" rcub
"|" verbar>
<!usemap dmmap (dm,eq)>
<!element fr - - (nu,de) >
<!element nu o o ((%fbutxt;)*) >
<!element de o o ((%fbutxt;)*) >
<!element ll o o ((%fbutxt;)*) >
<!element ul o o ((%fbutxt;)*) >
<!element opd - o ((%fbutxt;)*) >
<!element pr - - (ll,ul,opd?) >
<!element in - - (ll,ul,opd?) >
<!element sum - - (ll,ul,opd?) >
<!element lim - - (op,ll,ul,opd?) >
<!element op o o (%fcstxt;|rf|%fph;) -(tu) >
<!element root - - ((%fbutxt;)*) >
<!attlist root
n cdata "">
<!element col o o ((%fbutxt;)*) >
<!element row o o (col, (arc, col)*) >
<!element ar - - (row, (arr, row)*) >
<!attlist ar
ca cdata #required >
<!element arr - o empty >
<!element arc - o empty >
<!entity arr "<arr>" >
<!entity arc "<arc>" >
<!shortref arrmap
"&#RE;" space
"@" arr
"|" arc
"_" thinsp
"~" nbsp
"#" num
"%" percnt
"^" circ
"{" lcub
"}" rcub >
<!usemap arrmap ar >
<!element sup - - ((%fbutxt;)*) -(tu) >
<!element inf - - ((%fbutxt;)*) -(tu) >
<!element unl - - ((%fbutxt;)*) >
<!element ovl - - ((%fbutxt;)*) >
<!element rf - o (#pcdata) >
<!element phr - o ((%fphtxt;)*) >
<!element v - o ((%fcstxt;)*)
-(tu|%limits;|%fbu;|%fph;) >
<!element fi - o (#pcdata) >
<!element tu - o empty >
<!usemap global (rf,phr)>
<!element def - - (thtag?, p+) >
<!element prop - - (thtag?, p+) >
<!element lemma - - (thtag?, p+) >
<!element coroll - - (thtag?, p+) >
<!element proof - - (p+) >
<!element theorem - - (thtag?, p+) >
<!element thtag - - (%inline)>
<!usemap global (def,prop,lemma,coroll,proof,theorem)>
<!usemap oneline thtag>
<!entity qtag '<sq>' >
<!shortref global
"&#RS;B" null -- delete leading blanks --
-- '"' qtag --
"[" ftag
"~" nbsp
"_" lowbar
"#" num
"%" percnt
"^" circ
"{" lcub
"}" rcub
"|" verbar>
<!usemap global linuxdoc>
<!element label - o empty>
<!attlist label id cdata #required>
<!-- ref modified to have an optional name field HG -->
<!element ref - o empty>
<!attlist ref
id cdata #required
name cdata "&refnam">
<!-- url entity added to have direct url references HG -->
<!element url - o empty>
<!attlist url
url cdata #required
name cdata "&urlnam" >
<!-- htmlurl entity added to have quieter url references esr -->
<!element htmlurl - o empty>
<!attlist htmlurl
url cdata #required
name cdata "&urlnam" >
<!element pageref - o empty>
<!attlist pageref
id cdata #required>
<!element comment - - (%inline)>
<!element x - - ((#pcdata | mc)*) >
<!usemap #empty x >
<!-- Hacked by mdw to exclude abstract; abstract now part of titlepag -->
<!element article - -
(titlepag, header?,
toc?, lof?, lot?, p*, sect*,
(appendix, sect+)?, biblio?) +(footnote)>
<!attlist article
opts cdata "null">
<!-- Hacked by mdw to exclude abstract; abstract now part of titlepag -->
<!element report - -
(titlepag, header?, toc?, lof?, lot?, p*,
chapt*, (appendix, chapt+)?, biblio?) +(footnote)>
<!attlist report
opts cdata "null">
<!element book - -
(titlepag, header?, toc?, lof?, lot?, p*, chapt*,
(appendix, chapt+)?, biblio?) +(footnote) >
<!attlist book
opts cdata "null">
<!-- Hacked by mdw, abstract now part of titlepag -->
<!element titlepag o o (title, author, date?, abstract?)>
<!element title - o (%inline, subtitle?) +(newline)>
<!element subtitle - o (%inline)>
<!usemap oneline titlepag>
<!element author - o (name, thanks?, inst?,
(and, name, thanks?, inst?)*)>
<!element name o o (%inline) +(newline)>
<!element and - o empty>
<!element thanks - o (%inline)>
<!element inst - o (%inline) +(newline)>
<!element date - o (#pcdata) >
<!usemap global thanks>
<!element newline - o empty >
<!entity nl "<newline>">
<!-- Hacked by mdw -->
<!element abstract - o (%inline)>
<!usemap oneline abstract>
<!element toc - o empty>
<!element lof - o empty>
<!element lot - o empty>
<!element header - - (lhead, rhead) >
<!element lhead - o (%inline)>
<!element rhead - o (%inline)>
<!entity % sect "heading, header?, p* " >
<!element heading o o (%inline)>
<!element chapt - o (%sect, sect*) +(footnote)>
<!element sect - o (%sect, sect1*) +(footnote)>
<!element sect1 - o (%sect, sect2*)>
<!element sect2 - o (%sect, sect3*)>
<!element sect3 - o (%sect, sect4*)>
<!element sect4 - o (%sect)>
<!usemap oneline (chapt,sect,sect1,sect2,sect3,sect4)>
<!element appendix - o empty >
<!element footnote - - (%inline)>
<!usemap global footnote>
<!element cite - o empty>
<!attlist cite
id cdata #required>
<!element ncite - o empty>
<!attlist ncite
id cdata #required
note cdata #required>
<!element file - - (#pcdata)>
<!element idx - - (#pcdata)>
<!element cdx - - (#pcdata)>
<!element nidx - - (#pcdata)>
<!element ncdx - - (#pcdata)>
<!element biblio - o empty>
<!attlist biblio
style cdata "linuxdoc"
files cdata "">
<!element slides - - (slide*) >
<!attlist slides
opts cdata "null">
<!element slide - o (title?, p+) >
<!entity % addr "(address?, email?, phone?, fax?)" >
<!element letter - -
(from, %addr, to, %addr, cc?, subject?, sref?, rref?,
rdate?, opening, p+, closing, encl?, ps?)>
<!attlist letter
opts cdata "null">
<!element from - o (#pcdata) >
<!element to - o (#pcdata) >
<!usemap oneline (from,to)>
<!element address - o (#pcdata) +(newline) >
<!element email - o (#pcdata) >
<!element phone - o (#pcdata) >
<!element fax - o (#pcdata) >
<!element subject - o (%inline;) >
<!element sref - o (#pcdata) >
<!element rref - o (#pcdata) >
<!element rdate - o (#pcdata) >
<!element opening - o (%inline;) >
<!usemap oneline opening>
<!element closing - o (%inline;) >
<!element cc - o (%inline;) +(newline) >
<!element encl - o (%inline;) +(newline) >
<!element ps - o (p+) >
<!element telefax - -
(from, %addr, to, address, email?,
phone?, fax, cc?, subject?,
opening, p+, closing, ps?)>
<!attlist telefax
opts cdata "null"
length cdata "2">
<!element notes - - (title?, p+) >
<!attlist notes
opts cdata "null" >
<!element manpage - - (sect1*)
-(sect2 | f | %mathpar | figure | tabular |
table | %xref | %thrm )>
<!attlist manpage
opts cdata "null"
title cdata ""
sectnum cdata "1" >
<!shortref manpage
"&#RS;B" null
-- '"' qtag --
"[" ftag
"~" nbsp
"_" lowbar
"#" num
"%" percnt
"^" circ
"{" lcub
"}" rcub
"|" verbar>
<!usemap manpage manpage >
https://math.stackexchange.com/questions/2676571/compound-proposition-from-alice-in-wonderland

# Compound proposition from Alice in Wonderland
I am analyzing the following section from Alice in Wonderland: "If it makes me grow larger, I can reach the key; and if it makes me grow smaller, I can creep under the door; so either way I'll get into the garden, and I don't care which happens!"
I define the following propositional variables:
L= makes me grow larger
K= reach the key
S= makes me grow smaller
D= creep under the door
G= get into the garden
My task is to determine the compound proposition. I have done it this way:
$((L \rightarrow K) \lor (S \rightarrow D)) \rightarrow G$
Does that look right?
• Are you sure it needs to be one statement? I see it as three: 1) $L\to K$, 2) $S\to D$ and 3) $(L \lor S) \to G$; and there is a fourth unstated hypothesis, 4) $(K \lor D) \to G$ – fleablood Mar 4 '18 at 18:14
• That is not correct, because it isn't $L\to K$ nor $S\to D$ that implies $G$. It is $K$ and $D$ that imply $G$. And also it's not a choice of $L\to K$ or $S\to D$. She knows both of those are true. It's a case of $L$ or $S$. – fleablood Mar 4 '18 at 18:17
• If I had to put it in 1 sentence: I'd say "I know L or S and I know L-> K and S-> D therefore G" with the unstated "I assume it goes without saying that K or D -> G". – fleablood Mar 4 '18 at 18:20
• Of course Alice's reasoning is flawed in that $L\to \lnot G$. Whether $K$ or not. It's only $(K\land \lnot L)\to G$..... But that's not part of this exercise. – fleablood Mar 4 '18 at 18:25
No: in order for Alice to get into the garden it needs to be both true that being larger she can get the key, and that being smaller she can creep under the door. So:
$((L \rightarrow K) \color{red}\land (S \rightarrow D)) \rightarrow G$
I think you're trying to use the $\lor$ because you're thinking of:
$(K \lor D) \rightarrow G$
which, given $L \rightarrow K$ and $S \rightarrow D$, would mean that:
$(L \lor S) \rightarrow G$
In fact, to make that inference you could also use that:
$(L \lor S) \rightarrow (K \lor D)$
And all of those constructions are compatible with Alice's reasoning.
To further see and understand the confusion you have, please note that:
$(K \lor D) \rightarrow G \Leftrightarrow (K \rightarrow G) \land (D \rightarrow G)$
So yes, it's easy to confuse the use of the $\lor$ with the $\land$ in these kinds of constructions!
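The equivalence above can also be checked mechanically by brute-forcing the truth table. A quick sketch in Python (not part of the original answer; `implies` is a helper standing for material implication):

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

# (K or D) -> G  is equivalent to  (K -> G) and (D -> G)  under every valuation.
for K, D, G in product([False, True], repeat=3):
    lhs = implies(K or D, G)
    rhs = implies(K, G) and implies(D, G)
    assert lhs == rhs
print("equivalent")
```

Running it prints "equivalent", confirming the two forms agree on all eight valuations.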
Finally, though, I would make all this into an argument rather than a single statement. That way, you can make some of the implicit assumptions in her reasoning explicit, such as that she will get either smaller or larger, and that she can get into the garden with the key or by creeping under the door.
$L \lor S$
$L \rightarrow K$
$S \rightarrow D$
$K \rightarrow G$
$D \rightarrow G$
$\therefore G$
Or maybe:
$L \rightarrow K$
$S \rightarrow D$
$K \rightarrow G$
$D \rightarrow G$
$\therefore (L \lor S) \rightarrow G$
• If we had to do one statemet would you agree with $[(L\lor S)\land [(L\to K)\land (S\to D)]\land [(K\lor D\to G]]\to G$? – fleablood Mar 4 '18 at 18:22
• @fleablood Sounds good! Allow me to add that to my post? I'll credit you! – Bram28 Mar 4 '18 at 18:23
• Thanks for the great clarification @Bram28 – Elias S. Mar 4 '18 at 18:43
• @EliasS. You're welcome! :) – Bram28 Mar 4 '18 at 18:45
https://cstheory.stackexchange.com/questions/39793/on-np-oplus-p-and-pp

# On $NP$, $\oplus P$ and $PP$?
We know $\oplus P^{\oplus P}=\oplus P$, $PP^{\oplus P}\subseteq P^{PP}$ and $NP\subseteq PP$.
1. Is $\oplus P^{PP}=PP$?
2. Why is it difficult to show $NP^{NP}\subseteq PP$?
3. What is the smallest known class $\mathcal C$ such that $PP\subseteq \oplus P^\mathcal C$ holds? Is there any class smaller than $PP$?
## 2 Answers
1. Unknown. There is an oracle $A$ s.t. $\bigoplus\mathsf{P}^A \not\subseteq \mathsf{PP}^A$.
2. There is an oracle $A$ s.t. $\mathsf{NP}^{\mathsf{NP}^A} \not\subseteq \mathsf{PP}^A$.
3. As far as I know no smaller class than $\mathsf{PP}$ is known to satisfy the inclusion.
• I meant smallest known class. – Mr. Dec 19 '17 at 13:41
Concerning 3, I believe $\mathrm{PP\subseteq\oplus P^{C_=P}}$, as there are at least $a$ numbers $x<2^n$ satisfying $P(x)$ if and only if the number of $y<2^n$ such that $P(y)\land|\{x\le y:P(x)\}|=a$ is odd. (Note that there is always at most one such $y$. That is, the argument actually shows $\mathrm{PP\subseteq UP^{C_=P}}$. (In fact, it even shows $\mathrm{UP^{PP}=UP^{C_=P}=UP^{C_=P[1]}}$.))
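The counting argument above can be sanity-checked by brute force on tiny domains. The sketch below (mine, not part of the original answer) enumerates every predicate $P$ on a small domain and confirms that at least $a$ elements satisfy $P$ exactly when the number of witnesses $y$ is odd, and that there is never more than one witness:

```python
from itertools import product

def check(domain_size, a):
    # Enumerate every predicate P on {0, ..., domain_size-1} as a bit vector.
    for bits in product([0, 1], repeat=domain_size):
        total = sum(bits)
        # y is a witness when P(y) holds and exactly a elements x <= y satisfy P.
        witnesses = sum(
            1 for y in range(domain_size)
            if bits[y] and sum(bits[: y + 1]) == a
        )
        assert witnesses <= 1                        # at most one such y
        assert (witnesses % 2 == 1) == (total >= a)  # parity decides the threshold
    return True

assert all(check(n, a) for n in range(1, 7) for a in range(1, n + 1))
print("verified")
```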
• You are right, of course. Dec 19 '17 at 14:07
• @EmilJerabek Just an unrelated query. Can $(Mod_aP)^{Mod_bP}$ be contained in $Mod_aP$ or ${Mod_bP}$ where $a,b$ are coprime? – Mr. Dec 19 '17 at 14:32
• That’s highly unlikely. Dec 19 '17 at 15:13
• And there are oracles relative to which they are not. Dec 19 '17 at 17:05
• @EmilJeřábek $|\{x\leq y:P(x)\}|=a$ is in $C_=P$ is a middle point argument. There is at most one such $y$ because $a$ is fixed (so we either have $1$ or $0$). We fix $a=\mbox{half number of paths}$ and query once. Correct? The querying part with $PromiseUP$ is not clear to me. Do we non-deterministically guess something and make the query? – Mr. Dec 25 '17 at 13:15
# Fabrication of nanotweezers and their remote actuation by magnetic fields
## Abstract
A new kind of nanodevice that acts like tweezers through remote actuation by an external magnetic field is designed. Such a device is meant to mechanically grab micrometric objects. The nanotweezers are built using a top-down approach and are made of two parallelepipedic microelements, at least one of them being magnetic, bound by a flexible nanohinge. The presence of an external magnetic field induces a torque on the magnetic elements that competes with the elastic torque provided by the nanohinge. A model is established in order to evaluate the values of the balanced torques as a function of the tweezers opening angles. The results of the calculations are compared with the expected values and validate the overall working principle of the magnetic nanotweezers.
## Introduction
Several applications in life science and biotechnology require tools for manipulating and exerting forces on micro- or nanometric objects. In that regard, biocompatible magnetic nanoparticles have been widely developed and used [1,2,3], owing to their ability to be actuated by external magnetic fields. The advantage of such a method is that magnetic fields can penetrate human tissues in a noninvasive way. Bottom-up approaches are an efficient way to chemically produce functionalized superparamagnetic iron oxide nanoparticles in ligand matrices, and their superparamagnetic nature prevents them from agglomerating in the absence of a magnetic field [4]. Functionalized magnetic nanoparticles can target specific biological entities for drug delivery, hyperthermia treatment or mechanical actuation [1]. Other methods for manipulating micro- or nanoscale objects have also been explored. Notable examples are the concepts of magnetic tweezers [5,6,7,8,9,10,11,12,13] and optical tweezers [14]. The former concept consists in binding a molecule to a substrate at one end and to a magnetic particle at the other end. By applying a magnetic field gradient, forces can be exerted on biomolecules such as DNA or cells to test their mechanical properties. Optical tweezers trap particles in an optical potential well. While optical tweezers are a kind of "contactless" tweezers, past works have also established designs for "contact" tweezers [15, 16] that are controlled by magnetic fields for grasping objects physically as well as for moving in liquid environments. They succeeded in demonstrating the possibility of transporting submillimeter-scale cargoes through pick-and-place experiments. Devices based on the use of magnetic particles or elements along with elastic materials are also being actively developed to make programmable actuators [17]. In this article, we present a new design of magnetic tweezers prepared by a top-down approach.
An advantage of such a fabrication process is that a vast number of nanotweezers can be produced at the same time. Moreover, with sizes between the nanometric and micrometric scales, the tweezers will ultimately be able to interact with objects of comparable size such as cells, bacteria, or long chains of macromolecules. Being made of two actuable jaws, the concept of magnetic tweezer we propose is quite similar to that of real-life tweezers, except that their ultimate role is to interact with micrometric species in fluids. They can potentially be an efficient way to trap or deliver molecules to targeted biological zones. As for the nanoparticles developed in the past, self-agglomeration needs to be avoided by a proper choice of materials for the tweezers, e.g. synthetic antiferromagnetic [18, 19] or vortex [20, 21] particles. Due to their in-plane magnetic shape anisotropy, they can be remotely controlled by magnetic fields rather than magnetic field gradients, as seen in our previous works [22, 23].
## Results
### Concept of the remotely actuated tweezers
The general concept of the remotely actuated nanotweezers is shown in Fig. 1. The nanotweezers are made of two parallelepipedic magnetic microelements, hereafter called “jaws”, that are bound by an elastic gold (Au) nanohinge. In this work, three different types of tweezers are explored. While the fabrication process is similar for each type, the tweezers can be either made of two soft magnetic jaws (SM/SM), or a soft magnetic jaw and a hard magnetic jaw (SM/HM), or a soft magnetic jaw and a nonmagnetic jaw (SM/NM), as shown in Fig. 1a–d. Each type of tweezers presents a different kind of interaction between the jaws, thus making the tweezers opening/closing process distinct in each case. For actuation purposes, it is crucial that at least one of the jaws is made of a magnet. For all three kinds of tweezers, when an external magnetic field is applied, the value of the opening angle is determined by the balance between the elastic torque stemming from the hinge and the magnetic torque coming from the external field and the magnetostatic interactions from each jaw. While the elastic torque is expressed in the same way for all tweezers, the magnetic torque is the physical quantity that will differ from one type of tweezer to another. Indeed, in the case of SM/NM tweezers, no magnetic field lines are emitted from the nonmagnetic layer, hence the soft magnetic layer will only be influenced by the external field. On the contrary, the contribution of the external field to the magnetic torque has to be added to that of the mutual magnetostatic interactions between jaws in the case of SM/SM and SM/HM tweezers. The scanning electron microscope (SEM) image in Fig. 1e shows a real SM/SM tweezer on a Si pillar, and one can clearly see the jaws bound together by a nanohinge. The coordinate system and the physical quantities related to the tweezers and the external field are defined in Fig. 1f.
In this work, the fabrication process relies on a top-down approach (Fig. 2). The tweezers are fabricated on silicon (Si) substrates and all metal layers are deposited by electron beam evaporation. Our method is to transfer the square-shaped mask pattern of the longitudinal section of the jaws to an aluminum (Al) hard metal mask and then dry-etch the unprotected Si areas so that the remaining Si forms pillars that sustain the structure of the tweezers. The Al metal mask is patterned by UV lithography on the surface of the substrate using the AZ 5214E image reversal negative tone photoresist. Once the metal mask is patterned and deposited (Fig. 2a), the unprotected parts of the wafer undergo an isotropic reactive ion etching (RIE), using sulfur hexafluoride (SF6) and argon (Ar) as etchant gases. Then, the main body of the tweezers is constructed by stacking three metallic layers, the topmost and lowest layers constituting the upper and lower jaw, respectively. The middle layer is an Al sacrificial layer that serves to maintain the jaws as long as they are not bound to each other by a hinge. The nanohinge is built by depositing a 20-nm-thick Au layer at a 30° oblique angle. The electron beam evaporation technique offers a relatively high directivity, which limits the amount of Au deposited on the other sides of the tweezers. In the final step, the Al sacrificial layer is chemically etched and the tweezers become free to be actuated. SEM images of the structures typically obtained after the last step are shown in Fig. 2b. The dimensions of the tweezers are 1 μm × 1 μm × 100 nm or 2 μm × 2 μm × 250 nm. Permalloy (Py) is the chosen material for the soft layers, due to its low coercivity. Nonmagnetic layers are made of chromium (Cr). Hard magnetic layers can be realized with either permanent magnets such as samarium-cobalt or neodymium-iron-boron (NdFeB) alloys, or structures with a pinned magnetization such as exchange-biased multilayers [24,25,26].
### Calculation of the equilibrium torques for each type of tweezers
For the theoretical calculations and the following experimental results, all the tweezers have their lower jaws anchored to a Si pillar and the upper jaw is always a soft magnet. Therefore, only the upper jaws can be free to move under a magnetic field. Using the notations from Fig. 1f, the angles θ and α represent the tweezers opening angle and the external magnetic field angle with respect to the immobile lower jaw plane, respectively. It is assumed that the deformation of the Au nanohinge is invariant in the y direction and occurs only in the (xOz) plane. Therefore, the hinge can be divided into several slender portions of length L, thickness t and width dy. One can thus apply the Euler-Bernoulli theory [27] for end-loaded beams on each portion dy and integrate over the entire width w, which results in the following relation between the hinge bending moment and the deflection δ: $\Gamma_{\mathrm{hinge}} = Ewt^3\delta/6L^2$, where E is the Young's modulus of the hinge, w is the total width of the hinge and L the length of the mobile part of the hinge. With the approximation $\delta/L \approx \sin\theta$, one can express $\Gamma_{\mathrm{hinge}}$ as a function of θ:
$$\Gamma_{\mathrm{hinge}} = \frac{Ewt^{3}}{6L}\sin\theta = K\sin\theta. \qquad (1)$$
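As a quick sanity check, equation (1) can be evaluated numerically. The sketch below is hedged: E, w and t are taken from values quoted later in the text, while the mobile hinge length L = 100 nm is an assumed, illustrative value.

```python
# Hedged sketch: evaluating the hinge stiffness K = E*w*t^3 / (6*L) of
# equation (1). E, w and t follow the text; L = 100 nm is an assumption.
E = 20e9      # Young's modulus of the e-beam-evaporated Au hinge (Pa)
w = 1e-6      # hinge width (m), matching the 1 um tweezer jaws
t = 20e-9     # nominal hinge thickness (m)
L = 100e-9    # length of the mobile hinge portion (m), assumed value

K = E * w * t**3 / (6 * L)   # hinge stiffness coefficient (N*m)
print(f"K = {K:.2e} N*m")
```

With these assumed dimensions K comes out in the low 10⁻¹³ N·m range, i.e. the same order as the stiffness fitted experimentally for the SM/NM tweezer later in the text.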
K has the dimension of a torque and can be seen as the nanohinge stiffness coefficient. Regarding the magnetic properties of the Py soft jaws, it is clear from Fig. 3a that the in-plane magnetic hysteresis curves show very small remanence and coercivity. A magnetic force microscopy scan performed on one Py soft jaw (Fig. 3b) at remanence indicates that the soft jaw magnetization has a vortex structure. Consequently, the behavior of soft jaws under a magnetic field follows the assumptions that the magnetization responds in a quasilinear and reversible way to the magnetic field below saturation, and that the in-plane magnetic shape anisotropy is sufficiently large so that the net magnetization vector remains in the plane of the jaw.
The vortex state is the most stable magnetic configuration for an isolated square-shaped soft magnetic microelement. However, when two of those elements are brought into proximity, as in SM/SM tweezers, dipolar interactions can actually favor a magnetization alignment despite the absence of field. The geometry of the jaw also has an influence on the nature of the most stable configuration adopted by the two interacting jaws. In order to illustrate this statement, micromagnetic simulations were performed using Magpar [28], a finite element micromagnetics package. Figure 3c represents the case of square-shaped elements and that of elongated elements, for which the width is one third of the length. The materials parameters correspond to those of permalloy, i.e. $M_S = 1$ T, $A_{ex} = 1.05 \times 10^{-11}$ J·m⁻¹, $K_u = 0$ J·m⁻³, which correspond to the saturation magnetization, the exchange constant, and the magnetocrystalline anisotropy constant, respectively. After initializing the simulation in a vortex configuration, we found that in the case of square-shaped elements, the system reaches an antiparallel state and then relaxes to a two-vortex state, as shown by the values of magnetic state energy in Fig. 3c. On the contrary, the elongated geometry favors the antiparallel magnetization configuration. The conclusion of this micromagnetic simulation is that the combination of dipolar coupling and shape anisotropy tends to increase the stability of the antiparallel state. Such a case of stable antiparallel magnetization alignment is depicted in Fig. 1a.
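The dipolar preference for antiparallel alignment can be illustrated with a much cruder model than the full micromagnetic simulation: two point dipoles stacked vertically with in-plane moments. This sketch is hedged; it treats each jaw as uniformly magnetized (ignoring the vortex states discussed above), and the moment magnitude and 120 nm center-to-center spacing are assumptions.

```python
import numpy as np

# Hedged illustration of why dipolar coupling between two stacked in-plane
# magnetized jaws favors antiparallel alignment. Point-dipole energy:
# E = mu0/(4*pi*d^3) * [m1.m2 - 3(m1.rhat)(m2.rhat)],
# with the two moments separated vertically by d along rhat = z.
mu0 = 4e-7 * np.pi
Ms = 8e5                          # Py magnetization (A/m), mu0*Ms ~ 1 T
V = 1e-6 * 1e-6 * 100e-9          # jaw volume (m^3)
m = Ms * V                        # net moment if uniformly magnetized (A*m^2)
d = 120e-9                        # center-to-center spacing (m), assumed

rhat = np.array([0.0, 0.0, 1.0])  # separation axis (z)
m1 = m * np.array([1.0, 0.0, 0.0])  # lower jaw moment, in plane

def dipolar_energy(m1, m2, rhat, d):
    return mu0 / (4 * np.pi * d**3) * (m1 @ m2 - 3 * (m1 @ rhat) * (m2 @ rhat))

E_par = dipolar_energy(m1, m1, rhat, d)    # parallel in-plane moments
E_anti = dipolar_energy(m1, -m1, rhat, d)  # antiparallel in-plane moments
print(E_par, E_anti)
```

For in-plane moments perpendicular to the separation axis, the antiparallel configuration has negative (lower) energy, consistent with the simulated preference for the antiparallel state in elongated jaws.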
In this paragraph, we estimate the magnetic and mechanical torques on square-shaped tweezers. The magnetic torque exerted on the upper jaw is expressed as a function of the total magnetic energy $E_t$: $\Gamma_{\mathrm{mag}} = \partial E_t/\partial\theta$, where $E_t = -\mathbf{M}V \cdot (\mathbf{B}_{\mathrm{ext}} + \mathbf{B}_m)$, with $\mathbf{M}$, $V$, $\mathbf{B}_{\mathrm{ext}}$ and $\mathbf{B}_m$ being the magnetization, the volume of the jaw, the external field and the field generated by the lower jaw, respectively. $\mathbf{B}_m$ is directly related to the surface magnetic charge distribution on the jaws. Since the field is not necessarily applied along one of the jaw's planes, only the component of the field in the plane of each jaw counts. The absence of jaw-to-jaw interactions in SM/NM tweezers highlights the role played by the magnetostatic interactions in the tweezers made of two magnetic jaws. In SM/HM tweezers, the stray field produced by the hard layer hardly varies with the strength and direction of the applied field, so magnetostatic interactions also exist at zero field. In contrast, in SM/SM structures, the magnetic domain configurations of both soft layers are subject to change under the applied field. As a result, at zero field, since the soft magnetic square elements have a vortex configuration, the magnetic flux closure from the lower jaw prevents its stray field from radiating to the upper jaw, although it has been shown that in the case of square-shaped microelements with vortices, a stray field still arises from the Néel domain walls at the square diagonals [29, 30]. When the magnitude of the in-plane component of the field increases in each soft layer, vortex annihilation eventually occurs and the layers saturate, thus the material exits the linear regime. The variations of $\Gamma_{\mathrm{mag}}$, exerted on the upper jaw, as a function of θ are represented in Fig. 4a–d for different magnitudes of $\mathbf{B}_{\mathrm{ext}}$ and field angle α. In the case of SM/NM tweezers, the upper jaw is only influenced by the external field and tends to align with its direction. When α = 0°, deviating the upper jaw from its θ = 0° position favors a negative magnetic torque, i.e. tweezer closing, with a minimum at θ = 45°.
Similar behaviors are predicted for SM/HM and SM/SM tweezers, but the magnetostatic influence from the lower jaw favors the tweezers opening at angles close to 0° or 90°. Jumps in the magnetic torques occur at the values of opening angle for which entry into the saturated state or reentry into the unsaturated state happens (Fig. 4b,d). One must note that the jumps occur at different opening angles in the three types of tweezers. The presence of magnetostatic interactions in addition to the external field in the SM/SM and SM/HM tweezers affects the opening/closing process of the tweezers. Consequently, the magnetization of the upper jaw does not saturate at the same value of tweezer opening angle in each case. Figure 4b also includes the variations of $\Gamma_{\mathrm{hinge}}$, calculated for a 20-nm-thick Au hinge with E = 20 GPa. Au layers deposited by e-beam evaporation can have lower Young's moduli than the bulk material [31]. Torque balancing is found by intersecting the curves of $\Gamma_{\mathrm{mag}}$ and $\Gamma_{\mathrm{hinge}}$; Fig. 4e plots the values of equilibrium opening angle with respect to a $\mathbf{B}_{\mathrm{ext}}$ applied at 45°. The significant difference between the results obtained for SM/NM structures and two-magnetic-jaw structures again illustrates the effects of the presence of magnetostatic interactions. It is interesting to point out that when the field is applied perpendicularly to the lower jaw plane, two positions of torque equilibrium exist for SM/HM tweezers (Fig. 4b).
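The torque-balancing procedure can be sketched numerically in a strongly simplified form. The sketch below is hedged: it assumes a rigid, saturated in-plane moment on the upper jaw in a uniform field (closest to the SM/NM case at saturation), ignoring the linear unsaturated response and the jaw-to-jaw magnetostatics treated above; K, m, B and α are all assumed, illustrative values.

```python
import math

# Hedged sketch of the torque balance that sets the equilibrium opening angle:
# Zeeman torque m*B*sin(alpha - theta) on a rigid in-plane jaw moment,
# balanced against the elastic torque K*sin(theta) of equation (1).
K = 3.0e-13                          # hinge stiffness (N*m), order of the fitted value
m = 8e5 * (1e-6 * 1e-6 * 100e-9)     # jaw moment: Ms * V (A*m^2)
B = 0.1                              # applied field (T), assumed
alpha = math.radians(45)             # field angle w.r.t. the lower jaw plane

def net_torque(theta):
    # positive -> opening, negative -> closing
    return m * B * math.sin(alpha - theta) - K * math.sin(theta)

# Bisection on (0, alpha): the net torque opens the tweezer at theta = 0
# and closes it at theta = alpha, so a balance point lies in between.
lo, hi = 0.0, alpha
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if net_torque(mid) > 0:
        lo = mid
    else:
        hi = mid
theta_eq = 0.5 * (lo + hi)
print(f"equilibrium opening angle ~ {math.degrees(theta_eq):.2f} deg")
```

In the full model of the article the intersection is taken between the computed $\Gamma_{\mathrm{mag}}(\theta)$ curves of Fig. 4 and $K\sin\theta$; the bisection above only illustrates the root-finding step.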
The value of the Young's modulus in the abovementioned calculation is chosen to be smaller than the value corresponding to e-beam-deposited Au layers (55–62 GPa) [31], which is itself smaller than the bulk value (79 GPa). The oblique incidence angle of the Au deposition leads to self-shadowing effects at the nanometric scale, due to the columnar growth mode of the Au islands. Therefore, the density of the hinge can be drastically reduced compared to a layer grown at normal incidence. In order to account for this difference in density due to the oblique incidence, the numerical value of the Young's modulus of the hinge is adapted in this study, although it proves difficult to determine its precise value. An order of magnitude of about half the value mentioned in past work [31] yields mechanical torques that are compatible with the magnetic torques estimated above and with the mechanical torques we obtain in the experiments described in the following section.
Although it is expected from the considered application of the tweezers that the magnetic and elastic torques are the main operators of the tweezers actuation, intermolecular forces and quantum effects can arise given the nanometric to micrometric size of the tweezers. Those effects can indeed contribute to keeping the jaws closed by mutual attraction. We estimated that the Casimir effect and van der Waals forces have magnitudes of the order of 1 nN, which is two orders of magnitude lower than the forces due to the magnetic field. Therefore, the presence of a magnetic field can always overcome the Casimir effect and the van der Waals forces. However, in the absence of a magnetic field, mutual attraction between the jaws can occur when the upper jaw is close enough to the lower jaw. In this kind of situation, the jaws stick together, but a mechanical actuation from an AFM tip is sufficient to reopen the tweezer, thus enabling further actuations with an external magnetic field.
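The order-of-magnitude estimate above can be reproduced with textbook parallel-plate formulas. This is a hedged sketch, not the authors' calculation: it uses the ideal-plate Casimir pressure and the non-retarded van der Waals (Hamaker) expression, with an assumed 20 nm gap (the sacrificial-layer thickness) and an assumed metal-like Hamaker constant.

```python
import math

# Hedged order-of-magnitude estimate of the jaw-jaw attraction:
# ideal-plate Casimir pressure P = pi^2*hbar*c/(240*d^4) and non-retarded
# van der Waals force F = A_H*S/(6*pi*d^3) for 1 um x 1 um facing jaws.
hbar, c = 1.055e-34, 3.0e8
S = 1e-6 * 1e-6        # facing jaw area (m^2)
d = 20e-9              # gap, set by the sacrificial-layer thickness (m)
A_H = 4e-19            # Hamaker constant (J), assumed literature-order value

F_casimir = math.pi**2 * hbar * c / (240 * d**4) * S
F_vdw = A_H * S / (6 * math.pi * d**3)
print(F_casimir, F_vdw)   # both come out at the nanonewton scale
```

Both estimates land at the nanonewton scale, consistent with the ~1 nN order quoted in the text.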
## Discussion
In order to test the ability of the tweezers to be actuated by a magnetic field, direct observation is performed inside a SEM chamber. The magnetic field is provided by a homemade probe (see details in the Methods section), which is made by soldering a NdFeB hard magnetic microsphere to an atomic force microscope (AFM) tip (Fig. 5a,b). Real-time actuation can be observed when the field lines encounter the upper jaw of a tweezer. Introducing a magnetic field into the SEM chamber can distort the path of the electrons, thus potentially degrading image acquisition. However, the SEM system also incorporates a focused ion beam (FIB) setup, which improves the quality of the images, since ions are not as easily deflected by a magnetic field as electrons. Moreover, the magnetic microsphere has a sufficiently small size so that its generated magnetic field is localized in the vicinity of a single tweezer, thus introducing minimal disturbance. It is important to emphasize that the images in Fig. 2b,c were acquired by using electron beams only. The opened tweezer shown in Fig. 2c is kept open, not by an applied magnetic field, but because of the presence of a residual mechanical stress in the Au hinge. Depending on the thermal history of each hinge, the stress can be either tensile or compressive. The effects of the residual stress are visible in the experimental data when measuring tweezers opening angles.
The results of the characterization of the tweezers are presented in Fig. 5 for the cases of SM/NM and SM/SM structures. The characterization is done as follows: depending on the position of the microsphere relative to the tweezer, the opening angle takes a certain value. This angle is determined from the SEM measurements, with an estimated uncertainty of ±1°. For each value of opening angle obtained by SEM, the magnetic torque exerted on the upper jaw by the magnetic microsphere and the lower jaw is calculated. Considering that the magnetic torque $\Gamma_{\mathrm{mag}}$ is in equilibrium with the elastic torque $\Gamma_{\mathrm{hinge}}$, a linear relationship is expected between the calculated torque and the measured equilibrium opening angle of the tweezers. Indeed, the measured values of θ are included in a range that allows us to regard $\Gamma_{\mathrm{hinge}}$ as a linear function of θ. Therefore, calculating the slope of that plot ultimately leads to the determination of the thickness of the nanohinge. The field generated by the hard magnetic microsphere is modeled as a dipole: $\mathbf{B} = \frac{\mu_0}{4\pi r^3}[3(\mathbf{m}\cdot\hat{\mathbf{r}})\hat{\mathbf{r}} - \mathbf{m}]$, where r is the distance from the center of the sphere, $\hat{\mathbf{r}}$ the corresponding unit vector and $\mathbf{m}$ the dipolar moment. For a given pair of jaws, this expression is added to the contribution of the magnetostatic interaction from the lower jaw and then integrated over each point in the upper jaw. The numerical values of the equilibrium torque are deduced, as can be seen in Fig. 5c,d. The diameter of the sphere is 10 μm and its remanent magnetization is measured by vibrating sample magnetometer (VSM) to be $5.8 \times 10^5$ A/m, which is about half of the saturation value, meaning that the sphere is polycrystalline with random grain anisotropy orientations. The equilibrium torque calculations yield a significant dispersion of the points. Nevertheless, an estimation of the hinge stiffness can be made by linear interpolation.
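The dipole model of the microsphere can be sketched directly from the measured parameters. The code below is hedged: the sphere radius and remanent magnetization follow the text, but the evaluation point (3 μm above the sphere surface, on the magnetization axis) is an assumed sphere-to-tweezer distance.

```python
import numpy as np

# Hedged sketch of the point-dipole field used to model the NdFeB microsphere:
# B = mu0/(4*pi*r^3) * (3*(m.rhat)*rhat - m), with the sphere's moment built
# from the measured remanent magnetization and 10 um diameter.
mu0 = 4e-7 * np.pi
R = 5e-6                                # sphere radius (m)
Mr = 5.8e5                              # remanent magnetization (A/m)
msphere = Mr * (4 / 3) * np.pi * R**3   # dipole moment (A*m^2)

def dipole_field(m_vec, r_vec):
    r = np.linalg.norm(r_vec)
    rhat = r_vec / r
    return mu0 / (4 * np.pi * r**3) * (3 * (m_vec @ rhat) * rhat - m_vec)

m_vec = msphere * np.array([0.0, 0.0, 1.0])
B = dipole_field(m_vec, np.array([0.0, 0.0, 8e-6]))  # 3 um above the surface
print(np.linalg.norm(B))
```

At this assumed distance the field is on the order of a hundred millitesla, which illustrates why the sphere only perturbs the tweezer in its immediate vicinity.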
The slopes corresponding to the linear fits are $K_{\mathrm{SM/NM}} = 3.02 \times 10^{-13}$ N·m for the SM/NM tweezer and $K_{\mathrm{SM/SM}} = 1.95 \times 10^{-12}$ N·m for the SM/SM tweezer. By using equation (1), the effective thicknesses of the Au nanohinge in each case are calculated to be $t_{\mathrm{SM/NM}} = 25.6$ nm and $t_{\mathrm{SM/SM}} = 47.7$ nm. While the value of nanohinge thickness determined for the SM/NM tweezers is relatively close to the real deposited thickness, the effective thickness found for the SM/SM tweezer is approximately twice the expected value. Both types of tweezers had their nanohinge deposited at 30° oblique incidence with an expected thickness of 20 nm. Knowing that Py/Al/Py trilayer stacks tend to present rougher surfaces than Py/Al/Cr stacks, one would then expect the thickness of the Au layer to show higher fluctuations in the case of SM/SM tweezers. This can account for the discrepancy between effective and nominal hinge thickness observed for the SM/SM tweezer. The oblique angle deposition of the Au layer, combined with the effects of the roughness of the previous metal layers, can indeed cause the Au atoms to clump up and form regions with inhomogeneous thicknesses [32]. The growth method can also favor nanometer-scale self-shadowing in the Au layer by nanoislands [33]. As $\Gamma_{\mathrm{hinge}}$ varies as the third power of the hinge thickness, porosity and thickness fluctuations will locally change the elastic torque and affect the overall stiffness of the hinge. To explain the increased stiffness of the Au nanohinge in the SM/SM structure, the following model is proposed: we consider an ideal Au layer with a nominal thickness of $t_{\mathrm{nom}} = 20$ nm grown in a two-dimensional mode and the real Au layer that is effectively deposited with a thickness $t_{\mathrm{eff}}$ in a three-dimensional mode. In principle, both layers have the same volume, since they only differ by their growth modes.
According to the expression of the stiffness coefficient defined in equation (1), the stiffness of the hinge varies as the third power of its thickness. Let us then compare two Au layers used as hinges of the same nominal volume, one of them presenting a perfectly uniform thickness and the other having a large roughness characterized by thin (or even empty) regions coexisting with thicker regions. Since the two layers have the same volume, the thicker regions in the nonuniform layer can be substantially thicker than the originally intended thickness of 20 nm. Due to the cubic thickness dependence of the hinge stiffness seen in equation (1), those thick regions contribute to increasing the effective stiffness of the hinge compared to the case where the hinge has a uniform thickness. We define τ as the proportion of void in the real nanohinge. By conservation of volume, the nominal width and thickness are related to the effective dimensions as follows: $w_{\mathrm{eff}} = (1-\tau)w_{\mathrm{nom}}$ and $t_{\mathrm{nom}} = (1-\tau)t_{\mathrm{eff}}$. Therefore, the ratio of stiffness coefficients is expressed as $K_{\mathrm{nom}}/K_{\mathrm{eff}} = (1-\tau)^2$ and leads to a void proportion of about 30% and 70% for the SM/NM and SM/SM tweezers, respectively. According to this model, the discrepancy between the values of equilibrium torque, hence Au thickness, obtained by the linear fits and the expected values actually represents a change in the effective stiffness of the deposited nanohinges. Thus, a higher void proportion leads to an enhancement of stiffness. A possible way to avoid the inhomogeneity in Au thickness is to reduce the roughness of each of the previous metal layers by Ar plasma etching. The reversibility of the nanohinge deformation was also tested by applying a mechanical pressure on the surface of the upper jaw of a tweezer with an AFM tip.
The real-time elastic response of the tweezer is recorded by SEM, and evidently shows the contribution of the elastic torque from the gold nanohinge, which tends to bring the upper jaw back to its initial position (Fig. 5e).
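The void-fraction model above can be checked numerically. Under equation (1) at fixed nominal width, the effective thickness extracted from the fit scales as $t_{\mathrm{eff}} = t_{\mathrm{nom}}(K_{\mathrm{eff}}/K_{\mathrm{nom}})^{1/3}$, and combining this with $K_{\mathrm{nom}}/K_{\mathrm{eff}} = (1-\tau)^2$ recovers the quoted void proportions; this is a direct restatement of the model, with no extra assumptions.

```python
# Check of the void-fraction model: from the fitted effective thicknesses,
# K_eff/K_nom = (t_eff/t_nom)^3, and volume conservation with a void
# fraction tau gives K_nom/K_eff = (1 - tau)^2.
t_nom = 20.0   # nominal Au thickness (nm)

def void_fraction(t_eff):
    k_ratio = (t_eff / t_nom) ** 3        # K_eff / K_nom
    return 1.0 - (1.0 / k_ratio) ** 0.5   # solve K_nom/K_eff = (1 - tau)^2

tau_sm_nm = void_fraction(25.6)   # SM/NM tweezer -> ~30%
tau_sm_sm = void_fraction(47.7)   # SM/SM tweezer -> ~70%
print(f"tau SM/NM ~ {tau_sm_nm:.0%}, tau SM/SM ~ {tau_sm_sm:.0%}")
```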
It is also important to note that, according to equation (1), the linear fits in Fig. 5 should in principle cross the zero point. However, this is not the case. Indeed, as explained earlier, some residual stress arises within the Au hinge and, depending on the thermal history of each hinge, this stress can be either tensile or compressive. This explains why the curves in Fig. 5c,d do not cross the zero point. In the case of Fig. 5c, the angle value at zero torque is positive, which means that the residual stress tends to open the tweezers. In contrast, the graph in Fig. 5d represents a case where the residual stress tends to close the tweezers. While we can optimize the fabrication process to remove the porosity of the Au hinge, residual mechanical stress will always be present. However, analyses such as the one presented in Fig. 5 make it possible to characterize the effect of such stress on the tweezers.
In summary, the real-time actuation of the tweezers by moving the NdFeB microsphere and the torque calculations support the proof of concept of the nanotweezers. In this work, experimental results are shown only for SM/NM and SM/SM tweezers. Realizing hard layers in SM/HM tweezers requires additional technical steps. The goal of this work was to present the working principle of the magnetic nanotweezers, and showing the actuation of anchored SM/NM and SM/SM tweezers is sufficient for that purpose. SM/HM tweezers will actually be of significant importance for applications in fluids, i.e. when they are released in a solution, because the hard magnetic layer will allow the tweezers to be oriented depending on the external field. Both jaws will tend to orient in a way that minimizes the Zeeman energy, but the harder layer would always follow the field, thus orienting the tweezer. So, as a future prospect, the tweezers could be released in a solution, displaced in the fluid by application of a field gradient and oriented by application of a magnetic field. The effect of the magnetic field would therefore be twofold: it would control the opening/closing movements of the tweezers and orient them during their displacement. Such remote actuation of the tweezers in a fluid would make them an attractive tool for manipulating and grabbing micro- or nanometric objects. With a proper surface functionalization, tweezers can be designed to target specific biological environments or catch biomolecules.
## Methods
The probe used in the FIB experiment is homemade and involves several preparation steps. The hard magnetic NdFeB microsphere was obtained by drying a liquid containing NdFeB powder, having a relatively high dispersion in particle size with a mean diameter of 50 μm, on a silicon wafer. The more the liquid wets the Si wafer, the more dispersed the spheres will be, thus facilitating the selection of a sphere with a proper diameter given the size of the tweezers. The dispersed spheres are observed in a SEM chamber equipped with FIB. A soft contact is made between the selected sphere and an AFM tip, then the two elements are welded to each other with a tungsten bond, which is done by a gas-assisted deposition process. The gas used here is tungsten hexacarbonyl (W(CO)6). Once the probe has successfully been constructed, it is taken out of the SEM chamber in order to be magnetized between the poles of the electromagnets of a VSM (magnetic field of 1.7 T), prior to the tweezers actuation experiments.
Removing the Al sacrificial layer between the two jaws at the end of the tweezers fabrication process is actually a challenging process because one needs to dissolve this layer without altering any of the jaws. The tweezers are dipped in an Al etchant solution for 20 min. The toughest step is the drying step, since capillary forces due to the removal of the liquid can exert forces on the jaws and therefore break the hinges. Capillary forces occur when a liquid meniscus forms between the jaws. The formation of such meniscus can however be minimized by using a supercritical dryer.
No effect of electrostatic forces has been observed during SEM characterization. The tweezers themselves do not charge during the SEM observations since they are metallic and not made of insulating materials. Charge-up phenomena during SEM observations usually occur progressively and alter the image over time. However, in our case, no distortion and no bleaching of certain areas were observed. The SEM images are very stable (as long as no magnetic field is introduced). The presence of electrostatic forces would indeed contribute to actuating the tweezers; however, such actuation was not observed. The tweezers remain in their state even after long observations with the electron beam. The pillars holding the tweezers are made of Si, which is not perfectly conducting but not insulating enough for charges to accumulate.
## References

1. Pankhurst, Q., Connolly, J., Jones, S. & Dobson, J. Applications of magnetic nanoparticles in biomedicine. J. Phys. D: Appl. Phys. 36, R167 (2003).
2. Krishnan, K. M. Biomedical Nanomagnetics: A Spin Through Possibilities in Imaging, Diagnostics, and Therapy. IEEE Trans. Magn. 46, 2523–2558 (2010).
3. Mahmoudi, M., Sant, S., Wang, B., Laurent, S. & Sen, T. Superparamagnetic iron oxide nanoparticles (SPIONs): development, surface modification and applications in chemotherapy. Adv. Drug Deliv. Rev. 63, 24–46 (2011).
4. Nguyen, H. H., Nguyen, H. L., Nguyen, C. & Ngo, Q. T. Preparation of magnetic nanoparticles embedded in polystyrene microspheres. J. Phys.: Conf. Ser. 187, 012009 (2009).
5. Haber, C. & Wirtz, D. Magnetic tweezers for DNA micromanipulation. Rev. Sci. Instrum. 71, 4561–4570 (2000).
6. Strick, T. R. et al. Stretching of macromolecules and proteins. Rep. Prog. Phys. 66, 1–45 (2003).
7. Walter, N., Selhuber, C., Kessler, H. & Spatz, J. P. Cellular unbinding forces of initial adhesion processes on nanopatterned surfaces probed with magnetic tweezers. Nano Lett. 6, 398–402 (2006).
8. Celedon, A. et al. Magnetic tweezers measurement of single molecule torque. Nano Lett. 9, 1720–1725 (2009).
9. Neuman, K. C. & Nagy, A. Single-molecule force spectroscopy: optical tweezers, magnetic tweezers and atomic force microscopy. Nat. Methods 5, 491–505 (2008).
10. De Vries, A. H. B., Krenn, B. E., Van Driel, R. & Kanger, J. S. Micro magnetic tweezers for nanomanipulation inside live cells. Biophys. J. 88, 2137–2144 (2005).
11. Chen, L., Offenhäusser, A. & Krause, H. J. Magnetic tweezers with high permeability electromagnets for fast actuation of magnetic beads. Rev. Sci. Instrum. 86, 044701 (2015).
12. Vlijm, R., Mashaghi, A., Bernard, S., Modesti, M. & Dekker, C. Experimental phase diagram of negatively supercoiled DNA measured by magnetic tweezers and fluorescence. Nanoscale 7, 3205–3216 (2015).
13. Chaves, R. C., Bensimon, D. & Freitas, P. P. Single molecule actuation and detection on a lab-on-a-chip magnetoresistive platform. J. Appl. Phys. 109, 064702 (2011).
14. Ashkin, A. Acceleration and Trapping of Particles by Radiation Pressure. Phys. Rev. Lett. 24, 156 (1970).
15. Diller, E. & Sitti, M. Three-dimensional programmable assembly by untethered magnetic robotic micro-grippers. Adv. Funct. Mater. 24, 4397–4404 (2014).
16. Zhang, J. & Diller, E. Tetherless mobile micrograsping using a magnetic elastic composite material. Smart Mater. Struct. 25, 11LT03 (2016).
17. Kim, J. et al. Programming magnetic anisotropy in polymeric microactuators. Nat. Mater. 10, 747–752 (2011).
18. Hu, W. et al. High-Moment Antiferromagnetic Nanoparticles with Tunable Magnetic Properties. Adv. Mater. 20, 1479–1483 (2008).
19. Joisten, H. et al. Self-polarization phenomenon and control of dispersion of synthetic antiferromagnetic nanoparticles for biological applications. Appl. Phys. Lett. 97, 253112 (2010).
20. Kim, D. H. et al. Biofunctionalized magnetic-vortex microdiscs for targeted cancer-cell destruction. Nat. Mater. 9, 165–171 (2010).
21. Leulmi, S. et al. Comparison of dispersion and actuation properties of vortex and synthetic antiferromagnetic particles for biotechnological applications. Appl. Phys. Lett. 103, 132412 (2013).
22. Courcier, T. et al. Tumbling motion yielding fast displacements of synthetic antiferromagnetic nanoparticles for biological applications. Appl. Phys. Lett. 99, 093107 (2011).
23. Truong, A. et al. Magneto-optical micromechanical systems for magnetic field mapping. Sci. Rep. 6, 31634 (2016).
24. Nogués, J. & Schuller, I. K. Exchange bias. J. Magn. Magn. Mater. 192, 203–232 (1999).
25. Stiles, M. D. & McMichael, R. D. Temperature dependence of exchange bias in polycrystalline ferromagnet-antiferromagnet bilayers. Phys. Rev. B 60, 12950 (1999).
26. Stiles, M. D. & McMichael, R. D. Coercivity in exchange-bias bilayers. Phys. Rev. B 63, 064405 (2001).
27. Antman, S. S. Nonlinear Problems of Elasticity (Springer, New York, 2005).
28. Scholz, W. et al. Scalable Parallel Micromagnetic Solvers for Magnetic Nanostructures. Comp. Mat. Sci. 28, 366–383 (2003).
29. Rondin, L. et al. Stray-field imaging of magnetic vortices with a single diamond spin. Nat. Commun. 4, 2279 (2013).
30. Tetienne, J. P. et al. Quantitative stray field imaging of a magnetic vortex core. Phys. Rev. B 88, 214408 (2013).
31. Wang, L. & Prorok, B. C. The Influence of Deposition Technique on the Mechanical Properties of Freestanding Gold Films. SEM Annual Conference & Exposition on Experimental and Applied Mechanics (2007).
32. Tokas, R. B. et al. Oblique angle deposition of HfO2 thin films: quantitative assessment of indentation modulus and micro structural properties. Mater. Res. Express 2, 035010 (2015).
33. 33.
Abelmann, L. & Lodder, C. Oblique evaporation and surface diffusion. Thin Solid Films 305, 1–21 (1997).
## Acknowledgements
This work was supported by a grant of ANR (Agence Nationale de la Recherche), project P2N NANO-SHARK (ANR-11-NANO-001). We thank the Institut Néel for providing us with the NdFeB microspheres.
## Author information
### Contributions
B.D., Y.H. and H.J. designed the research. T.L., R.C., E.G., S.A., C.I. and P.S. established the methods for the fabrication of the samples. B.D., C.I. and A.T. worked on the calculations. L.D.B.-P. and N.S. performed the micromagnetic simulations. The manuscript was written by A.T., discussed and reviewed by all authors.
### Corresponding author
Correspondence to Bernard Dieny.
## Ethics declarations
### Competing Interests
The authors declare that they have no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Iss, C., Ortiz, G., Truong, A. et al. Fabrication of nanotweezers and their remote actuation by magnetic fields. Sci Rep 7, 451 (2017). https://doi.org/10.1038/s41598-017-00537-6
- Optical response of magnetically actuated biocompatible membranes. H. Joisten, A. Truong, S. Ponomareva, C. Naud, R. Morel, Y. Hou, I. Joumard, S. Auffret, P. Sabon & B. Dieny. Nanoscale (2019)
- Fabrication and manipulation of nanopillars using electron induced excitation. Nitul S. Rajput, Francoise Le Marrec. Journal of Applied Physics (2018)
- Soft Micro- and Nanorobotics. Chengzhi Hu. Annual Review of Control, Robotics, and Autonomous Systems (2018)
http://cogsci.stackexchange.com/questions/5259/identifying-overly-complex-tasks

@NickStauner A large error rate among system task performers and a failure of system participants to get better over time don't imply that experience is helping to navigate the system. I'm not sure you've really proven your point here. – Chuck Sherrington Jan 2 '14 at 13:33
True; "With the right amount of experience, professionals in tax, law, and medical coding navigate these concepts just fine" does. ;) What you just quoted could result from a particularly steep learning curve too, though, so you can't rule out complexity as the problem just because few or none have navigated it. My point is that you can't really prove your point (that your phrasing is necessarily more suitable), so it might boil down to a trivial distinction, especially if the original question applies reasonably well to both ambiguous and complex (somewhat synonymous!) systems. – Nick Stauner Jan 2 '14 at 13:55
https://www.physicsforums.com/threads/a-journey-to-the-manifold-su-2-part-ii-comments.929020/ | # Insights A Journey to The Manifold SU(2) - Part II - Comments
1. Oct 19, 2017
### Staff: Mentor
2. Oct 20, 2017
### lavinia
These notes would be helpful for a student who is learning about Lie groups because they work through an important specific example - the example of $SU(2,C)$.
The student would have to master the Lie group technology in a different place.
I especially like the way the Hopf fibration is worked out.
The introductory section on spheres is not specific to $SU(2,C)$ so for me personally it was distracting. I also found the initial example of a local Lie group distracting.
The first paragraph of Part 1 says that it hopes to pique interest in Lie group mathematics. For this, some comment on why representations are important/interesting - in mathematics - would have helped.
There is a lot of calculation here and some people might like an intuitive beacon to light the way along the journey.
Other thoughts:
- The notes do not require showing that the sphere is a manifold.
- For intuition, one might describe the actions of special orthogonal groups on spheres as rotations. To me this would be more intuitive than matrix multiplication. Even non-mathematicians can imagine a rotation and would immediately see that any rotation must have two fixed poles. The stabilizer then acts transitively on the tangent sphere at the poles. For $SO(3)$ the stabilizer also acts without fixed points and one sees that $SO(3)$ is the tangent circle bundle of the 2 sphere. Perhaps one could go from here to illustrate the difference between $SO(3)$ and $SU(2,C)$.
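A minimal numerical sketch of this last point: in the unit-quaternion picture of $SU(2) \cong S^3$, the covering $SU(2) \to SO(3)$ is 2-to-1 because $q$ and $-q$ induce the same rotation. The function name and the test quaternion below are just illustrative.

```python
import numpy as np

def quat_to_rot(q):
    """Map a unit quaternion (w, x, y, z), i.e. a point of S^3 ≅ SU(2),
    to the rotation matrix it induces on R^3."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

q = np.array([1.0, 2.0, 3.0, 4.0])
q /= np.linalg.norm(q)                 # normalize onto S^3
R1, R2 = quat_to_rot(q), quat_to_rot(-q)

assert np.allclose(R1, R2)                 # q and -q give the SAME rotation (2-to-1)
assert np.allclose(R1 @ R1.T, np.eye(3))   # orthogonal...
assert np.isclose(np.linalg.det(R1), 1.0)  # ...with det +1, so R1 lies in SO(3)
```

Each fiber of the covering map is an antipodal pair {q, -q}, i.e. the kernel {±1}, which is exactly the difference between $SU(2)$ and $SO(3)$.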
Last edited: Oct 20, 2017
3. Oct 21, 2017
### Staff: Mentor
Thank you for the detailed review, @lavinia.
You are absolutely right that the initial example and the spheres feel distracting. It disturbed me, too. The reason is that I originally wanted to focus on vector fields instead of the group. I began by noting that there is this general vision of vectors attached to points on one hand and the abstract formulas on the other. I thought some examples with actual curves (flows, 1-parameter groups), groups and specific functions would be helpful, as they are often banished to exercises or get lost in the "bigger" theory. That's where those two paragraphs came from. As I looked closer into the example of SU(2), I got more and more involved with it instead of my original purpose, vector fields.
So the actual distraction had been SU(2). To be honest, I wanted to understand connections better, especially Ehresmann and Levi-Civita, and hope to deal with them (on the example of SU(2) again) in a third part. So the two parts so far are more of a "what has happened before" part of the story. But the more I've read about SU(2), the more I found it interesting. I kept the distracting parts, as I recognized that they are a good source to quote or copy & paste from in answers on PF. Up to now, I have used the various notations of derivatives as well as the stereographic projection in an answer to a thread here. And as one-parameter groups are essential to the theory, I kept this part. And why not have a list of spheres of small dimensions, when one of them is meant to be the primary example of actual calculations? That's basically the reason for the inhomogeneous structure we both felt, and why the article is a bit of a collection of formulas.
So thanks, again, and I'll see if I can add a couple of explanations which you suggested.
4. Oct 22, 2017
### dextercioby
It would have helped to describe in at most two paragraphs the connection between a local Lie group and a global Lie group, and from there the connection between the notions of globally isomorphic Lie groups and locally isomorphic Lie groups. Physicists usually gloss over these important definitions and theorems.
:)
http://www.vallis.org/blogspace/preprints/1107.4243.html | ## [1107.4243] Kinematic signature of an intermediate-mass black hole in the globular cluster NGC 6388
Authors: N. Lützgendorf, M. Kissler-Patig, E. Noyola, B. Jalali, P. T. de Zeeuw, K. Gebhardt, H. Baumgardt
Date: 21 Jul 2011
Abstract: Intermediate-mass black holes (IMBHs) are of interest in a wide range of astrophysical fields. In particular, the possibility of finding them at the centers of globular clusters has recently drawn attention. IMBHs became detectable since the quality of observational data sets, particularly those obtained with HST and with high resolution ground based spectrographs, advanced to the point where it is possible to measure velocity dispersions at a spatial resolution comparable to the size of the gravitational sphere of influence for plausible IMBH masses. We present results from ground based VLT/FLAMES spectroscopy in combination with HST data for the globular cluster NGC 6388. The aim of this work is to probe whether this massive cluster hosts an intermediate-mass black hole at its center and to compare the results with the expected value predicted by the $M_{\bullet} - \sigma$ scaling relation. The spectroscopic data, containing integral field unit measurements, provide kinematic signatures in the center of the cluster while the photometric data give information on the stellar density. Together, these data sets are compared to dynamical models and present evidence of an additional compact dark mass at the center: a black hole. Using analytical Jeans models in combination with various Monte Carlo simulations to estimate the errors, we derive (with 68% confidence limits) a best fit black-hole mass of $(17 \pm 9) \times 10^3 M_{\odot}$ and a global mass-to-light ratio of $M/L_V = (1.6 \pm 0.3) \ M_{\odot}/L_{\odot}$.
#### Jul 31, 2011
1107.4243 (/preprints)
2011-07-31, 08:05
http://www.mzan.com/article/48755909-incompatible-with-option-fdefault-real-8-during-compilation.shtml | Home ISO_FORTRAN_ENV or -fdefault-real-8 to promote reals to double precision
Reply: 0
# ISO_FORTRAN_ENV or -fdefault-real-8 to promote reals to double precision
user2488 Published in June 19, 2018, 4:19 pm
I've always been using the `-fdefault-real-8` option of gfortran to automatically promote every single `REAL` declared anywhere in the program to double precision, along with any constant, e.g. `1.23`. If I ever wanted to switch back to single precision, I only had to remove that option and recompile, without changing a single character in the source code.

At some point I started using the `ISO_FORTRAN_ENV` module, since it gives me constants like `INPUT_UNIT`/`OUTPUT_UNIT`/`ERROR_UNIT`, as well as `IOSTAT_END`, `IOSTAT_EOR` and others (which seemed to be a good and easy move in the direction of portability; am I wrong?). From then on, I've been seeing and ignoring the following warning:

    Warning: Use of the NUMERIC_STORAGE_SIZE named constant from intrinsic module
    ISO_FORTRAN_ENV at (1) is incompatible with option -fdefault-real-8

since the incompatibility seems to have had no effect so far.

Now I'd like to get rid of this warning, if that is possible and worth it. If I understand correctly, to avoid the warning I should give up the `-fdefault-real-8` option and change every `REAL` to `REAL(real64)` and/or `REAL(dp)` (provided that, in the latter case, the statement `USE, INTRINSIC :: ISO_FORTRAN_ENV, dp => real64` is put in that unit), which is not a difficult task for sed or vim. Nevertheless, this change wouldn't be quite the same as using `-fdefault-real-8`, since all constants would stay single precision as long as I don't add the `d0` suffix to them.

Assuming the `-fdefault-real-8` option is removed and `ISO_FORTRAN_ENV` is used everywhere: is there any way to make every constant across the program behave as if it had a `d0` suffix? And whether or not that is possible, am I right that I can put the following lines in a single module, used by all other program units, each of which can then use `dp` as its kind type parameter?

```fortran
USE, INTRINSIC :: ISO_FORTRAN_ENV
INTEGER, PARAMETER :: dp = real64
```

I would prefer this approach, since I could then switch to `real32`, `real128` or whatever by changing only that one line.
https://www.nature.com/articles/s41598-018-25914-7?error=cookies_not_supported&code=0f12e509-de3a-4b6c-abf4-4c91b4ad4b95 | ## Introduction
The spread of emergent viral diseases critically depends upon rapid adaptation to novel hosts1. Analogous to that observed in natural populations (e.g., ref.2), experimental evolution of viral pathogens has also demonstrated rapid adaptation to new hosts3. Nevertheless, the molecular basis of host specificity within viruses remains contentious4,5,6,7,8. While selection experiments have shown that single nucleotide changes can be sufficient to facilitate a viral host shift (e.g., ref.9), bioinformatic surveys repeatedly show a high degree of genomic correspondence between viral pathogens and their hosts4,10. This is most evident within bacteriophage (phage) species7,11. For instance, codon usage of coliphages generally reflects the biased usage of their host which itself reflects the most abundant cognate tRNAs available within host cells12,13,14 and mRNA levels15. This correspondence of phage and host codon usage is not surprising given that viruses are frequently, often entirely, reliant on their hosts for biosynthesis. This dependency engenders strong selection for virus genome compatibility with potential hosts, a necessity for a successful infection.
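As a toy illustration of this phage-host codon correspondence (with made-up 8-codon sequences, not the actual ΦX174 or E. coli genes): comparing codon-frequency vectors shows that synonymous recoding alone collapses the similarity to a host gene, even though the encoded protein is unchanged.

```python
from collections import Counter
from itertools import product

def codon_freqs(seq):
    """Relative frequency of each of the 64 codons in an in-frame CDS."""
    codons = [seq[i:i+3] for i in range(0, len(seq) - len(seq) % 3, 3)]
    counts = Counter(codons)
    return {''.join(c): counts[''.join(c)] / len(codons)
            for c in product('ACGT', repeat=3)}

def cosine(f, g):
    """Cosine similarity of two 64-dimensional codon-frequency vectors."""
    dot = sum(f[c] * g[c] for c in f)
    return dot / (sum(v*v for v in f.values())**0.5
                  * sum(v*v for v in g.values())**0.5)

# Both phage variants encode the same peptide (Met-Lys-Glu-Glu-Glu-Lys-Glu-stop),
# but the second uses the rarer synonymous codons (AAG for Lys, GAG for Glu).
host_heg    = "ATGAAAGAAGAAGAAAAAGAATAA"
phage_wt    = "ATGAAAGAAGAAGAAAAAGAATAA"
phage_deopt = "ATGAAGGAGGAGGAGAAGGAGTAA"

h = codon_freqs(host_heg)
print(round(cosine(codon_freqs(phage_wt),    h), 2))   # 1.0
print(round(cosine(codon_freqs(phage_deopt), h), 2))   # 0.09
```

The identical-protein, low-similarity case is the situation engineered deliberately in the deoptimized strains described below.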
Though once referred to as "silent", synonymous mutations are now known to have a profound effect on both an organism's phenotype and fitness16,17,18. Deviations from neutral expectations of codon usage can be the result of selection for translational efficiency and/or accuracy, mutational biases, drift, control of gene expression, and structure19,20,21,22,23,24,25,26,27,28. Reduced viral fitness has been detected in molecular engineering of viral codons via synonymous mutations29,30,31,32,33,34,35,36,37,38,39,40,41 (also see reviews42,43). These fitness losses are largely attributed to a reduction of genome translation and show that codon engineering is a promising avenue for generating new vaccines29,30,31,32,33,36,37,38,39,40,41,42,43. While the immediate cost of synonymous mutations to viral fitness has been observed, the causes and consequences of sequence-specific host adaptation remain elusive.
Phages provide an ideal model system for exploring the evolution of codon usage bias. The literature is rich with experimental evolution of phages, applying various forms of selection44,45,46,47,48,49,50. Furthermore, substantial bioinformatic analysis of codon usage within phage7,11 and bacterial19 genomes has been conducted. Through experimental evolution of the codon-based attenuated T7 phage, fitness recovery was observed by evolutionary changes in codon use34. More rapid rescue has been observed in the passage of codon deoptimized eukaryote-infecting viruses having smaller RNA-based genomes35,36. The effects of codon deoptimization on fitness and the recovery of fitness, however, varies from one virus to the next51. As prior evidence has shown, synonymous mutations specifically introduced in species having small, compact genomes can have a profound impact on a species’ fitness52,53,54. The mechanisms that lead to pathogen-host genome compatibility remain uncertain, leaving the causal factors open questions.
We performed long-term experimental evolution to determine how and at what rate virus-host codon usage evolves, using engineered phage genomes. The coding sequence of the bacteriophage ΦX174 was targeted, replacing wild-type codons with deoptimized (relative to its host, Escherichia coli C) codons. Three different engineered strains, targeting two different coding regions within the ΦX174 genome, were created. The ΦX174 genome is small and compact, encoding for just 11 genes in the 5386 nucleotide ssDNA, circular genome; furthermore, ΦX174 is known to be sensitive to mutations54,55. The combination of engineered sequence changes allows for the simultaneous examination of the role of selection to affect sequence specific adaptation, specifically rates of reversion and translational efficiency within the E. coli host. Complementing our experimental efforts, extensive computational simulations were performed to assess the role of selection for translational efficiency. This multidisciplinary approach provides insight into how genome compatibility arises.
## Results
### Conservation of host genomic compatibility within microviruses
Codon usage within homologous gene sequences of ΦX174 and its two known closest relatives (G4 and α3) was examined. Despite only modest sequence similarity, orthologs are similar in their usage of codons favored within the highly expressed genes (HEGs) of their host, E. coli (Supplementary Fig. S1). This is true not only of the RefSeq sequences, but also microviruses isolated from environmental samples (results not shown). Furthermore, this trend was also observed within the more distant relative of ΦX174: ΦMH2K. ΦMH2K, also a microvirus, infects Bdellovibrio bacteriovorus. The variance of the estimated translation rate between genes was statistically significant (p-value = 0.00009) while the variance between the species was not. Thus, the observed level of gene-host codon compatibility in these microviruses is conserved regardless of the host species. The homologous coding regions for the F and J coding regions include a codon usage most congruent to their respective host’s codon usage biases (Supplementary Fig. S1) and thus were selected for subsequent experimental examination.
### Strain engineering
A 66 bp region within the ΦX174 capsid protein F coding region and a 69 bp region within the core protein J coding region were re-engineered to include alternate codons, often codons less preferred by the host, such that both regions were comparably deoptimized relative to the ancestral strain (see Methods; summarized in Table 1). Engineered mutants were created from a ΦX174 strain in our lab which was well-adapted to the growth conditions employed in the selection experiments carried out here. Two engineered mutants, S and E, were created for the F protein coding region. The S strain contains eleven synonymous substitutions within the 22 codon region (Supplementary Table S1); nine were achieved by single third position changes, while the remaining two codon substitutions included two base changes (first and second position Leucine). The E strain contained these same eleven synonymous substitutions in addition to one nonsynonymous codon replacement (Supplementary Table S1); this particular codon was chosen as sequenced ΦX174 strains vary in the amino acid encoded (histidine or arginine). Similarly, a deoptimized sequence was designed for a 23 codon region in J, henceforth referred to as the J strain. The J strain contains twelve synonymous substitutions; all substitutions are achieved by single third position changes (Supplementary Table S2).
The S and E strains were propagated for 35 transfers. While a single propagation of the S strain was performed, the E strain was propagated in quadruplicate. The J strains were propagated for 50 transfers, in triplicate. Four replicates of the Anc strain, the unaltered ancestral strain, were also propagated serving as a control. (Further details regarding the experimental design are included within the Methods.) To distinguish between the engineered genomic sequences and the evolved genomic sequences, the following notation will be used. The engineered mutant strains prior to propagation are referred to as the “S strain”, “E strain”, and “J strain” created from the ancestral “Anc strain”. The serially passaged S, E and J strains are denoted as the S, E, and J lines, collectively referred to as the engineered lines, with replicates denoted by number. The propagated Anc strain is henceforth referred to as the C1, C2, C3, and C4 lines. As anticipated, initial plating of the engineered strains created here showed a significant reduction in the number of successful infections relative to the propagated Anc strain, as measured by plaque forming units (PFU) and burst size (Supplementary Fig. S2).
### Responses to selection
The targeted region was sequenced for all engineered lines from isolates collected after the 1st, 5th, 11th, 21st, and 35th transfer; additionally, the J lines were sequenced after the 50th transfer. Synonymous, as well as nonsynonymous, mutations occurred both for the codons that were initially manipulated as well as other codons in the region targeted in the engineered lines (Fig. 1). While the E2 line collected after the 11th transfer shows the most nonsynonymous differences, five, by the next sampling many of these differences were no longer present in the population. Although several of the codons fixed within the engineered lines were those that were present within the Anc strain, this was not a general result. For each of the mutations identified, changes in codons were identified and assessed relative to the codon usage within the HEGs of E. coli C. For all engineered lines, the majority of the mutations result in a substitution for a codon more frequently used within E. coli C’s set of HEGs (shown in Supplementary Tables S1 and S2).
To assess the putative effects of these mutations on the protein’s translational efficiency, we examined the codon adaptiveness (CA) of the targeted windows for each engineered line. This metric represents the individual engineered line’s usage of host-preferred codons relative to this same window in the Anc strain. Both the region within the F engineered lines as well as the region within the J engineered lines showed a consistent increase in CA over the course of selection (Fig. 2). The exception being the acquisition of a single synonymous mutation in the S line after 1 transfer and the J3 line after 11 transfers; the CA value of these lines, however, was rapidly improved by the next sampling. At the end of the selection experiment, seven of the eight engineered lines include codons which are utilized more frequently within the host’s set of HEGs than are present within this same window in the Anc strain (CA > 100%). Only the F coding region engineered line E1 did not exceed 100%. As the extension of the J lines for an additional 15 transfers reveals, the rate of change in the CA value diminishes (Fig. 2B). In parallel to the steady rise in the CA value, all of the engineered lines showed fitness improvements, with respect to both plaque formation and burst size (Supplementary Fig. S2). Figure 3 illustrates the individual mutations for three of the evolved lines; the remaining lines are shown in Supplementary Fig. S3 and a full listing of the mutations can be found in Supplementary Tables S1 (for the S and E lines) and S2 (for the J lines).
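One plausible formalization of a CA-style metric is sketched below (the paper's exact definition may differ): each codon gets a relative adaptiveness w, its count in host HEGs divided by the count of its most-used synonym; the window score is the geometric mean of w (CAI-style), reported relative to the ancestral window. The codon counts here are hypothetical, not the real E. coli C HEG usage.

```python
import math

# Hypothetical HEG codon counts for two amino acids (Lys: AAA/AAG, Glu: GAA/GAG).
heg_counts = {'AAA': 80, 'AAG': 20, 'GAA': 70, 'GAG': 30}
synonyms   = {'AAA': ('AAA', 'AAG'), 'AAG': ('AAA', 'AAG'),
              'GAA': ('GAA', 'GAG'), 'GAG': ('GAA', 'GAG')}

def w(codon):
    """Relative adaptiveness: HEG count over the max among its synonyms."""
    return heg_counts[codon] / max(heg_counts[c] for c in synonyms[codon])

def cai(window):
    """CAI-style geometric mean of relative adaptiveness over a codon window."""
    return math.exp(sum(math.log(w(c)) for c in window) / len(window))

anc        = ['AAA', 'GAG', 'AAA', 'GAA']   # ancestral window (not fully optimal)
engineered = ['AAG', 'GAG', 'AAG', 'GAG']   # deoptimized strain
evolved    = ['AAA', 'GAA', 'AAA', 'GAA']   # reverted toward host-preferred codons

for name, win in (('Anc', anc), ('engineered', engineered), ('evolved', evolved)):
    print(f"{name:10s} CA = {100 * cai(win) / cai(anc):5.1f}%")
```

Under this toy definition the Anc window sits at 100% by construction, the engineered window falls well below it, and a line that reverts to host-preferred codons can exceed 100%, mirroring the qualitative behaviour of the selected lines in Fig. 2.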
The simultaneous propagation of the C lines provides insight into the probability of mutations arising within the targeted regions of the engineered lines as a result of the selection experiment. The F and J protein coding regions were also sequenced for the C line after the 1st, 5th, 11th, 21st, 35th, and 50th transfers. No nonsynonymous mutations were detected within the J protein coding region. One was observed within the F protein coding region, at genome position 1727 (L242F). This nonsynonymous mutation was first detected after the 21st transfer and became fixed in the population; a synonymous mutation at this same position was detected as early as the 5th transfer. This nonsynonymous mutation, however, is not unique to the evolved line; of 67 publicly available genomes in GenBank (Supplementary Table S3), 15 have Leucine (including the Anc strain) while the remaining 52 have Phenylalanine at this position.
### Unraveling the selective forces increasing the lines’ CA
We investigated mutational effects using simulation. The simulation included the effects of random mutation and selection for translational efficiency (see Methods). Conducting 1000 replicates captured the landscape of mutations which could be explored by each engineered sequence. Comparison of the simulations and the experimental assays are shown for the S, E1, and J1 engineered lines in Fig. 4. (The remaining lines can be found in Supplementary Fig. S4.) As the number of mutations increases, increasing divergence is observed in the average CA values for a sequence under strong selection for translational efficiency (the 100% Selection for More Abundant Host tRNA model in yellow) and in its absence (the 100% Random Substitution model in blue). Even when mutations are introduced randomly, the CA value increases because the engineered sequences were severely deoptimized; however, in no case was the random model sufficient to recover the observed increases in codon adaptiveness.
The simulations for the S and E lines suggest that selection for translational efficiency is important in shaping the codon usage of all five of the engineered lines. While the role of selection for translational efficiency between the selected lines may vary, it is not sufficient to explain the number of mutations, reversals, or dN/dS rate. The mutational dynamics in the E1 (Fig. 4B) and E2 and E3 (Supplementary Fig. S4) lines indicate that translational selection is unlikely to be the only factor shaping their codon usage, following the mixed model (shown in Fig. 4 in green). In contrast, the experimental results for the three engineered J lines (Fig. 4C and Supplementary Fig. S4) mirror the expectations under strong selection to utilize codons more frequently within the host’s set of HEGs (yellow lines). Thus, biosynthetic compatibility appears to be a significant source of selection for these lines. The S (Fig. 4A) and E4 (Supplementary Fig. S4) lines also suggest that translational selection plays an important role in shaping its codon usage, albeit not as strong as within the engineered J lines.
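The two boundary models can be caricatured in a few lines (a sketch with a hypothetical two-amino-acid codon table, not the study's actual 64-codon model): under "100% random substitution" every synonymous change is kept, while under "100% selection for more abundant host tRNA" a change is kept only if it moves to the host-preferred codon. Tracking mean CA against mutation count reproduces the qualitative divergence of the two curves.

```python
import math, random

# Toy codon table with hypothetical host codon counts; `syn` maps each codon
# to its single synonymous alternative.
heg = {'AAA': 80, 'AAG': 20, 'GAA': 70, 'GAG': 30}
syn = {'AAA': 'AAG', 'AAG': 'AAA', 'GAA': 'GAG', 'GAG': 'GAA'}

def rel_ca(window, anc):
    """Codon adaptiveness of `window` relative to `anc`, in percent."""
    w = lambda c: heg[c] / max(heg[c], heg[syn[c]])
    mean_log = lambda win: sum(math.log(w(c)) for c in win) / len(win)
    return 100 * math.exp(mean_log(window) - mean_log(anc))

def evolve(window, n_mut, select):
    """Apply n_mut synonymous substitutions at random positions.
    select=True keeps a substitution only if it moves to the more abundant
    host codon; select=False keeps every substitution."""
    window = list(window)
    for _ in range(n_mut):
        i = random.randrange(len(window))
        new = syn[window[i]]
        if not select or heg[new] > heg[window[i]]:
            window[i] = new
    return window

random.seed(0)
anc   = ['AAA', 'GAA'] * 6          # host-preferred 12-codon window
deopt = ['AAG', 'GAG'] * 6          # fully deoptimized engineered window
for n_mut in (0, 4, 12):
    cas = [sum(rel_ca(evolve(deopt, n_mut, s), anc) for _ in range(1000)) / 1000
           for s in (False, True)]
    print(f"mutations={n_mut:2d}  random CA={cas[0]:5.1f}%  selected CA={cas[1]:5.1f}%")
```

Because the toy ancestral window is already fully host-preferred, the selected curve saturates at 100% here; in the experiment the Anc windows were not optimal, which is why several evolved lines could pass 100%.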
### Epistasis
We hypothesized that there would be sequence changes in other coding regions over the course of selection as a result of protein-protein interactions. Complete genome sequencing was performed for the final isolates of all engineered lines as well as intermediate populations for the S and E lines (see Methods). Figure 5 illustrates the mutations identified within the S, E, and J engineered lines over the course of the selection experiment. The majority of the mutations observed occurred within the structural proteins, regardless of the region engineered. Three of the E lines (E2, E3, and E4) collected after the 35th transfer include the excision of 27 nucleotides within the noncoding region between the J and F genes. In order to pinpoint when this excision occurred, isolates from the 22nd through 34th transfers were assayed for this excision via PCR of this region of the genome. The excision arose in the E2 line in the 27th transfer, in the E3 line in the 29th transfer, and in the E4 line in the 30th transfer. A full listing of the mutations observed outside of the engineered regions throughout the course of the selection experiment within the eight engineered lines can be found in Supplementary Table S4.
Comparing the genetic variability of extant ΦX174 populations (Supplementary Table S3) with the experimental populations evolved here revealed shared diversity. Five of the nonsynonymous mutations within the engineered lines (four within the S and E lines and one within the J lines) (Supplementary Table S4) and all of the nonsynonymous mutations within the control lines (C1, C2, C3, and C4) (Supplementary Table S5) have previously been detected. In addition, through sequence analysis we find that many of these sites are highly variable, in particular the mutations observed within the control lines (Supplementary Table S5), such that numerous different codons and amino acids are exploited in viable strains. Furthermore, the 27 nucleotide excision observed within the E2, E3, and E4 lines is not unique; complete genomes in GenBank also contain this deletion. In fact, there is a correspondence between one of these mutations, A16V in the H protein coding region, and the deletion, both in our lines and those available from NCBI. The remaining nonsynonymous mutations within the engineered lines have not been previously detected within the genomes examined.
## Discussion
Through direct molecular manipulation, we investigated codon bias as it evolved. The bacteriophage ΦX174 was genetically engineered to have non-optimal codons resulting in low fitness. Over the course of the selection experiment, however, fitness increased in parallel with the incorporation of more host preferred codons (Fig. 1, Supplementary Fig. S2). Both synonymous and nonsynonymous mutations were observed within the targeted regions as the engineering of synonymous substitutions likely permitted the evolving virus to explore alternative paths within sequence space. Neither this behavior nor the emergence of novel nonsynonymous mutations within the evolved engineered lines is exclusive to our study; similarly, throughout the passage of codon deoptimized HIV strains novel nonsynonymous mutations were frequent35. By targeting two individual coding regions of ΦX174 – the F and the J coding regions – we were able to determine that the response in phage-host codon compatibility observed was not a result of a particular gene or region selected. The engineered lines fixed between two (line J1) and six (lines E1, E3, and E4) reversions to the un-engineered ancestral codon. However, the reversions were almost exclusively divergent across lineages, with the exception of a single reversion at codon position 1, within the lines in which the F coding region was targeted (Fig. 3). Across the evolving lines, the majority of mutations within the targeted regions were for codons more frequently used within the host’s HEGs (Fig. 3). These results demonstrate parallel evolution in a molecular trait: phage-host codon compatibility.
Comparing codon usage within the individual targeted windows between the engineered lines and the Anc strain, we observed an increase in codon adaptiveness over the course of the selection experiment (Fig. 2). Much of the recovery of codon adaptiveness occurred early in the selection experiment regardless of the region targeted. Virtually all improvement in codon adaptiveness occurred within 21 transfers across all evolving lineages, and five of the lines (S, E1, E2, E3, and J1) had ~50% improvements within the first 5 transfers. The increase in CA was not restricted to evolution of the engineered codons, but also involved other codons in the targeted region, indicating that selection was not specific to those codons that were initially engineered. The evolved lines presented here provide empirical evidence that attenuation via codon deoptimization is not permanent, congruent with prior assessments of similar studies51.
Caution is however necessary in interpreting the evolutionary basis for increases in codon adaptiveness. The engineered sequence was severely codon deoptimized, and many mutations could have resulted in an increased CA value. The simulations performed under a strictly random substitution model (Fig. 4, blue lines) capture this consequence; the average final CA recovery under this model was 10%. Still, the rapid increase in CA over the course of the selection experiment across all lineages suggests that translational efficiency is a contributing factor shaping genome composition over time. The results of our simulations further support this conjecture. The experimental observations, in particular those of the J engineered lines, most closely fit models incorporating significant selection for more abundant host tRNAs (Fig. 4). The simulations uncover not only the landscape of mutations which could be explored by the engineered sequence but also the selective factors by which phage-host codon usage compatibility evolves. Similar to the results observed here, other studies which saw rapid virus-host codon compatibility recovery also observed fitness recovery35,36.
The extent to which virus and host sequences are compatible varies between genes (Supplementary Fig. S1), suggesting that it is well-tuned at a genomic level. Just as viral codon deoptimization can reduce viral fitness, so too can optimization of natively ‘non-optimal’ genes56. In fact, genome manipulation alone is known to have fitness effects57,58. Nevertheless, the reduced fitness observed for the engineered S, E, and J strains created here and in other studies of codon deoptimization29,30,31,32,33,34,35,36,37,38,39,40 is unlikely to be solely due to reduced translational efficiency. Codon engineering may lead to protein and mRNA misfolding or affect genome packaging, genome-capsid interactions, and protein-protein interactions. Even for the model bacteriophage ΦX174, many of the aforementioned processes are not fully understood. For instance, while it is known that some of the amino acids within the 66 bp targeted region of F interact with J, G, and F protein subunits during capsid formation59, only one nonsynonymous mutation (Q254H in the F coding sequence) occurred within a recognized protein-protein interaction site. Nonsynonymous mutations outside of the engineered region were observed in the evolved S, E, and J lines. This observation is not unique to the engineering of ΦX174, as other studies have likewise detected mutations outside of codon-modified segments34,35,36. The 12 nonsynonymous mutations in the S and E lines and the 10 in the J lines have not previously been observed in ΦX174 genomes. As these nonsynonymous mutations primarily occurred in structural proteins, they may have arisen in response to conformational changes in the engineered regions due to the initial molecular engineering and subsequent evolutionary response.
Isolating the contributions of selection for translational efficiency from those of translational accuracy, mutational bias, and drift has been the subject of decades of intense research activity13,20,21,22,60,61,62,63,64. Exploration of different phage-host systems provides greater insight into the evolution of codon usage within viruses. Using the ΦX174 system, we investigated translational efficiency in a small virus that does not encode its own tRNAs, as some larger phages do65, and is thus entirely dependent upon its host for biosynthesis. We observed rapid evolutionary responses that involved large increases in codon adaptiveness and fitness. While prior studies evolving codon-engineered phages have not observed such a rapid recovery34, we hypothesize that the rate of response is influenced by the genome itself – its size, topology, and composition. For instance, the physical constraints of single stranded genomes66 may contribute to the difference observed between the slow response of codon-modified T7 lines (dsDNA genomes) and the ΦX174 lines. The long-term evolution of the three engineered ΦX174 lines presented here provides the first empirical evidence of rapid selection for genome compatibility in a phage.
Exploring selection for virus-host genome compatibility in phages has two immediate benefits: it provides a model for engineering in viruses infective of eukaryotic cells (vaccine development) and engineering of phages for therapeutic use (phage therapy). The consistent increase in codons frequently utilized in the highly expressed genes of its host E. coli C suggests that translational efficiency was important during selection. The response was observed in all engineered strains and across all replicate lines. Thus, the consistent increase in virus-host genomic compatibility observed is a genome-level phenomenon rather than an artifact of engineering within a specific region/gene. This is further supported by the computational simulations performed. The results provide insight into the tempo and mode in which viruses adapt in response to available hosts.
## Materials and Methods
### Calculation of codon usage
The complete sequence and annotation for the E. coli C genome (GenBank: NC_010468) was downloaded from NCBI. The codon frequencies were calculated for the 40 highly expressed gene sequences (HEGs)19. The unscaled proportion of codons in each codon family was calculated. In contrast to the relative synonymous codon usage67, this value, which we refer to as NRSCU68, weights each amino acid equally. NRSCU values were retrieved from the Codon Bias Database68.
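The NRSCU calculation described above — a codon's count divided by the total count of its synonymous family, without RSCU's scaling by family size — can be sketched as follows. The codon-table fragment and the counts are illustrative stand-ins, not the CBDB pipeline itself:

```python
from collections import defaultdict

# Toy codon -> amino-acid table covering two synonymous families
# (a full standard-genetic-code table would be used in practice).
CODON_TO_AA = {
    "CTG": "L", "CTA": "L", "TTA": "L",
    "CGC": "R", "CGT": "R",
}

def nrscu(codon_counts):
    """Unscaled proportion of each codon within its synonymous family.

    Unlike RSCU, this does not scale by family size, so every amino
    acid is weighted equally regardless of how many codons encode it.
    """
    family_totals = defaultdict(int)
    for codon, n in codon_counts.items():
        family_totals[CODON_TO_AA[codon]] += n
    return {codon: n / family_totals[CODON_TO_AA[codon]]
            for codon, n in codon_counts.items()}

# Hypothetical counts pooled over a set of highly expressed genes.
counts = {"CTG": 60, "CTA": 30, "TTA": 10, "CGC": 75, "CGT": 25}
values = nrscu(counts)  # e.g. values["CTG"] == 0.6
```

By construction the NRSCU values within each synonymous family sum to 1, which is what makes values comparable across amino acids with different family sizes.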
The genome and annotation files for the viral species ΦX174 (GenBank: NC_001422), G4 (GenBank: NC_001420), α3 (GenBank: NC_001330), and ΦMH2K (GenBank: NC_002643) were downloaded from NCBI. Comparisons for the ΦMH2K-host codon compatibility also required the files, again retrieved from NCBI, for its host Bdellovibrio bacteriovorus; the reference genome for the strain HD100 was used (GenBank: NC_005363). Similarly, the codon usage of the HEGs within the B. bacteriovorus genome was calculated. Fig. S1 illustrates the phage-host codon usage compatibility (NRSCU value) for the six homologous coding regions of ΦX174, G4, α3, and ΦMH2K (panel A) and for all 11 homologous genes of ΦX174, G4, and α3 (panel B).
### Sequence design
Using the genome sequence for the ΦX174 Anc strain (GenBank: AF176034)45, the restriction enzyme cut sites within the F capsid coding region were identified with the NEB Cutter online tool69; PshAI and AhdI were selected because each recognized unique cut sites within the phage’s genome (at nucleotide positions 1694 and 1765, respectively). A 66 bp (nucleotide positions 1700–1765) region between these two cut sites, 22 codons, was then assessed for the individual codon usage within the E. coli C host species according to the codon bias of the HEGs from our calculations (Supplementary Table S1). Two sequences were designed, each containing eleven synonymous substitutions. The S strain includes only these eleven synonymous mutations while the E strain includes the synonymous mutations as well as a single nonsynonymous mutation at genome position 1718–1720. The nonsynonymous substitution of CGC (Arginine) for the least favored codon of Leucine was chosen as it is one of the least conserved residues within the region, as denoted by PDBsum’s residue conservation calculations70,71. The J strain targeted the region within the ΦX174 Anc strain, position 893–961. The restriction enzyme cut sites within the J coding region were also identified with the NEB Cutter online tool69; BstAPI and Sau96I were selected because each recognized cut sites flanking the coding region (at nucleotide positions 898 and 978, respectively). Just as had been performed for the design of the S and E strains, each codon in the J region targeted was compared to the NRSCU value for the E. coli genome (Supplementary Table S2). Only synonymous mutations were incorporated within the design of the engineered J strain. The oligos for the engineered sequences were synthesized by and obtained from Eurofins MWG Operon.
### Creation of engineered strain
The Anc strain was originally obtained from C. Burch (University of North Carolina, NC). This ancestral strain was plated from our freezer stock collection. One plate was harvested for the C line (control) and production of the engineered strains. Genomic extraction was performed using the UltraCleanTM Microbial DNA Isolation Kit following the standard protocol with an additional heating of the prep for 10 minutes at 70 °C to increase lysis efficiency (as suggested by protocol). Double digests using the corresponding enzymes for the F and J coding regions were conducted following the manufacturer’s protocol (New England Biolabs). The digested DNA was separated by gel electrophoresis through a 1.2% agarose gel. DNA fragments were excised from the gel and purified using the UltraCleanTM 15 DNA Purification kit. Ligation was performed with 7 μl of the digested DNA, 1 μl of the synthesized oligo, 1 μl ligase 10 × buffer, and 1 μl T4 DNA ligase overnight at 4 °C.
The ligation product (5 μl) was incubated with 400 μl of E. coli C spheroplast for 20 minutes at 37 °C; PAM medium (3 ml, pre-warmed to 37 °C) was added and the preparation was incubated for 90 min. The phage was released using a 1:10 dilution into water then titered. This process was carried out for each strain. Each phage strain’s lysate was then plated as follows: 100 μl of phage was added to 3 ml 0.5% agar LB and 1 ml of turbid E. coli C culture and then overlaid on a 1.7% agar LB plate. Plates were incubated overnight at 37 °C. Plates were harvested and suspended in 0.8% saline solution and treated with 50 μl chloroform. Single plaques were selected for each strain as the initial genotype for the subsequent lines. The genomes of the three engineered strains and the ancestral strain were confirmed by capillary sequencing.
### Propagation of engineered lines
The host E. coli C strain was also obtained from C. Burch (University of North Carolina, NC). Propagations were carried out as follows. One line of the S strain, four replicate lines of the E strain, three replicate lines of the J strain, and four lines of the ancestral strain (Anc) to serve as a control were propagated. While the S and E lines were propagated for 35 transfers, the J and Anc control lines were propagated for an additional 15 transfers. Co-cultures were carried out for seven hours per transfer. The emergence of bacterial resistance was also measured for co-cultures of this duration, confirming that phage-sensitive E. coli dominated the population.
Initially, LB was inoculated with the host E. coli C strain taken from our frozen stock collection. Then 2 ml of turbid E. coli C culture in exponential growth was aliquoted into a 13 mm culture tube along with 500 μl of phage solution titered such that the initial MOI < 0.001. (Under the conditions described hereafter, bacterial growth curves for our E. coli C strain were conducted – quantified both by spectrophotometry and colony counts – to ascertain phases of growth and CFU/mL throughout; results not shown.) The tube was then capped and placed in a shaking incubator at 37 °C for 7 hours, after which the tube was treated with 200 μl of chloroform and gently vortexed for 5 seconds. Next, 500 μl was collected to inoculate freshly grown E. coli C in a new culture tube and 500 μl was collected into a microcentrifuge tube and stored at 4 °C. Every third transfer an additional 100 μl was collected and plated. Phage isolates were plated as described previously; virus lysates were stored both at −80 °C in 50/50 glycerol/water (v/v) as well as at 4 °C.
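The MOI constraint above is simply phage particles added per bacterial cell present. A minimal check — the titers below are illustrative, not the values actually measured in this study:

```python
def moi(pfu_added, total_cfu):
    """Multiplicity of infection: phage particles added per bacterial cell."""
    return pfu_added / total_cfu

# Illustrative: adding 5e4 PFU to 2 mL of culture at 5e8 CFU/mL
# gives MOI = 5e-5, well under the 0.001 ceiling used per transfer.
assert moi(5e4, 2 * 5e8) < 0.001
```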
In an effort to maintain a static E. coli C population and thus minimize bacterial resistance to the phage from one transfer to the next, fresh E. coli C cultures were made daily from naïve cultures. Prior to inoculation with phage lysate, the naïve E. coli C culture was grown to the same density as the initial inoculations.
### Sequencing
Genomic DNA was extracted from a single genotype per collection time using the UltraCleanTM Microbial DNA Isolation kit as described previously. Twelve primer pairs were designed using the Primer3 web-application72; when all twelve pairs are used, a minimum of 2× genome coverage is possible (primer sequences available upon request). PCR products were purified using ExoSAP-It and sequenced by the University of Chicago Cancer Research Center DNA Sequencing Facility.
Sequencing of the complete genome with 4× coverage was conducted after the 1st, 5th, 11th, 21st, and 35th transfers for the S and E lines and after the 50th transfer for the J lines. The C1 line was also sequenced after the 1st, 5th, 11th, 21st, and 35th transfers. Sequences of the final evolved lines have been deposited for the C1 line (GenBank: HM775306), S line (GenBank: HM775307), E1 line (GenBank: HM775308), E2 line (GenBank: HM775309), E3 line (GenBank: HM775310), and E4 line (GenBank: HM775311). The three J lines have been deposited as well (GenBank numbers being processed).
Sequencing at each collection time was conducted initially by extracting viral DNA from lysate. As such, the potential for numerous genotypes to be pooled existed. Additionally, we plated the collected lysate via serial dilutions and selected plaques at random for sequencing. In all cases the same genotype was recovered, suggesting relatively low heterogeneity within the population.
### Sequence analysis
The sequences generated in this study were assembled using LaserGene SeqMan (DNASTAR, Inc.). Comparisons between the isolate contigs and the ancestral strain’s sequence were conducted by performing multiple sequence alignments using ClustalW within BioEdit (http://www.mbio.ncsu.edu/BioEdit/bioedit.html). Environmental sample genomes (Supplementary Table S3; GenBank: AY751298, DQ079870-2, DQ079874-9907, DQ079909, NC_007817, NC_007821, and NC_007856)73 were also downloaded from NCBI for comparison.
We used the unscaled proportion of codons in each codon family (NRSCU). This metric captures only increases/decreases in the use of host-preferred codons. These values are available through the CBDB site68. For each sequenced isolate, the codon adaptiveness or CA value was quantified. This metric represents the individual engineered line’s codon usage in comparison to this same window in the Anc strain relative to the codon usage within the host’s HEGs. This metric was implemented rather than the codon adaptation index or CAI value74 which takes length of the sequences into consideration; as we were comparing a region of the same length, length was not a contributing factor.
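The paper does not spell out the CA formula explicitly, so the sketch below is one plausible reading — the evolved window's mean host-HEG NRSCU expressed relative to the ancestral window over the same positions. The normalization, function name, and toy values are all assumptions for illustration and may differ from the authors' implementation:

```python
def codon_adaptiveness(window_codons, anc_codons, host_nrscu):
    """Illustrative CA: mean host-HEG NRSCU of the evolved window,
    normalized by the ancestral window over the same positions.
    (Assumed form -- the paper's exact definition is not given.)"""
    evolved = sum(host_nrscu[c] for c in window_codons) / len(window_codons)
    ancestral = sum(host_nrscu[c] for c in anc_codons) / len(anc_codons)
    return evolved / ancestral

# Toy NRSCU values and a two-codon window (all hypothetical):
host = {"CTG": 0.6, "TTA": 0.1, "CTA": 0.3}
ca = codon_adaptiveness(["CTG", "CTA"], ["TTA", "TTA"], host)  # 4.5
```

Because both windows have the same length, any length normalization cancels — which is the paper's stated reason for preferring CA over CAI here.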
### Adsorption assays and burst assays
Plaque forming unit (PFU) counts were conducted by first titering the viral lysate (via dilution series conducted in triplicate) such that equivalent initial viral concentrations were plated: 100 μl of phage was added to 3 ml 0.5% agar LB and 1 ml of turbid E. coli C culture and then overlaid on a 1.7% agar LB plate. Each strain was plated with three replicates and plaques were counted. Adsorption assays were also performed, in triplicate per strain/line. The assay estimates fitness based on the doublings of phage concentration per hour, which is not scaled to generation time, which may differ among the engineered lines. This allows for a comparison between the Anc strain and evolved strains based on their absolute growth rate with their native host E. coli C. The assay is an additional measure of fitness and determines which phage can grow the fastest. E. coli C was grown for 90 minutes until visible turbidity was observed. 10 mL of E. coli C was inoculated with 1 mL of bacteriophage (titered such that MOI < 0.01) and incubated at 37 °C. After 5 minutes, 1 mL of the culture was removed, microcentrifuged, and the phage within the supernatant was plated via a dilution series; this represents the initial concentration of phage ($N_0$). After 60 minutes (t), another 1 mL of the culture was removed and the phage in the supernatant was again isolated and titered. This is considered the final concentration of phage ($N_t$). To find the adsorption rate (k), the equation $$N_t = N_0 e^{-kCt}$$ can be used, where C is the bacterial cell density75. The experiments for determining the adsorption rate can also be used to determine the burst size, taking just one pre-lysis data point and one post-lysis data point with multiple replicates76.
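Rearranging $N_t = N_0 e^{-kCt}$ gives $k = \ln(N_0/N_t)/(Ct)$. A minimal helper — the numbers plugged in are illustrative, not measured values from this study:

```python
import math

def adsorption_rate(n0, nt, cell_density, t):
    """Solve N_t = N_0 * exp(-k * C * t) for the adsorption rate k.

    Units of k follow from those of cell_density (e.g. cells/mL)
    and t (e.g. minutes).
    """
    return math.log(n0 / nt) / (cell_density * t)

# Illustrative: free phage falls from 1e6 to 1e5 PFU/mL over a
# 55-minute interval at a cell density of 1e8 CFU/mL.
k = adsorption_rate(1e6, 1e5, 1e8, 55)
```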
### Simulations
Each engineered line was evaluated separately. At each of the five (six in the case of the J lines) time points at which sequencing was performed, the same number of experimentally observed mutations – synonymous and nonsynonymous – were introduced. The CA was then calculated for the synthetically “evolved” sequence. Two strategies for mutation were developed. In the first, nucleotides were mutated with equal probability of substitution for each of the four bases. The second strategy only incorporated a mutation if the change in the codon was for a tRNA that is more abundant in the E. coli host (as assessed via the NRSCU values of the two codons). Because such a small region of the genome was being investigated, the particular nucleotides targeted were selected at random (using Marsaglia’s CMWC strategy). Simulations were executed with 1000 replicates per time point per line, accounting for varying influences (from 0–100%) of each of the two strategies. Simulations were performed using code developed here in C++ (available upon request).
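A stripped-down sketch of the two substitution strategies is below. The leucine-only NRSCU table, the per-attempt acceptance rule, and all names are simplifications for illustration, not the authors' C++ implementation:

```python
import random

# Toy host-HEG NRSCU values for the six leucine codons (hypothetical).
HOST_NRSCU = {"TTA": 0.05, "TTG": 0.10, "CTT": 0.15,
              "CTC": 0.20, "CTA": 0.10, "CTG": 0.40}
BASES = "ACGT"

def mutate_once(codons, p_selection):
    """Introduce one substitution into a list of codons.

    With probability p_selection the change is kept only if it moves
    the codon to one with a higher host NRSCU ('selection for more
    abundant host tRNA'); otherwise any substitution is kept ('random
    substitution'). Candidates outside the toy table are skipped.
    """
    codons = list(codons)
    while True:
        i = random.randrange(len(codons))        # random codon
        j = random.randrange(3)                  # random base within it
        new_base = random.choice(BASES.replace(codons[i][j], ""))
        cand = codons[i][:j] + new_base + codons[i][j + 1:]
        if cand not in HOST_NRSCU:
            continue                             # e.g. non-leucine codon
        if (random.random() < p_selection
                and HOST_NRSCU[cand] <= HOST_NRSCU[codons[i]]):
            continue                             # selection rejects it
        codons[i] = cand
        return codons
```

In the full procedure, the observed number of mutations at each time point would be applied per replicate (1000 replicates per line) and the CA recomputed after each round; sweeping `p_selection` from 0 to 1 corresponds to the 0–100% mixed models in Fig. 4.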
### Data availability
Sequence data generated during the current study are available in GenBank, accession numbers HM775306-HM775311. The three J lines have been deposited as well (GenBank numbers being processed). Sequences analyzed in this study are listed in Supplementary Table S3.
https://www.wyzant.com/resources/answers/topics/ratio | 235 Answered Questions for the topic Ratio
10/08/19
#### find the lengths of a triangle (ratio) and perimeter
The three sides of a triangle are in the ratio 3:4:5. The longest side of the triangle is 14.39cm. What is the perimeter of the triangle, correct to two decimal places?
09/30/19
#### equivalent ratio to 3/9
what ratios are equivalent to 3/9?
06/15/19
#### Green Gold is made from gold and silver in the ratio 3:1. A green gold bracelet has a mass of 56 g
(A) What is the mass of the gold? (B) Show how to check your answer in part A
05/28/19
#### i do not know the answer
if someone had a paint mixture with 8 parts red and 12 parts blue, what percent is red?
05/20/19
#### Write the comparison below as a ratio in simplest form using a fraction, a colon (:), and the word to. ______ 15 dollars to 27 dollars
05/06/19
#### Whole ratio and part where whole is less than 50
A class includes both sixth graders and seventh graders. The ratio of sixth graders to the total number of students in the class is 3:8. There are 20 seventh graders in the class. How many... more
04/17/19
The ratio of boys to girls playing basketball was 3 to 5. When 12 more boys joined the game, the number of boys playing was the same as the number of girls. How many children in all are now playing... more
03/27/19
#### GRE Practice Question Incorrect?
This is a sample GRE question. The answer claims that we cannot make an inference due to insufficient information.

> *Compare the following quantities:*
>
> Of the 25 people in Fran’s... more
03/19/19
#### Can ratios really be manipulated as fractions?
In high-school Maths, we were taught that it was possible to manipulate ratios as fractions. For example, $$1 : 7 = 3 : x \\ \frac{1}{7} = \frac{3}{x} \\ \frac{x}{7} = 3 \\ x = 3 \times...$$ more
https://www.physicsforums.com/threads/prove-that-d-is-a-metric.384916/ | # Prove that d is a METRIC
## Homework Statement
Let (X,ρ) and (Y,σ) be metric spaces.
Define a metric d on X x Y by d((x1,y1),(x2,y2))=max(ρ(x1,x2),σ(y1,y2)).
Verify that d is a metric.
## The Attempt at a Solution
I proved positive definiteness and symmetry, but I am not sure how to prove the "triangle inequality" property of a metric. How many cases do we need in total, and how can we prove it?
Any help is appreciated!
So to verify the triangle inequality, we need to prove that
max(ρ(x1,x2),σ(y1,y2))≤ max(ρ(x1,x3),σ(y1,y3)) + max(ρ(x3,x2),σ(y3,y2)) for ANY (x1,y1),(x2,y2),(x3,y3) in X x Y.
How many separate cases do we need? I have trouble counting them without missing any...Is there a systematic way to count?
Case 1: max(ρ(x1,x2),σ(y1,y2))=ρ(x1,x2), max(ρ(x1,x3),σ(y1,y3))=ρ(x1,x3), max(ρ(x3,x2),σ(y3,y2)) =ρ(x3,x2)
This case is simple, the above inequality is true since ρ is a metric.
Case 2: max(ρ(x1,x2),σ(y1,y2))=ρ(x1,x2), max(ρ(x1,x3),σ(y1,y3))=σ(y1,y3), max(ρ(x3,x2),σ(y3,y2)) =ρ(x3,x2)
For example, how can we prove case 2?
Any help is appreciated!
Suppose that $$\rho(x_1,x_2)\ge\sigma(y_1,y_2)$$. What do you know about $$\rho(x_1,x_3)+\rho(x_3,x_2)$$? Can you infer anything about the right-hand side of your inequality based on that?
Case 2: max(ρ(x1,x2),σ(y1,y2))=ρ(x1,x2), max(ρ(x1,x3),σ(y1,y3))=σ(y1,y3), max(ρ(x3,x2),σ(y3,y2)) =ρ(x3,x2)
Suppose that $$\rho(x_1,x_2)\ge\sigma(y_1,y_2)$$. What do you know about $$\rho(x_1,x_3)+\rho(x_3,x_2)$$? Can you infer anything about the right-hand side of your inequality based on that?
We'll have ρ(x1,x3)+ρ(x3,x2) ≥ σ(y1,y2).
But I think for case 2, we need to prove that ρ(x1,x2)≤σ(y1,y3)+ρ(x3,x2) instead?? How can we prove it?
Thanks!
$$\max(\rho(x_1,x_2),\sigma(y_1,y_2))\le\max(\rho(x_1,x_3),\sigma(y_1,y_3))+\max(\rho(x_3,x_2),\sigma(y_3,y_2))$$.
Suppose $$\rho(x_1,x_2)\ge\sigma(y_1,y_2)$$. Since $$\rho$$ is a metric, you know that $$\rho(x_1,x_3)+\rho(x_3,x_2)\ge\rho(x_1,x_2)$$.
So what do you know about $$\max(\rho(x_1,x_3),\square)+\max(\rho(x_3,x_2),\square)$$, regardless of what's in the squares? You know it's at least as big as $$\rho(x_1,x_2)$$. | 2021-05-13 06:08:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7794760465621948, "perplexity": 689.542468514887}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991537.32/warc/CC-MAIN-20210513045934-20210513075934-00067.warc.gz"} |
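Putting the replies together: no case split over the right-hand maxima is needed at all — only over which term achieves the left-hand maximum, and the two cases are symmetric. A sketch (my notation, following the thread):

```latex
% WLOG suppose max(rho(x1,x2), sigma(y1,y2)) = rho(x1,x2).
\begin{align*}
d\big((x_1,y_1),(x_2,y_2)\big) &= \rho(x_1,x_2)\\
 &\le \rho(x_1,x_3) + \rho(x_3,x_2)
   && \text{(}\rho\text{ is a metric)}\\
 &\le \max\!\big(\rho(x_1,x_3),\sigma(y_1,y_3)\big)
    + \max\!\big(\rho(x_3,x_2),\sigma(y_3,y_2)\big)\\
 &= d\big((x_1,y_1),(x_3,y_3)\big) + d\big((x_3,y_3),(x_2,y_2)\big).
\end{align*}
% The case sigma(y1,y2) >= rho(x1,x2) is identical with sigma in place
% of rho, so Case 2 from the original post never has to be handled
% separately.
```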
http://turbomachinery.asmedigitalcollection.asme.org/article.aspx?articleid=1468649 | 0
Research Papers
# Rotor Interaction Noise in Counter-Rotating Propfan Propulsion Systems
Author and Article Information
Andreas Peters
Department of Aeronautics and Astronautics, Gas Turbine Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02193
Zoltán S. Spakovszky
Department of Aeronautics and Astronautics, Gas Turbine Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02193([email protected])
Certification challenges such as blade containment are acknowledged but not taken into account in the present analysis.
Advance ratio, power coefficient, and thrust coefficient are defined using the average shaft speed $N=(N_1+N_2)/2$ and average rotor diameter $D=(D_1+D_2)/2$.
At midspan, fluctuations of up to 20% in pressure coefficient around the mean were found on the front-rotor pressure side compared to 2% on the rear-rotor pressure side.
Because of the weight penalties due to structural reinforcements, cabin insulation, and increased propulsion system weight, the maximum takeoff weight of the CRP aircraft arrangements increased relative to the datum turbofan powered aircraft. This in turn led to slightly higher Stage 4 noise limits for the CRP powered aircraft configurations.
J. Turbomach 134(1), 011002 (May 24, 2011) (12 pages) doi:10.1115/1.4003223 History: Received July 08, 2010; Revised September 05, 2010; Published May 24, 2011; Online May 24, 2011
## Abstract
Due to their inherent noise challenge and potential for significant reductions in fuel burn, counter-rotating propfans (CRPs) are currently being investigated as potential alternatives to high-bypass turbofan engines. This paper introduces an integrated noise and performance assessment methodology for advanced propfan powered aircraft configurations. The approach is based on first principles and combines a coupled aircraft and propulsion system mission and performance analysis tool with 3D unsteady, full-wheel CRP computational fluid dynamics computations and aeroacoustic simulations. Special emphasis is put on computing CRP noise due to interaction tones. The method is capable of dealing with parametric studies and exploring noise reduction technologies. An aircraft performance, weight and balance, and mission analysis was first conducted on a candidate CRP powered aircraft configuration. Guided by data available in the literature, a detailed aerodynamic design of a pusher CRP was carried out. Full-wheel unsteady 3D Reynolds-averaged Navier-Stokes (RANS) simulations were then used to determine the time varying blade surface pressures and unsteady flow features necessary to define the acoustic source terms. A frequency domain approach based on Goldstein’s formulation of the acoustic analogy for moving media and Hanson’s single rotor noise method was extended to counter-rotating configurations. The far field noise predictions were compared to measured data of a similar CRP configuration and demonstrated good agreement between the computed and measured interaction tones. The underlying noise mechanisms have previously been described in literature but, to the authors’ knowledge, this is the first time that the individual contributions of front-rotor wake interaction, aft-rotor upstream influence, hub-endwall secondary flows, and front-rotor tip-vortices to interaction tone noise are dissected and quantified. 
Based on this investigation, the CRP was redesigned for reduced noise incorporating a clipped rear-rotor and increased rotor-rotor spacing to reduce upstream influence, tip-vortex, and wake interaction effects. Maintaining the thrust and propulsive efficiency at takeoff conditions, the noise was calculated for both designs. At the interaction tone frequencies, the redesigned CRP demonstrated an average reduction of 7.25 dB in mean sound pressure level computed over the forward and aft polar angle arcs. On the engine/aircraft system level, the redesigned CRP demonstrated a reduction of 9.2 dB in effective perceived noise (EPNdB) and 8.6 EPNdB at the Federal Aviation Regulations (FAR) 36 flyover and sideline observer locations, respectively. The results suggest that advanced open rotor designs can possibly meet Stage 4 noise requirements.
## Figures
Figure 1
Aerodynamic and acoustic performance assessment framework for counter-rotating propfans
Figure 2
Baseline CRP design
Figure 3
CRP noise estimation methodology
Figure 4
Baseline CRP grid-block topology (left) and close-up of rotor meshes at midspan (right)
Figure 5
Baseline CRP spectrum at 85 deg polar angle from the inlet centerline
Figure 6
Baseline CRP interaction tone noise level at frequency BPF1+BPF2 (left), at 2×BPF1+BPF2 (center), and at BPF1+2×BPF2 (right)
Figure 7
Baseline CRP density distribution at midspan
Figure 8
Baseline CRP density distribution at x/D1=0.12 (top) and blade-tip vortex system (bottom): front-rotor tip-vortices interact with rear rotor
Figure 9
Baseline CRP entropy distribution near hub (at 10% span)
Figure 10
Dissection of CRP noise mechanisms for interaction tones BPF1+BPF2 (left), 2×BPF1+BPF2 (center), and BPF1+2×BPF2 (right), baseline CRP, M=0.25
Figure 11
Baseline CRP noise mechanism contributors to first six interaction tones (percentages based on p′2 averaged over forward and aft polar arcs), M=0.25
Figure 12
Advanced design CRP geometry and near-field density distribution
Figure 13
Comparison of baseline and advanced design CRP directivity at interaction tone frequencies BPF1+BPF2 (left), 2×BPF1+BPF2 (center), and BPF1+2×BPF2 (right), M=0.25
Figure 14
Advanced design CRP noise mechanism contributors to first six interaction tones (percentages based on p′2 averaged over forward and aft polar arcs), M=0.25
Figure 15
Relative change in mean SPL for advanced design CRP compared to baseline CRP
https://homework.cpm.org/category/CCI_CT/textbook/Int3/chapter/Ch1/lesson/1.1.4/problem/1-59
1-59.
Find the error in the solution below. Identify the error and solve the equation correctly.
$\left. \begin{array}{l} 4.1x = 9.5x + 23.7 \\ -4.1x \quad\ \ \,-4.1x \\ \hline 5.4x = 23.7 \\ \frac{5.4x}{5.4} = \frac{23.7}{5.4} \\ x = 4.39 \end{array} \right.$
In the first step, $4.1x$ was subtracted from both sides of the equation. The solver says that $4.1x - 4.1x = 5.4x$; what does it really equal? What is really left on the right side of the equation? Solve the equation from there.
$0 = 5.4x + 23.7$
$x ≈ -4.39$ | 2021-01-19 18:09:28 | {"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5406280755996704, "perplexity": 710.4252561888219}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519600.31/warc/CC-MAIN-20210119170058-20210119200058-00775.warc.gz"} |
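For completeness, the corrected solution written out step by step:

```latex
\begin{aligned}
4.1x &= 9.5x + 23.7 \\
4.1x - 4.1x &= 9.5x - 4.1x + 23.7 \\
0 &= 5.4x + 23.7 \\
-23.7 &= 5.4x \\
x &= -\frac{23.7}{5.4} \approx -4.39
\end{aligned}
```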
https://bioinformatics.stackexchange.com/questions/13572/how-to-make-work-programs-from-the-path

# How to make programs from the $PATH work?

I am trying to analyze my RNAseq reads for defective genomes and I use this program (http://www.di-tector.cyame.eu/) that is suggested by Beauclair et al (https://pubmed.ncbi.nlm.nih.gov/30012569/) for that. After downloading a .py script and running it with the following command:

sudo python3 DI-tector_06.py /mnt/e/nastya/SLX066-02/sequence.fasta /mnt/e/nastya/SLX066-02/B-dVMV-RIG-1/B-dVMV-RIG-1_ACAGTG_L008_R1_001.rc.fastq.gz

it says at the beginning:

=================================
Program: DI-tector
Version: 0.6 - Last modified 25/05/2018
=================================
Requirement: (must be in your $PATH)
-bwa
-samtools
Optional: (must be in your $PATH)
-bedtools

I have tried to install bwa using:

sudo apt-get update -y
sudo apt-get install -y bwa

and samtools using:

cd /usr/local/bin
sudo wget https://github.com/samtools/samtools/releases/download/1.9/samtools-1.9.tar.bz2
sudo tar -vxjf samtools-1.9.tar.bz2
cd samtools-1.9
sudo make

I did not get exactly what "(must be in your $PATH)" means, but I have searched here and there and discovered that it might mean /usr/local/bin.
The program starts and, after some calculations, it gives me the following error:
Input file: /mnt/e/nastya/SLX066-02/B-dVMV-RIG-1/B-dVMV-RIG-1_ACAGTG_L008_R1_001.rc.fastq.gz
Host reference: None
Virus reference: /mnt/e/nastya/SLX066-02/sequence.fasta
Remove segment: < 15 nt
Remove reads with MAPQ: < 25
Allow InDel. length: > 1 nt
=================================
=================================
Number of reads in input file: 18,035,544
=================================
=================================
Step 1/5 : Alignment against Viral reference...Filtering...
=================================
[E::bwa_idx_load_from_disk] fail to locate the index files
Traceback (most recent call last):
File "DI-tector_06.py", line 183, in <module>
Popen(["samtools", "view", "-Sb", "-F", "4"], stdin=out_file_onVirus, stdout=out_file_Virus).communicate()[0]
File "/usr/lib/python3.6/subprocess.py", line 709, in __init__
restore_signals, start_new_session)
File "/usr/lib/python3.6/subprocess.py", line 1344, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: 'samtools'
I have no idea how samtools must be accessed; I tried the sudo command with python3, but it did not work.
• generally, if you type sudo make install, after you run sudo make, it will install the binary in your $PATH and you won't need to worry about adding it there afterwards Jun 17, 2020 at 10:24 • A piece of advice: Dont "over-sudo": You shouldn't need sudo to run a data analysis program. When installing samtools, avoid downloading stuff in /usr/local/bin: Do it in a local directory (I would recommend having a $HOME/src for that), and use sudo only to install the program to /usr/local/bin.
– bli
Jun 19, 2020 at 9:28
• "Permission denied: 'samtools'" may mean that the samtools "executable" is actually not executable. This can be because it is on a disk that doesn't allow execution of files, but more likely it just needs a fix of the file permission (chmod +x <path_to_samtools>).
– bli
Jun 19, 2020 at 9:41
echo $PATH will show the directories in your search path. Either copy the samtools executable there or add its location to the $PATH variable. You would be well served with a unix command line course or tutorial.
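As a minimal, self-contained sketch of what "on the PATH and executable" means (the directory and tool names here are throwaway examples, not part of DI-tector):

```shell
# Create a throwaway directory holding a dummy tool, to show how PATH lookup works.
tmpdir=$(mktemp -d)
printf '#!/bin/sh\necho hello-from-dummy\n' > "$tmpdir/dummytool"

chmod +x "$tmpdir/dummytool"   # without the executable bit you get "Permission denied"
export PATH="$tmpdir:$PATH"    # prepend the directory to the search path

command -v dummytool           # shows where the shell found it
dummytool                      # runs it via PATH lookup
```

The same two steps (mark the binary executable, put its directory on $PATH) are exactly what samtools needs before DI-tector can call it by name.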
You can add a directory to $PATH by executing the following command in the terminal:

export PATH="$HOME/Documents/Software/bbmap:$PATH"

In this example, all executable commands in the folder bbmap can be executed from any other directory in the terminal. So you can install samtools in an appropriate directory and then execute export PATH="$HOME/your/directory:$PATH". Then check if it succeeded by running echo $PATH. This latter command will return a list of directories which are assigned to $PATH, and your new directory (where samtools is located) should be included.

You can run a piece of software by specifying exactly where it is, like /home/usr/software/samtools_1.1.0/samtools, or, if that location is in your $PATH, you can run it with just samtools. The software you downloaded needs to run those programs, but there's no way for you to give it the specific path for them; it's going to try and run them with just samtools. So you need to make sure that will work, by putting their locations in your PATH.
It is not clear to me that running every command with sudo is wise. If you don't understand what $PATH is, I'm pretty sure you should not be running everything with sudo.

The clean way to do this is to make your own bin within your user directory:

cd ~
mkdir bin
vi .profile  # or vi .profile_bash; for OS X using csh or tcsh: vi .cshrc

For bash (Linux, or OS X), within vi type "i" then type

PATH=$PATH:~/bin
or
PATH=~/bin:$PATH

followed by the esc key, then ":wq" and enter. Then for csh or tcsh (e.g. for OS X) enter:

set path = ($path /Users/username/bin /Users/username/anaconda3/bin)
Same instructions for using vi as for bash. Dump bedtools into any directory (not the desktop) and connect the programs into the bin using ln
cd ~/progs/bedtools
ln -si $PWD/prog ~/bin

If you are using csh or tcsh you need to type rehash, but bash does this automatically. The programs will then be cleanly connected to your $PATH via ln, without massive memory usage, and visible anywhere within your home space. It's good practice for any Linux / OS X (in times gone by, UNIX) program or script, and whilst you don't need it for this operation, as you add more stuff it's better.
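A sketch of that workflow end to end, using a dummy program in a temporary directory as a stand-in for bedtools:

```shell
# Build area for the program (stand-in for ~/progs/bedtools)
progdir=$(mktemp -d)
printf '#!/bin/sh\necho linked-ok\n' > "$progdir/prog"
chmod +x "$progdir/prog"

# Personal bin directory on the PATH, as set in .profile above
mkdir -p "$HOME/bin"
export PATH="$HOME/bin:$PATH"

# Link rather than copy, so one place holds the real files (-f overwrites a stale link)
ln -sf "$progdir/prog" "$HOME/bin/prog"
prog    # found via ~/bin, runs the linked script
```

Each new tool then needs only one more ln line; nothing in /usr/local/bin and no sudo.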
https://in.mathworks.com/help/antenna/ref/polarpattern.html

# polarpattern
Interactive plot of radiation patterns in polar format
## Description
The `polarpattern` object creates an interactive plot of antenna or array radiation patterns in polar format with uniformly spaced angles. You can also plot other types of polar data. Use this plot for interactive data visualization or measurement. To change the properties, zoom in, or add more data to the plot, right-click, scroll, or drag in the Polar Measurement window.
## Creation
### Syntax
``polarpattern``
``polarpattern(data)``
``polarpattern(angle,magnitude)``
``polarpattern(___,Name,Value)``
``polarpattern(ax,___)``
``p = polarpattern(___)``
``p = polarpattern('gco')``
### Description
````polarpattern` creates an empty polar plot. You can add plots of antenna or array radiation patterns and other types of data to the plot by importing saved polari objects from MAT-files. ```
example
````polarpattern(data)` creates a polar plot with real magnitude values in the vector `data` with angles uniformly spaced on the unit circle starting at `0` degrees. Magnitudes may be negative when dB data units are used. For a matrix `data`, columns of `data` are independent datasets. For N-`data` arrays, dimensions 2 and greater are independent datasets. For complex values, magnitude and angle are derived from `data`.```
example
````polarpattern(angle,magnitude)` creates a polar plot for a set of angles and corresponding magnitudes. You can also create polar plots from multiple sets of angle vectors in degrees and corresponding sets of magnitude using the syntax: ```polarpattern(angle1, magnitude1,..., angleN, magnitudeN)```.```
example
````polarpattern(___,Name,Value)` creates a polar plot, with additional properties specified by one or more name-value pair arguments. `Name` is the property name and `Value` is the corresponding property value. You can specify several name-value pair arguments in any order as `Name1`, `Value1`, `...`, `NameN`, `ValueN`. Unspecified properties retain their default values. To list all the property names and values, use `details(p)`. You can use the properties to extract data about the radiation pattern from the polar plot. For example, `p = polarpattern(data,'Peaks',3)` identifies and displays the three highest peaks in the pattern data. For a list of properties, see PolarPattern Properties.```
````polarpattern(ax,___)` creates a polar plot using axes object, `ax` instead of the current axes object.```
````p = polarpattern(___)` creates a polari object using any combination of input arguments from the previous syntaxes. Use this syntax to customize the plot or add measurements.```
````p = polarpattern('gco')` creates a polar plot object from the polar pattern in the current figure.```
### Input Arguments
Antenna or array data, specified as one of these options
• A real length-M vector, containing M magnitude values with their angles defined as $\frac{\left(0:M-1\right)}{M}×{360}^{\circ }$ degrees.
• A real M-by-N matrix, containing M magnitude values in a dataset and N such independent data sets. Each column of the matrix has angles in degrees from the vector $\frac{\left(0:M-1\right)}{M}×{360}^{\circ }$.
• A real multidimensional array. Arrays with `2` or more dimensions contain independent data sets.
• A complex vector or matrix, that contains Cartesian coordinates (x, y) of each point. x contains the real part of the `data` and y contains the imaginary part of the `data`.
When the data is in a logarithmic form, such as dB, magnitude values can be negative. In this case, `polarpattern` plots the smallest magnitude values at the origin of the polar plot and largest magnitude values at the maximum radius.
Data Types: `double`
Complex Number Support: Yes
Set of angles in degrees, specified as a vector.
Data Types: `double`
Set of magnitude values, specified as a vector or a matrix. If you specify this input as a matrix, each column is an independent set of magnitude values and corresponds to the same set of angles in the same column of the angle input.
Data Types: `double`
Axes of the polar plot, specified as an axes object.
### Output Arguments
Stores a polari object with a set of properties. Use p to modify properties of the plot after creation. For a list of all the properties, see PolarPattern Properties.
Example: `P = polarpattern(V)`
## Object Functions
`add`: Add data to polar plot
`addCursor`: Add cursor to polar plot angle
`animate`: Replace existing data with new data for animation
`createLabels`: Create legend labels for polar plot
`findLobes`: Main, back, and side lobe data
`replace`: Replace polar plot data with new data
`showPeaksTable`: Show or hide peak marker table
`showSpan`: Show or hide angle span between two markers
## Examples
Create a default Vivaldi antenna and calculate the directivity at 1.5 GHz.
```v = vivaldi; V = pattern(v,1.5e9,0,0:1:360);```
Plot the polar pattern of the calculated directivity.
`P = polarpattern(V);`
Create a default cavity antenna. Calculate the directivity of the antenna and write the data to `cavity.pln` using the `msiwrite` function.
```c = cavity; msiwrite(c,2.8e9,'cavity','Name','Cavity Antenna Specifications');```
Read the cavity specification file into `Horizontal`, `Vertical`, and `Optional` structures using the `msiread` function.
`[Horizontal,Vertical,Optional] = msiread('cavity.pln')`
```Horizontal = struct with fields: PhysicalQuantity: 'Gain' Magnitude: [360x1 double] Units: 'dBi' Azimuth: [360x1 double] Elevation: 0 Frequency: 2.8000e+09 Slice: 'Elevation' ```
```Vertical = struct with fields: PhysicalQuantity: 'Gain' Magnitude: [360x1 double] Units: 'dBi' Azimuth: 0 Elevation: [360x1 double] Frequency: 2.8000e+09 Slice: 'Azimuth' ```
```Optional = struct with fields: name: 'Cavity Antenna Specifications' frequency: 2.8000e+09 gain: [1x1 struct] ```
Plot the polar pattern of the cavity at azimuth angles.
`P = polarpattern(Horizontal.Azimuth,Horizontal.Magnitude);`
Create a default monopole antenna and calculate the directivity at 75 MHz.
```m = monopole; M = pattern(m,75e6,0,0:1:360);```
Plot the polar pattern of the antenna.
`P = polarpattern(M,'TitleTop','Polar Pattern of Monopole');`
Create a default dipole antenna and calculate the directivity at 75 MHz.
```d = dipole; D = pattern(d,75e6,0,0:1:360);```
Plot the polar pattern of the antenna and display the properties of the plot.
`P = polarpattern(D);`
`details(P)`
``` internal.polari handle with properties: Interactive: 1 LegendLabels: '' AntennaMetrics: 0 CleanData: 1 AngleData: [361x1 double] MagnitudeData: [361x1 double] IntensityData: [] AngleMarkers: [0x1 struct] CursorMarkers: [0x1 struct] PeakMarkers: [0x1 struct] ActiveDataset: 1 AngleLimVisible: 0 LegendVisible: 0 Span: 0 TitleTop: '' TitleBottom: '' Peaks: [] FontSize: 10 MagnitudeLim: [-50 10] MagnitudeAxisAngle: 75 MagnitudeTick: [-40 -20 0] MagnitudeTickLabelColor: 'k' AngleLim: [0 360] AngleTickLabel: {1x24 cell} AngleTickLabelColor: 'k' TitleTopFontSizeMultiplier: 1.1000 TitleBottomFontSizeMultiplier: 0.9000 TitleTopFontWeight: 'bold' TitleBottomFontWeight: 'normal' TitleTopTextInterpreter: 'none' TitleBottomTextInterpreter: 'none' TitleTopOffset: 0.1500 TitleBottomOffset: 0.1500 ToolTips: 1 MagnitudeLimBounds: [-Inf Inf] MagnitudeFontSizeMultiplier: 0.9000 AngleFontSizeMultiplier: 1 AngleAtTop: 90 AngleDirection: 'ccw' AngleResolution: 15 AngleTickLabelRotation: 0 AngleTickLabelFormat: '360' AngleTickLabelColorMode: 'contrast' PeaksOptions: {} AngleTickLabelVisible: 1 Style: 'line' DataUnits: 'dB' DisplayUnits: 'dB' NormalizeData: 0 ConnectEndpoints: 0 DisconnectAngleGaps: 0 EdgeColor: 'k' LineStyle: '-' LineWidth: 1 FontName: 'Helvetica' FontSizeMode: 'auto' GridForegroundColor: [0.8000 0.8000 0.8000] GridBackgroundColor: 'w' DrawGridToOrigin: 0 GridOverData: 0 GridAutoRefinement: 0 GridWidth: 0.5000 GridVisible: 1 ClipData: 1 TemporaryCursor: 1 MagnitudeLimMode: 'auto' MagnitudeAxisAngleMode: 'auto' MagnitudeTickMode: 'auto' MagnitudeTickLabelColorMode: 'contrast' MagnitudeTickLabelVisible: 1 MagnitudeUnits: '' IntensityUnits: '' Marker: 'none' MarkerSize: 6 Parent: [1x1 Figure] NextPlot: 'replace' ColorOrder: [7x3 double] ColorOrderIndex: 1 SectorsColor: [16x3 double] SectorsAlpha: 0.5000 View: 'full' ZeroAngleLine: 0 ```
Remove `-inf` and `NaN` values from monopole antenna polar pattern data by using the `CleanData` and `AntennaMetrics` properties of a polari object. Use `CleanData` for partial data with `-inf` and `NaN` values.
```m = monopole; m.GroundPlaneLength = inf;```
Plot the beamwidth of the antenna at 70 MHz.
```figure; beamwidth(m,70e6,0,-50:30)```
Plot the radiation pattern of the antenna at 70 MHz.
```figure; pattern(m,70e6,0,-50:30);```
Use `polarpattern` to view the antenna metrics of the radiation pattern.
```P = polarpattern('gco'); P.CleanData = 1; P.AntennaMetrics = 1;```
Compare the `beamwidth` plot and the `polarpattern` plot. The Antenna Metrics does not represent the beamwidth correctly.
You can also clean the data by right clicking on the plot and selecting Clean Data.
After you clean the data, the `polarpattern` plot calculation matches the `beamwidth` plot calculation.
## Version History
Introduced in R2016a | 2023-02-09 00:43:53 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7029933333396912, "perplexity": 2596.6838961486455}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500983.76/warc/CC-MAIN-20230208222635-20230209012635-00284.warc.gz"} |
https://web2.0calc.com/questions/i-need-help_67028
# I need help
In rectangle $ABCD$, shown here, $\overline{CE}$ is perpendicular to $\overline{BD}$. If $BC = \sqrt 3$ and $DC = 3$, what is $CE$?
Feb 28, 2021
CE=$\boxed{\frac32}$ | 2021-04-12 06:19:16 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9995715022087097, "perplexity": 1722.787520617697}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618038066613.21/warc/CC-MAIN-20210412053559-20210412083559-00209.warc.gz"} |
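The boxed value follows from computing the area of right triangle $BCD$ two ways, using $CE$ as the altitude to the hypotenuse $BD$:

```latex
BD = \sqrt{BC^2 + DC^2} = \sqrt{3 + 9} = 2\sqrt{3},
\qquad
\tfrac12\, BC \cdot DC = \tfrac12\, BD \cdot CE
\;\Rightarrow\;
CE = \frac{BC \cdot DC}{BD} = \frac{\sqrt{3}\cdot 3}{2\sqrt{3}} = \frac{3}{2}.
```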
https://pos.sissa.it/335/004/

Volume 335 - 2nd World Summit: Exploring the Dark Side of the Universe (EDSU2018) - The Cosmos as a Particle Detector
Galaxy Clustering and Baryon Acoustic Oscillations
B. Hoeneisen
Full text: pdf
Pre-published on: 2018 November 27
Published on: 2018 December 11
Abstract
We present measurements of Baryon Acoustic Oscillation (BAO) distances used as an uncalibrated standard ruler that determine $\Omega_{\textrm{de}}(a)$, $\Omega_k$, $\Omega_m$, and $d_{\textrm{BAO}} \equiv r_* H_0 / c$; and BAO distances used as a calibrated standard ruler $r_*$ that constrains a combination of $\sum m_\nu$, $h$, and $\Omega_b h^2$. The cosmological parameters obtained in this analysis are compared with the Review of Particle Physics, PDG 2018.
DOI: https://doi.org/10.22323/1.335.0004
Open Access | 2018-12-12 00:36:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3037308156490326, "perplexity": 7231.12944709101}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823710.44/warc/CC-MAIN-20181212000955-20181212022455-00108.warc.gz"} |
https://portal.kobv.de/advancedSearch.do?fq=domain%3AOxford+University+Press+(CrossRef)&sortCrit=score&sortOrder=desc&hitsPerPage=10&usedLibs=DE-B763&usedLibs=DE-204&usedLibs=DE-523&usedLibs=DE-600&usedLibs=DE-B1528&usedLibs=DE-522&usedLibs=DE-B486&usedLibs=DE-521&usedLibs=DE-Bo133&usedLibs=DE-526&usedLibs=DE-ER522&usedLibs=DE-525&usedLibs=DE-Bo410&usedLibs=DE-Po79&usedLibs=DE-287&usedLibs=DE-364&usedLibs=DE-Po75&usedLibs=DE-2070s&usedLibs=DE-Mun1&usedLibs=DE-B1570&usedLibs=DE-F131&usedLibs=DE-B2223&usedLibs=DE-609&usedLibs=DE-B1532&usedLibs=DE-B2224&usedLibs=DE-83&usedLibs=DE-B1575&usedLibs=DE-B1533&usedLibs=DE-2552&usedLibs=DE-B1536&usedLibs=DE-2110&usedLibs=DE-B1579&usedLibs=DE-B1535&usedLibs=DE-B479&usedLibs=DE-2273&usedLibs=DE-B478&usedLibs=DE-634&usedLibs=DE-B4&usedLibs=DE-B433&usedLibs=DE-Eb1&usedLibs=DE-Po82&usedLibs=DE-517&usedLibs=DE-CP521&usedLibs=DE-11&usedLibs=DE-B1563&usedLibs=DE-B1562&usedLibs=DE-B1566&usedLibs=DE-B1525&usedLibs=DE-B785&usedLibs=DE-B103&usedLibs=DE-B464&usedLibs=DE-B185&usedLibs=DE-186&usedLibs=DE-188&usedLibs=DE-2291&usedLibs=DE-B1550&usedLibs=DE-B1595&usedLibs=DE-2533&usedLibs=DE-2377&usedLibs=DE-B775&usedLibs=DE-578&usedLibs=DE-B1539&usedLibs=DE-B496&usedLibs=DE-B177&usedLibs=DE-B171&usedLibs=DE-VOEB&usedLibs=DE-B170&usedLibs=DE-Po24&usedLibs=DE-B15&usedLibs=DE-B1583&usedLibs=DE-181&usedLibs=DE-B1543&usedLibs=DE-2565&usedLibs=DE-B1586&usedLibs=DE-1&usedLibs=DE-B768&usedLibs=DE-B1547&usedLibs=DE-B11&usedLibs=DE-B1549&unownedTitles=true&index=primoCentral&f1=author&v1=Bowman%2C+Judd+D.&conj1=&f2=&v2=&conj2=&f3=&v3=&conj3=&f4=&v4=&conj4=&f5=&v5=&conj5=&f6=&v6=&conj6=&f7=&v7=&plv=2 | # Kooperativer Bibliotheksverbund
## Berlin Brandenburg
• 1
Article
In: Monthly Notices of the Royal Astronomical Society, 2017, Vol. 470(4), pp.4720-4731
Description: We present the E-field Parallel Imaging Calibration (EPICal) algorithm, which addresses the need for a fast calibration method for direct imaging radio astronomy correlators. Direct imaging involves a spatial fast Fourier transform of antenna signals, alleviating an $\mathcal{O}(N_{\mathrm{a}}^2)$ computational bottleneck typical in radio correlators, and yielding a more gentle $\mathcal{O}(N_{\mathrm{g}} \log_2 N_{\mathrm{g}})$ scaling, where $N_{\mathrm{a}}$ is the number of antennas in the array and $N_{\mathrm{g}}$ is the number of gridpoints in the imaging analysis. This can save orders of magnitude in computation cost for next generation arrays consisting of hundreds or thousands of antennas. However, because antenna signals are mixed in the imaging correlator without creating visibilities, gain correction must be applied prior to imaging, rather than on visibilities post-correlation. We develop the EPICal algorithm to form gain solutions quickly and without ever forming visibilities. This method scales as the number of antennas, and produces results comparable to those from visibilities. We use simulations to demonstrate the EPICal technique and study the noise properties of our gain solutions, showing they are similar to visibility-based solutions in realistic situations. By applying EPICal to 2 s of Long Wavelength Array data, we achieve a 65 per cent dynamic range improvement compared to uncalibrated images, showing this algorithm is a promising solution for next generation instruments.
Keywords: Instrumentation: Interferometers ; Techniques: Image Processing ; Techniques: Interferometric
ISSN: 0035-8711
E-ISSN: 1365-2966
• 2
Article
In: Monthly Notices of the Royal Astronomical Society, 2017, Vol. 467(1), pp.715-730
Description: Modern radio telescopes are favouring densely packed array layouts with large numbers of antennas ($N_{\rm A} \gtrsim 1000$). Since the complexity of traditional correlators scales as $\mathcal{O}(N_{\rm A}^2)$, there will be a steep cost for realizing the full imaging potential of these powerful instruments. Through our generic and efficient E-field Parallel Imaging Correlator (epic), we present the first software demonstration of a generalized direct imaging algorithm, namely the Modular Optimal Frequency Fourier imager. Not only does it bring down the cost for dense layouts to $\mathcal{O}(N_{\rm A}\log_2 N_{\rm A})$ but can also image from irregular layouts and heterogeneous arrays of antennas. epic is highly modular, parallelizable, implemented in object-oriented python, and publicly available. We have verified the images produced to be equivalent to those from traditional techniques to within a precision set by gridding coarseness. We have also validated our implementation on data observed with the Long Wavelength Array (LWA1). We provide a detailed framework for imaging with heterogeneous arrays and show that epic robustly estimates the input sky model for such arrays. Antenna layouts with dense filling factors consisting of a large number of antennas such as LWA, the Square Kilometre Array, Hydrogen Epoch of Reionization Array, and Canadian Hydrogen Intensity Mapping Experiment will gain significant computational advantage by deploying an optimized version of epic. The algorithm is a strong candidate for instruments targeting transient searches of fast radio bursts as well as planetary and exoplanetary phenomena due to the availability of high-speed calibrated time-domain images and low output bandwidth relative to visibility-based systems.
Keywords: Instrumentation: Interferometers ; Techniques: Image Processing ; Techniques: Interferometric ; Telescopes
ISSN: 0035-8711
E-ISSN: 1365-2966
• 3
Article
Language: English
In: Monthly Notices of the Royal Astronomical Society, 06/11/2018, Vol.477(1), pp.864-866
Keywords: Meteorology & Climatology ; Astronomy & Astrophysics;
ISSN: 0035-8711
E-ISSN: 1365-2966
Source: Oxford University Press (via CrossRef)
• 4
Article
In: Monthly Notices of the Royal Astronomical Society, 2017, Vol. 474(4), pp.4487-4499
Description: We present a baseline sensitivity analysis of the Hydrogen Epoch of Reionization Array (HERA) and its build-out stages to one-point statistics (variance, skewness, and kurtosis) of redshifted 21 cm intensity fluctuation from the Epoch of Reionization (EoR) based on realistic mock observations. By developing a full-sky 21 cm light-cone model, taking into account the proper field of view and frequency bandwidth, utilizing a realistic measurement scheme, and assuming perfect foreground removal, we show that HERA will be able to recover statistics of the sky model with high sensitivity by averaging over measurements from multiple fields. All build-out stages will be able to detect variance, while skewness and kurtosis should be detectable for HERA128 and larger. We identify sample variance as the limiting constraint of the measurements at the end of reionization. The sensitivity can also be further improved by performing frequency windowing. In addition, we find that strong sample variance fluctuation in the kurtosis measured from an individual field of observation indicates the presence of outlying cold or hot regions in the underlying fluctuations, a feature that can potentially be used as an EoR bubble indicator.
Keywords: Methods: Statistical ; Dark Ages, Reionization, First Stars ; Cosmology: Observations
ISSN: 0035-8711
E-ISSN: 1365-2966
• 5
Article
Description: We report absolutely calibrated measurements of diffuse radio emission between 90 and 190 MHz from the Experiment to Detect the Global EoR Signature (EDGES). EDGES employs a wide beam zenith-pointing dipole antenna centred on a declination of -26.7$^\circ$. We measure the sky brightness temperature as a function of frequency averaged over the EDGES beam from 211 nights of data acquired from July 2015 to March 2016. We derive the spectral index, $\beta$, as a function of local sidereal time (LST) and find -2.60 > $\beta$ > -2.62 $\pm$ 0.02 between 0 and 12 h LST. When the Galactic Centre is in the sky, the spectral index flattens, reaching $\beta$ = -2.50 $\pm$ 0.02 at 17.7 h. The EDGES instrument is shown to be very stable throughout the observations with night-to-night reproducibility of $\sigma_{\beta}$ < 0.003. Including systematic uncertainty, the overall uncertainty of $\beta$ is 0.02 across all LST bins. These results improve on the earlier findings of Rogers & Bowman (2008) by reducing the spectral index uncertainty from 0.10 to 0.02 while considering more extensive sources of errors. We compare our measurements with spectral index simulations derived from the Global Sky Model (GSM) of de Oliveira-Costa et al. (2008) and with fits between the Guzm\'an et al. (2011) 45 MHz and Haslam et al. (1982) 408 MHz maps. We find good agreement at the transit of the Galactic Centre. Away from transit, the GSM tends to over-predict (GSM less negative) by 0.05 < $\Delta_{\beta} = \beta_{\text{GSM}}-\beta_{\text{EDGES}}$ < 0.12, while the 45-408 MHz fits tend to over-predict by $\Delta_{\beta}$ < 0.05.
Keywords: Astrophysics - Instrumentation And Methods For Astrophysics ; Astrophysics - Astrophysics Of Galaxies
ISSN: 00358711
E-ISSN: 13652966
• 6
Article
Description: We report the spectral index of diffuse radio emission between 50 and 100 MHz from data collected with two implementations of the Experiment to Detect the Global EoR Signature (EDGES) low-band system. EDGES employs a wide beam zenith-pointing dipole antenna centred on a declination of $-26.7^\circ$. We measure the sky brightness temperature as a function of frequency averaged over the EDGES beam from 244 nights of data acquired between 14 September 2016 and 27 August 2017. We derive the spectral index, $\beta$, as a function of local sidereal time (LST) using night-time data and a two-parameter fitting equation. We find $-2.59<\beta<-2.54 \pm 0.011$ between 0 and 12 h LST, ignoring ionospheric effects. When the Galactic Centre is in the sky, the spectral index flattens, reaching $\beta = -2.46 \pm 0.011$ at 18.2 h. The measurements are stable throughout the observations with night-to-night reproducibility of $\sigma_{\beta}<0.004$ except for the LST range of 7 to 12 h. We compare our measurements with predictions from various global sky models and find that the closest match is with the spectral index derived from the Guzm{\'a}n and Haslam sky maps, similar to the results found with the EDGES high-band instrument for 90-190 MHz. Three-parameter fitting was also evaluated, with the result that the spectral index becomes more negative by $\sim$0.02 and has a maximum total uncertainty of 0.016. We also find that the third parameter, the spectral index curvature, $\gamma$, is constrained to $-0.11<\gamma<-0.04$. Correcting for expected levels of night-time ionospheric absorption causes $\beta$ to become more negative by $0.008$ - $0.016$ depending on LST.
Keywords: Astrophysics - Instrumentation And Methods For Astrophysics ; Astrophysics - Astrophysics Of Galaxies
ISSN: 00358711
E-ISSN: 13652966
• 7
Article
In: Monthly Notices of the Royal Astronomical Society, 2015, Vol. 447(3), pp.2468-2478
Description: Recent observations with the Murchison Widefield Array at 185 MHz have serendipitously unveiled a heretofore unknown giant and relatively nearby (z = 0.0178) radio galaxy associated with NGC 1534. The diffuse emission presented here is the first indication that NGC 1534 is one of a rare class of objects (along with NGC 5128 and NGC 612) in which a galaxy with a prominent dust lane hosts radio emission on scales of ∼700 kpc. We present details of the radio emission along with a detailed comparison with other radio galaxies with discs. NGC 1534 is the lowest surface brightness radio galaxy known with an estimated scaled 1.4-GHz surface brightness of just 0.2 mJy arcmin$^{-2}$. The radio lobes have one of the steepest spectral indices yet observed: α = −2.1 ± 0.1, and the core-to-lobe luminosity ratio is <0.1 per cent. We estimate the space density of this low brightness (dying) phase of radio galaxy evolution as 7 × 10$^{-7}$ Mpc$^{-3}$ and argue that normal AGN cannot spend more than 6 per cent of their lifetime in this phase if they all go through the same cycle.
Keywords: Techniques: Interferometric ; Galaxies: Active ; Galaxies: General ; Galaxies: Individual:Ngc 1534 ; Radio Continuum: Galaxies
ISSN: 0035-8711
E-ISSN: 1365-2966
• 8
Article
In: Monthly Notices of the Royal Astronomical Society, 2016, Vol. 460(4), pp.4320-4347
Description: We present first results from radio observations with the Murchison Widefield Array seeking to constrain the power spectrum of 21 cm brightness temperature fluctuations between the redshifts of 11.6 and 17.9 (113 and 75 MHz). 3 h of observations were conducted over two nights with significantly different levels of ionospheric activity. We use these data to assess the impact of systematic errors at low frequency, including the ionosphere and radio-frequency interference, on a power spectrum measurement. We find that after the 1–3 h of integration presented here, our measurements at the Murchison Radio Observatory are not limited by RFI, even within the FM band, and that the ionosphere does not appear to affect the level of power in the modes that we expect to be sensitive to cosmology. Power spectrum detections, inconsistent with noise, due to fine spectral structure imprinted on the foregrounds by reflections in the signal chain, occupy the spatial Fourier modes where we would otherwise be most sensitive to the cosmological signal. We are able to reduce this contamination using calibration solutions derived from autocorrelations so that we achieve a sensitivity of 10$^4$ mK on comoving scales k ≲ 0.5 h Mpc$^{-1}$. This represents the first upper limits on the 21 cm power spectrum fluctuations at redshifts 12 ≲ z ≲ 18 but is still limited by calibration systematics. While calibration improvements may allow us to further remove this contamination, our results emphasize that future experiments should consider carefully the existence of, and their ability to calibrate out, any spectral structure within the EoR window.
Keywords: Techniques: Interferometric ; Dark Ages, Reionization, First Stars ; Radio Lines: General ; X - Rays: Galaxies
ISSN: 0035-8711
E-ISSN: 1365-2966
• 9
Article
Description: Using the Murchison Widefield Array (MWA), the low-frequency Square Kilometre Array (SKA1 LOW) precursor located in Western Australia, we have completed the GaLactic and Extragalactic All-sky MWA (GLEAM) survey, and present the resulting extragalactic catalogue, utilising the first year of observations. The catalogue covers 24,831 square degrees, over declinations south of $+30^\circ$ and Galactic latitudes outside $10^\circ$ of the Galactic plane, excluding some areas such as the Magellanic Clouds. It contains 307,455 radio sources with 20 separate flux density measurements across 72--231MHz, selected from a time- and frequency- integrated image centred at 200MHz, with a resolution of $\approx 2$'. Over the catalogued region, we estimate that the catalogue is 90% complete at 170mJy, and 50% complete at 55mJy, and large areas are complete at even lower flux density levels. Its reliability is 99.97% above the detection threshold of $5\sigma$, which itself is typically 50mJy. These observations constitute the widest fractional bandwidth and largest sky area survey at radio frequencies to date, and calibrate the low frequency flux density scale of the southern sky to better than 10%. This paper presents details of the flagging, imaging, mosaicking, and source extraction/characterisation, as well as estimates of the completeness and reliability. All source measurements and images are available online (http://www.mwatelescope.org/science/gleam-survey). This is the first in a series of publications describing the GLEAM survey results. Comment: 30 pages, 18 figures, 6 tables, published in Monthly Notices of the Royal Astronomical Society
Keywords: Astrophysics - Astrophysics Of Galaxies
ISSN: 00358711
E-ISSN: 13652966
https://radar.inria.fr/report/2018/castor/uid41.html
## Section: New Software and Platforms
### FBGKI
Full Braginskii
Functional Description: The Full Braginskii solver considers the equations proposed by Braginskii (1965) in order to describe plasma turbulent transport in the edge region of tokamaks. These equations rely on a two-fluid (ion-electron) description of the plasma and on the electroneutrality and electrostatic assumptions. This yields a set of 10 coupled, non-linear, and strongly anisotropic PDEs. FBGKI uses high-order methods in space: Fourier in the periodic toroidal direction and spectral elements in the poloidal plane. The integration in time is based on a Strang splitting and Runge-Kutta schemes, with implicit treatment of the Lorentz terms (DIRK scheme). The spectral vanishing viscosity (SVV) technique is implemented for stabilization. Static condensation is used to reduce the computational cost. In its sequential version, a matrix-free solver is used to compute the potential. The parallel version of the code is under development.
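The Strang splitting mentioned above can be sketched generically. The following is an illustrative Python toy, not FBGKI code: one split step advances sub-operator A a half step, sub-operator B a full step, then A another half step.

```python
import math

def strang_step(u, dt, step_a, step_b):
    """One Strang-split step for du/dt = A(u) + B(u): half A, full B, half A."""
    u = step_a(u, dt / 2)
    u = step_b(u, dt)
    return step_a(u, dt / 2)

# Toy split problem du/dt = a*u + b*u, using the exact flow of each part.
a, b = -1.0, 0.3
step_a = lambda u, h: u * math.exp(a * h)
step_b = lambda u, h: u * math.exp(b * h)

u, dt = 1.0, 0.01
for _ in range(100):   # integrate from t = 0 to t = 1
    u = strang_step(u, dt, step_a, step_b)
# exact answer is exp((a + b) * 1); the two flows commute here, so only round-off remains
```

For non-commuting operators (as in the Braginskii system) the scheme is second-order accurate in time rather than exact.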
• Contact: Sebastian Minjeaud
http://mathhelpforum.com/algebra/121370-math-help-basic-algebra.html | # Math Help - math help basic algebra
1. ## math help basic algebra
I used to be a physics major when I was in college. That was when I was about 19, about seven years ago. I completely lost all of my math skills, even the basic things. I'm joining the Army and they have pretty simple questions on the ASVAB (military entrance test.)
I need help solving the equations. I need an explanation and all that good stuff. Is this the right forum?
I'll start with the first question: If Lynn can type a page in p minutes, what piece of the page can she do in 5 minutes?
If im in the wrong forum, just tell me, thanx.
B. p - 5
C. p + 5
D. p/5
E. 1- p + 5
2. Originally Posted by markr1983
I used to be a physics major when I was in college. That was when I was about 19, about seven years ago. I completely lost all of my math skills, even the basic things. I'm joining the Army and they have pretty simple questions on the ASVAB (military entrance test.)
I need help solving the equations. I need an explanation and all that good stuff. Is this the right forum?
I'll start with the first question: If Lynn can type a page in p minutes, what piece of the page can she do in 5 minutes?
If im in the wrong forum, just tell me, thanx.
B. p - 5
C. p + 5
D. p/5
E. 1- p + 5
rate = $\frac{1 \, page}{p \, minutes}$
(rate)(time) = pages completed
$\frac{1}{p} \cdot 5 = \frac{5}{p}$ pages
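A quick numeric sanity check of the rate formula above (Python, purely illustrative):

```python
def fraction_typed(p, minutes=5):
    # rate is 1/p pages per minute, so in `minutes` minutes Lynn types minutes/p of a page
    return minutes / p

fraction_typed(10)  # half a page if a full page takes 10 minutes
```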
https://www.juliahomotopycontinuation.org/HomotopyContinuation.jl/stable/monodromy/ | # Solving parametrized systems with monodromy
Next to solve, HomotopyContinuation.jl provides the function monodromy_solve which uses the monodromy method to solve a parameterized system of polynomials. Often monodromy_solve can still compute all isolated solutions of a system for which the number of paths tracked by solve is already infeasible. Make sure to check out our monodromy guide for a more in-depth introduction to this method.
HomotopyContinuation.monodromy_solve - Function
monodromy_solve(F, [sols, p]; options..., tracker_options = TrackerOptions())
Solve a polynomial system F(x;p) with specified parameters and initial solutions sols by monodromy techniques. This makes loops in the parameter space of F to find new solutions. If the parameters p occur only linearly in F, it is often possible to compute a start pair $(x₀, p₀)$ automatically. In this case sols and p can be omitted and the automatically generated parameters can be obtained with the parameters function from the MonodromyResult.
monodromy_solve(F, [sols, L]; dim, codim, intrinsic = nothing, options...,
tracker_options = TrackerOptions())
Solve the polynomial system [F(x); L(x)] = 0 where L is a LinearSubspace. If sols and L are not provided it is necessary to provide dim or codim, where (co)dim is the expected (co)dimension of a component of V(F). See also linear_subspace_homotopy for the intrinsic option.
Options
• catch_interrupt = true: If true catches interruptions (e.g. issued by pressing Ctrl-C) and returns the partial result.
• check_startsolutions = true: If true, we do a Newton step for each entry of sols to check whether it is a valid start solution. Solutions which are not valid are sorted out.
• compile = mixed: If true then a System (resp. Homotopy) is compiled to a straight line program (CompiledSystem resp. CompiledHomotopy) for evaluation. This induces a compilation overhead. If false then the generated program is only interpreted (InterpretedSystem resp. InterpretedHomotopy). This is slower than the compiled version, but does not introduce compilation overhead.
• distance = EuclideanNorm(): The distance function used for UniquePoints.
• loop_finished_callback = always_false: A callback to end the computation. This function is called with all current PathResults after a loop is exhausted (it takes 2 arguments). Return true if the computation should be stopped.
• equivalence_classes=true: This only applies if there is at least one group action supplied. We then consider two solutions in the same equivalence class if we can transform one to the other by the supplied group actions. We only track one solution per equivalence class.
• group_action = nothing: A function taking one solution and returning other solutions if there is a constructive way to obtain them, e.g. by symmetry.
• group_actions = nothing: If there is more than one group action you can use this to chain the application of them. For example if you have two group actions foo and bar you can set group_actions=[foo, bar]. See GroupActions for details regarding the application rules.
• max_loops_no_progress = 5: The maximal number of iterations (i.e. loops generated) without any progress.
• min_solutions: The minimal number of solutions before a stopping heuristic is applied. By default no lower limit is enforced.
• parameter_sampler = independent_normal: A function taking the parameter p and returning a new random parameter q. By default each entry of the parameter vector is drawn independently from a normal distribution.
• permutations = false: Whether to keep track of the permutations induced by the loops.
• reuse_loops = :all: Strategy for reusing other loops for newly found solutions. :all propagates a new solution through all other loops, :random picks a random loop, :none doesn't reuse a loop.
• target_solutions_count: The computation is stopped if this number of solutions is reached.
• threading = true: Enable multithreading of the path tracking.
• timeout: The maximal number of seconds the computation is allowed to run.
• trace_test = true: If true a trace test is performed to check whether all solutions are found. This is only applicable if monodromy is performed with a linear subspace. See also trace_test.
• trace_test_tol = 1e-10: The tolerance for the trace test to be successful. The trace is divided by the number of solutions before being compared to trace_test_tol.
• unique_points_rtol: the relative tolerance for unique_points.
• unique_points_atol: the absolute tolerance for unique_points.
source
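As a minimal usage sketch based on the docstring above (Julia; the circle-and-line system is an assumption for illustration — its parameters a, b, c occur only linearly, so sols and p can be omitted):

```julia
using HomotopyContinuation

@var x y a b c
# a, b, c enter only linearly, so a start pair (x₀, p₀) is generated automatically
F = System([x^2 + y^2 - 1, a * x + b * y + c]; parameters = [a, b, c])

res = monodromy_solve(F)
p₀ = parameters(res)   # the automatically generated parameters
S = solutions(res)     # generically the 2 intersection points of line and circle
```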
HomotopyContinuation.find_start_pair - Function
find_start_pair(F; max_tries = 100, atol = 0.0, rtol = 1e-12)
Try to find a pair (x,p) for the system F such that F(x,p) = 0 by randomly sampling a pair (x₀, p₀) and performing Newton's method in variable and parameter space.
source
It is also possible to verify (but not certify) that all solutions were found. Note that this computation can take substantially longer than the original monodromy_solve computation.
HomotopyContinuation.verify_solution_completeness - Function
verify_solution_completeness(F::System, monodromy_result; options...)
verify_solution_completeness(F::System, solutions, parameters;
trace_tol = 1e-14,
show_progress = true,
compile = COMPILE_DEFAULT[],
monodromy_options = (compile = compile,),
parameter_homotopy_options = (compile = compile,),
)
Verify that a monodromy computation by monodromy_solve found all solutions. This uses the trace test described in [dCR17] and [LRS18]. The trace is a numerical value which is 0 if all solutions are found; the trace_tol keyword argument sets the tolerance for this comparison. The function returns nothing if some computation couldn't be carried out, and otherwise returns a boolean. Note that this function requires the computation of solutions to another polynomial system using monodromy, so it can return false even though all solutions are found if this additional solution set is not complete.
Example
@var x y a b c;
f = x^2+y^2-1;
l = a*x+b*y+c;
sys = System([f, l]; parameters = [a, b, c]);
mres = monodromy_solve(sys, [-0.6-0.8im, -1.2+0.4im], [1,2,3]);
show(mres);
verify_solution_completeness(sys, mres)
MonodromyResult
==================================
• 2 solutions (0 real)
• return code → heuristic_stop
• 44 tracked paths
• seed → 367230
julia> verify_solution_completeness(sys, mres)
[ Info: Certify provided solutions...
[ Info: Got 2 dinstinct solutions.
[ Info: Compute additional witnesses for completeness...
┌ Info: MonodromyResult
│ ===============
│ • return_code → :heuristic_stop
│ • 4 solutions
│ • 28 tracked loops
└ • random_seed → 0x21e7406a
[ Info: Certify additional witnesses...
[ Info: Computed 2 additional witnesses
[ Info: Compute trace using two parameter homotopies...
[ Info: Norm of trace: 9.33238819760471e-17
true
source
## Monodromy Result
A call to monodromy_solve returns a MonodromyResult:
HomotopyContinuation.permutations - Method
permutations(r::MonodromyResult; reduced=true)
Return the permutations of the solutions that are induced by tracking over the loops. If reduced=false, then all permutations are returned. If reduced=true then permutations without repetitions are returned.
If a solution was not tracked in the loop, then the corresponding entry is 0.
Example: monodromy loop for a varying line that intersects two circles.
using LinearAlgebra
@var x[1:2] a b c
c1 = (x - [2, 0]) ⋅ (x - [2, 0]) - 1
c2 = (x - [-2, 0]) ⋅ (x - [-2, 0]) - 1
F = [c1 * c2; a * x[1] + b * x[2] - c]
S = monodromy_solve(F, [[1, 0]], [1, 1, 1], parameters = [a, b, c], permutations = true)
permutations(S)
will return
2×2 Array{Int64,2}:
1 2
2 1
and permutations(S, reduced = false) returns
2×12 Array{Int64,2}:
1 2 2 1 1 … 1 2 1 1 1
2 1 1 2 2 2 1 2 2 2
source
## Group actions
If there is a group acting on the solution set of the polynomial system this can provided with the group_action keyword for single group actions or with the group_actions keyword for compositions of group actions. These will be internally transformed into GroupActions.
HomotopyContinuation.GroupActions - Type
GroupActions(actions::Function...)
Store a bunch of group actions (f1, f2, f3, ...). Each action has to return a tuple. The actions are applied in the following sense
1. f1 is applied on the original solution s
2. f2 is applied on s and the results of 1
3. f3 is applied on s and the results of 1 and 2
and so on
Example
julia> f1(s) = (s * s,);
julia> f2(s) = (2s, -s, 5s);
julia> f3(s) = (s + 1,);
julia> GroupActions(f1)(3)
(3, 9)
julia> GroupActions(f1, f2)(3)
(3, 9, 6, -3, 15, 18, -9, 45)
julia> GroupActions(f1,f2, f3)(3)
(3, 9, 6, -3, 15, 18, -9, 45, 4, 10, 7, -2, 16, 19, -8, 46)
source
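As an illustrative sketch (my own example, not from the manual): for a system whose solution set, at fixed parameters, is symmetric under (x, y) ↦ (x, -y), that symmetry can be supplied via the group_action keyword. As required above, the action returns a tuple of further solutions:

```julia
using HomotopyContinuation

@var x y a c
# for fixed (a, c), solutions come in pairs (x, y) and (x, -y)
F = System([x^2 + y^2 - 1, a * x + c]; parameters = [a, c])

flip(s) = ([s[1], -s[2]],)   # must return a tuple of solutions

res = monodromy_solve(F; group_action = flip)
# with equivalence_classes = true (the default once a group action is supplied),
# only one solution per orbit is tracked
```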
To help with the more common group actions we provide some helper functions:
• dCR17 del Campo, Abraham Martín, and Jose Israel Rodriguez. "Critical points via monodromy and local methods." Journal of Symbolic Computation 79 (2017): 559-574.
• LRS18 Leykin, Anton, Jose Israel Rodriguez, and Frank Sottile. "Trace test." Arnold Mathematical Journal 4.1 (2018): 113-125.
http://www.physicsforums.com/showthread.php?p=3638545 | # Find the x coordinate of the stationary point of the following curves
by studentxlol
Tags: coordinate, curves, point, stationary
1. The problem statement, all variables and given/known data

Find dy/dx and determine the exact x coordinate of the stationary point for:
(a) y=(4x^2+1)^5
(b) y=x^2/lnx

2. Relevant equations

3. The attempt at a solution

(a) y=(4x^2+1)^5
dy/dx=40x(4x^2+1)^4
40x(4x^2+1)^4=0
Find x... How?

(b) y=x^2/lnx
dy/dx=2xlnx-x^2 1/x / (lnx)^2
2xlnx-x^2 1/x / (lnx)^2=0
Find x... How?
Quote by studentxlol 40x(4x^2+1)^4=0 Find x... How?
re 1st prob:
Then either 40x = 0 or (4x^2+1)^4 = 0.
and solve the above two equations.
You are aware that $x^2/x= x$ aren't you?

$y= x^2/\ln(x)$: $y'= (2x\ln(x)- x)/(\ln(x))^2= 0$

Use parentheses! What you wrote was $y'= 2x \ln(x)- (x/(\ln(x))^2)= 0$. Multiply both sides of the equation by $(\ln(x))^2$ and you are left with $2x \ln(x)- x= x(2\ln(x)- 1)= 0$. Can you solve that?
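A quick numeric check of the two hints above (Python, outside the thread; the candidate points x = 0 and x = e^(1/2) are the ones those hints lead to):

```python
import math

def dy_a(x):
    # derivative of (4x^2 + 1)^5 via the chain rule
    return 40 * x * (4 * x**2 + 1)**4

def dy_b(x):
    # derivative of x^2 / ln(x) via the quotient rule (x > 0, x != 1)
    return (2 * x * math.log(x) - x) / math.log(x)**2

# (a) 4x^2 + 1 > 0 for every real x, so 40x(4x^2+1)^4 = 0 only at x = 0
assert dy_a(0.0) == 0.0

# (b) x(2 ln x - 1) = 0 with x > 0 forces ln x = 1/2, i.e. x = e^(1/2)
x_star = math.exp(0.5)
print(abs(dy_b(x_star)))  # ~0 up to round-off
```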
https://dsp.stackexchange.com/tags/distortion/hot
# Tag Info
### Digital Distortion effect algorithm
Thanks to the plot in Olli Niemitalo's answer I got convinced that the formula given in the book has a sign error. The non-linearity used for fuzz or distortion is always some type of smoothed ...
### Precise 5th and 7th harmonics of a sampled sine wave
This answer discusses the harmonic spectra of the quantized sequence in five cases: limit $f/f_s \to 0$, synchronous sampling of a cosine with rational $f/f_s$, synchronous sampling of a sinusoid of ...
### Digital modelling of circuits with diode (i.e. guitar distortion)
One possible tool is Wave Digital Filter analysis which is a type of physical modeling that represents signals as travelling waves. It can also be extended to non-linear elements such as diodes. ...
### Can someone explain waveshaping to me?
In the audio domain, waveshaping is simply applying a memoryless nonlinear function to an input signal. $$y(t) = g\big( x(t) \big)$$ The waveshaping function, $g(x)$, is most often a continuous ...
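As a concrete example of such a function (my own sketch, not from the answer): a tanh soft clipper, normalized so that full-scale input maps to full-scale output:

```python
import math

def waveshape(x, drive=4.0):
    # memoryless nonlinearity g(x): tanh soft clipper, scaled so g(1) = 1
    return math.tanh(drive * x) / math.tanh(drive)

# apply sample-by-sample to a sine burst; the output stays within [-1, 1]
samples = [0.8 * math.sin(2 * math.pi * 5 * n / 1000) for n in range(1000)]
shaped = [waveshape(s) for s in samples]
```

Raising the hypothetical `drive` parameter pushes the curve toward hard clipping and adds more harmonic content.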
Accepted
### Looking for pratical quantitative comparison metrics for scaled, delayed and warped Signals
I'm answering the question the way I understood it - How can one find a similarity measure which isn't sensitive to scaling and shifting. An approach could be borrowed from the Computer Vision world ...
Accepted
### Redistributing Color in a RGB Image According to a Gaussian Distribution
After you equalize the histogram you can think of your data as a stream of variables ${X}_{i}$ where $X \sim U \left[ 0, 1 \right]$. Now all you need is to transform samples of Uniform Random ...
Accepted
### Identify the Type of Image Distortion (On Lena Image)
Blur If you want to reverse Blur applied on an image (Using Convolution, namely Linear Spatial Invariant Blur) you should use Deconvolution which, as name suggests, the inverse operation of ...
### Digital Distortion effect algorithm
You can write the body of the function directly into Wolfram Alpha and it plots it: It looks like a waveshaper to me, and those can be used as you describe. But there was an error in the formula, see ...
### Algorithm(s) to mix audio signals without clipping
Lower the global volume. Impulse tracker classically outputs channels at about 33% volume max by default. That seems to be both loud enough for music with few channels (4 channel Amiga MODs) and soft ...
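A sketch of that fixed-headroom approach (the 0.33 per-channel gain mirrors the Impulse Tracker default mentioned above; everything else is an editor's assumption):

```python
def mix(channels, channel_gain=0.33):
    """Sum float channels sample-wise at reduced per-channel volume."""
    length = max(len(ch) for ch in channels)
    out = []
    for n in range(length):
        s = sum(ch[n] for ch in channels if n < len(ch))  # ragged channels allowed
        out.append(channel_gain * s)
    return out

# three full-scale channels sum to 0.99, still inside [-1, 1]
mixed = mix([[1.0], [1.0], [1.0]])
```

With many simultaneously loud channels the scaled sum can still exceed full scale, which is why a final clamp or limiter is commonly applied after the sum.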
### Camera Calibration using Single Input Image
Take a look at this paper here: "Straight lines have to be straight" by Faugeras et al. http://link.springer.com/article/10.1007/PL00013269#page-1 It is straightforward to implement, but essentially ...
### What is done to minimize distortion due to the hold operation?
I agree with Jim Clay's answer, but I think it is important to point out two things. First of all, there are no phase distortions due to the hold operation, just a simple delay of half a sampling ...
### What is done to minimize distortion due to the hold operation?
What you are describing is the distortion introduced by an ideal digital-to-analog converter (DAC) in the analog domain. Two things are typically done to reduce this distortion: Analog filtering ...
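The droop introduced by the hold operation is easy to quantify; a small sketch (editor's addition) evaluating the zero-order hold's $|\mathrm{sinc}|$ magnitude response:

```python
import math

def zoh_gain(f, fs):
    # ideal zero-order hold magnitude response: |sin(pi f/fs) / (pi f/fs)|
    x = math.pi * f / fs
    return 1.0 if x == 0 else abs(math.sin(x) / x)

fs = 48000.0
droop_db = 20 * math.log10(zoh_gain(fs / 2, fs))  # about -3.9 dB at Nyquist
```

An inverse-sinc pre-emphasis filter in the digital domain (or the analog reconstruction filter) can compensate this droop; the hold's phase term is a pure half-sample delay, as noted in the answer above.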
### Total Harmonic Distortion calculation and its origins
You can create non-linear digital systems (an example would be a system that finds the absolute value of the input). You can also simulate an analog non-linear system using DSP. The easiest way is to ...
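As an illustration of measuring the output of such a non-linear system, THD can be estimated from single-bin DFTs at the harmonic frequencies (editor's sketch; it assumes coherent sampling, i.e. an integer number of cycles in the record):

```python
import cmath
import math

def tone_amplitude(x, f, fs):
    # amplitude of a real sinusoid at frequency f via a single-bin DFT
    n = len(x)
    acc = sum(x[k] * cmath.exp(-2j * math.pi * f * k / fs) for k in range(n))
    return 2 * abs(acc) / n

def thd(x, f0, fs, nharm=5):
    # ratio of the RMS sum of harmonics 2..nharm+1 to the fundamental amplitude
    fund = tone_amplitude(x, f0, fs)
    harm = [tone_amplitude(x, k * f0, fs) for k in range(2, nharm + 2)]
    return math.sqrt(sum(a * a for a in harm)) / fund

# test signal: fundamental plus a 10% second harmonic -> THD ~ 0.1
fs, f0, n = 1000, 50, 1000
sig = [math.sin(2 * math.pi * f0 * k / fs)
       + 0.1 * math.sin(2 * math.pi * 2 * f0 * k / fs) for k in range(n)]
```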
### Distances In a Single Image With Some Real References
Well, you can get an approximation. Since the shapes in the image are neither regular nor completely planar, it is hard, if not impossible, to know the camera distortion. But if you know the size of the wheels ...
### How to estimate radial distortion from lens characteristics?
There is a lot of variability because distortion can be compensated for optically and lens designs differ. Some lenses are marketed as rectilinear, most not. You would be making the distortion worse ...
### How to analyze image quality?
You are apparently in the context of no-reference, reference-free or blind image quality assessment. The topic is quite active, and I am not sure people have already a completely accepted framework ...
### Can IR emitter signal be distorted by curved glass housing around receiver?
Short answer: No. Long answer: you can of course create shadowing that way, and that would disturb operation. And of course, a glass wall will refract infrared just as it refracts any other light. ...
https://groupprops.subwiki.org/wiki/Normal_not_implies_potentially_fully_invariant | # Normal not implies potentially fully invariant
This article gives the statement, and possibly proof, of a non-implication relation between two subgroup properties: every subgroup satisfying the first subgroup property (normal subgroup) need not satisfy the second subgroup property (potentially fully invariant subgroup).
## Statement
It is possible to have a normal subgroup $H$ of a group $G$ that is not a potentially fully invariant subgroup of $G$ -- in other words, there is no group $K$ containing $G$ such that $H$ is a fully invariant subgroup of $K$.
## Facts used
1. Equivalence of definitions of complete direct factor: This states that for a complete subgroup, being normal is equivalent to being a direct factor.
2. Equivalence of definitions of fully invariant direct factor: This states that for a direct factor, being a fully invariant subgroup is equivalent to being a homomorph-containing subgroup.
3. Homomorph-containment satisfies intermediate subgroup condition
## Proof
### Example involving a complete group
Further information: Complete and potentially fully invariant implies homomorph-containing
Let $A$ be a nontrivial complete group. Define $G := A \times A$ and $H := A \times \{ e \}$. Clearly, $H$ is a normal subgroup of $G$.
Suppose $K$ is a group containing $G$ such that $H$ is fully invariant in $K$. In particular, $H$ is normal in $K$. Since $H$ is complete and normal, it is a direct factor of $K$ (Fact 1), so there exists a subgroup $C$ of $K$ that is a complement to $H$, giving $K = H \times C$ as an internal direct product. Further, since $G/H \cong A$ is a subgroup of $K/H \cong C$, $C$ has a subgroup, say $B$, isomorphic to $A \cong H$.
Then, consider the endomorphism $\alpha$ of $K$ that sends $C$ to the trivial subgroup and maps $H$ isomorphically onto the subgroup $B$. Since $B \le C$ is nontrivial and intersects $H$ trivially, $\alpha(H) = B$ is not contained in $H$, contradicting the full invariance of $H$ in $K$.
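The construction can be checked by brute force for the smallest nontrivial complete group, $A = S_3$, in the special case $K = G$ (an editor's sketch, not part of the original article): here $C = \{e\} \times A$ and $B = C$, and the endomorphism below already fails to send $H$ into itself.

```python
from itertools import permutations

# A = S_3, the smallest nontrivial complete group; a permutation p is a
# tuple with p[i] the image of i, and composition (p*q)(i) = p(q(i)).
A = list(permutations(range(3)))
identity = (0, 1, 2)

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    q = [0, 0, 0]
    for i in range(3):
        q[p[i]] = i
    return tuple(q)

# G = A x A with coordinate-wise multiplication; H = A x {e}
def gmul(g, h):
    return (mul(g[0], h[0]), mul(g[1], h[1]))

G = [(p, q) for p in A for q in A]
H = {(p, identity) for p in A}

# H is normal in G ...
assert all(gmul(gmul(g, h), (inv(g[0]), inv(g[1]))) in H for g in G for h in H)

# ... and alpha: (p, q) |-> (e, p) is an endomorphism of G ...
def alpha(g):
    return (identity, g[0])

assert all(alpha(gmul(g1, g2)) == gmul(alpha(g1), alpha(g2)) for g1 in G for g2 in G)

# ... that does not send H into itself.
assert any(alpha(h) not in H for h in H)
```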
### More general example
More generally, suppose $H$ is a fully normalized subgroup of $G$ that is normal in $G$, but such that there is a homomorphism $\theta: G \to G$ whose kernel contains $C_G(H)$ such that $\theta(H)$ is not contained in $H$ (in other words, $H$ is not a centralizer-annihilating endomorphism-invariant subgroup). Then, $H$ is not a potentially fully invariant subgroup of $G$.
Examples include:
• $G$ is the dihedral group of order 16, say $G := \langle a,x \mid a^8 = x^2 = e, xax = a^{-1}\rangle$, and $H = \langle a^2,x \rangle$. Then $C_G(H) = \langle a^4 \rangle$, and there is a homomorphism $\theta$ from $G$ to $G$ with $\theta(a) = a^2$ and $\theta(x) = ax$, whose kernel contains $C_G(H)$ and for which $\theta(H)$ is not contained in $H$. Thus, $H$ is not a potentially fully invariant subgroup of $G$. Further information: D8 is not potentially fully invariant in D16
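The properties claimed for $\theta$ can be verified by brute force; a short sketch (editor's addition) encoding $a^i x^\epsilon \in G$ as the pair $(i, \epsilon)$:

```python
# D16 = <a, x | a^8 = x^2 = e, x a x = a^{-1}>; from x a^j = a^{-j} x we get
# (a^i x^e)(a^j x^f) = a^{i + (-1)^e j} x^{e+f}
def mul(g, h):
    (i, e), (j, f) = g, h
    return ((i + (-1) ** e * j) % 8, (e + f) % 2)

G = [(i, e) for i in range(8) for e in (0, 1)]

# H = <a^2, x>: exactly the elements with an even power of a
H = {(i, e) for (i, e) in G if i % 2 == 0}

# theta: a |-> a^2, x |-> a x, so a^i x^e |-> a^{2i+e} x^e
def theta(g):
    i, e = g
    return ((2 * i + e) % 8, e)

# theta is an endomorphism of G ...
assert all(theta(mul(g, h)) == mul(theta(g), theta(h)) for g in G for h in G)

# ... its kernel contains C_G(H) = <a^4> ...
assert theta((4, 0)) == (0, 0)

# ... and theta(H) is not contained in H: theta(x) = a x has odd a-power.
assert theta((0, 1)) not in H
```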