# C# How to reduce data sizes?
## Recommended Posts
Let's say I had a matrix that held location, rotation and scale; however, all I would need is location (X, Y, Z), 8 directions (every 45 degrees) and one constant scale (always 1).
What would be the most optimal way to store the value, so that it uses the least amount of memory and is small enough that you could send millions of them in a data packet?
##### Share on other sites
You can't send millions of anything in one packet.
Nor does the way you store a value necessarily relate to how you transmit a value.
One of 8 directions can be stored and transmitted in 3 bits. Scale is always 1 and therefore requires no storage or transmitted data. Location requires 3 numbers but the size of those depends on the precision of the values.
##### Share on other sites
What kind of precision do you need for your position coordinates? What are your limits on data? I imagine for something like this, a “snapshot frame” could send the full precision of state (X, Y, Z, rotation), while intermediary frames use less precise or fewer bits for changes between them. If possible, I’d look into gzip compression, but I suspect it won’t have too much of an effect.
EDIT:
Some rough guesses. Assuming some reference frame that has the full precision of the (X, Y, Z, rotation), and for each frame we use half-float precision for deltas, for a total of 27 bits/entity, then several million entities will be about 54 Mbits/update. Or perhaps you could use a variable number of bits and only send the components that need updating: (ID, (component, value), ...)…
##### Share on other sites
8 minutes ago, Kylotan said:
You can't send millions of anything in one packet.
Interesting. Why not?
8 minutes ago, fastcall22 said:
What kind of precision do you need for your position coordinates?
Let's say the world ranges from -15 000 000 000 to +15 000 000 000 on all axes. No floating-point values.
12 minutes ago, Kylotan said:
One of 8 directions can be stored and transmitted in 3 bits.
What would the code for this look like?
Let's say the idea is to have 1 billion 2D zombies update each frame and to send the update to all locations in the most optimal way.
##### Share on other sites
The minimum amount of data you can send is 1 bit. You can't put a million bits in one packet.
You could potentially send an update that concerns millions or billions of characters and it could compress down to 1 bit, if you're lucky. That's a different problem. It's also unrealistic.
Storing 8 directions in 3 bits is just a case of reserving 3 bits of whatever storage you're using for a given number. You can flip bits in C# with the | and & operators. I suspect this is largely a waste of time compared to more productive means of reducing the cost of what you send, however.
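A minimal C# sketch of the packing Kylotan describes (the layout and the second "speed" field are purely illustrative, not from the thread): the direction sits in the low 3 bits of a byte, and a hypothetical 5-bit field fills the rest.

int direction = 5;   // one of the 8 directions: 0..7 fits in 3 bits
int speed = 19;      // hypothetical second field: 0..31 fits in 5 bits
// Pack both fields into one byte with shifts and OR.
byte packed = (byte)((direction & 0b111) | ((speed & 0b1_1111) << 3));
// Unpack them again with shifts and AND masks.
int dirOut = packed & 0b111;             // 5
int speedOut = (packed >> 3) & 0b1_1111; // 19

The same idea scales up: reserve only as many bits as each field actually needs inside a larger integer, instead of giving every field its own byte or int.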
You are not going to be able to have 1 billion zombie updates per frame without redefining the meaning of either (a) update, (b) frame, or (c) billion.
##### Share on other sites
5 minutes ago, Kylotan said:
You are not going to be able to have 1 billion zombie updates per frame without redefining the meaning of either (a) update, (b) frame, or (c) billion.
Would there be a reasonable amount?
I mean, particles can be in the millions while holding collision data. So to what limit could we push the number of objects if we reduce the data they use?
"65,542 bytes in a network packet" is what I got from a quick google search. What is the rough number of 2D zombies I could update with this?
35 minutes ago, fastcall22 said:
54 Mbits seems like a lot for what is needed.
Sorry if I am asking a lot of questions. I just can't understand why there are games that only update 16 characters and still send MBs worth of data, while there are games updating hundreds of characters for a few KB.
I assume it's because the latter only send the data they must, and at very small precision. The question then is how? How do they reduce their data sizes?
19 minutes ago, Kylotan said:
Storing 8 directions in 3 bits is just a case of reserving 3 bits of whatever storage you're using for a given number.
I assume it has something to do with this. I have always resorted to storing data in a smaller type if I needed less precision.
For example: int MyNumber = 100; sbyte SmallerData = (sbyte)MyNumber;
But that means sbyte is the smallest I know how to make a number. (-128 to 127).
##### Share on other sites
6 minutes ago, Kylotan said:
To get transmission sizes down, you need to think in terms of information, not in terms of data. You're not trying to copy memory locations across the wire, you're trying to send whatever information is necessary
I feel like an idiot for not realizing this. I would never have to update the scale, as it is already there and never changes. Now it makes sense: I only need to update the differences.
I could limit the data sent by simply limiting the possible differences.
So if I had a 3rd person game where characters can jump, I wouldn't need to send bool Jump = false; every update. Only when the character jumps would I ever have to alert the other PC.
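A minimal C# sketch of that event-driven idea (all names here are hypothetical): nothing is written on ordinary frames, and a tiny record is written only on the frame where the state actually changes.

using System.IO;

enum CharacterEvent : byte { Jumped = 1 }

static class UpdateWriter
{
    // Called only on the frame the character jumps; ordinary frames send nothing for this.
    public static void WriteJump(BinaryWriter writer, ushort characterId)
    {
        writer.Write(characterId);                 // 2 bytes: which character
        writer.Write((byte)CharacterEvent.Jumped); // 1 byte: what happened
    }
}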
19 minutes ago, Kylotan said:
I don't think I fully understand this yet. But a quick read of the link shows that it is more or less what I was thinking about. Thanks for all of the help.
##### Share on other sites
Furthering what Kylotan said about only sending the information needed rather than all the data:
If you for some reason want a million zombies on a bunch of clients and a server, I'm guessing the zombies are AI. In which case, if you calculate the exact same numerical zombie AI simulation on each client, in theory you only need to send the things that might affect the zombie simulation (i.e. the real players and their inputs). This is a similar idea to client-side prediction, and as far as I know it is useful for RTS games. Presumably you have to be very careful about your simulation to make sure it runs identically on all machines.
If you really do have lots (hoping not millions) of players, then usually each client doesn't need to know about them all, only the ones within visible range. An authoritative server might keep track of all the players, but it only needs to send a subset of these to each client. And of course in the other direction the input only comes from one player per client.
##### Share on other sites
A crash course on bit packing (I'm using hex numbers, which makes it easy to understand):
int a = 0x01, b = 0x02, c = 0x55;
All numbers must be >= 0 and <= 0xFF (255); for that range we need 8 bits, because 2^8 = 256 possibilities.
int compressed = a | (b<<8) | (c<<16);
We get 0x00550201 by bit shifting and ORing to pack 3 numbers into one. (The uppermost 8 bits are still unused, so you could pack a 4th number.)
Now unpack:
int A = compressed & 0xFF; // mask out the other numbers
int B = (compressed>>8) & 0xFF; // bit shift and mask
int C = (compressed>>16); // no mask necessary if there is no 4th number
What to do if our original numbers are larger, but we want to use only 8 bits?
int a = 0x00120004;
We could ignore less important bits:
int qA = a>>16; // 0x00000012
Then after packing and unpacking you have lost the lower 16 bits; shifting back gives 0x00120000, but sometimes that's good enough. Edit: E.g. if you bin characters into smaller tiles of space, fewer bits become enough for the position relative to the tile, which is a form of delta compression.
2 hours ago, Scouting Ninja said:
2 hours ago, Kylotan said:
One of 8 directions can be stored and transmitted in 3 bits.
What would the code for this look like?
You would need to say how you store your original data. In the worst case it's a switch with 8 cases.
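For example, a C# sketch of that worst case, assuming (as a guess, since the thread doesn't say) that the direction is currently stored as a unit step (dx, dy):

using System;

enum Dir : byte { N, NE, E, SE, S, SW, W, NW }  // values 0..7, so each code fits in 3 bits

static class DirectionCodec
{
    // The "worst case" switch with 8 cases: map a unit step to its 3-bit code.
    public static byte ToCode(int dx, int dy) => (dx, dy) switch
    {
        (0, 1)   => (byte)Dir.N,
        (1, 1)   => (byte)Dir.NE,
        (1, 0)   => (byte)Dir.E,
        (1, -1)  => (byte)Dir.SE,
        (0, -1)  => (byte)Dir.S,
        (-1, -1) => (byte)Dir.SW,
        (-1, 0)  => (byte)Dir.W,
        (-1, 1)  => (byte)Dir.NW,
        _ => throw new ArgumentException("not one of the 8 directions")
    };
}

If the direction is already stored as a small enum like Dir, no switch is needed at all: the value itself is the 3-bit code.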
Edited by JoeJ
##### Share on other sites
If you want a billion zombies, then synchronising them over a network would be impractical... But you don't have to. You can synchronise their initial conditions and any player interactions, and then run identical updates on each client.
##### Share on other sites
1 hour ago, JoeJ said:
Now unpack:
int A = compressed & 0xFF; // mask out the other numbers
int B = (compressed>>8) & 0xFF; // bit shift and mask
int C = (compressed>>16); // no mask necessary if there is no 4th number
This made things clearer. It turns out I have been using Bit packing for years now, except to me it's always been Texture packing.
So the same principles I would use to store textures can be used here, that makes sense. Considering that textures are made of integers.
23 minutes ago, Hodgman said:
But you don't have to. You can synchronise their initial conditions and any player interactions
This is what I am planning now. I can make some kind of walk states, predefined paths and checkpoints, then just send an integer with the number of the path.
This would allow complex movements with a single integer per zombie.
I don't know why, but this whole time I was overcomplicating things, some kind of mental block. Now everything looks so familiar and makes sense. Finally I can start trying this out for real.
Thanks for all of the help.
##### Share on other sites
25 minutes ago, Scouting Ninja said:
This made things clearer. It turns out I have been using Bit packing for years now, except to me it's always been Texture packing.
So the same principles I would use to store textures can be used here, that makes sense. Considering that textures are made of integers.
Ok, but let me make another unrelated example:
vec3 diff = findMe - nodeCenter;
int childIndex = (diff.x>0) | ((diff.y>0)<<1) | ((diff.z>0)<<2);
This way we get 3 bits from 3 comparisons and combine them into a number 0-7. If you order the children of the octree appropriately, this indexes the correct child node in an elegant and effective way. If positions are integers too, this works even without comparisons, just by selecting the proper bits from the position (the shift depends on the tree level).
Another one is if you have 4 packed values in one, you can add and subtract to all of them with one op: 0x01020304 + 0x01010101 = 0x02030405. Limited, but there are practical use cases (e.g. to save registers on GPU).
What I mean is you should see bit packing more as a subset of bit math, which is a lot more than just compression (but not a total MUST to know).
Edited by JoeJ
##### Share on other sites
12 hours ago, Scouting Ninja said:
" 65,542 bytes in a network pack" is what I got from a quick google search. What is the rough amount of 2D zombies I could update with this?
Network packet size differs between networks. Your quoted number looks like the theoretical maximum size, which you only get with very reliable networks, eg loopback over localhost.
LANs are often around 1500. Real Internet, and mobile phone networks are even smaller. Look for the value of "mtu" of a network interface.
If you always want a single network packet for your data, you should look for the minimum guaranteed packet size rather than the largest possible packet size. The former will work on any network.
##### Share on other sites
We had a huge amount of AI in one game I worked on this year, where we needed to simulate a 40*40 km highway, updating a huge number of vehicles. Updating a million AI agents per frame is not solvable without a huge server farm, I think, but why do you want to do this?
Let's assume you have a world where each player is able to see all zombies all the time. Even then, a zombie will typically not do anything every frame but rather stay around most of the time. Zombies, like every other AI, need a reaction time to seem realistic, which may be a few hundred milliseconds, so they don't even need to update every frame. Assuming this, you could separate those million AI agents so that each agent has its own reaction latency and update only a few of them every frame. Assuming 60 frames per second you have a frame time of 16.667 milliseconds; assuming a standard zombie reaction time of 200 milliseconds, you need to update N / 11 zombies each frame. That is roughly 91,000 zombies per frame.
Now I do not know anything about your gameplay, but I do not think that you need to update 91,000 zombies each frame. Assuming a standard zombie size of 60*60 cm would mean that you have a region of 2,730,000^2 cm, or 27.3^2 km, without any level geometry between them, a pure wall block of zombies. Even if your player has a view distance of 14 km, you won't need to simulate more than 10% of the zombies per frame in this region, because any other zombie is too far away to be seen.
If I were making a game with this much AI, I would cheat wherever possible. Separate those zombies into different update loops that run every 10th frame for any AI that is visible (or faster, as I did with every 3rd frame for the vehicle AI depending on the driving speed), run your update every 20th frame for each AI that is near the player but not visible, and every 100th frame for the rest. I would do this as a sub-task so every update has enough time to run, and collect updates for your tasks, like a packet of 20 or 50 agents' data, to continuously send over the network.
As I do not think you need to rotate around all three axes, you could pass one rotation value as 3 bits, one ID value as 20 bits and 35 bits for each of the X/Z axes (the Y axis needs much less, maybe 16 bits), which would mean 14 bytes (112 bits) per agent. A 512-byte packet can then send updates for 35 agents, including some overhead bytes.
Then you should reduce this further by only sending updates for agents that have changed their state. I think in a real game this would reduce network traffic to a few kilobytes per second.
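A rough C# sketch of the per-agent layout described above (3-bit rotation, 20-bit ID, 35 bits each for X and Z, 16 bits for Y); the bit-writer helper and all names are illustrative, not something from the thread.

using System.Collections.Generic;

sealed class BitWriter
{
    private readonly List<byte> _bytes = new List<byte>();
    private int _bitPos;  // total number of bits written so far

    // Append the lowest bitCount bits of value, least significant bit first.
    public void Write(ulong value, int bitCount)
    {
        for (int i = 0; i < bitCount; i++)
        {
            if (_bitPos % 8 == 0) _bytes.Add(0);  // start a new byte when needed
            if (((value >> i) & 1UL) != 0)
                _bytes[_bitPos / 8] |= (byte)(1 << (_bitPos % 8));
            _bitPos++;
        }
    }

    public byte[] ToArray() => _bytes.ToArray();
}

static class AgentPacking
{
    // 3 + 20 + 35 + 35 + 16 = 109 bits, which rounds up to 14 bytes per agent.
    public static byte[] PackAgent(byte rotation, uint id, ulong x, ulong z, uint y)
    {
        var w = new BitWriter();
        w.Write(rotation, 3);  // one of 8 directions
        w.Write(id, 20);       // up to about 1 million agents
        w.Write(x, 35);        // 2^35 ~ 3.4e10, enough for the world range quoted earlier
        w.Write(z, 35);
        w.Write(y, 16);
        return w.ToArray();
    }
}

Thirty-five such 14-byte records plus a few overhead bytes then fit in the 512-byte packet mentioned above.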
• ### Similar Content
• Just to test out my animations, let's say I had a trip animation. How would I code a script where my character would fall in mud and their clothes would become all muddy? I saw this in a game on Unreal 3, but is this possible in Unity?
• Hello fellow devs!
Once again I started working on a 2D adventure game and right now I'm doing the character movement/animation. I'm not a big math guy and I was happy with my solution, but soon I realized that it's flawed.
My player has 5 walking-animations, mirrored for the left side: up, upright, right, downright, down. With the atan2 function I get the angle between player and destination. To get an index from 0 to 4, I divide PI by 5 and see how many times it goes into the player-destination angle.
In Pseudo-Code:
angle = atan2(destination.x - player.x, destination.y - player.y) //swapped y and x to get mirrored angle around the y axis
index = (int) (angle / (PI / 5));
PlayAnimation(index); //0 = up, 1 = up_right, 2 = right, 3 = down_right, 4 = down
Besides the fact that when angle is equal to PI it produces an index of 5, this works like a charm. Or at least I thought so at first. When I tested it, I realized that the up and down animation is playing more often than the others, which is pretty logical, since they have double the angle.
What I'm trying to achieve is something like this, but with equal angles, so that up and down has the same range as all other directions.
I can't get my head around it. Any suggestions? Is the whole approach doomed?
Thank you in advance for any input!
• Hello. I'm a newbie in Unity and just started learning the basics of this engine. I want to create a game like StackJump (links are below), and now I'm wondering what features I have to use to create such a game. Should I use the physics engine, or can I move objects by changing the transform manually in Update()?
If I should use physics, can you in a few words direct me on how I can implement it and what I have to use? Just general info, no need for a detailed description of the development process.
Game in PlayMarket
Video of the game
• Hi all. My project is coming along wonderfully, and am starting to consider alpha deployment, and would like your advice.
My project needs access to 10,000 small PNG image files at runtime, each only a few kilobytes, which during development I used to load directly from a fixed path on my HDD whenever one was needed (obviously not a solution for go-live), using something like this:
img = new WriteableBitmap(new BitmapImage(new Uri(@screenshotsPath + filename)));
The image would then be blitted onto a buffer screen, etc. etc. At a time, a few dozen would be being used.
Now I'm thinking about deployment, and also when I produce an update to my app, there could be more images to add to the folders. So I'm considering the best way of a) deploying the images to the user as part of the project, and b) how to most easily handle updates to the app, whereby more images will be added.
I have just experimented with adding them all as a Resource (!). This inflated the exe from 10mb to 100mb (not a major problem), increased the compile time from 3 secs to 30 secs (annoying), increased RAM usage from 500mb to 1.5gb (not a major problem either), but means that it solves my fixed directory issue, distribution issue, and update issue, simply by having the files all stuck into the executable. Here's the new code I'm using:
img = BitmapFactory.FromResource("Shots/" + filename);
The next thing I was going to try was to mark them as Content > Copy if Newer. This would resolve the executable size and RAM usage (and also the directory issue as well), however it seems that I'd need to highlight them all, and move them from Resource to Content. As an up-front job this isn't too bad, but as I add new images to the project, I'll need to go in and do this every time, which gets annoying, as the VS2015 default is Resource. Also, I'm not sure how this would work in terms of updates. Would something like ClickOnce deployment recognise new PNGs and install them to the users?
I also have 3,000 ZIP files (~500kb each) which also need deploying and updating in the same way. These are currently read directly from my HDD until I can find a permanent solution for adding these to the project as well.
Can anyone thing of a better way of doing what I'm trying to achieve?
Thanks for any help folks.
• I'm doing a test quest.
The player gets a quest from an NPC to bring him fish.
Once the player picks up the fish, the original NPC gets replaced by a new one with a new conversation trigger. The NPC tells the Player "Well done" and should give 200xp.
The script tells the xp counter to go up by making a reference to the gameobject that holds the text component
But it throws this error:
I'm aware that the error may hide in plain sight. I just have to sort this out, since I'm writing the AI at the same time, and the time it takes to resolve every one of these errors is tremendous.
Plus, I think I'll learn something. I've been having trouble with some basic functionalities recently. There might be something wrong with my understanding on how programming works.
Glad if someone could help (:
Edit: I'm fully aware that the update function requires an input. I call the function in the editor when the dialogue ends, it still doesn't work.
### ERR_TUNE
Set an ERR tuning parameter
#### Description:
The value of the ERR tuning parameter is set appropriately, according to the value given. ERR_TUNE may be called multiple times for the same parameter.
The given value can be overridden by setting an environment variable, ERR_PARAM (where PARAM is the tuning parameter name in upper case), at run time.
The routine will attempt to execute regardless of the given value of STATUS. If the given value is not SAI__OK, then it is left unchanged, even if the routine fails to complete. If the STATUS is SAI__OK on entry and the routine fails to complete, STATUS will be set and an error report made.
#### Invocation
CALL ERR_TUNE( PARAM, VALUE, STATUS )
#### Arguments
##### PARAM = CHARACTER$\ast$($\ast$) (Given)
The tuning parameter to be set (case insensitive).
##### VALUE = INTEGER (Given)
The desired value (see Notes).
##### STATUS = INTEGER (Given and Returned)
The global status.
#### Notes:
1. The following values of PARAM may be used:
• ’SZOUT’ Specifies a maximum line length to be used in the line wrapping process. By default the message to be output is split into chunks of no more than the maximum line length, and each chunk is written on a new line. The split is made at word boundaries if possible. The default maximum line length is 79 characters.
If VALUE is set to 0, no wrapping will occur. If it is set greater than 6, it specifies the maximum output line length. Note that the minimum VALUE is 7, to allow for exclamation marks and indentation.
• ’STREAM’ Specifies whether or not ERR should treat its output unintelligently as a stream of characters. If VALUE is set to 0 (the default) all non-printing characters are replaced by blanks, and line wrapping occurs (subject to SZOUT). If VALUE is set to 1, no cleaning or line wrapping occurs.
• ’REVEAL’ Allows the user to display all error messages cancelled when ERR_ANNUL is called. This is a diagnostic tool which enables the programmer to see all error reports, even those ’handled’ by the program. If VALUE is set to 0 (the default) annulling occurs in the normal way. If VALUE is set to 1, the message will be displayed.
• ’ENVIRONMENT’ This is not a true tuning parameter name but causes the environment variables associated with all the true tuning parameters to be used if set. If the environment variable is not set, the tuning parameter is not altered. The VALUE argument is not used.
2. The tuning parameters for MSG and ERR operate partially at the EMS level and may conflict in their requirements of EMS.
3. The use of SZOUT and STREAM may be affected by the message delivery system in use. For example, there may be a limit on the size of a line output by a Fortran WRITE, and automatic line wrapping may occur. In particular, a NULL character will terminate a message delivered by the ADAM message system.
4. With REVEAL, messages are displayed at the time of the ANNUL. As REVEAL operates at the EMS level they are displayed with Fortran WRITE statements so, depending upon the delivery mechanism for normal messages, they may appear out of order.
#### D.3 Deprecated Routine ERR_OUT
Purely for compatibility with previous versions of ERR, the routine ERR_OUT is provided. It should not be used in any new code – usually a call to ERR_REP is all that is required. If it is essential that the message be delivered to the user immediately, ERR_REP should be followed by a call to ERR_FLUSH.
# ElGamal Decryption variant
I am trying to do some ElGamal encryption, but with a different encryption formula. For that I am doing the following steps:
The key generator:
1. Choosing value $$p = 107$$ and $$a = 2$$
2. Random number $$d = 67$$, and $$b = a^d \bmod p$$ where $$b = 2^{67} \bmod 107 = 94$$
3. $$k_{priv} = 67$$ and $$k_{pub} = (p,a,b) = (107,2,94)$$
Encryption
1. Random value $$v = 45$$ and $$C_1 = a^v \bmod p = 2^{45} \bmod 107 = 28$$
2. We have the message $$m = 66$$; $$C_2 = m \cdot b^v \cdot a^v \bmod 107 = 66 \cdot 94^{45} \cdot 2^{45} \bmod 107 = 38$$
3. Finally, $$C = (C_1, C_2)$$
My problem comes when I try to decrypt the message, maybe I am totally wrong. But I am doing:
$$C_1 = a^v$$
$$C_2 = m \cdot a^v \cdot (a^d)^v$$ $$C_2 = m \cdot C_1 \cdot (a^d)^v$$
Trying to do that is where I am a little bit lost. If someone could help me with a clue to decrypt the message, that would be nice.
• What is the source of this question? Oct 28 '20 at 18:10
• You forgot to use the $d$, it is your secret, right? Oct 28 '20 at 19:35
• My question is how I can decrypt the message. I'm not sure if my approach is the best. Oct 29 '20 at 8:32
• I'm assuming this is a homework question, where we only provide hints. Since you show some effort I'll direct you. Use the fact that $b=a^d$, since the message was sent to you and $d$ is your secret. Oct 29 '20 at 9:57
• Thanks, this helped me. I haven't solved it yet; I don't know how to handle the $(a^d)^v$ term. Oct 30 '20 at 19:34
Finally, the decryption is: $$m = C_2 \cdot C_1^{-1} \cdot C_1^{-d}$$
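As a quick numeric check with the values above: $$C_1^{-1} \equiv 28^{-1} \equiv 65 \pmod{107}, \qquad C_1^{-d} \equiv 28^{-67} \equiv 43 \pmod{107},$$ so $$m \equiv C_2 \cdot C_1^{-1} \cdot C_1^{-d} \equiv 38 \cdot 65 \cdot 43 \equiv 66 \pmod{107},$$ which recovers the original message.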
# Homework Help: Prove that there is a point equidistant from 4 other points
1. Mar 4, 2017
### matrixone
1. The problem statement, all variables and given/known data
This problem is taken from S L Loney Coordinate geometry exercise (ch 2)
Prove that a point can be found which is at the same distance from each of the four points
$\bigg(am_1,\dfrac{a}{m_1}\bigg),\bigg(am_2,\dfrac{a}{m_2}\bigg),\bigg(am_3,\dfrac{a}{m_3}\bigg)$ and $\bigg(\dfrac{a}{m_1m_2m_3},am_1m_2m_3\bigg)$
2. Relevant equations
3. The attempt at a solution
Let the points (in that order) be A,B,C and D.
For A, B, C to be collinear, I equated the slopes of AB and BC and got m1 = m3.
That means A and C will be the same. So we are down to 3 points, and finding a point equidistant from those 3 is trivial, since for every triangle a circumcircle exists.
If A, B, C are non-collinear, then we will form a circle through A, B, C and see whether D is on that circle.
This is where I am stuck. How do I form the circle equation easily from these points? The matrix method seems cumbersome with these sorts of expressions.
2. Mar 4, 2017
### Buffu
Stop doing SL Loney if you are afraid of cumbersome maths. This is just the tip of the iceberg of cumbersomeness that you are going to face.
Anyhow, for this question you use $r^2 = \Delta x^2 + \Delta y^2$ and compute the values of $r$, $h$ in $\Delta x = x - h$, and $k$ in $\Delta y = y - k$.
Put the fourth point in the above equation, calculate the RHS and then pray that it will match the value of $r^2$.
3. Mar 4, 2017
### ehild
Yes, but the circumcentre is not at equal distance from all points. One point is on the side of the triangle, and it is at a shorter distance than the vertices. So assume that all the m's are different.
The centre of the circumcircle of ABC is the intersection of the perpendicular bisectors of AB and BC. Write their equation and solve for x,y. These are the coordinates of O.
Determine the radius r and see if D is at distance r from O.
Edit: it is not necessary to calculate the radius. With the same method as above, determine the circumcentre of the triangle BCD. If it is identical with the one for ABC, the four points lie on the same circle.
Last edited: Mar 5, 2017
4. Mar 5, 2017
### haruspex
You may have misunderstood matrixone's thinking there.
M1 showed that if three of the points are collinear then two of the three must be the same point. That gets it down to three distinct points altogether, so their circumcentre satisfies the requirement.
5. Mar 5, 2017
### haruspex
I would start from the other end. Suppose the circle has radius r and is centred at (p, q). If (x,y) is one of the points, that gives you an equation in x and y. You also know $xy=a^2$. Using that to substitute for y gives you a quartic in x.
Can you see how to proceed from there?
6. Mar 5, 2017
### Buffu
Why do you need to solve a quartic?
7. Mar 5, 2017
### haruspex
There will be no need to solve it. Hint: Vieta
8. Mar 6, 2017
### Buffu
Sorry, I don't get how Vieta will help here. I mean, I know Vieta's formulas for a 4th-degree polynomial, but how should I use them?
9. Mar 6, 2017
### haruspex
We know the roots of the quartic. These are the given x coordinates. The unknowns are p, q and r. These will feature in the coefficients of the quartic. The Vieta formulas tell us how to find the coefficients from the roots.
The algebra is easier if we formulate the problem more symmetrically. Instead of the given form for the fourth point, write it as $\left(am_4,\frac a{m_4}\right)$, where $\prod_{i=1}^{4} m_i = 1$.
10. Mar 6, 2017
### Buffu
If we were to find p, q and r, it would be enough to use $r^2 = (x-p)^2 + (y - q)^2$ and then plug in the given points.
11. Mar 6, 2017
### haruspex
I don't understand your question. The task is to show there exist (real) p, q and r such that this quartic has the given roots. Using Vieta, and knowing what the roots are, we can find formulae for p, q and r in terms of those roots. It remains to show these are all reals. p and q are easy; r is a bit trickier.
12. Mar 7, 2017
### matrixone
OK, let that circle be
(x-p)^2 + (y-q)^2 = r^2
(am1, a/m1) satisfies this equation; plugging that value in, I get a quartic in m1,
and by making the 4th-degree term's coefficient 1, I get the constant term as 1.
A quartic has at most 4 roots and we already know there are 3 distinct real roots (m1,m2,m3)
So let the next real root be m4
Using Vieta, product of roots = m1m2m3m4 = 1.
So we get m4 as 1/(m1m2m3),
and that implies that the required point indeed belongs to the circle.
IS IT CORRECT ?
THANX :)
13. Mar 7, 2017
### haruspex
Not so fast. You need to show that there exist p, q and r which will give all the correct coefficients. As I wrote, the tricky one is r.
14. Mar 7, 2017
### matrixone
Is there a need for that, Sir?
The circle equation came from 3 points, and there will be values of p, q, r that satisfy it.
I used the 3 confirmed roots (from which the circle equation was formed) and from that equation itself found the 4th root. If there were no (p, q, r) that satisfy the 4th root, then that (p, q, r) wouldn't satisfy the first 3 roots either. An abstract thought.
15. Mar 8, 2017
### haruspex
Yes, I think you are right. You are using my construction in a different way than I had envisaged:
p, q and r are defined as the parameters of the circle through the first three points, so we know they exist (having dealt with the collinear case).
Those three points satisfy the quartic.
There must be a fourth real root of the quartic (even if it is a repeated root).
From that as an x coordinate we can construct the y coordinate as $a^2/x$.
Since it satisfies the quartic it must be a point on the circle.
From Vieta we deduce its x value, and the x and y values match the given fourth point.
Very good.
One small correction:
Product of roots $= a^4 = a^4 m_1 m_2 m_3 m_4$.
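Explicitly: substituting $y = a^2/x$ into $(x-p)^2+(y-q)^2=r^2$ and multiplying through by $x^2$ gives
$$x^4 - 2px^3 + (p^2+q^2-r^2)x^2 - 2a^2qx + a^4 = 0,$$
with leading coefficient $1$ and constant term $a^4$, so by Vieta the product of the roots is $a^4 = a^4m_1m_2m_3m_4$, giving $m_1m_2m_3m_4=1$.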
# Recitation 5B
The Invertible Matrix Theorem: Let $A$ be a square $n\times n$ matrix. Then the following statements are equivalent.
• $A$ is an invertible matrix.
• $A$ is row equivalent to the $n\times n$ identity matrix.
• $A$ has $n$ pivot positions.
• The equation $Ax=0$ has only the trivial solution.
• The columns of $A$ form a linearly independent set.
• The linear transformation $x\mapsto Ax$ is one-to-one.
• The equation $Ax=b$ has at least one solution for each $b$ in $\mathbb{R}^n$.
• The columns of $A$ span $\mathbb{R}^n$.
• The linear transformation $x\mapsto Ax$ maps $\mathbb{R}^n$ onto $\mathbb{R}^n$.
• There is an $n\times n$ matrix $C$ such that $CA=I$.
• There is an $n\times n$ matrix $D$ such that $AD=I$.
• $A^T$ is an invertible matrix.
Problem 1: Suppose $AB=AC$, where $B$ and $C$ are $n\times p$ matrices and $A$ is invertible. Show that $B=C$. Is this true, in general, when $A$ is not invertible?
Solution: If $A$ is invertible, then we can multiply by $A^{-1}$ on the left of each side of $AB=AC$ and get $A^{-1}AB=A^{-1}AC$. As $A^{-1}A=I$, $B=C$. In general, it is not true that $AB=AC$ implies $B=C$. For instance, if $A$ is the zero matrix, then $AB=AC$ always holds, but it is not necessarily the case that $B=C$.
Problem 2: Suppose $A$, $B$ and $C$ are invertible $n\times n$ matrices. Show that $ABC$ is also invertible by producing a matrix $D$ such that $(ABC)D=I$ and $D(ABC)=I$.
Solution: Let $D$ be $C^{-1}B^{-1}A^{-1}$. It is easy to check that $(ABC)D=AB(CC^{-1})B^{-1}A^{-1}=A(BB^{-1})A^{-1}=AA^{-1}=I$, and similarly $D(ABC)=I$.
Problem 3: Determine which of the matrices are invertible. $$\begin{bmatrix}5 & 7 \\ -3 & -6\end{bmatrix}, \begin{bmatrix}3 & 0 & 0 \\ -3 & -4 & 0 \\ 8 & 5 & -3\end{bmatrix}, \begin{bmatrix}3 & 0 & -3 \\ 2 & 0 & 4 \\ -4 & 0 & 7\end{bmatrix}$$
Solution: As for the first matrix, since its two columns are not multiples of each other, they are linearly independent. By the invertible matrix theorem, it is invertible. As for the second one, its transpose has $n$ pivot positions. By the invertible matrix theorem, the transpose is invertible. Again by the theorem, the second matrix itself is invertible. As for the last one, its second column is a zero vector. By the invertible matrix theorem, the matrix is not invertible because its columns are not linearly independent.
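For instance, row reducing the first matrix (adding $\frac{3}{5}$ of row 1 to row 2) gives $$\begin{bmatrix}5 & 7 \\ -3 & -6\end{bmatrix} \sim \begin{bmatrix}5 & 7 \\ 0 & -\frac{9}{5}\end{bmatrix},$$ which has two pivot positions, confirming via the theorem that it is invertible.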
Problem 4: Let $A$ and $B$ be $n\times n$ matrices. Show that if $AB$ is invertible, so is $A$.
Solution: By the invertible matrix theorem, we can find $C$ such that $(AB)C=I$. In other words, $A(BC)=I$. Again by the invertible matrix theorem, $A$ is invertible because $BC$ is a right inverse of $A$.
Problem 5: Suppose $T$ and $U$ are linear transformations from $\mathbb{R}^n$ to $\mathbb{R}^n$ such that $T(U(x))=x$ for all $x$ in $\mathbb{R}^n$. Is it true that $U(T(x))=x$ for all $x$ in $\mathbb{R}^n$?
Solution: Let $A$ and $B$ be the standard matrices of $T$ and $U$. The assumption that $T(U(x))=x$ for all $x$ implies $AB=I$. Therefore $A$ and $B$ are inverses of each other, and $BA=I$. Hence $U(T(x))=x$ for all $x$ in $\mathbb{R}^n$.
# Trying to open pdf document but it opens in libre office draw format
I created a 238-page PDF document from an .odt in LibreOffice, and now I am trying to open it in LibreOffice, but it opens as a one-page blank LibreOffice Draw page. How do I get it to open as a PDF?
Just discovered that I have the same problem. Open LO Writer directly. Use menu File -> Open. Then Navigate to the pdf document, select it and click Open. Instead, LO Draw opens on top of Writer to display the document. Even checking "Open as read-only" fails to prevent LO Writer from switching to Draw.
Using LO v6.1.5.2 on Windows 7 Pro.
(2019-04-23 22:29:07 +0200)
For quite some time my versions of Draw have been able to open multi-page PDFs. This is handy for small cosmetic changes to PDF-documents or to extract some graphics. But to extract the full text or to work on the document there are other tools.
(2020-03-11 18:13:31 +0200)
PDF documents will always open in Draw because they are not editable text documents. PDF format may contain many different "static" objects: images, lines (note: not paragraphs), etc. which are laid out in pages. By default, LO sees this as a collection of shapes set in a page, which is a definition for a Draw file.
From File>Open and its variants, you won't be able to change this behaviour. Opening a file in LO is a complex process where the file extension is basically ignored. An analysis of contents at beginning of file is performed to decide which component to launch (Writer, Calc, Impress, Draw).
Consequently, to display a PDF file as a PDF, double-click on it from the OS file browser or right-click and choose an adequate application.
To illustrate the difference between PDF and ODF (the format for Writer files), select a block of "text" (one paragraph) in the PDF, copy it and paste it in Writer. You end up with a collection of single-line paragraphs, not the single original paragraph.
Aha! So, once a document has been exported as PDF, it is now "fixed" (as in stationary or unchangeable). And if there is a need to edit that document, one has to go back to the original (hopefully saved as .odt) and edit that, and then export it again as a new PDF. Thank you.
(2019-04-24 16:45:59 +0200)
Yes, PDF was meant originally as a way of exchanging non-modifiable documents. This constraint has since then been relaxed as there exist now utilities to edit them. But don't expect something as fully fledged as LO Writer.
(2019-04-24 17:03:56 +0200)
Yes, as @ajlittoz pointed out, the Portable Document Format was designed for transfer between computers, to be compatible with various operating systems and software, while preserving the layout (fonts, graphics, pagination, etc.). It was never meant to be editable, but soon people wanted to be able to annotate inside the PDFs, highlight text and other stuff. But real editing with a PDF as a base, is total nonsense - there are other formats for that.
(2020-03-11 18:22:07 +0200)
If you have generated a so-called hybrid PDF file, you can edit it in Writer directly. The Writer content is then embedded inside the PDF as Writer content. The difference from a simple PDF file is that the embedding needs more disk space, but the content can easily be edited.
For export into a specific PDF format, always use the menu File > Export as > Export as PDF...
Use a real programme! Libre is THE WORST bunch of CRAP I have EVER tried to work with. I'm a copy editor! This is an absolute nightmare and has cost ME money. Live and learn. Libre SUCKS. Simple as.
Oh, btw, it's obvious why it's free!!!!
(2020-03-11 17:48:53 +0200)
Sorry to hear that your intended workflow is not compatible with the way most software and file-formats are designed to work.
(2020-03-11 18:26:35 +0200)
All tools can be characterized by what they are good for and what they are not good for. Perhaps you expected all the keyboard shortcuts from some specialized copy-editing program to work just the same in LibreOffice. Your rant didn't include much explanation.
(2020-03-12 15:01:29 +0200)
### #Actual arkane7
Posted 19 October 2012 - 10:26 AM
If all you're actually trying to do is find the first empty space and replace it with an O then this would do it:
//Set all spaces to ' ' when you initialize the board
for (int i = 0; i < 8; ++i) //Where 8 is the size of the board
{
board[i] = ' ';
}
//Then when you're wanting to replace the first empty space wth 'O'
for (int i = 0; i < 8; ++i)
{
if (board[i] == ' ')
board[i] = 'O';
}
if you do not put a break in that if statement, it will change ALL empty spaces into O's, rather than the first one you find
if (board[i] == ' ')
{
board[i] = 'O';
break;
}
That way only one empty space is changed.
Also, I think you may want 9 instead of 8; aren't there 9 tiles in TTT?
### #2 arkane7
Posted 19 October 2012 - 10:23 AM
If all you're actually trying to do is find the first empty space and replace it with an O then this would do it:
//Set all spaces to ' ' when you initialize the board
for (int i = 0; i < 8; ++i) //Where 8 is the size of the board
{
board[i] = ' ';
}
//Then when you're wanting to replace the first empty space wth 'O'
for (int i = 0; i < 8; ++i)
{
if (board[i] == ' ')
{
board[i] = 'O';
break;
}
}
if you do not put a break in that if statement, it will change ALL empty spaces into O's, rather than the first one you find
### #1 arkane7
Posted 19 October 2012 - 10:22 AM
If all you're actually trying to do is find the first empty space and replace it with an O then this would do it:
//Set all spaces to ' ' when you initialize the board
for (int i = 0; i < 8; ++i) //Where 8 is the size of the board
{
board[i] = ' ';
}
//Then when you're wanting to replace the first empty space wth 'O'
for (int i = 0; i < 8; ++i)
{
if (board[i] == ' ')
{
board[i] = 'O';
break;
}
}
if you do not put a break in that if statement, it will change ALL empty spaces into O's, rather than the first one you find
## Symmetry and the Fourth Dimension (Part 4)
Last time I posed a puzzle: figure out the Coxeter diagrams of the Platonic solids.
When you do this, it’s hard to help noticing a cool fact: if you take a Platonic solid and draw a dot in the center of each face, these dots are the vertices of another Platonic solid, called its dual. And if we do this again, we get back the same Platonic solid that we started with! These two solids have very similar Coxeter diagrams.
For example, starting with the cube, we get the octahedron:
Starting with the octahedron, we get back the cube:
These pictures were made by Alan Goodman, and if you go to his webpage you can see how all 5 Platonic solids work, and you can download his free book, which includes a good elementary introduction to group theory:
• Alan Goodman, Algebra: Abstract and Concrete, SemiSimple Press, Iowa City, 2012.
When we take the dual of a Platonic solid, or any other polyhedron, we replace:
• each vertex by a face,
• each edge by an edge,
• each face by a vertex.
So, it should not be surprising that in the Coxeter diagram, which records information about vertices, edges and faces, we just switch the letters V and F.
Here’s the story in detail.
### Tetrahedron
The tetrahedron is its own dual, and its Coxeter diagram
V—3—E—3—F
doesn’t change when we switch the letters V and F. Remember, this diagram means that the tetrahedron has:
• 3 vertices and 3 edges around each face,
• 3 edges and 3 faces around each vertex.
### Cube and Octahedron
The dual of the cube is the octahedron, and vice versa. The Coxeter diagram of the cube is:
V—4—E—3—F
because the cube has:
• 4 vertices and 4 edges around each face,
• 3 edges and 3 faces around each vertex.
On the other hand, the Coxeter diagram of the octahedron is:
V—3—E—4—F
because it has:
• 3 vertices and 3 edges around each face,
• 4 edges and 4 faces around each vertex.
If we switch the letters V and F in one of these Coxeter diagrams, we get the other one… drawn backwards, but that doesn’t count in this game.
### Dodecahedron and Icosahedron
The dual of the dodecahedron is the icosahedron, and vice versa. The Coxeter diagram of the dodecahedron is:
V—5—E—3—F
because it has:
• 5 vertices and 5 edges around each face,
• 3 edges and 3 faces around each vertex.
The Coxeter diagram of the icosahedron is:
V—3—E—5—F
because it has:
• 3 vertices and 3 edges around each face,
• 5 edges and 5 faces around each vertex.
Again, you can get from either of these two Coxeter diagrams to the other by switching V and F. That’s duality.
### The numbers
But now let’s think a bit about a deeper pattern lurking around here.
Puzzle. If we take the Coxeter diagrams we’ve just seen:
V—3—E—5—F
V—3—E—4—F
V—3—E—3—F
V—4—E—3—F
V—5—E—3—F
and strip off everything but the numbers, we get these ordered pairs:
(3,5), (3,4), (3,3), (4,3), (5,3)
Why do these pairs and only these pairs give Platonic solids? I’ve listed them in a cute way just for fun, but that’s not the point.
There could be a number of perfectly correct ways to tackle this puzzle. But I have one in mind, so maybe I should give you a couple of clues to nudge you toward my way of thinking—though I’d be happy to hear other ways, too!
(3,6)
Well, last time we looked at the corresponding Coxeter diagram:
V—3—E—6—F
and we saw it doesn’t come from a Platonic solid. Instead, it comes from this tiling of the plane:
What I’m looking for is an equation or something like that, which holds only for the pairs of numbers that give Platonic solids. And it should work for some good reason, not by coincidence!
One more clue. If your equation, or whatever it is, allows extra solutions like (2,n) or (n,2), don’t be discouraged! There are weird degenerate Platonic solids called hosohedra, with just two vertices, like this:
You can’t make the faces flat, but you can still draw it on a sphere, and in some ways that’s more important. The Coxeter diagram for this guy is:
V—2—E—3—F
And each hosohedron has a dual, called a dihedron, with just two faces, like this:
The Coxeter diagram for this is:
V—3—E—2—F
So, if your answer to the puzzle allows for hosohedra and dihedra, it’s not actually bad. As you proceed deeper and deeper into this subject, you realize more and more that hosohedra and dihedra are important, even though they’re not polyhedra in the usual sense.
### 31 Responses to Symmetry and the Fourth Dimension (Part 4)
1. Tobias Fritz says:
So when we consider Coxeter diagrams of the form
V—3—E—n—F
for some natural number $n$, we get tilings by equilateral triangles in which $n$ triangles meet at each vertex. For $n\leq 5$, these tilings live on the sphere, while for $n=6$, it is a tiling of the plane. Since the number of triangles at each vertex determines the curvature, we should expect to get tilings of the hyperbolic plane for $n\geq 7$. I guess that this is the tiling associated to $n=7$:
But what about $n\geq 8$? Do those still exist? If so, are there any pictures around?
Now, I know that you either expect someone to bring this up here or you plan to explain this later in the series. If the latter is the case, feel free to delete this comment.
• John Baez says:
Tobias wrote:
But what about $n\geq 8$? Do those still exist? If so, are there any pictures around?
Indeed, there are tilings of the hyperbolic plane with Coxeter diagrams
V—m—E—n—F
for all natural numbers m,n that are too big to give Platonic solids or tilings of the plane. Don Hatch has some great pictures of them. For example, here’s
V—8—E—3—F
drawn in white, and its dual
V—3—E—8—F
drawn in blue:
These hyperbolic regular tilings are deeply related to modular curves and thus number theory. The one you showed us, for example, is closely connected to Klein’s quartic curve and a bunch of number theory involving the number 7.
Now, I know that you either expect someone to bring this up here or you plan to explain this later in the series. If the latter is the case, feel free to delete this comment.
I’m actually happy to talk about these in the comments, because while I’m tempted to blog about them, doing so would vastly expand this series of posts. Almost everything I’ll do with Platonic solids and their truncations in 3, 4 and higher dimensions has a Euclidean analogue and a hyperbolic analogue! I hope to turn these blog posts into a semi-pop book someday, but if I’m not careful it will grow into an encyclopedia!
• Robert says:
They always say that hyperbolic tilings are related to number theory. I figure that the combinatorial properties of tilings may be a fertile ground for number theory.
Then, what might be the modular curve analogue for a non-hyperbolic tiling? Or is it that only hyperbolic groups give “nontrivial” insight into number theory?
Sorry for the distraction, I just couldn't resist.
• Tobias Fritz says:
Amazing pictures! I wish I knew more about the connections to number theory. By the way, I dimly remember seeing that picture on the main page of your website. Must have been a while ago…
• One thing that really surprised me about hyperbolic surfaces is that you can maintain a rule such as “7 equilateral triangles at each vertex” and get wildly different topologies– the obvious one where it’s like a wrinkly plane or a lettuce leaf, or weird ones with lots of different intersecting tunnels. It’s sort of like how the rule “6 equilateral triangles at each vertex” gives either a flat plane or a cylinder (graphene or nanotube).
2. Tobias Fritz says:
I meant to include this picture, but apparently the html got stripped off:
• John Baez says:
Alas, only I can post pictures in comments—probably some sort of anti-spam measure. But I’m always glad to post them for people who comment here, as long as they’re useful. So I’ll fix this.
3. Greg Egan says:
A regular n-gon can be decomposed into n identical isosceles triangles, with angles of 2π/n where they meet at the polygon’s centre. That means the other two angles in each triangle, which are identical, sum to π(1-2/n), and so that’s the interior angle between two edges of the polygon.
If we try to arrange m of these regular n-gons around a vertex of a polyhedron, the sum of all the angles between the edges must come to less than 2π. So we must have:
$m \pi (1-2/n) < 2 \pi$
or
$m(n-2) < 2n$
or a bit more symmetrically
$(m-2)(n-2) < 4$
For m=2 or n=2 that’s just $0 < 4$. So we can have m=2, n = anything for the dihedra, and n=2, m = anything for the hosohedra. I’m not sure if people ever talk about the m=1 or n=1 cases.
Now, if we assume m > 2 and n > 2, then the inequality gives us:
$m < 6$
$n < 6$
So we only have a finite set of possibilities to try, with values for each variable ranging from 3 to 5. And the five pairs that satisfy the inequality are (3,3), (4,3), (5,3), (3,4) and (3,5).
• John Baez says:
Great! Since I like Egyptian fractions, which are sums of reciprocals of natural numbers, I like to start the way you did and then rewrite
$\displaystyle{ m \pi \left(1-\frac{2}{n}\right) < 2 \pi }$
as
$\displaystyle{ 1 - \frac{2}{n} < \frac{2}{m} }$
or
$\displaystyle{ \frac{1}{m} + \frac{1}{n} > \frac{1}{2} }$
In week182 of This Week’s Finds, I sketched how solutions of this equation give Dynkin diagrams of finite-dimensional simple Lie groups of types A, D and E. But these Lie groups actually correspond to unordered pairs m, n. So, these Lie groups correspond, in a sneaky way, to dual pairs of Platonic solids! More precisely:
• the exceptional Lie algebras E6, E7 and E8 correspond to the tetrahedron, the cube/octahedron, and the dodecahedron/icosahedron. These are the solutions with m and n are both > 2.
• the Lie algebras Dk = so(2k) correspond to the hosohedra/dihedra. These are the solutions where m and n are both ≥ 2 but at least one actually equals 2.
• the Lie algebras Ak = sl(k+1) correspond to the even more degenerate shapes you alluded to. These are the solutions where at least one of m or n equals 1.
This is called the ‘McKay correspondence’, and it runs very deep. Here’s a great introduction to it:
• Joris van Hoboken, Platonic solids, binary polyhedral groups, Kleinian singularities and Lie algebras of type A,D,E, Master’s Thesis, University of Amsterdam, 2002.
All this is the ‘spherical’ case of McKay correspondence. There’s also a lot to say about the ‘flat’ or ‘Euclidean’ case:
$\displaystyle{ \frac{1}{m} + \frac{1}{n} = \frac{1}{2} }$
and the ‘hyperbolic’ case:
$\displaystyle{ \frac{1}{m} + \frac{1}{n} < \frac{1}{2} }$
But for those, see week182!
• I think you’ve got the third and the final displayed inequalities the wrong way round.
• John Baez says:
Whoops, you’re right. I fixed those mistakes. Glad to see you back in cyberspace, Simon!
• Blake Stacey says:
Ah, ADE-ology. I suspect that in order to understand it, I’ll have to write a book on it. Or at least add several chapters to the manuscript for Universality and Renormalization I have slowly growing on my computer…
• But the same thing doesn’t work in four dimensions, right?
• John Baez says:
Later in this series we’ll certainly be using Coxeter diagrams to help classify Platonic solids in 4 dimensions. They’re also good for classifying ’tilings’ of 3d Euclidean space and 3d hyperbolic space by polyhedra. In case that mixture of 4’s and 3’s annoys you: these examples are actually all living in the same dimension, since a 4d Platonic solid is a special tiling of a 3-sphere by polyhedra. Spherical, planar and hyperbolic geometry always go hand in hand in mathematics.
But, understanding which Coxeter diagrams give which things isn’t simply a matter of taking sums of reciprocals. We’ll see, I guess!
4. John Baez says:
Robert and Tobias are interested in how tilings of hyperbolic space are related to number theory, so let me say a bit about that. But I’m no expert on this huge and deep subject, so I hope some experts see this and say a bit more.
Platonic solids give tilings of the sphere by regular polygons. Regular tilings of the flat plane can be curled up to give tilings of a 1-holed torus by regular polygons. And regular tilings of the hyperbolic plane can be curled up to give tilings of tori with more holes by regular polygons. They’re probably all interesting for number theory, but I know a bit more about the third case.
For example, consider this drawing by Don Hatch:
The white lines show a tiling of the hyperbolic plane by regular heptagons, 3 meeting at each corner. So, the Coxeter diagram of this tiling (in my style of notation) is:
V—7—E—3—F
If we identify the heptagons with the same numbers here:
we get a tiling of a 3-holed torus by 24 heptagons, 3 meeting at each corner. Here’s a picture of it, drawn by Joe Christy:
The number 24 may seem arbitrary, but it’s not. There’s no way to tile the sphere, the 1-holed torus or the 2-holed torus by heptagons with 3 meeting at each corner, and any tiling of a 3-holed torus by heptagons with 3 meeting at each corner has to have 24 heptagons!
The result is actually a Riemann surface, Klein’s quartic curve, with the maximum number of symmetries for any 3-holed Riemann surface: 168 = 24 × 7. (This happens to be the number of days in a week, but that’s numerology, not number theory!)
But what probably matters more here is that we can think of this Riemann surface as the quotient of the hyperbolic plane by a discrete group. It’s
$\mathbb{H}/ \Gamma(7)$
where $\mathbb{H}$ is the hyperbolic plane, and $\Gamma(7)$ is the group of 2 × 2 integer matrices with determinant 1 whose entries are all equal mod 7 to the corresponding entries of the identity matrix.
The group $\Gamma(7)$ is an example of a modular group and the Klein quartic seen as $\mathbb{H}/ \Gamma(7)$ is an example of a modular curve.
Thanks to the general theory surrounding modular curves, Klein’s quartic curve can be seen as parametrizing elliptic curves with extra structure—namely, a fixed isomorphism between their 7-torsion subgroup and $\mathbb{Z}/7 \times \mathbb{Z}/7$.
Hmm, the jargon seems to be getting thick and I haven’t really gotten to the number theory yet—at least, not in any way that’s recognizable as number theory to the uninitiated! If I knew this stuff better I could get to the number theory more rapidly. But it’s time for lunch. So I’ll stop here and maybe continue later.
• Blake Stacey says:
This happens to be the number of days in a week, but that’s numerology, not number theory!
days -> hours
(The story I’ve always heard is that the Babylonians liked 24— along with 60 and 360—because it has a healthy complement of integer divisors, and 7 because it’s the cardinality of the set {Moon, Mercury, Venus, Sun, Mars, Jupiter, Saturn}.)
• John Baez says:
Whoops! I had ‘days’ on the brain because this is related to a famous theory for how the days of the week got their names.
The idea is that astrologers liked to list the planets in order of decreasing orbital period, counting the sun as having a period of one year, and the moon as period of one month:
Saturn (29 years)
Jupiter (12 years)
Mars (687 days)
Sun (365 days)
Venus (224 days)
Mercury (88 days)
Moon (29.5 days)
For the purposes of astrology they wanted to assign a planet to each hour of each day of the week. To do this, they assigned Saturn to the first hour of the first day, Jupiter to the second hour of the first day, and so on, cycling through the list of planets over and over, until each of the 24 × 7 = 168 hours was assigned a planet. Each day was then named after the first hour in that day. Since 24 mod 7 equals 3, this amounts to taking the above list and cycling around it, reading off every third planet:
Saturn (Saturday)
Sun (Sunday)
Moon (Monday)
Mars (Tuesday)
Mercury (Wednesday)
Jupiter (Thursday)
Venus (Friday)
And that’s how they got listed in this order! At least, this is what the Roman historian Dion Cassius (AD 150-235) claims. Nobody knows for sure.
• John Baez says:
John wrote:
But it’s time for lunch. So I’ll stop here and maybe continue later.
Well, let me just say one more thing! You can use Klein’s quartic curve to give a proof that Fermat’s Last Theorem holds for the exponent 7: there are no integer solutions of
$x^7 + y^7 + z^7 = 0$
with all three variables nonzero. (This is equivalent to other more familiar statements of Fermat’s Last Theorem for exponent 7.)
You can see the proof here:
• Noam Elkies, The Klein quartic in number theory.
Briefly, the idea is to start by noticing that the 3-holed torus I was discussing is isomorphic, as a Riemann surface, to the space of solutions of Klein’s quartic equation
$u^3 v + v^3 w + w^3 u = 0$
modulo rescaling by a constant factor. Then, you use some wizardry to show that the only integer solutions of the above equation are those where at least two of the variables vanish. Then, you notice that an integer solution of
$x^7 + y^7 + z^7 = 0$
gives an integer solution of
$u^3 v + v^3 w + w^3 u = 0$
by letting
$u = x^3 z, \quad v = y^3 x, \quad w = z^3 y$
since then
$u^3 v + v^3 w + w^3 u = x^3 y^3 z^3 (x^7 + y^7 + z^7) = 0$
This is just one of many examples of how number theory gets related to tilings of hyperbolic space, and not at all the most profound, but it’s cute.
• Robert says:
Your example doesn’t look very scary to me. That’s nice. Next time i’m lost in the jargon i’ll try and imagine klein’s quartic rolling over the hyperbolic plane. I can now see how spheres are limited and why a cylinder is a bit too plain for this kind of fun. Thanks for the insight, oh and for the link to the Congruence Groups.
• John Baez says:
Well, I sort of changed course midstream, switching from my intended goal to something I find easier to understand. If I’d kept marching boldly ahead, I would have said something like this (but longer, and maybe more helpful, though maybe less):
Since Klein’s quartic curve is a curled-up piece of the hyperbolic plane, it’s called a ‘modular curve’. Modular curves can be seen as parametrizing families of elliptic curves (roughly speaking, tori) equipped with extra structure.
This is a nice big story already. However, it then takes a weird turn and gets much more intense.
There’s another weirder relation between modular curves and elliptic curves. Sometimes an elliptic curve can be covered by a modular curve using a so-called ‘branched cover’. There’s a big theorem called the modularity theorem which says (very roughly) that all elliptic curves defined by equations using just rational numbers can be covered in this way by modular curves.
And this theorem implies Fermat’s Last Theorem!
So, as usual in number theory, the flashy easy-to-explain result follows as a corollary from something that’s harder to explain, but ultimately more interesting and more connected to geometry and symmetry.
The proof of the modularity theorem is not easy: Andrew Wiles and a grad student of his proved enough of it to get Fermat’s Last Theorem, and other good mathematicians finished off the job.
• Tim Silverman, Pictures of Modular Curves (Part I), n-Category Café, 10 October 2006.
and read all 11 parts. You’ll get a deeper understanding of creatures like
V—8—F—3—E
as shown in the pictures there.
• Tobias Fritz says:
John explained:
Modular curves can be seen as parametrizing families of elliptic curves. [..] Sometimes an elliptic curve can be covered by a modular curve using a so-called ‘branched cover’. There’s a big theorem called the modularity theorem which says (very roughly) that all elliptic curves defined by equations using just rational numbers can be covered in this way by modular curves.
That’s weird: a modular curve, equipped with some additional structure defining the ‘branched quotient’ and this torsion subgroup iso, becomes itself a *point* in a modular curve! (I suppose that this second curve is, in general, different from the first.)
• John Baez says:
Tobias wrote:
That’s weird.
Yes, the modularity theorem is weirdly self-referential. I’ve always been puzzled why everyone explaining this stuff doesn’t mention that.
It’s even more amusing that this self-referential result is what it took to prove something seemingly ‘concrete’ like Fermat’s Last Theorem. As it happened, Gerard Frey suggested that a counterexample to Fermat’s Last Theorem
$a^n + b^n = c^n$
would give an elliptic curve
$y^2 = x(x - a^n)(x + b^n)$
that couldn’t be covered by a modular one. This was later proved by Jean-Pierre Serre and Kenneth Ribet. And that made it clear what had to be done to prove Fermat’s Last Theorem: prove the modularity theorem, or at least enough of it to cover this case.
5. blake561able says:
Nice posting! I think I see where you’re heading with this. The {3,4} pairs are called Schläfli symbols, and I imagine the 4-dimensional case is coming next. Lounesto does a good job in Ch.6 of his book, Clifford Algebras and Spinors, of taking you on a whirlwind tour of 3-d and 4-d ‘Platonic’ solids. And for a great exposition of the McKay Correspondence check out these notes by Qi Phillip on lectures given by M. Khovanov at Columbia a few years ago, http://www.math.columbia.edu/~khovanov/finite/QiYou1.pdf
Just thought I’d provide some references for those itching to learn more, the fun stuff for the McKay Correspondence comes from considering the relation between the angles of triangles on the platonic solids and the areas of spherical triangles obtained upon central projecting out from the center of the solid.
• John Baez says:
Those notes by Qi Phillip are very nice—thanks! I wish they’d been around when I was just learning this stuff! And his handwriting, if that’s what it is, looks like it’s typed in a special font.
6. John Baez says:
In response to a puzzle last time, Chris Namaste has worked out the Coxeter diagrams of all the regular tilings of the plane:
square tiling:
V—4—E—4—F
triangular tiling:
V—3—E—6—F
hexagonal tiling:
V—6—E—3—F
The square tiling is self-dual; the other two are dual to each other.
These tilings correspond to the solutions of
$\displaystyle{ \frac{1}{m} + \frac{1}{n} = \frac{1}{2}}$
where $m, n$ are positive integers.
7. John Baez says:
Greg Egan solved today’s puzzle here, but there’s another slightly different solution nobody has mentioned yet. This involves an equation instead of an inequality.
Suppose we are trying to classify all the Platonic solids. We’re looking for ways to tile the surface of a sphere with regular $m$-gons, with $n$ meeting at each vertex. Suppose there are a total of $V$ vertices, $E$ edges, and $F$ faces. Since the Euler characteristic of the sphere is 2, we have
$V - E + F = 2$
Since each face has $m$ edges but 2 faces meet along each edge, we have
$mF = 2E$
Since each vertex has $n$ edges meeting it but each edge meets 2 vertices, we also have
$nV = 2E$
Putting these equations together we get
$\displaystyle{ 2E\left( \frac{1}{m} + \frac{1}{n} - \frac{1}{2} \right) = 2 }$
or
$\displaystyle{ \frac{1}{m} + \frac{1}{n} = \frac{1}{2} + \frac{1}{E} }$
Of course this implies the inequality we’ve already seen:
$\displaystyle{ \frac{1}{m} + \frac{1}{n} > \frac{1}{2} }$
A priori the equation is stronger than the inequality, but it just happens to be equivalent, at least when
$m, n \ge 2$
Whenever the sum of the reciprocals of two such numbers exceeds 1/2, it exceeds 1/2 by the reciprocal of another such number! And this number is the number of edges of your Platonic solid…
… or hosohedron, or dihedron.
For example
$\displaystyle{ \frac{1}{4} + \frac{1}{3} > \frac{1}{2} }$
gives the cube, but in fact
$\displaystyle{ \frac{1}{4} + \frac{1}{3} = \frac{7}{12} = \frac{1}{2} + \frac{1}{12} }$
so the cube has 12 edges! Simple but pretty stuff.
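A quick brute-force check of this (a sketch, not part of the comment): enumerate pairs $(m, n)$ with $1/m + 1/n > 1/2$ and read off $E$ from the equation above. It recovers the five Platonic solids together with the degenerate hosohedra and dihedra ($m = 2$ or $n = 2$).
from fractions import Fraction

for m in range(2, 7):
    for n in range(2, 7):
        excess = Fraction(1, m) + Fraction(1, n) - Fraction(1, 2)
        if excess > 0:
            # 1/m + 1/n - 1/2 = 1/E, so E is the reciprocal of the excess
            print(f"{{{m},{n}}}: E = {1 / excess}")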
|
## Whitman College: David Guichard's "Calculus, Chapter 6: Applications of the Derivative, Section 6.3: Newton's Method"
Read Section 6.3 (pages 135-138). In this reading, you will be introduced to a numerical approximation technique called Newton's Method. This method is useful for finding approximate solutions to equations which cannot be solved exactly.
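For a quick picture of how the method works (a sketch; the function $f(x) = x^2 - 2$ here is just an illustrative choice for approximating $\sqrt{2}$): Newton's Method repeatedly replaces a guess $x$ with $x - f(x)/f'(x)$.
def newton(f, fprime, x0, tol=1e-10, max_iter=50):
    # Iterate x <- x - f(x)/f'(x) until the step is smaller than tol
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

print(newton(lambda x: x**2 - 2, lambda x: 2*x, x0=1.0))  # ~1.4142135623730951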
|
# Is there an error in this AP Calculus quiz on function transformations?
I'm taking an online AP calculus course because my high school does not offer it. One of the questions on a practice quiz (for the online course, not a problem from an official AP practice test) is as follows:
Suppose a friend of yours gives you a graph of $y=f(x)$, and asks you to graph the function $y=-f(2(x-3))+4$. How would you go about doing this?
The choices are:
A. Start with the graph of $y=f(x)$, flip it over, squash it horizontally by a factor of 2, shift it 3 units to the right and 4 units up.
B. Start with the graph of $y=f(x)$, shift it 3 units to the right and 4 units up, then squash it horizontally by a factor of 2, and finally flip it over vertically.
C. Start with the graph of $y=f(x)$, squash it horizontally by a factor of 2, flip it over, shift it 3 units to the right and 4 units up.
D. Start with the graph of $y=f(x)$, shift it 3 units to the left and 4 units up, then squash it horizontally by a factor of 2, and finally flip it over vertically.
E. Start with the graph of $y=f(x)$, shift 4 units up, squash it horizontally by a factor of 2, flip it vertically, and finally shift it 3 units to the right.
The quiz says the correct answer is B. It gives the following "Feedback": "Remember to shift first, then stretch or squash, and then flip."
I think answer B is wrong. I think the correct answer should be C. A few drawings support my claim. I know that the order in which we carry out the transformations matters. I think B is wrong because if we shift horizontally before we squash horizontally, we actually need to shift SIX units right, not three. If we squash first, however, as in C, we only need to shift three units.
On the other hand, maybe the issue is exactly what "squash horizontally" is supposed to mean. My understanding is that when we transform $y=f(x)$ into $y=f(2x)$, the graph gets squashed only because the entire plane gets squashed: all points $(x,y)$ get moved to $(x/2,y)$, and this causes the shape of the graph to look squashed relative to the original. So we're squashing about the line $x=0$. I think the teacher is mistakenly using this phrase to mean "squash about the line $x=3$."
Who is right? Am I right that C could be the correct answer on some reasonable interpretation of "squash horizontally by a factor of 2"?
• "Flip it over"? Is that the language that is being used on the AP exams now? No talk of reflection through the appropriate axis? It certainly looks, to me at least, that A or C would be correct (shouldn't the "or" be exclusive on AP multi-choice parts??) but obviously not B. Sep 22, 2016 at 1:22
• @DanielW.Farlow: this is a quiz problem from an online course, not a problem from an official AP exam. I get the sense my teacher doesn't know what she's talking about. The language she uses is very imprecise. Sep 22, 2016 at 1:56
• @DanielW.Farlow: and yes, I didn't even realize it but A gives the same result, too! I wonder what I should do... how do I tell the teacher she's wrong? Sep 22, 2016 at 1:56
• @joshmilligan Tell her she's wrong, but be tactful about it. "Hey Ms. ___, I think there may be an error for this question. It looks like A and C are actually both correct while B doesn't seem right." Then go on to explain your reasoning. If you are really worried about the teacher's ego or something like that, then I would try an email (which I just realized you said this is online...so that is obviously what you would do). Simply write out your thought process, and if she is half-decent, then she will be happy you wrote her. If you have problems...send her the link to this question! Sep 22, 2016 at 2:04
• If only the school system were better... Sep 22, 2016 at 22:42
Yes, there is an error. If you just consider $f(x) = x$ then the function given is:
$$y = -2(x-3) + 4 = -2x + 10$$
This modified function yields a line with $y$-intercept $10$.
The suggested quiz answer says:
Start with the graph of $y=f(x)$, shift it 3 units to the right and 4 units up, then squash it horizontally by a factor of 2, and finally flip it over vertically.
If you follow these directions for $f(x) = x$, which has $y$-intercept $0$, then the shift 3 units to the right will create a $y$-intercept of $-3$, the shift of 4 units up will create a $y$-intercept of $1$, the horizontal squishing will not change the $y$-intercept of $1$, and the vertical flip will take the $y$-intercept to $-1$ rather than the required $10$. And so the suggested answer is wrong, as you suspected.
• "the horizontal squishing will not change the y-intercept of 1" -- but this is part of what's at issue, right? it seems the teacher and I are using different definitions of "horizontal squashing." I see how B could be right if she interprets squashing as happening about the line $x=3$, but I thought we always use horizontal compression to mean about the line $x=0$.... Sep 22, 2016 at 0:54
The transformations can be split into two independent sets, the horizontal (inside $f$) and vertical (outside $f$) ones. Within each set the order of transformations must be fixed, but otherwise they can be weaved into each other.
This also means we can unweave the options for transforming $f(x)$ to $-f(2(x-3))+4$ so the horizontal ones come first. Denote the elementary transformations as follows:
• P: "squash horizontally by a factor of 2" ($x\to2x$)
• Q: "shift right 3 units" ($x\to x-3$)
• R: "flip graph vertically" ($y\to-y$)
• S: "shift up 4 units" ($y\to y+4$)
P and Q are horizontal transformations, while R and S are vertical ones.
For the transformation $x\to a(x-b)$, if we are given the elementary transformations "squash horizontally by factor $a$" and "shift right $b$ units" we should apply the squashing first; if we reverse the operations the transformation becomes $x\to ax-b$. Similarly, reversing "scale vertically by factor $a$" and "shift up $b$ units" turns $y\to ay+b$ into $y\to a(y+b)$.
From this, we see that $PQRS$ is a correct sequence for this problem. The given options' sequences are below, and I rearrange them after each arrow so that the horizontal transformations come first:
• A: $RPQS\to PQRS$
• B: $QSPR\to QPSR$
• C: $PRQS\to PQRS$
• D: $(-Q)SPR\to (-Q)PSR$
• E: $SPRQ\to PQSR$
We see that two options, A and C, produce the required transformation. B is wrong, as you reasoned out.
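One way to check this numerically (a sketch; the test function f below is an arbitrary choice, not part of the original answer) is to compose the four elementary maps in each proposed order and compare against the target:
import numpy as np

f = lambda x: np.sin(x) + 0.1 * x**2          # arbitrary test function
P = lambda g: (lambda x: g(2 * x))            # squash horizontally by a factor of 2
Q = lambda g: (lambda x: g(x - 3))            # shift right 3 units
R = lambda g: (lambda x: -g(x))               # flip vertically
S = lambda g: (lambda x: g(x) + 4)            # shift up 4 units

def compose(steps):
    g = f
    for step in steps:
        g = step(g)
    return g

target = lambda x: -f(2 * (x - 3)) + 4
xs = np.linspace(-5, 5, 101)
for name, seq in [('A', [R, P, Q, S]), ('B', [Q, S, P, R]), ('C', [P, R, Q, S])]:
    print(name, np.allclose(compose(seq)(xs), target(xs)))
# prints: A True, B False, C True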
• "scaling should always happen before translation when transforming a graph" ... no, you can translate before you scale, so long as you realize that what you translate by depends on the order. $f(2(x-3))$ can be obtained by either compressing first, then shifting 3 right, OR shifting 6 right, then compressing. at least as far as I can tell. Sep 22, 2016 at 2:00
I don’t think that any of us would have a question about $-f(2x)+4$ : that’s what you’d get by squashing and flipping, and then moving up by $4$. The effect of replacing $x$ by $x-3$ is to shift the squashed, flipped figure $3$ units to the right. So C looks right to me.
Suppose $f(x) = (0,0), (2,5), (4,-1)$
$y = -f(2(x-3)) - 4 = (3,-4), (4,-9), (5,-3)$
That looks to me like squish (factor of 2), shift horizontally (right 3 units), flip about the x axis, shift vertically (down 4 units).
But my brain might not work the same as yours. You could say that is flip then squish then shift, and you would get the same picture. And both would be right.
or squish then flip then shift.
But the flipping definitely comes before the vertical shift. and the squish comes before the horizontal shift.
I made a transcription error. It should be $y = - f(2(x-3)) + 4$ and not -4, but I don't care to redo my graphics, and doesn't really change the message.
Try reflecting the function across the $y=x$ line (i.e. switch the xs and ys). You should end up with this function (assuming that we only do this for a one-to-one portion of $f$'s domain). In other words, $f$ is a piece-wise function with one-to-one pieces.
$$x = \frac{1}{\color{red}2}f^{-1}\bigg(\frac{1}{-1}(y-4)\bigg)+\color{red}3$$
It should now be obvious why the compression by a factor of $2$ needs to occur before the shift of $3$ units.
|
## Learning About [the] Loss (Function)
7 Nov
One of the things we often want to learn is the actual loss function people use for discounting ideological distance between themselves and a legislator. Often people try to learn the loss function using actual distances. But if the aim is to learn the loss function, perceived distance rather than actual distance is better, because perceived distance is what the voter believes to be true. People can then use the function to simulate scenarios in which perceptions equal the facts.
## Confirming media bias
31 Oct
It used to be that searches for ‘Fox News Bias’ were far more common than searches for ‘CNN bias’. Not anymore. The other notable thing—correlated peaks around presidential elections.
Note also that searches around midterm elections barely rise above the noise.
## God, Weather, and News vs. Porn
22 Oct
The Internet is for porn (Avenue Q). So it makes sense to measure things on the Internet in porn units.
I jest, just a bit.
In Everybody Lies, Seth Stephens Davidowitz points out that people search for porn more than weather on GOOG. Data from Google Trends for the disbelievers.
But how do searches for news fare? Surprisingly well. And it seems the new president is causing interest in news to outstrip interest in porn. Worrying, if you take Posner’s point that people’s disinterest in politics is a sign that they think the system is working reasonably well. The last time searches for news > porn was when another Republican was in the White House!
How is the search for porn affected by Ramadan? For answer, we turn to Google Trends from Pakistan.
In Ireland though, of late, it appears the searches for porn increase during Christmas.
## Measuring Segregation
31 Aug
Dissimilarity index is a measure of segregation. It runs as follows:
$\frac{1}{2} \sum\limits_{i=1}^{n} \left| \frac{g_{i1}}{G_1} - \frac{g_{i2}}{G_2} \right|$
where:
$g_{i1}$ is the population of group 1 in the $i$th area,
$G_1$ is the population of group 1 in the larger area
against which dissimilarity is being measured (and analogously for $g_{i2}$ and $G_2$).
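A minimal sketch of the computation, assuming g1 and g2 are arrays of the two groups' counts across the areas (hypothetical inputs):
import numpy as np

def dissimilarity(g1, g2):
    # D = 0.5 * sum_i | g_i1/G_1 - g_i2/G_2 |
    g1, g2 = np.asarray(g1, float), np.asarray(g2, float)
    return 0.5 * np.abs(g1 / g1.sum() - g2 / g2.sum()).sum()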
The measure suffers from a couple of issues:
1. Concerns about lumpiness. Even in a small area, are black people at one end, white people at another?
2. Choice of baseline. If the larger area (say a state) is 95% white (Iowa is 91.3% White), dissimilarity is naturally likely to be small.
One way to address the concern about lumpiness is to provide an estimate of the spatial variance of the quantity of interest. But to measure variance, you need local measures of the quantity of interest. One way to arrive at local measures is as follows:
1. Create a distance matrix across all addresses. Get latitude and longitude. And start with Euclidean distances, though smart measures that take account of physical features are a natural next step. (For those worried about computing super huge matrices, the good news is that computation can be parallelized.)
2. For each address, find the n closest addresses and estimate the quantity of interest. Where multiple houses are a similar distance apart, sample randomly or include all of them. One advantage of using the n closest addresses rather than all addresses in a particular area is that it naturally accounts for variations in density.
But once you have arrived at the local measure, why just report variance? Why not report means of compelling common-sense metrics, like the proportion of addresses (people) for whom the closest house has people of another race?
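A minimal sketch of the local measure and that common-sense metric (the input names lat, lon, and group are hypothetical, and treating raw latitude/longitude with Euclidean distance is a simplification):
import numpy as np
from scipy.spatial import cKDTree

def local_shares(lat, lon, group, n_neighbors=25):
    # Share of the n nearest addresses that belong to group 1, for every address
    xy = np.column_stack([lat, lon])
    tree = cKDTree(xy)
    _, idx = tree.query(xy, k=n_neighbors + 1)   # first neighbor is the point itself
    return np.asarray(group)[idx[:, 1:]].mean(axis=1)

def nearest_neighbor_differs(lat, lon, group):
    # Proportion of addresses whose single closest address is of another group
    xy = np.column_stack([lat, lon])
    tree = cKDTree(xy)
    _, idx = tree.query(xy, k=2)
    group = np.asarray(group)
    return np.mean(group[idx[:, 1]] != group)
The spatial variance of the quantity of interest is then just the variance of local_shares across addresses.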
As for baseline numbers (generally just a couple of numbers): they are there to help you interpret. They can be brought in later.
## The Innumerate American
19 Feb
In answering a question, scientists sometimes collect data that answers a different, sometimes yet more important question. And when that happens, scientists sometimes overlook the easter egg. This recently happened to me, or so I think.
Kabir and I recently investigated the extent to which estimates of motivated factual learning are biased (see here). As part of our investigation, we measured numeracy. We asked American adults to answer five very simple questions (the items were taken from Weller et al. 2002):
1. If we roll a fair, six-sided die 1,000 times, on average, how many times would the die come up as an even number? — 500
2. There is a 1% chance of winning a $10 prize in the Megabucks Lottery. On average, how many people would win the $10 prize if 1,000 people each bought a single ticket? — 10
3. If the chance of getting a disease is 20 out of 100, this would be the same as having a ____% chance of getting the disease. — 20
4. If there is a 10% chance of winning a concert ticket, how many people out of 1,000 would be expected to win the ticket? — 100
5. In the PCH Sweepstakes, the chances of winning a car are 1 in a 1,000. What percent of PCH Sweepstakes tickets win a car? — .1%
The average score was about 57%, and the standard deviation was about 30%. Nearly 80% (!) of the people couldn’t answer that 1 in a 1000 chance is .1% (see below). Nearly 38% couldn’t answer that a fair die would turn up, on average, an even number 500 times every 1000 rolls. 36% couldn’t calculate how many people out of a 1,000 would win if each had a 1% chance. And 34% couldn’t answer that 20 out of 100 means 20%.
If people have trouble answering these questions, it is likely that they struggle to grasp some of the numbers behind how the budget is allocated, or for that matter, how to craft their own family’s budget. The low scores also amply illustrate that the education system fails Americans.
Given the importance of numeracy in a wide variety of domains, it is vital that we pay greater attention to improving it. The problem is also tractable — with the advent of good self-learning tools, it is possible to intervene at scale. Solving it is also liable to be good business. Given that numeracy is liable to improve people’s capacity to count calories and make better financial decisions, among other things, health insurance companies could lower premiums in exchange for people becoming more numerate, and lending companies could lower interest rates in exchange for increases in numeracy.
## Town Level Data on Cable Operators and Cable Channels
12 Sep
I am pleased to announce the release of TV and Cable Factbook Data (1997–2002; 1998 coverage is modest). Use of the data is restricted to research purposes.
Background
In 2007, Stefano DellaVigna and Ethan Kaplan published a paper that used data from Warren’s Factbook to identify the effect of the introduction of Fox News Channel on Republican vote share (link to paper). Since then, a variety of papers exploiting the same data and identification scheme have been published (see, for instance, Hopkins and Ladd, Clinton and Enamorado, etc.)
In 2012, I embarked on a similar such project—trying to use the data to study the impact of the introduction of Fox News Channel on attitudes and behaviors related to climate change. However, I found the original data to be limited—DellaVigna and Kaplan had used a team of research assistants to manually code a small number of variables for a few years. So I worked on extending the data. I planned on extending the data in two ways: adding more years, and adding ‘all’ the data for each year. To that end, I developed custom software. The data collection and parsing of a few thousand densely packed, inconsistently formatted, pages (see below) to a usable CSV (see below) finished sometime early in 2014. (To make it easier to create a crosswalk with other geographical units, I merged the data with Town lat/long (centroid) and elevation data from http://www.fallingrain.com/world/US/.)
Sample Page
Snapshot of the Final CSV
Soon after I finished the data collection, however, I became aware of a paper by Martin and Yurukoglu. They found some inconsistencies between the Nielsen data and the Factbook data (see Appendix C1 of paper), tracing the inconsistencies to delays in updating the Factbook data—“Updating is especially poor around [DellaVigna and Kaplan] sample year. Between 1999 and 2000, only 22% of observations were updated. Between 1998 and 1999, only 37% of observations were updated.” Based on their paper, I abandoned the plan to use the data, though I still believe the data can be used for a variety of important research projects, including estimating the impact of the introduction of Fox News. Based on that belief, I am releasing the data.
## The Value of Money: Learning from Experiments Offering Money for Correct Answers
10 Jul
Papers at hand:
Two empirical points that we learn from the papers:
1. Partisan gaps are highly variable and the mean gap is reasonably small (without money, control condition). See also: Partisan Retrospection?
(The point is never explicitly commented on by either of the papers. The point has implications for proponents of partisan retrospection.)
2. When respondents are offered money for the correct answer, partisan gap reduces by about half on average.
Question in front of us: Interpretation of point 2.
Why are there partisan gaps on knowledge items?
1. Different Beliefs: People believe different things to be true: People learn different things. For instance, Republicans learn that Obama is a Muslim, and Democrats that he is an observant Christian. For a clear exposition on what I mean by belief, see Waters of Casablanca.
2. Systematic Lazy Guessing: The number one thing people lie about on knowledge items is whether they have the remotest clue about the question being asked. And the reluctance to acknowledge ‘Don’t Know’ is in itself a serious point worthy of investigation and careful interpretation.
When people guess on items with partisan implications, some try to infer the answer using the cues in the question stem. For instance, a Republican, when asked whether unemployment rate under Obama increased or decreased, may reason that Obama is a socialist and since socialism is bad policy, it must have increased the unemployment rate.
3. Cheerleading: Even when people know that things that reflect badly on their party happened, they lie. (I will be surprised if this is common.)
The Quantity of Interest: Different Beliefs.
We do not want: Different Beliefs + Systematic Lazy Guessing
Why would money reduce partisan gaps?
1. Reducing Systematic Lazy Guessing: Bullock et al. use pay for DK, offering people small incentive (much smaller than pay for correct) to confess to ignorance. The estimate should be closer to the quantity of interest: ‘Different Beliefs.’
2. Considered Guessing: On being offered money for the correct answer, respondents replace ‘lazy’ (for a bounded rational human —optimal) partisan heuristic described above with more effortful guessing. Replacing Systematic Lazy Guessing with Considered Guessing is good to the extent that Considered Guessing is less partisan. If it is so, the estimate will be closer to the quantity of interest: ‘Different Beliefs.’ (Think of it as a version of correlated measurement error. And we are now replacing systematic measurement error with error that is more evenly distributed, if not ‘randomly’ distributed.)
3. Looking up the Correct Answer: People look up answers to take the money on offer. Both papers go some ways to show that cheating isn’t behind the narrowing of the partisan gap. Bullock et al. use ‘placebo’ questions, and Prior et al. timing etc.
4. Reduces Cheerleading: Respondents for whom the utility from lying is less than the money on offer stop lying. The estimate will be closer to the quantity of interest: 'Different Beliefs.'
5. Demand Effects: Respondents take the offer of money as a cue that their instinctive response isn’t correct. The estimate may be further away from the quantity of interest: ‘Different Beliefs.’
27 May
I assessed PolitiFact on:
1. Imbalance in scrutiny: Do they vet statements by Democrats or Democratic-leaning organizations more than statements by Republicans or Republican-leaning organizations?
2. Batting average by party: Roughly n_correct/n_checked, but instantiated here as mean Politifact rating.
To answer the questions, I scraped the data from PolitiFact and independently coded and appended data on the party of the person or organization covered. (Feel free to download the script for scraping and analyzing the data, scraped data and data linking people and organizations to party from the GitHub Repository.)
To date, PolitiFact has checked the veracity of 3,859 statements by 703 politicians and organizations. Of these, I was able to establish the partisanship of 554 people and organizations. I restrict the analysis to 3,396 statements by organizations and people whose partisanship I could establish and who lean either towards the Republican or Democratic party. I code the PolitiFact 6-point True to Pants on Fire scale (true, mostly-true, half-true, barely-true, false, pants-fire) linearly so that it lies between 0 (pants-fire) and 1 (true).
Of the 3,396 statements, about 44% (n = 1506) were made by Democrats or Democratic-leaning organizations. The remaining 56% or so (n = 1890) were made by Republicans or Republican-leaning organizations. The average PolitiFact rating of statements by Democrats or Democratic-leaning organizations (the batting average) is .63; it is .49 for statements by Republicans or Republican-leaning organizations.
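A sketch of the recode and the by-party batting average described above (the statements data frame and its rating/party column names are hypothetical stand-ins for the scraped data):
import pandas as pd

scale = {'pants-fire': 0.0, 'false': 0.2, 'barely-true': 0.4,
         'half-true': 0.6, 'mostly-true': 0.8, 'true': 1.0}

def batting_averages(statements):
    # statements: one row per fact-check, with assumed 'rating' and 'party' columns
    statements = statements.assign(score=statements['rating'].map(scale))
    return statements.groupby('party')['score'].agg(['mean', 'size'])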
To check whether the results are driven by some people receiving a lot of scrutiny, I tallied the total number of statements investigated for each person. Unsurprisingly, there is a large skew, with a few prominent politicians receiving a bulk of the attention. For instance, PolitiFact investigated more than 500 claims by Barack Obama alone. The figure below plots the total number of statements investigated for thirty politicians receiving the most scrutiny.
If you take out Barack Obama, the percentage of Democrats receiving scrutiny reduces to 33.98%. More generally, limiting ourselves to the bottom 90% of the politicians in terms of scrutiny received, the share of Democrats is about 42.75%.
To analyze whether there is selection bias in covering politicians who say incorrect things more often, I estimated the correlation between the batting average and the total number of statements investigated. The correlation is very weak and does not appear to vary systematically by party. Accounting for the skew by taking the log of the total statements or by estimating a rank-ordered correlation has little effect. The figure below plots batting average as a function of total statements investigated.
|
# Computational Methods in Bayesian Analysis¶
The process of conducting Bayesian inference can be broken down into three general steps (Gelman et al. 2013):
### Step 1: Specify a probability model¶
As was noted above, Bayesian statistics involves using probability models to solve problems. So, the first task is to completely specify the model in terms of probability distributions. This includes everything: unknown parameters, data, covariates, missing data, predictions. All must be assigned some probability density.
This step involves making choices.
• what is the form of the sampling distribution of the data?
• what form best describes our uncertainty in the unknown parameters?
### Step 2: Calculate a posterior distribution¶
The mathematical form $p(\theta | y)$ that we associated with the Bayesian approach is referred to as a posterior distribution.
posterior /pos·ter·i·or/ (pos-tēr´e-er) later in time; subsequent.
Why posterior? Because it tells us what we know about the unknown $\theta$ after having observed $y$.
This posterior distribution is formulated as a function of the probability model that was specified in Step 1. Usually, we can write it down but we cannot calculate it analytically. In fact, the difficulty inherent in calculating the posterior distribution for most models of interest is perhaps the major contributing factor for the lack of widespread adoption of Bayesian methods for data analysis. Various strategies for doing so comprise this tutorial.
But, once the posterior distribution is calculated, you get a lot for free:
• point estimates
• credible intervals
• quantiles
• predictions
### Step 3: Check your model¶
Though frequently ignored in practice, it is critical that the model and its outputs be assessed before using the outputs for inference. Models are specified based on assumptions that are largely unverifiable, so the least we can do is examine the output in detail, relative to the specified model and the data that were used to fit the model.
• does the model fit data?
• are the conclusions reasonable?
• are the outputs sensitive to changes in model structure?
## Example: binomial calculation¶
The binomial model is suitable for data that are generated from a sequence of exchangeable Bernoulli trials. These data can be summarized by $y$, the number of times the event of interest occurs, and $n$, the total number of trials. The model parameter is the expected proportion of trials on which the event occurs.
$$p(Y|\theta) = \frac{n!}{y! (n-y)!} \theta^{y} (1-\theta)^{n-y}$$
where $y \in \{0, 1, \ldots, n\}$ and $\theta \in [0, 1]$.
To perform Bayesian inference, we require the specification of a prior distribution. A reasonable choice is a uniform prior on [0,1] which has two implications:
1. makes all probability values equally probable a priori
2. makes calculation of the posterior easy
The second task in performing Bayesian inference is, given a fully-specified model, to calculate a posterior distribution. As we have specified the model, we can calculate a posterior distribution up to a proportionality constant (that is, a probability distribution that is unnormalized):
$$P(\theta | n, y) \propto P(y | n, \theta) P(\theta) = \theta^y (1-\theta)^{n-y}$$
We can present different posterior distributions as a function of different realized data.
We can also calculate posterior estimates for $\theta$ by maximizing the unnormalized posterior using optimization.
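For instance, a minimal sketch of that optimization step (not one of the original cells), using the illustrative data $n=20$, $y=12$:
import numpy as np
from scipy.optimize import minimize_scalar

def neg_log_post(theta, n, y):
    # negative log of the unnormalized posterior theta^y (1 - theta)^(n - y)
    return -(y * np.log(theta) + (n - y) * np.log(1. - theta))

map_estimate = minimize_scalar(neg_log_post, bounds=(1e-6, 1 - 1e-6),
                               args=(20, 12), method='bounded').x
map_estimate   # close to y/n = 0.6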
### Exercise: posterior estimation¶
Write a function that returns posterior estimates of a binomial sampling model using a uniform prior on the unknown probability. Plot the posterior densities for each of the following datasets:
1. n=5, y=3
2. n=20, y=12
3. n=100, y=60
4. n=1000, y=600
What type of distribution do these plots look like?
In [1]:
# Write your answer here
## Informative Priors¶
Formally, we justify a non-informative prior by the Principle of Insufficient Reason, which states that uniform probability is justified when there is nothing known about the parameter in question. Frequently, it is inappropriate to employ an uninformative prior as we have done above. For some distributions there is no clear choice of such a prior, particularly when parameters are transformed. For example, a flat prior on the real line is not flat on the unit interval.
There are two alternative interpretations of the prior distribution.
1. Population prior: a distribution that represents a notional population of values for the parameter, from which those in the current experiment/study have been drawn.
2. Knowledge prior: a distribution that represents our uncertainty about the true value of the parameter.
In either case, a prior distribution should include in its support all parameter values that are plausible.
Choosing an informative prior presents an analytic challenge with respect to the functional form of the prior distribution. We would like a prior that results in a posterior distribution that is simple to work with. Taking our binomial likelihood again as an example:
$$P(\theta | n, y) \propto \theta^y (1-\theta)^{n-y}$$
we can see that it is of the general form $\theta^a (1-\theta)^b$. Thus, we are looking for a parametric distribution that describes the distribution of, or uncertainty in, $\theta$ and is of this general form. The beta distribution satisfies these criteria:
$$P(\theta | \alpha, \beta) \propto \theta^{\alpha-1} (1-\theta)^{\beta-1}$$
The parameters $\alpha, \beta$ are called hyperparameters, and here they suggest prior information corresponding to $\alpha-1$ "successes" and $\beta-1$ failures.
Let's go ahead and calculate the posterior distribution:
\begin{aligned} P(\theta | n, y) &\propto \theta^y (1-\theta)^{n-y} \theta^{\alpha-1} (1-\theta)^{\beta-1} \\ &= \theta^{y+\alpha-1} (1-\theta)^{n-y+\beta-1} \\ &= \text{Beta}(\alpha + y, \beta + n -y) \end{aligned}
So, in this instance, the posterior distribution follows the same functional form as the prior. This phenomenon is referred to as conjugacy, whereby the beta distribution is in the conjugate family for the binomial sampling distribution.
What is the posterior distribution when a Beta(1,1) prior is used?
Formally, we defined conjugacy by saying that a class $\mathcal{P}$ is a conjugate prior for the class $\mathcal{F}$ of likelihoods if:
$$P(\theta | y) \propto f(y|\theta) p(\theta) \in \mathcal{P} \text{ for all } f \in \mathcal{F} \text{ and } p \in \mathcal{P}$$
This definition is quite vague for practical application, so we are more interested in natural conjugates, whereby the conjugacy is specific to a particular distribution, and not just a class of distributions.
In the case of the binomial model with a beta prior, we can now analytically calculate the posterior mean and variance for the model:
$$E[\theta|n,y] = \frac{\alpha + y}{\alpha + \beta + n}$$\begin{aligned} \text{Var}[\theta|n,y] &= \frac{(\alpha + y)(\beta + n - y)}{(\alpha + \beta + n)^2(\alpha + \beta + n +1)} \\ &= \frac{E[\theta|n,y] (1-E[\theta|n,y])}{\alpha + \beta + n +1} \end{aligned}
Notice that the posterior expectation will always fall between the sample and prior means.
Notice also what happens when $y$ and $n-y$ get large.
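For example (a small illustration with made-up numbers, not from the original notebook): with a Beta(2, 2) prior and data $n=20$, $y=12$, the posterior is Beta(14, 10).
from scipy.stats import beta

a_prior, b_prior = 2., 2.
n, y = 20, 12
posterior = beta(a_prior + y, b_prior + n - y)   # Beta(14, 10)
posterior.mean(), posterior.interval(0.95)       # mean ~0.583, central 95% interval
Its mean, $14/24 \approx 0.58$, falls between the prior mean 0.5 and the sample mean 0.6, and it concentrates around the sample mean as $y$ and $n-y$ grow.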
## Exercise: probability of female birth given placenta previa¶
Placenta previa is an unusual condition of pregnancy in which the placenta is implanted low in the uterus, complicating a normal delivery. A German study of the sex of placenta previa births found that of 980 births, 437 were female.
How much evidence does this provide for the claim that the proportion of female births in the population of placenta previa births $\theta$ is less than 0.485 (this is the proportion of female births in the general population)?
1. Calculate the posterior distribution for $\theta$ using a uniform prior, and plot the prior, likelihood and posterior on the same axes.
2. Find a prior distribution that has a mean of 0.485 and prior "sample size" of 100. Calculate the posterior distribution and plot the prior, likelihood and posterior on the same axes.
In [2]:
# Write your answer here
## Approximate Computation¶
Most interesting Bayesian models cannot be computed analytically in closed form, or simulated from directly using random number generators for standard distributions.
Bayesian analysis often requires integration over multiple dimensions that is intractable both via analytic methods or standard methods of numerical integration. However, it is often possible to compute these integrals by simulating (drawing samples) from posterior distributions. For example, consider the expected value of a random variable $\mathbf{x}$:
$$E[\mathbf{x}] = \int \mathbf{x} f(\mathbf{x}) d\mathbf{x}, \qquad\mathbf{x} = x_1, \ldots ,x_k$$
where $k$ (the dimension of vector $x$) is perhaps very large. If we can produce a reasonable number of random vectors $\{{\bf x_i}\}$, we can use these values to approximate the unknown integral. This process is known as Monte Carlo integration. In general, MC integration allows integrals against probability density functions:
$$I = \int h(\mathbf{x}) f(\mathbf{x}) \mathbf{dx}$$
to be estimated by finite sums:
$$\hat{I} = \frac{1}{n}\sum_{i=1}^n h(\mathbf{x}_i),$$
where $\mathbf{x}_i$ is a sample from $f$. This estimate is valid and useful because:
• By the strong law of large numbers:
$$\hat{I} \rightarrow I \text{ with probability 1}$$
• Simulation error can be measured and controlled:
$$Var(\hat{I}) = \frac{1}{n(n-1)}\sum_{i=1}^n (h(\mathbf{x}_i)-\hat{I})^2$$
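A small sketch of these two formulas (not one of the notebook's cells): estimating $E[x^2]$ for $x \sim N(0,1)$, whose true value is 1.
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(100_000)
h = x**2
I_hat = h.mean()
se = np.sqrt(((h - I_hat)**2).sum() / (len(h) * (len(h) - 1)))
I_hat, se   # close to 1, with a small simulation standard error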
### How is this relevant to Bayesian analysis?¶
When we observe data $y$ that we hypothesize as being obtained from a sampling model $f(y|\theta)$, where $\theta$ is a vector of (unknown) model parameters, a Bayesian places a prior distribution $p(\theta)$ on the parameters to describe the uncertainty in the true values of the parameters. Bayesian inference, then, is obtained by calculating the posterior distribution, which is proportional to the product of these quantities:
$$p(\theta | y) \propto f(y|\theta) p(\theta)$$
Unfortunately, for most problems of interest, the normalizing constant cannot be calculated because it involves multi-dimensional integration over $\theta$.
Returning to our integral for MC sampling, if we replace $f(\mathbf{x})$ with a posterior, $p(\theta|y)$ and make $h(\theta)$ an interesting function of the unknown parameter, the resulting expectation is that of the posterior of $h(\theta)$:
$$E[h(\theta)|y] = \int h(\theta) p(\theta|y) d\theta \approx \frac{1}{n}\sum_{i=1}^n h(\theta^{(i)})$$ where the $\theta^{(i)}$ are samples drawn from $p(\theta|y)$.
We also require integrals to obtain marginal estimates from a joint model. If $\theta$ is of length $K$, then inference about any particular parameter is obtained by:
$$p(\theta_i|y) \propto \int p(\theta|y) d\theta_{-i}$$
where the -i subscript indicates all elements except the $i^{th}$.
## Example: Overdispersion Model¶
Tsutakawa et al. (1985) provides mortality data for stomach cancer among men aged 45-64 in several cities in Missouri. The file cancer.csv contains deaths $y_i$ and subjects at risk $n_i$ for 20 cities from this dataset.
In [3]:
import pandas as pd
cancer = pd.read_csv('cancer.csv')   # deaths y and subjects at risk n for the 20 cities (path assumed)
cancer
Out[3]:
y n
0 0 1083
1 0 855
2 2 3461
3 0 657
4 1 1208
5 1 1025
6 0 527
7 2 1668
8 1 583
9 3 582
10 0 917
11 1 857
12 1 680
13 1 917
14 54 53637
15 0 874
16 0 395
17 1 581
18 3 588
19 0 383
If we use a simple binomial model, which assumes independent samples from a binomial distribution with probability of mortality $p$, we can use MLE to obtain an estimate of this probability.
In [4]:
ytotal, ntotal = cancer.sum().astype(float)
p_hat = ytotal/ntotal
p_hat
Out[4]:
0.0009933126276616582
However, if we compare the variation of $y$ under this model, it is too small relative to the observed variation:
In [5]:
p_hat*(1.-p_hat)*ntotal
Out[5]:
70.92947480343604
In [6]:
cancer.y.var()
Out[6]:
141.94473684210527
Hence, the data are strongly overdispersed relative to what is predicted under a model with a fixed probability of death. A more realistic model would allow for these probabilities to vary among the cities. One way of representing this is to mix the binomial distribution with another distribution that describes the variation in the binomial probability. A sensible choice for this is the beta distribution:
$$f(p \mid \alpha, \beta) = \frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha) \Gamma(\beta)} p^{\alpha - 1} (1 - p)^{\beta - 1}$$
Mixing this with the binomial distribution (integrating over $p$), and reparameterizing such that $\alpha = K\eta$ and $\beta = K(1-\eta)$ for $K > 0$ and $\eta \in (0,1)$, results in the beta-binomial distribution:
$$f(y \mid K, \eta) = \frac{n!}{y!(n-y)!} \frac{B(K\eta+y, K(1-\eta) + n - y)}{B(K\eta, K(1-\eta))}$$
where $B$ is the beta function.
What remains is to place priors over the parameters $K$ and $\eta$. Common choices for diffuse (i.e. vague or uninformative) priors are:
\begin{aligned} p(K) &\propto \frac{1}{(1+K)^2} \cr p(\eta) &\propto \frac{1}{\eta(1-\eta)} \end{aligned}
These are not normalized, but our posterior will not be normalized anyhow, so this is not an issue.
In [7]:
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
fig, axes = plt.subplots(1, 2, figsize=(10,4))
K_x = np.linspace(0, 10)
K_prior = lambda K: 1./(1. + K)**2
axes[0].plot(K_x, K_prior(K_x))
axes[0].set_xlabel('K')
axes[0].set_ylabel('p(K)')
eta_x = np.linspace(0, 1)
eta_prior = lambda eta: 1./(eta*(1.-eta))
axes[1].plot(eta_x, eta_prior(eta_x))
axes[1].set_xlabel(r'$\eta$')
axes[1].set_ylabel(r'p($\eta$)')
/home/fonnesbeck/anaconda3/envs/dev/lib/python3.6/site-packages/ipykernel_launcher.py:13: RuntimeWarning: divide by zero encountered in true_divide
del sys.path[0]
Out[7]:
Text(0, 0.5, 'p($\\eta$)')
Now, by multiplying these quantities together, we can obtain a non-normalized posterior.
$$p(K, \eta | \mathbf{y}) \propto \frac{1}{(1+K)^2} \frac{1}{\eta(1-\eta)} \prod_i \frac{B(K\eta+y_i, K(1-\eta) + n_i - y_i)}{B(K\eta, K(1-\eta))}$$
This can be calculated in Python as follows (log-transformed):
In [8]:
from scipy.special import betaln
def betabin_post(params, n, y):
    K, eta = params
    post = betaln(K*eta + y, K*(1.-eta) + n - y).sum()
    post -= len(y)*betaln(K*eta, K*(1.-eta))
    post -= np.log(eta*(1.-eta))
    post -= 2.*np.log(1.+K)
    return post
betabin_post((15000, 0.003), cancer.n, cancer.y)
Out[8]:
-605.0664560772116
This posterior can be evaluated on a grid to give us an idea about its general shape. We can see that it is skewed in both dimensions:
In [9]:
# Create grid
K_x = np.linspace(1, 20000)
eta_x = np.linspace(0.0001, 0.003)
# Calculate posterior on grid
z = np.array([[betabin_post((K, eta), cancer.n, cancer.y)
for eta in eta_x] for K in K_x])
# Plot posterior
x, y = np.meshgrid(eta_x, K_x)
cplot = plt.contour(x, y, z-z.max(), [-4, -3, -2, -1, -0.5], cmap=plt.cm.RdBu)
plt.ylabel('K')
plt.xlabel('$\eta$');
To deal with the extreme skewness in the precision parameter $K$ and to facilitate modeling, we can transform the beta-binomial parameters to the real line via:
\begin{aligned} \theta_1 &= \log(K) \cr \theta_2 &= \log\left(\frac{\eta}{1-\eta}\right) \end{aligned}
which we can easily implement by modifying betabin_post:
In [10]:
def betabin_trans(theta, n, y):
    K = np.exp(theta[0])
    eta = 1./(1. + np.exp(-theta[1]))
    post = (betaln(K*eta + y, K*(1.-eta) + n - y) - betaln(K*eta, K*(1.-eta))).sum()
    post += theta[0]
    post -= 2.*np.log(1.+np.exp(theta[0]))
    return post
betabin_trans((10, -7.5), cancer.n, cancer.y)
Out[10]:
-576.7966861078922
In [11]:
# Create grid
log_K_x = np.linspace(0, 20)
logit_eta_x = np.linspace(-8, -5)
# Calculate posterior on grid
z = np.array([[betabin_trans((t1, t2), cancer.n, cancer.y)
for t2 in logit_eta_x] for t1 in log_K_x])
# Plot posterior
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), levels=[-8, -4, -2, -1, -0.5], cmap=plt.cm.RdBu)
plt.clabel(cplot, inline=1, fontsize=10, fmt='%1.1f')
plt.ylabel('log(K)')
plt.xlabel('logit($\eta$)');
## Approximation Methods¶
An alternative approach to summarizing a $p$-dimensional posterior distribution involves estimating the mode of the posterior, and approximating the density as multivariate normal. If we consider the logarithm of the unnormalized joint posterior:
$$h(\theta | y) = \log[f(y|\theta) p(\theta)]$$
one way to approximate this function is to use a second-order Taylor series expansion around the mode $\hat{\theta}$:
$$h(\theta | y) \approx h(\hat{\theta} | y) + \frac{1}{2}(\theta-\hat{\theta})' h''(\hat{\theta} | y) (\theta-\hat{\theta})$$
This form is simply the multivariate normal distribution with $\hat{\theta}$ as the mean and the inverse negative Hessian as the covariance matrix:
$$\Sigma = -h''(\hat{\theta} | y)^{-1}$$
We can apply one of several numerical methods for multivariate optimization to numerically estimate the mode of the posterior. Here, we will use the L-BFGS-B variant of the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm that is provided by SciPy. In addition to returning an estimate of the mode, it returns the estimated variance-covariance matrix, which we will need to parameterize the multivariate normal distribution.
Applying this to the beta-binomial posterior estimation problem, we simply provide an initial guess for the mode:
In [12]:
from scipy.optimize import minimize
betabin_trans_min = lambda *args: -betabin_trans(*args)
init_value = (10, -7.5)
opt = minimize(betabin_trans_min, init_value, method='L-BFGS-B',
args=(cancer.n.values, cancer.y.values))
mode = opt.x
var = opt.hess_inv.todense()
mode, var
Out[12]:
(array([ 7.57514505, -6.81827853]), array([[ 1.27060592, -0.14177248],
[-0.14177248, 0.0791443 ]]))
Thus, our approximated mode is $\log(K)=7.6$, $\text{logit}(\eta)=-6.8$. We can plug this value, along with the variance-covariance matrix, into a function that returns the kernel of a multivariate normal distribution, and use this to plot the approximate posterior:
In [13]:
det = np.linalg.det
inv = np.linalg.inv
def lmvn(value, mu, Sigma):
    # Log kernel of multivariate normal
    delta = np.array(value) - mu
    return -0.5 * (np.log(det(Sigma)) + np.dot(delta, np.dot(inv(Sigma), delta)))
In [14]:
z = np.array([[lmvn((t1, t2), mode, var)
for t2 in logit_eta_x] for t1 in log_K_x])
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), levels=[-8, -4, -2, -1, -0.5], cmap=plt.cm.RdBu)
plt.ylabel('log(K)')
plt.xlabel('logit($\eta$)');
Along with this, we can estimate a 95% probability interval for the estimated mode:
In [15]:
from scipy.stats.distributions import norm
se = np.sqrt(np.diag(var))
mode[0] + norm.ppf(0.025)*se[0], mode[0] + norm.ppf(0.975)*se[0]
Out[15]:
(5.365850974813904, 9.784439125790044)
In [16]:
mode[1] + norm.ppf(0.025)*se[1], mode[1] + norm.ppf(0.975)*se[1]
Out[16]:
(-7.369667277820049, -6.266889775277178)
Of course, this approximation is only reasonable for posteriors that are not strongly skewed, bimodal, or leptokurtic (heavy-tailed).
## Rejection Sampling¶
Though Monte Carlo integration allows us to estimate integrals that are unassailable by analysis and standard numerical methods, it relies on the ability to draw samples from the posterior distribution. For known parametric forms, this is not a problem; probability integral transforms or bivariate techniques (e.g. the Box–Muller method) may be used to obtain samples from uniform pseudo-random variates generated by a computer. Often, however, we cannot readily generate random values from non-standard posteriors. In such instances, we can use rejection sampling to generate samples.
Posit a function, $f(x)$ which can be evaluated for any value on the support of $x:S_x = [A,B]$, but may not be integrable or easily sampled from. If we can calculate the maximum value of $f(x)$, we can then define a rectangle that is guaranteed to contain all possible values $(x,f(x))$. It is then trivial to generate points over the box and enumerate the values that fall under the curve.
$$\lim_{n \to \infty} \left( \frac{\mbox{points under curve}}{\mbox{points generated}} \times \mbox{box area} \right) = \int_A^B f(x)\, dx$$
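A tiny sketch of this recipe (not one of the notebook's cells): estimate $\int_0^1 x^2 dx = 1/3$ by throwing points uniformly into the bounding box $[0,1] \times [0,1]$.
import numpy as np

rng = np.random.default_rng(1)
A, B, f_max, n = 0.0, 1.0, 1.0, 200_000
x = rng.uniform(A, B, n)
u = rng.uniform(0., f_max, n)
box_area = (B - A) * f_max
(u < x**2).mean() * box_area   # close to 1/3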
### Example: triangular distribution¶
In [17]:
def rtriangle(low, high, mode):
    alpha = -1
    while np.random.random() > alpha:
        u = np.random.uniform(low, high)
        if u < mode:
            alpha = (u - low) / (mode - low)
        else:
            alpha = (high - u) / (high - mode)
    return u
In [18]:
_ = plt.hist([rtriangle(0, 7, 2) for t in range(10000)], bins=100)
This approach is useful, for example, in estimating the normalizing constant for posterior distributions.
If $f(x)$ has unbounded support (i.e. infinite tails), such as a Gaussian distribution, a bounding box is no longer appropriate. We must specify a majorizing (or, enveloping) function, $g(x)$, which implies:
$$c\,g(x) \ge f(x) \qquad \forall x \in (-\infty,\infty)$$
Having done this, we can now sample ${x_i}$ from $g(x)$ and accept or reject each of these values based upon $f(x_i)$. Specifically, for each draw $x_i$, we also draw a uniform random variate $u_i$ and accept $x_i$ if $u_i < f(x_i)/cg(x_i)$, where $c$ is a constant. This procedure is repeated until a sufficient number of samples is obtained. This approach is made more efficient by choosing an enveloping distribution that is “close” to the target distribution, thus maximizing the number of accepted points.
To apply rejection sampling to the beta-binomial example, we first need to find a majorizing function $g(x)$ from which we can easily draw samples. We have seen in the previous section that the multivariate normal might serve as a suitable candidate, if multiplied by an appropriately large value of $c$. However, the thinness of the normal tails makes it difficult to use as a majorizing function. Instead, a multivariate Student's T distribution offers heavier tails for a suitably-small value for the degrees of freedom $\nu$:
$$f(\mathbf{x}| \nu,\mu,\Sigma) = \frac{\Gamma\left[(\nu+p)/2\right]}{\Gamma(\nu/2)\nu^{p/2}\pi^{p/2}\left|{\Sigma}\right|^{1/2}\left[1+\frac{1}{\nu}({\mathbf x}-{\mu})^T{\Sigma}^{-1}({\mathbf x}-{\mu})\right]^{(\nu+p)/2}}$$
We can draw samples from a multivariate-T density by combining multivariate normal and $\chi^2$ random variates:
### Generating multivariate-T samples¶
If $X$ is distributed multivariate normal $\text{MVN}(\mathbf{0},\Sigma)$ and $S$ is a $\chi^2$ random variable with $\nu$ degrees of freedom, then a multivariate Student's-T random variable $T = T_1,\ldots,T_p$ can be generated by $T_i = \frac{\sqrt{\nu}X_i}{\sqrt{S}} + \mu_i$, where $\mu = \mu_1,\ldots,\mu_p$ is a mean vector.
This is implemented in Python by:
In [19]:
chi2 = np.random.chisquare
mvn = np.random.multivariate_normal
# Scale the normal draws by sqrt(nu/S), where S is a chi-squared draw, then shift by mu
rmvt = lambda nu, S, mu=0, size=1: (np.sqrt(nu) * (mvn(np.zeros(len(S)), S, size).T
                                    / np.sqrt(chi2(nu, size)))).T + mu
Finally, we need an implementation of the multivariate T probability distribution function, which is as follows:
In [20]:
from scipy.special import gammaln
def mvt(x, nu, S, mu=0):
    # Multivariate Student's-T density, evaluated row-wise so it also works for an array of points
    d = len(S)
    X = np.atleast_2d(x) - mu
    Q = (X.dot(np.linalg.inv(S)) * X).sum(axis=-1)
    log_det = np.log(np.linalg.det(S))
    log_pdf = gammaln((nu + d)/2.) - 0.5 * (d*np.log(np.pi*nu) + log_det) - gammaln(nu/2.)
    log_pdf -= 0.5*(nu + d)*np.log(1 + Q/nu)
    return np.squeeze(np.exp(log_pdf))
The next step is to find the constant $c$ that ensures:
$$cg(\theta) \ge f(\theta|y) \qquad\forall \theta \in (-\infty,\infty)$$
Alternatively, we want to ensure:
$$\log[f(\theta|y)] - \log[g(\theta)] \le c'$$
In [21]:
def calc_diff(theta, n, y, nu, S, mu):
    return betabin_trans(theta, n, y) - np.log(mvt(theta, nu, S, mu))

calc_diff_min = lambda *args: -calc_diff(*args)
We can calculate an appropriate value of $c'$ by simply using the approximation method described above on calc_diff (tweaked to produce a negative value for minimization):
In [22]:
opt = minimize(calc_diff_min,
(12, -7),
args=(cancer.n, cancer.y, 4, 2*var, mode),
method='bfgs')
In [23]:
opt
Out[23]:
fun: 569.217410047273
hess_inv: array([[0.99948696, 0.03077307],
[0.03077307, 0.01976415]])
jac: array([ 0.14112854, -0.52979279])
message: 'Desired error not necessarily achieved due to precision loss.'
nfev: 300
nit: 1
njev: 72
status: 2
success: False
x: array([11.99467583, -6.89933824])
In [24]:
c = opt.fun
Now we can execute a rejection sampling algorithm:
In [25]:
def reject(post, nu, S, mu, n, data, c):
    # Draw samples from g(theta)
    theta = rmvt(nu, S, mu, size=n)
    # Calculate probability under g(theta)
    gvals = np.array([np.log(mvt(t, nu, S, mu)) for t in theta])
    # Calculate probability under f(theta)
    fvals = np.array([post(t, data.n, data.y) for t in theta])
    # Calculate acceptance probability
    p = np.exp(fvals - gvals + c)
    return theta[np.random.random(n) < p]
In [26]:
nsamples = 1000
sample = reject(betabin_trans, 4, var, mode, nsamples, cancer, c)
In [27]:
z = np.array([[betabin_trans((t1, t2), cancer.n, cancer.y)
for t2 in logit_eta_x] for t1 in log_K_x])
x, y = np.meshgrid(logit_eta_x, log_K_x)
cplot = plt.contour(x, y, z - z.max(), levels=[-8, -4, -2, -1, -0.5], cmap=plt.cm.RdBu)
plt.clabel(cplot, inline=1, fontsize=10, fmt='%1.1f')
plt.ylabel('log(K)');plt.xlabel('logit($\eta$)')
plt.scatter(*sample.T[[1,0]])
Out[27]:
<matplotlib.collections.PathCollection at 0x7b237c68ee10>
Notice that the efficiency of rejection sampling is not very high for this problem.
In [28]:
float(sample.size)/nsamples
Out[28]:
0.41
Rejection sampling is usually subject to declining performance as the dimension of the parameter space increases. Further improvement is gained by using optimized algorithms such as importance sampling which, as the name implies, samples more frequently from important areas of the distribution.
## Importance Sampling¶
As we have seen, the primary difficulty in Bayesian inference is calculating the posterior density for models of moderate-to-high dimension. For example, calculating the posterior mean of some function $h$ requires two difficult integration steps:
$$E[h(\theta) | y] = \frac{\int h(\theta)f(y|\theta) p(\theta) d\theta}{\int f(y|\theta) p(\theta) d\theta} = \frac{\int h(\theta)p(\theta | y) d\theta}{\int p(\theta|y) d\theta}$$
If the posterior $p(\theta|y)$ is a density from which it is easy to sample, we could approximate these integrals using Monte Carlo simulation, but too often it is not.
Instead, assume that we can draw from a probability density $q(\theta)$ that is some approximation of $p$. We could then write:
$$E[h(\theta) | y] = \frac{\int h(\theta) \frac{p(\theta|y)}{q(\theta)} q(\theta) d\theta}{\int \frac{p(\theta|y)}{q(\theta)} q(\theta) d\theta}$$
Expressed this way, $w(\theta) = p(\theta|y) / q(\theta)$ can be regarded as weights for the $M$ values of $\theta$ sampled from $q$ that we can use to correct the sample so that it approximates $h(\theta)$. Specifically, the importance sampling estimate of $E[h(\theta) | y]$ is:
$$\hat{h}_{is} = \frac{\sum_{i=1}^{M} h(\theta^{(i)})w(\theta^{(i)})}{\sum_{i=1}^{M} w(\theta^{(i)})}$$
where $\theta^{(i)}$ is the $i^{th}$ sample simulated from $q(\theta)$. The standard error for the importance sampling estimate is:
$$\text{SE}_{is} = \frac{\sqrt{\sum_{i=1}^{M} [(h(\theta^{(i)}) - \hat{h}_{is}) w(\theta^{(i)})]^2}}{\sum_{i=1}^{M} w(\theta^{(i)})}$$
The efficiency of importance sampling is related to the selection of the importance sampling distribution $q$.
### Example: Beta-binomial parameter¶
As a simple illustration of importance sampling, let's consider again the problem of estimating the parameters of the beta-binomial example. Here, we will use a multivariate T density as the simulation distribution $q$.
Here are 1000 sampled values to use for approximating the posterior:
In [29]:
theta = rmvt(4, var, mode, size=1000)
We can obtain the probability of these values under the posterior density:
In [30]:
f_theta = np.array([betabin_trans(t, cancer.n, cancer.y) for t in theta])
and under the T distribution:
In [31]:
q_theta = np.log(mvt(theta, 4, var, mode))
This allows us to calculate the importance weights:
In [32]:
w = np.exp(f_theta - q_theta - max(f_theta - q_theta))
Notice that we have subtracted the maximum of the differences before exponentiating; this keeps the weights numerically stable, and it does not affect the estimates because the weights are normalized by their sum below.
Now, we can obtain estimates of the parameters:
In [33]:
theta_si = [(w*t).sum()/w.sum() for t in theta.T]
theta_si
Out[33]:
[7.604848460645832, -6.819479139288868]
Finally, the standard error of the estimates:
In [34]:
se = [np.sqrt((((theta.T[i] - theta_si[i])* w)**2).sum()/w.sum()) for i in (0,1)]
se
Out[34]:
[0.40436182647582897, 0.10284517526118811]
## Sampling Importance Resampling
The importance sampling method can be modified to incorporate weighted bootstrapping, in a procedure called sampling importance resampling (SIR). As before, we obtain a sample of size $M$ from an importance sampling distribution $q$ and calculate the corresponding weights $w(\theta_i) = p(\theta_i|y) / q(\theta_i)$.
Rather than directly re-weighting the samples from $q$, SIR transforms the weights into probabilities via:
$$p_i = \frac{w(\theta_i)}{\sum_{i=1}^M w(\theta_i)}$$
These probabilities are then used to re-sample their respective $\theta_i$ values, with replacement. This implies that the resulting resamples $\theta_i^{\prime}$ will be distributed approximately as the posterior $p(\theta|y)$.
Using again the beta-binomial example, we can take the weights calculated above, and convert them to probabilities:
In [35]:
p_sir = w/w.sum()
The choice function in numpy.random can be used to generate a random sample from an arbitrary 1-D array.
In [36]:
theta_sir = theta[np.random.choice(range(len(theta)), size=10000, p=p_sir)]
In [37]:
fig, axes = plt.subplots(2)
_ = axes[0].hist(theta_sir.T[0], bins=30)
_ = axes[1].hist(theta_sir.T[1], bins=30)
One advantage of this approach is that a posterior probability interval for each parameter can be obtained simply by taking quantiles of the resampled values.
In [38]:
logK_sample = theta_sir[:,0]
logK_sample.sort()
logK_sample[[250, 9750]]
Out[38]:
array([6.59311196, 8.76918839])
## Exercise: Sensitivity analysis
Perform a Bayesian sensitivity analysis by performing SIR on the stomach cancer dataset $N$ times, with one observation (a city) removed from the dataset each time. Calculate and plot posterior medians and 95% posterior intervals for each $f(\theta|y_{(-i)})$ to visually analyze the influence of each observation.
In [39]:
# Write your answer here
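One possible outline is sketched below (not a complete answer): it reuses the rmvt, mvt, and betabin_trans helpers, the var and mode values, and the cancer data frame from earlier in the notebook, wraps the SIR steps above in a function, and repeats them with each city left out in turn. The function name sir_sample and the plotting details are illustrative; for simplicity the proposal is still centered at the full-data mode, which remains an adequate importance distribution for each reduced dataset.
def sir_sample(n, y, M=5000, size=10000):
    # SIR draws from the posterior of (log K, logit eta) given data (n, y)
    theta = rmvt(4, var, mode, size=M)
    log_p = np.array([betabin_trans(t, n, y) for t in theta])
    log_q = np.log(mvt(theta, 4, var, mode))
    w = np.exp(log_p - log_q - (log_p - log_q).max())
    resample_idx = np.random.choice(range(M), size=size, p=w/w.sum())
    return theta[resample_idx]

medians, lower, upper = [], [], []
for i in range(len(cancer)):
    subset = cancer.drop(cancer.index[i])          # leave one city out
    logK = sir_sample(subset.n, subset.y)[:, 0]    # first column is log(K)
    medians.append(np.median(logK))
    lower.append(np.percentile(logK, 2.5))
    upper.append(np.percentile(logK, 97.5))

plt.errorbar(range(len(cancer)), medians,
             yerr=[np.array(medians) - lower, np.array(upper) - medians], fmt='o')
plt.xlabel('omitted observation'); plt.ylabel('posterior of log(K)');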
## References
Albert, J. (2009). Bayesian Computation with R. Springer. (Chapter 5)
# pFind Studio: a computational solution for mass spectrometry-based proteomics
## Applications (User Publications)
### 2021
###### ABSTRACT: RT-PCR is the primary method to diagnose COVID-19 and is also used to monitor the disease course. This approach, however, suffers from false negatives due to RNA instability and poses a high risk to medical practitioners. Here, we investigated the potential of using serum proteomics to predict viral nucleic acid positivity during COVID19. We analyzed the proteome of 275 inactivated serum samples from 54 out of 144 COVID-19 patients and shortlisted 42 regulated proteins in the severe group and 12 in the non-severe group. Using these regulated proteins and several key clinical indexes, including days after symptoms onset, platelet counts, and magnesium, we developed two machine learning models to predict nucleic acid positivity, with an AUC of 0.94 in severe cases and 0.89 in non-severe cases, respectively. Our data suggest the potential of using a serum protein-based machine learning model to monitor COVID-19 progression, thus complementing swab RT-PCR tests. More efforts are required to promote this approach into clinical practice since mass spectrometry-based protein measurement is not currently widely accessible in clinic. [more...]
Use: pFind; pDeep
###### ABSTRACT: N-linked glycosylation plays important roles in multiple physiological and pathological processes, while the analysis coverage is still limited due to the insufficient digestion of glycoproteins, as well as incomplete ion fragments for intact glycopeptide determination. Herein, a mirror-cutting-based digestion strategy was proposed by combining two orthogonal proteases of LysargiNase and trypsin to characterize the macro- and micro-heterogeneity of protein glycosylation. Using the above two proteases, the b- or y-ion series of peptide sequences were, respectively, enhanced in MS/MS, generating the complementary spectra for peptide sequence identification. More than 27% (489/1778) of the site-specific glycoforms identified by LysargiNase digestion were not covered by trypsin digestion, suggesting the elevated coverage of protein sequences and site-specific glycoforms by the mirror-cutting method. Totally, 10,935 site-specific glycoforms were identified from mouse brain tissues in the 18 h MS analysis, which significantly enhanced the coverage of protein glycosylation. Intriguingly, 27 mannose-6-phosphate (M6P) glycoforms were determined with core fucosylation, and 23 of them were found with the "Y-HexNAc-Fuc" ions after manual checking. This is hitherto the first report of M6P and fucosylation co-modifications of glycopeptides, in which the mechanism and function still needs further exploration. The mirror-cutting digestion strategy also has great application potential in the exploration of missing glycoproteins from other complex samples to provide rich resources for glycobiology research. [more...]
Use: pFind; pGlyco
###### ABSTRACT: The RNA binding protein TDP-43 forms intranuclear or cytoplasmic aggregates in age-related neurodegenerative diseases. In this study, we found that RNA binding-deficient TDP-43 (produced by neurodegeneration-causing mutations or posttranslational acetylation in its RNA recognition motifs) drove TDP-43 demixing into intranuclear liquid spherical shells with liquid cores. These droplets, which we named "anisosomes", have shells that exhibit birefringence, thus indicating liquid crystal formation. Guided by mathematical modeling, we identified the primary components of the liquid core to be HSP70 family chaperones, whose adenosine triphosphate (ATP)-dependent activity maintained the liquidity of shells and cores. In vivo proteasome inhibition within neurons, to mimic aging-related reduction of proteasome activity, induced TDP-43-containing anisosomes. These structures converted to aggregates when ATP levels were reduced. Thus, acetylation, HSP70, and proteasome activities regulate TDP-43 phase separation and conversion into a gel or solid phase. [more...]
Use: pFind; pParse; pQuant
###### ABSTRACT: Human plasma fibronectin is an adhesive protein that plays a crucial role in wound healing. Many studies had indicated that glycans might mediate the expression and functions of fibronectin, yet a comprehensive understanding of its glycosylation is still missing. Here, we performed a comprehensive N- and O-glycosylation mapping of human plasma fibronectin and quantified the occurrence of each glycoform in a site-specific manner. Intact N-glycopeptides were enriched by zwitterionic hydrophilic interaction chromatography, and N-glycosite sites were localized by the O-18-labeling method. O-glycopeptide enrichment and O-glycosite identification were achieved by an enzyme-assisted site-specific extraction method. An RP-LC-MS/MS system functionalized with collision-induced dissociation and stepped normalized collision energy (sNCE)-HCD tandem mass was applied to analyze the glycoforms of fibronectin. A total of 6 N-glycosites and 53 O-glycosites were identified, which were occupied by 38 N-glycoforms and 16 O-glycoforms, respectively. Furthermore, 77.31% of N-glycans were sialylated, and O-glycosylation was dominated by the sialyl-T antigen. These site-specific glycosylation patterns on human fibronectin can facilitate functional analyses of fibronectin and therapeutics development. [more...]
Use: pFind; pGlyco
###### ABSTRACT: The characterization of therapeutic glycoproteins is challenging due to the structural heterogeneity of the therapeutic protein glycosylation. This study presents an in-depth analytical strategy for glycosylation of first-generation erythropoietin (epoetin beta), including a developed mass spectrometric workflow for N-glycan analysis, bottom-up mass spectrometric methods for site-specific N-glycosylation, and a LC-MS approach for O-glycan identification. Permethylated N-glycans, peptides, and enriched glycopeptides of erythropoietin were analyzed by nanoLC-MS/MS, and de-N-glycosylated erythropoietin was measured by LC-MS, enabling the qualitative and quantitative analysis of glycosylation and different glycan modifications (e.g., phosphorylation and O-acetylation). The newly developed Python scripts enabled the identification of 140 N-glycan compositions (237 N-glycan structures) from erythropoietin, especially including 8 phosphorylated N-glycan species. The site-specificity of N-glycans was revealed at the glycopeptide level by pGlyco software using different proteases. In total, 114 N-glycan compositions were identified from glycopeptide analysis. Moreover, LC-MS analysis of deN-glycosylated erythropoietin species identified two O-glycan compositions based on the mass shifts between non-O-glycosylated and O-glycosylated species. Finally, this integrated strategy was proved to realize the in-depth glycosylation analysis of a therapeutic glycoprotein to understand its pharmacological properties and improving the manufacturing processes. [more...]
Use: pFind; pGlyco
###### ABSTRACT: Post-translational changes in the redox state of cysteine residues can rapidly and reversibly alter protein functions, thereby modulating biological processes. The nematode C. elegans is an ideal model organism for studying cysteine-mediated redox signaling at a network level. Here we present a comprehensive, quantitative, and site-specific profile of the intrinsic reactivity of the cysteinome in wild-type C. elegans. We also describe a global characterization of the C. elegans redoxome in which we measured changes in three major cysteine redox forms after H2O2 treatment. Our data revealed redox-sensitive events in translation, growth signaling, and stress response pathways, and identified redox-regulated cysteines that are important for signaling through the p38 MAP kinase (MAPK) pathway. Our in-depth proteomic dataset provides a molecular basis for understanding redox signaling in vivo, and will serve as a valuable and rich resource for the field of redox biology. Reversible cysteine oxidative modifications have emerged as important mechanisms that alter protein function. Here the authors globally assess the cysteine reactivity and an array of cysteine oxidative modifications in C. elegans, providing insights into redox signaling at the organismal level. [more...]
Use: pFind; pQuant
###### ABSTRACT: Haptoglobin (Hp) is one of the acute-phase response proteins secreted by the liver, and its aberrant N-glycosylation was previously reported in hepatocellular carcinoma (HCC). Limited studies on Hp O-glycosylation have been previously reported. In this study, we aimed to discover and confirm its O-glycosylation in HCC based on lectin binding and mass spectrometry (MS) detection. First, serum Hp was purified from patients with liver cirrhosis (LC) and HCC, respectively. Then, five lectins with Gal or GalNAc monosaccharide specificity were chosen to perform lectin blot, and the results showed that Hp in HCC bound to these lectins in a much stronger manner than that in LC. Furthermore, label-free quantification based on MS was performed. A total of 26 intact O-glycopeptides were identified on Hp, and most of them were elevated in HCC as compared to LC. Among them, the intensity of HYEGS(316)TVPEK (H1N1S1) on Hp was the highest in HCC patients. Increased HYEGS(316)TVPEK (H1N1S1) in HCC was quantified and confirmed using the MS method based on O-18/O-16 C-terminal labeling and multiple reaction monitoring. This study provided a comprehensive understanding of the glycosylation of Hp in liver diseases. [more...]
Use: pGlyco; pQuant
###### ABSTRACT: The heterogeneity and complexity of glycosylation hinder the depth of site-specific glycoproteomics analysis. High-field asymmetric-waveform ion-mobility spectrometry (FAIMS) has been shown to improve the scope of bottom-up proteomics. The benefits of FAIMS for quantitative N-glycoproteomics have not been investigated yet. In this work, we optimized FAIMS settings for N-glycopeptide identification, with or without the tandem mass tag (TMT) label. The optimized FAIMS approach significantly increased the identification of site-specific N-glycopeptides derived from the purified immunoglobulin M (IgM) protein or human lymphoma cells. We explored in detail the changes in FAIMS mobility caused by N-glycopeptides with different characteristics, including TMT labeling, charge state, glycan type, peptide sequence, glycan size, and precursor m/z. Importantly, FAIMS also improved multiplexed N-glycopeptide quantification, both with the standard MS2 acquisition method and with our recently developed Glyco-SPS-MS3 method. The combination of FAIMS and Glyco-SPS-MS3 methods provided the highest quantitative accuracy and precision. Our results demonstrate the advantages of FAIMS for improved mass spectrometry-based qualitative and quantitative N-glycoproteomics. [more...]
Use: pGlyco; pQuant
###### ABSTRACT: Protein N-glycosylation in human milk whey plays a substantial role in infant health during postnatal development. Changes in site-specific glycans in milk whey reflect the needs of infants under different circumstances. However, the conventional glycoproteomics analysis of milk whey cannot reveal the changes in site-specific glycans because the attached glycans are typically enzymatically removed from the glycoproteins prior to analysis. In this study, N-glycoproteomics analysis of milk whey was performed without removing the attached glycans, and 330 and 327 intact glycopeptides were identified in colostrum and mature milk whey, respectively. Label-free quantification of site-specific glycans was achieved by analyzing the identified intact glycopeptides, which revealed 9 significantly upregulated site-specific glycans on 6 glycosites and 11 significantly downregulated sitespecific glycans on 8 glycosites. Some interesting change trends in N-glycans attached to specific glycosites in human milk whey were observed. Bisecting GlcNAc was found attached to 11 glycosites on 8 glycoproteins in colostrum and mature milk. The dynamic changes in site-specific glycans revealed in this study provide insights into the role of protein N-glycosylation during infant development. [more...]
Use: pQuant; pGlyco
###### ABSTRACT: The diagnosis of AFP (alpha-fetoprotein)-negative HCC (hepatocellular carcinoma) mostly relies on imaging and pathological examinations, and it lacks valuable and practical markers. Protein N-glycosylation is a crucial post-translation modifying process related to many biological functions in an organism. Alteration of N-glycosylation correlates with inflammatory diseases and infectious diseases including hepatocellular carcinoma. Here, serum N-linked intact glycopeptides with molecular weight (MW) of 40-55 kDa were analyzed in a discovery set (n = 40) including AFP-negative HCC and liver cirrhosis (LC) patients using label-free quantification methodology. Quantitative lens culinaris agglutin (LCA) ELISA was further used to confirm the difference of glycosylation on serum PON1 in liver diseases (n = 56). Then, the alteration of site-specific intact N-glycopeptides of PON1 was comprehensively assessed by using Immunoprecipitation (IP) and mass spectrometry based O-16/O-18 C-terminal labeling quantification method to distinguish AFP-negative HCC from LC patients in a validation set (n = 64). Totally 195 glycopeptides were identified using a dedicated search engine pGlyco. Among them, glycopeptides from APOH, HPT/HPTR, and PON1 were significantly changed in AFP-negative HCC as compared to LC. In addition, the reactivity of PON1 with LCA in HCC patients with negative AFP was significantly elevated than that in cirrhosis patients. The two glycopeptides HAN(253)WTLTPLK (H5N4S2) and (H5N4S1) corresponding to PON1 were significantly increased in AFP-negative HCC patients, as compared with LC patients. Variations in PON1 glycosylation may be associated with AFP-negative HCC and might be helpful to serve as potential glycomic-based biomarkers to distinguish AFP-negative HCC from cirrhosis. [more...]
Use: pGlyco; pQuant
### 2020
###### ABSTRACT: Native peptides from sea bass muscle were analyzed by two different approaches: medium-sized peptides by peptidomics analysis, whereas short peptides by suspect screening analysis employing an inclusion list of exact m/z values of all possible amino acid combinations (from 2 up to 4). The method was also extended to common post-translational modifications potentially interesting in food analysis, as well as non-proteolytic aminoacyl derivatives, which are well-known taste-active building blocks in pseudo-peptides. The medium-sized peptides were identified by de novo and combination of de novo and spectra matching to a protein sequence database, with up to 4077 peptides (2725 modified) identified by database search and 2665 peptides (223 modified) identified by de novo only; 102 short peptide sequences were identified (with 12 modified ones), and most of them had multiple reported bioactivities. The method can be extended to any peptide mixture, either endogenous or by protein hydrolysis, from other food matrices. [more...]
Use: pNovo; pFind
###### ABSTRACT: Protein sequence database search is one of the most commonly used methods for protein identification in shotgun proteomics. In tradition, searching a protein sequence database is usually required to construct the theoretical spectrum for each peptide at first, which only considers the information of mass-to-charge ratio at present. However, the information related to isotope peak intensity is neglected. Thanks to the rapid development of artificial intelligence technique in recent years, deep learning-based MS/MS spectrum prediction tools have showed a high accuracy and great potentials to improve the sensitivity and accuracy of protein sequence database searching. In this study, we used a deep learning model (pDeep2) to predict the theoretical mass spectrum of all peptides and applied it to a database searching tool (DeepNovo), thus improving the sensitivity and accuracy of peptide identification. [more...]
Use: pFind; pDeep
###### ABSTRACT: Steady improvement in Orbitrap-based mass spectrometry (MS) technologies has greatly advanced the peptide sequencing speed and depth. In-depth analysis of the performance of state-of-the-art MS and optimization of key parameters can improve sequencing efficiency. In this study, we first systematically compared the performance of two popular data-dependent acquisition approaches, with Orbitrap as the first-stage (MS1) mass analyzer and the same Orbitrap (high-high approach) or ion trap (high-low approach) as the second-stage (MS2) mass analyzer, on the Orbitrap Fusion mass spectrometer. High-high approach outperformed high-low approach in terms of better saturation of the scan cycle and higher MS2 identification rate. However, regardless of the acquisition method, there are still more than 60% of peptide features untargeted for MS2 scan. We then systematically optimized the MS parameters using the high-high approach. Increasing the isolation window in the high-high approach could facilitate faster scan speed, but decreased MS2 identification rate. On the contrary, increasing the injection time of MS2 scan could increase identification rate but decrease scan speed and the number of identified MS2 spectra. Dynamic exclusion time should be set properly according to the chromatography peak width. Furthermore, we found that the Orbitrap analyzer, rather than the analytical column, was easily saturated with higher loading amount, thus limited the dynamic range of MS1-based quantification. By using optimized parameters, 10 000 proteins and 110 000 unique peptides were identified by using 20 h of effective liquid chromatography (LC) gradient time. The study therefore illustrated the importance of synchronizing LC-MS precursor ion targeting, fragment ion detection, and chromatographic separation for high efficient data-dependent proteomics. [more...]
Use: pFind; pParse
###### ABSTRACT: Alk-Ph is a clickable APEX2 substrate developed for spatially restricted protein/RNA labeling in intact yeast cells. Alk-Ph is more water soluble and cell wall permeable than biotin-phenol substrate, allowing more efficient profiling of the subcellular proteome in microorganisms. We describe the protocol for Alk-Ph probe synthesis, APEX2 expression, and protein/RNA labeling in yeast and the workflow for quantitative proteomic experiments and data analysis. Using the yeast mitochondria as an example, we provide guidelines to achieve high-resolution mapping of subcellular yeast proteome and transcriptome. For complete details on the use and execution of this protocol, please refer to Li etal. (2020). © 2020 The Author(s). [more...]
Use: pQuant; pFind
###### ABSTRACT: The glycocalyx comprises glycosylated proteins and lipids and fcorms the outermost layer of cells. It is involved in fundamental inter- and intracellular processes, including non-self-cell and self-cell recognition, cell signaling, cellular structure maintenance, and immune protection. Characterization of the glycocalyx is thus essential to understanding cell physiology and elucidating its role in promoting health and disease. This protocol describes how to comprehensively characterize the glycocalyx N-glycans and O-glycans of glycoproteins, as well as intact glycolipids in parallel, using the same enriched membrane fraction. Profiling of the glycans and the glycolipids is performed using nanoflow liquid chromatography-mass spectrometry (nanoLC-MS). Sample preparation, quantitative LC-tandem MS (LC-MS/MS) analysis, and data processing methods are provided. In addition, we discuss glycoproteomic analysis that yields the site-specific glycosylation of membrane proteins. To reduce the amount of sample needed, N-glycan, O-glycan, and glycolipid analyses are performed on the same enriched fraction, whereas glycoproteomic analysis is performed on a separate enriched fraction. The sample preparation process takes 2-3 d, whereas the time spent on instrumental and data analyses could vary from 1 to 5 d for different sample sizes. This workflow is applicable to both cell and tissue samples. Systematic changes in the glycocalyx associated with specific glycoforms and glycoconjugates can be monitored with quantitation using this protocol. The ability to quantitate individual glycoforms and glycoconjugates will find utility in a broad range of fundamental and applied clinical studies, including glycan-based biomarker discovery and therapeutics. This protocol describes nanoflow liquid chromatography-mass spectrometry (nanoLC-MS) analysis of the N-glycans and O-glycans of glycoproteins and glycolipids, as well as site-specific glycosylation of membrane proteins. [more...]
Use: pFind; pGlyco
###### ABSTRACT: Cysteine is unique among all protein-coding amino acids, owing to its intrinsically high nucleophilicity. The cysteinyl thiol group can be covalently modified by a broad range of redox mechanisms or by various electrophiles derived from exogenous or endogenous sources. Measuring the response of protein cysteines to redox perturbation or electrophiles is critical for understanding the underlying mechanisms involved. Activity-based protein profiling based on thiol-reactive probes has been the method of choice for such analyses. We therefore adapted this approach and developed a new chemoproteomic platform, termed 'QTRP' (quantitative thiol reactivity profiling), that relies on the ability of a commercially available thiol-reactive probe IPM (2-iodo-N-(prop-2-yn-1-yl)acetamide) to covalently label, enrich and quantify the reactive cysteinome in cells and tissues. Here, we provide a detailed and updated workflow of QTRP that includes procedures for (i) labeling of the reactive cysteinome from cell or tissue samples (e.g., control versus treatment) with IPM, (ii) processing the protein samples into tryptic peptides and tagging the probe-modified peptides with isotopically labeled azido-biotin reagents containing a photo-cleavable linker via click chemistry reaction, (iii) capturing biotin-conjugated peptides with streptavidin beads, (iv) identifying and quantifying the photo-released peptides by mass spectrometry (MS)-based shotgun proteomics and (v) interpreting MS data by a streamlined informatic pipeline using a proteomics software, pFind 3, and an automatic post-processing algorithm. We also exemplified here how to use QTRP for mining H2O2-sensitive cysteines and for determining the intrinsic reactivity of cysteines in a complex proteome. We anticipate that this protocol should find broad applications in redox biology, chemical biology and the pharmaceutical industry. The protocol for sample preparation takes 3 d, whereas MS measurements and data analyses require 75 min and <30 min, respectively, per sample. Proteomic cysteines can undergo redox reactions and electrophile-derived modifications. In QTRP, a thiol-reactive probe is used to covalently label, enrich and quantify the reactive cysteinome in cultured cells and tissue samples. [more...]
Use: pFind; pQuant
###### ABSTRACT: Identification of post-translationally or chemically modified peptides in mass spectrometry-based proteomics experiments is a crucial yet challenging task. We have recently introduced a fragment ion indexing method and the MSFragger search engine to empower an open search strategy for comprehensive analysis of modified peptides. However, this strategy does not consider fragment ions shifted by unknown modifications, preventing modification localization and limiting the sensitivity of the search. Here we present a localization-aware open search method, in which both modification-containing (shifted) and regular fragment ions are indexed and used in scoring. We also implement a fast mass calibration and optimization method, allowing optimization of the mass tolerances and other key search parameters. We demonstrate that MSFragger with mass calibration and localization-aware open search identifies modified peptides with significantly higher sensitivity and accuracy. Comparing MSFragger to other modification-focused tools (pFind3, MetaMorpheus, and TagGraph) shows that MSFragger remains an excellent option for fast, comprehensive, and sensitive searches for modified peptides in shotgun proteomics data. Mass spectrometry-based proteomics is the method of choice for the global mapping of post-translational modifications, but matching and scoring peaks with unknown masses remains challenging. Here, the authors present a refined open search strategy to score all peaks with higher sensitivity and accuracy. [more...]
Use: pFind; pParse
###### ABSTRACT: Proteins carry out the vast majority of functions in all biological domains, but for technological reasons their large-scale investigation has lagged behind the study of genomes. Since the first essentially complete eukaryotic proteome was reported(1), advances in mass-spectrometry-based proteomics(2)have enabled increasingly comprehensive identification and quantification of the human proteome(3-6). However, there have been few comparisons across species(7,8), in stark contrast with genomics initiatives(9). Here we use an advanced proteomics workflow-in which the peptide separation step is performed by a microstructured and extremely reproducible chromatographic system-for the in-depth study of 100 taxonomically diverse organisms. With two million peptide and 340,000 stringent protein identifications obtained in a standardized manner, we double the number of proteins with solid experimental evidence known to the scientific community. The data also provide a large-scale case study for sequence-based machine learning, as we demonstrate by experimentally confirming the predicted properties of peptides fromBacteroides uniformis. Our results offer a comparative view of the functional organization of organisms across the entire evolutionary range. A remarkably high fraction of the total proteome mass in all kingdoms is dedicated to protein homeostasis and folding, highlighting the biological challenge of maintaining protein structure in all branches of life. Likewise, a universally high fraction is involved in supplying energy resources, although these pathways range from photosynthesis through iron sulfur metabolism to carbohydrate metabolism. Generally, however, proteins and proteomes are remarkably diverse between organisms, and they can readily be explored and functionally compared at www.proteomesoflife.org. [more...]
Use: pFind; pDeep
###### ABSTRACT: Spectrum prediction using machine learning or deep learning models is an emerging method in computational proteomics. Several deep learning-based MS/MS spectrum prediction tools have been developed and showed their potentials not only for increasing the sensitivity and accuracy of data-dependent acquisition search engines, but also for building spectral libraries for data-independent acquisition analysis. Different tools with their unique algorithms and implementations may result in different performances. Hence, it is necessary to systematically evaluate these tools to find out their preferences and intrinsic differences. In this study, multiple datasets with different collision energies, enzymes, instruments, and species, are used to evaluate the performances of the deep learning-based MS/MS spectrum prediction tools, as well as, the machine learning-based tool MS2PIP. The evaluations may provide helpful insights and guidelines of spectrum prediction tools for the corresponding researchers. [more...]
Use: pFind; pDeep
###### ABSTRACT: The engineered ascorbate peroxidase (APEX) is a powerful tool for the proximity-dependent labeling of proteins and RNAs in live cells. Although widely use in mammalian cells, APEX applications in microorganisms have been hampered by the poor labeling efficiency of its biotin-phenol (BP) substrate. In this study, we sought to address this challenge by designing and screening a panel of alkyne-functionalized substrates. Our best probe, Alk-Ph, substantially improves APEX-labeling efficiency in intact yeast cells, as it is more cell wall-permeant than BP. Through a combination of protein-centric and peptide-centric chemoproteomic experiments, we have identified 165 proteins with a specificity of 94% in the yeast mitochondrial matrix. In addition, we have demonstrated that Alk-Ph is useful for proximity-dependent RNA labeling in yeast, thus expanding the scope of APEX-seq. We envision that this improved APEX-labeling strategy would set the stage for the large-scale mapping of spatial proteome and transcriptome in yeast. [more...]
Use: pFind; pQuant
###### ABSTRACT: Plants deploy a variety of secondary metabolites to fend off pathogen attack. Although defense compounds are generally considered toxic to microbes, the exact mechanisms are often unknown. Here, we show that the Arabidopsis defense compound sulforaphane (SFN) functions primarily by inhibiting Pseudomonas syringae type III secretion system (TTSS) genes, which are essential for pathogenesis. Plants lacking the aliphatic glucosinolate pathway, which do not accumulate SFN, were unable to attenuate TTSS gene expression and exhibited increased susceptibility to P. syringae strains that cannot detoxify SFN. Chemoproteomics analyses showed that SFN covalently modified the cysteine at position 209 of HrpS, a key transcription factor controlling TTSS gene expression. Site-directed mutagenesis and functional analyses further confirmed that Cys209 was responsible for bacterial sensitivity to SFN in vitro and sensitivity to plant defenses conferred by the aliphatic glucosinolate pathway. Collectively, these results illustrate a previously unknown mechanism by which plants disarm a pathogenic bacterium. [more...]
Use: pFind; pQuant
###### ABSTRACT: Liquid chromatography tandem mass spectrometry (LCMS/MS) has been the most widely used technology for phosphoproteomics studies. As an alternative to database searching and probability-based phosphorylation site localization approaches, spectral library searching has been proved to be effective in the identification of phosphopeptides. However, incompletion of experimental spectral libraries limits the identification capability. Herein, we utilize MS/MS spectrum prediction coupled with spectral matching for site localization of phosphopeptides. In silico MS/MS spectra are generated from peptide sequences by deep learning/machine learning models trained with nonphosphopeptides. Then, mass shift according to phosphorylation sites, phosphoric acid neutral loss, and a "budding" strategy are adopted to adjust the in silico mass spectra. In silico MS/MS spectra can also be generated in one step for phosphopeptides using models trained with phosphopeptides. The method is benchmarked on data sets of synthetic phosphopeptides and is used to process real biological samples. It is demonstrated to be a method requiring only computational resources that supplements the probability-based approaches for phosphorylation site localization of singly and multiply phosphorylated peptides. [more...]
Use: pDeep; pNovo
###### ABSTRACT: Precise assignment of sialylation linkages at the glycopeptide level is of importance in bottom-up glycoproteomics and an indispensable step to understand the function of glycoproteins in pathogen-host interactions and cancer progression. Even though some efforts have been dedicated to the discrimination of alpha 2,3/alpha 2,6-sialylated isomers, unambiguous identification of sialoglycopeptide isomers is still needed. Herein, we developed an innovative glycosyltransferase labeling assisted mass spectrometry (GLAMS) strategy. After specific enzymatic labeling, oxonium ions from higher-energy C-trap dissociation (HCD) fragmentation of alpha 2,3-sialoglycopeptides then generate unique reporters to distinctly differentiate those of alpha 2,6-sialoglycopeptide isomers. With this strategy, a total of 1236 linkage-specific sialoglycopeptides were successfully identified from 161 glycoproteins in human serum. [more...]
Use: pParse; pGlyco
###### ABSTRACT: Regulation of protein N-glycosylation is essential in human cells. However, large-scale, accurate, and site-specific quantification of glycosylation is still technically challenging. We here introduce SugarQuant, an integrated mass spectrometry-based pipeline comprising protein aggregation capture (PAC)-based sample preparation, multi-notch MS3 acquisition (Glyco-SPS-MS3) and a data-processing tool (GlycoBinder) that enables confident identification and quantification of intact glycopeptides in complex biological samples. PAC significantly reduces sample-handling time without compromising sensitivity. Glyco-SPS-MS3 combines high-resolution MS2 and MS3 scans, resulting in enhanced reporter signals of isobaric mass tags, improved detection of N-glycopeptide fragments, and lowered interference in multiplexed quantification. GlycoBinder enables streamlined processing of Glyco-SPS-MS3 data, followed by a two-step database search, which increases the identification rates of glycopeptides by 22% compared with conventional strategies. We apply SugarQuant to identify and quantify more than 5,000 unique glycoforms in Burkitt's lymphoma cells, and determine site-specific glycosylation changes that occurred upon inhibition of fucosylation at high confidence. Comprehensive quantitative profiling of intact glycopeptides remains technically challenging. To address this, the authors here develop an integrated quantitative glycoproteomic workflow, including optimized sample preparation, multiplexed quantification and a dedicated data processing tool. [more...]
Use: pGlyco; pParse
###### ABSTRACT: Peptide spectrum match scoring algorithm plays a key role in the peptide sequence identification,and the traditional scoring algorithm cannot effectively make full use of the peptide fragmentation pattern to perform scoring. In order to solve the problem,a multi-classification probability sum scoring algorithm combined with the peptide sequence information representation called deepscore- alpha was proposed. In this algorithm,the second scoring was not performed with the consideration of global information,and there was no limitation on the similarity calculation method of theoretical mass spectrum and experimental mass spectrum. In the algorithm,a one-dimensional residual network was used to extract the underlying information of the sequence,and then the effects of different peptide bonds on the current peptide bond fracture were integrated through the multi-attention mechanism to generate the final fragmention relative intensity distribution probability matrix,after that,the final peptide spectrum match score was calculated by combining the actual relative intensity of the peptide sequence fragmention. This algorithm was compared with Comet and MSGF+,two common open source identification tools. The results show that when False Discovery Rate(FDR)was 0.01 on humanbody proteome dataset,the number of peptide sequences retained by deepScore-alpha is increased by about 14%,and the Top1 hit ratio(the proportion of the correct peptide sequences in the spectrum with the highest score)of this algorithm is increased by about 5 percentage points. The generalization performance test of the model trained by human ProteomeTools2 dataset show that the number of sequences peptide retained by deepScore- alpha at FDR of 0.01 is improved by about 7%,the Top1 hit ratio of this algorithm is increased by about 5 percentage points,and the identification results from Decoy library in the Top1 is decreased by about 60%. Experimental results prove that,the algorithm can retain more peptide sequences at lower FDR value, improve the Top1 hit ratio,and has good generalization performance. [more...]
Use: pParse; pDeep
###### Molecular & cellular proteomics : MCP. 2020. Shu, Qingbo et al. Laboratory of Protein and Peptide Pharmaceuticals & Proteomics Laboratory, Institute of Biophysics, Chinese Academy of Sciences
Use: pParse; pGlyco
### 2019
###### ABSTRACT: In this study, we faced the challenge of deciphering a protein that has been designed and expressed by E. coli in such a way that the amino acid sequence encodes two concatenated English sentences. The letters 'O' and 'U' in the sentence are both replaced by 'K' in the protein. The sequence cannot be found online and carried to-be-discovered modifications. With limited information in hand, to solve the challenge, we developed a workflow consisting of bottom-up proteomics, de novo sequencing and a bioinformatics pipeline for data processing and searching for frequently appearing words. We assembled a complete first question: "Have you ever wondered what the most fundamental limitations in life are?" and validated the result by sequence database search against a customized FASTA file. We also searched the spectra against an E. coli proteome database and found close to 600 endogenous, co-purified E. coli proteins and contaminants introduced during sample handling, which made the inference of the sentence very challenging. We conclude that E. coli can express English sentences, and that de novo sequencing combined with clever sequence database search strategies is a promising tool for the identification of uncharacterized proteins. © 2019 Published by Elsevier B.V. on behalf of European Proteomics Association (EuPA). [more...]
Use: pNovo; pFind
###### ABSTRACT: In recent years, high-throughput technologies have contributed to the development of a more precise picture of the human proteome. However, 2129 proteins remain listed as missing proteins (MPs) in the newest neXtProt release (2019-02). The main reasons for MPs are a low abundance, a low molecular weight, unexpected modifications, membrane characteristics, and so on. Moreover, >50% of the MS/MS data have not been successfully identified in shotgun proteomics. Open-pFind, an efficient open search engine, recently released by the pFind group in China, might provide an opportunity to identify these buried MPs in complex samples. In this study, proteins and potential MPs were identified using Open-pFind and three other search engines to compare their performance and efficiency with three large-scale data sets digested by three enzymes (Glu-C, Lys-C, and trypsin) with specificity on different amino acid (AA) residues. Our results demonstrated that Open-pFind identified 44.7-93.1% more peptide-spectrum matches and 21.3-61.6% more peptide sequences than the second-best search engine. As a result, Open-pFind detected 53.1% more MP candidates than MaxQuant and 8.8% more candidate MPs than Proteome Discoverer. In total, 5 (PE2) of the 124 MP candidates identified by Open-pFind were verified with 2 or 3 unique peptides containing more than 9 AAs by using a spectrum theoretical prediction with pDeep and synthesized peptide matching with pBuild after spectrum quality analysis, isobaric post-translational modification, and single amino acid variant filtering. These five verified MPs can be saved as PEI proteins. In addition, three other MP candidates were verified with two unique peptides (one peptide containing more than 9 AAs and the other containing only 8 AAs), which was slightly lower than the criteria listed by C-HPP and required additional verification information. More importantly, unexpected modifications were detected in these MPs. All MS data sets have been deposited into ProteomeXchange with the identifier PXDO15759. [more...]
Use: pFind; pDeep
###### ABSTRACT: The application of database search algorithms with very wide precursor mass tolerances for the "Open Search" paradigm has brought new efforts at post-translational modification discovery in shotgun proteomes. This approach has motivated the acceleration of database search tools by incorporating fragment indexing features. In this report, we compare open searches and sequence tag searches of high-resolution tandem mass spectra to seek a common "palette" of modifications when analyzing multiple formalin-fixed, paraffin-embedded (FFPE) tissues from Thermo Q-Exactive and SCIEX TripleTOF instruments. While open search in MSFragger produced some gains in identified spectra, careful FDR control limited the best result to 24% more spectra than narrow search (worst result: a loss of 9%). Open pFind produced high apparent sensitivity for PSMs, but entrapment sequences hinted that the actual error rate may be higher than reported by the software. Combining sequence tagging, open search, and chemical knowledge, we converged on this set of PTMs for our four FFPE sets: mono- and di-methylation (nTerm and Lys), single and double oxidation (Met and Pro), and variable carbamidomethylation (nTerm and Cys). (C) 2019 Elsevier B.V. All rights reserved. [more...]
Use: pBuild; pFind; pParse
###### ABSTRACT: Aims: Cysteine persulfidation (also called sulfhydration or sulfuration) has emerged as a potential redox mechanism to regulate protein functions and diverse biological processes in hydrogen sulfide (H2S) signaling. Due to its intrinsically unstable nature, working with this modification has proven to be challenging. Although methodological progress has expanded the inventory of persulfidated proteins, there is a continued need to develop methods that can directly and unequivocally identify persulfidated cysteine residues in complex proteomes. Results: A quantitative chemoproteomic method termed as low-pH quantitative thiol reactivity profiling (QTRP) was developed to enable direct site-specific mapping and reactivity profiling of proteomic persulfides and thiols in parallel. The method was first applied to cell lysates treated with NaHS, resulting in the identification of overall 1547 persulfidated sites on 994 proteins. Structural analysis uncovered unique consensus motifs that might define this distinct type of modification. Moreover, the method was extended to profile endogenous protein persulfides in cells expressing H2S-generating enzyme, mouse tissues, and human serum, which led to additional insights into mechanistic, structural, and functional features of persulfidation events, particularly on human serum albumin. Innovation and Conclusion: Low-pH QTRP represents the first method that enables direct and unbiased proteomic mapping of cysteine persulfidation. Our method allows to generate the most comprehensive inventory of persulfidated targets of NaHS so far and to perform the first analysis of in vivo persulfidation events, providing a valuable tool to dissect the biological functions of this important modification. Antioxid. Redox Signal. 00, 000-000. [more...]
Use: pFind; pQuant
###### ABSTRACT: Rheumatoid arthritis (RA) is an autoimmune disease in which certain immune cells are dysfunctional and attack their own healthy tissues. There has been great difficulty in finding an accurate and efficient method for the diagnosis of early-stage RA. The present shortage of diagnostic methods leads to the rough treatments of the patients in the late stages, such as joint removing. Nowadays, there is an increasing focus on glyco-biomarkers discovery for malicious disease via MS-based strategy. In this study, we present an integrated proteomics and glycoproteomics approach to uncover the pathological changes of some RA-related glyco-biomarkers and glyco-checkpoints involved in the RA onset. Among 39 distinctly expressive N-glycoproteins, 27 N-glycoproteins were discovered with over twofold expression significances. On the other hand, 13 proteins have been distinguished with significant differences in 53 distinctly expressed proteins identified in this study. Such an integrated approach will provide a comprehensive strategy for new potential glyco-biomarkers and checkpoints discovery in rheumatoid arthritis. [more...]
Use: pBuild; pFind
###### ABSTRACT: De novo peptide sequencing for large-scale proteomics remains challenging because of the lack of full coverage of ion series in tandem mass spectra. We developed a mirror protease of trypsin, acetylated LysargiNase (Ac-LysargiNase), with superior activity and stability. The mirror spectrum pairs derived from the Ac-LysargiNase and trypsin treated samples can generate full b and y ion series, which provide mutual complementarity of each other, and allow us to develop a novel algorithm, pNovoM, for de novo sequencing. Using pNovoM to sequence peptides of purified proteins, the accuracy of the sequence was close to 100%. More importantly, from a large-scale yeast proteome sample digested with trypsin and Ac-LysargiNase individually, 48% of all tandem mass spectra formed mirror spectrum pairs, 97% of which contained full coverage of ion series, resulting in precision de novo sequencing of full-length peptides by pNovoM. This enabled pNovoM to successfully sequence 21,249 peptides from 3,753 proteins and interpreted 44-152% more spectra than pNovo+ and PEAKS at a 5% FDR at the spectrum level. Moreover, the mirror protease strategy had an obvious advantage in sequencing long peptides. We believe that the combination of mirror protease strategy and pNovoM will be an effective approach for precision de novo sequencing on both single proteins and proteome samples. [more...]
Use: pFind; pNovo
###### ABSTRACT: The peptide components of defatted walnut (Juglans regia L.) meal hydrolysate (DWMH) remain unclear, hindering the investigation of biological mechanisms and exploitation of bioactive peptides. The present study aims to identify the peptide composition of DWMH, followed by to evaluate in vitro antioxidant effects of selected peptides and investigate mechanisms of antioxidative effect. First, more than 1 000 peptides were identified by de novo sequencing in DWMH. Subsequently, a scoring method was established to select promising bioactive peptides by structure based screening. Eight brand new peptides were selected due to their highest scores in two different batches of DWMH. All of them showed potent in vitro antioxidant effects on H2O2-injured nerve cells. Four of them even possessed significantly stronger effects than DWMH, making the selected bioactive peptides useful for further research as new bioactive entities. Two mechanisms of hydroxyl radical scavenging and ROS reduction were involved in their antioxidative effects at different degrees. The results showed peptides possessing similar capacity of hydroxyl radical scavenging or ROS reduction may have significantly different in vitro antioxidative effects. Therefore, comprehensive consideration of different antioxidative mechanisms were suggested in selecting antioxidative peptides from DWMH. [more...]
Use: pNovo; pXtract; pBuild
###### ABSTRACT: Glycosylation, as a biologically important protein post-translational modification, often alters on both glycosites and glycans, simultaneously. However, most of current approaches focused on biased profiling of either glycosites or glycans, and limited by time-consuming process and milligrams of starting protein material. We describe here a simple and integrated spintip-based glycoproteomics technology (termed Glyco-SISPROT) for achieving a comprehensive view of glycoproteome with shorter sample processing time and low microgram starting material. By carefully integrating and optimizing SCX, C18 and Concanavalin A (Con A) packing material and their combination in spintip format, both predigested peptides and protein lysates could be processed by Glyco-SISPROT with high efficiency. More importantly, deglycopeptide, intact glycopeptide and glycans released by multiple glycosidases could be readily collected from the same Glyco-SISPROT workflow for LC-MS analysis. In total, above 1850 glycosites in (1) over tilde 770 unique deglycopeptides were characterized from mouse liver by using either 100 mu g of predigested peptides or directly using 100 mu g of protein lysates, in which about 30% of glycosites were released by both PNGase F and Endos. To the best of our knowledge, this approach should be one of the most comprehensive glycoproteomic approaches by using limited protein starting material. One significant benefit of Glyco-SISPROT is that whole processing time is dramatically reduced from a few days to less than 6 h with good reproducibility when protein lysates were directly processed by Glyco-SISPROT. We expect that this method will be suitable for multi-level glycoproteome analysis of rare biological samples with high sensitivity. (C) 2019 Elsevier B.V. All rights reserved. [more...]
Use: pParse; pGlyco
###### ABSTRACT: Aberrant sialylation of glycoproteins is closely related to many malignant diseases, and analysis of sialylation has great potential to reveal the status of these diseases. However, in-depth analysis of sialylation is still challenging because of the high microheterogeneity of protein glycosylation, as well as the low abundance of sialylated glycopeptides (SGPs). Herein, an integrated strategy was fabricated for the detailed characterization of glycoprotein sialylation on the levels of glycosites and site-specific glycoforms by employing the SGP enrichment method. This strategy enabled the identification of up to 380 glycosites, as well as 414 intact glycopeptides corresponding to 383 site-specific glycoforms from only initial 6 mu L serum samples, indicating the high sensitivity of the method for the detailed analysis of glycoprotein sialylation. This strategy was further employed to the differential analysis of glycoprotein sialylation between hepatocellular carcinoma patients and control samples, leading to the quantification of 344 glycosites and 405 site-specific glycoforms, simultaneously. Among these, 43 glycosites and 55 site-specific glycoforms were found to have significant change on the glycosite and site-specific glycoform levels, respectively. Interestingly, several glycoforms attached onto the same glycosite were found with different change tendencies. This strategy was demonstrated to be a powerful tool to reveal subtle differences of the macro- and microheterogeneity of glycoprotein sialylation. [more...]
Use: pGlyco; pQuant
###### ABSTRACT: N-glycosylation alteration has been reported in liver diseases. Characterizing N-glycopeptides that correspond to N-glycan structure with specific site information enables better understanding of the molecular pathogenesis of liver damage and cancer. Here, unbiased quantification of N-glycopeptides of a cluster of serum glycoproteins with 40-55 kDa molecular weight (40-kDa band) was investigated in hepatitis B virus (HBV)-related liver diseases. We used an N-glycopeptide method based on O-18/O-16 C-terminal labeling to obtain 82 comparisons of serum from patients with HBV-related hepatocellular carcinoma (HCC) and liver cirrhosis (LC). Then, multiple reaction monitoring (MRM) was performed to quantify N-glycopeptide relative to the protein content, especially in the healthy donor-HBV-LC-HCC cascade. TPLTAN(205)ITK (H5N5S1F1) and (H5N4S2F1) corresponding to the glycopeptides of IgA(2) were significantly elevated in serum from patients with HBV infection and even higher in HBV-related LC patients, as compared with healthy donor. In contrast, the two glycopeptides of IgA(2) fell back down in HBV-related HCC patients. In addition, the variation in the abundance of two glycopeptides was not caused by its protein concentration. The altered N-glycopeptides might be part of a unique glycan signature indicating an IgA-mediated mechanism and providing potential diagnostic clues in HBV-related liver diseases. [more...]
Use: pGlyco; pQuant
Use: pGlyco
Use: pGlyco
Use: pDeep
Use: pDeep
Use: pDeep
### 2018
###### ABSTRACT: The open (mass tolerant) search of tandem mass spectra of peptides shows great potential in the comprehensive detection of post-translational modifications (PTMs) in shotgun proteomics. However, this search strategy has not been widely used by the community, and one bottleneck of it is the lack of appropriate algorithms for automated and reliable post-processing of the coarse and error-prone search results. Here we present PTMiner, a software tool for confident filtering and localization of modifications (mass shifts) detected in an open search. After mass-shift-grouped false discovery rate (FDR) control of peptide-spectrum matches (PSMs), PTMiner uses an empirical Bayesian method to localize modifications through iterative learning of the prior probabilities of each type of modification occurring on different amino acids. The performance of PTMiner was evaluated on three data sets, including simulated data, chemically synthesized peptide library data and modified-peptide spiked-in proteome data. The results showed that PTMiner can effectively control the PSM FDR and accurately localize the modification sites. At 1% real false localization rate (FLR), PTMiner localized 93%, 84 and 83% of the modification sites in the three data sets, respectively, far higher than two open search engines we used and an extended version of the Ascore localization algorithm. We then used PTMiner to analyze a draft map of human proteome containing 25 million spectra from 30 tissues, and confidently identified over 1.7 million modified PSMs at 1% FDR and 1% FLR, which provided a system-wide view of both known and unknown PTMs in the human proteome. [more...]
Use: pParse; pFind
Use: pFind
Use: pFind
###### ABSTRACT: Cysteine sulfinic acid or S-sulfinylation is an oxidative post-translational modification (OxiPTM) that is known to be involved in redox-dependent regulation of protein function but has been historically difficult to analyze biochemically. To facilitate the detection of S-sulfinylated proteins, we demonstrate that a clickable, electrophilic diazene probe (DiaAlk) enables capture and site-centric proteomic analysis of this OxiPTM. Using this workflow, we revealed a striking difference between sulfenic acid modification (S-sulfenylation) and the S-sulfinylation dynamic response to oxidative stress, which is indicative of different roles for these OxiPTMs in redox regulation. We also identified >55 heretofore-unknown protein substrates of the cysteine sulfinic acid reductase sulfiredoxin, extending its function well beyond those of 2-cysteine peroxiredoxins (2-Cys PRDX1-4) and offering new insights into the role of this unique oxidoreductase as a central mediator of reactive oxygen species-associated diseases, particularly cancer. DiaAlk therefore provides a novel tool to profile S-sulfinylated proteins and study their regulatory mechanisms in cells. [more...]
Use: pFind; pQuant
Use: pFind
Use: pFind
Use: pFind
Use: pFind
Use: pFind
Use: pFind
Use: pFind
###### ABSTRACT: Confident characterization of intact glycopeptides is a challenging task in mass spectrometry-based glycoproteomics due to microheterogeneity of glycosylation, complexity of glycans, and insufficient fragmentation of peptide backbones. Open mass spectral library search is a promising computational approach to peptide identification, but its potential in the identification of glycopeptides has not been fully explored. Here we present pMatchGlyco, a new spectral library search tool for intact N-linked glycopeptide identification using high-energy collisional dissociation (HCD) tandem mass spectrometry (MS/MS) data. In pMatchGlyco, (1) MS/MS spectra of deglycopeptides are used to create spectral library, (2) MS/MS spectra of glycopeptides are matched to the spectra in library in an open (precursor tolerant) manner and the glycans are inferred, and (3) a false discovery rate is estimated for top-scored matches above a threshold. The efficiency and reliability of pMatchGlyco were demonstrated on a data set of mixture sample of six standard glycoproteins and a complex glycoprotein data set generated from human cancer cell line OVCAR3. [more...]
Use: pFind; pParse; pGlyco
Use: pFind
Use: pFind
Use: pFind
Use: pFind
Use: pFind
Use: pFind
Use: pFind
Use: pFind
Use: pFind
Use: pXtract
Use: pNovo
Use: pParse
Use: pParse
Use: pGlyco
Use: pGlyco
Use: pGlyco
Use: pGlyco
Use: pQuant
Use: pQuant
Use: pQuant
### 2017
###### ABSTRACT: Proteins can undergo oxidative cleavage by in-vitro metal-catalyzed oxidation (MCO) in either the aamidation or the diamide pathway. However, whether oxidative cleavage of polypeptide-chain occurs in biological systems remains unexplored. We describe a chemoproteomic approach to globally and site-specifically profile electrophilic protein degradants formed from peptide backbone cleavages in human proteomes, including the known N-terminal alpha-ketoacyl products and >1000 unexpected N-terminal formyl products. Strikingly, such cleavages predominantly occur at the carboxyl side of lysine (K) and arginine (R) residues across native proteomes in situ, while MCO-induced oxidative cleavages randomly distribute on peptide/protein sequences in vitro. Furthermore, ionizing radiation-induced reactive oxygen species (ROS) also generate random oxidative cleavages in situ. These findings suggest that the endogenous formation of N-formyl and N-alpha-ketoacyl degradants in biological systems is more likely regulated by a previously unknown mechanism with a trypsin-like specificity, rather than the random oxidative damage as previously thought. More generally, our study highlights the utility of quantitative chemoproteomics in combination with unrestricted search tools as a viable strategy to discover unexpected chemical modifications of proteins labeled with active-based probes. [more...]
Use: pFind; pQuant
Use: pFind
Use: pFind
Use: pFind
###### ABSTRACT: Reactive metabolites (RM) formed from bioactivation of drugs can covalently modify liver proteins and cause mechanism-based inactivation of major cytochrome P450 (CYP450) enzymes. Risk of bioactivation of a test compound is routinely examined as part of lead optimization efforts in drug discovery. Here we described a chemoproteomic platform to assess in vitro and in vivo bioactivation potential of drugs. This platform enabled us to determine reactivity of thousands of proteomic cysteines toward RMs of diclofenac formed in human liver microsomes and living animals. We pinpointed numerous reactive cysteines as the targets of RMs of diclofenac, including the active (heme-binding) sites on several key CYP450 isoforms (1A2, 2E1 and 3A4 for human, 2C39 and 3A11 for mouse). This general platform should be applied to other drugs, drug candidates, and xenobiotics with potential hepatotoxicity, including environmental organic substances, bioactive natural products, and traditional Chinese medicine. [more...]
Use: pFind; pQuant
Use: pFind
###### ABSTRACT: Identifying missing proteins (MPs) has been one of the critical missions of the Chromosome-Centric Human Proteome Project (C-HPP). Since 2012, over 30 research teams from 17 countries have been trying to search for adequate and accurate evidence of MPs through various biochemical strategies. MPs mainly fall into the following classes: (1) low-molecular-weight (LMW) proteins, (2) membrane proteins, (3) proteins that contained various post-translational modifications (PTMs), (4) nucleic acid associated proteins, (5) low abundance, and (6) unexpressed genes. In this study, kidney cancer and adjacent tissues were used for phosphoproteomics research, and 8962 proteins were identified, including 6415 phosphoproteins, and 44 728 phosphosites, of which 10 266 were unreported previously. In total, 75 candidate detections were found, including 45 phosphoproteins. GO analysis for these 75 candidate detections revealed that these proteins mainly clustered as membrane proteins and took part in nephron and kidney development. After rigorous screening and manual check, 9 of them were verified with the synthesized peptides. Finally, only one missing protein was confirmed. All mass spectrometry data from this study have been deposited in the PRIDE with identifier PXD006482. [more...]
Use: pFind; pBuild
###### ABSTRACT: Although 5 years of the missing proteins (MPs) study have been completed, searching for MPs remains one of the core missions of the Chromosome-Centric Human Proteome Project (C-HPP). Following the next-50-MPs challenge of the C-HPP, we have focused on the testis-enriched MPs by various strategies since 2015. On the basis of the theoretical analysis of MPs (2017-01, neXtProt) using multiprotease digestion, we found that nonconventional proteases (e.g. LysargiNase, GluC) could improve the peptide diversity and sequence coverage compared with Trypsin. Therefore, a multiprotease strategy was used for searching more MPs in the same human testis tissues separated by 10% SDS-PAGE, followed by high resolution LC-MS/MS system (Q Exactive HF). A total of 7838 proteins were identified. Among them, three PE2 MPs in neXtProt 2017-01 have been identified: beta-defensin 123 (Q8N688, chr 20q), cancer/testis antigen family 45 member A10 (PODMU9, chr Xq), and Histone H2A-Bbd type 2/3 (P0C5Z0, chr Xq). However, because only one unique peptide of >= 9 AA was identified in beta-defensin 123 and Histone H2A-Bbd type 2/3, respectively, further analysis indicates that each falls under the exceptions clause of the HPP Guidelines v2.1. After a spectrum quality check, isobaric PTM and single amino acid variant (SAAV) filtering, and verification with a synthesized peptide, and based on overlapping peptides from different proteases, these three MPs should be considered as exemplary examples of MPs found by exceptional criteria. Other MPs were considered as candidates but need further validation. All MS data sets have been deposited to the ProteomeXchange with identifier PXD006465. [more...]
Use: pFind; pBuild; pLabel
Use: pFind
Use: pFind
Use: pFind
###### ABSTRACT: Markers are needed to facilitate early detection of pancreatic ductal adenocarcinoma (PDAC), which is often diagnosed too late for effective therapy. Starting with a PDAC cell reprogramming model that recapitulated the progression of human PDAC, we identified secreted proteins and tested a subset as potential markers of PDAC. We optimized an enzyme-linked immunosorbent assay (ELISA) using plasma samples from patients with various stages of PDAC, from individuals with benign pancreatic disease, and from healthy controls. A phase 1 discovery study (n = 20), a phase 2a validation study (n = 189), and a second phase 2b validation study (n = 537) revealed that concentrations of plasma thrombospondin-2 (THBS2) discriminated among all stages of PDAC consistently. The receiver operating characteristic (ROC) c-statistic was 0.76 in the phase 1 study, 0.84 in the phase 2a study, and 0.87 in the phase 2b study. The plasma concentration of THBS2 was able to discriminate resectable stage I cancer as readily as stage III/IV PDAC tumors. THBS2 plasma concentrations combined with those for CA19-9, a previously identified PDAC marker, yielded a c-statistic of 0.96 in the phase 2a study and 0.97 in the phase 2b study. THBS2 data improved the ability of CA19-9 to distinguish PDAC from pancreatitis. With a specificity of 98%, the combination of THBS2 and CA19-9 yielded a sensitivity of 87% for PDAC in the phase 2b study. A THBS2 and CA19-9 blood marker panel measured with a conventional ELISA may improve the detection of patients at high risk for PDAC. [more...]
Use: pFind
Use: pFind
Use: pFind
Use: pFind
Use: pFind
###### ABSTRACT: Detailed characterization of glycoprotein structures requires determining both the sites of glycosylation as well as the glycan structures associated with each site. In this work, we developed an analytical strategy for characterization of intact N-glycopeptides in complex proteome samples. In the first step, tryptic glycopeptides were enriched using ZIC-HILIC. Secondly, a portion of the glycopeptides was treated with endoglycosidase H (Endo H) to remove high-mannose (Man) and hybrid N-linked glycans. Thirdly, a fraction of the Endo H-treated glycopeptides was further subjected to PNGase F treatment in O-18 water to remove the remaining complex glycans. The intact glycopeptides and deglycosylated peptides were analyzed by nano-RPLC-MS/MS, and the glycan structures and the peptide sequences were identified by using the Byonic or pFind tools. Sequential digestion by endoglycosidase provided candidate glycosites information and indication of the glycoforms on each glycopeptide, thus helping to confine the database search space and improve the confidence regarding intact glycopeptide identification. We demonstrated the effectiveness of this approach using RNase B and IgG and applied this sequential digestion strategy for the identification of glycopeptides from the HepG2 cell line. We identified 4514 intact glycopeptides coming from 947 glycosites and 1011 unique peptide sequences from HepG2 cells. The intensity of different glycoforms at a specific glycosite was obtained to reach the occupancy ratios of site-specific glycoforms. These results indicate that our method can be used for characterizing site-specific protein glycosylation in complex samples. [more...]
Use: pFind; pGlyco
Use: pXtract
Use: pXtract
Use: pNovo
Use: pNovo
Use: pNovo
Use: pNovo
Use: pQuant
Use: pMatch
Use: pTop
### 2016
###### ABSTRACT: Plant growth is controlled by integration of hormonal and light-signaling pathways. BZS1 is a B-box zinc finger protein previously characterized as a negative regulator in the brassinosteroid (BR)-signaling pathway and a positive regulator in the light-signaling pathway. However, the mechanisms by which BZS1/BBX20 integrates light and hormonal pathways are not fully understood. Here, using a quantitative proteomic workflow, we identified several BZS1-associated proteins, including light-signaling components COP1 and HY5. Direct interactions of BZS1 with COP1 and HY5 were verified by yeast two-hybrid and co-immunoprecipitation assays. Overexpression of BZS1 causes a dwarf phenotype that is suppressed by the hy5 mutation, while overexpression of BZS1 fused with the SRDX transcription repressor domain (BZS1-SRDX) causes a long-hypocotyl phenotype similar to hy5, indicating that BZS1's function requires HY5. BZS1 positively regulates HY5 expression, whereas HY5 negatively regulates BZS1 protein level, forming a feedback loop that potentially contributes to signaling dynamics. In contrast to BR, strigolactone (SL) increases BZS1 level, whereas the SL responses of hypocotyl elongation, chlorophyll and HY5 accumulation are diminished in the BZS1-SRDX seedlings, indicating that BZS1 is involved in these SL responses. These results demonstrate that BZS1 interacts with HY5 and plays a central role in integrating light and multiple hormone signals for photomorphogenesis in Arabidopsis. Copyright (C) 2016, Institute of Genetics and Developmental Biology, Chinese Academy of Sciences, and Genetics Society of China. Published by Elsevier Limited and Science Press. All rights reserved. [more...]
Use: pQuant; pFind
###### ABSTRACT: Protein phosphorylation, one of the most common and important modifications of acute and reversible regulation of protein function, plays a dominant role in almost all cellular processes. These signaling events regulate cellular responses, including proliferation, differentiation, metabolism, survival, and apoptosis. Several studies have been successfully used to identify phosphorylated proteins and dynamic changes in phosphorylation status after stimulation. Nevertheless, it is still rather difficult to elucidate precise complex phosphorylation signaling pathways. In particular, how signal transduction pathways directly communicate from the outer cell surface through cytoplasmic space and then directly into chromatin networks to change the transcriptional and epigenetic landscape remains poorly understood. Here, we describe the optimization and comparison of methods based on thiophosphorylation affinity enrichment, which can be utilized to monitor phosphorylation signaling into chromatin by isolation of phosphoprotein containing nucleosomes, a method we term phosphorylation-specific chromatin affinity purification (PS-ChAP). We utilized this PS-ChAP(1) approach in combination with quantitative proteomics to identify changes in the phosphorylation status of chromatin-bound proteins on nucleosomes following perturbation of transcriptional processes. We also demonstrate that this method can be employed to map phosphoprotein signaling into chromatin containing nucleosomes through identifying the genes those phosphorylated proteins are found on via thiophosphate PS-ChAP-qPCR. Thus, our results showed that PS-ChAP offers a new strategy for studying cellular signaling and chromatin biology, allowing us to directly and comprehensively investigate phosphorylation signaling into chromatin to investigate if these pathways are involved in altering gene expression. The mass spectrometry proteomics data have been deposited to the ProteomeXchange Consortium with the data set identifier PXD002436. [more...]
Use: pQuant; pFind
###### ABSTRACT: Detection of differentially abundant proteins in label-free quantitative shotgun liquid chromatography tandem mass spectrometry (LC-MS/MS) experiments requires a series of computational steps that identify and quantify LC-MS features. It also requires statistical analyses that distinguish systematic changes in abundance between conditions from artifacts of biological and technical variation. The 2015 study of the Proteome Informatics Research Group (iPRG) of the Association of Biomolecular Resource Facilities (ABRF) aimed to evaluate the effects of the statistical analysis on the accuracy of the results. The study used LC tandem mass spectra acquired from a controlled mixture, and made the data available to anonymous volunteer participants. The participants used methods of their choice to detect differentially abundant proteins, estimate the associated fold changes, and characterize the uncertainty of the results. The study found that multiple strategies (including the use of spectral counts versus peak intensities, and various software tools) could lead to accurate results, and that the performance was primarily determined by the analysts' expertise. This manuscript summarizes the outcome of the study, and provides representative examples of good computational and statistical practice. The data set generated as part of this study is publicly available. [more...]
Use: pFind; pQuant
Use: pFind
Use: pFind
###### ABSTRACT:
Use: pFind; pQuant
###### ABSTRACT: N-Glycosylation is one of the most prevalent protein post-translational modifications and is involved in many biological processes, such as protein folding, cellular communications, and signaling. Alteration of N-glycosylation is closely related to the pathogenesis of diseases. Thus, the investigation of protein N-glycosylation is crucial for the diagnosis and treatment of disease. In this research, we applied diethylaminoethanol (DEAE) Sepharose solid-phase extraction microcolumns for N-glycopeptide enrichment. This method integrated the advantages of Click Maltose and zwitterionic HILIC (ZIC-HILIC) and showed a relatively higher specificity for N-glycosylated peptides. This strategy was then applied to tryptic digests of normal human serum, followed by deglycosylation using peptide-N-glycosidase F (PNGase F) in H-2 O-18. Subsequent LC-MS/MS analysis allowed for the assignment of 219 N-glycosylation sites from 115 serum N-glycoproteins. This study provides an alternative approach for N-glycopeptide enrichment and the method employed is effective for large-scale N-glycosylation site identification. [more...]
Use: pFind; pBuild
###### ABSTRACT: Since 2012, missing proteins (MPs) investigation has been one of the critical missions of Chromosome-Centric Human Proteome Project (C-HPP) through various biochemical strategies. On the basis of our previous testis MPs study, faster scanning and higher resolution mass-spectrometry-based proteomics might be conducive to MPs exploration, especially for low-abundance proteins. In this study, Q-Exactive HF (HF) was used to survey proteins from the same testis tissues separated by two separating methods (tricine- and glycine-SDS-PAGE), as previously described. A total of 8526 proteins were identified, of which more low-abundance proteins were uniquely detected in HF data but not in our previous LTQ Orbitrap Velos (Velos) reanalysis data. Further transcriptomics analysis showed that these uniquely identified proteins by HF also had lower expression at the mRNA level. Of the 81 total identified MPs, 74 and 39 proteins were listed as MPs in HF and Velos data sets, respectively. Among the above MPs, 47 proteins (43 neXtProt PE2 and 4 PE3) were ranked as confirmed MPs after verifying with the stringent spectra match and isobaric and single amino acid variants filtering. Functional investigation of these 47 MPs revealed that 11 MPs were testis-specific proteins and 7 MPs were involved in spermatogenesis process. Therefore, we concluded that higher scanning speed and resolution of HF might be factors for improving the low-abundance MP identification in future C-HPP studies. All mass-spectrometry data from this study have been deposited in the ProteomeXchange with identifier PXD004092. [more...]
Use: pFind; pBuild
###### ABSTRACT: A membrane protein enrichment method composed of ultracentrifugation and detergent-based extraction was first developed based on MCF7 cell line. Then, in-solution digestion with detergents and eFASP (enhanced filter-aided sample preparation) with detergents were compared with the time-consuming in-gel digestion method. Among the in-solution digestion strategies, the eFASP combined with RapiGest identified 1125 membrane proteins. Similarly, the eFASP combined with sodium deoxycholate identified 1069 membrane proteins; however, the in-gel digestion characterized 1091 membrane proteins. Totally, with the five digestion methods, 1390 membrane proteins were identified with >= 1 unique peptides, among which 1345 membrane proteins contain unique peptides >= 2. This is the biggest membrane protein data set for MCF7 cell line and even breast cancer tissue samples. Interestingly, we identified 13 unique peptides belonging to 8 missing proteins (MPs). Finally, eight unique peptides were validated by synthesized peptides. Two proteins were confirmed as MPs, and another two proteins were candidate detections. [more...]
Use: pFind; pBuild
###### ABSTRACT: Core-fucosylation (CF) plays important roles in regulating biological processes in eukaryotes. Alterations of CF-glycosites or CF-glycans in bodily fluids correlate with cancer development. Therefore, global research of protein core-fucosylation with an emphasis on proteomics can explain pathogenic and metastasis mechanisms and aid in the discovery of new potential biomarkers for early clinical diagnosis. In this study, a precise and high throughput method was established to identify CF-glycosites from human plasma. We found that alternating HCD and ETD fragmentation (AHEF) can provide a complementary method to discover CF-glycosites. A total of 407 CF-glycosites among 267 CF-glycoproteins were identified in a mixed sample made from six normal human plasma samples. Among the 407 CF-glycosites, 10 are without the N-X-S/T/C consensus motif, representing 2.5% of the total number identified. All identified CF-glycopeptide results from HCD and ETD fragmentation were filtered with neutral loss peaks and characteristic ions of GlcNAc from HCD spectra, which assured the credibility of the results. This study provides an effective method for CF-glycosites identification and a valuable biomarker reference for clinical research. Biological significance: CF-glycosylation plays an important role in regulating biological processes in eukaryotes. Alterations of the glycosites and attached CF-glycans are frequently observed in various types of cancers. Thus, it is crucial to develop a strategy for mapping human CF-glycosylation. Here, we developed a complementary method via alternating HCD and ETD fragmentation (AHEF) to analyze CF-glycoproteins. This strategy reveals an excellent complementarity of HCD and ETD in the analysis of CF-glycoproteins, and provides a valuable biomarker reference for clinical research. Published by Elsevier B.V. [more...]
Use: pFind; pBuild
###### ABSTRACT: Over the past decades, protein O-GlcNAcylation has been found to play a fundamental role in cell cycle control, metabolism, transcriptional regulation, and cellular signaling. Nevertheless, quantitative approaches to determine in vivo GlcNAc dynamics at a large-scale are still not readily available. Here, we have developed an approach to isotopically label O-GlcNAc modifications on proteins by producing C-13-labeled UDP-GlcNAc from C-13(6)-glucose via the hexosamine biosynthetic pathway. This metabolic labeling was combined with quantitative mass spectrometry-based proteomics to determine protein O-GlcNAcylation turnover rates. First, an efficient enrichment method for O-GlcNAc peptides was developed with the use of phenylboronic acid solid-phase extraction and anhydrous DMSO. The near stoichiometry reaction between the diol of GlcNAc and boronic acid dramatically improved the enrichment efficiency. Additionally, our kinetic model for turnover rates integrates both metabolomic and proteomic data, which increase the accuracy of the turnover rate estimation. Other advantages of this metabolic labeling method include in vivo application, direct labeling of the O-GlcNAc sites and higher confidence for site identification. Concentrating only on nuclear localized GlcNAc modified proteins, we are able to identify 105 O-GlcNAc peptides on 42 proteins and determine turnover rates of 20 O-GlcNAc peptides from 14 proteins extracted from HeLa nuclei. In general, we found O-GlcNAcylation turnover rates are slower than those published for phosphorylation or acetylation. Nevertheless, the rates widely varied depending on both the protein and the residue modified. We believe this methodology can be broadly applied to reveal turnovers/dynamics of protein O-GlcNAcylation from different biological states and will provide more information on the significance of O-GlcNAcylation, enabling us to study the temporal dynamics of this critical modification for the first time. [more...]
Use: pXtract; pParse; pFind
Use: pFind
Use: pFind
Use: pFind
Use: pFind
###### ABSTRACT: O-linked beta-N-acetylglucosamine (O-GlcNAc) is emerging as an essential protein post-translational modification in a range of organisms. It is involved in various cellular processes such as nutrient sensing, protein degradation, gene expression, and is associated with many human diseases. Despite its importance, identifying O-GlcNAcylated proteins is a major challenge in proteomics. Here, using peracetylated N-azidoacetylglucosamine (Ac(4)GlcNAz) as a bioorthogonal chemical handle, we described a gel-based mass spectrometry method for the identification of proteins with O-GlcNAc modification in A549 cells. In addition, we made a labeling efficiency comparison between two modes of azide-alkyne bioorthogonal reactions in click chemistry: copper-catalyzed azide-alkyne cycloaddition (CuAAC) with Biotin-Diazo-Alkyne and strain-promoted azide-alkyne cycloaddition (SPAAC) with Biotin-DIBO-Alkyne. After conjugation with click chemistry in vitro and enrichment via streptavidin resin, proteins with O-GlcNAc modification were separated by SDS-PAGE and identified with mass spectrometry. Proteomics data analysis revealed that 229 putative O-GlcNAc modified proteins were identified with Biotin-Diazo-Alkyne conjugated sample and 188 proteins with Biotin-DIBO-Alkyne conjugated sample, among which 114 proteins were overlapping. Interestingly, 74 proteins identified from Biotin-Diazo-Alkyne conjugates and 46 verified proteins from Biotin-DIBO-Alkyne conjugates could be found in the O-GlcNAc modified proteins database dbOGAP (http://cbsb.lombardi.georgetown.edu/hulab/OGAP.html). These results suggested that CuAAC with Biotin-Diazo-Alkyne represented a more powerful method in proteomics with higher protein identification and better accuracy compared to SPAAC. The proteomics credibility was also confirmed by the molecular function and cell component gene ontology (GO). Together, the method we reported here combining metabolic labeling, click chemistry, affinity-based enrichment, SDS-PAGE separation, and mass spectrometry, would be adaptable for other post-translationally modified proteins in proteomics. [more...]
Use: pFind; pBuild
Use: pFind
|
# Math Help - Finding Inverse w/ Elementary Matrices
1. ## Finding Inverse w/ Elementary Matrices
Hello,
Can someone please help me solve this?
I've attached a picture.
2. Originally Posted by l flipboi l
Hello,
Can someone please help me solve this?
I've attached a picture.
Bring the left hand matrix to the unit matrix by means of elementary operations on its rows/columns, and repeat EXACTLY each operation on the right hand matrix (which is the unit matrix). When you finish, on the right hand side you'll get the inverse of the LHS matrix. (why?)
Tonio
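As an illustration of the procedure Tonio describes (a minimal sketch on a made-up 3×3 matrix, since the attached picture is not reproduced here), the same row reduction of [A | I] can be carried out in Python:

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Reduce the augmented matrix [A | I] to [I | A^-1] with elementary row operations."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    aug = np.hstack([A, np.eye(n)])               # build [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))
        if np.isclose(aug[pivot, col], 0.0):
            raise ValueError("matrix is singular")
        aug[[col, pivot]] = aug[[pivot, col]]     # row swap
        aug[col] /= aug[col, col]                 # scale the pivot row to 1
        for row in range(n):
            if row != col:
                aug[row] -= aug[row, col] * aug[col]   # clear the rest of the column
    return aug[:, n:]                             # the right half now holds A^-1

A = [[2, 1, 1],
     [1, 3, 2],
     [1, 0, 0]]
print(inverse_by_row_reduction(A))
print(np.linalg.inv(np.array(A, dtype=float)))    # should agree
```

Every operation applied to the left block is applied to the right block at the same time, which is exactly why the right block ends up as the inverse.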
3. Thanks! I see how to do it now.
|
This is one of the 100+ free recipes of the IPython Cookbook, Second Edition, by Cyrille Rossant, a guide to numerical computing and data science in the Jupyter Notebook. The ebook and printed book are available for purchase at Packt Publishing.
▶ Text on GitHub with a CC-BY-NC-ND license
▶ Code on GitHub with a MIT license
In this recipe, we will show how to use a Fast Fourier Transform (FFT) to compute the spectral density of a signal. The spectrum represents the energy associated to frequencies (encoding periodic fluctuations in a signal). It is obtained with a Fourier transform, which is a frequency representation of a time-dependent signal. A signal can be transformed back and forth from one representation to the other with no information loss.
In this recipe, we will illustrate several aspects of the Fourier Transform. We will apply this tool to weather data spanning 20 years in France obtained from the US National Climatic Data Center.
## How to do it...
1. Let's import the packages, including scipy.fftpack, which includes many FFT-related routines:
import datetime
import numpy as np
import scipy as sp
import scipy.fftpack
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
2. We import the data from the CSV file (it has been obtained at http://www.ncdc.noaa.gov/cdo-web/datasets#GHCND). The number -9999 is used for N/A values. pandas can easily handle this. In addition, we tell pandas to parse dates contained in the DATE column:
df0 = pd.read_csv('https://github.com/ipython-books/'
'cookbook-2nd-data/blob/master/'
'weather.csv?raw=true',
na_values=(-9999),
parse_dates=['DATE'])
df = df0[df0['DATE'] >= '19940101']
df.head()
3. Each row contains the precipitation and extreme temperatures recorded each day by one weather station in France. For every date in the calendar, we want to get a single average temperature for the whole country. The groupby() method provided by pandas lets us do this easily. We also remove any N/A value with dropna():
df_avg = df.dropna().groupby('DATE').mean()
df_avg.head()
4. Now, we get the list of dates and the list of corresponding temperatures. The unit is in tenths of a degree, and we get the average value between the minimal and maximal temperature, which explains why we divide by 20.
date = df_avg.index.to_datetime()
temp = (df_avg['TMAX'] + df_avg['TMIN']) / 20.
N = len(temp)
5. Let's take a look at the evolution of the temperature:
fig, ax = plt.subplots(1, 1, figsize=(6, 3))
temp.plot(ax=ax, lw=.5)
ax.set_ylim(-10, 40)
ax.set_xlabel('Date')
ax.set_ylabel('Mean temperature')
6. We now compute the Fourier transform and the spectral density of the signal. The first step is to compute the FFT of the signal using the fft() function:
temp_fft = sp.fftpack.fft(temp)
7. Once the FFT has been obtained, we need to take the square of its absolute value in order to get the power spectral density (PSD):
temp_psd = np.abs(temp_fft) ** 2
8. The next step is to get the frequencies corresponding to the values of the PSD. The fftfreq() utility function does just that. It takes the length of the PSD vector as input as well as the frequency unit. Here, we choose an annual unit: a frequency of 1 corresponds to 1 year (365 days). We provide 1/365 because the original unit is in days:
fftfreq = sp.fftpack.fftfreq(len(temp_psd), 1. / 365)
9. The fftfreq() function returns positive and negative frequencies. We are only interested in positive frequencies here, as we have a real signal:
i = fftfreq > 0
10. We now plot the power spectral density of our signal, as a function of the frequency (in unit of 1/year). We choose a logarithmic scale for the y axis (decibels):
fig, ax = plt.subplots(1, 1, figsize=(8, 4))
ax.plot(fftfreq[i], 10 * np.log10(temp_psd[i]))
ax.set_xlim(0, 5)
ax.set_xlabel('Frequency (1/year)')
ax.set_ylabel('PSD (dB)')
Because the fundamental frequency of the signal is the yearly variation of the temperature, we observe a peak for f=1.
11. Now, we cut out frequencies higher than the fundamental frequency:
temp_fft_bis = temp_fft.copy()
temp_fft_bis[np.abs(fftfreq) > 1.1] = 0
12. Next, we perform an inverse FFT to convert the modified Fourier transform back to the temporal domain. This way, we recover a signal that mainly contains the fundamental frequency, as shown in the following figure:
temp_slow = np.real(sp.fftpack.ifft(temp_fft_bis))
fig, ax = plt.subplots(1, 1, figsize=(6, 3))
temp.plot(ax=ax, lw=.5)
ax.plot_date(date, temp_slow, '-')
ax.set_xlim(datetime.date(1994, 1, 1),
datetime.date(2000, 1, 1))
ax.set_ylim(-10, 40)
ax.set_xlabel('Date')
ax.set_ylabel('Mean temperature')
We get a smoothed version of the signal, because the fast variations have been lost when we have removed the high frequencies in the Fourier transform.
## How it works...
Broadly speaking, the Fourier transform is an alternative representation of a signal as a superposition of periodic components. It is an important mathematical result that any well-behaved function can be represented under this form. Whereas a time-varying signal is most naturally considered as a function of time, the Fourier transform represents it as a function of the frequency. A magnitude and a phase, which are both encoded in a single complex number, are associated to each frequency.
### The Discrete Fourier Transform
Let's consider a digital signal $$x$$ represented by a vector $$(x_0, ..., x_{N-1})$$. We assume that this signal is regularly sampled. The Discrete Fourier Transform (DFT) of $$x$$ is $$X = (X_0, ..., X_{N-1})$$ defined as:
$$\forall k \in \{0, \ldots, N-1\}, \quad X_k = \sum_{n=0}^{N-1} x_n e^{-2i\pi kn/N}.$$
The DFT can be computed efficiently with the Fast Fourier Transform (FFT), an algorithm that exploits symmetries and redundancies in this definition to considerably speed up the computation. The complexity of the FFT is $$O(N \log N)$$ instead of $$O(N^2)$$ for the naive DFT. The FFT is one of the most important algorithms of the digital universe.
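As a quick check of this definition (an illustrative sketch, not part of the original recipe), a direct O(N^2) implementation can be compared with scipy's FFT on a small random signal:

```python
import numpy as np
import scipy.fftpack

def naive_dft(x):
    """Evaluate the DFT definition directly, in O(N^2) operations."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    n = np.arange(N)
    k = n.reshape((N, 1))
    W = np.exp(-2j * np.pi * k * n / N)   # matrix of exp(-2*pi*i*k*n/N)
    return W @ x

x = np.random.rand(256)
print(np.allclose(naive_dft(x), scipy.fftpack.fft(x)))  # True
```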
Here is an intuitive explanation of what the DFT describes. Instead of representing our signal on a real line, let's represent it on a circle. We can play the whole signal by making 1, 2, or any number $$k$$ of laps on the circle. Therefore, when $$k$$ is fixed, we represent each value $$x_n$$ of the signal with an angle $$2\pi kn / N$$ and a distance from the origin equal to $$x_n$$.
In the following figure, the signal is a sine wave at the frequency $$f=3 Hz$$. The points of this signal are in blue, positioned at an angle $$2\pi kn / N$$. Their algebraic sum in the complex plane is in red. These vectors represent the different coefficients of the signal's DFT.
The next figure represents the previous signal's power spectral density (PSD):
### Inverse Fourier Transform
By considering all possible frequencies, we have an exact representation of our digital signal in the frequency domain. We can recover the initial signal with an Inverse Fast Fourier Transform that computes an Inverse Discrete Fourier Transform. The formula is very similar to the DFT:
$$\forall k \in \{0, \ldots, N-1\}, \quad x_k = \frac{1}{N} \sum_{n=0}^{N-1} X_n e^{2i\pi kn/N}.$$
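As a small sanity check (again outside the original recipe), applying fft() and then ifft() returns the original samples up to floating-point error:

```python
import numpy as np
import scipy.fftpack

x = np.random.rand(1000)
x_roundtrip = scipy.fftpack.ifft(scipy.fftpack.fft(x))
print(np.allclose(x, x_roundtrip))  # True: no information is lost
```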
The DFT is useful when periodic patterns are to be found. However, generally speaking, the Fourier transform cannot detect transient changes at specific frequencies. Local spectral methods are required, such as the wavelet transform.
|
Homework Help: Help with question interpretation
1. Apr 23, 2008
varignon
1. The problem statement, all variables and given/known data
Give an example of a function f for which the following assertion is false:
If $$|f(x)-l|<\epsilon$$ when $$0<|x-a|<\delta$$, then $$|f(x)-l|<\epsilon/2$$, when $$0<|x-a|<\delta/2$$
3. The attempt at a solution
I am really not quite sure what I am looking for here. I think i want a function for which $$\delta$$ gets smaller much more quickly than epsilon does, any input as to what I am actually looking for would be great.
2. Apr 23, 2008
HallsofIvy
Since the problem only asks for "an example", I would go for the simplest. And it looks like a linear function, f(x)= mx+ b, should work. Draw an arbitrary straight line on an xy coordinate system, draw a rectangle at a point on that line, so the line is its diagonal, with $\delta$ as the length of the horizontal side and $\epsilon$ as the length of the vertical side. Now, imagine making $\epsilon$ smaller. How does $\delta$ change? What does the slope have to be so that $\delta$ decreases faster than $\epsilon$?
3. Apr 23, 2008
varignon
If i let b =1, and if m < 1, then $$\delta$$ gets smaller more quickly than $$\epsilon$$. Is f(x) = 0.25x + 1 a suitable answer to this question?
4. Apr 23, 2008
HallsofIvy
I may have answered too quickly and led you astray. Yes, if m < 1, $\delta$ gets smaller more quickly than $\epsilon$ - but $\delta$ will reach half its original size exactly when $\epsilon$ reaches half its original size - so linear equations will not work here. Okay, then, what about y = x^2? Take (0,0) as your initial point and $\epsilon= 1$. What does $\delta$ have to be? Now take $\epsilon= 1/2$. What does $\delta$ have to be?
5. Apr 23, 2008
varignon
$$\delta$$ has to be $$1/\sqrt{2}$$, which is still greater than $$\delta/2$$. Further investigation revealed that this seemed to be the case for x^3 etc too. I tried it with 1/(x^2), with a=1, l=1, and that seemed to work quite nicely. Is this a useful candidate?
Last edited: Apr 23, 2008
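For readers revisiting this thread, here is a rough numerical probe (my own sketch, not a proof; the sampling grid and the chosen δ values are assumptions): for a fixed δ it takes ε to be the largest sampled value of |f(x) − l| on the punctured δ-interval, then checks whether |f(x) − l| < ε/2 really holds on the δ/2-interval.

```python
import numpy as np

def assertion_holds(f, a, l, delta, samples=100_000):
    """Grid-based check of the assertion for one delta; only suggestive, not a proof."""
    x = np.linspace(a - delta, a + delta, samples)
    x = x[np.abs(x - a) > 1e-9]                    # punctured interval 0 < |x - a| < delta
    eps = np.max(np.abs(f(x) - l))                 # (approximately) the smallest workable epsilon
    inner = x[np.abs(x - a) < delta / 2]
    return np.max(np.abs(f(inner) - l)) < eps / 2  # does the conclusion survive halving delta?

# candidates mentioned in the thread (delta values chosen arbitrarily)
print(assertion_holds(lambda x: 0.25 * x + 1, 0.0, 1.0, 1.0))         # linear
print(assertion_holds(lambda x: x ** 2, 0.0, 0.0, 1.0))               # quadratic
print(assertion_holds(lambda x: 1 / x ** 2, 1.0, 1.0, 0.5))           # 1/x^2 near a = 1
print(assertion_holds(lambda x: np.sqrt(np.abs(x)), 0.0, 0.0, 1.0))   # one more to experiment with
```

A result of False for a candidate suggests the assertion fails there for that δ, which is the kind of counterexample the exercise asks for.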
|
# Definition:Bounded Ordered Set/Unbounded
Let $\left({S, \preceq}\right)$ be an ordered set.
A subset $T \subseteq S$ is unbounded (in $S$) if and only if it is not bounded.
|
82 Answered Questions for the topic Clauses
10/05/19
#### FANBOYS are hard!
Greetings. I have a sentence in my narrative that I am unsure about. The sentence is: "I've been transported into my favorite fictional universe, but at what cost?" . "At what cost" is not an... more
Clauses Grammar Subjects
06/24/19
#### What is the name of this grammatical phenomenon?
I have observed that many native English speakers (esp. American English, in my experience) tend, within the same sentence, to start a new clause whose subject is an element of the previous clause.... more
06/23/19
#### How to categorize this phrase. Relative clause, Interrogative clause, Adverbial clause?
What is "Where to go" in the sentence "Where to go is the question." Is it a adverbial phrase or a relative clause? And what is "Why go" in the sentence "Why go when you can stay?" - is it a clause?
06/23/19
#### If or since, does it make a difference?
In these sentences below, does it makes a difference if I replace *if* with *since*? 1)*If you are unemployed, why did you leave your last job?* 2)*If you are innocent, why did you flee?* ... more
06/23/19
#### Implied subject with "i.e."?
Is it required that an i.e. clause have an explicit subject? Preferred? *E.g.*, is the following sentence correct? >She was not amenable, i.e., turned him down. Or would it have to be >She... more
Clauses Grammar
06/23/19
#### Is a dependent clause part of the superordinate clause's predicate?
Could you please help me determine what the complete predicate is in the following sentence? > I get the willies when I see closed doors. — Joseph Heller, *Something Happened*. At first I... more
06/23/19
#### Is there bad grammar in Cinemark's "No Texting" warning?
The sentence in question is "Do not be the person we ask to leave the auditorium, because we **will**." It sounds very wrong to me, but I can't put my finger on the exact problem. Nobody on the... more
06/23/19
#### Conjunction Puzzle: Is this clause dependent or independent?
Third grade teacher here. I plan to teach students to distinguish between simple, compound and complex sentences — but only if I can demonstrate a clear and meaningful difference between the latter... more
06/23/19
#### Identifying parts of a sentence?
How do the bolded sections of the sentences below function grammatically? (taken from David McCullough's *John Adams*) 1. Philadelphia, the provincial capital of Pennsylvania on the western bank of... more
06/23/19
#### How can “for” be classed as a coördinating conjunction in the following instances?
How can *for* be classed as a coördinating conjunction in the following instances? - I cannot give you any money, for I have none. - He deserved to succeed, for he worked hard. - Blessed are the... more
06/22/19
#### Clauses in Sentences?
I understand that a clause contains (in order) a subject, verb and object, like below: > He let his daughter. "He" is the subject, "let" is the verb and "his daughter" is the object. But what... more
06/21/19
#### Comma after To at the beginning of a sentence?
I am just writing my master thesis and I am unsure whether to place a comma in sentences starting with "To". Here are some examples: - To be able to improve the performance[,] it is important to... more
Clauses Grammar
06/21/19
#### That awkward moment when?
I know when people use the phrase "that awkward moment when", it is clearly a sentence fragment. What exactly is it called though? A dependent clause? A noun clause? I have no idea.
06/20/19
#### Does this sentence exemplify an adverbial clause?
On the Wikipedia page for 'Dependent clause,' on the subject of 'Dependent words,' there is provided an example which supposedly presents an adverbial clause, viz., "Wherever she goes, she leaves... more
06/20/19
#### Is this sentence grammatically correct? Adverb clause?
When I got back my test recently, I oddly found that my English teacher thinks that there is an error in the usage of adverbial clauses in > "It seems that moving the body while learning, which... more
06/20/19
#### Is it grammatical to introduce a result clause using “then”?
Is it grammatical to introduce a result clause by using *then* as in these examples: * Don’t be lazy – *then* you will fail. * Don’t kill him – *then* you will regret it. If so, then is the *then*... more
06/20/19
#### Is a comma in this sentence required?
In the sentence below, is the comma optional or should it (not) be there? I can hear it there when this is spoken, but I am not convinced it needs to be there in written form. > In order to pass... more
Clauses Grammar Pronouns
06/20/19
#### Can an independent clause have an implied (or null) subject?
I'm trying to determine whether a clause with an implied subject can be considered independent - specifically in the case of compound sentences. For example: "I was tired, but went to the party... more
06/20/19
#### Which clause does the adverb modify in this sentence?
I have the following sentence: > "The KKK was a secret organization; apart from a few top leaders *the members **never** revealed their membership **and** wore masks in public*." Does the adverb... more
06/20/19
#### Ambiguity of "We discourage X from doing Y by using Z"?
Given the sentence, > We discourage people from committing crimes by using law enforcement, religion and education. I see two possible interpretations: 1. [We discourage people by using law... more
Clauses Grammar
06/20/19
#### Parsing possibility?
What is the correct way to parse the following sentence: It is possible that one can be happy only if one can be free. Does the sentence say: It is possible that [one can be happy only if one can... more
Clauses Grammar Emphasis
06/20/19
#### It is only me that is or "It is only I that am"?
> It is only me that is confused. or > It is only I that am confused. The first one sounds more natural to me while the second one appears to me as grammatically correct. Which one is correct?
06/20/19
#### I like it that vs. "I like that"?
I want to express the following: You are blaming me for your lack of concern and I like that (in a sarcastic way). Which one of the following sentences would be correct? > * I like it that your... more
06/20/19
#### Use semicolon or period when telling a result of an action?
If you look at these sentences, the second one is result of the first: > Alex shouts and feels pain in his leg, and he rubs the place with hand and looks at the leg. His leg swelled little bit.... more
06/20/19
#### Punctuating a sentence containing em dashes within commas?
I always find myself writing sentences that contain clauses within clauses, and I can never decide what the right way to punctuate this is. I'm not specifying what kinds of clauses because they... more
|
# Model 1 Basic Time & Distance using formula Practice Questions Answers Test with Solutions & More Shortcuts
#### time & distance PRACTICE TEST [5 - EXERCISES]
Question : 1 [SSC CHSL 2012]
A train starts from a place A at 6 a.m. and arrives at another place B at 4.30 p.m. on the same day. If the speed of the train is 40 km per hour, find the distance travelled by the train ?
a) 400 km
b) 320 km
c) 230 km
d) 420 km
Using Rule 1,
Time = 10$1/2$ hours = $21/2$ hours
Speed = 40 kmph
Distance = Speed × Time
= $40 × 21/2$ = 420 km
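As a quick cross-check of this answer (my own sketch, not part of the original solution), the basic relation can be coded directly:

```python
def distance_km(speed_kmph, hours):
    # Rule 1: Distance = Speed x Time
    return speed_kmph * hours

# Question 1: 6 a.m. to 4.30 p.m. is 10.5 hours at 40 km/h
print(distance_km(40, 10.5))   # 420.0 km
```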
Question : 2 [SSC CGL Prelim 2005]
A man riding his bicycle covers 150 metres in 25 seconds. What is his speed in km per hour ?
a) 20
b) 23
c) 21.6
d) 25
Using Rule 1,
Speed = $150/25$ = 6 m/sec
= $6 × 18/5 = 108/5$ = 21.6 kmph
Question : 3 [SSC CHSL 2012]
Two men start together to walk a certain distance, one at 4 km/h and another at 3 km/h. The former arrives half an hour before the latter. Find the distance.
a) 9 km
b) 6 km
c) 7 km
d) 8 km
If the required distance be x km, then
$x/3 - x/4 = 1/2$
${4x - 3x}/12 = 1/2$
$x/12 = 1/2$ ⇒ x = 6 km
Using Rule 9,
Here $S_1 = 4, t_1 = x, S_2 = 3, t_2 = x + 1/2$
$S_1t_1 = S_2t_2$
$4 × x = 3(x + 1/2)$
$4x - 3x = 3/2$ ⇒ $x = 3/2$
Distance= $4 × 3/2$ = 6 kms
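The same equation can be handed to a computer algebra system as a cross-check (an illustrative sketch using sympy, not part of the original solution):

```python
from sympy import Rational, Symbol, solve

d = Symbol('d', positive=True)            # the unknown distance in km
# the 4 km/h walker arrives half an hour earlier than the 3 km/h walker
equation = d / 3 - d / 4 - Rational(1, 2)
print(solve(equation, d))                 # [6]
```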
Question : 4 [SSC MTS 2013]
A train covers a certain distance in 210 minutes at a speed of 60 kmph. The time taken by the train, to cover the same distance at a speed of 80 kmph is :
a) 3 hours
b) 4$5/8$ hours
c) 2$5/8$ hours
d) 3$5/8$ hours
Speed of train = 60 kmph
Time = 210 minutes
= $210/60$ hours or $7/2$ hours
Distance covered
= $60 × 7/2$ = 210 km
Time taken at 80 kmph
= $210/80 = 21/8$ hours
= 2$5/8$ hours
Using Rule 9,
Here, $S_1 = 60, t_1 = 210/60 hrs, S_2 = 80, t_2$ = ?
$S_1t_1 = S_2t_2$
60 × $210/60 = 80 × t_2$
$t_2 = 21/8$ hrs = 2$5/8$ hrs
Question : 5 [SSC CGL Prelim 2002]
An athlete runs 200 metres race in 24 seconds. His speed (in km/ hr) is :
a) 30
b) 28.5
c) 24
d) 20
Using Rule 1,
Speed = $\text"Distance"/\text"Time"$
= $200/24$ m/s
$200/24$ m/s = $200/24 × 18/5$
= 30 km/h [Since, x m/s = $18/5$ x km/h]
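The m/s to km/h factor of 18/5 used in Questions 2 and 5 is easy to verify with a few lines of code (a sketch, not part of the original solutions):

```python
def mps_to_kmph(speed_mps):
    # 1 m/s = (1/1000 km) / (1/3600 h) = 18/5 km/h
    return speed_mps * 18 / 5

print(mps_to_kmph(150 / 25))   # Question 2: about 21.6 km/h
print(mps_to_kmph(200 / 24))   # Question 5: about 30 km/h
```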
|
# Cartesian Plane
Cartesian plane is a two dimensional co-ordinate system. This system has two axis – the x-axis and the y-axis. The center of the Cartesian plane is called the origin.
The x-axis and the y-axis divide the plane into 4 different quadrants as shown in the figure above. The axis are real number line with 0 at the origin.
Any point on the plane is plotted in terms of horizontal distance on x-axis and vertical distance on y-axis.
A point is written as $p = (x, y)$, where $x$ is the distance on the x-axis and $y$ is the distance on the y-axis.
Example: Plot a point for $p = (3, 5)$.
Solution:
\begin{aligned}&if \hspace{2px} p = (3, 5) \hspace{2px} then\\ \\
&x = 3\\ \\
&y = 5
\end{aligned}
### Pythagorean Theorem and Distance Formula
Pythagorean theorem is used for finding the distance of hypotenuse of a right triangle. The formula is modified to find the distance of two point on the Cartesian plane.
The above triangle has three sides – a, b and c, then Pythagorean theorem is given by
\begin{aligned}
&a^2 + b^2 = c^2\\ \\
&c = \sqrt{a^2 + b^2}
\end{aligned}
Suppose there are two points on the Cartesian plane.
\begin{aligned}
&p(x_1, y_1) = (2, 4)\\ \\
&q(x_2, y_2) = (2, 2)
\end{aligned}
and we have to find the distance between them.
Using Pythagorean theorem, we get
a = | y2 - y1 | (the length of the vertical side a)
b = | x2 - x1 | (the length of the horizontal side b)
Therefore,
Distance formula for two points is
\begin{aligned}
&d = \sqrt{(|x_2 - x_1|)^2 + (|y_2 - y_1|)^2}\\ \\
&d = \sqrt{(|2 - 2|)^2 + (|2 - 4|)^2}\\ \\
&d = \sqrt{(0)^2 + (2)^2}\\ \\
&d = 2
\end{aligned}
The above diagram verifies the result and shows that the distance is actually 2 units. Hence, the distance formula is correct and applies to the Cartesian plane.
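The same computation is easy to express in code; here is a small sketch using the points from the example above:

```python
import math

def distance(p, q):
    """Distance between two points on the Cartesian plane, via the Pythagorean theorem."""
    (x1, y1), (x2, y2) = p, q
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

print(distance((2, 4), (2, 2)))   # 2.0, matching the worked example
```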
|
SimpleCashFlow - Maple Help
Finance
SimpleCashFlow
construct a cash flow at a given date
Calling Sequence
SimpleCashFlow(amount, date)
Parameters
amount - real constant; amount of cash flow
date - a string containing a date specification in a format recognized by ParseDate or a date data structure; date of cash flow
Description
• The SimpleCashFlow command constructs a simple cash flow at a given date.
Examples
> $\mathrm{with}\left(\mathrm{Finance}\right):$
> $\mathrm{SetEvaluationDate}\left("January 01, 2000"\right):$
> $\mathrm{EvaluationDate}\left(\right)$
${"January 1, 2000"}$ (1)
> $\mathrm{date}≔"Jan-01-2006"$
${\mathrm{date}}{≔}{"Jan-01-2006"}$ (2)
> $\mathrm{cashflow1}≔\mathrm{SimpleCashFlow}\left(100,\mathrm{date}\right)$
${\mathrm{cashflow1}}{≔}{\mathrm{100. on January 1, 2006}}$ (3)
> $\mathrm{NetPresentValue}\left(\mathrm{cashflow1},0.03\right)$
${83.52702114}$ (4)
> $\mathrm{cashflow2}≔\mathrm{SimpleCashFlow}\left(-100,\mathrm{date}\right)$
${\mathrm{cashflow2}}{≔}{\mathrm{-100. on January 1, 2006}}$ (5)
> $\mathrm{NetPresentValue}\left(\mathrm{cashflow2},0.03\right)$
${-83.52702114}$ (6)
> $\mathrm{cflows}≔\left[\mathrm{seq}\left(\mathrm{SimpleCashFlow}\left(100,\mathrm{AdvanceDate}\left(\mathrm{date},3i,\mathrm{Months}\right)\right),i=1..4\right)\right]$
${\mathrm{cflows}}{≔}\left[{\mathrm{100. on Apr-01-2006}}{,}{\mathrm{100. on Jul-01-2006}}{,}{\mathrm{100. on Oct-01-2006}}{,}{\mathrm{100. on Jan-01-2007}}\right]$ (7)
> $\mathrm{NetPresentValue}\left(\mathrm{cflows},0.03\right)$
${327.9371480}$ (8)
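For readers without Maple, the first net present value above can be approximated with a short Python sketch; it assumes continuous compounding over an ACT/365 year fraction, which are assumptions on my part since this page does not state Maple's default discounting convention:

```python
import math
from datetime import date

def npv_simple_cashflow(amount, cashflow_date, evaluation_date, rate):
    # assumed convention: continuous compounding over an ACT/365 year fraction
    years = (cashflow_date - evaluation_date).days / 365.0
    return amount * math.exp(-rate * years)

print(npv_simple_cashflow(100, date(2006, 1, 1), date(2000, 1, 1), 0.03))
# about 83.5, close to Maple's 83.52702114 for cashflow1
```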
Compatibility
• The Finance[SimpleCashFlow] command was introduced in Maple 15.
|
#### Isosceles trapezoid
Anthony Schwartz
2
How do I prove that the adjacent angles are congruent? That is, in the isosceles trapezoid ABCD where AB \cong CD and AD \| BC , how do I show that \angle{ABC} \cong \angle{BCD} and \angle{DAB} \cong \angle{CDA} ?
Krishna
2
Given: ABCD is an isosceles trapezoid with AB \cong CD and AD \| BC . To show: \angle{ABC} \cong \angle{BCD} and \angle{DAB} \cong \angle{CDA} . Draw the two perpendiculars AE and DF from A and D to the base BC. Then AE = DF, since they are altitudes of the same trapezoid, and AB = CD is given in the question. Two right triangles are congruent if their hypotenuses are congruent and a corresponding leg is congruent, so triangle ABE and triangle DFC are congruent (and in particular BE = CF). Their corresponding angles are therefore equal, which gives \angle{ABC} \cong \angle{BCD} (note that \angle{BCD} is the same angle as \angle{FCD} ), and similarly \angle{DAB} \cong \angle{CDA} .
Mahesh Godavarti
1
Draw a new line segment AE such that AD \cong EC as shown in the figure. Then, since AD \cong EC and AD \| EC , \square AECD is a parallelogram. Therefore, AE \cong CD and \triangle ABE is an isosceles triangle. After that, essentially, we have:
1. \angle EAD \cong \angle BCD . Reason: \square AECD is a parallelogram.
2. \angle EAD \cong \angle AEB . Reason: alternate interior angles and AD \| BC .
3. \angle AEB \cong \angle ABC . Reason: in \triangle ABE , AB \cong AE .
4. \angle ABC \cong \angle BCD . Reason: follows from Steps 1, 2 and 3.
Similarly, we can prove it for the other pair of angles.
|
Utumi Modules
Speaker:
Mohamed F. Yousif
Date:
Thursday, 28 June, 2018 - 11:00
Venue:
Room 1.08, Mathematics building, FCUP
A right R-module M is called a Utumi Module (U-module) if, whenever A and B are isomorphic submodules of M with A ∩ B = 0, there exist two summands K and T of M such that A is an essential submodule of K, B is an essential submodule of T and K ⊕ T is a direct summand of M . The class of U -modules is a simultaneous and strict generalization of three fundamental classes of modules; namely the quasi-continuous, the square-free and the automorphism-invariant modules.
On denominator vectors of Cluster Algebras
Pin Liu
Date:
Wednesday, 23 May, 2018 - 14:00
Venue:
Room 1.09, Mathematics building, FCUP
At the beginning of this century, Fomin and Zelevinsky invented a new class of algebras called cluster algebras motivated by total positivity in algebraic groups and canonical bases in quantum groups. Since their introduction, cluster algebras have found application in a diverse variety of settings which include Poisson geometry, Teichmüller theory, tropical geometry, algebraic combinatorics and, last but not least, the representation theory of quivers and finite dimensional algebras.
Long cycles in Hamiltonian graphs
António Girão
Date:
Friday, 20 April, 2018 - 15:30
Venue:
Room 1.22, Mathematics building, FCUP
In 1975, Sheehan conjectured that every d-regular Hamiltonian graph contains a second Hamiltonian cycle. This conjecture has been verified for all d greater than 22. In the light of Sheehan’s conjecture, it is natural to ask if regularity is genuinely necessary to force the existence of a second Hamiltonian cycle, or if a minimum degree condition is enough.
Hopf algebras and their finite dual.
Miguel Couto
Date:
Friday, 24 November, 2017 (All day)
Venue:
Room FC1.122, DMat-FCUP, 15h30 - 16h30
The subject of this talk will be Hopf algebras and their dual theory. We will mostly focus on a particular class of Hopf algebras: noetherian Hopf algebras that are finitely-generated modules over some commutative normal Hopf subalgebra. Some properties and examples of these Hopf algebras will be mentioned. Furthermore, we will see some results on the dual of this class of Hopf algebras, some of its properties, decompositions and maybe some interesting Hopf subalgebras.
The structure of split regular BiHom-Lie algebras
José Mª Sánchez
Date:
Monday, 11 September, 2017 (All day)
Venue:
11h, room 004, FC1 (Maths Bldg)
After recalling classical results in order to place our work, we introduce the class of split regular BiHom-Lie algebras as the natural extension of the one of split Hom-Lie algebras, and so of split Lie algebras. By making use of connection techniques, we focus our attention on the study of the structure of such algebras and, under certain conditions, the simplicity is characterized.
Quasi Euclidean Rings
André Leroy
Date:
Wednesday, 26 April, 2017 (All day)
Venue:
FCUP, Maths building FC1, room 1.22 at 11:30
For a natural number k, Cooke introduced k-stage euclidean rings as a generalization of classical Euclidean rings. His setting was entirely commutative. Later Leutbecher introduced them in a noncommutative setting, but his aim was also studying commutative rings.
Krull-Schmidt-Remak Theorem, Direct-Sum Decompositions, and G-groups
Alberto Facchini
Date:
Friday, 24 March, 2017 (All day)
Venue:
Room 1.22, Mathematics building, FCUP
We will begin by presenting some history of the Krull-Schmidt-Remak Theorem. From groups, we will pass to modules over a ring R, introducing some direct-sum decompositions that follow a special pattern. We will consider invariants that also appear in factorisation of polynomials. Then we will go back from (right) R-modules to groups. Here the category that appears in a natural way is that of G-groups, which substitutes the category of right R-modules. In this category, Remak's result has a natural interpretation.
Hopf Algebras and Ore Extensions
Speaker:
Manuel José Ribeiro de Castro Silva Martins
Date:
Tuesday, 29 November, 2016 - 15:00
Venue:
FCUP, Maths building FC1, room 0.06
Ore extensions provide a way of constructing new algebras from preexisting ones, by adding a new indeterminate subject to commutation relations. A recent generalization of this concept is that of double Ore extensions. On the other hand, Hopf algebras are algebras which possess a certain additional dual structure. The problem of extending a Hopf algebra structure through an Ore extension has been discussed in a recent paper by Brown, O'Hagan, Zhang and Zhuang, of which we present the main result.
Hochschild (co)homology of down-up algebras
Andrea Solotar
Date:
Friday, 17 June, 2016 - 10:00
Venue:
Room 004 (FC1-Maths Building)
Let $K$ be a fixed field. Given parameters $(\alpha,\beta,\gamma) \in K^{3}$, the associated down-up algebra $A(\alpha,\beta,\gamma)$ is defined as the quotient of the free associative algebra $K\langle u,d\rangle$ by the ideal generated by the relations
$$\begin{split}
d^{2} u - (\alpha d u d + \beta u d^{2} + \gamma d),\\
d u^{2} - (\alpha u d u + \beta u^{2} d + \gamma u).
\end{split}$$
This family of algebras was introduced by G. Benkart and T. Roby.
A non-abelian tensor product of Hom-Lie algebras
Speaker:
J.M. Casas (University of Vigo)
Date:
Friday, 21 November, 2014 - 11:30
Venue:
Room 0.29, Mathematics building, FCUP
A non-abelian tensor product of Hom-Lie algebras is constructed and studied. This tensor product is used to describe universal (α-)central extensions of Hom-Lie algebras and to establish a relation between cyclic and Milnor cyclic homologies of Hom-associative algebras satisfying a certain additional condition.
|
# Section 2.2: Problem 4 Solution
Working problems is a crucial part of learning mathematics. No one can learn... merely by poring over the definitions, theorems, and examples that are worked out in the text. One must work part of it out for oneself. To provide that opportunity is the purpose of the exercises.
James R. Munkres
Show that if does not occur free in , then .
For every structure and such that , we note that, for all , and agree at all free variables in , then, according to Theorem 22A, for all , , i.e. .
|
# Why c?
1. Apr 28, 2008
### neopolitan
I know this has been asked before but I would like to float a related question.
Why does the second postulate specifically refer to c?
This is the wording that I am referring to:
Two answers I have seen are:
1. The second postulate follows directly from the application of the first postulate to electromagnetism (in which case it seems you really only need the first postulate). The invariance of c is a consequence of this.
and
2. "If you are looking for proof, then I am afraid you will be disappointed. Postulates can not be proven, they are assumptions or statements made." (from https://www.physicsforums.com/showpost.php?p=1140259&postcount=2")
Another answer can be is that a more rigorous wording would be "light is always propagated in empty space with an invariant velocity which is independent of the state of motion of the emitting body".
The issue I have is not that the speed of light is invariant, but that it seems there are people who stop all further debate as to why the speed of light is invariant and why it is the particular speed it is (ie c) with answer 2 above.
So, what c in particular? And why does the speed of light in a vacuum have the value it has?
For a possible answer we could look at natural units. As discussed at http://en.wikipedia.org/wiki/Planck_length:
This indicates that Planck units have some physical meaning. They are also interesting because Planck length/Planck time = c.
I wonder if it is possible to consider that there is a link between these measurements, at which scale time and space become discrete, "foamy" or as I prefer "granular", and the speed limitation placed on photons.
For example, if the smallest measurable time duration is one Planck time and the smallest measurable length is one Planck length and these measurement limitations are physical, rather than being limitations associated with our measuring regimes, then we are left with a limitation where a discrete particle (or wavicle) can either move one discrete length in one discrete time duration at speed c, or stay motionless. This would make c not only the maximum speed, but also the minimum speed - for discrete particles, ie quarks.
Now the actual location of individual quarks can't be pinned down, we can only assign each possible location a probability (and this is a physical thing again, not a limitation on our measuring devices). Get enough quarks together, enough to constitute a mass as we know it, and you have a probability cloud. The centre of the mass is somewhere in that probability cloud and if you look at it from a macro perspective, you can now point at where the mass "is", with the tacit understanding that this is an approximation. A mass which consists of a large number of discrete particles/wavicles, each of which is restricted to a speed of c but not restricted in direction, could in fact travel, as a statistical average, slower than c - even if each individual discrete particle/wavicle travels at c. In fact, the more of them you have, the more difficult it will be to get them to all travel in the same direction, and more energy will need to be applied to get them to travel in pretty much the same direction.
Is this at all valid?
Is it valid to think that the light speed limitation of the second postulate is due to quantum level foaminess?
cheers,
neopolitan
Last edited by a moderator: May 3, 2017
2. Apr 28, 2008
### lbrits
The Planck length/Planck time thing is a bit backwards. Those are constructed from c to begin with. In any event, why the particular value of c? Well, in my books, c = 1, so... Alternatively, light, being massless, happens to travel at speed c, and massive things travel slowly enough that we measure them first and base our measurement systems on meters instead of lightseconds.
The fact that c is a limiting speed follows from the fact that the Lorentz group seems to be a symmetry group of nature, locally.
As for "Is it valid to think that the light speed limitation of the second postulate is due to quantum level foaminess?", your guess is as good as mine... but my guess would be "No" =) At the level of quantum foaminess, we probably don't even have spacetime anymore, so the question gets a bit uncertain.
I would argue that a good deal of effort is put into asking why nature has the particular symmetries that it has. I.e., what is the origin of the minus sign in the metric. It's just that these efforts have been fruitless so you don't hear about them.
3. Apr 28, 2008
### rbj
i think this goes along the same lines as this thread and this thread.
essentially the speed of light is the speed of all other instantaneous interaction (gravity, nuclear, as well as EM) of things separated by a vacuum. why it takes that value is really an historical question for why the units we use to measure length and time are what they are. the only salient physics is that this speed of propagation is real, finite, and positive. it could take on any physical value and the rest of reality would be scaled accordingly. i.e., i think that Planck Units are very important in viewing the scaling of things. if all of the dimensionless constants stay the same, including the ratios of like-dimensioned quantities, specifically how the size of any thing (particle, etc.) compares to the Planck Length, or how any period of time compares to the Planck Time, or how the mass of anything compares to the Planck Mass, then it just doesn't matter what any person's measurement of c is. same for other dimensionful constants like G.
4. Apr 28, 2008
### Phrak
If light traveled at any other speed would there be a measurable difference?
5. Apr 28, 2008
### Staff: Mentor
Along with the previous posts, I think that the important question isn't why the invariant speed has the particular value that it does (because that depends on your units), but why is the invariant speed finite?
6. Apr 29, 2008
### rbj
i have an opinion of a layman (but was informed to me by some pretty heavyweight physicists) that i have already stated too many times.
it seems more credible that the speeds of propagation of the fundamental interactions are finite (and even identical to each other). it's a pretty wild universe that something going on in a galaxy 2 billion lightyears away would affect us at the same time (but with perhaps less magnitude) as it affects things in its own galaxy (as observed by a 3rd party that is equi-distant to both). i guess it's a pretty wild universe as it is.
7. Apr 29, 2008
### neopolitan
A point I was making, perhaps not sufficiently clearly, is that the speed of light is an upper limit. One could say that light just goes as fast as it can, but this would be a misrepresentation, since light would also go as slow as it can.
At the foamy level of Planck units, there are two possible speeds, zero and c. A discrete particle/wavicle such as a photon has those two choices.
The motion of a collection of particles/wavicles, such as ourselves, represents the summation of all the individual motions of the constituent particles/wavicles. I think it would be fair to think that the constituent particles/wavicles could interact in a manner analogous to (not precisely the same as) the individual molecules of a gas cloud.
So I am wondering if anyone got that point, that all of the elemental particles/wavicles from which we are made would all be moving at c.
Someone made a comment on it being interesting that the speed of light is finite. Well, at the macro level, the speed is demonstrably finite. But what about at the foamy quantum level? It seems to me that if you look at a grainy universe, then the speed at which a photon shifts from one "quantum box" to the next could effectively be infinite. You could (in effect) have a jerky sort of motion: instantaneous translation to the adjacent quantum box, wait one Planck time, instantaneous translation to the adjacent quantum box, repeat. Perhaps it is not valid to think of the instantaneous translation as being equivalent to infinite speed (since it is a dividing by zero issue), but it seems pretty damn close.
cheers,
neopolitan
8. Apr 29, 2008
### Staff: Mentor
I guess then that I don't understand your question. The fact that c is finite has physical relevance. The actual value that it has is a question of your system of units, which is arbitrary.
9. Apr 29, 2008
### DrGreg
I am no expert in quantum theory, but I think you have the wrong picture here. Particles don't "jump from one quantum box to another". The Planck units simply refer to uncertainty in measurement. A measurement can take any value, it needn't be an integer number of Planck units. There is just an error bar on your measurement.
Don't picture a particle as a hard, solid ball, but as an out-of-focus blur that you can't measure very well. You can't be sure whether the blurriness is caused by its position or its speed.
Despite what Wikipedia says, I think that in some circumstances you might be able to measure a particle's position with an accuracy much better than a Planck length, but then you would measure its momentum very poorly. Or vice versa.
Finally, in quantum theory, "instantaneous" speed is inherently difficult to measure. If the speed isn't constant, you'd need to measure both a short distance and a short time accurately, which you can't. It's a lot easier to measure momentum. And there is no limit to momentum, unlike speed which is limited by c.
10. Apr 29, 2008
### neopolitan
Hi DrGreg,
Did you notice my out-of-focus blurred terminology?
"effectively", "(in effect)", "sort of", "particle/wavicle"
I also referred earlier to probability distributions.
But I am tired, so I won't be putting too much effort into a reply. I do wonder what possible meaning momentum has if you divorce it from speed. Are you saying the momentum of a specific known mass is unlimited? That seems odd to me, but I am in the throes of a cold, perhaps there is some subtlety that I am missing here.
cheers,
neopolitan
11. Apr 29, 2008
### rbj
the quote is accurate, Dale, but i don't see a question mark in it. whom are you addressing? (and BTW, i don't think we're disagreeing about anything either.)
12. Apr 29, 2008
### Staff: Mentor
Oops, my mistake. I read your response on a little hand-held device where I cannot see very many lines at a time, and I got confused thinking that your response was from neopolitan. The question I was referring to is the OP's Q.
I don't think we are disagreeing either.
13. Apr 29, 2008
### neopolitan
I said:
rbj said:
Does this indicate some agreement?
I also draw DrGreg's attention to the original post.
cheers,
neopolitan
PS About the momentum thing, the fundamental equations (for SR at least) contain an explicit reference to c. There is no mention of momentum, irrespective of how much easier it might be to measure.
Last edited: Apr 29, 2008
14. May 1, 2008
### DrGreg
Oops, sorry, I didn't read your earlier post in as much detail as I should.
One of the points I was making is that you seem to be suggesting consecutive measurements might be, say, 137, 138, 138, 139 Planck lengths at intervals of 1 Planck time; whereas I am saying they could be 137.23 ± 1, 137.82 ± 1, 138.51 ± 1, 139.12 ± 1, etc.
In quantum theory, the act of measurement actually modifies the thing you are measuring, so any second measurement of the same thing may differ from what it would have been had you not made the first measurement.
So if you were trying to measure speed, you have to measure two distances a short time apart, and the second measurement is distorted by the first. It is indeed possible to get an answer of c (in fact I'm not sure if you might even get a higher value) if you try to measure speed this way, but (as I understand it) that doesn't mean the particle is "really" travelling at that speed, whatever that means, it just means you've unavoidably made a pair of erroneous measurements.
Whereas momentum is something you can measure directly, either by colliding something else (large) into the particle and measuring the large thing's change of momentum, or, bearing in mind wave/particle duality, by measuring frequency which is proportional to momentum.
In SR, momentum is given not by $p = mv$, but $p = \gamma mv$, where $\gamma$ is the Lorentz factor $1 / \sqrt{1 - v^2 / c^2}$, and $m > 0$ is rest mass. As $v \rightarrow c$, $p \rightarrow \infty$. That's what I meant by saying "there is no limit to momentum".
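For a rough sense of the numbers, taking the formula above at face value:
$$v = 0.9c \Rightarrow \gamma \approx 2.3, \qquad v = 0.99c \Rightarrow \gamma \approx 7.1, \qquad v = 0.999c \Rightarrow \gamma \approx 22.4,$$
so the momentum keeps growing without bound even though the speed never reaches $c$.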
So uncertainty in momentum does not translate directly into uncertainty in speed, partly because of the $\gamma$ factor, partly because there may also be uncertainty over mass.
(I should repeat my warning that I'm no expert in quantum theory, so I stand to be corrected if any of the above isn't quite right.)
|
# A study was conducted to investigate the relationship between soda consumption (servings per day) and weight gain (kgs)
###### Question:
A study was conducted to investigate the relationship between soda consumption (servings per day) and weight gain (kgs) over a 2 year period and yielded the following linear regression results: weight gain over a 2 year period (kgs) = 4.5 + 0.25(servings of soda per day). What is the predicted weight gain over a 2 year period for someone that drinks 4 servings of soda per day?
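Substituting 4 servings per day into the fitted line gives the prediction directly:
$$\text{predicted weight gain} = 4.5 + 0.25 \times 4 = 5.5 \text{ kg over the 2 year period.}$$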
|
# String in ANSI C?
I'm trying declare a string in ANSI C and am not sure of the best way to do it.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdarg.h>
char *InitStr(char *ReturnStr){
ReturnStr = NULL;
ReturnStr = realloc( ReturnStr, strlen("") +1 );
strcpy( ReturnStr , "");
return ReturnStr;
}
char *AddStr(char *StrObj1,char *StrObj2){
StrObj1 = realloc(StrObj1,(strlen(StrObj1) + strlen(StrObj2)+1));
strcat(StrObj1, StrObj2);
return StrObj1 ;
}
char *JoinStr(const char *fmt, ...) {
char *p = NULL;
size_t size = 30;
int n = 0;
va_list ap;
if((p = malloc(size)) == NULL)
return NULL;
while(1) {
va_start(ap, fmt);
n = vsnprintf(p, size, fmt, ap);
va_end(ap);
if(n > -1 && n < size)
return p;
// failed: have to try again, alloc more mem.
if(n > -1) // glibc 2.1
size = n + 1;
else //* glibc 2.0
size *= 2; // twice the old size
if((p = realloc (p, size)) == NULL)
return NULL;
}
}
main(){
printf("\n");
char *MyLocalString = InitStr(MyLocalString);
printf("InitStr: %s\n",MyLocalString);
printf("---------------------------------------------------\n");
printf("---------------------------------------------------\n");
printf("---------------------------------------------------\n");
printf("---------------------------------------------------\n");
MyLocalString = AddStr(MyLocalString ,JoinStr("%s%s%s", "\n\tString3", "\n\tString2", "\n\tString3"));
printf("4. JoinStr: %s\n",MyLocalString);
printf("---------------------------------------------------\n");
printf("\n");
}
In this code I have 3 functions to handle the string:
1. InitStr - to initial string
2. AddStr - to add string
3. JoinStr -to Join string
The code works fine, however, I am not sure if this is a decent way of handling a string since I am using pointers.
• for a beginner, this is reasonable source code. the char *InitStr(char *ReturnStr){ ReturnStr = NULL; ...} however, makes no sense. Why give an argument to a function, knowing that the first thing the function does is setting it to NULL ? – wildplasser Jun 9 '12 at 23:25
• Hopefully useful: gratisoft.us/todd/papers/strlcpy.html – sarnold Jun 9 '12 at 23:26
• InitStr will work but consider char* EmptyStr(void){ return strdup(""); } – Jim Balter Jun 9 '12 at 23:34
I have a few suggestions.
First you are not verifying that realloc() properly allocated memory. When using realloc() you should use two pointers so you can properly check for a successful allocation:
char * increaseBuffer(char * buff, size_t newBuffSize)
{
void * ptr;
ptr = realloc(buff, newBuffSize);
// verify allocation was successful
if (ptr == NULL)
{
perror("realloc()"); // prints error message to Stderr
return(buff);
};
// adjust buff to refer to new memory allocation
buff = ptr;
/* use buffer for something */
return(buff);
};
Since your InitStr() function is only providing an initial allocation, you can simplify it to:
char *InitStr(){
char * ReturnStr;
// allocate new buffer
ReturnStr = malloc(1);
// verify allocation was successful before using buffer
if (ReturnStr != NULL)
ReturnStr[0] = '\0';
// return buffer
return(ReturnStr);
}
This could also be condensed a little more:
char *InitStr(){
char * ReturnStr;
if ((ReturnStr = malloc(1)) != NULL)
ReturnStr[0] = '\0';
return(ReturnStr);
}
char *AddStr(char *StrObj1,char *StrObj2){
void * ptr;
ptr = realloc(StrObj1,(strlen(StrObj1) + strlen(StrObj2) + 1));
if (ptr == NULL)
return(StrObj1);
StrObj1 = ptr;
strcat(StrObj1, StrObj2);
return(StrObj1);
}
You can slightly improve the runtime efficiency of this function by reducing the number of times you need to find the terminating NULL character in your strings:
char *AddStr(char *StrObj1,char *StrObj2){
void * ptr;
size_t len1;
size_t len2;
// determine length of strings
len1 = strlen(StrObj1);
len2 = strlen(StrObj2);
// increase size of buffer
if ((ptr = realloc(StrObj1, (len1 + len2 + 1))) == NULL)
return(StrObj1);
StrObj1 = ptr;
// this passes a pointer which references the terminating '\0' of
// StrObj1. This makes the string appear to be empty to strcat() which
// in turn means that it does not compare each character of the string
// trying to find the end. This is a minor performance increase, however if
// running lots of string operations in a high iteration program, the combined
// effect could be substantial.
strcat(&StrObj1[len1], StrObj2);
return(StrObj1);
}
You can complete your last function without the while loop:
char *JoinStr(const char *fmt, ...)
{
int len;
char * str;
void * ptr;
va_list ap;
if ((str = malloc(2)) == NULL)
return(NULL);
str[0] = '\0';
// run vsnprintf to determine required length of string
va_start(ap, fmt);
len = vsnprintf(str, 2, fmt, ap);
va_end(ap);
if (len < 2)
return(str);
// allocate enough space for the entire formatted string
len++;
if ((ptr = realloc(str, ((size_t)len))) == NULL)
{
free(str);
return(NULL);
};
// format string
va_start(ap, fmt);
vsnprintf(str, len, fmt, ap);
va_end(ap);
return(str);
}
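For completeness, here is one way a caller might wire these together (a minimal sketch, assuming the InitStr/AddStr/JoinStr variants above), pairing every allocation with a matching free():
#include <stdio.h>
#include <stdlib.h>
/* prototypes for the variants shown above */
char *InitStr(void);
char *AddStr(char *StrObj1, char *StrObj2);
char *JoinStr(const char *fmt, ...);
int main(void)
{
    char *s;
    char *joined;
    s = InitStr();                /* heap-allocated empty string */
    if (s == NULL)
        return EXIT_FAILURE;
    joined = JoinStr("%s%s", "\n\tA", "\n\tB");   /* a separate allocation */
    if (joined != NULL)
    {
        s = AddStr(s, joined);    /* s is realloc'd and joined is only copied into it */
        free(joined);             /* so the temporary must be freed by the caller */
    }
    printf("%s\n", s);
    free(s);                      /* final buffer freed exactly once */
    return EXIT_SUCCESS;
}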
• thank you for reviewing my code. your suggestions are greatly appreciated. However, I have 2 more thing to ask. 1. where/why use char * increaseBuffer() since i have char *AddStr(char *StrObj1,char *StrObj2)? 2. should i use free(MyString) before exit the program? – Flan Alflani Jun 10 '12 at 0:49
• @FlanAlflani 1. increaseBuffer() is just a hypothetical example and is not used within your program. 2. Yes, you are correct. Any memory you allocate, must also free before you exit the program or re-use the pointer. – David M. Syzdek Jun 10 '12 at 1:55
• Just as a nitpick the correct prototype would be char *InitStr(void). With empty () at the call side this is only "known" as a function that receives any kind of argument, so some error checking would be off. – Jens Gustedt Jun 10 '12 at 5:46
• @David I keep getting *** glibc detected *** ./myprogram: munmap_chunk(): invalid pointer: 0x0000000002588ca0 *** when try to use JoinStr – Flan Alflani Jun 10 '12 at 13:02
This question might be more appropriate for programmers or codereview, but this is how a code review of mine, probably with at least one controversial statement inside somewhere, would go. Hopefully you will find some advice in here that answers your questions.
1) Make sure you want to be using C and not C++ (or Java, Perl...) for whatever you're doing. If you can avoid doing strings in C in real life code, always do. Use C++ strings instead. And if you want a sturdy C library to do this in, consider glib (or writing C++ wrappers).
2) If you want to write a full ADT (abstract data type) for strings, follow the usual C paradigm: You should have functions mystring_init, mystring_{whateveroperations}, and mystring_cleanup. The init will typically have a malloc and if so cleanup will definitely have free. The reallocs I think (not sure) are only safe provided malloc happens first and free last, which without an init that mallocs would be bad.
3) But in real life I think an ADT for strings, which experienced C developers can program in pretty fluently, will make the code less readable, not more.
4) Always work on the stack when possible. char str[STR_MAX_SIZE] is preferable to dynamic strings when possible, and it's possible more than most developers think. A few wasted bytes from safely overestimating is worth avoiding a crash or leak from dynamic memory usage. And your function JoinStr from its signature looks like it will accept a stack variable as its first argument, then bam! realloc. If you're going to do something sneaky like this, it pretty much has to be with an ADT you wrote via a pattern similar to (2).
5) Along those lines, the usual way to pass a string to a function is myoperation(char* str, size_t size); so the caller is responsible for memory management - and in particular gets to choose whether it's stack or heap - and the inside of the function just respects those parameters passed. If you break this I strongly recommend the ADT pattern in (2).
char *InitStr(char *ReturnStr){
ReturnStr = NULL;
This line throws away whatever the user passed in as the parameter
ReturnStr = realloc( ReturnStr, strlen("") +1 );
This throws away what the previous line does. It should also be pretty obvious what strlen("") is, so why calculate it?
strcpy( ReturnStr , "");
This line only ends up doing the same as ReturnStr[0] = '\0'
return ReturnStr;
}
The whole function should be written as `strdup("")`.
In JoinStr, you assume that a negative return value means not enough memory was provided. However, it's possible that any of a number of things could have gone wrong. If something else went wrong, a bad format string for example, this function will sit there exhausting your memory trying to fill out the string.
|
When quoting this document, please refer to the following
DOI: 10.4230/LIPIcs.FSTTCS.2009.2302
URN: urn:nbn:de:0030-drops-23027
URL: http://drops.dagstuhl.de/opus/volltexte/2009/2302/
Go to the corresponding LIPIcs Volume Portal
### Mediating for Reduction (on Minimizing Alternating Büchi Automata)
pdf-format:
### Abstract
We propose a new approach for minimizing alternating B\"uchi automata (ABA). The approach is based on the so called \emph{mediated equivalence} on states of ABA, which is the maximal equivalence contained in the so called \emph{mediated preorder}. Two states $p$ and $q$ can be related by the mediated preorder if there is a~\emph{mediator} (mediating state) which forward simulates $p$ and backward simulates $q$. Under some further conditions, letting a computation on some word jump from $q$ to $p$ (because they get collapsed) preserves the language as the automaton can anyway already accept the word without jumps by runs through the mediator. We further show how the mediated equivalence can be computed efficiently. Finally, we show that, compared to the standard forward simulation equivalence, the mediated equivalence can yield much more significant reductions when applied within the process of complementing B\"uchi automata where ABA are used as an intermediate model.
### BibTeX - Entry
@InProceedings{abdulla_et_al:LIPIcs:2009:2302,
author = {Parosh A. Abdulla and Yu-Fang Chen and Lukas Holik and Tomas Vojnar},
title = {{Mediating for Reduction (on Minimizing Alternating B{\"u}chi Automata)}},
booktitle = {IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science},
pages = {1--12},
series = {Leibniz International Proceedings in Informatics (LIPIcs)},
ISBN = {978-3-939897-13-2},
ISSN = {1868-8969},
year = {2009},
volume = {4},
editor = {Ravi Kannan and K. Narayan Kumar},
publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik},
}
|
### 4755
Highly Accelerated Simultaneous Multislice Projection Imaging
Nikolai J Mickevicius1, L. Tugan Muftuler2, Andrew S Nencka3, and Eric S Paulson1
1Radiation Oncology, Medical College of Wisconsin, Milwaukee, WI, United States, 2Neurosurgery, Medical College of Wisconsin, Milwaukee, WI, United States, 3Radiology, Medical College of Wisconsin, Milwaukee, WI, United States
### Synopsis
Projection imaging has many advantages over Cartesian sampling. The unique point spread function makes it particularly useful for highly accelerated parallel imaging and compressed sensing reconstructions[1]. In this study, a projection-domain sensitivity encoding algorithm is developed for highly accelerated simultaneous multislice radial imaging. Since it operates in the projection-domain, no time expensive gridding, de-gridding, and FFT operations are required within each iteration of the solving algorithm. From an in vivo experiment, two slices were reconstructed from only 34 radial spokes.
### Introduction
Methods to accelerate MR data acquisition are constantly being developed. Two such methods include the use of non-Cartesian k-space trajectories and simultaneous multislice (SMS) imaging. When used as the only form of acceleration, SMS proves advantageous compared with in-plane acceleration techniques since there is no $\sqrt{N}$ SNR penalty.
A method known as highly accelerated projection imaging (HAPI) uses high resolution (HR) radial k-space measurements to reconstruct lower resolution images[2]. The HR radial data are first brought into the projection domain by performing an inverse FT along the readout dimension. Each point in the sampled projection is then traced back through all intersecting voxels in the reconstructed image grid. The fraction of the voxel covered by the width of the projection point is multiplied by the respective complex-valued coil sensitivities. The complex-valued signal intensity in the measured high-resolution projections is thus parameterized as a sum, along the projection angle, of the lower resolution reconstructed image times the respective coil sensitivity profile and the voxel fractions. The present work extends the HAPI method to reconstruct highly accelerated radial SMS images.
### Methods
The goal of SMS HAPI is to obtain separated images from slice-aliased projections. Since more than one slice is to be reconstructed, the coil sensitivity matrix must now contain information from every simultaneously excited slice. Additionally, phase modulations using RF or gradient encoding are typically employed in modern SMS acquisitions[3,4]. These modulations, referred to as CAIPIRINHA (CAIPI) phase modulations, allow supplemental control over the aliasing patterns seen in the data. These modulations are stored in a phase modulation matrix, $\Phi$. See Yutzy et al. (2011) for more information[5]. The SMS HAPI problem is posed in Eq. 1. The desired reconstructed image, x, contains image data for all SMS slices. A circular field-of-view is reconstructed. F represents the fractional area of reconstructed voxels covered by a point in a measured projection, p (see Figure 1). C represents the coil sensitivity profiles, and T represents a transformation of x to a sparse domain. The reconstruction pipeline for SMS HAPI is shown in Figure 2. No spatial regularization was utilized in this study.
$$x = \underset{x}{\operatorname{argmin}} \lVert \sum_{n=1}^{SMS}(\Phi F C_n x_n) - p \rVert _2^2 + \lambda \lVert Tx \rVert_1$$
An SMS=3 simulation was performed in a numerical brain phantom. Three 128x128 images were reconstructed from 16 spokes of simulated 16-channel projection data with a base resolution of 512x512.
An SMS=2 experiment was simulated from acquired abdominal 3D stack-of-stars radial data on a 1.5T Elekta MR-Linac from a consenting volunteer. The base resolution of the acquired data was 512x512. The desired reconstructed matrix size is 128x128. Coil maps for the 8-channel array were calculated from the full dataset. SMS HAPI images were reconstructed using 34 k-space spokes from two slices summed together. Comparisons were made to the SMS CG-SENSE algorithm[5] as well as to the single-pass version of the same algorithm (equivalent to phase demodulation and NUFFT). For NUFFT and CG-SENSE, the images were reconstructed at 512x512 resolution then resampled to 128x128. The same circular FOV as the HAPI reconstruction is shown.
### Results
The SMS=3 simulation results are shown in Figure 3. Considering only 16 spokes were used to generate 128x128 images for three slices, the reconstructed images resemble the ground truth images remarkably well. Some residual aliasing artifacts are still present in the highly accelerated simulated images, however.
The retrospectively generated in vivo SMS=2 results are shown in Figure 4. The CG-SENSE algorithm failed to converge to the global minimum for such highly accelerated images. The SMS HAPI algorithm, however, was able to largely remove streaks from the images at the expense of enhanced noise in the images relative to the fully sampled images.
### Discussion
The SMS HAPI algorithm was able to outperform the SMS CG-SENSE algorithm in the preliminary experiment shown here. The SMS HAPI algorithm efficiently utilizes information from high resolution projections to generate lower resolution SMS images with largely reduced streaking artifacts. Since this method operates in the projection-domain, no time expensive gridding, de-gridding, and FFT operations are required within each iteration of the solving algorithm. A follow-up study will rigorously compare the computational complexity between SMS HAPI and SMS CG-SENSE.
### Conclusion
This preliminary investigation suggests that SMS HAPI may be a potential method to further accelerate projection imaging.
### Acknowledgements
No acknowledgement found.
### References
[1] Feng L, Grimm R, Block KT, Chandarana H, Kim S, Xu J, et al. Golden-angle radial sparse parallel MRI: Combination of compressed sensing, parallel imaging, and golden-angle radial sampling for fast and flexible dynamic volumetric MRI. Magn Reson Med 2014;72:707–17. doi:10.1002/mrm.24980.
[2] Ersoz A, Arpinar VE, Muftuler LT. Highly accelerated projection imaging with coil sensitivity encoding for rapid MRI. Med Phys 2013;40:022305. doi:10.1118/1.4789488.
[3] Breuer FA, Blaimer M, Heidemann RM, Mueller MF, Griswold MA, Jakob PM. Controlled aliasing in parallel imaging results in higher acceleration (CAIPIRINHA) for multi-slice imaging. Magn Reson Med 2005;53:684–91. doi:10.1002/mrm.20401.
[4] Setsompop K, Gagoski BA, Polimeni JR, Witzel T, Wedeen VJ, Wald LL. Blipped-controlled aliasing in parallel imaging for simultaneous multislice echo planar imaging with reduced g-factor penalty. Magn Reson Med 2012;67:1210–24. doi:10.1002/mrm.23097.
[5] Yutzy SR, Seiberlich N, Duerk JL, Griswold MA. Improvements in multislice parallel imaging using radial CAIPIRINHA. Magn Reson Med 2011;65:1630–7. doi:10.1002/mrm.22752.
### Figures
Figure 1. Calculating voxel fractions, F, for HAPI data. (a) The magnitude of a measured projection, which is the Fourier transform of an acquired k-space spoke. Each point in the projection (red dashed line) is equal to the sum of the intensity of intersecting voxels at the acquired projection angle weighted by the fraction of each voxel covered by the projection. (b) The fraction of voxels covered by a point in a projection acquired at a higher resolution than the voxels to be reconstructed.
Figure 2. The coil sensitivity profiles, C, are shown overlayed with the fractional area, F, of each reconstructed voxel covered by a finite-width line passing through at the projection angle. The coil sensitivity maps are weighted by the overlapping fractions and are then multiplied by the applied CAIPIRINHA phase, Φ. Projection data are synthesized from the current guess of the reconstructed SMS images, x. The sum of the synthesized projections is calculated and compared with the measured projections, p, for data consistency enforcement. The reconstructed images are then updated using a conjugate gradient algorithm.
Figure 3. SMS=3 simulation results. These data were reconstructed from only 16 spokes of simulated 16-channel data.
Figure 4. In vivo SMS=2 results. The NUFFT, CG-SENSE, and SMS HAPI reconstructions from 34 spokes are shown here. The CG-SENSE algorithm fails at such high total acceleration factors while HAPI is able to largely remove streaking artifacts without the use of spatial regularization.
Proc. Intl. Soc. Mag. Reson. Med. 27 (2019)
4755
|
# Torque parallel with a principal axis
Introduction: Suppose I have a rigid body with the inertia matrix in the initial position $I_0$. If the body-fixed coordinate axes rotate with matrix $R$ then its inertia matrix will be $I_B = R^T I_0 R$. Suppose the eigenvectors of $I_0$ are $\vec{v}_1, \vec{v}_2, \vec{v}_3$. Being principal axes, they are orthogonal. Then $R^T v_1$ is an eigenvector of $I_B$.
Question: if a torque $T_c$ parallel with $R^T v_1$ is applied at the center of mass, will the body have an angular velocity also parallel with $R^T v_1$? That is: will $\omega \times T_c = 0$ knowing that $I_B T_c = \lambda_1 T_c$?
I think the answer is yes, but I am unable to prove it ... I know that $T_c = \omega \times (I_B \omega) + I_B \dot{\omega}$. I tried to show $\omega \times T_c = 0$, but $\omega \times T_c = \omega (\omega^T I_B \omega) - I_B\omega (\omega^T \omega) + \omega \times I_B\dot{\omega}$. I do not know how to proceed ...
Of course, if $\omega$ is a principal axis, then $T_c = I_B \dot{\omega}$ is also an eigenvector of $I_B$, but I am interested if the converse is true ...
• What are your thoughts on this? – JMac Feb 21 '17 at 21:52
• I would appreciate a positive feedback, logic and math. Please if possible give that ... – C Marius Feb 21 '17 at 22:00
• Is $T_c$ in same coordinates as $I_0$ or $I_B$? – ja72 Feb 22 '17 at 15:29
• It is easy to show that when the rotation axis is along a principal direction then the angular momentum is parallel to said axis. And torque is the rate of change of angular momentum. – ja72 Feb 22 '17 at 19:44
• $T_C$ is in the same coordinate as $I_B$ ... I just realised that my notations are a little confusing ... Yes the torque is the rate of change of angular momentum, but what I know is that $\dot{L}$ is a principal axis. Does this always mean that $L$ is also a principal axis? If so, why? – C Marius Feb 22 '17 at 21:15
The equations of motion in the body frame are
$$\vec{T}_B = \mathrm{I}_B \dot{\vec{\omega}}_B + \vec{\omega}_B \times \mathrm{I}_B \vec{\omega}_B$$
or in component form
$$\begin{pmatrix} T_1 \\ T_2 \\ T_3 \end{pmatrix} = \begin{vmatrix} I_{1} & 0 & 0 \\ 0 & I_{2} & 0 \\ 0 & 0 & I_{3} \end{vmatrix} \begin{pmatrix} \dot{\omega}_1 \\ \dot{\omega}_2 \\ \dot{\omega}_3 \end{pmatrix} + \begin{vmatrix} 0 & -\omega_3 & \omega_2 \\ \omega_3 & 0 & -\omega_1 \\ -\omega_2 & \omega_1 & 0 \end{vmatrix} \begin{vmatrix} I_{1} & 0 & 0 \\ 0 & I_{2} & 0 \\ 0 & 0 & I_{3} \end{vmatrix} \begin{pmatrix} {\omega}_1 \\ {\omega}_2 \\ {\omega}_3 \end{pmatrix}$$
where $1$, $2$ and $3$ are the principal directions. I think your question is: what happens if a torque is applied along a principal direction? The problem is that only the angular acceleration depends on the torque, and the angular velocity is usually defined in direction by the kinematics. So you can't ask "If I apply a torque, what will the speed be?"
In any case suppose the general case where $T_1 \neq 0$ and $T_2 = T_3 = 0$
\begin{aligned} T_1 & = I_1 \dot{\omega}_1 + (I_3-I_2) \omega_2 \omega_3 \\ 0 & = I_2 \dot{\omega}_2 + (I_1-I_3) \omega_1 \omega_3 \\ 0 & = I_3 \dot{\omega}_3 + (I_2-I_1) \omega_1 \omega_2 \end{aligned}
So you are trying to understand what motions obey the above equations of motion. Assuming the general case of $I_1 \neq I_2 \neq I_3$ you see that the torque along $1$ does not affect the angular acceleration along $2$ and $3$. So a torque applied along a principal axis will accelerate the body along that principal axis only if the body is already rotating about the same axis. Only when $\omega_2 = \omega_3 =0$ do you decouple the system to
\begin{aligned} T_1 & = I_1 \dot{\omega}_1\\ \dot{\omega}_2 & = 0\\ \dot{\omega}_3 & = 0 \end{aligned}
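In that special case (starting with $\omega_2 = \omega_3 = 0$) the first equation integrates directly for a constant applied torque,
$$\omega_1(t) = \omega_1(0) + \frac{T_1}{I_1}\, t, \qquad \omega_2(t) = \omega_3(t) = 0,$$
so the angular velocity stays parallel to the same principal axis as the torque.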
• First of all: thank you so much for your answer! I will use the same meaning of $R$ as you did (I usually use $R^T$ in the sens you seem to use $R$), so: I think the rotation being about $\omega$ axis it is true that $\omega_c = R\cdot \omega_c = R^T \cdot \omega_c$ and the same for $\dot{\omega}_c$ but yes this will also yield $R T_B = R (\omega_B \times (I_B \omega_B) + I_B \dot{\omega}_B)$ and yes if $\omega_C = \omega_B$ is a principal axis then $T_B = I_B \dot{\omega}_B$. What I am not able to prove to my self, is your last proposition: "In addition if the torque ..." why is this? Why ? – C Marius Feb 22 '17 at 21:05
• If $\dot{\vec{\omega}}$ is along principal axis then $\vec{T} = \mathrm{I} \dot{\vec{\omega}} = \lambda \dot{\vec{\omega}}$ which is obviously parallel to $\dot{\vec{\omega}}$ – ja72 Feb 23 '17 at 2:02
• Also, it is common convention to represent the rotation matrix $\mathrm{R}$ as in $(\mbox{local}) \rightarrow (\mbox{global})$ transformation. That is, it transforms local vectors (body coordinates) to global vectors (world coordinates). – ja72 Feb 23 '17 at 2:05
• Yes, if assuming $\omega$ is a principal axis then $T$ is also a principal axis, that I wrote inside the question, but I am interested in the converse ... How can I prove that if $T$ is along a principal axis, then also $\omega$ is ... – C Marius Feb 23 '17 at 8:38
• Because if $T$ and $\dot{\omega}$ are related with a scalar value. By definition they are parallel. – ja72 Feb 23 '17 at 13:23
|
# EKsumic's Blog
# [SOLVED] How to use X.PagedList.Mvc.Core in .NET Core 3.1?
Background code:
public async Task<IActionResult> Index(int? page,string search)
{
var applicationDbContext = _context.Blogs.Include(b => b.Context).Include(b => b.Owner).Include(b => b.Type).OrderByDescending(x=>x.PublishTime);
if (search != null)
{
var products = applicationDbContext.Where(x => x.Title.Contains(search)); //returns IQueryable<Product> representing an unknown number of products. a thousand maybe?
var pageNumber = page ?? 1; // if no page was specified in the querystring, default to the first page (1)
var onePageOfProducts = products.ToPagedList(pageNumber, 10); // will only contain 10 products max because of the pageSize
ViewBag.OnePageOfProducts = onePageOfProducts;
return View(await onePageOfProducts.ToListAsync());
}
else
{
var products = applicationDbContext; //returns IQueryable<Product> representing an unknown number of products. a thousand maybe?
var pageNumber = page ?? 1; // if no page was specified in the querystring, default to the first page (1)
var onePageOfProducts = products.ToPagedList(pageNumber, 10); // will only contain 10 products max because of the pageSize
ViewBag.OnePageOfProducts = onePageOfProducts;
return View(await onePageOfProducts.ToListAsync());
}
}
Razor page code:
@model IEnumerable<BlogPlatform.Models.Blog>
@using X.PagedList.Mvc.Core;
@using X.PagedList;
<form action="~/Blogs/Index" method="post">
<p>
Title: <input type="text" name="search" />
<input type="submit" value="search" />
</p>
</form>
……
@foreach (var item in Model)
{
……
}
@Html.PagedListPager((IPagedList)ViewBag.OnePageOfProducts, page => Url.Action("Index", new { page }))
Let me first try to explain the background C# code:
This is a typical rewritten method.
Its initial version should look like this:
public async Task<IActionResult> Index()
{
return View(await _context.Users.ToListAsync());
}
Why does the rewritten code become so complicated?
Because it not only has a paging function, it also comes with a search function.
Every time Index is opened, the else branch runs first (when no search term is supplied). The first thing it does is get the data context, and then it checks whether there is a page parameter; if not, the result of the first page will be displayed by default.
Then, you may see the core part:
var onePageOfProducts = products.ToPagedList(pageNumber, 10); // will only contain 10 products max because of the pageSize
ViewBag.OnePageOfProducts = onePageOfProducts;
return View(await onePageOfProducts.ToListAsync());
It corresponds to the last line of the razor page I mentioned above:
@Html.PagedListPager((IPagedList)ViewBag.OnePageOfProducts, page => Url.Action("Index", new { page }))
So, we can extract the core, the part you should modify is:
@Html.PagedListPager({Your ViewBag}, page => Url.Action("Index", new {page}))
In this way, you have completed the paging function and the search function.
But there is another problem: paging of the search results is not yet wired up. I leave this task to you to figure out, because it is better to teach people how to fish.
|
# Difference Equations and Differential Equations

A differential equation is an equation involving a function and its derivatives; the goal is to find a function $f(x)$ that fulfills the equation. For example, $f(x) = -f''(x)$ is a differential equation, since it contains $f(x)$ and its second derivative, and so is
$$\frac{dz(x)}{dx}=z(x).$$
An ordinary differential equation involves a derivative over a single variable, usually in a univariate context, whereas a partial differential equation involves several (partial) derivatives over several variables, in a multivariate context. Ordinary differential equations form a subclass of partial differential equations, corresponding to functions of a single variable. Stochastic partial differential equations and nonlocal equations are, as of 2020, particularly widely studied extensions of the "PDE" notion.

Differential equations model continuous quantities, things which are happening all the time, while difference equations output discrete sequences of numbers (e.g. census results every 5 years). If the change happens incrementally rather than continuously, then differential equations have their shortcomings; instead we use difference equations, which are recursively defined sequences. Difference equations are classified in a similar manner to differential equations: the order of a difference equation is the highest order difference after it has been put into standard form. Many recurrence relations can be solved by rephrasing them as difference equations and then solving the difference equation, analogously to how one solves ordinary differential equations. Conversely, numerics of differential equations (ODE and PDE) can be viewed as approximating differential equations by suitable difference equations. Differential and difference equations play a key role in the solution of most queueing models, and the mathematical theory of difference equations is the subject of MSC class 39A.

Two standard solution facts for first order ordinary differential equations:

Theorem 2.4. If $F$ and $G$ are functions that are continuously differentiable throughout a simply connected region, then $F\,dx + G\,dy$ is exact if and only if $\partial G/\partial x = \partial F/\partial y$.

Separation of variables is done when the differential equation can be written in the form $dy/dx = f(y)g(x)$, where $f$ is a function of $y$ only and $g$ is a function of $x$ only. Taking an initial condition, rewrite this problem as $dy/f(y) = g(x)\,dx$ and then integrate on both sides.

A common strategy for changing an ordinary differential equation of second order into an integral equation is: first write the differential equation and its boundary conditions, then rewrite the equation in its normal form, with the highest derivative on one side and all other terms on the other side.

Difference and differential equations have been used since Newton's time for the understanding of the physical sciences, engineering, and the life and social sciences, and they remain active research areas: for example, Differential Equations (a translation of Differentsial'nye Uravneniya) is devoted exclusively to differential equations and the associated integral equations, and Difference and Differential Equations is a section of the open access peer-reviewed journal Mathematics dedicated to this subject and its applications.
|
anonymous one year ago WILL MEDAL!!!
First, get a common denominator and then subtract. $\frac{ 4 }{ 9 }-\frac{ 1 }{ 6 }=\frac{ 8 }{ 18 }-\frac{ 3 }{ 18 }=\frac{ 5 }{ 18 }$
|
# Order of a principal term
In Yurii Nesterov's Introductory Lectures on Convex Optimization, there is a bound for the total number of iterations for some process. See page 109:
$$\left[\frac{1}{\ln(2(1-\kappa))} \ln\frac{t_0-t^*}{(1-\kappa) \epsilon}+2\right]\cdot \left[1+\sqrt\frac{L}{\mu}\ln\frac{2(L-\mu)}{\kappa \mu}\right]\\+\sqrt\frac{L}{\mu}\cdot\ln\left(\frac{1}{\epsilon}\max_{1\leq i \leq m}\{f_0(x_0)-t_0; f_i(x_0)\}\right)\label{eq1}\tag{1}$$
Then, the principal term in the above estimate is of the order $$\ln\frac{t_0-t^*}{\epsilon} \sqrt\frac{L}{\mu} \ln\frac{L}{\mu} \label{eq2}\tag{2}$$
How did we arrive at statement $$\eqref{eq2}$$? Is that true the second term $$\sqrt\frac{L}{\mu}\cdot\ln\big(\frac{1}{\epsilon}\max_{1\leq i \leq m}\{f_0(x_0)-t_0; f_i(x_0)\}\big)$$ in $$\eqref{eq1}$$ is eliminated? I would appreciate any advice here.
• Can you say what variable goes to infinity in this asymptotic analysis? If it's $L/\mu$, then he's right to drop the last term because $\log(L/\mu)$ increase while $\log(\epsilon^{-1}\max\cdots)$ is independent of $L/\mu$ and therefore treated as a constant. The coefficient of $\sqrt{L/\mu}\log(L/\mu)$ doesn't look right to me, it should be the large expression $2+\cdots$ in the first pair of brackets, unless something else goes to infinity also. It would really help to have the full original expression with context. – Kirill Oct 12 '18 at 15:43
• It looks like only $\epsilon$ is changing. This is page 109: books.google.com/… – John Smith Oct 12 '18 at 15:55
• If only $\epsilon$ is changing, then it would be $\log\frac{t_0-t^*}{\epsilon} \sim -\log\epsilon$, so I'm not sure that's it. – Kirill Oct 12 '18 at 16:15
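For what it is worth, here is the rough bookkeeping that seems to be intended (my own sketch, treating $\kappa$ as a fixed constant and letting both $1/\epsilon$ and $L/\mu$ be large): $$\underbrace{\left[\frac{1}{\ln(2(1-\kappa))}\ln\frac{t_0-t^*}{(1-\kappa)\epsilon}+2\right]}_{O\left(\ln\frac{t_0-t^*}{\epsilon}\right)} \cdot \underbrace{\left[1+\sqrt{\frac{L}{\mu}}\ln\frac{2(L-\mu)}{\kappa\mu}\right]}_{O\left(\sqrt{\frac{L}{\mu}}\,\ln\frac{L}{\mu}\right)} \;+\; \underbrace{\sqrt{\frac{L}{\mu}}\,\ln\left(\frac{1}{\epsilon}\max_{1\leq i \leq m}\{\cdot\}\right)}_{O\left(\sqrt{\frac{L}{\mu}}\,\ln\frac{1}{\epsilon}\right)}$$ The product of the first two brackets is of order $\ln\frac{t_0-t^*}{\epsilon}\,\sqrt{\frac{L}{\mu}}\,\ln\frac{L}{\mu}$, which exceeds the last term by roughly a factor of $\ln\frac{L}{\mu}$; this would explain why the last term is absorbed and only $\eqref{eq2}$ is quoted as the principal term.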
|
Synopsis: Calculations for complex nuclei
An effective field theory on a lattice is applied successfully to nuclei as large as carbon-$12$.
Finding reliable computational tools for understanding the behavior of complex nuclei remains one of the central issues in theoretical nuclear physics. The traditional approaches to this problem are based on effective or approximate many-body theories. These approaches suffer from the lack of analytical techniques for solving for the complicated many-body interactions known to be present among nucleons. Direct numerical simulations of nuclei are a potential alternative to analytical methods. However, brute force methods for simulating nuclei are likely to fail without additional inputs from other theoretical approaches due to the enormous computational complexity of the problem.
Writing in Physical Review Letters, Evgeny Epelbaum, Hermann Krebs, Dean Lee, and Ulf Meißner, in a collaboration involving institutions in Germany and the US, combine analytical and numerical approaches to compute the binding energies of nuclei as large as carbon-$12$. Epelbaum et al. use an analytic scheme for formulating the effective many-body dynamics that systematically accounts for nuclear interactions of increasing complexity up to next-to-next-to-leading order and also incorporates isospin breaking and Coulomb effects. Furthermore, they are able to simulate the effective dynamical models numerically to predict various measurable quantities for complex nuclei. The computational scaling of this method suggests applications to even larger nuclei in the future. – Abhishek Agarwal
|
## Use videos to illustrate complicated conceptual knowledge
Description Most academic disciplines include highly conceptual or abstract concepts that are difficult for students to grasp. For instance, building a solid foundation of conceptual knowledge for students is critical in engineering education (Streveler et al., 2008). An incomplete conceptual understanding hinders the development of central engineering competencies and expertise. However, it is a challenge …
|
Kattis
# Circle Bounce
You are standing by the wall in a large, perfectly circular arena and you throw a tennis ball hard against some other part of the arena. After a given number of bounces, where does the tennis ball next strike the wall?
Map the arena as a unit circle centered at the origin, with you standing at the point $(-1, 0)$. You throw the ball with a direction given by a slope in the coordinate plane of a rational fraction $a/b$. Each bounce is perfect, losing no energy and bouncing from the wall with the same angle of reflection as the angle of incidence to a tangent to the wall at the point of impact.
After $n$ bounces, the ball strikes the circle again at some point $p$ which has rational coordinates that can be expressed as $(r/s, t/u)$. Output the fraction $r/s$ modulo the prime $M = 1{,}000{,}000{,}007$.
It can be shown that the $x$ coordinate can be expressed as an irreducible fraction $r/s$, where $r$ and $s$ are integers and $s \not\equiv 0 \pmod M$. Output the integer equal to $r\cdot s^{-1} \pmod M$. In other words, output an integer $k$ such that $0 \le k < M$ and $k\cdot s \equiv r \pmod M$.
For example, if we throw the ball with slope $1/2$ and it bounces once, it first strikes the wall at coordinates $(3/5, 4/5)$. After bouncing, it next strikes the wall at coordinates $(7/25, -24/25)$. The modular inverse of $25$ with respect to the prime $M$ is $280{,}000{,}002$, and the final result is thus $7\cdot 280{,}000{,}002 \pmod M = 960{,}000{,}007$.
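Not a full solution, but the modular-arithmetic step in this example is easy to check. The sketch below (in R, with helper names mulmod and powmod of my own choosing) recovers the inverse of $25$ by Fermat's little theorem and reproduces the answer of Sample 2; the addition-based multiplication keeps every intermediate value far below $2^{53}$, so plain double-precision arithmetic stays exact.
M <- 1000000007
# exact (a*b) mod m via repeated doubling/addition, so nothing overflows
mulmod <- function(a, b, m) {
  r <- 0; a <- a %% m
  while (b > 0) {
    if (b %% 2 == 1) r <- (r + a) %% m
    a <- (a + a) %% m
    b <- b %/% 2
  }
  r
}
# modular exponentiation by squaring, built on mulmod
powmod <- function(base, e, m) {
  r <- 1; base <- base %% m
  while (e > 0) {
    if (e %% 2 == 1) r <- mulmod(r, base, m)
    base <- mulmod(base, base, m)
    e <- e %/% 2
  }
  r
}
inv25 <- powmod(25, M - 2, M)  # Fermat: 25^(M-2) is the inverse of 25 mod M, i.e. 280000002
mulmod(7, inv25, M)            # 960000007, the output of Sample 2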
## Input
The single line of input will contain three integers $a$, $b$ ($1 \le a,b \le 10^9, \gcd (a,b)=1$) and $n$ ($1 \le n \le 10^{12}$), where $a/b$ is the slope of your throw, and $n$ is the number of bounces. Note that $a$ and $b$ are relatively prime.
## Output
Output a single integer value as described above.
Note that Sample 2 corresponds to the example in the problem description.
Sample Input 1
1 1 3
Sample Output 1
1000000006

Sample Input 2
1 2 1
Sample Output 2
960000007

Sample Input 3
11 63 44
Sample Output 3
22

Sample Input 4
163 713 980
Sample Output 4
0
|
Slutsky's theorem
In probability theory, Slutsky’s theorem extends some properties of algebraic operations on convergent sequences of real numbers to sequences of random variables.[1]
The theorem was named after Eugen Slutsky.[2] Slutsky’s theorem is also attributed to Harald Cramér.[3]
Statement
Let {Xn}, {Yn} be sequences of scalar/vector/matrix random elements.
If Xn converges in distribution to a random element X;
and Yn converges in probability to a constant c, then
• ${\displaystyle X_{n}+Y_{n}\ {\xrightarrow {d}}\ X+c;}$
• ${\displaystyle X_{n}Y_{n}\ {\xrightarrow {d}}\ cX;}$
• ${\displaystyle X_{n}/Y_{n}\ {\xrightarrow {d}}\ X/c,}$ provided that c is invertible,
where ${\displaystyle {\xrightarrow {d}}}$ denotes convergence in distribution.
Notes:
1. The requirement that Yn converges to a constant is important—if it were to converge to a non-degenerate random variable, the theorem would be no longer valid.
2. The theorem remains valid if we replace all convergences in distribution with convergences in probability (due to this property).
Proof
This theorem follows from the fact that if Xn converges in distribution to X and Yn converges in probability to a constant c, then the joint vector (Xn, Yn) converges in distribution to (X, c) (a standard result on joint convergence of random variables).
Next we apply the continuous mapping theorem, recognizing the functions g(x,y) = x + y, g(x,y) = xy, and g(x,y) = xy^{-1} as continuous (for the last function to be continuous, y has to be invertible).
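A quick numerical illustration (my own addition, in R; not part of the article): by the central limit theorem the standardized sample mean Xn below converges in distribution to N(0, 1), while Yn converges in probability to the constant 2, so Slutsky's theorem predicts that Xn + Yn is approximately N(2, 1) for large n.
set.seed(1)
n <- 10000; reps <- 5000
sim <- replicate(reps, {
  u  <- runif(n)                         # i.i.d. uniform(0,1) sample
  Xn <- sqrt(12 * n) * (mean(u) - 0.5)   # standardized mean, approximately N(0,1)
  Yn <- 2 + rnorm(1, 0, 1 / sqrt(n))     # converges in probability to 2
  Xn + Yn
})
c(mean(sim), sd(sim))  # should be close to 2 and 1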
References
1. ^ Goldberger, Arthur S. (1964). Econometric Theory. New York: Wiley. pp. 117–120.
2. ^ Slutsky, E. (1925). "Über stochastische Asymptoten und Grenzwerte". Metron (in German). 5 (3): 3–89. JFM 51.0380.03.
3. ^ Slutsky's theorem is also called Cramér’s theorem according to Remark 11.1 (page 249) of Gut, Allan (2005). Probability: a graduate course. Springer-Verlag. ISBN 0-387-22833-0.
|
# 1 Introduction
A feeding study served to authorize the MON863 maize, a genetically modified organism (GMO) developed by the Monsanto company, by the European and American authorities. It included male and female rats. For each sex, one group was fed with GMOs in the equilibrated diet, and one with the closest control regimen without GMOs.
We are interested in the weight of the rats after a period of 14 weeks.
ratWeight <- read.csv("ratWeight.csv")
data <- subset(ratWeight, week==14)
head(data)
## id week weight regime gender dosage
## 14 B38602 14 514.9 Control Male 11%
## 28 B38603 14 505.0 Control Male 11%
## 42 B38604 14 545.1 Control Male 11%
## 56 B38605 14 596.6 Control Male 11%
## 70 B38606 14 516.8 Control Male 11%
## 84 B38607 14 518.1 Control Male 11%
The data per gender and regime is displayed below
library(ggplot2)
theme_set(theme_bw())
ggplot(data=data) + geom_point(aes(x=weight,y=as.numeric(regime),colour=regime, shape=gender)) +
ylab(NULL) + scale_y_continuous(breaks=NULL, limits=c(-5,10)) + xlab("weight (g)")
The following table provides the mean weight in each group
aggregate(weight ~ regime+gender, data=data, FUN= "mean" )
## regime gender weight
## 1 Control Female 278.2825
## 2 GMO Female 287.3225
## 3 Control Male 513.7077
## 4 GMO Male 498.7359
Our main objective is to detect some possible effect of the diet on the weight. More precisely, we would like to know if the differences observed in the data are due to random fluctuations in sampling or to differences in diet.
# 2 Student’s t-test
## 2.1 One sample t-test
Before considering the problem of comparing two groups, let us start looking at the weight of the male rats only:
ggplot(data=subset(data,gender=="Male")) + geom_point(aes(x=weight,y=0), colour="red") +
ylab(NULL) + scale_y_continuous(breaks=NULL) + xlab("weight (g)")
Let $$x_1, x_2, \ldots, x_n$$ be the weights of the $$n$$ male rats. We will assume that the $$x_i$$’s are independent and normally distributed with mean $$\mu$$ and variance $$\sigma^2$$:
$x_i \iid {\cal N}(\mu \ , \ \sigma^2)$
### 2.1.1 One sided test
We want to test
$H_0: \ \mu \leq \mu_0" \quad \text{versus} \quad H_1: \ \mu > \mu_0"$
Function t.test can be used for performing this test:
x <- data[datagender=="Male","weight"] mu0 <- 500 t.test(x, alternative="greater", mu=mu0) ## ## One Sample t-test ## ## data: x ## t = 1.2708, df = 77, p-value = 0.1038 ## alternative hypothesis: true mean is greater than 500 ## 95 percent confidence interval: ## 498.0706 Inf ## sample estimates: ## mean of x ## 506.2218 Let us see what these outputs are and how they are computed. Let $$\bar{x} = n^{-1}\sum_{i=1}^n x_i$$ be the empirical mean of the data. $\bar{x} \sim {\cal N}(\mu \ , \ \frac{\sigma^2}{n})$ Then, \begin{aligned} \frac{\sqrt{n}(\bar{x} - \mu)}{\sigma} \ & \sim \ {\cal N}(0 \ , \ 1) \\ \frac{\sqrt{n}(\bar{x} - \mu)}{s} \ & \sim \ t_{n-1} \end{aligned} where $s^2 = \frac{1}{n-1}\sum_{i=1}^n (x_i - \bar{x})^2$ is the empirical variance of $$(x_i)$$. The statistic used for the test should be a function of the data whose distribution under $$H_0$$ is known, and whose expected behavior under $$H_1$$ allows one to define a rejection region (or critical region) for the null hypothesis. Here, the test statistic is $T_{\rm stat} = \frac{(\bar{x} - \mu_0)}{s/\sqrt{n}}$ which follows a $$t$$-distribution with $$n-1$$ degrees of freedom when $$\mu=\mu_0$$. $$\bar{x}$$ is expected to be less than or equal to $$\mu_0$$ under the null hypothesis, and greater than $$\mu_0$$ under the alternative hypothesis, Hence, $$T_{\rm stat}$$ is expected to be less than or equal to 0 under $$H_0$$ and greater than 0 under $$H_1$$. We then reject the null hypothesis $$H_0$$ if $$T_{\rm stat}$$ is greater than some threshold $$q$$. Such decision rule may lead to two kinds of error: • The type I error is the incorrect rejection of null hypothesis when it is true, • The type II error is the failure to reject the null hypothesis when it is false. The type I error rate or significance level is therefore the probability of rejecting the null hypothesis given that it is true. In our case, for a given significance level $$\alpha$$, we will reject $$H_0$$ if $$T_{\rm stat} > qt_{1-\alpha,n-1}$$, where $$qt_{1-\alpha,n-1}$$ is the quantile of order $$1-\alpha$$ for a $$t$$-distribution with $$n-1$$ degrees of freedom. Indeed, by definition, \begin{aligned} \prob{\text{reject } H_0 \ | \ H_0 \ \text{true}} &= {\mathbb P}(T_{\rm stat} > qt_{1-\alpha,n-1} \ | \ \mu \leq \mu_0) \\ & \leq {\mathbb P}(T_{\rm stat} > qt_{1-\alpha,n-1} \ | \ \mu = \mu_0) \\ & \leq {\mathbb P}(t_{n-1} > qt_{1-\alpha,n-1}) \\ & \leq \alpha \end{aligned} alpha <- 0.05 x.mean <- mean(x) x.sd <- sd(x) n <- length(x) df <- n-1 t.stat <- sqrt(n)*(x.mean-mu0)/x.sd c(t.stat,qt(1-alpha, df)) ## [1] 1.270806 1.664885 We therefore don’t reject $$H_0$$ in our example since $$T_{\rm stat} < qt_{1-\alpha,n-1}$$. We can equivalently compute the significance level for which the test becomes significant. This value is called the p-value: \begin{aligned} p_{\rm value} & = \max{\mathbb P}_{H_0}(T_{\rm stat} > T_{\rm stat}^{\rm obs}) \\ & = {\mathbb P}(T_{\rm stat} > T_{\rm stat}^{\rm obs} \ | \ \mu=\mu_0) \\ &= 1 - \prob{t_{n-1} \leq T_{\rm stat}^{\rm obs}} \end{aligned} Now, $$T_{\rm stat} > qt_{1-\alpha,n-1}$$ under $$H_0$$ if and only if $$\prob{t_{n-1} \leq T_{\rm stat}^{\rm obs}} \geq 1-\alpha$$. Then, the test is significant at the level $$\alpha$$ if and only if $$p_{\rm value}\leq \alpha$$.
( p.value <- 1 - pt(t.stat,df) )
## [1] 0.1038119
Here, we would reject $$H_0$$ for any significance level $$\alpha \geq$$ 0.104.
Important: The fact that the test is not significant at the level $$\alpha$$ does not allow us to conclude that $$H_0$$ is true, i.e. that $$\mu$$ is less than or equal to 500. We can only say that the data does not allow us to conclude that $$\mu>500$$.
Imagine now that we want to test if $$\mu \geq 515$$ for instance. The alternative here is $$H_1: \ \mu < 515$$.
mu0 <- 515
t.test(x, alternative="less", mu=mu0)
##
## One Sample t-test
##
## data: x
## t = -1.793, df = 77, p-value = 0.03845
## alternative hypothesis: true mean is less than 515
## 95 percent confidence interval:
## -Inf 514.373
## sample estimates:
## mean of x
## 506.2218
More generally, we may want to test $H_0: \ \mu \geq \mu_0" \quad \text{versus} \quad H_1: \ \mu < \mu_0"$ We still use the statistic $$T_{\rm stat} = \sqrt{n}(\bar{x}-\mu_0)/s$$ for this test, but the rejection region is now the area that lies to the left of the critical value $$qt_{\alpha,n-1}$$ since
\begin{aligned} \prob{\text{reject } H_0 \ | \ H_0 \ \text{true}} &= {\mathbb P}(T_{\rm stat} < qt_{\alpha,n-1} \ | \ \mu \geq \mu_0) \\ & \leq {\mathbb P}(T_{\rm stat} < qt_{\alpha,n-1} \ | \ \mu = \mu_0) \\ & \leq \alpha \end{aligned}
t.stat <- sqrt(n)*(x.mean-mu0)/x.sd
p.value <- pt(t.stat,df)
c(t.stat, df, p.value)
## [1] -1.79295428 77.00000000 0.03845364
Here, the p-value is less than $$\alpha=0.05$$: we then reject the null hypothesis at the $$5\%$$ level and conclude that $$\mu < 515$$.
### 2.1.2 Two sided test
A two sided test (or two tailed test) can be used to test if $$\mu=500$$ for instance
mu0 = 500
t.test(x, alternative="two.sided", mu=mu0)
##
## One Sample t-test
##
## data: x
## t = 1.2708, df = 77, p-value = 0.2076
## alternative hypothesis: true mean is not equal to 500
## 95 percent confidence interval:
## 496.4727 515.9709
## sample estimates:
## mean of x
## 506.2218
More generally, we can test $H_0: \ \mu = \mu_0" \quad \text{versus} \quad H_1: \ \mu \neq \mu_0"$ The test also uses the statistic $$T_{\rm stat} = \sqrt{n}(\bar{x}-\mu_0)/s$$, but the rejection region has now two parts: we reject $$H_0$$ if $$|T_{\rm stat}| > qt_{1-\alpha/2}$$. Indeed,
\begin{aligned} \prob{\text{reject } H_0 \ | \ H_0 \ \text{true}} &= {\mathbb P}(|T_{\rm stat}| > qt_{1 -\frac{\alpha}{2},n-1} \ | \ \mu = \mu_0) \\ & = {\mathbb P}(T_{\rm stat} < qt_{\frac{\alpha}{2},n-1} \ | \ \mu = \mu_0) + {\mathbb P}(T_{\rm stat} > qt_{1-\frac{\alpha}{2},n-1} \ | \ \mu = \mu_0)\\ &= \prob{t_{n-1} \leq qt_{\frac{\alpha}{2},n-1}} + \prob{t_{n-1} \geq qt_{1-\frac{\alpha}{2},n-1}} \\ &= \frac{\alpha}{2} + \frac{\alpha}{2} \\ & = \alpha \end{aligned}
The p-value of the test is now
\begin{aligned} p_{\rm value} & = {\mathbb P}_{H_0}(|T_{\rm stat}| > |T_{\rm stat}^{\rm obs}|) \\ & = {\mathbb P}_{H_0}(T_{\rm stat} < -|T_{\rm stat}^{\rm obs}|) + {\mathbb P}_{H_0}(T_{\rm stat} > |T_{\rm stat}^{\rm obs}|)\\ &= \prob{t_{n-1} \leq -|T_{\rm stat}^{\rm obs}|} + \prob{t_{n-1} \geq |T_{\rm stat}^{\rm obs}|} \\ &= 2 \,\prob{t_{n-1} \leq -|T_{\rm stat}^{\rm obs}|} \end{aligned}
t.stat <- sqrt(n)*(x.mean-mu0)/x.sd
p.value <- 2*pt(-abs(t.stat),df)
c(t.stat, df, p.value)
## [1] 1.2708058 77.0000000 0.2076238
Here, $$p_{\rm value}=$$ 0.208. Then, for any significance level less than 0.208, we cannot reject the hypothesis that $$\mu = 500$$.
### 2.1.3 Confidence interval for the mean
We have just seen that the data doesn’t allow us to reject the hypothesis that $$\mu = 500$$. But we would come to the same conclusion with other values of $$\mu_0$$. In particular, we will never reject the hypothesis that $$\mu = \bar{x}$$:
t.test(x, mu=x.mean, conf.level=1-alpha)$p.value
## [1] 1
For a given significance level ($$\alpha = 0.05$$ for instance), we will not reject the null hypothesis for values of $$\mu_0$$ close enough to $$\bar{x}$$.
pv.510 <- t.test(x, mu=510, conf.level=1-alpha)$p.value
pv.497 <- t.test(x, mu=497, conf.level=1-alpha)$p.value
c(pv.510, pv.497)
## [1] 0.44265350 0.06340045
On the other hand, we will reject $$H_0$$ for values of $$\mu_0$$ far enough from $$\bar{x}$$:
pv.520 <- t.test(x, mu=520, conf.level=1-alpha)$p.value
pv.490 <- t.test(x, mu=490, conf.level=1-alpha)$p.value
c(pv.520, pv.490)
## [1] 0.006204188 0.001406681
There exist two values of $$\mu_0$$ for which the decision is borderline
pv1 <- t.test(x, mu=495.892, conf.level=1-alpha)$p.value
pv2 <- t.test(x, mu=515.5443, conf.level=1-alpha)$p.value
c(pv1,pv2)
## [1] 0.03811820 0.06062926
In fact, for a given $$\alpha$$, these two values $$\mu_{\alpha,{\rm lower}}$$ and $$\mu_{\alpha,{\rm upper}}$$ define a confidence interval for $$\mu$$: we are "confident" at the level $$1-\alpha$$ that any value between $$\mu_{\alpha,{\rm lower}}$$ and $$\mu_{\alpha,{\rm upper}}$$ is a possible value for $$\mu$$.
mu <- seq(490,520,by=0.25)
t.stat <- (x.mean-mu)/x.sd*sqrt(n)
pval <- pmin(pt(-t.stat,df) + (1- pt(t.stat,df)),pt(t.stat,df) + (1- pt(-t.stat,df)))
dd <- data.frame(mu=mu, v.pval=pval)
CI <- x.mean + x.sd/sqrt(n)*qt(c(alpha/2,1-alpha/2), df)
ggplot(data=dd) + geom_line(aes(x=mu,y=pval)) + geom_vline(xintercept=x.mean,colour="red", linetype=2)+
geom_hline(yintercept=alpha,colour="green", linetype=2)+ geom_vline(xintercept=CI,colour="red") +
scale_x_continuous(breaks=round(c(490,500,510,520,CI,x.mean),2))
By construction, \begin{aligned} 1-\alpha &= \prob{ qt_{\frac{\alpha}{2},n-1} < T_{\rm stat} < qt_{1-\frac{\alpha}{2},n-1} } \\ &= \prob{ qt_{\frac{\alpha}{2},n-1} < \frac{\bar{x}-\mu}{s/\sqrt{n}} < qt_{1-\frac{\alpha}{2},n-1} } \\ &= \prob{ \bar{x} +\frac{s}{\sqrt{n}}qt_{\frac{\alpha}{2},n-1} < \mu < \bar{x} +\frac{s}{\sqrt{n}}qt_{1-\frac{\alpha}{2},n-1} } \end{aligned}
The confidence interval of level $$1-\alpha$$ for $$\mu$$ is therefore the interval ${\rm CI}_{1-\alpha} = [\bar{x} +\frac{s}{\sqrt{n}}qt_{\frac{\alpha}{2},n-1} \ \ , \ \ \bar{x} +\frac{s}{\sqrt{n}}qt_{1-\frac{\alpha}{2},n-1}]$
(CI <- x.mean + x.sd/sqrt(n)*qt(c(alpha/2,1-alpha/2), df))
## [1] 496.4727 515.9709
Remark 1: The fact that $$\prob{ \mu \in {\rm CI}_{1-\alpha}} = 1- \alpha$$ does not mean that $$\mu$$ is a random variable! It is the bounds of the confidence interval that are random, because they are functions of the data. A confidence interval of level $$1-\alpha$$ should be interpreted like this: imagine that we repeat the same experiment many times, with the same experimental conditions, and that we build a confidence interval for $$\mu$$ for each of these replicates. Then, the true mean $$\mu$$ will lie in the confidence interval $$(1-\alpha)100\%$$ of the time.
Let us check this property with a Monte Carlo simulation.
L <- 100000
n <- 100
mu <- 500
sd <- 40
R <- vector(length=L)
for (l in (1:L)) {
x <- rnorm(n,mu,sd)
ci.l <- mean(x) + sd(x)/sqrt(n)*qt(c(alpha/2, 1-alpha/2),n-1)
R[l] <- (mu > ci.l[1] && mu < ci.l[2])
}
mean(R)
## [1] 0.94804
Remark 2: The decision rule to reject or not the null hypothesis can be derived from the confidence interval. Indeed, the confidence interval plays the role of an acceptance region: we reject $$H_0$$ if $$\mu_0$$ does not belong to $${\rm CI}_{1-\alpha}$$. In the case of a one sided test, the output of t.test called "confidence interval" is indeed an acceptance region for $$\mu$$, but not a confidence interval (we cannot seriously consider that $$\mu$$ can take any value above 500, for instance).
rbind( c(x.mean + x.sd/sqrt(n)*qt(alpha,df) , Inf), c(-Inf, x.mean + x.sd/sqrt(n)*qt(1-alpha,df)))
## [,1] [,2]
## [1,] 499.0229 Inf
## [2,] -Inf 513.4207
## 2.2 Two samples t-test
### 2.2.1 What should we test?
Let us now compare the weights of the male and female rats.
y <- data[data$gender=="Female" ,"weight"]
dmean <- data.frame(x=c(mean(x),mean(y)),gender=c("Male","Female"))
ggplot(data=subset(data,regime=="Control")) + geom_point(aes(x=weight,y=0,colour=gender)) +
geom_point(data=dmean, aes(x,y=0,colour=gender), size=4) +
ylab(NULL) + scale_y_continuous(breaks=NULL) + xlab("weight (g)")
Looking at the data is more than enough to conclude that the mean weight of the males is (much) larger than the mean weight of the females. Computing a $$p$$-value here is of little interest.
t.test(x, y)
##
## Welch Two Sample t-test
##
## data: x and y
## t = 46.912, df = 166.81, p-value < 2.2e-16
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## 207.0330 225.2244
## sample estimates:
## mean of x mean of y
## 498.9312 282.8025
Let us see now what happens if we compare the control and GMO groups for the male rats.
x <- data[data$gender=="Male" & data$regime=="Control","weight"]
y <- data[data$gender=="Male" & data$regime=="GMO","weight"]
dmean <- data.frame(x=c(mean(x),mean(y)),regime=c("Control","GMO"))
ggplot(data=data[datagender=="Male",]) + geom_point(aes(x=weight,y=as.numeric(regime),colour=regime)) + geom_point(data=dmean, aes(x,y=as.numeric(regime),colour=regime), size=4) + ylab(NULL) + scale_y_continuous(breaks=NULL, limits=c(-6,9)) + xlab("weight (g)") We observe a difference between the two empirical means (the mean weight after 14 weeks is greater in the control group), but we cannot say how significant this difference is by simply looking at the data. Performing a statistical test is now necessary. Let $$x_{1}, x_{2}, \ldots, x_{n_x}$$ be the weights of the $$n_x$$ male rats of the control group and $$y_{1}, y_{2}, \ldots, y_{n_x}$$ the weights of the $$n_y$$ male rats of the GMO group. We will assume normal distributions for both $$(x_{i})$$ and $$(y_{i})$$: $x_{i} \iid {\cal N}(\mu_x \ , \ \sigma^2_x) \quad ; \quad y_{i} \iid {\cal N}(\mu_y \ , \ \sigma^2_y)$ We want to test $H_0: \ \mu_x = \mu_y" \quad \text{versus} \quad H_1: \ \mu_x \neq \mu_y"$ ### 2.2.2 Assuming equal variances We can use the function t.test assuming first equal variances ($$\sigma^2_x=\sigma_y^2$$) alpha <- 0.05 t.test(x, y, conf.level=1-alpha, var.equal=TRUE) ## ## Two Sample t-test ## ## data: x and y ## t = 1.5426, df = 76, p-value = 0.1271 ## alternative hypothesis: true difference in means is not equal to 0 ## 95 percent confidence interval: ## -4.358031 34.301621 ## sample estimates: ## mean of x mean of y ## 513.7077 498.7359 The test statistic is $T_{\rm stat} = \frac{\bar{x} - \bar{y}}{s_p \sqrt{\frac{1}{n_x}+\frac{1}{n_y}}}$ where $$s_p^2$$ is the pooled variance: $s_p^2 = \frac{1}{n_x+n_y-2} \left(\sum_{i=1}^{n_x} (x_{i}-\bar{x})^2 + \sum_{i=1}^{n_y} (y_{i}-\bar{y})^2 \right)$ Under the null hypothesis, $$T_{\rm stat}$$ follows a $$t$$-distribution with $$n_x+n_y-2$$ degree of freedom. The $$p$$-value is therefore \begin{aligned} p_{\rm value} & = {\mathbb P}_{H_0}(|T_{\rm stat}| > |T_{\rm stat}^{\rm obs}|) \\ &= \prob{t_{n_x+n_y-2} \leq -T_{\rm stat}^{\rm obs}} + 1 - \prob{t_{n_x+n_y-2} \leq T_{\rm stat}^{\rm obs}} \end{aligned} nx <- length(x) ny <- length(y) x.mean <- mean(x) y.mean <- mean(y) x.sc <- sum((x-x.mean)^2) y.sc <- sum((y-y.mean)^2) xy.sd <- sqrt((x.sc+y.sc)/(nx+ny-2)) t.stat <- (x.mean-y.mean)/xy.sd/sqrt(1/nx+1/ny) df <- nx + ny -2 p.value <- pt(-t.stat,df) + (1- pt(t.stat,df)) c(t.stat, df, p.value) ## [1] 1.5426375 76.0000000 0.1270726 The confidence interval for the mean difference $$\mu_x-\mu_y$$ is computed as ${\rm CI}_{1-\alpha} = [\bar{x} - \bar{y} +s_p \sqrt{\frac{1}{n_x}+\frac{1}{n_y}}qt_{\frac{\alpha}{2},n_x+n_y-2} \ \ , \ \ \bar{x} - \bar{y} +s_p \sqrt{\frac{1}{n_x}+\frac{1}{n_y}}qt_{1-\frac{\alpha}{2},n_x+n_y-2} ]$ x.mean-y.mean + xy.sd*sqrt(1/nx+1/ny)*qt(c(alpha/2,1-alpha/2),df) ## [1] -4.358031 34.301621 ### 2.2.3 Assuming different variances Assuming equal variances for the two groups may be disputable. 
aggregate(data$weight ~ data$regime, FUN= "sd" )
## data$regime data$weight
## 1 Control 123.0689
## 2 GMO 111.8559
We can then use the t.test function with different variances (which is the default)
t.test(x, y, conf.level=1-alpha)
##
## Welch Two Sample t-test
##
## data: x and y
## t = 1.5426, df = 75.976, p-value = 0.1271
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -4.358129 34.301719
## sample estimates:
## mean of x mean of y
## 513.7077 498.7359
The Welch (or Satterthwaite) approximation to the degrees of freedom is used instead of $$n_x+n_y-2=$$ 76: $\df_W = \frac{(c_x + c_y)^2}{{c_x^2}/{(n_x-1)} + {c_y^2}/{(n_y-1)}}$ where $$c_x = \sum (x_{i}-\bar{x})^2/(n_x(n_x-1))$$ and $$c_y = \sum (y_{i}-\bar{y})^2/(n_y(n_y-1))$$.
Furthermore, unlike in Student’s t-test with equal variances, the denominator is not based on a pooled variance estimate: $T_{\rm stat} = \frac{\bar{x} - \bar{y}}{ \sqrt{{s_x^2}/{n_x}+{s_y^2}/{n_y}}}$ where $$s_x^2$$ and $$s_y^2$$ are the empirical variances of $$(x_i)$$ and $$(y_i)$$: $s_x^2 = \frac{1}{n_x-1}\sum_{i=1}^{n_x} (x_{i}-\bar{x})^2 \quad ; \quad s_y^2 = \frac{1}{n_y-1}\sum_{i=1}^{n_y} (y_{i}-\bar{y})^2$
sbar.xy <- sqrt(var(x)/nx+var(y)/ny)
t.stat <- (x.mean-y.mean)/sbar.xy
cx <- x.sc/(nx-1)/nx
cy <- y.sc/(ny-1)/ny
dfw <- (cx + cy)^2 / (cx^2/(nx-1) + cy^2/(ny-1))
p.value <- pt(-t.stat,dfw) + (1- pt(t.stat,dfw))
c(t.stat, dfw, p.value)
## [1] 1.5426375 75.9760868 0.1270739
The confidence interval for $$\mu_x-\mu_y$$ is now computed as ${\rm CI}_{1-\alpha} = [\bar{x} - \bar{y} +\sqrt{\frac{s_x^2}{n_x}+\frac{s_y^2}{n_y}} \ qt_{\frac{\alpha}{2},\df_W} \ \ , \ \ \bar{x} - \bar{y} +\sqrt{\frac{s_x^2}{n_x}+\frac{s_y^2}{n_y}} \ qt_{1-\frac{\alpha}{2},\df_W} ]$
x.mean-y.mean + sbar.xy*qt(c(alpha/2,1-alpha/2),dfw)
## [1] -4.358129 34.301719
## 2.3 Power of a t-test
Until now, we have demonstrated that the experimental data does not highlight any significant difference in weight between the control group and the GMO group. Of course, that does not mean that there is no difference between the two groups. Indeed, absence of evidence is not evidence of absence. In fact, no experimental study would be able to demonstrate the absence of effect of the diet on the weight.
Now, the appropriate question is rather to evaluate what the experimental study can detect. If feeding a population of rats with GMOs has a significant biological effect on the weight, can we ensure with a reasonable level of confidence that our statistical test will reject the null hypothesis and conclude that there is indeed a difference in weight between the two groups?
A power analysis allows us to determine the sample size required to detect an effect of a given size with a given degree of confidence. Conversely, it allows us to determine the probability of detecting an effect of a given size with a given level of confidence, under sample size constraints.
For a given $$\delta \in {\mathbb R}$$, let $$\beta(\delta)$$ be the type II error rate, i.e. the probability to fail rejecting $$H_0$$ when $$\mu_x-\mu_y = \delta$$, with $$\delta\neq 0$$. The power of the test is the probability to reject the null hypothesis when it is false.
It is also a function of $$\delta =\mu_x-\mu_y$$ defined as \begin{aligned} \eta(\delta) &= 1 - \beta(\delta) \\ &= \prob{\text{reject } H_0 \ | \ \mu_x-\mu_y=\delta } \end{aligned}
Remember that, for a two sided test, we reject the null hypothesis when $$|T_{\rm stat}| > qt_{1-\alpha/2, \df}$$, where $$\df$$ is the appropriate degree of freedom. On the other hand, $${(\bar{x} - \bar{y} - \delta)}/{s_{xy}}$$, where $$s_{xy} = \sqrt{{s_x^2}/{n_x}+{s_y^2}/{n_y}}$$, follows a $$t$$-distribution with $$\df$$ degrees of freedom. Thus, \begin{aligned} \eta(\delta) &= 1 - {\mathbb P}(qt_{\frac{\alpha}{2},\df} < T_{\rm stat} < qt_{1 -\frac{\alpha}{2},\df} \ | \ \mu_x-\mu_y=\delta) \\ & = 1- {\mathbb P}(qt_{\frac{\alpha}{2},\df} < \frac{\bar{x} - \bar{y}}{s_{xy}} < qt_{1-\frac{\alpha}{2},\df} \ | \ \mu_x-\mu_y=\delta) \\ &= 1- {\mathbb P}(qt_{\frac{\alpha}{2},\df} - \frac{\delta}{s_{xy}} < \frac{\bar{x} - \bar{y} - \delta}{s_{xy}} < qt_{1-\frac{\alpha}{2},\df} - \frac{\delta}{s_{xy}} \ | \ \mu_x-\mu_y=\delta) \\ &= 1 - Ft_{\df}(qt_{1-\frac{\alpha}{2},\df} - \frac{\delta}{s_{xy}}) + Ft_{\df}(qt_{\frac{\alpha}{2},\df} - \frac{\delta}{s_{xy}}) \end{aligned}
As an example, let us compute the probability to detect a difference in weight of 10g with two groups of 80 rats each and assuming that the standard deviation is 30g in each group.
alpha=0.05
nx.new <- ny.new <- 80
delta.mu <- 10
x.sd <- 30
df <- nx.new+ny.new-2
dt <- delta.mu/x.sd/sqrt(1/nx.new+1/ny.new)
1-pt(qt(1-alpha/2,df)-dt,df) + pt(qt(alpha/2,df)-dt,df)
## [1] 0.5528906
The function pwr.t.test allows us to compute this power:
library(pwr)
pwr.t.test(n=nx.new, d=delta.mu/x.sd, type="two.sample", alternative="two.sided", sig.level=alpha)
##
## Two-sample t test power calculation
##
## n = 80
## d = 0.3333333
## sig.level = 0.05
## power = 0.5538758
## alternative = two.sided
##
## NOTE: n is number in *each* group
Let us perform a Monte Carlo simulation to check this result and better understand what it means. Imagine that the "true" difference in weight is $$\delta=10$$g. Then, if we could repeat the same experiment a (very) large number of times, we would reject the null hypothesis in $$55\%$$ of cases.
L <- 100000
mux <- 500
muy <- mux + delta.mu
Rt <- vector(length=L)
for (l in (1:L)) {
x.sim <- rnorm(nx.new,mux,x.sd)
y.sim <- rnorm(ny.new,muy,x.sd)
Rt[l] <- t.test(x.sim, y.sim, alternative="two.sided")$p.value < alpha
}
mean(Rt)
## [1] 0.55311
We may consider this probability as too small. If our objective is a power of 80% at least, with the same significance level, we need to increase the sample size.
pwr.t.test(power=0.8, d=delta.mu/x.sd, sig.level=alpha)
##
## Two-sample t test power calculation
##
## n = 142.2462
## d = 0.3333333
## sig.level = 0.05
## power = 0.8
## alternative = two.sided
##
## NOTE: n is number in *each* group
Indeed, we see that $$n\geq$$ 143 animals per group are required in order to reach a power of 80%.
nx.new <- ny.new <- ceiling(pwr.t.test(power=0.8, d=delta.mu/x.sd, sig.level=alpha)$n) df <- nx.new+ny.new-2 dt <- delta.mu/x.sd/sqrt(1/nx.new+1/ny.new) 1-pt(qt(1-alpha/2,df)-dt,df) + pt(qt(alpha/2,df)-dt,df) ## [1] 0.8020466 An alternative for increasing the power consists in increasing the type I error rate pwr.t.test(power=0.8, d=delta.mu/x.sd, n=80, sig.level=NULL) ## ## Two-sample t test power calculation ## ## n = 80 ## d = 0.3333333 ## sig.level = 0.2067337 ## power = 0.8 ## alternative = two.sided ## ## NOTE: n is number in *each* group If we accept a significance level of about 20%, then we will be less demanding for rejecting $$H_0$$: we will reject the null hypothesis when $$|T_{\rm stat}|>qt_{0.9,158}$$= 1.29, instead of $$|T_{\rm stat}|>qt_{0.975,158}$$= 1.98. This strategy will therefore increase the power, but also the type I error rate. # 3 Mann-Whitney-Wilcoxon test The Mann-Whitney-Wilcoxon test, or Wilcoxon rank sum test, can be used to test if the weight in one of the two groups tends to be greater than in the other group. The Mann-Whitney-Wilcoxon test is a non parametric test: we don’t make the assumption that the distribution of the data belongs to a family of parametric ditributions. The logic behind the Wilcoxon test is quite simple. The data are ranked to produce two rank totals, one for each group. If there is a systematic difference between the two groups, then most of the high ranks will belong to one group and most of the low ranks will belong to the other one. As a result, the rank totals will be quite different and one of the rank totals will be quite small. On the other hand, if the two groups are similar, then high and low ranks will be distributed fairly evenly between the two groups and the rank totals will be fairly similar. In our example, we don’t clearly see any of the two groups on the right or on the left of the scatter plot ggplot(data=data[data$gender=="Male",]) + geom_point(aes(x=weight,y=as.numeric(regime),colour=regime)) +
ylab(NULL) + scale_y_continuous(breaks=NULL, limits=c(-6,9)) + xlab("weight (g)")
We can check that the Mann-Whitney-Wilcoxon test is not significant (at the level 0.05)
wilcox.test(x, y, alternative="two.sided", conf.level=1-alpha)
## Warning in wilcox.test.default(x, y, alternative = "two.sided", conf.level
## = 1 - : cannot compute exact p-value with ties
##
## Wilcoxon rank sum test with continuity correction
##
## data: x and y
## W = 904.5, p-value = 0.1516
## alternative hypothesis: true location shift is not equal to 0
The test statistic $$W_x$$ is computed as follows:
• Assign numeric ranks to all the observations, beginning with 1 for the smallest value. Where there are groups of tied values, assign a rank equal to the midpoint of unadjusted rankings
• define $$R_x$$ (resp. $$R_y$$) as the sum of the ranks for the observations which came from sample $$x$$ (resp. $$y$$)
• Let $$W_x = R_x - {n_x(n_x+1)}/{2}$$ and $$W_y = R_y - {n_y(n_y+1)}/{2}$$
nx <- length(x)
ny <- length(y)
Wx=sum(rank(c(x,y))[1:nx]) - nx*(nx+1)/2
Wy=sum(rank(c(y,x))[1:ny]) - ny*(ny+1)/2
c(Wx, Wy)
## [1] 904.5 616.5
For a two sided tests and assuming that $$W_x^{\rm obs}>W_y^{\rm obs}$$, the $$p$$-value is $p_{\rm value} = \prob{W_y \leq W_y^{\rm obs}} + \prob{W_x \geq W_x^{\rm obs}}$ The distribution of $$W_x$$ and $$W_y$$ are tabulated and this $$p$$-value can then be computed
pwilcox(Wy,ny,nx)+ 1 - pwilcox(Wx,nx,ny)
## [1] 0.1508831
We could of course exchange the roles of $$x$$ and $$y$$. In this case the test statistic would be $$W_y$$ but the p-value would be the same.
wilcox.test(y, x, alternative="two.sided", conf.level=1-alpha)
## Warning in wilcox.test.default(y, x, alternative = "two.sided", conf.level
## = 1 - : cannot compute exact p-value with ties
##
## Wilcoxon rank sum test with continuity correction
##
## data: y and x
## W = 616.5, p-value = 0.1516
## alternative hypothesis: true location shift is not equal to 0
Remark: It is easy to show that $$W_x+W_y=n_x n_y$$
c(Wx+Wy, nx*ny)
## [1] 1521 1521
Unlike the t-test, the Mann-Whitney-Wilcoxon does not require the assumption of normal distributions. However, it is nearly as efficient as the t-test on normal distributions. That means that both tests have similar power.
This important property can easily be checked by Monte Carlo simulation. Let us simulate $$L$$ replicates of the experiment under $$H_1$$, assuming that $$\mu_y=\mu_x +20$$ (the values used in the code below). We can then compare the power of both tests by comparing the rejection rates of the null hypothesis.
L <- 10000
alpha <- 0.05
mux <- 500
muy <- 520
sdx <- sdy <- 30
nx <- ny <- 40
Rt <- vector(length=L)
Rw <- vector(length=L)
for (l in (1:L)) {
x.sim <- rnorm(nx,mux,sdx)
y.sim <- rnorm(ny,muy,sdy)
Rt[l] <- t.test(x.sim, y.sim)$p.value < alpha Rw[l] <- wilcox.test(x.sim, y.sim)$p.value < alpha
}
c(mean(Rt), mean(Rw))
## [1] 0.8392 0.8230
On the other hand, the Wilcoxon test may be much more powerful than the t-test for non-normal distributions when the empirical mean converges slowly in distribution to the normal distribution. Such is the case, for instance, of the log-normal distribution, which is strongly skewed for large variances.
mux <- 5
muy <- 6
sdx <- sdy <- 1
nx <- ny <- 20
Rt <- vector(length=L)
Rw <- vector(length=L)
for (l in (1:L)) {
x.sim <- exp(rnorm(nx,mux,sdx))
y.sim <- exp(rnorm(ny,muy,sdy))
Rt[l] <- t.test(x.sim, y.sim, alternative="two.sided")$p.value < alpha Rw[l] <- wilcox.test(x.sim, y.sim, alternative="two.sided")$p.value < alpha
}
c(mean(Rt), mean(Rw))
## [1] 0.6679 0.8487
# 4 The limited role of the p-value
First of all, it is important to emphasize that statistics is a tool for supporting decision-making. It is not a decision tool that can be used blindly.
A $$p$$-value below the sacrosanct 0.05 threshold does not mean that GMOs have some negative impacts on human health, or that a drug is better than another one. On the other hand, a $$p$$-value above 0.05 does not mean that GMOs are safe or that a drug has no effect.
The American Statistical Association (ASA) has released a “Statement on Statistical Significance and P-Values” with six principles underlying the proper use and interpretation of the p-value.
“The p-value was never intended to be a substitute for scientific reasoning,” said Ron Wasserstein, the ASA’s executive director. “Well-reasoned statistical arguments contain much more than the value of a single number and whether that number exceeds an arbitrary threshold. The ASA statement is intended to steer research into a ‘post p<0.05 era.’” “Over time it appears the p-value has become a gatekeeper for whether work is publishable, at least in some fields,” said Jessica Utts, ASA president. “This apparent editorial bias leads to the ‘file-drawer effect,’ in which research with statistically significant outcomes are much more likely to get published, while other work that might well be just as important scientifically is never seen in print. It also leads to practices called by such names as ‘p-hacking’ and ‘data dredging’ that emphasize the search for small p-values over other statistical and scientific reasoning.”
The statement’s six principles, many of which address misconceptions and misuse of the p-value, are the following:
1. P-values can indicate how incompatible the data are with a specified statistical model.
2. P-values do not measure the probability that the studied hypothesis is true, or the probability that the data were produced by random chance alone.
3. Scientific conclusions and business or policy decisions should not be based only on whether a p-value passes a specific threshold.
4. Proper inference requires full reporting and transparency.
5. A p-value, or statistical significance, does not measure the size of an effect or the importance of a result.
6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis.
As an illustration, assume that the diet has a real impact on the weight: after 14 weeks, rats fed with GMOs weigh in average 15g less than control rats.
It will be extremely unlikely to conclude that there is an effect with only 10 rats per group, even if we observe a difference of 15g in the two samples.
n <- 10
mu <- 500
delta <- 15
sd <- 30
x.sim <- rnorm(n,mu,sd)
y.sim <- x.sim + delta
t.test(x.sim, y.sim)
##
## Welch Two Sample t-test
##
## data: x.sim and y.sim
## t = -0.77216, df = 18, p-value = 0.45
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -55.8125 25.8125
## sample estimates:
## mean of x mean of y
## 504.9729 519.9729
This basic example shows that a difference considered as biologically significant may not be statistically significant.
On the other hand, a small difference considered as not biologically significant (1g for instance) may be considered as statistically significant if the group sizes are large enough.
n <- 10000
delta <- 1
x.sim <- rnorm(n,mu,sd)
y.sim <- x.sim + delta
t.test(x.sim, y.sim)
##
## Welch Two Sample t-test
##
## data: x.sim and y.sim
## t = -2.3407, df = 19998, p-value = 0.01926
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -1.8373819 -0.1626181
## sample estimates:
## mean of x mean of y
## 499.9298 500.9298
This example confirms the need for a power analysis as a complement of the comparison test. We will see that equivalence testing may also be relevant for evaluating what the data allows to say.
# 5 Equivalence tests
## 5.1 Introduction
Traditional hypothesis testing seeks to determine if means are the same or different. Such approach has several drawbacks:
1. Testing that two means are exactly the same is usually of little interest. A very small difference may exist, even if it is not biologically or physically significant. It is much more meaningful in such a situation to test if some significant difference exists, i.e. if this difference may have some concrete impact.
2. When testing difference between means, the null hypothesis considers that there is no differences: it thus falls to the data to demonstrate the converse. In the absence of enough data, we may fail to detect a significant difference, because of the lack of power. An opposite point of view consists of applying the basic hypothesis that a significant difference exists: it now falls to the data to demonstrate the converse.
On the other hand, equivalence testing determines an interval where the means can be considered equivalent. In other words, equivalence does not mean that two means $$\mu_x$$ and $$\mu_y$$ are equal, but rather that they are "close enough", i.e. $$|\mu_x - \mu_y| < \delta$$ for some equivalence limit $$\delta$$ that should be chosen according to the problem under study.
Thus, in terms of hypothesis testing, we want to test $H_0: \ |\mu_x - \mu_y| \geq \delta" \quad \text{versus} \quad H_1: \ |\mu_x - \mu_y| < \delta"$
These tests are routinely used in the pharmaceutical field, for instance to put a generic drug on the market; these are bioequivalence tests. Equivalence testing may also be used to determine if new therapies have equivalent or noninferior efficacies to the ones currently in use. These studies are called equivalence/noninferiority studies.
In the field of GMO risk assessment, equivalence testing is required to demonstrate that a GMO crop is compositionally equivalent and as safe as a conventional crop.
## 5.2 Two samples test
### 5.2.1 The TOST procedure
The simplest and most widely used approach to test equivalence is the two one-sided test (TOST) procedure. Let $$\mu_d = \mu_x - \mu_y$$, then TOST consists in performing the two tests:
$\begin{eqnarray} &H_0^{(+)}: \ \mu_d \geq \delta" &\text{versus} \ &H_1^{(+)}: \ \mu_d < \delta" \\ &H_0^{(-)}: \ \mu_d \leq -\delta" &\text{versus} \ &H_1^{(-)}: \ \mu_d > -\delta" \end{eqnarray}$
which is equivalent to test
$\begin{eqnarray} &H_0^{(+)}: \ \mu_d = \delta" &\text{versus} \ &H_1^{(+)}: \ \mu_d < \delta" \\ &H_0^{(-)}: \ \mu_d = -\delta" &\text{versus} \ &H_1^{(-)}: \ \mu_d > -\delta" \end{eqnarray}$
Proposition: Using TOST, equivalence is established at the $$\alpha$$ significance level if a $$(1-2\alpha) × 100\%$$ confidence interval for the difference $$\mu_x-\mu_y$$ is contained within the interval $$(-\delta,\delta)$$.
Proof: Let $$d_i = x_i-y_i$$ and $$\bar{d}=\bar{x}-\bar{y}$$. Let $$s_{\bar{d}}$$ be the estimated standard deviation of $$\bar{d}$$. $s_{\bar{d}} = \left\{ \begin{array}{ll} s_p \sqrt{\frac{1}{n_x}+\frac{1}{n_y}} \quad \text{if} \ \sigma^2_x=\sigma^2_y \\ \sqrt{{s_x^2}/{n_x}+{s_y^2}/{n_y}} \quad \text{otherwise} \end{array} \right.$
Then, we
• reject $$H_0^{(+)}$$ if $$T_{\rm stat}^{(+)} = (\bar{d}-\delta)/s_{\bar{d}}<qt_{\alpha,\nu}$$
• reject $$H_0^{(-)}$$ if $$T_{\rm stat}^{(-)} = (\bar{d}+\delta)/s_{\bar{d}}>qt_{1-\alpha,\nu}$$
where $$\nu$$ is the appropriate degree of freedom. Using the fact that $$qt_{\alpha,\nu} = -qt_{1-\alpha,\nu}$$, these decision rules are equivalent to:
• reject $$H_0^{(+)}$$ if $$\bar{d} + qt_{1-\alpha,\nu}s_{\bar{d}} < \delta$$
• reject $$H_0^{(-)}$$ if $$\bar{d} + qt_{\alpha,\nu}s_{\bar{d}} > -\delta$$
By definition, $$\bar{d} + qt_{\alpha,\nu}s_{\bar{d}}$$ and $$\bar{d} + qt_{1-\alpha,\nu}s_{\bar{d}}$$ are the bounds of a $$(1-2\alpha) × 100\%$$ confidence interval for the difference mean $$\mu_d$$.
We therefore reject the null hypothesis $$H_0$$ when both $$H_0^{(+)}$$ and $$H_0^{(-)}$$ are rejected, i.e. when a $$(1-2\alpha) × 100\%$$ confidence interval for $$\mu_d$$ is contained within the equivalence limits $$(-\delta, \delta)$$. $$\square$$
In our example, assuming unequal variance, the 90% confidence interval for $$\mu_d=\mu_x-\mu_y$$ is now
d.mean <- x.mean-y.mean
s.d.mean <- sqrt(var(x)/length(x) + var(y)/length(y))
d.mean + s.d.mean*qt(c(alpha,1-alpha),dfw)
## [1] -1.189099 31.132689
The calculated confidence interval can be plotted together with the value 0 (for difference testing) and the equivalence limits $$-\delta$$, $$\delta$$. Such a plot will immediately reveal whether the GMO is significantly different from the conventional counterpart (at the $$1-2\alpha$$ confidence level), and/or equivalence can be claimed or denied (at the $$1-\alpha$$ confidence level).
Considering for instance that a difference of weight larger than 20g is biologically significant leads to choose $$\delta=20$$. Since the 90% confidence interval is not contained in the equivalence interval $$(-\delta, \delta)$$, we don’t reject the null hypothesis for a significance level of 0.05. We therefore cannot conclude to equivalence, even if the difference in mean is not statistically significant.
On the other hand, we will conclude that the two regimens are nutritionally equivalent if we choose 40g as a limit since the new equivalence interval $$(-40, 40)$$ contains the CI.
When it is considered useful to have results also in the form of a $$p$$-value from a statistical significance test, then it can be easily calculated as follows:
\begin{aligned} p_{\rm value} &= p_{\rm value}^{(+)} + p_{\rm value}^{(-)} \\ &= \prob{ \frac{\bar{d}-\delta}{s_\bar{d}} < \frac{\bar{d}^{\rm obs}-\delta}{s_\bar{d}} \ | \ \mu_d=\delta} + \prob{ \frac{\bar{d}+\delta}{s_\bar{d}} > \frac{\bar{d}^{\rm obs}+\delta}{s_\bar{d}}\ | \ \mu_d=-\delta} \\ &= Ft_{\df_W}\left(\frac{\bar{d}^{\rm obs}-\delta}{s_\bar{d}}\right) + 1 - Ft_{\df_W}\left(\frac{\bar{d}^{\rm obs}+\delta}{s_\bar{d}}\right) \end{aligned}
Using $$\delta=20$$, the $$p$$-value is greater than the significance level $$\alpha=0.05$$:
delta = 20
pt((d.mean-delta)/s.d.mean, dfw) + 1 - pt((d.mean+delta)/s.d.mean, dfw)
## [1] 0.3032307
Then, we don’t reject the null hypothesis. On the contrary, the $$p$$-value obtained with $$\delta=40$$ is smaller than the significance level and the null hypothesis of non equivalence can be rejected:
delta = 40
pt((d.mean-delta)/s.d.mean, dfw) + 1 - pt((d.mean+delta)/s.d.mean, dfw)
## [1] 0.005923576
The function tost from the equivalence package computes a TOST for equivalence from paired or unpaired data
library(equivalence)
tost(x, y, alpha=0.05, epsilon=20)
##
## Welch Two Sample TOST
##
## data: x and y
## df = 75.976
## sample estimates:
## mean of x mean of y
## 513.7077 498.7359
##
## Epsilon: 20
## 95 percent two one-sided confidence interval (TOST interval):
## -1.189099 31.132689
## Null hypothesis of statistical difference is: not rejected
## TOST p-value: 0.3029514
tost(x, y, alpha=0.05, epsilon=40)
##
## Welch Two Sample TOST
##
## data: x and y
## df = 75.976
## sample estimates:
## mean of x mean of y
## 513.7077 498.7359
##
## Epsilon: 40
## 95 percent two one-sided confidence interval (TOST interval):
## -1.189099 31.132689
## Null hypothesis of statistical difference is: rejected
## TOST p-value: 0.005923451
### 5.2.2 Difference testing versus equivalence testing
The choice of the test mainly depends on how the conclusion is formulated.
As an example, AFSSA (now ANSES) committed a fairly serious error in this area by concluding, in relation to the MON863 GMO maize: "Considering that no significant difference has been observed between the results obtained for MON863 maize and for the other varieties of maize, one might, therefore, conclude that the new plant is nutritionally equivalent" (AFSSA, Saisine 2003-0215, p. 6).
The absence of statistically significant difference does not allow to conclude on equivalence. The European Food Safety Authority (EFSA) published a “Scientific Opinion” on this topic.
In particular, EFSA considers that statistical methodology should not be focussed exclusively on either differences or equivalences, but should provide a richer framework within which the conclusions of both types of assessment are allowed. Both approaches are complementary: statistically significant differences may point at biological changes caused by the genetic modification, but may not be relevant from the viewpoint of food safety. On the other hand, equivalence assessments may identify differences that are potentially larger than normal natural variation, but such cases may or may not be cases where there is an indication for true biological change caused by the genetic modification. A procedure combining both approaches can only aid the subsequent toxicological assessment following risk characterization of the statistical results.
EFSA also propose the following classification of the possible outcomes:
After adjustment of the equivalence limits, a single confidence limit (for the difference) serves visually for assessing the outcome of both tests (difference and equivalence). Here, only the upper adjusted equivalence limit is considered. Shown are: the mean of the GM crop on an appropriate scale (square), the confidence limits (whiskers) for the difference between the GM crop and its conventional counterpart (bar shows confidence interval), a vertical line indicating zero difference (for proof of difference), and vertical lines indicating adjusted equivalence limits (for proof of equivalence).
For outcome types 1, 3 and 5 the null hypothesis of no difference cannot be rejected; for outcomes 2, 4, 6 and 7 the GM crop is different from its conventional counterpart. Regarding interpretation of equivalence, four categories (i) - (iv) are identified: in category (i) the null hypothesis of non-equivalence is rejected in favour of equivalence; in category (ii) equivalence is more likely than not (further evaluation may be required); in category (iii) non-equivalence is more likely than not (further evaluation required); and in category (iv) non-equivalence is established (further evaluation required).
## 5.3 One sample test
Even if equivalence testing is mainly used for testing the equivalence between two populations, a one sample equivalence test is also possible.
Assume that, for some $$\delta>0$$, we want to test $H_0: \ |\mu_x - \mu_0| \geq \delta" \quad \text{versus} \quad H_1: \ |\mu_x - \mu_0| < \delta"$ Let $$z_i = x_i - \mu_0$$, $$i=1,2,\ldots,n_x$$, and let $$\mu_z = \mu_x - \mu_0$$. Then, the test reduces to use the $$(z_i)$$ for testing
$H_0: \ |\mu_z| \geq \delta" \quad \text{versus} \quad H_1: \ |\mu_z| < \delta"$
mu0 <- 500
delta <- 10
alpha <- 0.05
z <- x - mu0
tost(z, alpha=alpha, epsilon=delta)
##
## One Sample TOST
##
## data: z
## df = 38
## sample estimates:
## mean of x
## 13.70769
##
## Epsilon: 10
## 95 percent two one-sided confidence interval (TOST interval):
## 2.035311 25.380074
## Null hypothesis of statistical difference is: not rejected
## TOST p-value: 0.7023006
Here, the two-sided null hypothesis is the union of the one sided hypotheses $$\mu_z \geq \delta"$$ and $$\mu_z \leq -\delta"$$. Thus,
\begin{aligned} p_{\rm value} &= \prob{\bar{z} < \bar{z}^{\rm obs} \ | \ \mu_z=\delta} + \prob{\bar{z} > \bar{z}^{\rm obs} \ | \ \mu_z=-\delta} \\ &= \prob{t_{n_z-1} < \frac{\bar{z}^{\rm obs}-\delta}{s_{\bar{z}}}} + \prob{t_{n_z-1} > \frac{\bar{z}^{\rm obs}+\delta}{s_{\bar{z}}}} \\ &= Ft_{n_x-1}\left(\frac{\bar{x}^{\rm obs}-\mu_0-\delta}{s_{\bar{x}}}\right) + 1 - Ft_{n_x-1}\left(\frac{\bar{x}^{\rm obs}-\mu_0+\delta}{s_{\bar{x}}}\right) \end{aligned}
z.mean <- mean(z)
zbar.sd <- sd(z)/sqrt(nx)
p.value <- pt((z.mean-delta)/zbar.sd, nx-1) + 1 - pt((z.mean+delta)/zbar.sd, nx-1)
p.value
## [1] 0.6592176
|
# Projective objects in a presheaf topos are retracts of representables
An object $P$ in a category is projective if the Hom-functor $Hom(P,-)$ preserves epimorphisms. Considering the presheaf topos $\hat{C}=\mathbf{Sets}^{C^{op}}$, how can I show that if an object in this category is projective then it is a retract of a coproduct of representables?
I know that a representable is projective and that a coproduct of representables is projective. I also know that a presheaf is a colimit of representables. But I'm not sure how to relate these two facts to solve the question.
• This isn't true for this sense of "projective"-the coproduct of representables has no reason to be a retract of a representable in general. The sense of "projective" which is relevant here is "small-projective": an object $x$ such that maps out of $x$ commute with arbitrary small colimits. – Kevin Carlson Apr 23 '18 at 20:16
• @KevinCarlson Sorry that should be a retract of a coproduct of representables. And this is exercise IV.15(d) from MacLane and Moerdijk. Sheaves in geometry and logic: A first introduction to topos theory. – user301513 Apr 23 '18 at 20:37
In a category with the relevant coproducts, every colimit can be written as the coequalizer of a coproduct. In particular, every presheaf $P$ has the form of a coequalizer
$$\coprod_i U_i \rightrightarrows \coprod_j V_j \xrightarrow{\rho} P$$
where the $U_i$ and $V_j$ are representables.
Coequalizers are epic, so we can apply the given property of $P$:
$$\hom\left(P, \coprod_j V_j \right) \xrightarrow{\rho_*} \hom(P, P)$$
is surjective. In particular, there is a map $\lambda : P \to \coprod_j V_j$ such that $\rho \circ \lambda = 1_P$.
|
# Continued Fraction Identity Problem
Question: What went wrong in my derivation of$$\ln(1-x)=-\cfrac x{1-\cfrac x{2+x-\cfrac {2x}{3+2x-\cfrac {3x}{4+3x-\ddots}}}}\tag{1}$$
I started with the expansion\begin{align*}\ln(1-x) & =-x-\dfrac {x^2}2-\dfrac {x^3}3-\dfrac {x^4}4-\&\text{c}.\\ & =-x\left\{1+\left(\dfrac x2\right)+\left(\dfrac x2\right)\left(\dfrac x{3/2}\right)+\&\text{c}.\right\}\tag{2}\end{align*} And by Euler's Continued Fraction, $(2)$ can be rewritten into\begin{align*} & -x\left\{1+\left(\dfrac x2\right)+\left(\dfrac x2\right)\left(\dfrac x{3/2}\right)+\left(\dfrac x{2}\right)\left(\dfrac x{3/2}\right)\left(\dfrac x{4/3}\right)+\&\text c.\right\}\\ & =-\cfrac {x}{1-\cfrac x{2+x-\cfrac {2x}{3+2x-\ddots}}}\end{align*}\tag{3} However, if we set $x=2$, then we have the LHS as $\ln(1-2)=\ln -1=\pi i$. The RHS becomes $$-\cfrac 2{1-\cfrac 2{4-\cfrac 4{7-\cfrac 6{10-\ddots}}}}=\pi i\tag{4}$$ But the LHS is real while the RHS is imaginary. What went wrong?
The Taylor expansion is valid only for $|x| < 1$.
There should have been a constraint somewhere in the question that corresponds to this.
Also, on an unrelated note, the $\ln$ of a negative number is multi-valued.
• Just as an additional question, are there any values of $x$ in $\ln(1-x)$ such that there is a $\pi$ somewhere imbedded inside? – Frank Feb 23 '17 at 18:19
• @Frank I'm not exactly sure what you mean by embedded, could you explain more? – Yiyuan Lee Feb 24 '17 at 0:08
• @Frank Any expression involving $\pi$ will do the trick, provided $|x| < 1$. So $x=1/\pi$, $x=\pi-3$, $x=\pi^2/9$, etc., would work. But maybe you're looking for a value of $x$ that doesn't involve $\pi$, but where the continued fraction has $\pi$ in it somehow? – Théophile Feb 24 '17 at 20:10
You went wrong with conversion from the series to the Euler CF. It was almost right, but the numeric numerator factor should be squared. That is,
$$\ln (1-x) = -\cfrac{x}{1 - \cfrac{1^2x}{2 + x - \cfrac{2^2x}{3 + 2x - \cfrac{3^2x}{\ddots}}}}.$$
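As a numerical sanity check (a Python sketch of my own, not part of the original answer; the function name cf_log1m and the truncation depth are arbitrary), the corrected fraction can be truncated and compared against math.log for a few values with $|x| < 1$:
import math
def cf_log1m(x, depth=60):
    # Evaluate the corrected continued fraction for ln(1-x) from the inside out:
    # partial denominators D_n = n + (n-1)*x - n^2*x / D_{n+1}, result -x / D_1.
    d = depth + (depth - 1) * x
    for n in range(depth - 1, 0, -1):
        d = n + (n - 1) * x - n * n * x / d
    return -x / d
for x in (0.25, 0.5, 0.9):
    print(x, cf_log1m(x), math.log(1 - x))
The two columns agree to many decimal places for these values, while the uncorrected numerators do not.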
• Hello, welcome to Math.SE. Please see the MathJax tutorial for help formatting mathematics using LaTeX. It makes it much easier to read. – Trevor Gunn Apr 18 '17 at 22:27
|
Which constant should we send to aliens?
1. Jan 24, 2005
danne89
Suppose we get contact with some aliens, which number constant should you send to test their "intelligence"?
2. Jan 24, 2005
Ryoukomaru
I would send $$\phi=1.61803399$$ - The Golden Ratio
3. Jan 24, 2005
master_coda
How exactly would we send that number? Without an encoding they could understand, it wouldn't matter what number we sent.
4. Jan 24, 2005
Well, ignoring that problem, I would say (pi^2)/6
5. Jan 24, 2005
Alkatran
e^(pi*i) + 1 = 0
6. Jan 24, 2005
WORLD-HEN
You could send them a golden rectangle.
7. Jan 24, 2005
Gokul43201
Staff Emeritus
Something simple, representation invariant and universally true, like :
** *** ***** ******* *********** *************
8. Jan 24, 2005
fourier jr
yeah, send prime numbers, just like in that movie contact
9. Jan 25, 2005
cepheid
Staff Emeritus
"The surest sign that intelligent life exists elsewhere in the universe is that it has never tried to contact us."
--Bill Watterson, cartoonist
10. Jan 25, 2005
gimmytang
natural constant e=2.718...
11. Jan 25, 2005
gimmytang
maybe aliens tried to contact us through microwave 200 years ago, but at that time nobody could sense that.
12. Jan 25, 2005
Chronos
Dimensionless numbers, like the nuclear fine structure constant. That would almost surely set off alarms no matter what base system they used to count. Transmit it in binary code [on-off bits]. Even a far advanced intelligence would recognize that pattern.
Last edited: Jan 25, 2005
13. Jan 25, 2005
Gonzolo
Yeah... like $$\pi$$ is so B.C. ...
14. Jan 25, 2005
Alkatran
Actually, I think a good start for the data you send is:
1010101010101010101010101010101010101010101010101010
followed by
11001100110011001100110011001100110011001100110011001100
111000111000
etc...
You know, so they know it's not random.
15. Jan 25, 2005
dextercioby
Euler-Mascheroni constant:
$$\gamma=:\lim_{n\rightarrow +\infty} (\sum_{k=1}^{n} \frac{1}{k}-\ln n)$$
$$\gamma\sim 0.577215665$$
Daniel.
16. Jan 25, 2005
danne89
The problem with this is that even chimpanzees could send this sequence.
I don't think pi is good either. Think about a gas world, like Jupiter, where solid objects don't exist in the way we know them. Would they find this number without having some motivation from real circles?
17. Jan 25, 2005
arildno
Well, all sorts of exotic numbers might do, of course (for example, Brun's constant because Brun was a Norwegian..).
However, I've yet to see any suggestions simpler and more elegant than a repeating sequence of the first few prime numbers.
18. Jan 25, 2005
arildno
ABSOLUTELY NOT!
19. Jan 25, 2005
dextercioby
WHY not,Arildno???Do you have motivation??
Daniel.
20. Jan 25, 2005
Gonzolo
There is always the Sun, and moons. I don't believe intelligent life can exist without solids. Wouldn't a zero-g amoeba tend to be round? Isn't the symmetry of a hydrogen atom round?
|
# Why are alcohols with longer chains less polar?
In my self-study, I recently came across the following question:
"Choose the solute of each pair that would be more soluble in hexane ($\ce{C6H14}$). Explain your answer.
(a) $\ce{CH3(CH2)10OH}$ or $\ce{CH3(CH2)2OH}$ ..."
Undecanol is more soluble in hexane because it is apparently less polar than propanol. Further Internet searches revealed that alcohols decrease in polarity as the chain length increases (assuming a basis in an alkane; I don't know if this rule generalizes, which I guess is a sub-question), but it is still not clear why. None of the sources I found explained this. The way I imagine things, the carbon-hydrogen bonds should add a bunch of zero vectors to the O–H vector that both molecules share, giving the same polarity in both cases. But this is apparently wrong.
So why is undecanol less polar than propanol then? Does it have something to do with a more advanced bonding theory?
## 2 Answers
If you're looking at the overall polarity of the solvent, you not only consider the value of the individual dipole moment of a single molecule, but also their “density”, i.e. the number of these dipoles per unit of volume. You are right that the dipole moment of an individual linear-chain alcohol will, to a first approximation, be independent of the chain length. However, in a given volume, you'll fit fewer of those long molecules, and the polarity of the overall solvent is thus lower.
In the context of a single molecule, the distance between the poles is inversely proportional to the polarity. The farther apart they are, the less polar the molecule is if they have the same electronegative potentials. This Link explains it well.
|
Tuesday, 17 July 2018
A programming language is truly "pure" if all computations produce a value -- Agda is an example of such a language. For all its claims to the contrary, Haskell is not a pure language. Computations in Haskell may produce exceptions (try taking the head of the empty list) or nothing at all (in the case of computations that run forever). I don't hold that against Haskell, I just dislike the propaganda. I don't hold that against Haskell because math itself is not pure in this sense, since particular operations could be undefined, for example division by zero. But could we say that Haskell is "as pure as maths"? In the (not very serious) post below I will argue to the contrary: Haskell is a lazy language whereas maths is an eager language, at least as commonly practiced.
In math we have functions, as we do in programming. The functions of programming can be either lazy or eager, depending on the programming language. Are the functions of math lazy or eager?
In programming, the issue is quite important because it has a significant impact on one of the key equations of the lambda calculus, the so-called beta-law, which says that we can substitute (i.e. inline) an argument for the parameter of the function.
(fun x. M)N ≡ M[N/x]
This law lies at the heart of all compiler optimisations.
The equation is always valid in call-by-name languages such as Algol. In lazy languages such as Haskell it still holds in the presence of some effects such as exceptions or non-termination, but it does not hold in the presence of stateful assignments. In an eager language such as OCaml it is also invalidated in the presence of non-termination or exceptions. It is therefore common that in call-by-value languages the argument (N above) must be a value, i.e. a language constant, a variable, or a lambda expression, usually written as v.
(fun x.M)v ≡ M[v/x]
Of course, substitution is also an important rule of equational reasoning, which is a big part of mathematical calculations. But which rule do we use in math, the by-name, lazy or the by-value (eager) substitution? And how can we tell?
Let's look closer at when the beta law fails in an eager functional program. If the argument is not a value (N ≠ v) but it is equivalent to a value (N ≡ v) then the rule holds. It only fails when the argument is not equivalent to a value, for example because it runs forever (it 'diverges'). An example of such a term is Ω = (fun x. x x)(fun x. x x). Or, more mundanely, Ω = while true do skip. An example of failure of the beta law is
(fun x.k) Ω ≢ k[Ω/x]
where k is a language constant. If we run the left-hand side as a program it will first run the argument Ω, which will result in a nonterminating computation. On the other hand, the right-hand side is just the term consisting of the constant k because substitution happens at a syntactic level. So its value is, trivially, k. The two cannot be equivalent.
In maths we don't usually deal with functions such as Ω but we can still have expressions that don't denote a value, such as for example 1/m when m is 0. Both Ω and 1/m are 'undefined' in this sense, that they don't denote a value.
The following example will test whether maths is lazy or eager. Consider the constant function
c(m) = 1
The question is whether the equation
m · c(1/m) = 0
has a solution (m=0) or not.
I have not run any referenda on the matter, but from my own understanding of mathematical practice and from pub chats with mathematician pals I would say that m=0 is not considered a valid solution for the above. A function such as the constant function above is defined over values and not over terms.
Perhaps it could be otherwise, but it ain't. Meanwhile, in Haskell
let c(x) = 1 in let m = 0 in m * (c (1/m))
will always evaluate to 0 (or rather 0.0 to be more precise).
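For contrast (a small Python sketch of my own, not part of the original post), an eager language evaluates the argument before the call, so the analogous program fails instead of returning 0:
def c(x):
    return 1
m = 0
try:
    print(m * c(1 / m))                       # the argument 1/m is evaluated first...
except ZeroDivisionError as err:
    print("eager evaluation fails:", err)     # ...so c is never even entered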
So Haskell is impure. And not even mathematically so.
|
# Ideal system for an encryption scheme
What is the ideal system for an encryption scheme? For a pseudorandom permutation the ideal one is a random permutation, for a pseudorandom function the ideal one is a random function. For an encryption scheme do we have a well known application describing an ideal system for an encryption? Maybe the one-time pad?
An OTP is a good example. Take a look at this answer, particularly the definition it contains – rath Jul 5 '13 at 14:26
possible duplicate of definition and meaning of semantic security – rath Jul 5 '13 at 14:28
If you are concerned about idealization of crypto protocols (of which encryption is a special case), you could look up references about the Universal Composability framework (UC framework). – minar Jul 6 '13 at 8:16
Thank you user7423, which references should I look at? – Dingo13 Jul 6 '13 at 8:29
You could start with Canetti's paper: eprint.iacr.org/2000/067 – minar Jul 7 '13 at 5:47
|
# Browse Dissertations and Theses - Physics by Subject "Daya Bay"
• (2016-05-25)
Neutrino oscillation with three active neutrinos has been well established by experiments. However, θ_13 was the least known mixing angle before the Daya Bay reactor neutrino experiment. The Daya Bay experiment uses relative ...
• (2013-05-28)
We perform a rate analysis of the neutrino mixing angle θ13 for the Daya Bay Reactor Neutrino Experiment. The data were collected from December 24, 2011, to May 11, 2012, during a period of data acquisition when six ...
|
Objectives:
• To be able to describe an experiment to obtain the I–V characteristics of a filament lamp and a Diode;
• To be able to describe the uses and benefits of using a Diode.
I-V characteristics
I-V characteristics were introduced in the previous page when introducing Ohm’s law. They are graphical characteristics showing how the current (I) varies as the potential difference (V) across the component varies. The current is always plotted on the y-axis of a graph and the voltage on the x-axis.
Previously the I-V characteristic of a resistor (or wire) at a constant temperature was mentioned; many electrical components have varying temperatures, and therefore their resistances vary, which affects the current-voltage relationship.
I-V characteristics of a filament lamp
Notice the old style bulb circuit symbol
If the variable resistor is altered so as to change the overall resistance in the circuit, both the voltmeter and ammeter readings will also change.
If a set of results were taken whereby both positive and negative readings of voltage and current were collected the following current-voltage graph can be drawn.
This graph displays non-linear characteristics, therefore the component does not obey Ohm’s law. This makes sense as Ohm’s law is only relevant to components at constant temperatures.
For a filament bulb, as the voltage across it increases the temperature becomes greater, resulting in increased vibrations of the ions; this in turn reduces the magnitude of the rise in current. At low voltages, e.g. 0 V, any increase in p.d. results in a large increase in current; at a greater voltage, an increase results in a smaller rise in current.
Similar to the resistor, or wire, at a constant temperature, a filament bulb shows the same characteristics in the negative quadrant (with the ratio of p.d. to current varying with the same pattern).
I-V characteristics of a diode
If the variable resistor is altered so as to change the overall resistance in the circuit, both the voltmeter and ammeter readings will also change.
If a set of results were taken whereby both positive and negative readings of p.d. and current were collected the following current-voltage graph can be drawn.
This graph displays non-linear characteristics, therefore the component does not obey Ohm’s law. This makes sense as Ohm’s law is only relevant to components at constant temperatures.
Diodes appear to be peculiar components and often confuse students when first experimenting with them, due to the 0 V and 0 A readings shown with the current in one direction. However, this is the purpose of a diode: they only allow current to flow in one direction.
Diodes are designed such that the potential difference across them needs to be at a minimum of 0.6 V before any current can pass through. As the p.d. rises beyond 0.6 V, the current begins to rise rapidly, as can be seen in the graph above.
The following points are important to understand and note;
• Between 0 and 0.6 V: The resistance is extremely high (infinite), so no current can flow ( $R = \infty \ \Omega$ , $I = 0 \ A$ )
• 0.6 V – 0.7 V (approximately): The resistance decreases, the current can increase ( $R = \ \downarrow \Omega$ , $I = \ \uparrow A$ )
• 0.7 V and upwards: The resistance decreases further, the current increases even more ( $R = \ \Downarrow \Omega$ , $I = \ \Uparrow A$ )
• With any negative p.d: The resistance is extremely high (infinite), so no current can flow ( $R = \infty \ \Omega$ , $I = 0 \ A$ )
Extension: LED I-V characteristics
An LED is a light emitting diode; they have very similar characteristics to standard diodes. A key point to note however is that the threshold voltage (the minimum voltage required for a current to flow) varies depending on the colour (this will become more apparent when studying quantum physics later in the year). The following helps to show this (but with limited detail):
Further reading:
• You should research how to read the value of resistors from the coloured rings on them.
• LEDs for Lighting Applications edited by Patrick Mottier – LEDsforLightingApplications
|
# Do the inequalities matter in this case of the marginal probability distribution?
I have this probability density $f(x,y)$ if $0\le x \lt y\le 1$ and I can't seem to get the right answer. It says in the back of my book that the bounds on the integral for, say, $f_X(x)$ are from $0$ to $x$. Shouldn't the integral to find the marginal density of $X$ be $$f_X(x)=\int_x^1 f(x,y)dy$$ Then it says the marginal for $Y$ is $$f_Y(y)=\int_y^1f(x,y)dx$$ Shouldn't this one be $$f_Y(y)=\int_0^yf(x,y)dx$$?? I'm slightly confused now. Is it because $x$ is strictly less than $y$?
Books can have typographical errors in them, and the sections titled "answers to odd-numbered problems" or "solutions to odd-numbered problems" often have more such errors than the main text, perhaps because these sections are prepared last, in a rush by the author(s) or their teaching assistants, after the main text has been carefully written and re-written. Your integral for $f_Y(y)$ has the correct limits; the book's integral does not. – Dilip Sarwate Dec 21 '12 at 16:02
@DilipSarwate Thanks that's what I thought. I knew that didn't look right. So glad I have a mathematical community to correct me! – TheHopefulActuary Dec 21 '12 at 16:05
If $f(x,y)$ is the joint density of the random variables $X,Y$, then the marginal density of $X$ is $$f_X(x)=\int_{\mathbb{R}}{f(x,y)dy}$$ Now, you know that $f(x,y)$ equals zero if $y<x$, or $x$ or $y$ falls outside the interval $[0,1]$ (or so I interpret your question) and thus the formula simplifies to $$f_X(x)=\int_x^1{f(x,y)dy}.$$
Similarly one finds $$f_Y(y)=\int_0^y{f(x,y)dx}.$$
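For a concrete illustration (with an assumed density, not necessarily the one in your book), take $f(x,y)=2$ for $0\le x<y\le 1$ and $0$ elsewhere. Then $$f_X(x)=\int_x^1 2\,dy=2(1-x),\qquad f_Y(y)=\int_0^y 2\,dx=2y,$$ and both marginals integrate to $1$ over $[0,1]$, as they should.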
|
How to create a new command whose single parameter is able to apply to both “caption” and “path” parameters of lstinputlisting
I want to create a command to simplify using the command lstinputlisting from the package listings, which can accept a single parameter as the file path and pass it to both the caption and path parameter of lstinputlisting. In one word, use the file path as the caption.
Consider the following LaTeX source:
\documentclass[UTF8]{ctexart}
\usepackage{listings}
\newcommand{\myincludecode}[1]{\lstinputlisting[caption=#1, language=matlab]{#1}}
\newcommand{\mysecondincludecode}[2]{\lstinputlisting[caption={#2}, language=matlab]{#1}}
\begin{document}
\myincludecode{main.m} % line 10
\myincludecode{gen_data.m} % line 11
\mysecondincludecode{main.m}{main.m} % line 13
\mysecondincludecode{gen_data.m}{gen\_data.m} % line 14
\end{document}
Clearly, the commands in line 13 & 14 work well, which both correctly include the corresponding file and print the corresponding captions.
The line 10 also works well. However, the line 11 includes the corresponding file but outputs no caption. The log file says:
Try.tex|11 error| Missing $ inserted.
Try.tex|11 error| Extra }, or forgotten $.
Try.tex|11 error| Missing $ inserted.
Try.tex|11 error| Missing } inserted.
It's obvious that the underscore breaks down my command. So, I wonder how to modify \myincludecode to make it work - even when meeting some special characters, such as the underscore here.
## 3 Answers
Detokenize the argument:
\documentclass[UTF8]{ctexart}
\usepackage[T1]{fontenc}
\usepackage{listings}
\newcommand{\myincludecode}[1]{\lstinputlisting[caption=\detokenize{#1}, language=matlab]{#1}}
\begin{document}
\myincludecode{main.m}
\myincludecode{gen_data.m}
\end{document}
In math mode the underscore has a function: it changes the next character into a subscript, and hence it expects a $ sign. This is a simple solution to your problem.
\documentclass[UTF8]{ctexart}
\usepackage{listings}
\begin{document}
\begingroup
\newcommand{\myincludecode}[1]{\catcode`_=11\lstinputlisting[caption=#1, language=matlab]{#1}}
\myincludecode{main.m}
\myincludecode{gen_data.m}
\endgroup
$1_2$
\end{document}
Catcode means category code. The category code of _ is 8, which assigns the subscript function to it; if I change it to 11, the underscore becomes a letter category, which is probably what you want. Adding this command between \begingroup & \endgroup keeps the subscript function intact outside its scope.
You must avoid “_” in file names and in cite or ref tags, or you must use the babel package, with its active-character controls, or you must give the [strings] option, which attempts to redefine several commands (and may not work perfectly). Even without the [strings] option or babel, you can use occasional underscores like: “\include{file\string_name}”.
The default operation is quite simple and needs no customization; but you must avoid using “_” in any place where LaTeX uses an argument as a string of characters for some control function or as a name. These include the tags for \cite and \ref, file names for \input, \include, and \includegraphics, environment names, counter names, and placement parameters (like [t]). The problem with these contexts is that they are ‘moving arguments’ but LaTeX does not ‘switch on’ the “\protect mechanism” for them.
If you need to use the underscore character in these places, the package option [strings] is provided to redefine commands that take such a string argument so that protection is applied (with \protect made to be \string). The list of commands this provision affects is given in \UnderscoreCommands, with \do before each one; plus several others covering \input, \includegraphics, \cite, \ref, and their variants.
• So let's assume the OP uses underscores in their filenames. How can your answer be used to address that issue? Can you provide an example of that where the same argument is used for the caption and the actual file in \lstinputlisting? – Werner Nov 27 '19 at 5:21
• @Werner wanted an example the same has been incorporated – js bibra Nov 27 '19 at 5:28
• I'll emphasize my comment request: Can you provide an example of where the same argument is used for the caption and the actual file in \lstinputlisting? test\_file.c is different from test_file.c. – Werner Nov 27 '19 at 5:31
• Still learning Sir Still learning --way behind to challenge your rep of 475,298 REPUTATION – js bibra Nov 27 '19 at 5:35
|
# In a strong field, how long would it take for domains to align?
How fast is the rate of "switch" for the domains to align with an exterior field? Possibly in milliseconds? I assume it won't take much time since domains are very small, and they only turn from a certain degree to another.
I know this would depend on a lot of factors, but if a powerful large magnet can attract a ferromagnet with a force of nearly 4,000, and bringing it closer to it would take 0.005 seconds, then surely the magnetization was a lot faster than that, since magnetization occurs before attraction.
|
# For a coherent FSK with 32 levels, the probability of bit error is ______ the probability of symbol error.
1. 31 times
2. 1 / 31 times
3. 2 times
4. 0.5 times
Option 4 : 0.5 times
## Frequency Shift Keying (FSK) MCQ Question 1 Detailed Solution
Concept:
Probability of symbol error for coherently detected M-ary FSK:
$${P_{se}} \le \left( {M - 1} \right)Q\left( {\sqrt {\frac{{{E_s}}}{{{N_0}}}} \;} \right)$$
Where
$${E_s} = {\log _2}M.\;{E_b}$$
Probability of bit error for coherently detected M-ary FSK-
$$BER = {P_{be}} = \frac{M}{2} \times Q\left( {\sqrt {\frac{{{E_s}}}{{{N_0}}}} \;} \right)$$
$${P_{be}} = \frac{{M/2}}{{M - 1}} \times {P_{se}}$$
Calculation:
Given: M = 32
$${P_{be}} = \frac{{32/2}}{{31}} \times {P_{se}} \approx \frac{1}{2} \times {P_{se}}$$
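As a quick numerical check of the ratio for M = 32 (a Python sketch, not part of the original solution):
# Ratio of bit error probability to symbol error probability for coherent M-ary FSK
M = 32
print((M / 2) / (M - 1))   # 0.516..., i.e. roughly 0.5 times the symbol error probability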
# In a digital communication system employing Frequency Shift Keying (FSK), the 0 and 1 bit are represented by sine waves of 10 kHz and 25 kHz respectively. These waveforms will be orthogonal for a bit interval of
1. 250 μsec
2. 200 μsec
3. 50 μsec
4. 45 μsec
Option 2 : 200 μsec
## Frequency Shift Keying (FSK) MCQ Question 2 Detailed Solution
Derivation:
Given, In FSK modulation, let f1, f2 be frequencies for bit ‘0’
s(t) = A sin (2πf1t)
For bit ‘1’
s(t) = A sin (2πf2t)
Let Tb be bit duration
For both waveforms to be orthogonal,
$$\begin{array}{l} \mathop \smallint \limits_0^{{T_b}} A\sin \left( {2\pi {f_1}t} \right).A\sin \left( {2\pi {f_2}t} \right)dt = 0\\ \Rightarrow {A^2}\mathop \smallint \limits_0^{{T_b}} \frac{1}{2}\left[ {\cos \left( {2\pi \left( {{f_1} - {f_2}} \right)t} \right) - \cos \left( {2\pi \left( {{f_1} + {f_2}} \right)t} \right)} \right]dt = 0 \end{array}$$
$$\begin{array}{l} \Rightarrow \frac{{{A^2}}}{2}\left[ {\frac{{\sin \left( {2\pi \left( {{f_1} - {f_2}} \right)t} \right)}}{{2\pi \left( {{f_1} - {f_2}} \right)}} - \frac{{\sin \left( {2\pi \left( {{f_1} + {f_2}} \right)t} \right)}}{{2\pi \left( {{f_1} + {f_2}} \right)}}} \right]_0^{{T_b}} = 0\\ \Rightarrow \frac{{\sin \left( {2\pi \left( {{f_1} - {f_2}} \right){T_b}} \right)}}{{2\pi \left( {{f_1} - {f_2}} \right)}} - \frac{{\sin \left( {2\pi \left( {{f_1} + {f_2}} \right){T_b}} \right)}}{{2\pi \left( {{f_1} + {f_2}} \right)}} = 0 \end{array}$$
It is possible if both 2π(f1 - f2)Tb and 2π(f1 + f2)Tb are integral multiples of π (i.e. = nπ), i.e.
$$\left| {{f_1} - {f_2}} \right| = \frac{n}{{{T_b}}}\;and\;\left| {{f_1} + {f_2}} \right| = \frac{m}{{{T_b}}}$$
Application:
Given, f1 = 10 kHz and f2 = 25 kHz
$$15\;kHz = \frac{n}{{T_b}}and\;35\;kHz = \frac{m}{{{T_b}}}$$
It is possible for $$\frac{1}{{{T_b}}} = 5\;kHz$$, with minimum m and n value, i.e.
i.e. Tb = 200 μs. (minimum value)
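As a numerical check (a Python sketch, not part of the original solution), the inner product of the two tones over Tb = 200 μs is essentially zero, confirming orthogonality:
import math
f1, f2, Tb = 10e3, 25e3, 200e-6
N = 200000
dt = Tb / N
# midpoint-rule approximation of the integral of the product of the two sinusoids
inner = sum(math.sin(2 * math.pi * f1 * (k + 0.5) * dt) *
            math.sin(2 * math.pi * f2 * (k + 0.5) * dt) for k in range(N)) * dt
print(inner)   # ~0 up to numerical error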
# Frequency Shift Keying is used mostly in
1. Radio transmission
2. Telegraphy
3. Telephony
4. Television
Option 2 : Telegraphy
## Frequency Shift Keying (FSK) MCQ Question 3 Detailed Solution
FSK (Frequency Shift Keying):
It is used in the voice frequency telegraph system and for wireless telegraphy in the high-frequency bands.
In FSK (Frequency Shift Keying) binary 1 is represented with a high-frequency carrier signal and binary 0 is represented with a low-frequency carrier, i.e. In FSK, the carrier frequency is switched between 2 extremes.
For binary ‘1’ → S1 (A) = A (cos 2π fHt)
For binary ‘0’ → S2 (t) = A (cos 2π fLt)
The constellation diagram is as shown:
∴ Option 2 is the most appropriate.
Notes:
Radio transmission: The two most common types of modulation used in radio are amplitude modulation (AM) and frequency modulation (FM).
Telephony: The type of modulation used in digital telephony is Pulse code modulation (PCM).
Television: For television broadcasting, Vestigial sideband modulation is used for video transmission and Frequency modulation is used for audio transmission.
# For a binary FSK signal with a mark frequency of 49 kHz, a space frequency of 51 kHz and an input bit rate of 2 kbps, the peak frequency deviation will be
1. 0.5 kHz
2. 1.0 kHz
3. 2.0 kHz
4. 4.0 kHz
Option 2 : 1.0 kHz
## Frequency Shift Keying (FSK) MCQ Question 4 Detailed Solution
Concept:
In FSK (Frequency Shift Keying), binary 1 is represented with a high-frequency carrier signal, and binary 0 is represented with a low-frequency carrier, i.e. in FSK, the carrier frequency is switched between 2 extremes.
• Frequency measurements of the FSK signal are usually stated in terms of “shift” and center frequency. The shift is the frequency difference between the mark and space frequencies.
• The nominal center frequency is halfway between the mark and space frequencies.
• Frequency deviation is equal to the absolute value of the difference between the center frequency and the mark or space frequencies.
The deviation is also equal, numerically, to one-half of the shift, i.e.
|fs - fm|= 2 Δf
Δf = frequency deviation
fs = space-frequency
Calculation:
With fs = 51 kHz and fm = 49 kHz
$${\rm{\Delta }}f = \frac{{\left| {{f_s} - {f_m}\;} \right|}}{2}$$
$$= \frac{{\left| {51 - 49} \right|}}{2}\;kHz$$
Δf = 1 kHz
The bandwidth of FSK is given by:
$$\left( {{f_s} + \frac{1}{{{T_b}}}} \right) - \left( {{f_m} - \frac{1}{{{T_b}}}} \right)$$
$$= \left( {{f_s} - {f_m}} \right) + \frac{2}{{{T_b}}}$$
|fs - fm|= 2 Δf
# Coherent orthogonal binary FSK modulation is used to transmit two equiprobable symbol waveforms 𝑠1(𝑡) = 𝛼 cos 2𝜋𝑓1𝑡 and 𝑠2(𝑡) = 𝛼 cos 2𝜋𝑓2𝑡, where 𝛼 = 4 mV. Assume an AWGN channel with two-sided noise power spectral density $$\frac{{{N_0}}}{2} = 0.5 \times {10^{ - 12}}\;W/Hz$$. Using an optimal receiver and the relation $$Q\left( v \right) = \frac{1}{{\sqrt {2\pi } }}\mathop \smallint \limits_v^\infty {e^{ - {u^2}/2}}du$$, the bit error probability for a data rate of 500 kbps is
1. Q(2)
2. $$Q\left( {2\sqrt 2 } \right)$$
3. Q(4)
4. $$Q\left( {4\sqrt 2 } \right)$$
Option 3 : Q(4)
## Frequency Shift Keying (FSK) MCQ Question 5 Detailed Solution
Concept:
FSK Modulation:
In FSK: transmission of 1 is represented as:
s1(t) = Ac cos 2π fHt
Transmission of 0 is represented as:
s2(t) = Ac cos 2π fLt
and the Bit error probability $$= Q\left[ {\sqrt {\frac{{{E_d}}}{{2{N_0}}}} } \right]$$
Where Ed is the energy of s1(t) – s2(t)
$${E_d} = \mathop \smallint \limits_0^{{T_b}} \{ {s_1}\left( t \right) - {s_2}\left( t \right)\}^2\;dt\;$$
$${E_d} = \mathop \smallint \limits_0^{{T_b}} s_1^2\left( t \right)dt + \mathop \smallint \limits_0^{{T_b}} s_2^2\left( t \right)dt - 2\mathop \smallint \limits_0^{{T_b}} {s_1}\left( t \right){s_2}\left( t \right)dt$$
Since s1(t) & s2(t) are orthogonal, we can write:
$$\therefore \;\mathop \smallint \limits_0^{{T_b}} {s_1}\left( t \right){s_2}\left( t \right)dt = 0$$
$${E_d} = \mathop \smallint \limits_0^{{T_b}} s_1^2\left( t \right)dt + \mathop \smallint \limits_0^{{T_b}} s_2^2\left( t \right)dt$$
$${E_d} = \frac{{A_c^2{T_b}}}{2} + \frac{{A_c^2{T_b}}}{2} = A_c^2{T_b}$$
$$BER = Q\left( {\sqrt {\frac{{A_c^2{T_b}}}{{2{N_0}}}} } \right)$$
Analysis:
Given:
Ac = α = 4 mV
$$\frac{{{N_0}}}{2} = 0.5 \times {10^{ - 12}}\;w/Hz$$
N0 = 10-12 w/Hz
$${T_b} = \frac{1}{{{R_b}}} = \frac{1}{{500 \times {{10}^3}}}$$
Tb = 0.2 × 10-5
Tb = 2 × 10-6 sec.
$$BER = Q\left( {\sqrt {\frac{{A_c^2{T_b}}}{{2{N_0}}}} } \right)$$
$$BER = Q\left( {\sqrt {\frac{{{{\left( {4 \times {{10}^{ - 3}}} \right)}^2} \times 2 \times {{10}^{ - 6}}}}{{2 \times {{10}^{ - 12}}}}} } \right)$$
$$BER = Q\left( {\sqrt {\frac{{16 \times {{10}^{ - 6}} \times 2 \times {{10}^{ - 6}}}}{{2 \times {{10}^{ - 12}}}}} } \right)$$
$$BER = Q\left( {\sqrt {16} } \right)$$
BER = Q(4)
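Plugging the given numbers in (a Python sketch, not part of the original solution):
import math
Ac = 4e-3        # alpha, in volts
N0 = 1e-12       # since the two-sided PSD N0/2 = 0.5e-12 W/Hz
Tb = 1 / 500e3   # bit duration for a 500 kbps data rate
print(math.sqrt(Ac**2 * Tb / (2 * N0)))   # 4.0, hence BER = Q(4)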
# In binary frequency shift keying (FSK), the given signal waveforms are u0(t) = 5 cos(20000πt); 0 ≤ t ≤ T, and u1(t) = 5 cos(22000πt); 0 ≤ t ≤ T, where T is the bit-duration interval and t is in seconds. Both u0(t) and u1(t) are zero outside the interval 0 ≤ t ≤ T. With a matched filter (correlator) based receiver, the smallest positive value of T (in milliseconds) required to have u0(t) and u1(t) uncorrelated is
1. 0.25 ms
2. 0.5 ms
3. 0.75 ms
4. 1.0 ms
Option 2 : 0.5 ms
## Frequency Shift Keying (FSK) MCQ Question 6 Detailed Solution
Concept:
If two signals are uncorrelated then:
$$\mathop \smallint \limits_0^T {u_0}\left( t \right){u_1}\left( t \right) = 0$$
Calculation:
$$\smallint 5\cos \left( {20,000\;\pi t} \right).5\cos \left( {22,000\;\pi t} \right)dt = 0$$
$$\frac{{25}}{2}\smallint \left[ {\cos \left( {42000\;\pi t} \right) + \cos \left( {2000\;\pi t} \right)} \right]dt = 0$$
$$\frac{{25}}{2 }\left[ {\frac{{\sin \left( {42000\;\pi T} \right)}}{{42000\;\pi }} + \frac{{\sin \left( {2000\;\pi T} \right)}}{{2000\;\pi }}} \right] = 0$$
Both terms should be individually zero, i.e.
sin 2000 πT = 0
$$\begin{array}{l} \Rightarrow 2000\;\pi T = \pi \left[ {smallest} \right]\\ T = \frac{1}{{2000}} \end{array}$$
T = 0.5 msec
So, at T = 0.5 msec both terms are zero.
# For a given data rate, the BW required with m-ary transmission is smaller than for binary transmission by
1. log2m
2. $${{lo{g_2}m} \over m}$$
3. $${2 \over {lo{g_2}m}}$$
4. $${1 \over 2{lo{g_2}m}}$$
Option 1 : log2m
## Frequency Shift Keying (FSK) MCQ Question 9 Detailed Solution
Let the symbol duration be Ts and let n be the number of bits carried by each symbol, so that the bit rate is:
Rb = n / Ts
For m-ary signaling n = log2m, so for a fixed bit rate Rb the symbol rate is:
1 / Ts = Rb / log2m
The transmission bandwidth is proportional to the symbol rate, so the required ratio of bandwidth between binary and m-ary transmission will be:
$$\frac{BW_{binary}}{BW_{m-ary}} = \frac{R_b}{R_b/\log_2 m} = \log_2 m$$
Hence, for a given data rate, the bandwidth required with m-ary transmission is smaller than for binary transmission by a factor of log2m.
|
# Python API¶
## Pumping Station Mixin¶
class rtctools_hydraulic_structures.pumping_station_mixin.Pump(optimization_problem, symbol)[source]
Bases: rtctools_hydraulic_structures.util._ObjectParameterWrapper
Python Pump object as an interface to the Pump object in the model.
discharge()[source]
Get the state corresponding to the pump discharge.
Returns: MX expression of the pump discharge.
head()[source]
Get the state corresponding to the pump head. This depends on the head_option that was specified by the user.
Returns: MX expression of the pump head.
class rtctools_hydraulic_structures.pumping_station_mixin.Resistance(optimization_problem, symbol)[source]
Bases: rtctools_hydraulic_structures.util._ObjectParameterWrapper
Python Resistance object as an interface to the Resistance object in the model.
discharge()[source]
Get the state corresponding to the discharge through the resistance.
Returns: MX expression of the discharge.
head_loss()[source]
Get the state corresponding to the head loss over the resistance.
Returns: MX expression of the head loss.
class rtctools_hydraulic_structures.pumping_station_mixin.PumpingStation(optimization_problem, symbol, pump_symbols=None, **kwargs)[source]
Bases: rtctools_hydraulic_structures.util._ObjectParameterWrapper
Python PumpingStation object as an interface to the PumpingStation object in the model.
__init__(optimization_problem, symbol, pump_symbols=None, **kwargs)[source]
Initialize the pumping station object.
Parameters:
• optimization_problem – OptimizationProblem instance.
• symbol – Symbol name of the pumping station in the model.
• pump_symbols – Symbol names of the pumps in the pumping station.
pumps()[source]
Get a list of Pump objects that are part of this pumping station in the model.
Returns: List of Pump objects.
resistances()[source]
Get a list of Resistance objects that are part of this pumping station in the model.
Returns: List of Resistance objects.
class rtctools_hydraulic_structures.pumping_station_mixin.PumpingStationMixin(*args, **kwargs)[source]
Relevant parameters and variables are read from the model, and from this data a set of constraints and objectives are automatically generated to minimize cost.
pumping_stations()[source]
User problem returns list of PumpingStation objects.
Returns: A list of pumping stations.
rtctools_hydraulic_structures.pumping_station_mixin.plot_operating_points(optimization_problem, output_folder, plot_expanded_working_area=True)[source]
Plot the working area of each pump with its operating points.
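A minimal usage sketch (not taken from the documentation): the usual RTC-Tools base classes that would normally be mixed in are omitted, and the class name ExamplePumpingProblem and the model symbol 'pumping_station' are illustrative assumptions.
from rtctools_hydraulic_structures.pumping_station_mixin import (
    PumpingStation,
    PumpingStationMixin,
)

class ExamplePumpingProblem(PumpingStationMixin):
    """User problem exposing its pumping stations to the mixin."""

    def pumping_stations(self):
        # Wrap the model object named 'pumping_station' (hypothetical symbol);
        # the mixin then generates the cost-minimising constraints and objectives.
        return [PumpingStation(self, 'pumping_station')]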
## Weir Mixin¶
class rtctools_hydraulic_structures.weir_mixin.Weir(optimization_problem, name)[source]
Bases: rtctools_hydraulic_structures.util._ObjectParameterWrapper
Python Weir object as an interface to the Weir object in the model.
In the optimization, the weir flow is implemented as constraints. This means that the optimization calculates a flow (not a weir height!) that is forced by the constraints to be a physically possible weir flow.
discharge()[source]
Get the state corresponding to the weir discharge.
Returns: MX expression of the weir discharge.
class rtctools_hydraulic_structures.weir_mixin.WeirMixin(*args, **kwargs)[source]
weirs()[source]
User problem returns list of Weir objects.
|
# Mathematica Which font does mathematica use?
1. Mar 15, 2016
### JorisL
Hey,
I have this problem where Mathematica doesn't show certain symbols (Greek letters for example).
In my current working document I have introduced other symbols but in the end it'll become a mess because several variables show up with similar names.
When I export as pdf it does show the greek symbols.
What's worse is that it doesn't show the exponential function's "e" in full, nor the imaginary unit.
I tried finding out what causes this (I'm on linux with mathematica 10.0) but can't seem to find the answer.
Perhaps I should find out which font Mathematica normally uses. (which isn't that clear)
Any ideas how to fix this? I installed the mathematica fonts as a longshot which didn't work.
Thanks,
Joris
2. Mar 15, 2016
### Staff: Admin
Is there anything in preferences?
3. Mar 15, 2016
### JorisL
In the option inspector (Preferences > Advanced > Option Inspector) I found a mention of "Bitstream Vera Sans".
Upon inspection it turned out I didn't have it installed, I installed it but to no avail.
So I checked the font again, it doesn't support the Greek alphabet.
http://www.dafont.com/bitstream-vera-sans.font?text=Ξ+ξ
Now I'm checking out what the following command does in the global preferences file
Code:
PrivatePaths->{"Fonts"->{FrontEndFileName[{$PreferencesDirectory, "Fonts", "Type1"}], FrontEndFileName[{$TopDirectory, "SystemFiles", "Fonts", "Type1"}]}}
I'll keep you posted.
4. Mar 15, 2016
### JorisL
Well I'm stuck; the font system Mathematica uses really sucks (pardon my French).
I've been able to recover some Greek symbols by removing them from FontMap.tr, but they aren't really legible then.
Both the imaginary $i$ and the symbol used for Euler's constant $e$ don't show up that way.
Edit:
My current work around is to print to pdf whenever I evaluate some untested expression.
That way the pdf reloads on my second screen.
5. Mar 17, 2016
### Hepth
6. Mar 17, 2016
### JorisL
|
## Writing a BASIC Interpreter — Part 4
Alternative title: how to implement floating point arithmetic from scratch (the hard way)
## Introduction
The above picture might not look terribly impressive, but a closer inspection should reveal something strange: doesn’t the precision seem to be a bit too low for a modern Prolog system (sadly, I’m not running my interpreter in MICRO-PROLOG)?
In this entry I grab the bull by the horns and completely rewrite the arithmetic predicates of the interpreter, previously relying on Prolog's built-in is/2, into my homebrewed, purely logical implementation. I'd be the first one to admit that this might not sound like a worthwhile endeavour, but of all the things I've considered in this project, this is perhaps the only contribution which in principle might be of some use. Writing a high-level interpreter for a programming language is much easier than writing a full-blown emulator, and not all language features need a low-level implementation. Hence, with this entry I want to show that even in Prolog, where we in general have little control over such matters, we can simulate low-level features of imperative programming languages without too much effort. The problem is that it's rather unlikely that there's any BASIC-80 software still in use where inaccurate floating point arithmetic would be important, but one never knows!
## Constraints
I decided to implement these arithmetic operations without using Prolog's built-in arithmetical support (is/2 and friends). Mainly because it's fun to build something from scratch, but also to refrain from using is/2 unless it actually simplifies the problem to be solved. The sole exception to this rule was predicates for outputting integers and floating point numbers, where it's more helpful to print the (floating point) number represented by the byte sequence rather than just printing the internal representation, and it's easier to display these values if one is allowed to use exponentiation and multiplication.
## Representation
Before we begin I have a confession to make: I initially made a quite poor choice regarding representation and then stubbornly stuck with it. 'Poor' is perhaps not the right word; 'bizarre' is probably closer to the truth. For the moment, let's ignore floating point numbers and simply assume that we want to represent natural numbers. The easy representation, which can be found in many textbooks and which is typically taught to students, is to represent the number '0' by itself and then represent any larger number by a nested term of the form s(…(s(0))). This is sometimes called a unary representation. What's the problem? Representing a number $n$ requires a nested term of depth $n$, which is exponentially worse than a binary representation where a number $n$ can be represented with only log(n) bits. Such an inefficiency would propagate through all arithmetical predicates of interest (addition, multiplication, and so on), and render any non-trivial program more or less useless. Moreover, while (a generalisation of) this representation could be used to represent rational numbers, it would not be meaningful to use it to represent floating point numbers in a fixed format, nor to accurately simulate e.g. overflow. Thus, our representation of numbers (and later on, floating point numbers) is going to be as sequences of bits, for which we have two obvious choices.
• As a list of bits [b1, …, bn].
• As a flat term bits(b1,…,bn).
But since we want to represent both integers (2 bytes) and floating point numbers in the MSB format (4 bytes), the latter representation would result in a lot of redundant and tedious code (just trust me on this!). However, there’s a compromise: we could represent bytes as flat terms of the form byte(b1,…,b8) and then (e.g.) represent an integer as int([Byte1, Byte2]). Then we would only have to write low-level predicates operating on bytes, but not arbitrary sequences of bits, which would be very inconvenient when working with flat terms. My arguments in favour of this representation when compared to the bit list representation were as follows:
• A list of bits is the wrong representation since some operations which should be O(1) are O(n), where n is the number of bits.
• A hard coded byte representation makes it easier for a structure sharing system to do its job properly, and could also facilitate tabling (a form of memoization). E.g., tabling addition or multiplication for all floating point numbers is probably too much, but we could very easily table byte addition, and obtain a significant speed-up.
• Even if predicates working directly on bytes are a bit cumbersome to define, we only have to define a small number of such predicates, and then floating point and integer operations can be handled in a reasonably clean way.
These arguments are not wrong, but not terribly impactful either. Does it really matter if a right-shift operation takes a bit more time than it theoretically should, and is it not better to start with the more general solution and then choose a more efficient representation if the general one turns out to be too slow? Definitely, which makes it a bit embarrassing to present the code in the following sections.
## Bytes
As already remarked, we’ll represent a byte by a term byte(b1, …, b8) where each $b_i$ is a bit. For example, byte(1,0,0,1,0,0,0,1) would represent the byte 10010001. Since bytes consists of bits, let’s begin by writing a predicate which adds two bits, before we worry about solving the larger problem, and once we know how to add two bytes, we’ll worry about integer and floating point addition. But how can we accomplish this without using any built-in predicates? I’m quite convinced that if this question was asked during a programming interview then not that many would obtain an acceptable solution, but it’s actually very simple: just hardwire the required information. Thus, if the two given bits are 0, the third bit should be 0, and if one of them is 1 but the other one is 0, the result should be 1. In other programming languages we would encode this with a handful of if-statements, and in Prolog the easiest solution is to represent each such case by a fact.
add_bits(0,0,0).
But what if the two given bits are 1? Then the result should be 0, but with 1 in carry. Hence, we’ll have to add an additional argument describing the carry value, which we’ll for simplicity add as the last argument.
add_bits(0,0,0,0).
This is not bad, but now imagine the situation where we want to add two bytes. Then the strategy should be to add the least significant bits, remembering the carry, and then use the carry (out) as additional input when adding the two next bits (carry in), and so on. Hence, what we really want to describe is the relationship between two given input bits, a carry in bit, an output bit, and a carry out bit. It’s easy to see that this doubles the number of cases that we have to consider, and we obtain the following program (where the carry in bit is the fourth argument and the carry out bit is the fifth argument).
add_bits(0,0,0,0,0).
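% (the remaining seven facts of the binary full adder, reconstructed here;
%  the arguments are: bit 1, bit 2, sum bit, carry in, carry out)
add_bits(0,1,1,0,0).
add_bits(1,0,1,0,0).
add_bits(1,1,0,0,1).
add_bits(0,0,1,1,0).
add_bits(0,1,0,1,1).
add_bits(1,0,0,1,1).
add_bits(1,1,1,1,1).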
Let me just quickly digress that while this solution does require us to explicitly list all cases of interest, it’s conceptually not harder, and much easier to understand, than a solution making use of is/2. It also results in a more general program: we could e.g. ask a query of the form
add_bits(X1,X2,1,0,0)
which would give us the answer that X1 and X2 are either 0 and 1, or 1 and 0. Before we turn to byte addition we should make one observation: quite a few of the predicates which we'll define are going to make frequent use of carry in/out, but manually having to keep track of carry in/out, and remembering which argument is which, is going to be quite tedious and error prone. A better solution would be to implicitly thread this state using definite clause grammars (DCGs). This is the same solution that I used to thread the state of the interpreter so if it sounds unfamiliar it might be a good idea to re-read the previous entry. Hence, carry in and carry out will be described by state//2 and we'll understand grammar rules (written with -->) as describing a sequence of carry in/out transitions. We thus rewrite add_bits/5 as follows.
add_bits(0,0,0) --> state(0,0).
If you’re struggling to make sense of DCGs, or are of the opinion that DCGs should only be used for parsing, then simply imagine that we with the above programming scheme makes it possible for rules to have carry in/out without explicitly needing to be declared as arguments. This is going to be quite convienent later on so it’s certainly worth the effort.
Let’s now turn to the problem of adding two bytes. I’ve already outlined the algorithm, but I suspect that the solution that I’m going to present might be quite disturbing to some readers. How can we iterate over a byte term byte(b1, …, b8)? The problem is that such a term is not a ‘recursive’ data structure (like a tree or a list) and it’s rather offputting to attempt to do any form of general iteration. While we certainly could implement a crude variant of a for loop by hardwiring the arithmetic required to proceed to the next iteration (remember that I’m trying to solve this without using is/2), the resulting code would be far from elegant and needlessly complicated. In fact, since a byte has a fixed length, it’s much easier to just add the bits manually, starting with the least significant bits in position 8, and finishing with the most significant bits in position 1. This can be accomplished as follows.
add_byte(byte(X1,X2,X3,X4,X5,X6,X7,X8), byte(Y1,Y2,Y3,Y4,Y5,Y6,Y7,Y8), byte(Z1,Z2,Z3,Z4,Z5,Z6,Z7,Z8)) -->
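% (body reconstructed: chain add_bits//3 over the eight positions, least
%  significant bits first, so the carry is threaded from position 8 up to 1)
add_bits(X8, Y8, Z8),
add_bits(X7, Y7, Z7),
add_bits(X6, Y6, Z6),
add_bits(X5, Y5, Z5),
add_bits(X4, Y4, Z4),
add_bits(X3, Y3, Z3),
add_bits(X2, Y2, Z2),
add_bits(X1, Y1, Z1).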
In contrast, if we had represented bytes as lists of bits, then we could easily have solved the problem via standard recursive problem solving, and in the process also obtaining a much more general program applicable to arbitrary sequences of bits.
add_byte([], [], []) --> state(C,C).
add_byte([X|Xs], [Y|Ys], [Z|Zs]) -->   % assuming the bit lists are least significant bit first
add_bits(X, Y, Z),
add_byte(Xs, Ys, Zs).
This is the problem with the byte/8 representation in a nutshell: while it’s in principle possible to write general predicates it’s in practice much easier to just hardwire everything. For an additional example, assume that we want to write a right-shift operation where we shift in according to carry in. Such a rule could very easily be defined by:
shift_right(byte(B1,B2,B3,B4,B5,B6,B7,B8), byte(C0,B1,B2,B3,B4,B5,B6,B7)) --> state(C0,B8).
Which almost feels like cheating, but if all we want to do is to shift a given byte, there's really no reason to write a general purpose program. Before we turn to integer arithmetic I'm going to present one more simplification which is going to make it simpler to define larger operations. While defining operations on bytes (addition, subtraction, complement, and so on) is not terribly difficult, it's a bit cumbersome to chain together several byte expressions without functional notation. There's no general support for functional notation in Prolog, but it's possible to create a context where a term is going to be interpreted as a function, similar to how it's possible to use arithmetical expressions in the context of is/2. Thus, we're going to implement a rule ev//2 which takes a byte expression (written in a readable way by using standard operators) in its first argument, and evaluates the expression (in the context of a carry out/in transition) in its second argument.
%~ is defined as a unary operator.
ev(~Exp, Res) -->
ev(Exp, Res0),
neg_byte(Res0, Res).
ev(Exp1 - Exp2, Res) -->
ev(Exp2, Byte2),
ev(Exp1, Byte1),
sub_byte(Byte1, Byte2, Res).
ev(Exp1 + Exp2, Res) -->
ev(Exp1, Byte1),
ev(Exp2, Byte2),
add_byte(Byte1, Byte2, Res).
ev(Exp1 >> Exp2, Res) -->
ev(Exp1, Byte1),
ev(Exp2, Byte2),
shift_right(Byte1, Byte2, Res).
It’s of course very easy to extend ev//2 with additional operators as necessary. I also implemented the two constants b1 (short for byte 1) and b0 as:
ev(b1, byte(0,0,0,0,0,0,0,1)) --> state(C,C).
ev(b0, byte(0,0,0,0,0,0,0,0)) --> state(C,C).
But it would also be nice to define an 'assignment' operator which we can use in infix notation. To accomplish this we begin by defining an operator which is going to unify its left argument with the result of evaluating a byte expression (using ev//2). For no particular reason I choose '$='; mainly because it's not already in use.
:- op(990, xfy, $=).
X $= Y --> ev(Y,X).
Put together this makes it possible to write expressions in a much more readable way than using add_byte and sub_byte manually. For example, given a byte Byte, we could define a rule which computes the two's complement of a byte.
two_complement(Byte, ByteC) -->
ByteC $= ~Byte + b1.
This also makes it possible to define sub_byte//3 in a nice way: simply compute the two’s complement of the second byte and then use byte addition.
## Integers
Following the BASIC-80 reference manual we’ll represent integers as two bytes: int([High, Low]) where High is the byte containing the 8 most significant bits, and dually for Low. We can now quite easily define e.g. integer addition as:
add_int(int([X1,X2]), int([Y1,Y2]), int([Z1,Z2])) -->
Z2 $= X2 + Y2,
Z1 $= X1 + Y1.
All the hard work involving byte arithmetic has finally paid off! Naturally, we could quite easily define other integer operations (subtraction, multiplication, and so on) as necessary. It would also be straightforward to increase the number of bytes and it would even be possible to write general predicates working with an arbitrary number of bytes. Thus, while I’m still a bit ashamed of my hardwired byte predicates, at least the top-level code can be written in a reasonably clean way.
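As a small sanity check, adding 255 and 1 shows the carry out of the low byte being consumed by the high byte addition (same query convention as in the earlier sketch, with an initial carry of 0):
?- phrase(add_int(int([byte(0,0,0,0,0,0,0,0), byte(1,1,1,1,1,1,1,1)]),
                  int([byte(0,0,0,0,0,0,0,0), byte(0,0,0,0,0,0,0,1)]),
                  Z), [0], _).
Z = int([byte(0,0,0,0,0,0,0,1), byte(0,0,0,0,0,0,0,0)]).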
## Floating Point Numbers
With the approach in the previous section we can represent integers in the range −32768 to 32767. Not terribly impressive. While it’s not possible to discretely encode more objects with 16 bits, we can circumvent this limit if we shift priorities. For example, imagine that we only care about numbers of the form $2^n$ and store such numbers by encoding the exponent $n$. Then even with two lowly bytes we could store numbers larger than the number of atoms in the observable universe. The problem, of course, is that such a representation would be incredibly imprecise, and a floating point representation is a nice middle ground where, in addition to an exponent, we store a significand $s$ in a fixed number of bits so that a number is represented by a term $s \cdot 2^{n}$. The point of such a representation is that if we, say, add two large numbers, then we might not have enough bits to accurately add the significands of the two numbers, but we can get reasonably close by making sure that the exponents and the most significant bits in the significands are added correctly. These days floating point arithmetic is typically taken care of with dedicated hardware, but in the heyday of BASIC-80 floating point operations were quite often implemented as a software layer in the interpreter.
Let’s now have a look at how we can add support for floating point operations. Warning: this should not be viewed as a complete implementation but rather as a proof of concept, and I’m only going to superficially describe how to add two positive floating point numbers. Otherwise this entry would be even more delayed than it already is. So let’s turn to the technical details: BASIC-80 uses the Microsoft Binary Format (MBF) in either single or double precision, where a number is represented by the product of a significand and a power of two given by an exponent (both represented in binary). A single precision floating point number is then represented as a sequence of 4 bytes B1, B2, B3, B4 where:
• B1 is the exponent (-127,…, -1 are represented by 1…127 and 0…127 are represented in the range 128…255).
• The first bit in B2 is the sign bit (0 is positive and 1 is negative) and the remaining 7 bits are the start of the significand.
• B3 and B4 are the rest of the significand (thus, 23 bits in total).
Representing this in Prolog via our byte representation is of course very straightforward: a floating point number is simply a term float([B1,B2,B3,B4]) where B1,B2,B3,B4 are the bytes in question, and we already know everything there is to know about bytes. But how do we actually obtain a number in this representation? To make a long story short, one has to:
• Convert a decimal fraction to a binary fraction with a fixed number of bits.
• Convert the binary fraction to scientific notation by locating the radix point.
• Convert the binary scientific notation to the single precision floating point number.
All of these steps are rather straightforward, albeit tedious, and my Prolog implementation does not really stand out in any particular way. Hence, I’m simply going to assume that all numbers have been converted to the internal byte representation in a suitable way. Next, to add two floating point numbers we:
• Compare the exponents.
• If they are equal, then add the significands, possibly also increasing the exponent if necessary.
• If one exponent is larger than the other then shift the significand of the smaller number, add the significands, and then shift back.
Thus, the first step is easy, assuming a predicate compare_byte/3 which takes two bytes and returns the order between them (=, <, or >). Please don’t ask how it’s defined (I didn’t hard-code it with 16 distinct cases, or did I?).
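For completeness, here is one way such a predicate could be written (a sketch, certainly not the author’s secret definition): compare the bits from the most significant position downwards and stop at the first difference.
compare_byte(byte(X1,X2,X3,X4,X5,X6,X7,X8), byte(Y1,Y2,Y3,Y4,Y5,Y6,Y7,Y8), Order) :-
compare_bits([X1,X2,X3,X4,X5,X6,X7,X8], [Y1,Y2,Y3,Y4,Y5,Y6,Y7,Y8], Order).
compare_bits([], [], =).
compare_bits([X|Xs], [Y|Ys], Order) :-
compare_bit(X, Y, Order0),
compare_bits0(Order0, Xs, Ys, Order).
%If the current bits already differ then we are done, otherwise keep comparing.
compare_bits0(<, _, _, <).
compare_bits0(>, _, _, >).
compare_bits0(=, Xs, Ys, Order) :- compare_bits(Xs, Ys, Order).
compare_bit(0, 0, =).
compare_bit(1, 1, =).
compare_bit(0, 1, <).
compare_bit(1, 0, >).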
add_float(float([E1, X1, Y1, Z1]), float([E2, X2, Y2, Z2]), F3) -->
{compare_byte(E1, E2, Order)},
add_float0(Order, float([E1, X1, Y1, Z1]), float([E2, X2, Y2, Z2]), F3).
Let’s consider the case where the two exponents are equal. The only complicating factor is that the first bit in each of X1 and X2 is not part of the significand, but represents the sign of the corresponding number, which we for the moment assume is 0 (i.e., two positive numbers). But if we simply add the significands then the carry is going to get ‘stuck’ in the sign bit, so as a workaround we’ll set the two sign bits to 1, and then add the bytes, starting with the least significant ones.
add_float0(=, float([E1, X1, Y1, Z1]), float([E1, X2, Y2, Z2]), float([E3, X4, Y4, Z4])) -->
set_bit(X1, b1, 1, NewX1),
set_bit(X2, b1, 1, NewX2),
Z3 $= Z1 + Z2,
Y3 $= Y1 + Y2,
X3 $= NewX1 + NewX2,
%Also adds the carry from the previous operation. Hence, the exponent will increase.
E3 $= E1 + b0,
%The exponent increased so the significand has to be shifted right.
X4 $= X3 >> b1,
Y4 $= Y3 >> b1,
Z4 \$= Z3 >> b1.
The case when one of the exponents is larger than the other is quite similar and we just have to shift the significand of the smaller number appropriately. It’s then in principle not difficult to add support for more floating point operations, but it’s of course a non-trivial amount of work which is not suitable material for a blog entry.
## The end?
Is there a moral to all of this? Not really: if we want to, then we can simulate low level concepts of programming languages in Prolog, and although it feels a bit weird, it’s not really that much more work than in other programming languages. In the unlikely case that there’s a reader who’s interested in delving deeper into similar topics: tread carefully, and take the easy way out and represent bytes as lists instead of flat terms.
I’ve managed to cover more or less everything that I wanted in these four entries, and missing features are either very hard to implement (e.g., implementing support for assembly subroutines) or similar to existing concepts, and thus not terribly interesting. However, I don’t want to leave the safe confinement of the 80’s computer industry just yet, so I plan to write one additional entry, with a secret topic. Hint: it’s time for some alternate history: what if Bill Gates and Paul Allen felt the heat of the fifth generation computer project and in a flash of panic created a bizarre Frankenstein between Prolog and BASIC?
## Introduction
We now have a rudimentary but more or less functional BASIC interpreter in place which we’ll now attempt to extend so that it can handle a more meaningful language. In this entry we’re first going to implement an under-the-hood improvement which is going to make the interpreter a bit cleaner and easier to maintain. Then we’ll have a look at two major missing features: multi-dimensional arrays and functions. As a teaser for the next entry we’re also going to add a simple type system so that we can distinguish between floating point numbers and integers.
## Implicit threading of state via definite clause grammars
Recall that the main idea behind the interpreter is that each predicate changes the internal state of the interpreter by passing around Comp objects. Hence, pretty much every predicate of the interpreter has the form:
p(Comp0, …, Comp) :-
p1(Comp0, …, Comp1),
p2(Comp1, …, Comp2),
.
.
.
pn(Compn-1, …, Comp).
This means that the predicate p describes the updated state Comp, obtained from Comp0 via the sequence of transitions induced by p1, …, pn. For example, we implemented the LET command as follows.
interpret_statement(Comp0, let(id(I), E), Comp) :-
eval_exp(Comp0, E, V),
set_mem(Comp0, id(I), V, Comp).
There’s nothing inherently wrong with this approach: we see clearly that eval_exp/3 evaluates the expression using Comp0, but is not allowed to change the state of the interpreter, and that set_mem/4 changes the state from Comp0 to Comp. But imagine that we’d like to add error handling to the interpreter, which could be implemented by augmenting the state attribute of the Comp object, currently only ‘ok’ or ‘end’, with a new ‘error’ state. If we, for example, attempt to divide a number by zero, or index an array with an invalid parameter, then we should change the state to ‘error’, and possibly supply a suitable error message. But this is not currently possible in eval_exp/3 since it’s not allowed to change the state. Clearly, we could fix this by adding an additional output argument to eval_exp/3 which represents the new state, but is it really a good idea to make such a drastic change when the vast majority of rules defining expressions don’t care about the new state?
Thankfully, this is a rather frequently occurring problem in Prolog, so there’s a reasonable solution available. The basic idea is to assume that each predicate takes a Comp object as input and returns an updated Comp object as output, and if we make this assumption for every relevant predicate, we can simply hide these arguments. We could accomplish this by a macro, using term expansion, but it’s actually easier to use Prolog’s support for definite clause grammars (DCGs) for this purpose. This is a formalism for parsing where grammar rules are written using --> instead of :-, and where everything on the right-hand side describes the language in question (exactly how production rules in a context-free grammar work) and where we can use terms and arguments as we normally do in logic programming. Since this isn’t an entry on parsing I won’t go into further details, but simply state that we’re going to use DCG rules to implicitly pass around Comp objects instead of strings. With this approach we could handle LET statements as follows (assuming that eval_exp and set_mem have already been rewritten in this format).
interpret_statement(let(id(I), E)) -->
eval_exp(E, V),
set_mem(id(I), V).
To describe a basic operation, e.g., set_mem//2 (it’s common practice to denote a DCG rule p taking n arguments by p//n), we need to access the current state. In principle we could write these operations as ordinary predicates, explicitly writing out the arguments hidden away with the DCG notation, but there’s no strict standard among DCG implementations and it’s better to not assume a particular ordering among the arguments. To solve this I’m borrowing a trick from Markus Triska’s The Power of Prolog which introduces a primitive state//2 which can be used to access the current state and describe an updated state. It can be defined and used as follows (see here for an explanation).
state(S0, S), [S] --> [S0].
set_mem(Var, Val) -->
state(Comp0, Comp),
{set(Comp0.mem, Var, Val, NewMemory), Comp = Comp0.put([mem:NewMemory])}.
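The corresponding read operation can be described in the same style. A possible sketch (the memory is simply looked up in the current state, which is left unchanged):
get_mem(Var, Val) -->
state(Comp, Comp),
{get(Comp.mem, Var, Val)}.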
Note that everything inside the curly brackets is treated as ordinary Prolog code. Hence, from now on, whenever we are describing operations accessing or modifying Comp objects, we’ll use DCG notation, if we want to access a state we’ll use state//2, and if we want to use ordinary Prolog code we’ll wrap this inside curly brackets. Each rule of this form may then be understood as describing a sequence of transitions between Comp states, and if we don’t care about the states then we don’t even have to refer to them. For example, we may now define the case handling the addition operator when evaluating arithmetical expressions as follows.
eval_exp(E1 + E2, V) -->
eval_exp(E1, V1),
eval_exp(E2, V2),
{eval_plus(V1, V2, V)}.
If we later on decide to implement error handling then it might for example be the case that eval_exp(E1, V1) changes the current state to ‘error’, which we’ll later detect in the topmost loop in interpret_line//1. For example, it could be defined along these lines.
interpret_line(ok) -->
get_statement_and_increment_line(S),
interpret_statement(S),
get_status(Status),
interpret_line(Status).
interpret_line(end) --> state(Comp, Comp).
interpret_line(error) --> … %write a suitable error message.
If you don’t like this usage of DCGs and this implicit threading of state then simply try to view it as a way of saving a few keystrokes and avoid having to write out the Comp0, Comp1, …, Comp transitions manually, as we did before.
## Multidimensional arrays
We’re now going to have a stab at implementing arrays. When reading the reference manual I was actually a bit surprised to see that BASIC-80 not only supports 2-dimensional arrays, but in fact up to 255 dimensions. Not bad! It seems that the implementation for the UC-2200 watch only supports 5 dimensions but that’s honestly more than I expected as well, and probably more than I’ve ever used in an actual program. Hence, how to add support for arrays? The first thing to observe is that we cannot obtain $O(1)$ read/write operations unless we enforce a (small) constant limit on the allowed number of elements. For example, assume that we only allow 4 elements in each array. Then we could implement a set/4 operation as follows.
.
.
.
set(array(X1,X2,X3,X4), 3, New, array(X1,X2,New,X4)).
.
.
.
Since that’s clearly not desirable, and since we don’t want to use impure features such as destructive assignment unless there’s a very good reason for doing so, we’ll settle for $O(\log n)$ read and write operations instead, provided by e.g. library(assoc). A list would in all likelihood also be an acceptable choice but wouldn’t really be that much easier to implement. But let’s now turn to the implementation. Arrays are defined and used in the following way.
10 DIM a(10)
20 FOR i = 1 TO 10 STEP 1
30 LET a(i) = a(i-1) + i
40 NEXT i
Hence, the DIM statement at line 10 informs the interpreter that ‘a’ is an array variable with 10 elements, each of which is initially set to 0. Individual elements can be accessed and modified in the usual way, arithmetical expressions can be used as a subscript, and if the subscript is not within the given dimensions an error is reported. Unfortunately:
• indexing starts at 0 but can be changed with the OPTION BASE statement, and
• the DIM statement is optional and if it’s omitted the interpreter will allocate 10 elements for the array variable in question.
Both these design decisions are incredibly baffling. I might be able to excuse the first one, but the second one is completely nonsensical. Sometimes it’s possible to understand the design decisions behind BASIC simply by understanding the constraints at the time, e.g., dynamic NEXT statements are not a nice language feature, but a natural consequence if the program is interpreted line by line. But there’s no excuse at all for the second decision. Why, exactly, is the default size 10? What’s the use case for implicitly allocating an array with 10 elements? If there’s no matching DIM statement then it seems likely that the programmer either forgot, or misspelled a variable, and this kind of default behaviour simply makes it harder to locate the bug. Moreover, this only works for single-dimensional arrays, and implementing this is more work than simply reporting an error. But we’ll worry about implementing this later. First, let’s concentrate on how a multi-dimensional array should be represented. We have a couple of options.
• Associate each dimension with an ordered tree, so that e.g. a 2-dimensional array would be a tree of trees, and where the nodes of the second layer of trees would store the actual elements (similarly to how one could use a list of lists to represent a 2-dimensional array).
• A multi-dimensional array with $n$ dimensions is represented by an ordered tree with keys over $n$ dimensions. Then an array variable simply points to such a tree.
• Store each element in the tree above directly in the main memory, without introducing an additional tree.
The third option is actually quite a bit easier and requires only modest changes. Each array element is then going to be stored by its multi-dimensional index, which we’ll simply represent by a list. Hence, to refer to an element of an array all we need is the identifier of the array variable and a list of integers, which we’ll wrap inside an array/2 term. For example, if we have an array ‘a’ with two rows and three columns, then we’ll refer to its elements by array(a, [0,0]), array(a, [0,1]), …, array(a, [1,2]), all of which will be stored directly in the main memory. Then we need to support assignment to arrays within LET statements and evaluating an array expression within an expression.
interpret_statement(let(array(I,Es), E)) -->
%We assume a predicate which evaluates a list of expressions using eval_exp//2.
eval_exps(Es, Vs),
eval_exp(E, V),
set_mem(array(I,Vs), V).
Note that we define the operation via a DCG rule as described earlier. Hence, we simply evaluate the list of dimensions (e.g., maybe the array is indexed by a loop variable), evaluate the right hand side, and write the array term to memory. Evaluating array expressions is then easy.
eval_exp(array(I, Ds), V) -->
eval_exps(Ds, Vs),
get_mem(array(I, Vs), V).
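Both rules rely on the assumed helper eval_exps//2, which simply maps eval_exp//2 over the list of subscript expressions. A possible sketch:
eval_exps([], []) --> state(Comp, Comp).
eval_exps([E|Es], [V|Vs]) -->
eval_exp(E, V),
eval_exps(Es, Vs).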
But what about the DIM statement? The simplest approach is to do nothing and ignore the statement.
interpret_statement(dim(_I, _Ds)) --> state(Comp, Comp).
This actually works, and only took a couple of lines of code to implement. Unfortunately, there’s a problem with the above code: it’s too general! With this implementation arrays don’t have a fixed size and can be resized arbitrarily. This is, in principle, good, but takes us a bit too far away from BASIC-80. Hence, we have no other choice but to artificially degrade our implementation. We have two options:
• Store the identifier of each array together with its dimension in the main memory. Each time an element is accessed we check whether the given bounds are valid. Hence, we only store as many elements as needed.
• Initially, create one element in the tree for each (multi-dimensional) index of the array. If an invalid index is used then the get_mem operation is simply going to fail (which in the future could be extended to report an error).
The second option makes it slightly easier to implement the additional condition that arrays initially should have their elements set to 0, so that’s the one that I’m going to choose, but naturally I feel slightly disturbed by this design choice. Writing a memory allocation procedure in Prolog? Really? Fortunately it’s not terribly complicated. We’ll introduce a rule malloc//2 which takes an identifier and a list of dimensions and describes the Comp object obtained from inserting an element for each index within the given dimensions. There are several ways to solve this but a straightforward solution is to define an auxiliary predicate which, given a list of dimensions, returns a list of integers within these bounds. For example, if the list of dimensions is [2,4] then we should first return [0,0], then [0,1], …, [1,3], and afterwards we’ll simply use findall/3 to find all answers. To ensure that our predicates are internally consistent we’ll also wrap each integer in this list inside an int/1 term. After having constructed this list we’ll simply walk through it and for each index create a new memory entry and set it to 0. Hence, we obtain the following.
malloc(I, Ds) -->
{findall(array(I,X), integers(Ds, X), Ids)},
malloc0(Ids).
malloc0([]) --> state(Comp, Comp).
malloc0([Id|Ids]) -->
set_mem(Id, int(0)),
malloc0(Ids).
integers([], []).
integers([int(D)|Ds], [int(X)|Xs]) :-
D0 is D - 1,
%between/3 is a built-in predicate.
between(0,D0,X),
integers(Ds, Xs).
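A quick top-level check of the enumeration (a 2-by-3 array yields six indices):
?- findall(Ix, integers([int(2), int(3)], Ix), Ixs).
Ixs = [[int(0),int(0)], [int(0),int(1)], [int(0),int(2)],
       [int(1),int(0)], [int(1),int(1)], [int(1),int(2)]].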
We can now handle DIM statements as follows.
interpret_statement(dim(id(I), Ds)) -->
malloc(I, Ds).
The only remaining change is then to ensure that a LET statement stays within the original memory. I solved this by introducing a new basic predicate set_existing_mem//2 which fails if the first argument is not already present (which we have ensured by allocating the memory from the original DIM statement).
interpret_statement(let(array(I,Es), E)) -->
eval_exps(Es, Vs),
eval_exp(E, V),
set_existing_mem(array(I,Vs), V).
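A sketch of set_existing_mem//2 (the only difference from set_mem//2 is the guard that the key must already be present in the memory):
set_existing_mem(Var, Val) -->
state(Comp0, Comp),
{get(Comp0.mem, Var, _OldVal),
set(Comp0.mem, Var, Val, NewMemory),
Comp = Comp0.put([mem:NewMemory])}.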
Interestingly, we have now inadvertently introduced the possibility of memory leaks! Once an array has been allocated there’s no way to remove it. But since we’re working with a completely flat memory hierarchy this is not going to be a problem. Note that I in the end decided to not support the default behaviour of allocating an array with 10 elements if there’s no matching DIM statement. It’s bad enough as it is.
## Functions
BASIC-80 keeps delivering: it has support for functions with local variables. Amazing. Might it even be possible to use functions to achieve some level of structured programming? Absolutely not; they are incredibly restricted: a function can only take a single argument and its body consists of a single-line arithmetical expression. Hence, their only usage is really to reuse slightly larger expressions which you don’t want to retype multiple times. They are defined and used as follows.
10 DEF FN SUCC(X) = X + 1
20 LET Y = FN SUCC(FN SUCC(1))
30 PRINT Y
This defines a function SUCC with one argument X (line 10) which is used two times in line 20, and is going to bind Y to 3. Before we try to implement this a few observations should be made.
• The variable X is actually a local variable which can’t be referenced outside the function. If X occurs elsewhere it’s going to be shadowed by the local variable.
• Functions can refer to variables defined outside the function since all other variables have a global scope.
• The semantics of DEF FN … at line 10 is to define the function. If this line is never interpreted then it’s not possible to call the function. Hence, it’s not possible to place function definitions at the end of the source file, unless we use GOTO statements to interpret the definitions, and then use a GOTO statement to go back. This clunky behaviour is simply to make it possible to interpret the program line by line, without needing to scan the entire source file for function definitions.
As usual, we’re going to assume that the program has been parsed into a suitable format. Everything that we need to do in the DEF FN statement is then to write the name of the function to the main memory together with its variable and body, which we’ll wrap inside an fn/2 term.
interpret_statement(def_fn(Id, Arg, Body)) -->
set_mem(Id, fn(Arg, Body)).
The moderately interesting aspect is then to evaluate functions. Conceptually, this is not so difficult. We already know the name of the function, its local variable, and its body, so all that we have to do is to (1) evaluate the argument, and (2) evaluate the body of the function with respect to this value. However, recall that the argument of the function is supposed to be local to the function. Hence, after evaluating the function call there should be no traces left in the main memory of the local variable, and if there exists a global variable with the same name then it should not be overwritten by the local variable. I considered two solutions:
• Wrap local variables inside a special term so that they don’t collide with global variables. I.e., the local variable X in the BASIC program above might internally be represented by a term local(succ, x).
• Extend the memory so that it supports multiple frames. When evaluating a function we’ll then push a new frame containing the local variable.
The first option pretty much means that we solve the problem when parsing: when parsing a DEF FN statement we’ll wrap the argument of the function in a special term, and when parsing the body of the function we’ll treat each occurrence of the local variable in a special way. If we choose the second option then we’ll have to extend the basic operations, get_mem//2 and set_mem//2, so that they support multiple frames. It certainly feels like the first option is much simpler, and might thus be preferable, but feelings can be deceiving. I actually implemented both these strategies, starting with the first one, and it turned out to be more complicated and less robust than the second strategy. Moreover, the second option increases the flexibility of the interpreter and makes it much easier to support more powerful functions/procedures, if we so desire. Hence, that’s the option I’m going to present. Recall that the memory of the Comp object, Comp.mem, is represented by an ordered tree. To support multiple frames we’ll extend this to a list of ordered trees instead. In get_mem//2 we’ll then attempt to find the variable in question in the current frame, and if that fails then we’ll try the second frame, and so on. In set_mem//2 we’ll simply assign a variable a value with respect to the current frame (although it would be easy to search the other frames if the variable is not present).
get_mem(Var, Val) --> state(Comp, Comp), {search_mem(Comp.mem, Var, Val)}.
%Auxiliary predicate.
search_mem([Frame|Frames], Var, Val) :-
( get(Frame, Var, Val) ->
true
;
search_mem(Frames, Var, Val)
).
set_mem(Var, Val) -->
state(Comp0, Comp),
{Comp0.mem = [First|Rest],
set(First, Var, Val, NewMemory),
Comp = Comp0.put([mem:[NewMemory|Rest]])}.
I’m not a huge fan of the if-then-else construct -> in Prolog since it’s easy to misuse and leads to complicated, typically nested definitions, which are impossible to understand for anyone except the original programmer. But in search_mem/3 it’s exactly what we want to express: if the variable can be found in the current frame then we’re done, otherwise we continue searching. To solve this without if-then-else we’d either have to use negation as finite failure, which is implemented so poorly in Prolog that it’s depressing to even acknowledge its existence, or write an additional predicate which given an ordered tree and a key, returns either the value in question, or a special null value. Then we could branch upon the return value and decide whether to continue searching or not. But this is hardly worth the effort.
Last, we implement push_frame//0, which pushes an empty frame, and the dual operation pop_frame//0, which removes the current frame. These can then be used to evaluate function applications in the context of arithmetical expressions.
push_frame -->
state(Comp0, Comp),
{empty_assoc(NewFrame),
NewMem = [NewFrame|Comp0.mem],
Comp = Comp0.put([mem:NewMem])}.
pop_frame -->
state(Comp0, Comp),
{Comp0.mem = [_First|Rest],
Comp = Comp0.put([mem:Rest])}.
eval_exp(fn(Id, Arg), V) -->
eval_exp(Arg, ArgVal),
get_mem(Id, fn(Var, Body)),
push_frame,
set_mem(Var, ArgVal),
eval_exp(Body, V),
pop_frame.
Hence, although it initially felt like adding support for multiple frames would require large changes in the interpreter, it turned out that we only needed to change/add a couple of lines of code.
## Adding preliminary support for floating point numbers
At this stage the interpreter supports quite a lot of features but we have (to be precise: I have, I take full responsibility!) made a large simplifying assumption: all numbers are assumed to be integers, represented via int/1 terms, and they are evaluated via Prolog by using is/2. Naturally, we want to support floating point numbers as well. However, in the 8k version of BASIC-80 it’s not possible to declare a variable as an integer; in the extended version this can be done by suffixing the variable name in question by a percentage sign, e.g.:
LET X% = 1
LET Y = 1
would declare X% to be an integer and Y to be a (single precision) floating point number. But this is not possible in the 8k version so all numeric variables are assumed to be (single precision) floating point numbers. However, I’m unsure whether loop variables used in FOR NEXT loops should then be of integer or floating point type. If we read the reference manual literally then they should be floating point, but this would incur a huge performance penalty since many machines at the time didn’t even have hardware support for floating point numbers. Thus, if I’d have to guess, the reference manual is inconsistent, and loop variables are internally represented by integers even in the 8k version, but please let me know if you have a definite answer.
Adding support for floating point numbers is then not terribly difficult. We represent floating point numbers by terms of the form float(N) where N is the number in question. During parsing numbers will be identified as either integers or floating point numbers depending on the expressions in question, e.g., if we write LET X = 1.0 then 1.0 will be treated as a floating point number and will internally be represented by float(1.0). We then modify the evaluation of arithmetical expressions accordingly. Here I’m presenting how addition is handled but the other cases can be handled in a very similar way.
eval_exp(int(Exp), int(Exp)) --> state(Comp, Comp).
eval_exp(float(Exp), float(Exp)) --> state(Comp, Comp).
eval_exp(E1 + E2, V) -->
eval_exp(E1, V1),
eval_exp(E2, V2),
{eval_binary_op(+, V1, V2, V)}.
%Both arguments are integers. No conversion necessary.
eval_binary_op(Operand, int(V1), int(V2), Int) :-
eval_binary_int_op(Operand, int(V1), int(V2), Int).
%Both arguments are floats. No conversion necessary.
eval_binary_op(Operand, float(V1), float(V2), Float) :-
eval_binary_float_op(Operand, float(V1), float(V2), Float).
eval_binary_int_op(+, int(V1), int(V2), int(V)) :- V is V1 + V2.
eval_binary_float_op(+, float(V1), float(V2), float(V)) :- V is V1 + V2.
Then we simply add additional cases to eval_binary_op/4 to fill in the missing cases. E.g., if one argument is an integer and the other a floating point number, then the integer is converted to a floating point number. Note also that at the moment eval_binary_int_op/4 and eval_binary_float_op/4 do not differ in any meaningful way since we use is/2 in both cases. This, however, will change drastically soon!
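For concreteness, the two mixed cases mentioned above might look as follows (a sketch, assuming a small conversion predicate int_to_float/2 which is not part of the original code):
%One argument is an integer and the other a float: convert the integer and use the
%floating point operation.
eval_binary_op(Operand, int(V1), float(V2), Float) :-
int_to_float(int(V1), float(F1)),
eval_binary_float_op(Operand, float(F1), float(V2), Float).
eval_binary_op(Operand, float(V1), int(V2), Float) :-
int_to_float(int(V2), float(F2)),
eval_binary_float_op(Operand, float(V1), float(F2), Float).
int_to_float(int(V), float(F)) :- F is float(V).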
## Summary and outlook
We added several important features to the interpreter, and at this stage it can handle a quite large subset of BASIC-80. However, it works a bit too well for certain programs. For example, if we write a program computing the factorial of a given number then the interpreter is happily going to report that the factorial of 20 is 2432902008176640000, thanks to SWI-Prolog’s nice support for big integers. This is clearly not something that an actual 8-bit computer such as the Z80 would be able to manage, so we’re going to have a look at how one can accurately simulate low level arithmetic, including support for floating point numbers in the MBF format, in Prolog. I’m not going to give too much away at this stage but my goal is to implement these operations without using any built-in arithmetical operations. Hence, we’re pretty much going to have to invent a fictional ALU from scratch. Will Prolog be up for this task, or will it break under the pressure? You’ll have to wait and see until the next installment!
## Writing a BASIC Interpreter — Part 2
## Introduction
In the previous entry we designed a prototype interpreter which represented a BASIC program as a list of statements and evaluated it by (1) interpreting the current statement, and (2) recursively interpreting the rest of the statements, in the process always passing around the updated memory configuration. However, this strategy fell short when we tried to extend it to handle BASIC’s dynamic control structures, such as the GOTO statement. We encountered two major problems:
• It’s not sufficient to represent the program by a list of statements since we (1) need line numbers, and (2) need the possibility of returning to an earlier part of the program.
• It’s not correct to interpret a statement and then recursively interpret the rest of the statements, since the first statement might have changed the line number, rendering the old list of statements obsolete.
The second problem also turned out to be an issue when interpreting FOR NEXT loops, where it’s also not correct to recursively evaluate the body of the for loop and then increment the loop variable, since we don’t know whether control will ever be returned to the head of the loop. For example, consider the following program:
10 LET X = 1
20 FOR I = 1 to 10 step 1
30 LET X = X + 1
40 GOTO 60
50 NEXT I
60 PRINT X
70 PRINT I
The result of interpreting this program should be to print 2, followed by 1, since the GOTO statement implies that the NEXT statement following the FOR loop will be skipped. Moreover, assume that we insert a NEXT statement at line 80.
10 LET X = 1
20 FOR I = 1 to 10 step 1
30 LET X = X + 1
40 GOTO 60
50 NEXT I
60 PRINT X
70 PRINT I
80 NEXT I
Then the new NEXT statement will dynamically match the FOR statement, and the resulting program will print X from 2 to 10, and I from 1 to 9. Our prototype interpreter failed to handle programs like this and it seemed to be rather hard to fix with a simple patch. However, on the bright side, we can currently handle:
• LET statements.
• Arithmetical expressions.
• Boolean expressions.
Moreover, the interpreter is very easy to extend with new language constructs since one in principle only has to add a new case to the definition of interpret_statement/3. Hence, while we have to make some radical changes to the interpreter, we are going to try our hardest to not convolute interpret_statement/3 more than necessary.
## In BASIC’s Defence
Before we go back to the drawing board it might be a good idea to remind ourselves why BASIC has the form it has. Clearly, out-of-order NEXT statements is not really an intended language feature, but rather a side effect of how early BASIC interpreters were implemented.
• A compiled language is out of the question due to limited memory and incredibly slow secondary storage.
• The interpreter is likely going to be the only program shipped with the computer, so at the very least it has to support line editing.
• Since the programmer enters the program line by line, things are greatly simplified if we assume that it’s possible to tokenise and parse each statement independently.
The third point explains the bizarre nature of FOR NEXT loops: by viewing FOR statements and NEXT statements as two separate statements, instead of a single control structure, there is no need to parse the entire program and attempt to match FOR statements with NEXT statements, or to attempt to determine the body of a FOR statement by explicitly searching for a NEXT statement, during the initialization of the loop. Instead, when a FOR statement is found, we simply push an identifier of the FOR loop to a stack (e.g., its line number and the loop variable), and when a NEXT statement is found we pop from the stack and return control to the FOR loop.
With these considerations in mind BASIC is a fairly good language. Many languages at the time, e.g. Pascal, were simply not fit for being interpreted, and certainly not in a memory efficient line-by-line fashion. Certain Lisp dialects might have been an option, but if I had a microcomputer with only a couple of kilobytes of RAM, the types of Lisp programs I could write would not really be that different from what I could achieve with BASIC. Moreover, the syntax and the language constructs of BASIC are intuitively understandable to a layman, and at the time, just having access to an advanced scientific calculator supporting multi dimensional arrays and simple but useful string processing, was revolutionary. I’d even say that picking up programming with BASIC in the 70’s and early 80’s was easier than starting with a high-level programming language today, where there are more layers and abstractions between the source file and what is visible on the display. The only really detrimental design decision is the lack of facilities for structured programming. Implementing e.g. procedures supporting a small number of local variables would not have been terribly difficult and could easily have been an optional feature for more powerful systems.
## A Second Attempt
Since we now understand the idea behind BASIC a bit better, that programs should be interpreted line by line, it’s going to be much easier to design a reasonable interpreter. Previously our representation of the program was as a list of statements, and the current element in that list represented the current point of execution. Consider the following two proposals.
• The program will be represented by an ordered tree where each node consists of a key (a line number) and a statement.
• The interpreter will keep track of the current point of execution by storing the current line number.
These two changes imply that during an iteration we have to fetch the statement associated with the line number, increment the line number, interpret the statement, and recurse. For the moment, I’m going to brush aside any parsing issues and assume that our parser returns a list of tuples of the form Line:Statement where Line is a line number and Statement is a (parsed) statement. For example, consider the following program which computes the factorial of 5:
10 LET X = 1
20 FOR I = 1 TO 5 STEP 1
30 LET X = X*I
40 NEXT I
This would be represented by the list:
[int(10):let(id(x),int(1)),
int(20):for(id(i),int(1),int(5),int(1)),
int(30):let(id(x),id(x)*id(i)),
int(40):next(id(i))]
And before we consider how the interpreter should be changed to handle this new representation, let’s see how it’s possible to convert a list of this form to an ordered tree. Why not use the above list directly? If the program has $n$ lines then accessing a statement at a particular line takes $O(n)$ time, which is quite bad since it’s something that the interpreter is going to do in each iteration. While I’m not concerned with low-level efficiency, there’s no point in using suboptimal data structures when a more reasonable choice exists. But if I’m so concerned about data structures, why don’t I use an array indexed by line numbers, so that accessing a statement at a given line takes constant time? Well, mainly because we would in that case first have to normalize the line numbers, and because working with “flat” terms to simulate arrays is typically not a great experience in Prolog (it would, for example, be cumbersome if we want to extend the interpreter to support line editing). Also, $O(\log n)$ time is still great: if we have a BASIC program with 1000 lines then we’re going to find the statement associated with a given line in roughly 10 steps, and if we double the program size to 2000 lines then the cost only increases to 11 steps, and so on. However, if there’s a reader with a huge BASIC program which he/she for some reason can’t run in any other interpreter, and for which the tree representation turns out to be too slow, then I promise to change the representation!
There is one catch with using the ordered tree in library(assoc), however. How do we find the next line? There is no basic operation which does this, so as a workaround we’re for each node in the tree going to store the key to the next line number. This can easily be accomplished by storing each node in the tree as a tuple Statement:NextLine where Statement is a statement, and NextLine is the key to the statement directly following Statement. But what should NextLine be when Statement is the last statement in the program? To handle this we introduce a special line number, ‘void’, which contains the statement ‘end’ and whose NextLine simply points to itself. With this in mind we can easily write a predicate which, given a list of numbered statements, creates the corresponding search tree, by standard recursive problem solving:
• Base case: a list with only one statement. Then we want to create an empty tree, insert the statement in question, and point it to the special ‘void’ number discussed above.
• Recursive case: a list with two or more statements where the first element is Num:Statement. Solve the problem recursively. Then we have a tree containing everything except the current statement. Furthermore, assume that the line number of the next statement is NextNum. Then we want to insert a new node in the tree where the key is Num and where the data is Statement:NextNum.
Hence, we obtain the following.
statement_list_to_program([N:S], Tree) :-
empty_assoc(Tree0),
set(Tree0, N, S:int(void), Tree1),
set(Tree1, int(void), end:int(void), Tree).
statement_list_to_program([N1:S1, N2:S2|Rest], Tree) :-
statement_list_to_program([N2:S2|Rest], Tree0),
set(Tree0, N1, S1:N2, Tree).
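As a small usage sketch, here is the tree for the factorial program above, with a lookup of line 20 (eliding the binding for Tree):
?- statement_list_to_program([int(10):let(id(x),int(1)),
                              int(20):for(id(i),int(1),int(5),int(1)),
                              int(30):let(id(x),id(x)*id(i)),
                              int(40):next(id(i))], Tree),
   get(Tree, int(20), Statement:NextLine).
Statement = for(id(i), int(1), int(5), int(1)),
NextLine = int(30).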
Naturally, we’ll now have to update our interpreter to reflect these changes, and instead of passing around objects of the form comp(Mem), we’ll pass around objects of the form comp(Mem, Program, Line) where Program is the tree representation of the program and Line is the current line number. However, before making this change let’s have a look at what the top-most loop of the interpreter would look like. Here, we assume the existence of a predicate which fetches the statement associated with the current line number and updates the line number (of course, we have to implement this predicate later).
interpret_line(Comp0, Comp) :-
get_statement_and_increment_line(Comp0, Command, Comp1),
interpret_statement(Comp1, Command, Comp2),
interpret_line(Comp2, Comp).
But what should the end condition be? Previously, we simply assumed that we had reached the end of the program when the current list of statements was empty. Since we don’t want to complicate the definition of interpret_line/2 more than necessary we’ll assume that the Comp object supports an operation get_status/2 such that get_status(Comp, Status) succeeds with Status = ‘ok’ or Status = ‘end’. If we in the future decide that additional states are necessary (e.g., perhaps we want to introduce a special error state) then we’ll augment get_status/2 in an appropriate way. We could then redefine interpret_line/2 as:
interpret_line(Comp0, Comp) :-
get_status(Comp0, ok),
get_statement_and_increment_line(Comp0, Command, Comp1),
interpret_statement(Comp1, Command, Comp2),
interpret_line(Comp2, Comp).
interpret_line(Comp, Comp) :-
get_status(Comp, end).
However, I prefer the following variant which attempts to use unification as early as possible to distinguish between the ‘ok’ state and the ‘end’ state:
interpret_line(Comp0, ok, Comp) :-
get_statement_and_increment_line(Comp0, S, Comp1),
interpret_statement(Comp1, S, Comp2),
get_status(Comp2, Status),
interpret_line(Comp2, Status, Comp).
interpret_line(Comp, end, Comp).
Why? We can now immediately see that the two cases are mutually exclusive simply by inspecting the heads of the two rules, instead of having to understand how get_status/2 is used in the bodies. Hence, at this stage, the Comp object needs to keep track of (1) the current line number, (2) the program, represented by an ordered tree storing line numbers and statements, and (3) the current status. Recall that we had the earlier definitions:
init_comp(comp(M)) :-
empty_mem(M).
set_mem(comp(M0), Var, Val, comp(M)) :-
set(M0, Var, Val, M).
get_mem(comp(M), Var, Val) :-
get(M, Var, Val).
These now have to be augmented whenever we add additional arguments to the computer object.
%Initialises the computer with an empty memory and a supplied program and a start line.
init_comp(Program, StartLine, comp(M, Program, StartLine, ok)) :-
empty_mem(M).
%Recall from the previous entry that set/4 was defined using put_assoc/4 from library(assoc).
set_mem(comp(M0, P, L, S), Var, Val, comp(M, P, L, S)) :- set(M0, Var, Val, M).
get_mem(comp(M, P, L, S), Var, Val) :- get(M, Var, Val).
There’s of course a problem with this representation of the computer object: every time we change something we have to change all basic operations, even the operations which have nothing to do with the other components of the object. This is not too bad if the comp object is expected to be fairly static, and if we have only a small number of operations to change, but this can be difficult to predict. SWI-Prolog contains a very handy data structure for this purpose: the dictionary data structure. This data structure supports functional notation which initially might look rather weird, but is quite convenient. Consider the following example:
Comp = comp{mem:Memory, program:Program, status:ok}, Y = Comp.status.
This is going to succeed and bind Y to ‘ok’. If we want to update the status to ‘end’ we could write:
Comp1 = Comp.put([status:end]).
And the expression Comp.put([status:end]) could even appear in the head of a rule, making it possible to define the aforementioned basic operations as follows.
init_comp(Program, StartLine, comp{mem:M, program:Program, line:StartLine, status:ok}) :-
empty_mem(M).
set_mem(Comp0, Var, Val, Comp0.put([mem:NewMemory])) :-
set(Comp0.mem, Var, Val, NewMemory).
get_mem(Comp, Var, Val) :- get(Comp.mem, Var, Val).
Similarly, we can easily define a predicate which, given a line number Line, returns the tuple Statement:NextLine corresponding to that line, which we can use to define the aforementioned predicate get_statement_and_increment_line/3.
get_statement_and_increment_line(Comp0, Command, Comp0.put([line:NewLine])) :-
get(Comp0.program, Comp0.line, Command:NewLine).
## GOTO Statements at Last!
Let’s now see how we can make use of the new basic operations of the interpreter to define the GOTO statement. In the current definition of interpret_line/3 the interpreter fetches a command and increases the line number in the beginning of the iteration. Hence, all that we have to do is to change the current line number to the number indicated by the GOTO statement.
interpret_statement(Comp0, goto(Label), Comp) :-
set_line(Comp0, Label, Comp).
set_line(Comp0, Line, Comp0.put([line:Line])).
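The remaining accessors used by the interpreter loop are equally small; with the dict notation they are one-liners (a sketch, not shown in the original text):
get_line(Comp, Comp.line).
get_status(Comp, Comp.status).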
Great success! Note also that we didn’t have to make any changes to e.g. the old definitions of interpret_statement/3 handling LET and PRINT instructions, despite drastically changing the internal structure of the interpreter state. Equipped with this new confidence let’s now try to handle the troublesome FOR NEXT loops. Importantly, by the earlier observations we now know that FOR and NEXT statements should actually be viewed as two independent statements, rather than as a single statement.
• When interpreting a FOR statement we should evaluate the start value, the end value, assign the loop variable the start value, evaluate the loop condition, and then continue evaluating the statement directly following the FOR loop.
• When interpreting a NEXT statement we should increment the loop variable and then return control to the FOR loop.
But how should FOR and NEXT statements communicate with each other? The easiest solution is to use a stack: before the FOR loop statement is finished we should push a suitable identifier to the stack (e.g., the line of the FOR loop, and the loop variable) so that once a matching NEXT statement is found we simply pop from the stack and proceed accordingly. Hence, let’s add a stack to the comp object, represented by a list, and supporting the two stereotypical push and pop operations.
%We have to change init so that comp now contains a stack, initialised with the empty list.
init_comp(Program, StartLine, comp{mem:M, stack:[], program:Program, line:StartLine, status:ok}) :-
empty_mem(M).
%We push to the stack by adding the element E as the head.
push(Comp0, E, Comp0.put([stack:[E|Comp0.stack]])).
%We pop from the stack by removing the head of the list.
pop(Comp0, E, Comp0.put([stack:Stack])) :-
[E|Stack] = Comp0.stack.
There is only one problem: what happens if the FOR loop condition is false already during initialisation? Should we (1) go to the statement directly following the FOR statement, (2) go to the statement directly following the NEXT statement, or (3) silently fail? The BASIC-80 reference manual claims that the second case should happen, but this is clearly impossible since we don’t know where the matching NEXT statement is. For all we know, maybe the programmer, tired after a long week of hardship, messed up and accidentally hid the NEXT statement so well that it requires us to solve the halting problem for Turing machines. While one could certainly attempt to find a matching NEXT statement by scanning the remaining program line by line, there’s little point in doing these extra steps. Hence, we’ll go for option (1) instead, leading to the following definition of interpret_statement/3.
interpret_statement(Comp0, for(Id, Start, End, Step), Comp) :-
eval_exp(Comp0, Start, StartVal),
set_mem(Comp0, Id, StartVal, Comp1),
interpret_for(1, Comp1, for(Id, Start, End, Step), void, Comp).
interpret_for(0, Comp0, for(_, _, _, _), LineAfterNext, Comp) :-
interpret_statement(Comp0, goto(LineAfterNext), Comp).
interpret_for(1, Comp0, for(Id, Start, End, Step), _LineAfterNext, Comp) :-
get_line(Comp0, ForLine),
push(Comp0, ForLine:for(Id, Start, End, Step), Comp).
Recall that interpret_for, now with an additional argument representing the line number after the (future) NEXT statement, consists of a base case in which the loop condition is false, and the case when the execution of the loop should continue. What happens in the above code is (1) that interpret_statement/3 goes directly to the loop case of interpret_for/5, and since we do not know the line number of the NEXT statement we simply send the nonsense value ‘void’, (2) when the loop is finished we continue execution at the line LineAfterNext, and (3) in the loop case of interpret_for/5 we push the current line to the stack, together with the identifier, start, end, and step. But how does interpret_for/5 ever get access to the line number LineAfterNext? That’s the job of the NEXT statement, which we’ll now define.
interpret_statement(Comp0, next(Id), Comp) :-
pop(Comp0, ForLine:for(Id, Start, End, Step), Comp1),
eval_exp(Comp1, Id + Step, NewVal),
set_mem(Comp1, Id, NewVal, Comp2),
eval_bool(Comp2, Id < End, Res),
get_line(Comp2, LineAfterNext),
set_line(Comp2, ForLine, Comp3),
interpret_for(Res, Comp3, for(Id, Start, End, Step), LineAfterNext, Comp).
Hence, once a NEXT statement is reached we pop a FOR statement from the stack (from line ForLine), update the loop variable, evaluate the Boolean expression, get the line LineAfterNext, set the line number to ForLine, and branch to interpret_for/5. Note that once the base case of interpret_for/5 occurs the number LineAfterNext will be known.
## Implementing IF GOTO, GOSUB, and RETURN
The interpreter is currently very modular and easy to extend with new language features. Let’s first consider IF GOTO statements. They have the form:
IF COND THEN GOTO LINE
where COND is a Boolean condition and LINE a line number. IF THEN ELSE statements are only in the disk-based version of BASIC-80 so we’ll omit that feature (although it wouldn’t be hard to implement).
interpret_statement(Comp0, if(B, GotoLine), Comp) :-
eval_bool(Comp0, B, Res),
interpret_if(Res, Comp0, GotoLine, Comp).
interpret_if(0, Comp, _, Comp).
interpret_if(1, Comp0, GotoLine, Comp) :-
interpret_statement(Comp0, goto(GotoLine), Comp).
Thus, we simply evaluate the Boolean expression and either do nothing, in which case execution will pick up at the line following the IF GOTO statement, or go to the line GotoLine. We can similarly implement GOSUB statements:
GOSUB LINE
.
.
.
RETURN
GOSUB can be used to implement a poor man’s version of subroutines. When a GOSUB statement is encountered we’ll jump to LINE, and once a RETURN statement is encountered we’ll jump back to the line following the original GOSUB statement. Hence, the advantage of GOSUB compared to merely using GOTO is that it’s easier to use if the subroutine is called from several distinct places across the program, and it’s one of the very few features of BASIC which actually facilitates some form of structured programming.
interpret_statement(Comp0, gosub(Label), Comp) :-
get_line(Comp0, Line),
push(Comp0, Line, Comp1),
interpret_statement(Comp1, goto(Label), Comp).
interpret_statement(Comp0, return, Comp) :-
pop(Comp0, Line, Comp1),
set_line(Comp1, Line, Comp).
Easy! Note that the stack is shared between GOSUB and FOR statements. This is going to work great as long as everything goes according to plan, but it’ll fail if we (e.g.) use GOTO within a subroutine or in a FOR NEXT loop, without going back. This could be handled by replacing the pop operation by a find operation which searches through the stack for a matching entry (but in my opinion we would actually make the interpreter worse by supporting this).
## Summary
We threw away most of the prototype interpreter and made a fresh start. The internal representation of the interpreter state, the comp object, had to be extended quite a bit, but once these changes had been made it turned out to be simple to define new cases of interpret_statement/3. At the moment we still only support a small subset of BASIC-80, and in future entries we’ll see how the interpreter can be extended. Here are a few items on my todo list:
• Implementing functions.
• Implementing (multi-dimensional) arrays.
• Implementing integer and floating point arithmetic from scratch, without even using Prolog’s predefined arithmetical operations (wait, what!?).
• Implementing strings.
## Resources
# Introduction
We’re going to begin our journey (or rather, our descent into madness) by identifying a suitable subtask where we can very rapidly implement a solution. Why? At this stage, fully sketching a solution in a top-down manner, breaking everything into subtasks, is going to be very difficult since we are not sufficiently familiar with the problem to correctly identify all problem areas. Instead, we’ll try to choose a reasonable subset of BASIC which we can implement quickly, and which is non-trivial enough so that a solution to the full problem can use the solutions from the prototype. In the best-case scenario the prototype is simple to implement but scalable enough so that we essentially only have to extend it, case by case, to solve the large problem, but in practice that rarely happens. Hence, as long as we realise that the prototype is incomplete, and that we might have to throw it away and start from scratch (while hopefully learning something in the process), this method can be quite useful.
### BASIC
Let’s do some simple BASIC programming. Consider the following program which computes the factorial of X and prints the result.
10 LET X = 5
15 LET Z = 1
20 FOR I = 1 TO X STEP 1
30 LET Z = Z * I
40 NEXT I
50 PRINT Z
Even if you’ve never touched BASIC before it should be possible to figure out how the above program works. We begin by initialising X and Z to 5 and 1, respectively. In the for loop we initialise I to 1; in each iteration we update the value of Z and increase I by 1, and we abort when I is larger than X. Last, we print Z. In addition, each line is labelled by a line number, but since the above program does not contain any goto statements these serve no purpose at the moment. From the above program we see that we need to be able to:
• Represent variables, e.g., by a data structure which maps identifiers to values.
• Assign values to variables.
• Get values of variables.
• Compute arithmetical expressions.
• Iterate through a for-loop.
• Evaluate Boolean expressions (later on, we want to support if expressions, and already now we need to be able to determine whether the loop variable is smaller than the given upper bound).
• Print an expression.
This sounds like sufficient material for the prototype implementation. Later on, we’ll worry about implementing more features, but even the above program shows that we have to do something non-trivial, since we essentially have to simulate destructive assignment in a suitable way. For example, in the for loop we clearly need to increase the value of I in each iteration. But writing I = 1, I = 2 in Prolog is going to fail since there is no I which is simultaneously both 1 and 2.
### A simple interpreter
We begin by making a few simplifying assumptions. We’ll assume that the program is given in a suitable format, e.g., by a list where each item corresponds to a parse tree representation of the BASIC statement on a line of the program. This is very simple to do in Prolog: the let statement on line 10 could for example be represented by the term let(x, 5), the let statement on line 30 by the term let(z, z*i), and the for loop could be represented by the term for(i, 1, x, 1, [let(z, z*i)]). Note that we do not need to explicitly store the next statement corresponding to the for loop, but rather just treat it as signifying the end of the body of the for loop (however, we’ll return to this issue later). In this representation it is important that x, y, and z are written with lowercase letters, since otherwise Prolog would treat them as logical variables, and as we have already seen, this is not a good representation. Then the entire program could be represented by the list:
[let(x, 5), let(z, 1), for(i, 1, x, 1, [let(z, z*i)]), print(z)]
For example, try writing
[X|Xs] = [let(x, 5), let(z, 1), for(i, 1, x, 1, [let(z, z*i)]), print(z)]
in Prolog’s query window. What happens? We’re going to get the answer that X is let(x,5), and that Xs is the list [let(z, 1), for(i, 1, x, 1, [let(z, z*i)]), print(z)]. Note, however, that e.g. let(x,5) at the moment has no other meaning than describing a term whose functor is let, and with two arguments, the constant x and the number 5. Hence, it’s just data, and would in other languages correspond to defining a struct with two elements, but has the advantage that we can define them on the fly, and that the intended meaning of the data is typically clear if we use good names. Similarly, for(i, 1, x, 1, [let(z, z*i)]) is simply a term with 5 arguments, where the 5th argument is a list containing the term let(z, z*i). Note that z*i is just a convenient way to write *(z,i) since * is a predefined operator. If this turns out to be a good representation, then we’ll keep it, and make sure that the parser that we write later outputs a list of this form. Hence, we can rather quickly try out whether this representation seems reasonable by writing an interpreter with respect to this format.
Our first challenge is to figure out how to represent variables. We’ll assume that there is only one, global scope, and that the state of all variables therefore can be represented by a suitable data structure supporting get and set operations. This could, for example, be implemented by a list, but it’s easier and better to use a balanced tree for this purpose. Thankfully, the module library(assoc) is exactly what we are looking for. With the help of this data structure we begin by defining the following operations.
empty_mem(M) :- empty_assoc(M).
set(M0, Var, Val, M) :- put_assoc(Var, M0, Val, M).
get(M, Var, Val) :- get_assoc(Var, M, Val).
The only point behind empty_mem/1, set/4, and get/3, rather than using the corresponding operations from library(assoc) directly, is to make it easier to change the representation later on, if we so desire. From now on, as a convention, whenever we’re defining a predicate which takes a state and changes it, we’re going to let the input argument be the first argument of the predicate (e.g., M0 in set/4) and let the updated state be the last argument (e.g., M in set/4). At the top level query window we could then try out the query:
empty_mem(M0), set(M0, x, 5, M1), set(M1, z, 1, M).
The resulting tree M is then going to be a tree where the node with key x contains the value 5, and the node with key z contains the value 1, which seems like a very reasonable representation. In every step of the iteration we are then going to pass around a tree of this form which represents the current state of the interpretation. With this insight we then define a predicate interpret/3 where the first argument is the current memory configuration, the second argument a list of statements, and the third argument the result of interpreting the list of statements with respect to the memory from the first argument. The top level loop would then have the form:
interpret(Mem, [], Mem).
interpret(Mem0, [S|Ss], Mem) :-
interpret_statement(Mem0, S, Mem1),
interpret(Mem1, Ss, Mem).
Where the second argument is a list of statements of the form described above. Hence, the recursive case of interpret/3 can be read as: if Mem1 is the result of interpreting the statement S with respect to Mem0, and if Mem is the result of interpreting Ss with respect to Mem1, then the result of interpreting [S|Ss] with respect to Mem0 is Mem. Procedurally, what we are doing is to (1) take the first statement that should be interpreted, call it S, interpret S and obtain a new memory Mem1, and then (2) recursively interpret the tail of the list Ss, using Mem1 as the current memory. This is a good starting point, but before we turn to the problem of correctly defining interpret_statement, we’re going to do one additional abstraction. What if we later on figure out that we need to store more information in each step of the iteration? Maybe we need a stack? Maybe we want to pass around an output stream? Then we’d have to add each such item as an argument to interpret, and we might possibly also have to change every occurrence of interpret_statement/3, which could be cumbersome since it’s going to have a large number of possible cases (roughly, one case for each language construct). Hence, it’s better to devise a new data structure which contains everything we need in order to interpret the program. At the moment, we’re just passing around the memory, so as a placeholder we’ll wrap it inside a unary term comp/1, so that the current state of the interpreter is represented by comp(Mem), where Mem is as before. Again, the name comp (short for computer) is not important, it’s just data. Later on, if we (e.g.) add a stack, we’ll add an additional argument to the term, and obtain a term of the form comp(Mem, Stack), where Stack would be a suitable representation of a stack. Naturally, we still have to rewrite the program to encompass this, but this is going to be rather manageable. Thus, our current interpreter has the following form.
:- use_module(library(assoc)).
empty_mem(M) :- empty_assoc(M).
init_comp(comp(M)) :-
empty_mem(M).
set(M0, Var, Val, M) :- put_assoc(Var, M0, Val, M).
get(M, Var, Val) :- get_assoc(Var, M, Val).
set_mem(comp(M0), Var, Val, comp(M)) :- set(M0, Var, Val, M).
get_mem(comp(M), Var, Val) :- get(M, Var, Val).
interpret(Comp, [], Comp).
interpret(Comp0, [S|Ss], Comp) :-
interpret_statement(Comp0, S, Comp1),
interpret(Comp1, Ss, Comp).
Hence, when we want to change the value of a variable, we’d use set_mem/4, rather than set/4, since the former hides the internal representation (this will hopefully be clearer in just a moment). We begin by considering let statements. We have to (1) evaluate the expression on the right-hand side, and (2) assign the resulting value to the variable on the left-hand side, so that’s exactly what we do:
interpret_statement(Comp0, let(I, E), Comp) :-
eval_exp(Comp0, E, V),
set_mem(Comp0, I, V, Comp).
Clearly, we now have to define eval_exp/3, which can be defined by classical recursive problem solving. The base case (the simplest kind of expression) occurs when we have either a number or a variable. It might be tempting to define these two cases as follows:
eval_exp(_Mem, Exp, Exp).
eval_exp(Comp, I, V) :- get_mem(Comp, I, V).
(_Mem is an anonymous variable, indicated by starting the variable name with an underscore, to inform the Prolog system that it does not occur in any other place.) However, this is not correct, since the first fact states that the result of evaluating any expression Exp is the expression itself. The case handling identifiers works, since if I is not a valid identifier, then get_mem(Comp, I, V) is simply going to fail, and Prolog will try another matching rule, but it’s certainly not the best way to handle this. It might be tempting to introduce tests akin to the following:
eval_exp(_Mem, Exp, Exp) :- integer(Exp).
eval_exp(Comp, I, V) :- atom(I), get_mem(Comp, I, V).
But this is also not a good solution, for two reasons. First, what if we want to add support for additional types, e.g., floating point numbers, or if atom/1 is too permissive (for example, maybe we want to enforce that variable identifiers can only consist of two characters). Second, and more severely, consider a query of the form eval_exp(Mem, x + 1, Result). Even though the term x+1 (again, this is just a convenient way to write the term +(x,1)) clearly cannot match against the two base cases, the Prolog system is not smart enough to realise this. Furthermore, while a query of the form eval_exp(_, 5, Result) will work as expected, the Prolog system is still going to create choice points so that it has the possibility of trying the remaining cases of eval_exp/3 as well. Naturally, we shouldn’t take such low level considerations too seriously at this stage, but it turns out that there’s a solution which (1) represents the intent of the base cases much clearer, and (2) resolves these efficiency issues. It’s also rather simple: we change the representation of numbers and identifiers. We’ll represent the number 5 by wrapping it inside a term int(5), and represent a variable identifier x by the term id(x). Then we can immediately say whether an expression is an integer, an identifier, or a more complicated expression, simply by comparing it to one of these cases. Again, this is just data, there’s nothing magical or strange going on. If we later on decide to support floating point numbers as well we could e.g. represent them by terms of the form float(0.5), and if we decide to support arrays we could wrap those identifiers into similar terms as well. Then the above two cases could be written as:
eval_exp(_Mem, int(Exp), Exp).
eval_exp(Comp, id(I), V) :- get_mem(Comp, id(I), V).
Now it’s easy to see that the two cases are mutually disjoint simply by glancing through the code. The recursive cases are then written in a rather uniform way:
eval_exp(Comp, E1 + E2, V) :-
eval_exp(Comp, E1, V1),
eval_exp(Comp, E2, V2),
eval_plus(V1, V2, V).
eval_exp(Comp, E1 * E2, V) :-
eval_exp(Comp, E1, V1),
eval_exp(Comp, E2, V2),
eval_mult(V1, V2, V).
%Add more functions later.
Here, eval_plus/3 and eval_mult/3 are just placeholders, since we haven’t decided how arithmetic should be evaluated yet. For example, do we want to simulate a fixed-bit CPU? Do we allow both signed and unsigned numbers? Such considerations are important, but they’ll have to wait until later, when we have a more meaningful interpreter in place, and for the moment we simply define them using Prolog’s built-in support for arithmetic (is/2).
%Placeholder definitions.
eval_plus(V1, V2, V) :- V is V1 + V2.
eval_mult(V1, V2, V) :- V is V1 * V2.
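If we later decide to simulate fixed-width arithmetic, these two placeholders are the only predicates that would have to change. As a purely hypothetical sketch (not used anywhere in the interpreter), a 16-bit unsigned variant could look like this:
%Hypothetical 16-bit unsigned arithmetic: wrap around modulo 2^16.
eval_plus_16(V1, V2, V) :- V is (V1 + V2) mod 65536.
eval_mult_16(V1, V2, V) :- V is (V1 * V2) mod 65536.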
All that we need to interpret the factorial program is to correctly interpret for loops. When encountering a for loop we then need to:
• Evaluate the start expression.
• Set the value of the loop variable to this value.
• Evaluate the loop condition (true or false) and branch on this outcome.
• If the condition turned out to be false, we stop; if it’s true, we update the value of the loop variable, interpret the body of the loop, and start anew with evaluating the loop condition.
There are several ways to accomplish this but the cleanest way (in my opinion) is to make sure that the predicate responsible for evaluating Boolean expressions returns 0, false, or 1, true, and then introduce an auxiliary predicate which correctly handles these two branches.
interpret_statement(Comp0, for(Id, Start, End, Step, Body), Comp) :-
eval_exp(Comp0, Start, StartVal),
set_mem(Comp0, Id, StartVal, Comp1),
%We now assume that Res is either 0 or 1.
eval_bool(Comp1, Id < End, Res),
interpret_statement_for(Res, Comp1, for(Id, Start, End, Step, Body), Comp).
interpret_statement_for(0, Comp, for(_,_,_, _, _), Comp).
interpret_statement_for(1, Comp0, for(Id, Start, End, Step, Body), Comp) :-
eval_exp(Comp0, Id+Step, NewValue),
set_mem(Comp0, Id, NewValue, Comp1),
interpret(Comp1, Body, Comp2),
eval_bool(Comp2, Id < End, Res),
interpret_statement_for(Res, Comp2, for(Id, Start, End, Step, Body), Comp).
An alternative solution would be to use the if-then-else construct in Prolog, but I dislike this construct since it (1) is non-logical, and (2) makes it harder to divide the program into small, understandable chunks. For example, in the above program one can immediately understand the 0 case of interpret_statement_for/4 without understanding the 1 case. If I see a rule containing an if-then-else statement then it’s harder to immediately see what the base case is, and what the recursive case is. However, there are certainly cases where if-then-else is better and more convenient than introducing an auxiliary branching predicate, so I’m not going to be dogmatic about its usage. Last, we need to define eval_bool/3 so that we have the ability to evaluate Boolean expressions. This turns out to be rather similar to evaluating arithmetical expressions, and to make the code more succinct we use the built-in predicate compare/3 which compares two given terms and returns the result of the comparison (<, >, or =). For example, compare(Order, 0, 1) succeeds with Order bound to <.
eval_bool(Comp, E1 < E2, Result) :-
eval_exp(Comp, E1, V1),
eval_exp(Comp, E2, V2),
compare(Order, V1, V2),
eval_less_than(Order, Result).
eval_less_than(<, 1).
eval_less_than(=, 0).
eval_less_than(>, 0).
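As a sketch of the same pattern applied to another operator (purely illustrative, and not required for the factorial program), a rule for > could look like this:
eval_bool(Comp, E1 > E2, Result) :-
eval_exp(Comp, E1, V1),
eval_exp(Comp, E2, V2),
compare(Order, V1, V2),
eval_greater_than(Order, Result).
%As before, 1 represents true and 0 represents false.
eval_greater_than(>, 1).
eval_greater_than(=, 0).
eval_greater_than(<, 0).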
Naturally, we’ll add support for more logical operators when we need them. Before trying to interpret our example program we’ll add a simple definition of the print statement. This definition is likely going to change later on, since we might want to simulate an LCD display, but for the moment this is better than nothing.
interpret_statement(Comp, print(X), Comp) :-
eval_exp(Comp, X, V),
write(V), nl.
If we then try out the query:
P = [let(id(x), int(5)), let(id(z), int(1)), for(id(i), int(1), id(x), int(1), [let(id(z), id(z)*id(i))]), print(id(z))], init_comp(Comp0), interpret(Comp0, P, Comp).
We get the expected result that the factorial of 5 is 120. While the above program is far from complete, and likely has to be rewritten several times, we’ve still made some nice observations. First, the key idea behind the interpreter is to pass around a term representing the state of the interpreter, and allow the basic statements of the language to make changes to this state. For the moment we have a simple, high-level representation of the memory, but if we later on decide to change this representation to e.g. a term of fixed size, then we’ll be able to make these changes simply by modifying set_mem/4, get_mem/3, and empty_mem/1. Second, it turned out to be quite nice to be able to define interpret_statement/3 with one rule for each language construct, since it’s possible to immediately understand the meaning of a language construct without needing to understand how the rest of the interpreter is implemented. Hence, regardless of how the implementation changes we’re going to try our best to not convolute this simple structure.
### Adding support for the GOTO statement
Swelling with confidence and pride over our fantastic prototype we now turn to one of BASIC’s most characteristic features: the GOTO statement. The syntax is simple enough:
GOTO LINE
where LINE is a line number. Let’s construct a simple test program and put it to action.
10 GOTO 20
15 LET X = 0
20 LET X = 1
30 PRINT X
In our current representation we don’t have line numbers, so that’s something that we have to implement before we even start thinking about how the GOTO statement should be implemented. One may imagine that we represented the above program using a list of the form:
[int(10):goto(int(20)), int(15):let(id(x), int(0)), int(20):let(id(x), int(1)), int(30):print(id(x))]
where : is a predefined operator, so each element in the above list is simply a tuple consisting of a line number and a statement. Imagine that the definition of interpret/3 doesn’t change, so that interpret_statement/3 now receives a tuple int(Line):Statement, where Line is a line number, and Statement a statement. Unfortunately, this means that we would now have to change every single occurrence of interpret_statement/3 to encompass this change. For example, we’d have to change the rule handling the let statement into something like:
interpret_statement(Comp0, _Num:let(I, E), Comp) :-
eval_exp(Comp0, E, V),
set_mem(Comp0, I, V, Comp).
Which doesn’t feel like the best solution since the let statement doesn’t care about the current line number. But if this turns out to be a good solution then maybe that’s a sacrifice we could live with. So imagine that we attempted to define the rule handling the GOTO statement into something like:
interpret_statement(Comp0, _Num:goto(Line), Comp) :-
???
interpret(Comp0, ???, Comp).
Clearly, we need to fetch the statement at Line, but this is actually impossible in the current representation since interpret_statement/3 was defined with respect to an individual statement, and not the entire program. What to do? It might be tempting to attempt to change the definition of interpret/3 so that it sends the entire program to interpret_statement:
interpret(Comp, [], Comp).
interpret(Comp0, [S|Ss], Comp) :-
interpret_statement(Comp0, S, Ss, Comp1),
interpret(Comp1, Ss, Comp).
Then we’d have to change every single case of interpret_statement/3 and add an additional argument for the list of statements Ss. Again, this is not that nice, but perhaps we could live with it if it turned out that this was the most elegant solution. So we’d now implement interpret_statement/4 as:
interpret_statement(Comp0, _Num:goto(Line), Ss, Comp) :-
find_program_at_line(Line, Ss, Ss1),
interpret(Comp0, Ss1, Comp).
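Here find_program_at_line/3 is an assumed helper; a minimal sketch of it (my own, matching the int(Line):Statement representation) could be:
%Succeeds with the suffix of the program starting at the given line.
find_program_at_line(Line, [Line:S|Ss], [Line:S|Ss]).
find_program_at_line(Line, [_:_|Ss], Program) :-
find_program_at_line(Line, Ss, Program).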
In words, find_program_at_line/3 takes the line number Line in question and the list of statements Ss, goes through this list until it finds a statement labelled with Line, and returns the program starting at that line in the third argument. Unfortunately, this doesn’t work, either. We actually have two severe problems. First, consider a program:
10 PRINT 1
20 GOTO 10
We would represent this by the list [int(10):print(int(1)), int(20):goto(int(10))]. But in the application of interpret_statement(Comp0, int(20):goto(int(10)), [], Comp) the list of statements in the third argument is simply the empty list, since there is nothing after the GOTO statement in the program. The problem, of course, is that we’re not able to go back in a single-linked list, once we have gone forwards. This could be fixed by adding an additional argument to comp/1, so that the interpreter state is represented by comp(Mem, P), where P is a list of statements. Hence, in every iteration, we’d always have access to the original program. However, there’s a second, more pressing issue. What would our interpreter answer for the query:
P = [int(10):goto(int(20)), int(15):let(id(x), int(0)), int(20):let(id(x), int(1)), int(30):print(id(x))],
init_comp(Comp0),
interpret(Comp0, P, Comp).
? In the first iteration we would have S = int(10):goto(int(20)), and Ss = [int(15):let(id(x), int(0)), int(20):let(id(x), int(1)), int(30):print(id(x))]. We would then begin with the application interpret_statement(Comp0, S, Ss, Comp), which would find [int(20):let(id(x), int(1)), int(30):print(id(x))], and recursively interpret the program at that point, resulting in a state where x is 1. So far so good. But then we’d jump back to interpret(Comp1, Ss, Comp), which would continue interpreting the program starting with int(15):let(id(x), int(0)), changing x to 0. The problem, of course, is that interpret/3 is no longer correct, since (1) interpreting S, and then (2) recursively interpreting Ss is only valid if we didn’t encounter a GOTO statement. This is not good, and in fact this is only the tip of the iceberg. Imagine a for loop containing a GOTO statement:
10 FOR I = 1 TO 10 step 1
15 GOTO 25
20 NEXT I
25 PRINT I
Then we have a very similar problem in the definition of interpret_statement_for/4, which is going to attempt to recursively interpret its body (in this case consisting only of the GOTO statement), and then evaluate the loop condition. In fact, when scrutinising further, it turns out that our treatment of for loops, viewing the NEXT statement as simply indicating the end of the body of the for loop, is not correct, either, in the presence of GOTO statements. While the reference manual for BASIC-80 pretends that a FOR … NEXT loop should be viewed as a single language construct, the NEXT component of a for loop should actually be viewed as an independent statement, with the semantics of increasing the value of the loop variable and then returning control to the first matching for loop. In fact, we should even allow a program akin to the following.
1 GOTO 10
2 NEXT I
3 GOTO 20
10 FOR I = 1 TO 10 step 1
15 GOTO 2
20 PRINT I
We would then jump to 10, begin the FOR loop, jump to the NEXT statement at line 2, increase the value of I, jump to 15, and then jump back to 2. This is going to be repeated until I is 10, and execution then continues at line 3, since it’s the first line after the NEXT statement. Ouch. So it doesn’t really make sense to view a FOR loop as having a “body”, since it’s something that can actually dynamically change from iteration to iteration. Hence, when interpreting a for loop, we don’t know where the corresponding NEXT statement might be, and it’s impossible to statically determine this (in fact, undecidable). The main problem is that we tried to enforce a simple, more modern, control structure, which didn’t take GOTO statements into account. While we would certainly obtain a better language by simply forbidding programs of the above form, we wouldn’t really be interpreting BASIC anymore. So, at this stage we have no other choice than to go back to the drawing board.
# Next time
We developed a prototype for a subset of BASIC which initially seemed to be rather promising, but which on closer inspection wasn’t capable of handling the dynamic control structure of the language. How it will be resolved will be left as a cliffhanger until the next installment!
It feels a bit weird typing this entry almost a decade after the last major entry. So instead of inventing additional excuses I’m simply going to cut right to the chase: I’ve been watching a lot of episodes of Computer Chronicles lately and was struck with a sudden burst of nostalgia, reminiscing about the glory days of the personal computer industry (which I wasn’t even part of!), celebrating odd gadgets, micro computers, obsolete programming languages, and weird hairdos. This episode, in particular, made me long for the Seiko UC-2000 watch, a wrist watch computer which through a docking station is capable of running user-provided BASIC programs! What’s not to love about this? Sadly, the watch itself is rather expensive these days, which immediately struck down my dream of writing the best UC-2000 game of all time. I therefore opted for the second best option: implementing a BASIC interpreter in Prolog with capabilities similar to a BASIC-80 interpreter running on a Z80 CPU with only a couple of kilobytes of RAM. I started to look around for similar projects and it seems that I’m not unique in my vision. Far from it. Maybe there simply comes a time in one’s life when the call of the wilderness is too strong to resist, and one simply has to implement a BASIC interpreter from scratch. However, I was unable to find an interpreter written in Prolog, so my project at least fills that niche. This is certainly not due to inabilities of Prolog; in fact, writing interpreters in Prolog is easier than in most other programming languages, but rather because sane people strive to get away from BASIC, instead of returning to it.
### A brief introduction to logic programming with Prolog
Since there’s been a while since the last entry I suppose it can’t hurt to repeat some fundamental concepts, before we turn to the main task. Logic programming represents a different programming paradigm compared to standard imperative programming languages. There are several equivalent characterizations:
• (Database description) Think of a logic program as an extension of a relational database. Hence, we can define basic relations between objects, and can perform certain fundamental operations on these relations (join, intersection, and so on). However, in a logic program we’re in addition allowed to define relations using implications/rules of the form: $p \gets p_1, \ldots, p_m$, where $p_1, \ldots, p_m$ is treated as a conjunction. Implications of this form should be interpreted as: $p$ is true if $p_1, \ldots, p_m$ are true. For example, if we have a database defining a $\mathrm{child}$ relation between two individuals, so that $\mathrm{child}(a,b)$ holds if $b$ is a child to $a$, then we could define the $\mathrm{grandparent}$ relationship between two individuals as: $\mathrm{grandparent}(X,Z) \gets \mathrm{child}(X,Y), \mathrm{child}(Y,Z).$ By convention, variables are written with uppercase letters, and are universally quantified when they appear in the head of a rule, and existentially quantified otherwise. Hence, the previous rule should be read as: $X$ is a grandparent of $Z$ if there exists an individual $Y$ such that $Y$ is a child to $X$, and $Z$ is a child to $Y$. In addition, rules are allowed to be recursive, and variables are allowed to range over compound expressions, terms, rather than just simple constants. Hence, our program describes relations, and we can then query this program similarly to how we would query a relational database.
• (Logical description) A logic program consists of a set of axioms of the form $p \gets p_1, \ldots, p_m$, where we view simple statements of the form $p$ as a shorthand for $p \gets \mathrm{true}$. The meaning of a logic program is the set of all logical consequences of the program. Hence, we encode our problems as logic, and solutions correspond to logical consequences.
• (Operational description) When we’re defining a rule $p \gets p_1, \ldots, p_m$, it can sometimes be convenient to view as defining a “procedure”. If we “call” this procedure via a query $p$, then we are going to enter the body of $p$, $p_1, \ldots, p_m$, and evaluate each $p_i$ before returning control.
The programming language Prolog is then a simple but efficient logic programming language where queries are answered in a top-down fashion using a backtracking search, and using a simple ASCII syntax where backward implications $\gets$ are written as $:-$. For example, assume that we want to define a relation which we can use to check whether an element occurs in a list (Prolog has built-in syntax for lists where an expression of the form $[X|Rest]$ means that $[X|Rest]$ is a list where the first element is $X$, and where the tail of the list is $Rest$). The usual definition of this program is:
member(X, [X|Rest]).
member(X, [Y|Rest]) :- member(X, Rest).
The logical reading of this program is as follows: $X$ is an element of the list $[X|Rest]$, and if $X$ is a member of $Rest$, then it is also a member of the list $[Y|Rest]$, regardless of what $Y$ is. In the operational reading of this program we may interpret it as: the base case of the recursion happens when we have found the element in question, i.e., when $X$ and the first element of the list are equal. This is concisely expressed by $\mathrm{member}(X, [X|Rest])$, rather than the more cumbersome $\mathrm{member}(X, [Y|Rest]) :- X = Y$. In the recursive case we remove the first element from the list, call it $Y$, and continue searching in the rest of the list. The logical, declarative reading of a program is simpler, but the operational reading is important, too, since it is closer to how Prolog behaves. However, understanding programs only on an operational level can be misleading, since then we may struggle to understand how a query of the form
member(a, L).
where L is a variable, could have (multiple) answers.
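For example, the first few answers are (modulo how your Prolog system prints the fresh variables; the enumeration goes on forever, with one answer for each position at which a could occur):
?- member(a, L).
L = [a|_] ;
L = [_, a|_] ;
L = [_, _, a|_] ;
...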
# Goals and limitations
• The project will be written in SWI-Prolog, but any SWI-specific code will be suitably encapsulated.
• Most of the code will be pure, i.e., no negation, no if-then-else, no cut, and so on, but I’ll happily use standard data structures instead of implementing them from scratch.
• The implementation will not be intended to provide a genuine UC-2000 experience. It’s just for fun, and I’ll typically strive for simplicity rather than authenticity, although I plan to implement some restrictions (e.g., fixed-bit arithmetic).
• I’ll implement a subset of BASIC-80 as described in the reference manual and this unofficial manual. Sadly, I think that the official reference manual is only available in Japanese. If a certain feature is not worth the implementation effort I’ll simply omit it.
• I’ll mainly concentrate on the interpreting step and I’ll typically assume that I’m given a tokenized and parsed program in a suitable representation. These two steps (tokenization and parsing) are very easy to implement in Prolog and the details do not differ in any meaningful sense compared to parsing other programming languages. However, if anyone wants access to these tools, I’ll distribute them.
# Next time
In the forthcoming entry I’ll start with something simple: a prototype interpreter for a small subset of BASIC.
## Prologomenon is Taking a Hiatus
As you’ve probably noticed by now, the frequency of updates has been rather low during the past months. And by rather low, I mean close to zero. And by close to zero, I mean zero. This stems from the fact that my ongoing master’s thesis (structural restrictions of a certain class of “easy” but NP-complete constraint satisfaction problems) has nothing to do with logic programming. Hence I just don’t have the motivation or mental energy to simultaneously update the blog.
But fret not. I have every intention to keep the blog running once things have calmed down a bit. And if any of my learned readers have suggestions for upcoming topics I’m all ears. Just shoot me an email or write a comment.
## Meta-Programming in Prolog – Part 2
Here is the story thus far: a meta-program is a program that takes another program as input or output. Based on this idea we wrote an interpreter for a simple logic programming language and later extended it to build a proof tree. A proof of concept, if you will. Meta-interpreters have lost a lot of steam in the last years. The reason being that they are just too hard to write in most popular programming languages. There’s no a priori reason that prevents us from writing a meta-interpreter in e.g. Python or Java, but the truth is that it’s such a lot of work that it’s not worth the trouble in most cases. The only exception that I can think of are integrated development environments, which typically have at least some semantic awareness of the object language. But these languages don’t have a simple core, which makes parsing awkward to say the least. In logic programming the situation is different. If an interpreter supports definite Horn clauses — facts and rules — and built-in operations it’s powerful enough to run quite a lot of real programs.
So what’s the purpose then? Is meta-programming just a sterile, academic exercise that has no place in real-world software development? Since that was a rhetorical question, the answer is no. A resounding no! First, meta-interpreters are great for experimenting with new language features and implementation techniques. For instance we could ask ourselves if it would be worthwhile to add support for new search rules in Prolog instead of defaulting to a simple depth-first search. Implementing a new search rule in a meta-interpreter can be done in a few hours, and the resulting program won’t be longer than perhaps a page of code (unless you screwed up, that is). Doing the same task in an imperative programming environment could take days or even weeks depending on the complexity of the existing code base. So meta-programming is useful for prototyping. What else? It can actually be a great aid in debugging. In the following sections we’re going to explain what debugging means in logic programming and develop a simple but functional system for squashing bugs.
### Algorithmic debugging
Assume that we have a logic program $P$ and a goal query $\leftarrow G$. Sterling and Shapiro cite three possible bugs in The Art of Prolog:
1. The interpreter could fail to terminate.
2. The interpreter could return a false solution $G\theta$. (incorrectness)
3. The interpreter could fail to return a true solution $G\theta$. (insufficiency)
Since the first problem is undecidable in general we shall focus on the latter two. But first we need to decide what the words true and false mean in this context, and in order to do that some remarks about the semantics of logic programs have to be made. If you’re feeling a bit rusty, I urge you to read up a bit on Herbrand models. Wikipedia and my own earlier post are both good starting points. The basic idea is fortunately rather simple. Logic formulas and programs can be viewed as specifications of models. A model is an interpretation in which the program is true. In general there are many, infinitely many, models of any given definite logic program. Which one should we choose? In a model we are free to reinterpret the non-logical vocabulary in any way we see fit. Consider the following logic program:
$natural(zero).$
$natural(s(X)) \leftarrow natural(X).$
It can be seen as a specification of either the set $\{natural(0), natural(1), \ldots\}$ or the set $\{natural(zero), natural(s(zero)), \ldots \}$. Notice the subtle difference. The latter model is simpler in the sense that it doesn’t take us outside the domain of the textual representation of the program itself. Such models are known as Herbrand models. Could we be so lucky that Herbrand models are the only kind of models that we need to pay attention to? This is indeed the case. If a logic program has a model then it also has a Herbrand model. But we still need to pick and choose between the infinitely many Herbrand models. The intuition is that a model of a logic program shouldn’t say more than it has to. Hence we choose the smallest Herbrand model as the meaning of a logic program. Or, put more succinctly, the intersection of all Herbrand models. For a logic program $P$, let $M_P$ denote the smallest Herbrand model of $P$.
This is good news since we now know that every well-formed logic program has a meaning. Let’s return to the question of false solutions. This notion is only relevant if the programmer has an intended meaning that differs from the actual meaning of the program. In all but the most trivial programming tasks this happens all the time. An intended meaning $I_P$ of a logic program $P$ is the set of ground goals for which the program should succeed. Note the “should”. If we briefly return to $natural/1$, the intended meaning is nothing else than the actual meaning, i.e. the set $\{natural(zero), natural(s(zero)), \ldots \}$. With this terminology it’s possible to give a precise definition of incorrectness and insufficiency of a logic program $P$:
1. $P$ is incorrect iff $M_P \not\subseteq I_P$.
2. $P$ is insufficient iff $I_P \not\subseteq M_P$.
With these definitions we see that the $natural/1$ program is neither incorrect nor insufficient. But let’s introduce some bugs in it:
$natural1(\_).$
$natural1(s(X)) \leftarrow natural1(X).$
$natural2(zero).$
$natural2(s(X)) \leftarrow natural2(s(X)).$
Can you spot them? $natural1/1$ is incorrect since the base clause is too inclusive. $M_P$ is not a subset of $I_P$ since e.g. the element $natural1(-1)$ is a member of $M_P$ but not of $I_P$. In the same vein, $natural2/1$ is insufficient since it’s equivalent to just $natural2(zero)$.
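To make this concrete, here is what happens if we load the two buggy programs in plain Prolog syntax and query them (my own illustration):
?- natural1(foo).
true.
?- natural2(s(zero)).
The first query succeeds even though foo is not a numeral, while the second never returns an answer: the only ground goal $natural2/1$ can prove is $natural2(zero)$.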
Quite a lot of legwork to explain something which is actually rather simple! What remains is to put everything into practice. Due to space constraints we’ll focus on the incorrectness problem.
### Incorrectness
A logic program $P$ is incorrect if it gives solutions that are not included in the intended model. In a real-world situation this means that the programmer has found a goal which the program should reject, but it doesn’t, and hence it contains at least one bug. The purpose is to find the part in the program that is responsible for the bug. In logic programming terms this is of course a clause. A clause $A \leftarrow B$ is false iff $B$ is true and $A$ is false. The purpose of the algorithm is to traverse the proof tree and find such a clause. With this in mind we can at least write the top-level predicate:
false_solution(Goal, Clause) :-
%Build a proof tree.
interpreter::prove(Goal, Tree),
%Find a false clause.
false_goal(Tree, Clause).
Well, that wasn’t too hard. What about $false\_goal/2$? The tree is of the form $A \leftarrow B$. Hence there are two cases: either $B$ is false or it’s true. If it’s false, then we must continue the search in $B$. If it’s true, then the current clause is the clause that we’re looking for. To determine whether $B$ is false we need an auxiliary predicate, $false\_conjunction/2$, where the first argument is the conjunction of nodes and the second argument is the false clause (if it exists).
false_goal((A :- B), Clause) :-
( false_conjunction(B, Clause) ->
true
; Clause = (A :- B1),
%Necessary since we don't want the whole tree.
extract_body(B, B1)
).
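The helper $extract\_body/2$ is never defined in the post; one possible shape (an assumption on my part, matching the proof-tree format built by the interpreter from part 1, where conjunctions of subtrees end in $true$) is:
%Strip away the proof subtrees and keep only the goals of the conjunction.
extract_body(true, true).
extract_body((A :- _), A).
extract_body(((A :- _), Bs), (A, Bs1)) :-
extract_body(Bs, Bs1).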
By the way, this is a fine example of top-down development. In each step we’re breaking the original problem into easier problems and assume that we’re able to solve them later. $false\_conjunction/2$ is a bit trickier. The first argument is a conjunction of nodes of the form $A \leftarrow B$. Just like before there are two cases since $A$ is either false or true. If it’s true, then we move on to the rest of the nodes. If it’s false, then we’d like to know whether $B$ is true or false. Luckily we’ve already solved this problem before — a call to $false\_goal/2$ will do the trick just fine.
false_conjunction(((A :- B), _Bs), Clause) :-
query_goal(A, false),
!,
false_goal((A :- B), Clause).
%Almost the same case as above, but with only one element.
false_conjunction((A :- B), Clause) :-
query_goal(A, false),
!,
false_goal((A :- B), Clause).
false_conjunction((_A, As), Clause) :-
%A is implicitly true.
false_conjunction(As, Clause).
Only the most perplexing predicate remains: $query\_goal/2$. The second argument is $true$ if $A$ is true and $false$ if it’s false. How can we know this? This is where the programmer’s intended model enters the picture. For now, we’re just going to use her/him as an oracle and assume that all choices are correct. The predicate is then trivial to write:
query_goal(G, Answer) :-
%Change later.
write('Is the goal '),
write(G),
write(' true?'),
nl,
%Read the user's verdict: true or false.
read(Answer).
In essence the user will be asked a series of questions during a session with the program. Depending on the answers, i.e. the intended model, the program will dive deeper and deeper into the proof tree in order to find the troublesome clause. As an example, here’s an append program where the base case is wrong:
append([_X], Ys, Ys) :- true.
append([X|Xs], Ys, [X|Zs]) :-
append(Xs, Ys, Zs).
And the session with the program would look like this:
[1] ?- debugging::false_solution(append([a,b,c], [d,e], Xs), Clause).
Is the goal append([b,c],[d,e],[b,d,e]) true?
|: false.
Is the goal append([c],[d,e],[d,e]) true?
|: false.
Xs = [a, b, d, e],
Clause = (append([c], [d, e], [d, e]):-true)
And we clearly see that it’s the base case that’s wrong.
### Summary
The algorithm was taken from The Art of Prolog. Some simplifying assumptions have been made. Among other things there’s currently no support for built-in operations. This is rather easy to fix, however. A more serious question is if it would be possible to minimize the role of the oracle, since it’s now queried every time a decision needs to be made. There are two techniques for coping with this. Either we do a smarter traversal of the proof tree with e.g. divide and conquer, or we find a way to approximate the intended model of the program without the use of an oracle.
### Source code
The source code is available at https://gist.github.com/1351227.
## Meta-Programming in Prolog – Part 1
### Introduction
Meta-programming is part of the folklore in Prolog, and is in general a rather old concept with roots tracing back to at least the 50’s. To give a definition that captures all the relevant concepts is outside the scope of this introductory text, but I shall at least provide some pointers that’ll be useful later on. Programs are useful in many different domains. We might be working with numbers, with graphs, with lists or with any other data structure. What happens when the domain is another programming language? Well, nothing, really; from the computer’s point of view there’s no difference between this scenario and the former. But conceptually speaking we’re writing programs that are themselves working with programs. Hence the word “meta” in meta-programming. A compiler or interpreter is by this definition a meta-program. But in logic programming we’re usually referring to something more specific when we’re talking about meta-programming, namely programs that take other logic programs as data. Since Prolog is a homoiconic language there’s also nothing that stops us from writing programs that take other Prolog programs as data, and even though there’s a subtle distinction between this and the former scenario they are often referred to as one and the same. So, to summarize, when we’re talking about meta-programs in logic programming we’re quite often referring to Prolog programs that use logic programs as data.
The road map for this post is to see some examples of meta-interpreters in Prolog. Then we’re going to use the interpreters to aid program development with a technique known as algorithmic debugging. But enough talk, let’s do this!
### Meta-interpreters
There’s still ample room for confusion regarding the word “meta” in meta-interpreter. I shall use the word whenever I refer to an interpreter for a logic programming language, even though this is not factually correct since one usually demands that the object language and the meta language are one and the same. That is: we write an interpreter for Prolog in Prolog. There are good reasons for not doing this. Prolog is a large and unwieldy language with many impure features such as cut, IO, assert/retract and so on, and when we’re working with meta-interpreters we’re often only interested in a small, declarative part of the language. Hence we shall restrict our focus to a programming language akin to pure Prolog which is basically just a set of Horn clauses/rules.
Even though we still haven’t decided the syntax for the object language we know that we must represent at least two things: facts and rules. Since a fact $A$ is equivalent to the rule $A \leftarrow true$ we can store these in the same manner. Assume that $P$ is a definite logic program. How should we represent it? As a list or a search tree? This could be a good approach if we were interested in implementing dynamic predicates in a declarative way, but since $P$ is static it’s much easier to just use the database and store everything as facts. For every rule $A \leftarrow B_1, ..., B_n \in P$, represent it as the fact $rule(A, [B_1, ..., B_n])$. If a rule only has the single atom $true$ in its body, i.e. it is a fact, then the second argument is the empty list. Obviously this is just one of many possible representations, but it’s simple to implement and work with.
As an example, here’s how we would write $append/3$:
rule(append([], Ys, Ys), []).
rule(append([X|Xs], Ys, [X|Zs]),[append(Xs, Ys, Zs)]).
Simple, but not exactly pleasing to the eye. Fortunately it’s easy to add some syntactic sugar with the help of Prolog’s term expansion mechanism. Instead of directly using $rule/2$ we can rewrite $append/3$ as:
append([], Ys, Ys) :- true.
append([X|Xs], Ys, [X|Zs]) :-
append(Xs, Ys, Zs).
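The expansion step itself is left out; as a rough sketch (my own assumption, using the $term\_expansion/2$ hook directly rather than a proper expansion object, and only meant to be active while the object program is being loaded), it could look something like this:
%Translate a sugared clause Head :- Body into a rule/2 fact.
term_expansion((Head :- Body), rule(Head, Goals)) :-
body_to_list(Body, Goals).
%Flatten a conjunction into a list of goals; a body consisting of true becomes the empty list.
body_to_list(true, []).
body_to_list((A, B), [A|Goals]) :-
body_to_list(B, Goals).
body_to_list(Goal, [Goal]) :-
Goal \= true,
Goal \= (_, _).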
And then define a suitable expansion object, along the lines sketched above, so that we end up with a set of $rule/2$ facts. This is a rather mundane and not very exciting programming task and hence mostly omitted. Now on to the interpreter. It will be defined by a set of $prove/1$ clauses where the single argument is a list of goals. If you’ve never seen a meta-interpreter in Prolog before, you’re probably in for some serious disappointment since the program is so darn simple. So simple that a first reaction might be that it can’t possibly do anything useful. This first impression is wrong, however, since it’s easy to increase the granularity of the interpreter by implementing features instead of borrowing them from the Prolog system.
As mentioned, the interpreter takes a list of goals as argument. This means that there’s a base case and a recursive case. In the base case of the empty list we are done. In the recursive case we have a list of the form $[G|Gs]$ where $G$ is the first goal that shall be proven. How do we prove $G$ then? By checking whether there’s a corresponding rule $rule(A, [B_1, ..., B_n])$ where $A$ and $G$ are unifiable with mgu $\theta$, and then recursively proving $([B_1, ..., B_n|Gs]) \theta$. In almost any other language this would be considerable work, but since Prolog is a logic programming language we already know how to do unification. Thus we end up with:
%Initialize the goal list with G.
prove(G) :-
prove1([G]).
prove1([]).
prove1([G|Gs]) :-
rule(G, B),
prove1(B),
prove1(Gs).
This is a prime example of declarative programming. We’ve only described what it means for a conjunction of goals to be provable and left the rest to the Prolog system. If you’re unsure why or how the interpreter works I urge you to try it for yourself.
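For example, with the $rule/2$ facts for $append/3$ above loaded, a query could look like this (output format aside):
?- prove(append([a,b], [c,d], Zs)).
Zs = [a, b, c, d].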
### Extensions
To prove that I wasn’t lying before I shall illustrate some neat extensions to the bare-bone interpreter. Strictly speaking we don’t really need anything else since the language is already Turing complete. It’s e.g. trivial to define predicates that define and operate on the natural numbers. For example:
nat(zero) :- true.
nat(s(X)) :- nat(X).
add(zero, Y, Y) :- true.
add(s(X), Y, s(Z)) :-
add(X, Y, Z).
But since these operations can be implemented much more efficiently on any practical machine it’s better to borrow the functionality. Hence we shall define a set of built-in predicates that are proved by simply executing them. The easiest way is to add a $rule/2$ definition for every built-in predicate.
rule(rule(A, B), []) :-
rule(A, B).
rule((X is Y), []) :-
X is Y.
Why the first clause? So that we can facilitate meta-programming and use $rule/2$ in our object language. I mentioned earlier that the interpreter as defined is not really a meta-interpreter in the strict sense of the word, and that Prolog is such a large language that writing meta-interpreters for it is probably not worth the hassle. But now we have a very restricted yet powerful language. Can we write a real meta-interpreter in that language? Yes! Actually it’s hardly any work at all since we already have the source code for the old interpreter.
prove(G) :-
prove1([G]).
prove1([]) :- true.
prove1([G|Gs]) :-
rule(G, B),
prove1(B),
prove1(Gs).
Glorious. Perhaps not very practical, but glorious.
### Building a proof tree
When our interpreter gives an answer it doesn’t provide any indication as to why that answer was produced. Perhaps the answer is in fact wrong and we want to localize the part of the code that is responsible for the error. The first step in this process is to build a proof tree. A proof tree for a goal $\leftarrow G$ and logic program $P$ is a tree where 1) the root is labeled $G$, and 2) each node has a child for every subgoal with respect to $P$. Hence the proof tree is more or less a representation of a sequence of trace steps.
It might sound like a complex task, but it’s really not. All we need is to extend the $prove/1$ predicate with an additional argument for the proof tree. In the base case of the empty list the tree contains the single node $true$. If $[G|Gs]$ are the current goals then we prove $G$ and $Gs$ and build a proof tree from the recursive calls.
prove(G, T) :-
prove1([G], T).
prove1([], true).
prove1([G|Gs], ((G :- T1), T2)) :-
rule(G, B),
prove1(B, T1),
prove1(Gs, T2).
And when called with $G = append([a,b], [c,d], Xs)$ the resulting tree looks like this:
?- interpreter::prove(append([a,b], [c,d], Xs), T).
Xs = [a, b, c, d],
T = ((append([a, b], [c, d], [a, b, c, d]):- (append([b], [c, d], [b, c, d]):- (append([], [c, d], [c, d]):-true), true), true), true)
NB: this tree has a lot of redundant $true$ entries. How can we fix this?
### Summary
We’re now able to build proof trees. In the next entry we’re going to use them to localize errors in logic programs.
For a good discussion of meta-interpreters in Prolog the reader should turn to The Craft of Prolog by Richard O’Keefe. This post was just the tip of the iceberg. Another interesting subject is to experiment with different search rules, and for this I shamelessly promote my own bachelor’s thesis which is available at http://www.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:325247.
### Source code
The source code is available at https://gist.github.com/1330321.
## Arcane Abuses of append
First I should point out that the following predicates hardly qualify as arcane, and they’re not really that abusive either. But they do use append, and one out of three isn’t so bad after all? $append/3$ is one of Prolog’s most useful predicates and is often one of the list predicates first taught to students. Beginners, and especially those familiar with other programming languages, sometimes have a hard time recognizing the multiple usages of the predicate however. Just for reference and to make sure that we’re on the same page, the usual definition goes like this:
append([], Ys, Ys).
append([X|Xs], Ys, [X|Zs]) :-
append(Xs, Ys, Zs).
Nothing fanciful. Just a standard recursive predicate which holds if $Zs$ is the list obtained when appending all the elements of $Xs$ with all the elements of $Ys$. So when should we use this predicate? When we want to append two lists? No! In all my years of using Prolog I don’t think I’ve used $append/3$ for this purpose in a serious program even once. The reason being that difference lists are usually a much better choice in these instances, since they can be appended in constant instead of linear time. So let’s try to figure out some other usages.
$member(X, Xs)$ is true if $X$ is a member of the list $Xs$. It’s of course not hard to write this as a recursive predicate as we did with $append/3$, but why bother if there’s an easier way? So let’s solve it with $append/3$ instead. Upon a first inspection it might not look like they have anything to do with each other. How can we find an element in a list by appending two lists? The answer is actually pretty simple. We know that we take a list, $Xs$, as argument. Can we find two other lists such that they give $Xs$ when appended? Of course. Just call $append/3$ with $Xs$ as the third argument. Remember that $append/3$ is a relation and not a function:
?- Xs = [a,b,c], append(A, B, Xs).
Xs = [a, b, c],
A = [],
B = [a, b, c] ;
Xs = [a, b, c],
A = [a],
B = [b, c] ;
Xs = [a, b, c],
A = [a, b],
B = [c] ;
Xs = A, A = [a, b, c],
B = [] ;
false.
That was the first step. Now let’s find an interpretation of membership that can be cast in terms of these three lists. How about this: $X$ is a member of $Xs$ if $Xs$ can be divided into two parts, $A$ and $B$, such that $X$ comes between $A$ and $B$. Put into code this is:
member(X, Xs) :-
append(_A, [X|_B], Xs).
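A quick check at the toplevel shows that it behaves just like the recursive version would (output format aside):
?- member(X, [a,b,c]).
X = a ;
X = b ;
X = c ;
false.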
Very easy once you know the trick, but difficult if one is afraid of using $append/3$ as a relation instead of a function. A similar problem is the sublist problem: given a list $Xs$, is $Ys$ a sublist of $Xs$? Again it’s not hard to imagine how a recursive version would look, but perhaps we can find an easier solution with the help of $append/3$. A sublist is a continuous subsequence. This can be expressed in terms of three lists: $Ys$ is sublist of $Xs$ if there exists two lists, $A$ and $B$, such that $A$ appended with $Ys$ and $B$ results in $Xs$. That was quite a mouthful, but in essence it’s the same thing as we did with $member/2$ with the difference being that we’re looking for a list instead of a single element. Assume that we had the predicate $append/4$. Then sublist could be solved as:
sublist(Xs, Ys) :-
append(_A, Ys, _B, Xs).
Alas, since we don’t have such a predicate we’re going to use $append/3$ two times instead. First $Xs$ is divided into $A$ and $B$. Then we find the sublist $Ys$ by saying that $Ys$ is a suffix of $B$.
sublist(Xs, Ys) :-
append(_A, B, Xs),
append(_, Ys, B).
It should be noted that this solution gives rise to many duplicate answers. Why? Assume that $Xs = [a,b]$. Then the answer $Ys = [b]$ can be found by first binding $B$ to $[a,b]$ and then $Ys$ to the suffix $[b]$ of this list. Or it can be found by binding $B$ to $[b]$ and then binding $Ys$ to the suffix $[b]$ of $B$. This is a bummer since we’re only interested in one of these answers. The implementation of an optimized version is left as an exercise to the reader.
$select/3$, $last/2$ and other basic list processing predicates can be implemented in essentially the same manner. As a last example we’re going to implement $nth/3$ with $append/3$ and $length/2$. $nth(X, Xs, N)$ is true if $X$ is the $N$:th member of $Xs$, starting from $0$. One observation is enough to give us a solution: $X$ is the $N$:th element of $Xs$ if the number of elements preceding $X$ in $Xs$ is equal to $N$. This is easy to check with $length/2$:
nth(X, Xs, N) :-
append(A, [X|_], Xs),
length(A, N).
A question to the observant reader: why is the order of the two goals in the body not swapped? Also, as a concluding remark: I’ve been told that it’s not always a good idea to do something just because you can. That might well be true. This version of $nth/3$ is rather inefficient and I would not recommend anyone to try it at home!
## Delightful Summer Reading
Apologies for the lack of updates! In my defense, I’ve been rather busy in an attempt to finish the drafts of not only one, but two, novels. The first of these is about bananas, puns and slapstick humour while the second is heavily influenced by my interest in logic and incompleteness. Over the course of the summer I’ve also been doing a fair amount of reading. The two non-fiction books that I’m currently digging my teeth into are The World of Mathematics and Logic, Logic and Logic.
The World of Mathematics is a vast collection of essays (the Swedish edition, Sigma, consists of six volumes in total!) spanning topics such as biographies of the great thinkers, historical problems and also more recent investigations of the foundations of mathematics. Highly recommended, and as a bonus it looks great in the bookshelf.
Logic, Logic and Logic is a collection of articles by the prominent logician-philosopher George Boolos. The bulk of the text is dedicated to various papers on Frege, which among other things shed light on the importance of his Begriffsschrift. You probably knew that Frege’s life work was demolished when Bertrand Russell presented his infamous paradox in a letter to him, but surprisingly enough it has been found that a rather large portion of it can be salvaged by very small means. The end result is a consistent, second-order theory of arithmetic that in some ways is much more elegant than the usual Peano axiomatic formulation (those axioms are instead derived). The fine details are not always that easy to follow, but Boolos’ interesting philosophical remarks make it worthwhile in the end. Also recommended!
If time permits, I’ll also revisit Computing With Logic by David Maier and David S. Warren. For some reason I left it half unfinished when I read it the last time. It more or less contains everything you need to know, in theory and practice, to implement a reasonably efficient Prolog system (excluding parsing), and could potentially serve as inspiration for a few blog entries in a not-so-distant future.
## Algebra 1: Common Core (15th Edition)
Consider BODMAS - Bracket, Order, Division, Multiplication, Addition, Subtraction. Addition comes before subtraction, so for the expression $34.5+12.9-50$, the first two numbers are added together and then 50 is subtracted from the sum. $(34.5+12.9)-50$ $(47.4)-50$ $=-2.6$ The answer is negative.
### On Tracking The Partition Function
Markov Random Fields (MRFs) have proven very powerful both as density estimators and feature extractors for classification. However, their use is often limited by an inability to estimate the partition function $Z$. In this paper, we exploit the gradient descent training procedure of restricted Boltzmann machines (a type of MRF) to track the log partition function during learning. Our method relies on two distinct sources of information: (1) estimating the change $\Delta Z$ incurred by each gradient update, (2) estimating the difference in $Z$ over a small set of tempered distributions using bridge sampling. The two sources of information are then combined using an inference procedure similar to Kalman filtering. Learning MRFs through Tempered Stochastic Maximum Likelihood, we can estimate $Z$ using no more temperatures than are required for learning. Comparing to both exact values and estimates using annealed importance sampling (AIS), we show on several datasets that our method is able to accurately track the log partition function. In contrast to AIS, our method provides this estimate at each time-step, at a computational cost similar to that required for training alone.
### MCMC for Hierarchical Semi-Markov Conditional Random Fields
Deep architecture such as hierarchical semi-Markov models is an important class of models for nested sequential data. Current exact inference schemes either cost cubic time in sequence length, or exponential time in model depth. These costs are prohibitive for large-scale problems with arbitrary length and depth. In this contribution, we propose a new approximation technique that may have the potential to achieve sub-cubic time complexity in length and linear time depth, at the cost of some loss of quality. The idea is based on two well-known methods: Gibbs sampling and Rao-Blackwellisation. We provide some simulation-based evaluation of the quality of the RGBS with respect to run time and sequence length.
### Discrete Restricted Boltzmann Machines
We describe discrete restricted Boltzmann machines: probabilistic graphical models with bipartite interactions between visible and hidden discrete variables. Examples are binary restricted Boltzmann machines and discrete naive Bayes models. We detail the inference functions and distributed representations arising in these models in terms of configurations of projected products of simplices and normal fans of products of simplices. We bound the number of hidden variables, depending on the cardinalities of their state spaces, for which these models can approximate any probability distribution on their visible states to any given accuracy. In addition, we use algebraic methods and coding theory to compute their dimension.
### Neuroscientists Transform Brain Activity to Speech with AI
Artificial intelligence is enabling many scientific breakthroughs, especially in fields of study that generate high volumes of complex data such as neuroscience. As impossible as it may seem, neuroscientists are making strides in decoding neural activity into speech using artificial neural networks. Yesterday, the neuroscience team of Gopala K. Anumanchipalli, Josh Chartier, and Edward F. Chang of University of California San Francisco (UCSF) published in Nature their study using artificial intelligence and a state-of-the-art brain-machine interface to produce synthetic speech from brain recordings. The concept is relatively straightforward--record the brain activity and audio of participants while they are reading aloud in order to create a system that decodes brain signals for vocal tract movements, then synthesize speech from the decoded movements. The execution of the concept required sophisticated finessing of cutting-edge AI techniques and tools.
### Policy Design for Active Sequential Hypothesis Testing using Deep Learning
Information theory has been very successful in obtaining performance limits for various problems such as communication, compression and hypothesis testing. Likewise, stochastic control theory provides a characterization of optimal policies for Partially Observable Markov Decision Processes (POMDPs) using dynamic programming. However, finding optimal policies for these problems is computationally hard in general and thus, heuristic solutions are employed in practice. Deep learning can be used as a tool for designing better heuristics in such problems. In this paper, the problem of active sequential hypothesis testing is considered. The goal is to design a policy that can reliably infer the true hypothesis using as few samples as possible by adaptively selecting appropriate queries. This problem can be modeled as a POMDP and bounds on its value function exist in literature. However, optimal policies have not been identified and various heuristics are used. In this paper, two new heuristics are proposed: one based on deep reinforcement learning and another based on a KL-divergence zero-sum game. These heuristics are compared with state-of-the-art solutions and it is demonstrated using numerical experiments that the proposed heuristics can achieve significantly better performance than existing methods in some scenarios.
|
Apr 6, 2011
The semantics of a simple type-theoretic language
The language for predicate logic contains this: connectives (∨, ∧, → etc.), quantifiers (∀ and ∃), individual constants (a, b, c, d etc.) and variables (x, y, z etc.), and predicate constants (W, B, H etc.). These have their prowess as well as their limits. In predicate logic, you can only say something about entities (the properties they have) and the relation of entities to other entities. Logicians and linguists alike felt the need to add more structure to this language when faced with sentences like:
(a) If Winnie the Pooh is a mammal, then there is at least one thing he has in common with Lady Gaga
(b) Mary is yelling furiously
(c) Jumbo is a terribly small elephant
And, mind you, these are not terribly complicated sentences. For instance, (a) feels like quantifying over properties, saying that there is a property such that both Winnie and Lady Gaga share, so we need to introduce a variable for predicates that renders it into (a’) below. That would already upgrade our first-order predicate logic into second-order predicate logic. But consider (b). It feels like attributing the property of being furious to the property of yelling – the latter in turn being attributed to Mary. For that, we would need a representation on the lines of (b’) below, where the scripts are expressions of a third-order. Needless to add, (c’) is even more elaborate, with terribly modifying small elephant, and small being applied to elephant, and elephant to Jumbo.
(a’) Mp → ∃X(Xl ∧ Xp)
(b’) F(Y)(m)
The theory of types is a very handy tool which linguists (some, not all) use to operate on these natural language sentences. It is good to note that type theory did not arise out of this need, but the purely logical one of overcoming the paradoxes discovered in set-theory (Russell’s paradox, more precisely). We will come back to this later. Let us proceed by defining the set of all types. As we’ve mentioned before, types are labels for categories of expressions. What’s really nice is that we can go along with just two basic types (e – for entities, and t – for formulas[1]) and build up from there. So let’s start:
Definition 1:
T, the set of types, is the smallest set such that:
(i) e, t ∈ T
(ii) if a, b ∈ T, then <a, b> ∈ T
The notation <a, b> is what stands for the so-called derived types and should be read as follows. An expression of the type <a, b> combines with an expression of the type “a” and results in an expression of the type “b”. Let α be an expression of the type <a, b> and β an expression of the type “a”. This means that if we combine α with β, we get α(β), which we know will be of the type “b”. This process of applying expressions to other expressions is called “functional application of α of type <a, b> to β of type a”. I guess you know where we’re going with this. Predicates will be of the type <e, t>, because they combine with individuals (which are of the type e) and result in truth values (t). We will say then that the predicate walks is of the type <e, t> because it combines with John and gives the formula W(j) – which we know, and which is true or false.
Not all types can be identified in natural language (and this is quite normal). But some are. Expressions of the type <<e, t>, t>, for instance, combine with predicates (<e, t>) and give truth values (t) – which is just what we want for second-order predicates in sentences like “Walking is healthy”. Expressions of the type <<e, t>, <e, t>> combine with one place predicates and give one-place predicates. In natural language, adverbs and relative adjectives correspond to expressions of this type. Now what would two-place predicates look like? Let’s look at the sentence:
(1) Paepaenoy loves Zmika
What loves does in (1) is it combines with Zmika (an expression of the type e) and gives loves Zmika, which applied to Paepaenoy results in a formula. Thus, two-place predicates will be of the type <e, <e, t>>.
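A small illustration of functional application in Python, treating entities as strings and an expression of type <a, b> as an ordinary function from a-things to b-things (the particular extensions chosen for walks and loves are made up for the example):

```python
# Type e: entities (modelled here as strings); type t: truth values (booleans).
john, zmika, paepaenoy = "John", "Zmika", "Paepaenoy"

# A one-place predicate is of type <e, t>: a function from entities to truth values.
def walks(x):
    return x in {"John", "Paepaenoy"}          # a made-up extension for "walks"

# A two-place predicate is of type <e, <e, t>>: it combines with the object first
# and returns a one-place predicate that then combines with the subject.
def loves(obj):
    return lambda subj: (subj, obj) == ("Paepaenoy", "Zmika")   # made-up extension

# Functional application mirrors the type rule: applying <a, b> to something of
# type a yields something of type b.
print(walks(john))                 # W(j), an expression of type t -> True
print(loves(zmika)(paepaenoy))     # "Paepaenoy loves Zmika" -> True
```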
We can now give the syntax of our type-theoretic language in a very precise form:
(i) If α is a variable or a constant of type a in L, then α is an expression of type a in L
(ii) If α is an expression of the type <a, b> in L, and β is an expression of the type a, then α(β) is an expression of the type b in L
(iii) If ψ and ϕ are expressions of type t in L (i.e. formulas), then so are (ϕ ∧ ψ), (ϕ ∨ ψ), (ϕ → ψ), (ϕ ↔ ψ).
(iv) If ϕ is an expression of type t in L, and v is a variable (of any type), then ∀vϕ and ∃vϕ are expressions of type t in L.
(v) If α and β are expressions that belong to the same arbitrary type, then (α = β) is an expression of type t in L.
(vi) Every expression in L is to be constructed by means of (i)-(v) in a finite number of steps
Given this, we can have a notation for “all well-formed expressions of type a”. Because of HTML constraints, I will note that as WE[a/L], but fortunately one never notes the name of the language, so I will use WEa – e.g. WEt for the set of all formulas. Another thing that will change in our talk, due to these upgrades, is that we will not talk about sets any more, not normally, but of the “characteristic function of sets”. These two ways of speaking are interchangeable, but the latter gives a better picture of what’s going on: a one-place predicate like “walks” is not, strictly speaking, the same as “the set of all walkers” – although saying this is not wrong, in the set-view of expressions – but “a function from individual to truth values”, namely, that function which maps walkers to 1 and non-walkers to 0. This is called the functional view and goes hand in hand with the functional application of types. The notation also changes – although some authors prefer not to introduce a different notation for something which is in the end pretty similar. Thus, if
B = {1, 2}, and D = {x | x is a natural number}, then the characteristic function of B can be written as fB(1) = fB(2) = 1 and fB(3) = fB(4) = fB(5) = … fB(n) = 0. The domain of fB is D, and the range is {1, 0} – which goes hand in hand with the fact that something like B will be of the type <e, t>.
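As a quick sketch, the same characteristic function in Python:

```python
B = {1, 2}
f_B = lambda x: 1 if x in B else 0   # maps members of B to 1 and everything else to 0
print(f_B(1), f_B(2), f_B(3))        # 1 1 0
```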
With this we arrive at the basis of the semantics for our language, for one-place predicates will be interpreted as the characteristic functions of subsets of the domain. As we move up the ladder of types of expressions, an expression of the type <a, b> is treated as a function from "type a things" to "type b things". It is useful now to remember the general notation X^Y, which stands for "all functions that map Y into X". Thus, the set of all one-place predicates will be {0, 1}^D and the set of all two-place predicates will be ({0, 1}^D)^D. These are functions that "spill out" other functions.
Since more complex types are constructed by means of two simple types, we will be able to specify the domain of their characteristic functions in the same way. Thus, with the notation D_{a, D} standing for "the domain of interpretation of expressions of type a, given a domain D", we will naturally have:
Definition 3:
(i) D_{e, D} = D (i.e. the domain of interpretation of WEe is D, the set of individuals)
(ii) D_{t, D} = {1, 0} (i.e. the domain of interpretation of WEt is {1, 0}, the set of truth values)
(iii) D_{<a, b>, D} = (D_{b, D})^{D_{a, D}} (i.e. the set of functions from D_{a, D} to D_{b, D})
So, the domain of interpretation of a one-place predicate will be (D_{t, D})^{D_{e, D}}, which is the same as {1, 0}^D; for two-place predicates, ((D_{t, D})^{D_{e, D}})^{D_{e, D}}, which is ({1, 0}^D)^D. What we just did is specify the domains of the interpretation functions for each type (simple or derived). We now have a model M for our language L, consisting of a non-empty domain D and an interpretation function I as specified in definition 3. This function I assigns to each constant an element from the domain which corresponds to the type of the constant. Within WEa, then, we can differentiate between CONa and VARa – that is, between constants of type a and variables of type a.
The best thing to do now is some exercises. I’ll come back with that.
[1] Be very careful not to mix up the meta-languages when we discuss types. “e” is a type, but <a, b> is not a type, but a notation for the type that combines with “a” and results in “b”. So <a, b> stands for all types which have an “a” and a “b” as types, namely <e, t>, <<e, t>, t>, <<e, t>, <e, t>>, etc.
|
$\mathbb{Z}$-Polynomials in an Enumeration Identity
I've conjectured the following identity: For $1 \leqslant k \leqslant l \leqslant n$ and $m \in \mathbb{N}$, \begin{align} \sum_{1 \leqslant i_1 < \cdots < i_l \leqslant n} i_{k}^{m} = \sum_{j = 1}^{m} P_{m,j}(k) \binom{n+j}{l+m}, \end{align} where $P_{m,j}(k)$ generates the $\mathbb{Z}$-polynomial triangle: \begin{align} \begin{array}{cccccc} m / j & 1 & 2 & 3 & 4 & 5 \\ 1 & k & & & & \\ 2 & k & k^{2} & & & \\ 3 & k & 3k^{2} + k & k^{3} & & \\ 4 & k & 7k^{2} + 4k & 6 k^{3} + 4k^{2} + k & k^{4} & \\ 5 & k & 15 k^{2} + 11 k & 25 k^{3} + 30 k^{2} + 11k & 10 k^{4} + 10 k^{3} + 5 k^{2} + k & k^{5} \end{array} \quad etc \end{align} In particular, \begin{align} P_{m,1}(k) & = k \\ P_{m,2}(k) & = (2^{m-1} - 1) k^{2} + (2^{m-1} - m) k \\ P_{m,m-1}(k) & = \sum_{j = 1}^{m-1} \binom{m}{j+1} k^{m-j} \\ P_{m,m}(k) & = k^{m}. \end{align} When $k = 1$, the polynomials specialize to Eulerian numbers. Summing over $j$, I conjecture \begin{align} \sum_{j = 1}^{m} P_{m,j}(k) = \sum_{l = 0}^{m} |s(m,l)|k^{l} = (k)_{m}, \end{align} where $s(m,l)$ is the $(m,l)^{\text{th}}$-Stirling Number of the first kind. Are these polynomials well known?
|
Up to this point we have dealt only with Gaussian integrals having the single variable x. The Gaussian integral, also called the probability integral and closely related to the erf function, is the integral of the one-dimensional Gaussian function over $(-\infty,\infty)$; classical relatives are the logarithmic integral $\operatorname{Li}(x)=\int_2^x dt/\log t$ from the study of primes (with $\operatorname{Li}(x)\sim x/\log x$ as $x\to\infty$) and the elliptic integrals.

Lemma 0.1 (Gaussian integral). For all $a>0$,
$$\int_{\mathbb{R}} e^{-ax^2}\,dx=\sqrt{\frac{\pi}{a}}.\tag{1}$$
Proof: square the left-hand side, write the square as a double integral over $\mathbb{R}^2$ by Fubini's theorem, and switch to polar coordinates. Equation (1) remains valid for complex $a$ with $\operatorname{Re} a>0$.

A short table of Gaussian integrals over the half line (the even moments follow by differentiating (1) with respect to $a$):
$$\int_0^\infty e^{-ax^2}dx=\frac{1}{2}\sqrt{\frac{\pi}{a}},\qquad \int_0^\infty x\,e^{-ax^2}dx=\frac{1}{2a},\qquad \int_0^\infty x^2e^{-ax^2}dx=\frac{1}{4a}\sqrt{\frac{\pi}{a}},\qquad \int_0^\infty x^3e^{-ax^2}dx=\frac{1}{2a^2},$$
$$\int_0^\infty x^{2n}e^{-ax^2}dx=\frac{1\cdot3\cdot5\cdots(2n-1)}{2^{n+1}a^{n}}\sqrt{\frac{\pi}{a}},\qquad \int_0^\infty x^{2n+1}e^{-ax^2}dx=\frac{n!}{2a^{n+1}}.$$

The function $\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ is called a Gaussian, and by (1) its integral over the whole real line is 1: it is the curve that represents the normal distribution, one of the most commonly used probability distributions in applications. In quantum field theory there can be an infinite number of variables, and so we need to investigate how the Gaussian integrals behave when the variable $x$ becomes the $n$-dimensional vector $\mathbf{x}$, where the dimension $n$ may be infinite.

For numerical work, the two-point Gaussian quadrature rule $\int_{-1}^{1}f(x)\,dx\approx f(-\tfrac{\sqrt{3}}{3})+f(\tfrac{\sqrt{3}}{3})$ has degree of precision 3, and the general rules $\int_{-1}^{1}f(x)\,dx\approx\sum_{i=1}^{n}c_i\,f(x_i)$ will likely give much better approximations than equally spaced nodes in most cases.
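A crude numerical check of equation (1), sketched in Python with a midpoint Riemann sum (the truncation bound L = 50 and the number of subintervals are arbitrary choices):

```python
import math

def gaussian_integral(a, n=200_000, L=50.0):
    # Midpoint Riemann sum for the integral of exp(-a x^2) over [-L, L];
    # for a >= 0.5 the tail beyond |x| = 50 is negligible.
    dx = 2 * L / n
    return sum(math.exp(-a * (-L + (i + 0.5) * dx) ** 2) for i in range(n)) * dx

for a in (0.5, 1.0, 2.0):
    print(gaussian_integral(a), math.sqrt(math.pi / a))   # the two columns agree
```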
|
# Is it possible to write a sum as an integral to solve it?
I was wondering, for example,
Can:
$$\sum_{n=1}^{\infty} \frac{1}{(3n-1)(3n+2)}$$
Be written as an Integral? To solve it. I am NOT talking about a method for using tricks with integrals.
But actually writing an integral form. Like
$$\displaystyle \sum_{n=1}^{\infty} \frac{1}{(3n-1)(3n+2)} = \int_{a}^{b} g(x) \space dx$$
What are some general tricks for finding the values of infinite series?
• math.stackexchange.com/questions/1002440/… – lab bhattacharjee Nov 3 '14 at 13:26
• @labbhattacharjee, I did not meant that. I know the solution to this, I was just asking if in general it is possible to write a sum as an actual integral. – Amad27 Nov 3 '14 at 13:28
• You can trivially write the sum as an integral using the Iverson bracket (add a factor of $[n \in \mathbb{N}]$ to the integrand). This ignores the question of how to evaluate the resulting integral, of course. – chepner Nov 3 '14 at 19:10
• "I am NOT talking about a method for using tricks with integrals." "But actually writing an integral form." "What are some general tricks" Combining these quotes with the accepted answer that does not seem to be a general trick, I'm a bit confused on what this question is asking. – JiK Nov 4 '14 at 8:28
• @Amad27 $\int_\mathbb{N}\frac{d \mu}{(3n-1)(3n+2)}$ where $\mu$ is the counting measure on $\mathbb{N}$. It doesn't give you anything you didn't already have though. I didn't really mean it seriously although it is true. – Tim Seguine Nov 5 '14 at 17:21
A General Trick
A General Trick for summing this series is to use Telescoping Series: \begin{align} \sum_{n=1}^\infty\frac1{(3n-1)(3n+2)} &=\frac13\lim_{N\to\infty}\sum_{n=1}^N\left(\frac1{3n-1}-\frac1{3n+2}\right)\\ &=\frac13\lim_{N\to\infty}\left[\sum_{n=1}^N\frac1{3n-1}-\sum_{n=1}^N\frac1{3n+2}\right]\\ &=\frac13\lim_{N\to\infty}\left[\sum_{n=0}^{N-1}\frac1{3n+2}-\sum_{n=1}^N\frac1{3n+2}\right]\\ &=\frac13\lim_{N\to\infty}\left[\frac12-\frac1{3N+2}\right]\\ &=\frac16 \end{align}
An Integral Trick
Since $$\int_0^\infty e^{-nt}\,\mathrm{d}t=\frac1n$$ for $n\gt0$, we can write \begin{align} \sum_{n=1}^\infty\frac1{(3n-1)(3n+2)} &=\sum_{n=1}^\infty\frac13\int_0^\infty\left(e^{-(3n-1)t}-e^{-(3n+2)t}\right)\mathrm{d}t\\ &=\frac13\int_0^\infty\frac{e^{-2t}-e^{-5t}}{1-e^{-3t}}\mathrm{d}t\\ &=\frac13\int_0^\infty e^{-2t}\,\mathrm{d}t\\ &=\frac16 \end{align}
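A quick numerical check of the value 1/6 (the tail of the series beyond N behaves like 1/(9N), so a modest N already gives several correct decimals):

```python
N = 200_000
partial = sum(1 / ((3 * n - 1) * (3 * n + 2)) for n in range(1, N + 1))
print(partial, 1 / 6)   # the partial sum agrees with 1/6 to about six decimal places
```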
• I think this is a better "trick" for dealing with sums. Integral "tricks" are nice however integrals and infinite series' are very different in what they calculate and manipulating a sum or integral on its own without switching is preferred. – Ali Caglayan Nov 3 '14 at 16:23
• @Alizter: For the most part, I agree. However, sometimes pure series manipulation can be extremely complicated, and the proper integral representation of a sum can be useful. However, in this case, I think staying with series manipulation is easiest. That being said, I have added an integral approach, as well. – robjohn Nov 3 '14 at 18:40
• The sum under the first integral could have been computed as a telescoping series either. Considering this, I think the use of integrals in the second solution is completely void. Edit: I mean exactly what Henning Makholm points out under the other answer. – Adayah Nov 4 '14 at 19:41
• @Adayah: My reply to Henning was meant as an agreement. I first posted only the telescoping series, but then added an integral approach to satisfy the first part of the question. In any approach where one breaks up the summand using partial fractions, it could be said that, at that point, the answer could be computed as a telescoping sum. – robjohn Nov 4 '14 at 20:48
Since $\int_{0}^{1}x^k\,dx = \frac{1}{k+1}$, $$\frac{1}{(3n-1)(3n+2)}=\frac{1}{3}\left(\frac{1}{3n-1}-\frac{1}{3n+2}\right)=\frac{1}{3}\int_{0}^{1}x^{3n-2}(1-x^3)\,dx,$$ so, summing over $n$: $$\sum_{n=1}^{+\infty}\frac{1}{(3n-1)(3n+2)}=\frac{1}{3}\int_{0}^{1}x\,dx=\frac{1}{6}.$$
• I thought we need uniform convergence in order to interchange the limit and integral. The power series is uniformly convergent inside the radius of convergence, how to pass it to the whole interval $[0,1]$? – John Nov 3 '14 at 13:45
• @JohnZHANG Actually no, Fubini and Tonelli's theorems allow this for a monotone sequence supposedly, I believe. – Amad27 Nov 3 '14 at 13:45
• Nice trick for the given sum, but this still doesn't answer the bold-marked question of general tricks. – Ruslan Nov 3 '14 at 16:34
• Isn't the integral just a detour here? The operative step is exactly the same telescoping that could have been done without rewriting the terms into integrals. – Henning Makholm Nov 4 '14 at 10:11
• @HenningMakholm: smoke and mirrors. – robjohn Nov 4 '14 at 12:07
Actually writing it as an integral, as asked for:
$$\displaystyle \sum_{n=1}^{\infty} \frac{1}{(3n-1)(3n+2)} = \int_{1}^{\infty} \frac{1}{(3\lfloor x\rfloor-1)(3\lfloor x\rfloor+2)} dx$$
This probably won't help with finding the value, though.
• why won't it help finding the value? – Amad27 Nov 4 '14 at 13:20
• @Amad27: I don't see a way it would. If you can find one, then more power to you, I suppose ... – Henning Makholm Nov 4 '14 at 13:33
• @Amad27 Methods for solving integrals are poorly suited for integrating functions that are non-continuous. The usual approach for integrating functions like the one here is to separately integrate over each interval where it is continuous. Which brings us back to the sum form. – Rafał Dowgird Nov 4 '14 at 15:11
• @Amad27 It is quite literally equivalent to the original sum in a trivially useless manner XD – Simply Beautiful Art Jan 12 '17 at 2:11
In such cases, the partial fractions of general term (i.e. $n^{th}$ term ) of the infinite-series are very useful.
Given that $$\sum_{n=1}^{\infty}\frac{1}{(3n-1)(3n+2)}=\sum_{n=1}^{\infty} T_{n}$$ Where, $T_{n}$ is the $n^{th}$ term of the given series which can be easily expressed in the partial fractions as follows $$T_{n}=\frac{1}{(3n-1)(3n+2)}=\frac{1}{3}\left(\frac{1}{3n-1}-\frac{1}{3n+2}\right)$$ Now, we have $$\sum_{n=1}^{\infty}\frac{1}{(3n-1)(3n+2)}=\frac{1}{3}\sum_{n=1}^{\infty} \left(\frac{1}{3n-1}-\frac{1}{3n+2}\right)$$ $$=\frac{1}{3} \lim_{n\to \infty} \left[\left(\frac{1}{2}-\frac{1}{5}\right)+\left(\frac{1}{5}-\frac{1}{8}\right)+\left(\frac{1}{8}-\frac{1}{11}\right)+\! \cdot \! ........ +\left(\frac{1}{3n-4}-\frac{1}{3n-1}\right)+\left(\frac{1}{3n-1}-\frac{1}{3n+2}\right)\right]$$ $$=\frac{1}{3} \lim_{n\to \infty} \left[\frac{1}{2} -\frac{1}{3n+2}\right]$$ $$=\frac{1}{3} \left[\frac{1}{2} -\frac{1}{\infty}\right]$$ $$=\frac{1}{3} \left[\frac{1}{2}\right]=\color{blue}{\frac{1}{6}}$$
We can indeed write the sum as an integral, after research. Consider:
Find: $\psi(1/2)$
By definition:
$$\psi(z+1) = -\gamma + \sum_{n=1}^{\infty} \frac{z}{n(n+z)}$$
The required $z$ is $z = -\frac{1}{2}$
so let $z = -\frac{1}{2}$
$$\psi(1/2) = -\gamma + \sum_{n=1}^{\infty} \frac{-1}{2n(n - \frac{1}{2})}$$
Simplify this: $$\psi(1/2) = -\gamma - \sum_{n=1}^{\infty} \frac{1}{n(2n - 1)}$$
The sum seems difficult, but really isn't.
We can telescope or:
$$\frac{1}{1-x} = \sum_{n=1}^{\infty} x^{n-1}$$
Let $x \rightarrow x^2$
$$\frac{1}{1-x^2} = \sum_{n=1}^{\infty} x^{2n-2}$$
Integrate once:
$$\tanh^{-1}(x) = \sum_{n=1}^{\infty} \frac{x^{2n-1}}{2n-1}$$
Integrate again:
$$\sum_{n=1}^{\infty} \frac{x^{2n}}{(2n-1)(n)} = 2\int \tanh^{-1}(x) dx$$
From the tables, the integral of $\tanh^{-1}(x)$
$$\sum_{n=1}^{\infty} \frac{x^{2n}}{(2n-1)(n)} = \log(1 - x^2) + 2x\tanh^{-1}(x)$$
Take the limit as $x \to 1$
$$\sum_{n=1}^{\infty} \frac{1}{(2n-1)(n)} = \log(4)$$
$$\psi(1/2) = -\gamma - \sum_{n=1}^{\infty} \frac{1}{(2n-1)(n)}$$
$$\psi(\frac{1}{2}) = -\gamma - \log(4)$$
• I am the OP per se. This is a general trick. I converted the sum into an integral. Please read carefully. – Amad27 Dec 21 '14 at 8:15
Yes, you can use the Euler-Maclaurin formula to write the sum as an integral plus an infinite number of derivative terms. I remember deriving this for myself when I was younger and being very pleased with myself.
This particular sum could be solved because you had two terms $ax+b$ and $ax+c$ and the difference between c and b is equal to a (I think it would work in a slightly more complicated way if it was a not-too-large multiple of a).
If you want numerical values in general cases, and the sum doesn't converge quickly for your taste, or you want just a partial sum, you can use that
$$\displaystyle f (k) = \int_{k-1/2}^{k+1/2} f(k) dx ≈ \int_{k-1/2}^{k+1/2} f(x) dx$$
and therefore
$$\displaystyle \sum_{k=n}^{m} f(k) ≈ \int_{n-1/2}^{m+1/2} f(x) dx$$
Assuming that you can solve the integral in closed form, if you let
$$\displaystyle g (k) = f(k) - \int_{k-1/2}^{k+1/2} f(x) dx$$
then
$$\displaystyle \sum_{k=n}^{m} f(k) = \int_{n-1/2}^{m+1/2} f(x) dx + \sum_{k=n}^{m} g(k)$$
$g (k)$ will usually converge much faster than $f (k)$.
|
# Your test scores are 75, 93, 90, 82 and 85. What is the lowest score you can obtain on the next test to achieve an average of at least 86?
Nov 15, 2016
I would have to score at least $91$ on the next test.
#### Explanation:
An average is calculated by adding the figures and dividing by the number of figures. Since we have $5$ given figures and $1$ awaited figure (a total of $6$) to achieve an average of 86, we can write an equation:
$\frac{75 + 93 + 90 + 82 + 85 + x}{6} = 86$
Simplify the numerator on the left.
$\frac{425 + x}{6} = 86$
Multiply both sides by $6$.
$425 + x = 86 \times 6$
$425 + x = 516$
Subtract $425$ from both sides.
$x = 91$
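A one-line check of the arithmetic in Python:

```python
scores = [75, 93, 90, 82, 85]
needed = 86 * 6 - sum(scores)   # target total minus what has been scored so far
print(needed)                   # 91
```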
|
# How to setup animation based events without depending on an animation?
How can I have events based off animations without relying on them? Take for example reloading in a FPS game. You'd hit a button to reload your weapon, the animation will play, then animation will hit a certain frame, then the weapon will be reloaded. There's a dependency on the animation eventually hitting that key frame before the weapon can reload and the character can go into another state. Without the animation, the weapon would never reload, and without some arbitrary timeout, the character would be stuck in that state forever.
Checking for the existence of an animation is an option. However, this still requires us to know about some sort of animation, and what it's playing. Am I over-engineering this? Would it be okay for the logic of some object to have some information about its animations?
Introduce a new abstraction: Action
An Action represents something that can be done, and encompasses the time aspect as well as information about the state and state transitions of the action. So a reload action definition might contain information about total duration, relative timestamp of reload completion, etc. A spear throw action definition contains information about total duration and timestamp of spear release.
Exactly how you wrap this up in code is up to you and depends on your framework/style. Actions could be simple data objects which you just use to define different actions, and then you make use of the timestamps, durations and other information "manually" in the rest of your code. Or you could improve encapsulation by having your Action module expose events which trigger when the action changes state (OnReloadStarted, OnReloadReady, OnReloadFinished, OnSpearThrown). Or you could have a middle ground where there are no events, but the Action still manages its states/transitions so that you can poll it: throwSpearAction.isSpearThrown().
If you have multiple reload animations or spear-throw animations with different timings, then you define each alternative as an action, e.g. FastThrowAction and SlowThrowAction. Once again it is up to you exactly how you manage and pick from different alternatives.
The point of all this is that the Action abstraction becomes a natural part of the gameplay model when you focus on the concepts that affect gameplay (action transitions/events) and remove the concepts that don't (animation).
These Action objects can then be used to drive the animation as well. Whenever a FastThrowAction is started, then your renderer/view starts the corresponding FastThrowAnimation.
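A minimal sketch of such an Action in Python (the class shape, the 3.2-second duration and the reload_ready event name are all illustrative, not taken from any particular engine):

```python
class Action:
    """Gameplay-side definition of a timed action, independent of any animation."""
    def __init__(self, duration, events):
        self.duration = duration          # total time the action takes, in seconds
        self.events = sorted(events.items(), key=lambda kv: kv[1])  # (name, timestamp)
        self.elapsed = 0.0
        self.fired = set()

    def update(self, dt):
        """Advance the action by dt seconds; return the event names that fired this tick."""
        self.elapsed += dt
        fired_now = [name for name, t in self.events
                     if t <= self.elapsed and name not in self.fired]
        self.fired.update(fired_now)
        return fired_now

    @property
    def finished(self):
        return self.elapsed >= self.duration

# A reload whose gameplay effect lands at 2.5 s of a 3.2 s action:
reload_action = Action(duration=3.2, events={"reload_ready": 2.5})
print(reload_action.update(dt=2.6))   # -> ['reload_ready']
```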
All animations are more-or-less based on the idea of "keyframes", whereby you define two or more states/poses/transforms and the length of time between them; when 50% of the animation's total time has passed, the animation is 50% complete. Animations like "reload" have fixed time-costs based on the character, weapon, being suppressed, etc.
Your InputHandler will need to ignore invalid input until the Reloading state expires (not when the animation is finished).
During that time, the animation may also progress, smoothly, from 0% to 100%. When the player presses "R", the current time is recorded. If the computer, then, lags out for X+1 seconds, the entire animation will happen "instantly", when the computer catches back up; the InputHandler would also detect the (instant) expiration of the reload state and clear it.
Basically, the game continues playing with, or without, acceptable animations (see the sketch after this list):
1. Player presses "R"
2. Set the player's Reloading flag (disable reload-incompatible inputs)
3. ReloadTime or ExpiryTime is recorded (100% reload after 3.2 seconds)
4. Update() and Render() for 3.1999 seconds...
If using ReloadTime, subtract delta-time from it, every Update().
(Could be one 3.1999-second update or one-hundred 0.031999-second updates)
5. During the next Update(),
ReloadTime will be <=0, or
ExpiryTime will be a time in the past
6. If so, clear the player's Reloading flag (enable reload-incompatible inputs).
The animation is done (not a question).
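A minimal update-loop sketch of those steps (Python; the 3.2-second reload time is an illustrative value):

```python
class Player:
    RELOAD_TIME = 3.2   # seconds; illustrative value

    def __init__(self):
        self.reloading = False
        self.reload_timer = 0.0

    def press_reload(self):
        if not self.reloading:
            self.reloading = True
            self.reload_timer = self.RELOAD_TIME

    def update(self, dt):
        # dt may be one huge step or many tiny ones; the logic is the same either way.
        if self.reloading:
            self.reload_timer -= dt
            if self.reload_timer <= 0:
                self.reloading = False   # re-enable reload-incompatible inputs
```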
• That will work for something basic like reloading (which doesn't have to be exact), but consider something more visual like throwing a spear. You'll want the spear to be thrown at some exact time in the animation (where the character actually lets go of it). This is not necessarily the end of the animation, because the character might follow through on the throw, and it must be precise to look accurate. Creating the animation to perfectly fit data based on this would be a pain compared to having the animation drive it.
– Ben
May 3 '15 at 6:10
• @Ben, how do you determine when the player lets go? I'm visualizing those stupid golf games with the "swing meter". In that type of game, you fully describe the swing, then the animation happens, based on the calculated output. The point at which the club hits the ball is known before the animation starts, so stretching the time between key-frames is trivial. When the calculated time has elapsed (key-frame also happens to be hit), the animation is complete, the ball is accelerated, and then a second, follow-through, animation is played, starting with the player's current state.
– Jon
May 3 '15 at 6:43
|
Class 12 Chemistry Gases & Solutions States Of Matter
The temperature of the gas is raised from to , the root mean square velocity is
(a)
times of the earlier value
(b)
same as before
(c)
halved
(d)
doubled
Solution: The root mean square velocity at the initial absolute temperature T is v_rms = √(3RT/M) ... (i)
At the new absolute temperature T', the root mean square velocity is v'_rms = √(3RT'/M) ... (ii)
Dividing Eq. (i) by Eq. (ii): v_rms / v'_rms = √(T / T'), so the rms velocity scales as the square root of the absolute temperature.
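A tiny illustration of that relation, with hypothetical temperatures:

```python
import math

def vrms_ratio(T1, T2):
    # v_rms is proportional to sqrt(T), so v'/v = sqrt(T2/T1)
    return math.sqrt(T2 / T1)

print(vrms_ratio(300, 1200))   # quadrupling the absolute temperature doubles v_rms -> 2.0
```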
|
#### 4L of 0.02 M aqueous solution of Nacl was diluted by adding one litre of water. The molarity of the resultant solution is Option 1) 0.004 Option 2) 0.008 Option 3) 0.012 Option 4) 0.016
As learnt in
Molarity - the number of moles of solute per litre of solution.
Number of moles of NaCl = 4 x 0.02 = 0.08
Total final volume = 5 L
M = 0.08 / 5 = 0.016 M
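The same arithmetic as a quick Python check:

```python
moles_nacl = 4 * 0.02          # initial volume (L) times initial molarity (mol/L)
final_volume = 4 + 1           # litres, after adding one litre of water
print(moles_nacl / final_volume)   # 0.016 M
```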
Option 1)
0.004
This option is incorrect
Option 2)
0.008
This option is incorrect
Option 3)
0.012
This option is incorrect
Option 4)
0.016
This option is correct
|
# Expectation of half of a binomial distribution
If $X$ is a random variable with binomial distribution $B[2n,p]$, with $p=0.5$, then its mean value is $2np = n$. I would like to calculate the mean value of $X$ given that $X$ is at most $n$, i.e.:
$$\frac{1}{Pr[X\leq n]}\cdot \sum_{k=0}^{n} k \binom{2n}{k} p^k q^{2n-k}$$
To calculate the sum, I used the ideas from this question: Expected Value of a Binomial distribution?
and got:
$$\frac{2np}{Pr[X\leq n]}\cdot \sum_{k=1}^{n} \binom{2n-1}{k-1} p^{k-1} q^{(2n-1)-(k-1)}$$
$$= \frac{2np}{Pr[X\leq n]}\cdot \sum_{j=0}^{n-1} \binom{2n-1}{j} p^{j} q^{(2n-1)-j}$$
If the inner sum were for $j=0,…,(2n-1)$, it would be exactly 1 by the binomial theorem. But it is only for half the range, and by symmetry, its value seems to be 0.5. The probability in the denominator is also 0.5, so we get:
$$\frac{2np}{0.5}\cdot 0.5 = 2np = n$$
But this doesn’t make sense, since the expectation of values that are at most $n$, cannot be $n$!
Where is my mistake? And what is the correct expectation?
Given that $p=q=\frac{1}{2}$,
$$\frac{1}{\mathbb{P}[X\leq n]}\sum_{k=0}^{n}k\binom{2n}{k}\frac{1}{4^n}=\frac{1}{4^n\mathbb{P}[X\leq n]}\sum_{k=0}^{n}k\binom{2n}{k}=\frac{n}{2\mathbb{P}[X\leq n]},$$
but the probability that $X\leq n$ is not $\frac{1}{2}$; it is given by:
$$\mathbb{P}[X=n]+\mathbb{P}[X<n] = \frac{1}{2}+\frac{1}{2\cdot 4^n}\binom{2n}{n},$$
so the wanted expected value is:
$$\frac{n}{1+\frac{1}{4^n}\binom{2n}{n}}.$$
Which by Stirling’s formula is approximately:
$$\frac{n}{1+\frac{1}{\sqrt{\pi n}}} \approx n – \sqrt{\frac{n}{\pi}}$$
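A quick numerical check of the closed form (a small Python sketch; math.comb requires Python 3.8+):

```python
from math import comb

def conditional_mean(n):
    # E[X | X <= n] for X ~ Binomial(2n, 1/2), computed straight from the pmf
    probs = [comb(2 * n, k) / 4 ** n for k in range(n + 1)]
    return sum(k * p for k, p in enumerate(probs)) / sum(probs)

def closed_form(n):
    return n / (1 + comb(2 * n, n) / 4 ** n)

for n in (1, 5, 50):
    print(conditional_mean(n), closed_form(n))   # the two columns agree
```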
|
### Author Topic: Home Buying Etiquette (Read 39861 times)
#### jedikaiti
• Swiss Army Nerd
• Member
• Posts: 3746
• A pie in the hand is worth two in the mail.
##### Re: Home Buying Etiquette
« Reply #60 on: January 05, 2014, 09:39:53 PM »
Having recently bought in the US, I would not DREAM of not having my own buyer's agent. If that agent happened to have a house I liked for sale at the time, and I trusted them to handle the deal responsibly, that's fine. But I would not do a real estate purchase (or sale) without my own agent. If I found a FSBO I liked that wouldn't work with my realtor, well... too bad.
What part of v_e = \sqrt{\frac{2GM}{r}} don't you understand? It's only rocket science!
"The problem with re-examining your brilliant ideas is that more often than not, you discover they are the intellectual equivalent of saying, 'Hold my beer and watch this!'" - Cindy Couture
#### GreenEyedHawk
• Member
• Posts: 2709
• Not hot but SPICY
##### Re: Home Buying Etiquette
« Reply #61 on: January 05, 2014, 09:40:35 PM »
When I helped a friend who was looking at houses, at one place he looked at (and ultimately ended up buying) the current owners stayed while we did the viewing, which felt really uncomfortable and awkward. When my parents sold their place, we always went out when the listing agent called to arrange viewings. We'd go to a movie or out for ice cream. It was the agent working for my parents who escorted potential buyers. As far as I know, this is standard for where I am.
One place I looked at was filled with junk. It seemed to me that the previous owner had passed away in the house and it was listed cheap to move it quickly, so the family members (or whoever) who ended up having to settle those affairs wouldn't have to clean it out. It made it a lot harder to get a feel for the place, its structural soundness or what needed repair, because I could barely get near the walls and could hardly see the floors. The decor was horrendously outdated and the place had not been properly cleaned in some time. If you're going to sell a house, for heaven's sake, at least vacuum! The place had a horrid creepy vibe to it anyways and I decided I wasn't interested in seeing more after only being there about ten minutes.
"After all this time?"
"Always."
• Member
• Posts: 3301
##### Re: Home Buying Etiquette
« Reply #62 on: February 21, 2014, 09:45:53 AM »
I sold my last place and relocated, so I'm currently hunting. I toured a place that I thought had a wonderful floor plan, a great yard, and was in a wonderful subdivision. The problem -- the house had about 3 of those faux candle air fresheners in each room. The realtor waxed poetically about the seller being so motivated.
As I toured the place, I realized that renters had trashed it and been evicted. The range was missing burners. The jetted tub was missing a panel on the side. The screened back door had been shredded. The air fresheners were out to mask a smell. Once my mom ran interference with the agent and I walked (alone) into the master bedroom closet, I realized the air fresheners were covering mold. The house had been partially flooded.
It was an appropriate price if it was in normal move-in condition (i.e. changing paint or replacing bathroom fixtures with brushed nickel) but it was significantly overpriced considering it needed all new flooring and had potential wallboard and structural integrity challenges.
If the seller is that motivated, instead of putting out her Costco lot of Renuzits (air fresheners), she could have painted the house cream, recarpeted and put burners on the range.
|
# Twisted Coil in a Magnetic Field
The idea is as follows:
We have a homogeneous magnetic field that was increased from 0 to Bf. It induced a certain current (that accumulated to a charge Q) in a circular coil of radius a and constant resistivity $$\rho$$. The question is, what would the charge Q' be when the coil is twisted (not strangled), forming an 8 of radii a1 and a2, in terms of Q?
Defennder
Homework Helper
I don't see how charges could accumulate on a conducting coil due to an induced current.
Meir Achuz
Homework Helper
Gold Member
The idea is as follows:
We have a homogeneous magnetic field that was increased from 0 to Bf. It induced a certain current (that accumulated to a charge Q) in a circular coil of radius a and constant resistivity $$\rho$$. The question is, what would the charge Q' be when the coil is twisted (not strangled), forming an 8 of radii a1 and a2, in terms of Q?
The charge accumulated on a capacitor in series would be $$Q[a_1^2-a_2^2]/a^2$$.
Nono, I expressed myself terribly. The idea is ALL the charge that circulated due to i is Q, I mean $$\int i·dt=Q$$
Defennder
Homework Helper
Hmm, it seems that in the latter case you can't just treat the 8-shape as two separate coils. For the current to flow in a particular direction around one of the half-coils, it has to flow in the opposite circular direction around the other. This complicates things somewhat.
The charge accumulated on a capacitor in series would be $$Q[a_1^2-a_2^2]/a^2$$.
The charge is separated by the capacitor.
Charge has to be conserved.
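A minimal worked sketch consistent with the formula above, assuming the wire length (and hence the resistance $$R$$) is unchanged by the twisting, so that $$a = a_1 + a_2$$: the total charge is the flux change divided by the resistance,
$$Q=\frac{\Delta\Phi}{R}=\frac{\pi a^2 B_f}{R},\qquad Q'=\frac{\pi(a_1^2-a_2^2)B_f}{R}=Q\,\frac{a_1^2-a_2^2}{a^2}=Q\,\frac{a_1-a_2}{a_1+a_2},$$
since the two lobes of the figure eight are traversed in opposite senses and their flux contributions partially cancel.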
|
# Morphisms $P \to M$ in the derived category of a dg-category, if $P$ is h-projective
Let $\mathbf A$ be a dg-category. Denote by $\mathsf{C}_{\mathrm{dg}}(\mathbf A)$ the dg-category of right $\mathbf A$-modules, and by $\mathsf{C}(\mathbf A) = Z^0(\mathsf{C}_{\mathrm{dg}}(\mathbf A))$, its underlying category (which is endowed with the projective model category structure). Moreover, set $\mathsf{K}(\mathbf A) = H^0(\mathsf{C}_{\mathrm{dg}}(\mathbf A))$, the homotopy category of modules, and let $\mathsf{D}(\mathbf A)$ be the derived category of $\mathbf A$, that is, the localisation of $\mathsf{K}(\mathbf A)$ (or, equivalently, of $\mathsf{C}(\mathbf A)$) along quasi-isomorphisms. Denote by $\delta \colon \mathsf{K}(\mathbf A) \to \mathsf{D}(\mathbf A)$ the localisation functor. The machinery of model categories tells us that, if $P \in \mathsf{C}(\mathbf A)$ is a cofibrant dg-module, then, for any dg-module $M$, the localisation functor induces an isomorphism $$\mathsf{K}(\mathbf A)(P, M) \xrightarrow{\sim} \mathsf{D}(\mathbf A)(P,M).$$
The question is: is the above result true if $P$ is a h-projective dg-module? By definition, $P$ is h-projective if, whenever $A$ is an acyclic dg-module, the hom-complex $\mathsf{C}_{\mathrm{dg}}(\mathbf A)(P, A)$ is acyclic. Notice that a cofibrant dg-module is also h-projective.
The answer is: yes, if $P$ is h-projective, then the map $$\delta_{P,M} \colon \mathsf{K}(\mathbf A)(P, M) \to \mathsf{D}(\mathbf A)(P,M)$$ is an isomorphism, for any dg-module $M$.
The key argument in the proof is the following: assuming $P$ h-projective, then any quasi-isomorphism $u \colon X \to P$ has a right inverse in $\mathsf{K}(\mathbf A)$. In fact, consider the distinguished triangle in $\mathsf{K}(\mathbf A)$: $$X \xrightarrow{u} P \xrightarrow{j} C(u) \to X[1].$$ Since $u$ is quasi-isomorphism, the cone $C(u)$ is acyclic. $P$ is h-projective, so we have $$\mathsf{K}(\mathbf A)(P, C(u)) = H^0(\mathsf{C}_{\mathrm{dg}}(\mathbf A)(P, C(u))) = 0;$$ in particular, $j=0$ in $\mathsf{K}(\mathbf A)$. Applying the cohomological functor $\mathsf{K}(\mathbf A)(P,-)$ to the above triangle, we immediately find $u' \colon P \to X$ such that $u u' = 1$.
Now, let us prove that $\delta_{P,M} = \delta$ is injective. Let $f, g \colon P \to M$ such that $\delta(f) = \delta(g)$. This means that there exists $u \colon X \to P$ such that $fu=gu$ in $\mathsf{K}(\mathbf A)$. By the above remark, $u$ has a right inverse $u'$ in $\mathsf{K}(\mathbf A)$, and so $f=g$. Finally, let us prove that $\delta$ is surjective. Let $\overline{f} \in \mathsf{D}(\mathbf A)(P, M)$ We may write $\overline{f} = \delta(f)\delta(s)^{-1}$, with $s \colon X \to P$ being a quasi-isomorphism, and $f \colon X \to M$. The above remark gives us a right inverse $s' \colon P \to X$ of $s$. Hence, $\delta(s')$ is the (two-sided) inverse of $\delta(s)$, and we have $$\overline{f} = \delta(f) \delta(s') = \delta(fs'),$$ and we are done.
|
# 9.10. Parsing lines¶
Usually when we are reading a file we want to do something to the lines other than just printing the whole line. Often we want to find the “interesting lines” and then parse the line to find some interesting part of the line. What if we wanted to print out the day of the week from those lines that start with “From “?
From stephen.marquard@uct.ac.za Sat Jan 5 09:14:16 2008
The split method is very effective when faced with this kind of problem. We can write a small program that looks for lines where the line starts with “From “, split those lines, and then print out the third word in the line:
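A minimal sketch of such a program (the filename mbox-short.txt is an assumption; any mailbox-style text file with such "From " lines will do):

```python
fhand = open('mbox-short.txt')           # assumed filename
for line in fhand:
    line = line.rstrip()                 # strip the trailing newline
    if not line.startswith('From '):     # keep only the interesting lines
        continue
    words = line.split()                 # split on whitespace
    print(words[2])                      # the day of the week is the third word
```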
Later, we will learn increasingly sophisticated techniques for picking the lines to work on and how we pull those lines apart to find the exact bit of information we are looking for.
The following code should open a file and read through the lines, splitting them when a line starts with “Hello”, then printing the second word in the line. Watch out for extra pieces of code and indentation.
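One possible solution, sketched under the assumption that the data lives in a file called hello.txt:

```python
fhand = open('hello.txt')                # assumed filename
for line in fhand:
    line = line.rstrip()
    if line.startswith('Hello'):
        words = line.split()
        print(words[1])                  # the second word on the line
```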
|
# Looking for a Nixie clock kit that uses IN-1 tubes
#### xtal_01
Joined May 1, 2016
118
Hey!
I am definitely not an electronics guy ... I just dabble.
Thanks ....
#### Toughtool
Joined Aug 11, 2008
63
Here is a circuit you may consider building on a perf board. The 74141's are no longer available but the Russians still produce a replacement decoder/driver. I bought some from Ebay but haven't built the clock yet. Although they were shipped from Tampa FL, they had all kinds of Russian stamps on the package. Here is an Amazon.com link.
https://www.amazon.com/K155ID1-SN74...dchild=1&keywords=74141&qid=1587444674&sr=8-3
Last edited:
#### xtal_01
Joined May 1, 2016
118
Here is a circuit you may consider building on a perf board. The 74141's are no longer available but the Russians still produce a replacement decoder/driver. I bought some from Ebay but haven't built the clock yet. Although they were shipped from Tampa FL, they had all kinds of Russian stamps on the package. Here is an Amazon.com link.
https://www.amazon.com/K155ID1-SN74...dchild=1&keywords=74141&qid=1587444674&sr=8-3
This might be doable even for a guy like myself. I will have to take a good look at it tomorrow (almost 1:30 AM here .. finishing up some "real" work).
I did contact the guy in Germany about the clock above. I asked if he had a board, kit or even a complete clock. He replied "I'm sorry, no" ... hmmmm ... I was hoping he even had just a board I could buy.
Thanks!
#### Toughtool
Joined Aug 11, 2008
63
I found this "How to build a Nixie clock.PDF", on my computer and though I would send it to you.
|
Area Calculator is an all-in-one calculator that calculates the area of all geometric figures. Based on your selection of the figure, our area calculator will quickly find the area and return the result for you. In addition, you can choose in which square unit you would like to measure it (square meters, square inches, square feet).
Take a look at other related calculators, such as:
## What is an area in math? Area definition
In math, the area is the number of square units needed to cover the surface of a closed figure. As a geometric term, “area” comes from Latin and entered English in the 1560s, meaning an “open or enclosed surface contained within a defined set of limits”.
If you look around, you will find uses for area in many situations. For example, you may want to build a swimming pool, or you may be unsure what carpet size would be perfect for your room. In all of these cases, you can use an area formula to find the measure expressed in square feet, square meters, square inches, etc.
## How to calculate area?
There is a general rule on calculating the area of any shape, so let’s take a look.
When we want to calculate the area of a particular shape, first, we need to place the shape onto an imaginary grid of which every square in the grid equals 1 square unit.
Example: We want to paint the wall, but we don’t know the area of the wall. First, we imagine that the wall is one huge grid with small squares and all you need to do is count how many of these squares are inside of the grid.
Let’s assume we measure it in square meters, and there is a total of 6 squares in the grid. This number indicates that the area of the wall (how much space is contained inside of the wall) equals 6 square meters.
However, the wall has a rectangle shape, but you may wonder what about circular, polygonal or triangular shapes? How do we calculate their area? We are going to cover all of these aspects in the following sections.
## Square area formula
In the previous section, we have explained how the calculation of area works in a nutshell. However, there is no universal formula for all of the shapes. Thus, in this section, let’s find out the formula for calculating the square area.
Square footage of a square = a^{2}
We calculate the area of a square by raising the length of one of its sides to two. Why? Because a square has all four sides of equal length.
Method two: If the diagonal of a square is known, you can calculate the area without knowing the length of any of its sides, using the following formula:
Square area = d^{2} \div 2
Raise the length of its diagonal to two and divide the results by 2.
## Rectangle area formula
We calculate the area of a rectangle by multiplying its width (shorter side) by its length (longer side).
Rectangle area formula = length \times width
Method two: If the diagonal of a rectangle is known, then you can calculate the area by only knowing its width and diagonal:
l^{2} = d^{2} - w^{2}
l =\sqrt{d^{2} - w^{2}}
Rectangle area formula = w \times (\sqrt{d^{2} - w^{2}})
## Triangle area formula
Every triangle has three sides. For the basic approach to calculating its area, we need to know its base and the perpendicular height to that base:
Triangle area = 1 \div 2 \times b \times h
b – triangle’s base
h – perpendicular height
It’s important to note that this formula can be used for calculating the area of any type of triangle.
Let’s find out how to find the area of different types of triangles:
Right-Angled Triangle
Triangle area = 1 \div 2 \times Base \times Height
We see nothing changed; the area formula for right-angled triangles remains the same as the basic formula, and you can read more about it if you head to our Area of a Right Triangle Calculator.
Equilateral Triangle
Triangle area = \frac{\sqrt{3}}{4} \times side^{2}

An equilateral triangle has all three sides equal, so as soon as we know the length of one side, we can measure the area by the formula above.
Isosceles Triangle
Isosceles triangles have two of their sides equal, so if we know two sides of the triangle, we can calculate the area without knowing the length of the third.
Triangle area = \frac{1}{4} \times b\sqrt{4a^{2} - b^{2}}

a – one of the two equal sides

b – base
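A small sketch that cross-checks these triangle formulas against each other; the specific side lengths are just illustrative choices:

```python
import math

def triangle_area(base, height):
    # General formula: 1/2 * base * height
    return 0.5 * base * height

def equilateral_area(side):
    # sqrt(3)/4 * side^2
    return math.sqrt(3) / 4 * side ** 2

def isosceles_area(a, b):
    # a = one of the equal sides, b = base: (b/4) * sqrt(4a^2 - b^2)
    return b / 4 * math.sqrt(4 * a ** 2 - b ** 2)

# Cross-check: an equilateral triangle with side 2 has height sqrt(3),
# so all three formulas should agree (an equilateral triangle is a
# special case of an isosceles one).
side = 2.0
height = math.sqrt(side ** 2 - (side / 2) ** 2)
print(triangle_area(side, height))   # 1.732...
print(equilateral_area(side))        # 1.732...
print(isosceles_area(side, side))    # 1.732...
```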
## Circle area formula
A circle is a bit different shape from the previous geometric figures. A circle does not have sides, bases or height. Instead, we measure it by knowing its radius, diameter or circumference.
We calculate the area of a circle in these 3 ways or use our area calculator:
Square footage of a circle = \pi \times r^{2}
r – the distance between the centre and any point in the circle’s boundary.
Knowing the circle’s diameter:
Area of a circle = \frac{\pi}{4} \times d^{2}
d – the line that touches the two endpoints of the circle and passes through the centre.
Knowing the circle’s circumference:
Area of a circle = \frac{C^{2}}{4\pi}
C – length of the circle’s boundary
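The three circle formulas above are equivalent; here is a small sketch confirming they give the same value for the same circle (the radius 3 is an arbitrary example):

```python
import math

def area_from_radius(r):
    return math.pi * r ** 2

def area_from_diameter(d):
    return math.pi / 4 * d ** 2

def area_from_circumference(c):
    return c ** 2 / (4 * math.pi)

r = 3.0
print(area_from_radius(r))                        # 28.27...
print(area_from_diameter(2 * r))                  # same value
print(area_from_circumference(2 * math.pi * r))   # same value
```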
## Sector area formula
The sector is a part of a circle. It represents space enclosed by two radii and the angle between them. So the space that a sector takes inside a circle is also considered the area of a sector. Every sector has two types: minor and major. The minor sector is the part less than a semi-circle, as opposed to the major sector, which is greater than a semi-circle.
We calculate the area of a sector through these two possible ways or use our area calculator:
Using degrees:
Square footage of a sector = \frac{\theta}{360} \times \pi r^{2}
θ – the angle between two radii
r – radius of the circle
Using radians:

Sector Area = \frac{1}{2} \times r^{2}\theta
θ – the angle between two radii
r – radius of the circle
## Ellipse area formula
An ellipse is not a perfect circle; the distance from the centre to the boundary is not constant. The longest line through the centre, connecting the two farthest points of the ellipse, is called the major axis; the shortest such line is called the minor axis. The area of an ellipse is calculated by multiplying half of the major axis by half of the minor axis, and then by π.
Square footage of an ellipse = \pi \times a \times b
π – PI number
a – semi-major axis (half of the major axis)

b – semi-minor axis (half of the minor axis)
## Trapezoid area formula
In order to find the area of a trapezoid, it’s enough to know the length of its two parallel sides and the distance between them (height). There is also a calculator for calculating trapezoid parameters.
Square footage of a trapezoid = \frac{1}{2} (a + b) h
a – base 1
b – base 2
h – the distance between the two bases
## Area of a parallelogram formula
A parallelogram is a quadrilateral with two pairs of parallel sides. For example, rhombus, square and rectangle are parallelograms. They all have two pairs of parallel lines that enclose the space and form a geometric figure.
We measure the area of a parallelogram by multiplying the length of its base by its height (altitude):
Square footage of a parallelogram = b \times h
b – one of its bases
h – the distance between the bases
Method two: You can calculate the parallelogram area using the length of its diagonals:
Parallelogram area = \frac{1}{2} \times d1 \times d2 \times sin(x)
d1 – diagonal 1
d2 – diagonal 2
x – an angle between the two diagonals
## Area of a rhombus formula
Rhombus is a parallelogram with all equal sides. However, it does differ from a square because the rhombus does not have to have all four right angles between the sides.
We can calculate the area of a rhombus in three different ways or use our area calculator:
Using the length of the base and height:
Area of a rhombus formula = b \times h
b – base
h – height’s length
Through the length of the rhombus’ diagonals:
Rhombus area = \frac{(d1 \times d2)}{2}
d1 – diagonal 1
d2 – diagonal 2
Using the length of one side and the angle:
Rhombus area = side^{2} \times sin(A)
side – one of the sides of a rhombus (it doesn’t matter which because all of them are equal)
A – an angle between the two sides
## Area of a kite formula
Kite is a quadrilateral with four angles, two diagonals and four sides (two pairs of equal adjacent sides). It resembles the rhombus a lot.
There is only one formula for calculating the area of a kite, and it looks the same as the one for rhombus:
Square footage of a kite = \frac{1}{2} \times d1 \times d2
We get the area of a kite by multiplying its two diagonals and dividing that number by 2.
## Pentagon area formula
Pentagon is a two-dimensional polygon with 5 sides.
We calculate the area of a pentagon using any of these two formulas or use our area calculator:
Pentagon area = \frac{5}{2} \times s \times a
s – the length of one of the pentagon’s sides

a – apothem (the distance from the centre to the midpoint of a side)
The area of regular pentagons can also be measured by this alternative formula when only a side is known:
Pentagon area = \frac{1}{4} \times \sqrt{5} \times (5 + 2\sqrt{5}) \times s^{2}
## Area of a hexagon formula
Hexagon is a two-dimensional polygon with six sides and six angles.
We can calculate the area of a regular hexagon either by knowing its side’s length or using the apothem.
First method:
Square footage of a hexagon = (3\sqrt{3} \times s^{2}) \div 2
s – side of the hexagon
Second Method:
Hexagon area = 3 \times a \times s
a – apothem
s – side of the hexagon
## Area of an octagon formula
Octagon is a polygon with 8 sides.
If we want to calculate its area, we first slice it into 8 equal isosceles triangles and then find the area with the following formula:
Square footage of an octagon = 2s^{2} \times (1 + \sqrt{2})
s – the side of an octagon
## Area of an annulus formula
Suppose we draw two concentric circles, that is, circles with the same centre but different radii. The two circles form a ring-like shape called an annulus. Since the annulus depends on the radii of both circles, to calculate its area we need the areas of the smaller and the bigger circle. Once we know them, we simply subtract the area of the smaller circle from that of the bigger circle to get the area of the annulus.
Area of the bigger circle = \pi R^{2}
R – radius of the bigger circle
Area of the inner circle =\pi r^{2}
r – radius of the smaller circle
Therefore, the area of an annulus is: \pi (R^{2} - r^{2})
## Area of a quadrilateral formula
A quadrilateral is a closed shape with four lines. If a quadrilateral has all the lines of equal length, we call it a regular quadrilateral. However, if the lines are of different sizes, it is called an irregular quadrilateral.
We can calculate the area of a quadrilateral by dividing it into two triangles:
Square footage of a quadrilateral = \frac{1}{2} \times d \times (Sum of h)
d – one of its diagonals
Sum of h – the sum of the two heights, i.e. the perpendicular distances from the two remaining vertices to the chosen diagonal
## Regular polygon area formula
A regular polygon is a polygon with equal sides and angles. For example, these are square, regular pentagon, equilateral triangle and more.
Each of them has a special formula for calculating the area, but this is a universal formula for the area of regular polygons:
Regular polygon area = (number of sides × length of one side × apothem) / 2
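A small sketch of the universal formula in code; note that here the apothem is computed from the side length as side / (2·tan(π/n)), which is an extra assumption beyond the formula itself:

```python
import math

def regular_polygon_area(n_sides, side_length):
    # Universal formula: (number of sides * side length * apothem) / 2,
    # with the apothem derived from the side length.
    apothem = side_length / (2 * math.tan(math.pi / n_sides))
    return n_sides * side_length * apothem / 2

# A square with side 4 should give 16; a regular hexagon with side 1
# should match (3*sqrt(3)/2) ≈ 2.598 from the hexagon section above.
print(regular_polygon_area(4, 4))  # 16.0
print(regular_polygon_area(6, 1))  # 2.598...
```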
Besides the area calculators mentioned above, there is another one – the Surface Area of a Cone Calculator. We also have a general tool for polygons, our Polygon Calculator.
|
2015 CMS Winter Meeting
McGill University, December 4 - 7, 2015
Doctoral Prize
[PDF]
YUVAL FILMUS, Technion – Israel Institute of Technology
Analysis of Boolean functions on exotic domains [PDF]
Analysis of Boolean functions is the study of $0/1$-valued functions using spectral and analytic techniques. Lying at the interface of combinatorics, probability theory, functional analysis and theoretical computer science, it has been applied to random graph theory, percolation theory, coding theory, social choice theory, extremal combinatorics, and theoretical computer science.
Traditionally the functions being considered are on the Boolean cube $\{0,1\}^n$, or more rarely on other product domains, usually finite. We explore functions on more exotic domains such as finite groups and association schemes, concentrating on two examples: the symmetric group and the Johnson association scheme (the "slice"), which consists of all vertices in the Boolean cube with a specified weight.
We will survey a few classical results in analysis of Boolean functions on the Boolean cube and their generalizations to more exotic domains. On the way, we will explore questions such as: Which functions on the symmetric group are "dictatorships" (depend on one "coordinate") or "juntas" (depend on a few "coordinates")? Is there a "Fourier expansion" for functions on the slice? Is the middle slice a "representative" section of the Boolean cube?
Joint work with David Ellis, Ehud Friedgut, Guy Kindler, Elchanan Mossel, and Karl Wimmer.
HECTOR PASTEN, Harvard University and the Institute for Advanced Study
The abc conjecture: a review and update [PDF]
I will discuss several aspects of the abc conjecture including classical and new applications, strategies for approaching the problem, and partial results.
|
# Will this be my final Riley Riddle?
This won't be my final one, although the riddle suggests otherwise...
Riddle me this:
My prefix might be two, but dominatės an aching storm;
Though rid the kind of regulär a pirate may infôrm.
My suffix might set off, but to be hönest, is reversed,
As it might be éntirely its own way quite hèådfirst.
My infix is a link you might demañd not in a chain,
But rather only once to thrice in what your šight may gain.
A pale queen would know my name; her voice would be on sāle!
Ironic, for sō frēqùęntły, some deserts have my trail.
What am I?
Much as I've left this riddle for you,
The answer has left itself for you, too.
Partial answers can be posted :)
Hint 1:
Pirates say, "Arrrr!"
You're on your way after you have set off.
And this and that and so much to think about!
This riddle seemed so deserted.
Hint 2:
The word "stomach" sounds like "storm aching" but without the r (and ing). Stomachs also have "abdominals" which sounds like "ab dominating".
Seems like finding out what the symbols might mean is a bit too difficult, so whoever finds that out will earn a $$100$$ rep bounty.. but you might have a chance at it, because I will say one thing:
So frequently $$\to$$ frost queen... if only...
• Are the weird things in characters important? Sep 11, 2018 at 10:30
• @u_ndefined yes, they are. They sort of hint out to something. If you don't understand one part, I can give a hint for that particular "weird thing". Sep 11, 2018 at 10:50
• I'm tempted to post an answer... "No."
– Jafe
Sep 26, 2018 at 3:33
• @jafe hahah, the answer would make sense once you get it. You are staring at more clues than you think :P Sep 26, 2018 at 3:50
Partial (infix):
'w'? link not in chain: URL, once to thrice: "www."
• That was a good attempt, but unfortunately incorrect. Every affix is a word... well, maybe not the prefix (it all depends on how you say it, really), but that's it. In fact, the prefix is a well-known one; if you search "define [prefix]", Google will tell you that it is a prefix :P Sep 11, 2018 at 12:13
You are a
DIACRITIC !
Or perhaps you are about to
"die a critic" meaning this is one of your final Riley riddles. I hope not, although you were last seen a month ago... If it was due to foul play, perhaps they said "die, 'ya critic!"
This obviously explains all of the
My prefix might be two, but dominatės an aching storm; Though rid the kind of regulär a pirate may infôrm.
Your prefix is "DI", a prefix meaning 'two'. We note that the unique letters in you, DIACRT, dominate 'aching storm' as they all appear. Not sure about the pirate bit, but it may explain how to remove the extra letters.
My suffix might set off, but to be hönest, is reversed, As it might be éntirely its own way quite hèådfirst.
Your suffix is "CRITIC", which can set someone off (make them upset), but in reverse, a headfirst critic may be all alone in their views, not shared by the majority.
My infix is a link you might demañd not in a chain, But rather only once to thrice in what your šight may gain.
Your infix is "A", a link (hyperlink HTML code, or anchor), which is not a link in a chain but a link you click a few times to visit a new page or activate a function.
A pale queen would know my name; her voice would be on sāle! Ironic, for sō frēqùęntły,
I was lost on a tangent for awhile with Elsa from Frozen, but the actual answer made me laugh. "Pale Queen" clues a frozen, icy queen, as does 'frequently' which sounds like "frozen queen". Here, the queen is actually the White Witch in Narnia, famous for her winter appearance, and pale also clueing white. She was voiced ("voice" clue) by Tilda Swinton in the 2005 movie. Tilda likely is no stranger to jokes about her name sounding like 'tilde', a common diacritic.
some deserts have my trail.
The two largest deserts on earth are the Antarctic and Arctic, which share the tail, or end, of DIACRITIC (TIC)!
|
# Inexpensive solo cross-country machines?
### Help Support HomeBuiltAirplanes.com:
#### Pilot-34
##### Well-Known Member
Before I got my pilot's license there was a small twin I flew in that had a funnel and tube system in the extreme rear of the plane. On the return flight from Kotzebue to Anchorage I learned how to hold an aircraft extremely steady on a heading.
You see, early in the flight my young pilot friend handed over control of the aircraft to me and said, "Hold it steady while I go pee."
The evil gremlins in my mind took over and I waited for him to get started with his business before kicking the rear of that plane around like a berserk Mexican jumping bean.
Being young and stupid, it took me several cups of coffee out of his gallon-size thermos to realize how he planned on extracting his revenge...
#### One Sky Dog
##### New Member
Mojave for lunch.
HBA Supporter
#### Pops
##### Well-Known Member
HBA Supporter
Log Member
For a VW powered 2 place airplane, the Cygnet is at the top in all around performance. I sat under the wing and talked to the designer one year at OSH for about an hour, nice man. You meet a lot of nice people in aviation.
|
LightCurve classes¶
Defines LightCurve, KeplerLightCurve, and TessLightCurve.
Classes¶
LightCurve([time, flux, flux_err, …]): Generic light curve object to hold time series photometry for one target.

KeplerLightCurve([time, flux, flux_err, …]): Subclass of LightCurve which holds extra data specific to the Kepler mission.

TessLightCurve([time, flux, flux_err, …]): Subclass of LightCurve which holds extra data specific to the TESS mission.

FoldedLightCurve([time, flux, flux_err, …]): Subclass of LightCurve in which the time parameter represents phase values.
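As a rough illustration of how these classes are typically constructed and related, here is a minimal sketch; it assumes the lightkurve package is installed, and keyword names may differ slightly between versions:

```python
# Minimal sketch: build a generic LightCurve and fold it into a
# FoldedLightCurve. The constructor keywords mirror the signatures
# listed above; exact behaviour may vary between lightkurve versions.
from lightkurve import LightCurve

lc = LightCurve(time=[1, 2, 3, 4, 5],
                flux=[1.00, 0.98, 1.01, 0.97, 1.02],
                flux_err=[0.01, 0.01, 0.01, 0.01, 0.01])

# Folding on an (assumed) period returns a FoldedLightCurve, whose
# time column now represents phase values.
folded = lc.fold(period=2.0)
print(type(folded).__name__)
```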
|
## The rectangle A has a (width) and b (height) and another...

tagged by: AAPL

##### This topic has 3 expert replies and 0 member replies

### Top Member

The rectangle A has a (width) and b (height) and another rectangle B has c (width) and d (height). If a/c = b/d = 3/2, what is the ratio of the rectangle A's area to the rectangle B's?

A. 3/2
B. 3/4
C. 9/2
D. 9/4
E. 27/8

The OA is D.

Can I say that, a = 3k, c = 2k, b = 3m and d = 2m, since a, b and c, d are multiples of 3 and 2 respectively. Then the ratio of the area will be,
$$\frac{3k}{2k}:\frac{3m}{2m}=9km:4km=\frac{9}{4}$$
Is there another strategic approach to solve this PS question? Can any experts help, please? Thanks!

### GMAT/MBA Expert

Legendary Member · Joined 14 Jan 2015 · 2667 posts · Followed by 122 members · Upvotes: 1153 · GMAT Score: 770

AAPL wrote:
The rectangle A has a (width) and b (height) and another rectangle B has c (width) and d (height). If a/c = b/d = 3/2, what is the ratio of the rectangle A's area to the rectangle B's?

A. 3/2
B. 3/4
C. 9/2
D. 9/4
E. 27/8

The OA is D.

You could also just pick numbers.

If a/c = 3/2, say a = 3 and c = 2
If b/d = 3/2, say b = 3 and d = 2

If the first rectangle has sides of a and b, or 3 and 3, it will have an area = 3*3 = 9
If the second rectangle has sides of c and d, or 2 and 2, it will have an area = 2*2 = 4.

The ratio = 9/4. The answer is D

_________________
Veritas Prep | GMAT Instructor
Veritas Prep Reviews
### GMAT/MBA Expert
GMAT Instructor · Joined 08 Dec 2008 · 12995 posts · Followed by 1250 members · Upvotes: 5254 · GMAT Score: 770
AAPL wrote:
The rectangle A has a (width) and b (height) and another rectangle B has c (width) and d (height). If a/c = b/d = 3/2, what is the ratio of the rectangle A's area to the rectangle B's?
A. 3/2
B. 3/4
C. 9/2
D. 9/4
E. 27/8
The OA is D.
Can I say that,
a = 3k, c = 2k, b = 3m and d = 2m, since a, b and c, d are multiples of 3 and 2 respectively.
Then the ratio of the area will be,
$$\frac{3k}{2k}:\frac{3m}{2m}=9km:4km=\frac{9}{4}$$
Is there another strategic approach to solve this PS question? Can any experts help, please? Thanks!
Hi AAPL,
Once we know that a = 3k, c = 2k, b = 3m and d = 2m, then...
Area of rectangle A = ab = (3k)(3m) = 9km
Area of rectangle B = cd = (2k)(2m) = 4km
So, the ratio of the rectangle A's area to the rectangle B's = 9km/4km = 9/4
Cheers,
Brent
_________________
Brent Hanneson – Creator of GMATPrepNow.com
### GMAT/MBA Expert
GMAT Instructor · Joined 25 Apr 2015 · 2852 posts · Followed by 18 members · Upvotes: 43
AAPL wrote:
The rectangle A has a (width) and b (height) and another rectangle B has c (width) and d (height). If a/c = b/d = 3/2, what is the ratio of the rectangle A's area to the rectangle B's?
A. 3/2
B. 3/4
C. 9/2
D. 9/4
E. 27/8
We can let a = b = 12.
We can let c = d = 8.
The area of rectangle A is 12 x 12 = 144.
The area of rectangle B is 8 x 8 = 64.
The ratio of the area of rectangle A to that of rectangle B is:
144/64 = 18/8 = 9/4
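For readers who like to verify ratios numerically, a small Python sketch of the same picked-numbers check (the values 12 and 8 are just the ones chosen above):

```python
# Quick numeric sanity check of the answer, using the picked numbers above.
a, b = 12, 12   # width and height of rectangle A
c, d = 8, 8     # width and height of rectangle B

assert a / c == b / d == 3 / 2   # both side ratios are 3:2
ratio = (a * b) / (c * d)        # ratio of the two areas
print(ratio)                     # 2.25, i.e. 9/4 -> answer D
```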
_________________
Scott Woodbury-Stewart
Founder and CEO
[email protected]
|
# variational principle ansatz
|
The variational theorem states that, for a Hermitian operator $H$ with smallest eigenvalue $E_0$, any normalized state $|\psi\rangle$ satisfies $E_0 \le \langle\psi|H|\psi\rangle$. Suppose we are given a Hilbert space and a Hermitian operator over it called the Hamiltonian $H$; ignoring complications about continuous spectra, we look at the discrete spectrum of $H$ and the corresponding eigenspaces of each eigenvalue $\lambda$ (see the spectral theorem for Hermitian operators for the mathematical background). The variational principle then ensures that the expectation value of $H$ in any normalized trial state is always greater than or equal to its smallest eigenvalue. If we cannot find an analytic solution to the Schrödinger equation, this gives a way to estimate the ground-state energy of a system: make an educated guess as to the functional form of the wave function, calculate the energy expectation value, and minimize it with respect to the variational parameters.

Example: the one-dimensional harmonic oscillator. (i) Pick a trial function that resembles the exact ground state, $\psi(x) = A e^{-\alpha x^2}$, with $A = (2\alpha/\pi)^{1/4}$ fixed by the normalization condition. (ii) Calculate $\langle H\rangle = \langle T\rangle + \langle V\rangle$, which gives $\langle T\rangle = \hbar^2\alpha/2m$ and $\langle V\rangle = m\omega^2/8\alpha$. (iii) Minimize $E(\alpha)$ by setting its derivative with respect to the variational parameter equal to zero. The same recipe underlies variational calculations for hydrogen and helium and the Ritz method, a direct method for finding approximate solutions of boundary value problems. The origin of the Hartree–Fock method dates back to the end of the 1920s, soon after the discovery of the Schrödinger equation in 1926; Douglas Hartree's methods were guided by earlier, semi-empirical methods of the early 1920s set in the old quantum theory of Bohr. Some works go further and define state-selective variational principles, smooth functions of a wave-function ansatz's variables designed to target an individual Hamiltonian eigenstate of interest rather than only the ground state.

The same logic appears in statistical mechanics as a variational principle for the free energy. The strategy is to use a problem we can solve to approximate a problem we can't: a hard Hamiltonian is approximated by a "trial Hamiltonian" (also called a variational ansatz) which has the same general flavour as the actual Hamiltonian but is actually solvable. Practically speaking, one starts with a whole family of possible trial Hamiltonians and picks the one whose variational free energy is the smallest; no matter what is chosen for the trial parameters, the variational free energy is always greater than or equal to the actual free energy, so the best guess is the one that brings it as close as possible. A comment about notation: $\langle O\rangle_{\mathrm{tr}}$ denotes the average of an observable $O$ in the trial ensemble, that is, computed with the probability weights (and partition function) of the trial Hamiltonian. Interacting systems are very difficult to solve in general, which is what makes this guess-and-minimize strategy useful; in spin glasses, Guerra's proof that the Parisi ansatz provides a lower bound on the free energy of the Sherrington–Kirkpatrick (SK) model could be taken as offering some support to the validity of the purported solution.

In variational quantum algorithms the prepared quantum state is assessed indirectly through the value of the associated energy. The variational quantum eigensolver (VQE) is a hybrid quantum-classical algorithm that, by design, strives to recover the lowest-energy eigenvalue of a given Hamiltonian by preparing quantum states guided by the variational principle: given a variational ansatz, a classical non-linear optimizer minimizes the measured expectation value by varying the ansatz parameters, iterating until convergence. ADAPT-VQE determines the structure of the ansatz in an adaptive manner, and unitary coupled-cluster ansätze, including the singlet unitary coupled-cluster ansatz for quantum-chemistry simulation studied by Gwonhak Lee and June-Koo Kevin Rhee (KAIST) and the comparison of unitary coupled-cluster ansatz methods by Hickman, Roth and Zhu (University of Maryland), are common choices. Ansätze inspired by the theory of quantum optimal control improve the convergence of variational quantum algorithms for problems such as the Fermi–Hubbard model at half filling, and circuit blocks that respect the U(1) and SU(2) symmetries of the physical system can significantly speed up training and alleviate the gradient-vanishing problem. A VQE with fewer qubits can exponentially increase the bond dimension of a tensor-network variational ansatz on a quantum computer, and Hamiltonian design itself can be formulated as an optimization problem based on the variational principle (Pakrouski, Quantum 4, 315 (2020)). For real-time dynamics, the McLachlan variational principle governs the equations of motion of the variational parameters, with the ansatz automatically generated and dynamically expanded along the time-evolution path so that the "McLachlan distance", a measure of the simulation accuracy, remains below a fixed threshold. For open quantum systems, the P representation of the density matrix can be combined with the variational principle to treat driven-dissipative bosonic fields with arbitrarily large occupation numbers, and the steady-state density matrix of a lattice system can be constructed via a purified neural-network ansatz in an extended Hilbert space with ancillary degrees of freedom (Yoshioka, Nakagawa, Mitarai and Fujii, "Variational quantum algorithm for nonequilibrium steady states").

Tensor-network methods realize the variational principle directly in the thermodynamic limit: the matrix product state ansatz allows an efficient implementation (cubic scaling in the bond dimension) of the time-dependent variational principle, which has been applied to (1+1)-dimensional relativistic quantum field theories (Haegeman, Cirac) and extended to nonuniform lattice systems by confining the nonuniformity to a (dynamically expandable) finite region with fixed boundary conditions, thereby suppressing nonphysical quasiparticle reflections from the boundary of the nonuniform region. A variational ansatz can likewise be formulated for momentum eigenstates of translation-invariant quantum spin chains, and the simplest variational ansatz for an entropic variational principle has been tested against Monte-Carlo measurements. In another direction, a variational principle for large-$N$ multi-matrix models has been developed based on the extremization of non-commutative entropy and applied to the two-matrix model with action $\mathrm{tr}\left[\frac{m^2}{2}(A_1^2+A_2^2) - \frac{1}{4}[A_1,A_2]^2\right]$, which has not been exactly solved. In data modelling, Bishop's variational treatment of principal component analysis addresses one of the central issues in the use of PCA, namely choosing the appropriate number of retained components.

Beyond quantum mechanics, a variational principle has been developed for fractional kinetics based on the auxiliary-field formalism; it is applied to the Fokker–Planck equation with spatio-temporal fractionality, and a variational solution is obtained with the help of the Lévy ansatz (Abe). Variational principles have always played an important role in both theoretical and computational mechanics; standard references include Reddy's Energy Principles and Variational Methods in Applied Mechanics and Nesbet's Variational Principles and Methods in Theoretical Physics and Chemistry. The first variational principle was formulated about 2000 years ago by Hero of Alexandria: if an object is viewed in a plane mirror, the ray traced from the object to the eye, bouncing off the mirror, follows the shortest possible path. Generalized variational mechanics began in the 1950s with the breakthrough works of Reissner on two-field variational principles for elasticity problems, in which the displacement $u_i$ and the stress $\sigma_{ij}$ are considered independent fields. For linear dissipative systems such as viscous flow in a Newtonian fluid or electric current in ohmic devices, the relevant variational principle extends Rayleigh's principle of least energy dissipation. In analysis, the proof of the Ekeland variational principle rests on a device due to Bishop and Phelps, which Brønsted and Rockafellar used to obtain subdifferentiability properties for convex functions on Banach spaces and which Browder applied to nonconvex subsets of Banach spaces. Finally, physical and geometrical investigations of the relationship between horizon thermodynamics and gravitational dynamics suggest that gravity could be an emergent phenomenon; Padmanabhan's theory of emergent gravity, in particular, treats spacetime as an effective macroscopic description of a more fundamental microscopic theory.
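To make the harmonic-oscillator recipe above concrete, here is a small numeric sketch of the Gaussian-ansatz minimization; it assumes units with $\hbar = m = \omega = 1$, so the exact ground-state energy is 0.5:

```python
import numpy as np

# Variational treatment of the 1D harmonic oscillator with a Gaussian trial
# wavefunction psi(x) ~ exp(-alpha * x**2), in units hbar = m = omega = 1.
# With this ansatz the energy expectation value is
#   E(alpha) = <T> + <V> = alpha/2 + 1/(8*alpha),
# and the variational principle guarantees E(alpha) >= E_exact = 0.5.

def energy(alpha):
    return alpha / 2 + 1 / (8 * alpha)

alphas = np.linspace(0.05, 2.0, 2000)
energies = energy(alphas)
best = alphas[np.argmin(energies)]

print(f"best alpha  ~ {best:.3f}")               # close to 0.5
print(f"minimum E   ~ {energies.min():.4f}")     # close to 0.5, the exact value
```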
|
# Does there have to be an addition or subtraction sign between numbers inside of a bracket in order for you to expand the bracket, or can it be a division or multiplication sign? For example, can you expand: 24 (x ÷ 5) or 17 (x xx 8)?
Jun 3, 2017
It can be an addition or subtraction sign, or it can be a division or multiplication sign as well, but the order of operations has to follow PEMDAS, i.e. Parentheses, Exponents, Multiplication and Division, and Addition and Subtraction.
Jun 3, 2017
No. There is a commutative and associative property for both multiplication and addition.
#### Explanation:
That means you can “expand” parenthetical statements by an external operation. In your examples,
$24 \cdot \left(\frac{x}{5}\right)$ can be expanded to $\left(\frac{24}{5}\right) \cdot x$ , or $4.8 \cdot x$ and $17 \cdot \left(x \cdot 8\right)$ expands to $17 \cdot x \cdot 8 = 136 \cdot x$
Similarly, with addition $10 \cdot \left(3 + x\right)$ expands to $30 + 10 \cdot x$ and $25 \cdot \left(x - 4\right)$ expands to $25 \cdot x - 100$
Jun 3, 2017
In order to expand by using the distributive law there has to be an addition or subtraction sign in the bracket.
#### Explanation:
If I understand you to mean the 'distributive law' for expanding, then you will only 'expand' the bracket if there is an addition or subtraction sign in the bracket.
In $5 \left(x + 3\right)$ there are TWO terms inside the bracket, while:
In $5 \left(x \times 3\right)$ there is only ONE term inside the bracket.
Removing the bracket in each case gives the following:
$5 \left(x + 3\right) = 5 x + 15 \text{ but } 5 \left(x \times 3\right) = 5 \left(3 x\right) = 15 x$
The reason for the expanding is that $x$ and $3$ are unlike terms, so they cannot be added, but BOTH still need to be multiplied by $5$
While with $x \times 3$, they can be multiplied together and the product is then multiplied by the $5$
With the $+$ sign, the 5 is multiplied by both the $x$ and the $3$, but with the $\times$ sign, the 5 is multiplied once, because there is already a product inside the bracket.
Remember that there is a multiplication sign between the $5$ and the bracket.
$24 \left(x \div 5\right) = 24 \times \frac{x}{5} = \frac{24 x}{5}$
$17 \left(x \times 8\right) = 17 \times x \times 8 = 136 x$
I hope this helps?
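If you want to double-check expansions like these, a computer algebra system will do it for you; here is a small Python sketch using SymPy (assuming SymPy is installed):

```python
from sympy import symbols, expand

x = symbols('x')

print(expand(24 * (x / 5)))   # 24*x/5
print(expand(17 * (x * 8)))   # 136*x
print(expand(5 * (x + 3)))    # 5*x + 15
print(expand(25 * (x - 4)))   # 25*x - 100
```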
|
# An Inequality with Determinants II
### Solution 1
\displaystyle\begin{align} \Delta &=\left|\begin{array}{cccc} \,1 & 0 & 0 & 0\\ a & b-a & c-a & d-a\\ a^2 & (b-a)(b+a) & (c-a)(c+a) & (d-a)(d+a)\\ \frac{1}{a^2} & \frac{(b-a)(b+a)}{a^2b^2} & \frac{(c-a)(c+a)}{a^2c^2} & \frac{(d-a)(d+a)}{a^2d^2}\end{array}\right|\\ &=\frac{(b-a)(c-a)(d-a)}{a^2} \left|\begin{array}{ccc} \,1 & 1 & 1\\ b+a & c+a & d+a\\ \frac{b+a}{b^2} & \frac{c+a}{c^2} & \frac{d+a}{d^2}\end{array}\right|\\ &=\frac{(b-a)(c-a)(d-a)}{a^2b^2c^2d^2} \left|\begin{array}{ccc} \,1 & 1 & 1\\ b+a & c+a & d+a\\ c^2d^2(b+a) & b^2d^2(c+a) & b^2c^2(d+a)\end{array}\right|\\ &=\frac{(b-a)(c-a)(d-a)}{a^2b^2c^2d^2}\cdot\Delta '; \end{align}
where $\Delta'\;$ is being evaluated further:
\displaystyle\begin{align} \Delta' &=\left|\begin{array}{ccc} 1 & 0 & 0\\ b+a & c-b & d-b\\ c^2d^2(b+a) & d^2(c-b)(ab+ac+bc) & c^2(d-b)(ab+ad+bd)\end{array}\right| \end{align}
It follows that
\displaystyle\begin{align} |\Delta |&=\frac{|(b-a)(c-a)(d-a)(c-b)(d-b)(d-c)|(abc+abd+acd+bcd)}{a^2b^2c^2d^2}\\ &\lt\frac{abc+abd+acd+bcd}{a^2b^2c^2d^2}\\ &=\frac{1}{abcd}\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}\right) \end{align}
because $|(b-a)(c-a)(d-a)(c-b)(d-b)(d-c)|\lt 1.$
### Solution 2
First of all, the required inequality is equivalent to
$\displaystyle \mathbb{D}=\left|\begin{array}{cccc} \,a^2 & b^2 & c^2 & d^2\\ a^3 & b^3 & c^3 & d^3\\ a^4 & b^4 & c^4 & d^4\\ 1 & 1 & 1 & 1 \end{array}\right|\lt abcd\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}\right).$
Note that $\displaystyle \mathbb{D}=-\left|\begin{array}{cccc} \,1 & 1 & 1 & 1\\ a^2 & b^2 & c^2 & d^2\\ a^3 & b^3 & c^3 & d^3\\ a^4 & b^4 & c^4 & d^4 \end{array}\right|.\;$ Set $\displaystyle P(x)=\left|\begin{array}{cccc} \,1 & 1 & 1 & 1\\ x^2 & b^2 & c^2 & d^2\\ x^3 & b^3 & c^3 & d^3\\ x^4 & b^4 & c^4 & d^4 \end{array}\right|.\;$
$P(x)\;$ is a polynomial of degree $4\;$ and $P(b)=P(c)=P(d)=0.\;$ Thus
$P(x)=q(b,c,d)(x-b)(x-c)(x-d)(\alpha x+\beta).$
Indeed, $P(a)=q(b,c,d)(a-b)(a-c)(a-d)(\alpha a+\beta).\;$ Now, since $\mathbb{D}\;$ is symmetric in all four variables, it is easy to show that $q(b,c,d)=(b-c)(b-d)(c-d)\;$ so that
(1)
$P(x)=(b-c)(b-d)(c-d)(x-b)(x-c)(x-d)(\alpha x+\beta).$
On the other hand, in the determinant representation of $P(x)\;$ we get
\displaystyle\begin{align}P(0)&=\left|\begin{array}{cccc} \,1 & 1 & 1 & 1\\ 0 & b^2 & c^2 & d^2\\ 0 & b^3 & c^3 & d^3\\ 0 & b^4 & c^4 & d^4 \end{array}\right|\\ &=b^2c^2d^2\left|\begin{array}{ccc} \,1 & 1 & 1\\ b & c & d\\ b^2 & c^2 & d^2 \end{array}\right|\\ &=b^2c^2d^2(c-b)(d-b)(d-c). \end{align}
Comparing this to (1) gives $\beta=bcd.\;$ Thus,
$P(x)=(b-c)(b-d)(c-d)(x-b)(x-c)(x-d)(\alpha x+bcd).$
Using the symmetry of $\mathbb{D}\;$ again, we obtain $\alpha=bc+bd+cd,\;$ and, therefore,
$\mathbb{D}=-P(a)=(b-c)(d-b)(d-c)(b-a)(c-a)(d-a)(abc+abd+acd+bcd)$
and the required inequality follows.
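As a numeric sanity check of the bound just derived, here is a small sketch; it assumes, as the argument above does, that $a, b, c, d$ are distinct numbers in $(0,1)$ so that $|(b-a)(c-a)(d-a)(c-b)(d-b)(d-c)|\lt 1$:

```python
import numpy as np

# Spot-check |Delta| < (1/(abcd)) * (1/a + 1/b + 1/c + 1/d) for random
# distinct a, b, c, d drawn from (0, 1), with Delta the determinant of
# the rows (1, x, x^2, x^-2).
rng = np.random.default_rng(0)

for _ in range(10000):
    a, b, c, d = rng.uniform(0.01, 0.99, size=4)
    delta = np.linalg.det(np.array([
        [1, 1, 1, 1],
        [a, b, c, d],
        [a**2, b**2, c**2, d**2],
        [a**-2, b**-2, c**-2, d**-2],
    ]))
    bound = (1/a + 1/b + 1/c + 1/d) / (a * b * c * d)
    assert abs(delta) < bound
print("no counterexample found")
```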
### Acknowledgment
The inequality from the book Math Power has been posted at the CutTheKnotMath facebook page by Dan Sitaru. Solution 1 is by Leo Giugiuc; Solution 2 is by Héctor Manuel Garduño Castañeda.
|
## Pacific Journal of Mathematics
### Characters of $p'$-degree in solvable groups.
Thomas R. Wolf
#### Article information
Source
Pacific J. Math., Volume 74, Number 1 (1978), 267-271.
Dates
First available in Project Euclid: 8 December 2004
https://projecteuclid.org/euclid.pjm/1102810457
Mathematical Reviews number (MathSciNet)
MR0470055
Zentralblatt MATH identifier
0371.20007
Subjects
Primary: 20C15: Ordinary representations and characters
#### Citation
Wolf, Thomas R. Characters of $p'$-degree in solvable groups. Pacific J. Math. 74 (1978), no. 1, 267--271. https://projecteuclid.org/euclid.pjm/1102810457
|
## Approximating the N-th Normal Mode Frequency for an N-chain Oscillator (10 minutes)
• If an N-chain oscillator is oscillating at the N-th normal mode at the edge of the first Brillouin zone (that is, when the wavenumber of the envelope function is $k=\frac{\pi}{a}$), the frequency of oscillation can be reasonably approximated using the equation of motion for a single particle in the system.
• The key approximation for this calculation is that the displacement of each molecule in the oscillator is equal in magnitude. Using this approximation, the equation of motion for a particular particle becomes
$$m\ddot{x}=-2\kappa x \; - \; 2\kappa x \; \; .$$
Assuming that the equation describing the particle's motion has the form
$$x(t)=Ae^{i \omega t} \; \; ,$$
this equation can be inserted into the equation of motion to find that
$$\omega=\sqrt{\frac{4\kappa}{m}} \; \; .$$
Have the students test this approximation using the “One Dimensional Oscillator Chain” program. This exercise works quite well as an extension in concluding the Coupled Oscillators and the Monatomic Chain Lab.
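If the simulation program is not at hand, the same check can be done numerically; the sketch below assumes a monatomic chain with periodic boundary conditions and arbitrary values $\kappa = m = 1$:

```python
import numpy as np

# Compare the zone-boundary estimate omega = sqrt(4*kappa/m) against a
# direct diagonalization of an N-atom monatomic chain with periodic
# boundary conditions (N even, so the k = pi/a mode is present).
N, kappa, m = 64, 1.0, 1.0

D = np.zeros((N, N))
for i in range(N):
    D[i, i] = 2 * kappa / m
    D[i, (i + 1) % N] = -kappa / m
    D[i, (i - 1) % N] = -kappa / m

# Eigenvalues of D are omega^2; clip tiny negative round-off before sqrt.
omega = np.sqrt(np.clip(np.linalg.eigvalsh(D), 0, None))
print(omega.max())              # 2.0 (highest normal-mode frequency)
print(np.sqrt(4 * kappa / m))   # 2.0 -- the approximation above
```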
|
# Changeset b512454 for doc/proposals/concurrency
Ignore:
Timestamp:
Oct 19, 2016, 3:32:19 PM (5 years ago)
Branches:
aaron-thesis, arm-eh, cleanup-dtors, deferred_resn, demangler, jacob/cs343-translation, jenkins-sandbox, master, new-ast, new-ast-unique-expr, new-env, no_list, persistent-indexer, resolv-new, with_gc
Children:
ab84e8a
Parents:
a3eaa29
Message:
Location:
doc/proposals/concurrency
Files:
4 edited
Unmodified
Removed
• ## doc/proposals/concurrency/.gitignore
ra3eaa29 concurrency.ps version.aux monitor.tex ext_monitor.tex
• ## doc/proposals/concurrency/Makefile
ra3eaa29 FIGURES = ${addsuffix .tex, \ monitor \ ext_monitor \ } clean : rm -f *.bbl *.aux *.dvi *.idx *.ilg *.ind *.brf *.out *.log *.toc *.blg *.pstex_t *.cf *.glg *.glo *.gls *.ist \ rm -f *.bbl *.aux *.dvi *.idx *.ilg *.ind *.brf *.out *.log *.toc *.blg *.pstex_t *.cf *.glg *.glo *.gls *.ist *.acn *.acr *.alg \${FIGURES} ${PICTURES}${PROGRAMS} ${GRAPHS}${basename ${DOCUMENT}}.ps${DOCUMENT}
• ## doc/proposals/concurrency/concurrency.tex
ra3eaa29 % requires tex packages: texlive-base texlive-latex-base tex-common texlive-humanities texlive-latex-extra texlive-fonts-recommended % inline code �...� (copyright symbol) emacs: C-q M-) % red highlighting �...� (registered trademark symbol) emacs: C-q M-. % blue highlighting �...� (sharp s symbol) emacs: C-q M-_ % green highlighting �...� (cent symbol) emacs: C-q M-" % LaTex escape �...� (section symbol) emacs: C-q M-' % keyword escape �...� (pilcrow symbol) emacs: C-q M-^ % inline code ©...© (copyright symbol) emacs: C-q M-) % red highlighting ®...® (registered trademark symbol) emacs: C-q M-. % blue highlighting ß...ß (sharp s symbol) emacs: C-q M-_ % green highlighting ¢...¢ (cent symbol) emacs: C-q M-" % LaTex escape §...§ (section symbol) emacs: C-q M-' % keyword escape ¶...¶ (pilcrow symbol) emacs: C-q M-^ % math escape $...$ (dollar symbol) Finally, an approach that is worth mentionning because it is gaining in popularity is transactionnal memory\cite{Dice10}. However, the performance and feature set is currently too restrictive to be possible to add such a paradigm to a language like C or \CC\cit, which is why it was rejected as the core paradigm for concurrency in \CFA. \section{Monitors} \subsection{Monitors} A monitor is a set of routines that ensure mutual exclusion when accessing shared state. This concept is generally associated with Object-Oriented Languages like Java\cite{Java} or \uC\cite{uC++book} but does not strictly require OOP semantics. The only requirements is the ability to declare a handle to a shared object and a set of routines that act on it : \begin{lstlisting} \end{lstlisting} \subsection{Call semantics} \label{call} \subsubsection{Call semantics} \label{call} The above example of monitors already displays some of their intrinsic caracteristics. Indeed, it is necessary to use pass-by-reference over pass-by-value for monitor routines. This semantics is important because at their core, monitors are implicit mutual exclusion objects (locks), and these objects cannot be copied. Therefore, monitors are implicitly non-copyable. \\ The problem is to indentify which object(s) should be acquired. Furthermore we also need to acquire each objects only once. In case of simple routines like \code{f1} and \code{f2} it is easy to identify an exhaustive list of objects to acquire on entering. Adding indirections (\code{f3}) still allows the compiler and programmer to indentify which object will be acquired. However, adding in arrays (\code{f4}) makes it much harder. Array lengths aren't necessarily known in C and even then making sure we only acquire objects once becomes also none trivial. This can be extended to absurd limits like \code{f5} which uses a custom graph of monitors. To keep everyone as sane as possible\cite{Chicken}, this projects imposes the requirement that a routine may only acquire one monitor per parameter and it must be the type of the parameter (ignoring potential qualifiers and indirections). \subsection{Data semantics} \label{data} \subsubsection{Data semantics} \label{data} Once the call semantics are established, the next step is to establish data semantics. Indeed, until now a monitor is used simply as a generic handle but in most cases monitors contian shared data. This data should be intrinsic to the monitor declaration to prevent any accidental use of data without its appripriate protection. 
For example, here is a more fleshed-out version of the counter shown in \ref{call}:
\begin{lstlisting}
Recursive mutex routine calls are allowed in \CFA, but if not done carefully they can lead to nested monitor call problems\cite{Lister77}. These problems are a specific instance of the lock-acquiring-order problem. In the example above, the user uses implicit ordering in the case of function \code{bar} but explicit ordering in the case of \code{baz}. This subtle mistake can mean that calling these two functions concurrently will lead to deadlocks, depending on the implicit ordering matching the explicit ordering. As shown on several occasions\cit, there isn't really any solution to this problem; users simply need to be careful when acquiring multiple monitors at the same time.

\subsubsection{Implementation Details: Interaction with polymorphism}
At first glance, interaction between monitors and \CFA's concept of polymorphism seems complex to support. However, it can be reasoned that entry-point locking can solve most of the issues that could be present with polymorphism. First of all, interaction between \code{otype} polymorphism and monitors is impossible since monitors do not support copying. Therefore the main question is how to support \code{dtype} polymorphism. We must remember that monitors' main purpose is to ensure mutual exclusion when accessing shared data. This implies that mutual exclusion is only required for routines that do in fact access shared data. However, since \code{dtype} polymorphism always handles incomplete types (by definition), no \code{dtype} polymorphic routine can access shared data, since the data would require knowledge about the type. Therefore the only concern when combining \code{dtype} polymorphism and monitors is to protect access to routines. With callsite-locking, this would require a significant amount of work since any \code{dtype} routine could have to obtain some lock before calling a routine. However, with entry-point-locking, calling a monitor routine becomes exactly the same as calling it from anywhere else.

\subsection{Internal scheduling} \label{insched}

\subsection{External scheduling} \label{extsched}
As one might expect, the alternative to Internal scheduling is to use External scheduling instead. This method is somewhat more robust to deadlocks since one of the threads keeps a relatively tight control on scheduling. Indeed, as the following examples will demonstrate, external scheduling allows users to wait for events from other threads without the concern of unrelated events occurring.
External scheduling can generally be done either in terms of control flow (ex: \uC) or in terms of data (ex: Go). Of course, both of these paradigms have their own strengths and weaknesses, but for this project control flow semantics were chosen to stay consistent with the rest of the language's semantics. Two challenges specific to \CFA arise when trying to add external scheduling with loose object definitions and multi-monitor routines. The following example shows a simple use of \code{accept} versus \code{wait}/\code{signal} and its advantages.
\begin{center}
condition c;
public:
void f();
void g() { signal}
void h() { wait(c); }
void f() { signal(c)}
void g() { wait(c); }
private:
}
public:
void f();
void g();
void h() { _Accept(g); }
void g() { _Accept(f); }
private:
}
void f(A & mutex a);
void g(A & mutex a);
void h(A & mutex a) { accept(g); }
\end{lstlisting}
While this is the direct translation of the \uC code, at the time of compiling routine \code{f} the \CFA compiler does not already have a declaration of \code{g} while the \uC compiler does. This means that either the compiler has to dynamically find which routines are "acceptable" or the language needs a way of statically listing "acceptable" routines. Since \CFA has no existing concept that resembles dynamic routine definitions or pattern matching, the static approach seems more consistent with the current language paradigms. This approach leads to the \uC example being translated to :
void g(A & mutex a) { accept(f); }
\end{lstlisting}
However, external scheduling is an example where implementation constraints become visible from the interface. Indeed, since there is no hard limit to the number of threads trying to acquire a monitor concurrently, performance is a significant concern. Here is the pseudo code for the entering phase of a monitor :
\begin{center}
\begin{tabular}{l}
\begin{lstlisting}
¶if¶ critical section is free : enter
elif critical section accepts me : enter
¶else¶ : block
\end{lstlisting}
\end{tabular}
\end{center}
For the \code{critical section is free} condition it is easy to implement a check that can evaluate the condition in a few instructions. However, a fast check for \code{critical section accepts me} is much harder to implement, depending on the constraints put on the monitors. Indeed, monitors are often expressed as an entry queue and some acceptor queue as in the following figure :
\begin{center}
{\resizebox{0.5\textwidth}{!}{\input{monitor}}}
\end{center}
There are other alternatives to these pictures, but in the case of this picture implementing a fast accept check is relatively easy. Indeed, simply updating a bitmask when the acceptor queue changes is enough to have a check that executes in a single instruction, even with a fairly large number of acceptors. However, this requires all the acceptable routines to be declared with the monitor declaration. For OO languages this doesn't compromise much since monitors already have an exhaustive list of member routines. However, for \CFA this isn't the case; routines can be added to a type anywhere after its declaration. At this point we must make a decision between flexibility and performance. Many design decisions in \CFA achieve both flexibility and performance, for example polymorphic routines add significant flexibility but inlining them means the optimizer can easily remove any runtime cost.
This approach leads to the \uC example being translated to :
\begin{lstlisting}
accept( void g(mutex struct A & mutex a) )
Note that the set of monitors passed to the \code{accept} statement must be entirely contained in the set of monitors already acquired in the routine. \code{accept} used in any other context is Undefined Behaviour.

\subsubsection{Implementation Details: External scheduling queues}
To support multi-monitor external scheduling means that some kind of entry-queues must be used that is aware of both monitors. However, acceptable routines must be aware of the entry queues, which means they must be stored inside at least one of the monitors that will be acquired. This in turn adds the requirement of a systematic algorithm for disambiguating which queue is relevant regardless of user ordering. The proposed algorithm is to fall back on the monitors' lock ordering and specify that the monitor that is acquired first is the lock with the relevant entry queue. This assumes that the lock acquiring order is static for the lifetime of all concerned objects, but that is a reasonable constraint. This algorithm choice has two consequences: the queue of the highest-priority monitor is no longer a true FIFO queue, and the queue of the lowest-priority monitor is both required and probably unused. The queue can no longer be a FIFO queue because, instead of simply containing the waiting threads in order of arrival, it also contains the second mutex. Therefore, another thread with the same highest-priority monitor but a different lowest-priority monitor may arrive first but enter the critical section after a thread with the correct pairing. Secondly, since it may not be known at compile time which monitor will be the lowest-priority monitor, every monitor needs to have the correct queues, even though it is probable that half the multi-monitor queues will go unused for the entire duration of the program.
• ## doc/proposals/concurrency/version
ra3eaa29 0.4.22 → 0.4.61
|
## College Algebra 7th Edition
We know that interest is given by $I=P\cdot r\cdot t$: $262.50=3500\cdot r\cdot 1$. We solve for $r$: $r=\frac{262.5}{3500}=0.075$, or a 7.5% interest rate.
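A quick check of the arithmetic in Python (a minimal sketch of the simple-interest formula; the variable names are mine):

```python
# Simple interest: I = P * r * t, solved for the rate r.
I, P, t = 262.50, 3500, 1
r = I / (P * t)
print(r)           # 0.075
print(f"{r:.1%}")  # 7.5%
```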
|
# Doing Mahalanobis Metric Learning on a Per-Population Basis?
I have a dataset that consists of pairs $(x,y)$, with:
• $x$ being a high dimensional vector of personal features (e.g height, weight etc) of individuals; the individuals belong to one of three different types of populations: young, middle-aged and old people.
• $y$ is their performance on some task (e.g, scoring a goal), $y\in \{{\pm 1\}}$
Since the populations are inherently different from one another, the same set of features can express itself differently across them. As an illustrative example, in the middle-aged group height could be an advantage (correlated with the positive label), whereas in the old group it could be a disadvantage (correlated with the negative label).
My goal is to learn a task-specific similarity metric. That is, learn a mapping $d: X \times X \rightarrow [0,1]$ s.t $d(x_1,x_2)\approx 0$ if $x_1$ and $x_2$ are both good at scoring goals (regardless of which population they come from).
I am considering using the popular form of Mahalanobis distance learning using similarity and dissimilarity constraints, as suggested by my labeled data (example reference here). To address the fact I have different populations, I thought about learning the matrix $M$ that specifies the metric on a per-population basis.
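Concretely, the setup I have in mind looks something like the sketch below (a minimal illustration only; the feature dimension, matrices and population keys are placeholders of my own rather than learned values):

```python
import numpy as np

# One positive semi-definite matrix per population; identity matrices as stand-ins
# for whatever a metric-learning procedure would actually produce.
M = {"young": np.eye(4), "middle": np.eye(4), "old": np.eye(4)}

def mahalanobis(x1, x2, M_pop):
    """Mahalanobis distance between two feature vectors under one population's metric."""
    d = x1 - x2
    return float(np.sqrt(d @ M_pop @ d))

x1 = np.array([1.80, 75.0, 0.2, 1.0])   # made-up feature vectors
x2 = np.array([1.65, 80.0, 0.9, 0.0])

# Well defined when both individuals share a population...
print(mahalanobis(x1, x2, M["young"]))
# ...but it is unclear which M to use when they do not, which is exactly question 1 below.
```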
My questions:
1. Using this approach, I have a measure of similarity for any two individuals belonging to the same population. BUT: How do I calculate the similarity between two individuals from different populations?
2. Are there any other suggested approaches that might make sense in this scenario?
Any relevant references, articles, reading material etc will be highly appreciated.
|
## Activities: Engagement analytics
mod_engagement
Maintained by Adam Olley
This is the indicator component of the Engagement analytics suite. Make sure to also get the report and block!
11k
484
1
Moodle 2.2, 2.3, 2.4, 2.5
Moodle Engagement Analytics for Moodle 2
* IMPORTANT *
This plugin is useless on its own, you should also get the report and block plugins that are part of the set.
* CREDITS *
Code: Adam Olley <[email protected]>
Code: Ashley Holman <[email protected]>
Concept: Phillip Dawson <[email protected]>
Indicator Algorithms: Phillip Dawson <[email protected]>
### Sets
This plugin is part of set Engagement Analytics.
### Contributors
Adam Olley (Lead maintainer): Developer
Ashley Holman: Developer
Phillip Dawson: Concept & Algorithms
Danny Liu: Developer
### Comments
• Wed, May 21, 2014, 2:03 PM
Hi Adam
We'd like to continue to use this plugin when we upgrade to Moodle 2.6 but the 2.7 version fails the compatibility test. The 2.5 version also fails. Will there be a version for 2.6? If no, we will withdraw it until our December upgrade to 2.7. Many thanks
• Wed, May 21, 2014, 2:23 PM
Hi Caitlyn,
I could look at making a 2.6 version available. Are you able to provide some more detail on how its failing exactly on 2.6 (it may well be the case I'll see the problem straight away when I try it, but doesn't hurt to ask ;) )
• Wed, May 21, 2014, 11:36 PM
Hi Adam,
I am on the same boat as Caitlyn, installed 2.5 version of report and mod of engagement and 2.6 version of block of engagement, but it didn't work in my 2.6.2 moodle env.
The error that I got is:
When I go into Plugins>>Reports>>Engagement Analytics, I get the following error….
Manage indicators
Coding error detected, it must be fixed by a programmer: PHP catchable fatal error
More information about this error
Debug info: Argument 1 passed to report_engagement_renderer::display_indicator_list() must be an instance of plugin_manager, instance of core_plugin_manager given, called in [dirroot]/report/engagement/manage_indicators.php on line 73 and defined
Error code: codingerror
Stack trace:
• line 393 of /lib/setuplib.php: coding_exception thrown
• line 119 of /report/engagement/renderer.php: call to default_error_handler()
• line 73 of /report/engagement/manage_indicators.php: call to report_engagement_renderer->display_indicator_list()
Many thanks,
Bei
• Mon, May 26, 2014, 8:34 AM
Hi all,
Latest version uploaded just now finally supports mod_assign in the assessment indicator. Thanks go to eugeneventer for his pull request.
• Tue, May 27, 2014, 3:09 AM
Hi Adam,
Our Moodle is using Oracle 11g, so I got the following error when I go to course->report-engagement analytics:
Debug info: ORA-00933: SQL command not properly ended
SELECT rawdata
FROM m_engagement_cache
WHERE indicator = :o_param1
AND courseid = :o_param2
AND timestart = :o_param3
AND timemodified > :o_param4
ORDER BY timemodified DESC
LIMIT 1
[array (
'o_param1' => 'assessment',
'o_param2' => 482,
'o_param3' => '1335844800',
'o_param4' => 1401127527,
)]
Error code: dmlreadexception
Stack trace:
line 443 of /lib/dml/moodle_database.php: dml_read_exception thrown
line 271 of /lib/dml/oci_native_moodle_database.php: call to moodle_database->query_end()
line 1122 of /lib/dml/oci_native_moodle_database.php: call to oci_native_moodle_database->query_end()
line 1428 of /lib/dml/moodle_database.php: call to oci_native_moodle_database->get_records_sql()
line 1056 of /lib/dml/oci_native_moodle_database.php: call to moodle_database->get_record_sql()
line 1501 of /lib/dml/moodle_database.php: call to oci_native_moodle_database->get_record_sql()
line 156 of /mod/engagement/indicator/indicator.class.php: call to moodle_database->get_field_sql()
line 132 of /mod/engagement/indicator/indicator.class.php: call to indicator->get_cache()
line 111 of /mod/engagement/indicator/indicator.class.php: call to indicator->get_risk_for_users()
line 94 of /report/engagement/index.php: call to indicator->get_course_risks()
The fix I have put in mod/engagement/indicator/indicator.class.php is:
$rawdata = $DB->get_field_sql('
    SELECT rawdata FROM
        (SELECT rawdata
           FROM {engagement_cache}
          WHERE indicator = ?
            AND courseid = ?
            AND timestart = ?
            AND timemodified > ?
          ORDER BY timemodified DESC)
     WHERE rownum = 1', $params);
Not sure whether you can integrate it into your plugins or not. Just wanted to let you know.
Thanks,
Bei
• Tue, May 27, 2014, 3:12 AM
Adam,
Other than that error, the engagement analytics plugins version 2.6 is working fine in our Moodle 2.6.2 .
• Tue, Jun 24, 2014, 2:25 AM
Thanks for the updates Adam. I now have the mod, report, and block running on Moodle 2.6. I think these plugins are useful and have great potential so thank you for contributing them.
I have a number of dummy accounts enrolled on a course for testing purposes, they have example info on their profile pages and profile pictures (edited by me as an admin) but I've never logged in under those accounts. The course has forums, chats, and wikis on it.
From the sitewide config page: moodle/report/engagement/manage_indicators.php?contextid=1 I see that it only monitors Assessment (presumably pushed grades?), Forum, and Login activity. Are there any plans to include other core Moodle activity modules?
Also, it'd be nice to have some documentation collated somewhere so that it's easier to get started and to understand how it works, e.g. How does it calculate the percentages it shows?
Thanks in advance
• Tue, Jun 24, 2014, 8:03 AM
Hi Matt,
There are no immediate plans for me to make additional indicators right now. That said, there's nothing stopping people from making their own to share, like Dan Marsden did for the attendance module:
https://github.com/danmarsden/moodle-engagementindicator_attendance
As for developer documentation, I did write some up when this was released, you can find it here:
http://docs.moodle.org/dev/report/analytics/api
It includes a couple basic examples of things that I hope are helpful
• Mon, Oct 20, 2014, 7:38 PM
Hi, I am wondering whether there is any more recent documentation available for writing additional plugins. I really need to have an indicator based on downloading (opening) course-related material as well. While this is not strictly engagement, in our case it is an indicator of success, as we use Moodle to host all class handouts and activities as well. Would someone be able to point me in the right direction? Thanks!
• Wed, Apr 1, 2015, 7:57 PM
Hi, I have installed engagement analytics in moodle 2.6 (mod, block, report).
The results for assessment and login are exact, while in the forum section of the report users have either 100% risk (even if they have read posts) or 69% risk that doesn't change no matter whether some students have made many posts or whether I change the settings (e.g. min risk 0.1 (posts, read, etc.), max 0). The problem (standard percentages that don't differentiate between users) exists for all the forum indicators (posts, total posts, replies)... Thanks!
• Thu, Apr 30, 2015, 5:39 AM
good afternoon
I would like you to help me with this error that I get in my Moodle; my Moodle version is 2.8.5.
Capability "report/engagement:manage" was not found! This has to be fixed in code.
line 389 of \lib\accesslib.php: call to debugging()
line 1299 of \lib\adminlib.php: call to has_capability()
line 3639 of \lib\navigationlib.php: call to admin_externalpage->check_access()
line 3671 of \lib\navigationlib.php: call to settings_navigation->load_administration_settings()
line 3671 of \lib\navigationlib.php: call to settings_navigation->load_administration_settings()
line 3622 of \lib\navigationlib.php: call to settings_navigation->load_administration_settings()
line 3471 of \lib\navigationlib.php: call to settings_navigation->load_administration_settings()
line 719 of \lib\pagelib.php: call to settings_navigation->initialise()
line 768 of \lib\pagelib.php: call to moodle_page->magic_get_settingsnav()
line 6592 of \lib\adminlib.php: call to moodle_page->__get()
line 34 of \admin\reports.php: call to admin_externalpage_setup()
• Fri, May 1, 2015, 11:23 AM
@Jose Mesa: That's odd. Did the plugin install correctly? Can you see the capability listed in the report/engagement/db/access.php file?
• Thu, May 7, 2015, 4:49 AM
Adam Olley, I installed the three plugins correctly and there are no errors, but now my problem is how to generate the report; I do not get the Engagement analytics reporting tab. Could you help me with that, please? Thanks.
• Fri, Jun 5, 2015, 10:38 PM
I am running Moodle 2.7 with the Engagement Analytics installed. The Login Activity is not reporting any information, although I know the students are logging in. Is there any way to get login activity added back into the analytics? Although the assessment activity is important, I really would like to know how the login activity is affecting the grades.
• Tue, Jun 23, 2015, 4:20 AM
Any chance this one will make it to the 2.8 branch?
|
# debugging perceptron for digital AND circuit
I was trying to code a single layer perceptron to understand binary AND:
1 1 1
0 1 0
1 0 0
0 0 0
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
int main()
{
int input1, input2;
float weight1 = 0.3, weight2 = 0.4;
int output;
int training1, training2, expectedoutput;
int i;
int j=1;
//TRAINING
for(i=0; i<10000;i++)
{
if(j=1)
{
training1 = 0;
training2 = 1;
expectedoutput = 0;
}
if(j=2)
{
training1 = 1;
training2 = 0;
expectedoutput = 0;
}
if(j=3)
{
training1 = 0;
training2 = 0;
expectedoutput = 0;
}
if(j=4)
{
training1 = 1;
training2 = 1;
expectedoutput = 1;
j=1;
}
output = weight1*training1 + weight2*training2 + 2;
if(output != expectedoutput )
{
weight1 = weight1 + 0.156 * training1 * (expectedoutput - output);
weight2 = weight2 + 0.156 * training2 * (expectedoutput - output);
}
j++;
}
printf("training done\n");
printf("weight1 = %f" "weight2 = %f\n",weight1,weight2);
//TESTING THE PERCEPTRON
for(i=0; i<5 ; i++)
{
scanf ("%d%d", &input1, &input2 );
output = weight1*input1 + weight2*input2;
printf("\n%d\n", output);
}
return 1;
}
It's supposed to input the 4 cases repeatedly, with a learning rate of 0.156 (which I set arbitrarily), and I used the threshold as a weight of 2.
However, after the training the perceptron still doesn't give the expected output. Is my understanding of the perceptron rule wrong? Please help, thank you!
• For debugging the actual code, you probably want to try a different Stack such as Code Review or Cross Validated. The part of this question related to your understanding of the perceptron rule is definitely on topic here on the general AI forum. Welcome to AI! – DukeZhou Aug 27 '17 at 20:30
You have several flaws in your code that will lead to unexpected behavior. I identified the following flaws you need to address. Once you have fixed them, you should output your updated weights during each step of training to see what the learning algorithm actually does and if it is going in the right direction. I will not address pure style aspects (like using a simpler case-statement instead of a series of ifs and stuff like that) and focus on the real errors.
Skipping training case 1
Once you are in training case 4, you set j to 1, expecting to use training case 1 next. But later in your code you increase j by one (j++) and go directly to training case 2, skipping the first one. This means you only run through training case 1 during your first pass of the loop.
Assigning j a value instead of comparing it
if(j=1)
The if statement will always be true, because you do not compare j to 1 but you set j to 1. You basically assign a value and test if the value is true. Correct would be:
if(j==1)
Ignoring bias during test
During the test steps after training you forget to add the bias that you used during training:
output = weight1*input1 + weight2*input2
should actually be:
output = weight1*input1 + weight2*input2 + 2
Otherwise your perceptron behaves differently during training and testing.
Perceptron output must be 1 or 0
Those were all implementation issues. This point actually seems to come from a misunderstanding of perceptrons. A real perceptron can either fire or not, meaning it outputs either 1 or 0, nothing in between. You calculate your output the following way and use the output to compare to the expected output:
output = weight1*training1 + weight2*training2 + 2;
if(output != expectedoutput )
Your output here is a float value. Only in edge cases it will result in 1 or 0. What you actually want to do for a real perceptron is:
if (weight1*training1 + weight2*training2 + 2 >= 0) { output = 1 } else { output = 0 }
if(output != expectedoutput)
After you fixed those errors and studied the output of your learning algorithms, you should be able to get the perceptron to work. If you have any questions, please leave a comment and I will try to help.
Edit:
I don't have a C compiler here, so I quickly implemented your solution with my suggested fixes in Python 3.6 and the results are very close to the correct solution. The final weights obviously have to be -1 and -1 for an AND Gate with bias 2. The learning process gets very close but never reaches -1 and -1 exactly, which leads to the wrong output in the end. This is a very good showcase to illustrate why you prefer a sigmoid neuron instead of a perceptron in modern neural networks. Here is a relevant question from cross-validated concerning the difference of those two types of neurons.
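For reference, here is a rough Python sketch (my reconstruction, not the exact script mentioned above) of the C program with the fixes applied: the assignment-instead-of-comparison bug removed, the four cases cycled in order, the bias included in both phases, and a hard threshold producing 0 or 1. As noted in the edit, with the bias fixed at 2 the weights keep adjusting and the final truth table may still not match AND exactly.

```python
# (x1, x2, expected AND output)
cases = [(0, 1, 0), (1, 0, 0), (0, 0, 0), (1, 1, 1)]
w1, w2, bias, rate = 0.3, 0.4, 2.0, 0.156

def fire(x1, x2):
    # Perceptron step activation: fire (1) when the weighted sum reaches the threshold.
    return 1 if w1 * x1 + w2 * x2 + bias >= 0 else 0

for step in range(10000):
    x1, x2, target = cases[step % 4]        # cycle through all four training cases
    out = fire(x1, x2)
    if out != target:                       # perceptron learning rule
        w1 += rate * x1 * (target - out)
        w2 += rate * x2 * (target - out)

print("weights:", w1, w2)
for x1, x2, target in cases:
    print(x1, x2, "->", fire(x1, x2), "expected", target)
```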
• I did the changes as suggested and indeed the learning process tends to 1. However i have a few questions. 1) why do the weights tend to -1 and -1? if i input 1 1, the output is -1*1 + -1*1 +2 =0. But the answer should be 1 according to AND truth table. It seems that the output is still not correct? 2) could someone explain or provide a link to the importance of learning rate here? i realize that the weights tend more to -1 if i use a small rate like 0.00015. and it gets less accurate when i use a higher learning rate. How do i determine the correct learning rate to use? Thank you! – Sheep Aug 28 '17 at 16:47
• It tends towards 0 because 0 (or above) is the threshold for the perceptron to fire. Firing means "output 1" and not firing means "output 0". Using 0 is kind of arbitrary here, but that's what's usually used for perceptrons so I followed this convention. The learning rate just tells you how fast you are learning in each steps. A slower learning rate will require more training cycles but might get you closer to the "perfect" values in the end. There are mechanisms to determine a good constant or even dynamic learning rate, but in the beginning it is mostly trial and error. Hope that helps! – Demento Aug 28 '17 at 21:43
|
## Linear lattice gauge theory [PDF]
C. Wetterich
Linear lattice gauge theory is based on link variables that are arbitrary complex or real $N\times N$ matrices. This contrasts with the usual (non-linear) formulation with unitary or orthogonal matrices. The additional degrees of freedom correspond to massive particles. We discuss a limit in parameter space where linear lattice gauge theory becomes equivalent to the standard formulation. We argue that the continuum limit of linear lattice gauge theory may be a useful setting for an analytic description of confinement. The running gauge coupling corresponds to the flow of the minimum of a "link potential". This minimum occurs for nonzero values $l_0$ in the perturbative regime, while $l_0$ vanishes in the confinement regime.
View original: http://arxiv.org/abs/1307.0722
|
# Factoring Integers_Part 2
## Factoring Integers - Part 2
So in Factoring Integers_Part 1 we saw how we can use the difference of two squares to factor, although the method works best when the factors are close together. We have also seen how we can use a pre-multiplier to help find factors even when they are not close together.
By using the pre-multiplier we are finding solutions to equations of the form:
• $x^2-y^2=n$
• $x^2-y^2=2n$
• $x^2-y^2=3n$
• $x^2-y^2=4n$
• etc.
So really we are solving $x^2-y^2{\equiv}0$ (mod n), which is to say $x^2{\equiv}y^2$ (mod n). That suggests that we are interested in squares modulo n. The concept of "Squares mod n" is so important we actually give them a name of their own - they're called "Quadratic Residues" or QRs for short. They turn up many, many times in number theory.
So let's look at a few of them. Here are a few Quadratic Residues (QRs) modulo 5959, along with their factorisations:
| a | $a^2$ (mod 5959) | Factors | |
|------|------|----------------|---|
| 77 | -30 | -1, 2, 3, 5 | X |
| 386 | 21 | 3, 7 | Y |
| 1145 | 45 | 3, 3, 5 | |
| 1962 | -70 | -1, 2, 5, 7 | Z |
| 2779 | -23 | -1, 23 | |
| 4252 | -102 | -1, 2, 3, 17 | |
| 5397 | 17 | 17 | |
We're pretending -1 is a prime,
because we are factoring both
positive and negative numbers,
and we need the -1 to complete
the factorisation.
Now, if any of these quadratic residues were themselves a square then, as we saw in Factoring Integers_Part 1, we'd be done. But they aren't.
The clever observation is this.
Look at lines X, Y, and Z. These have been chosen carefully because between them, the primes in the factorisations of the QRs all turn up an even number of times. That means the product of the QRs is a square.
$(-30)\times{(21)}\times{(-70)}$
$=(-1{\times}2{\times}3{\times}5){\times}(3{\times}7){\times}(-1{\times}2{\times}5{\times}7)$
$=2^2\times{3^2}\times{5^3}\times{7^2}$
$=(2\times{3}\times{5}\times{7})^2$
$=210^2$
Of course, the QRs themselves are squares of the numbers in the first column. So multiply the numbers in the first column:
$77{\times}386{\times}1962=58314564=5749$ (mod 5959)
So $5749^2=210^2$ when working modulo 5959.
Now we have manufactured exactly what we want - two squares that are equal mod n. That means that (5749-210)*(5749+210) is a multiple of 5959, and we'd expect to get a factorisation. Unfortunately, when we try to get the factorisation we find that 5749+210=5959, which is what we're trying to factor. So we've failed.
Bother.
OK, let's try again. Here's our table,
and this time let's combine lines A, B,
C, and D: Again, these have been chosen
so that the primes in the factorisations
of the QRs all turn up an even number of
times.
| a | $a^2$ (mod 5959) | Factors | |
|------|------|----------------|---|
| 77 | -30 | -1, 2, 3, 5 | A |
| 386 | 21 | 3, 7 | |
| 1145 | 45 | 3, 3, 5 | B |
| 1962 | -70 | -1, 2, 5, 7 | |
| 2779 | -23 | -1, 23 | |
| 4252 | -102 | -1, 2, 3, 17 | C |
| 5397 | 17 | 17 | D |
Now we get:
• $(77{\times}1145{\times}4252{\times}5397)^2=(2{\times}3{\times}3{\times}5{\times}17)^2$ (mod 5959)
• $1833^2=1530^2$ (mod 5959)
That gives us the factorisation:
We might look at this in a little more detail. We have a solution to $x^2\equiv{y^2}$ (mod n), so that means we get:
• $x^2-y^2{\equiv}0$ (mod n)
• $x^2-y^2=kn$ for some k
• $(x+y)(x-y)=kn$ for some k
That means the prime factors of kn appear in the numbers x+y and x-y. We hope that the prime factors of n are split between these two numbers. So we look for prime factors in x+y and n, and the easy way to do that is to take the gcd of x+y and n.
• 1833-1530= 303
• gcd(303,5959)=101
• 1833+1530=3363
• gcd(3363,5959)=59
So we're done!
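We can check the arithmetic with a short Python sketch (the variable names here are chosen for illustration); it verifies the congruence and then extracts the factors with gcd, exactly as in the steps above:

```python
from math import gcd

n = 5959
x, y = 1833, 1530                  # from combining rows A, B, C and D

assert (x * x - y * y) % n == 0    # x^2 ≡ y^2 (mod n)
print(gcd(x - y, n))               # 101
print(gcd(x + y, n))               # 59
print(101 * 59 == n)               # True
```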
So if we take lots of squares mod n and factor them completely, perhaps we can then find some combination of them which, when multiplied together, gives us the congruence we need.
The challenges now are:
• find lots and lots of QRs (quadratic residues) that we can factor completely,
• find a way to combine them to give us a square on the right hand side.
In Factoring Integers_Part 3 we look at one way of solving the first problem - there is more than one.
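Before then, for readers who want to experiment, here is a rough end-to-end sketch of the idea in Python. It is my own illustration (Dixon-style naming, a small fixed factor base, and a brute-force subset search instead of the proper linear algebra over GF(2)), so it is only meant for small n:

```python
from itertools import combinations
from math import gcd, isqrt

def exponents(m, base):
    """Exponent vector of m over base (-1 listed first); None if m is not smooth over the base."""
    e = [0] * len(base)
    if m < 0:
        e[0], m = 1, -m
    for i, p in enumerate(base):
        if p == -1:
            continue
        while m % p == 0:
            e[i] += 1
            m //= p
    return e if m == 1 else None

def find_relations(n, base, want):
    """Collect pairs (a, exponent vector) where a^2 mod n (centred) factors over the base."""
    rels, a = [], isqrt(n)
    while len(rels) < want:
        r = a * a % n
        if r > n // 2:
            r -= n                      # negative representative, hence the -1 "prime"
        e = None if r == 0 else exponents(r, base)
        if e is not None:
            rels.append((a, e))
        a += 1
    return rels

def combine(n, base, rels):
    """Search for a subset whose residues multiply to a square, then split n with gcd."""
    for size in range(1, len(rels) + 1):
        for subset in combinations(rels, size):
            total = [sum(col) for col in zip(*(e for _, e in subset))]
            if any(t % 2 for t in total):
                continue                # product of the QRs is not a perfect square
            x = 1
            for a, _ in subset:
                x = x * a % n
            y = 1
            for p, t in zip(base, total):
                if p != -1:
                    y = y * pow(p, t // 2, n) % n
            for d in (gcd(x - y, n), gcd(x + y, n)):
                if 1 < d < n:
                    return d, n // d
    return None

n = 5959
base = [-1, 2, 3, 5, 7, 11, 13, 17, 19, 23]
print(combine(n, base, find_relations(n, base, 12)))   # a factor pair such as (59, 101)
```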
|
Journal article
### Measurement of azimuthal asymmetries in neutral current deep inelastic scattering at HERA
Abstract:
The distribution of the azimuthal angle of charged and neutral hadrons relative to the lepton plane has been studied for neutral current deep inelastic ep scattering using an integrated luminosity of 45 pb-1 taken with the ZEUS detector. The kinematic range is 100
### Authors
Journal:
European Physical Journal C
Volume:
51
Issue:
2
Pages:
289-299
Publication date:
2007-07-05
DOI:
EISSN:
1434-6052
ISSN:
1434-6044
URN:
uuid:08a54400-f796-49e0-ab58-5223eeb754b6
Source identifiers:
109937
Local pid:
pubs:109937
Language:
English
|
After doing so, the next obvious step is to take the square roots of both sides to solve for the value of x. Always attach the ± symbol when you take the square root of the constant.
The number underneath the square root symbol is called the radicand, and the symbol itself is also called a radical symbol or radix. The principal square root function (usually just referred to as "the square root function") maps the set of nonnegative real numbers onto itself; in geometrical terms, it maps the area of a square to its side length. Only the square roots of square numbers are rational: ⅔ is an example of a rational number, whereas √2 is irrational because it cannot be written as a ratio of two integers. The square root of 24, for instance, is the quantity q such that q × q = 24; it can be represented in radical form as well as in decimal form, and a calculator that simplifies radicals will return 3*sqrt(11) for sqrt(99). There are two common ways to simplify radical expressions, depending on the denominator.
Estimating a square root by hand usually means squaring guesses: 6.7 squared gives 44.89, which is 0.11 below 45, while 6.71 squared comes out only a little over 45, so √45 lies between the two. Estimating higher nth roots, even if using a calculator for intermediary steps, is significantly more tedious, which is why approximation methods such as the tangent-line (linear) approximation are often used instead of tedious work with decimals; in that method, dx and dy represent the changes in x and y for the tangent line and approximate the changes Δx and Δy of the function near the known value.
The same ideas carry over to code. If you pass an int to a function with a known prototype taking a double argument (such as sqrt), the compiler will produce the conversion code required, and the same holds the other way round. For sqrt itself: if the argument is positive infinity, the result is positive infinity; if the argument is NaN or negative, the result is NaN; and if the argument is positive or negative zero, the result is the same as the argument. By contrast, a statement like int cents = (int)(100 * price + 0.5); is about rounding a value to the nearest cent, which is a different question from taking square roots.
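As a small illustration of the squaring-a-guess idea, here is a bisection sketch for √45 (my own code, not from the original page):

```python
# Squeeze sqrt(45) between two guesses by repeatedly squaring the midpoint.
target = 45
lo, hi = 6.0, 7.0            # since 6^2 = 36 < 45 < 49 = 7^2
for _ in range(30):
    mid = (lo + hi) / 2
    if mid * mid < target:
        lo = mid
    else:
        hi = mid
print(lo)                    # ~6.7082; note 6.7^2 = 44.89 is 0.11 below 45
```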
|
# Conservation of Linear Momentum
In my Classical Dynamics book, I am reading about the topic alluded to in the title of this thread.
Here is an excerpt that has provided me with confusion:
"The total linear momentum $\vec{p}$ of a particle is conserved when the total force on it is zero.
Note that this result is derived from the vector equation $\vec{p} = \vec{0}$, and therefore applies for each component of the linear momentum. To state the result in other terms, we let $\vec{s}$ be some constant vector such that $\vec{F} \cdot \vec{s} = \vec{0}$ independent of time. Then $\vec{p} \cdot \vec{s} = \vec{F} \cdot \vec{s} = \vec{0}$ or, integrating with respect to time, $\vec{p} \cdot \vec{s} = c$, which states that the component of linear momentum in the direction in which the force vanishes is constant in time."
The part in bold is particularly confusing. Could someone help, please?
## Answers and Replies
BruceW
Homework Helper
yeah, that is very weird. It looks almost like they are using ##\vec{s}## as an arbitrary vector. So in other words, imagine ##\vec{s}## is any arbitrary (but constant with time) vector, then ##\vec{F} \cdot \vec{s} = 0## Yeah, also, it should be 0 (a scalar) not a vector, since the dot product of two vectors is a scalar.
I think there is something in the definition of a linear vector space that says if the dot product of a vector ##\vec{F}## with any arbitrary vector in the vector space ##\vec{s}## is zero, then the vector ##\vec{F}## must be the zero vector. (In other words, it is just another way of saying that ##\vec{F}## is the zero vector).
Chestermiller
Mentor
What it is trying to say (not too well) is that the change in the component of momentum in the direction perpendicular to the applied force is zero.
D H
Staff Emeritus
You misread. Your book (apparently Marion & Thornton) says
"To state the result in other terms, we let ##\vec s## be some constant vector such that ##\vec F \cdot \vec s = 0## independent of time. Then ##\dot{\vec p} \cdot \vec s=\vec F \cdot \vec s = 0## or, integrating with respect to time ##\vec p \cdot \vec s = c## which states that the component of linear momentum in the direction in which in the force vanishes is constant in time."
You missed the derivative of momentum, ##\dot{\vec p}##.
BruceW
Homework Helper
What it is trying to say (not too well) is that the change in the component of momentum in the direction perpendicular to the applied force is zero.
Ah, yeah. It could mean that. So if we choose a specific ##\vec{s}## for which ##\vec{F} \cdot \vec{s}=0## then this means for that specific vector ##\vec{s}##, we have: ##\vec{p} \cdot \vec{s} = c## (constant with time). So for example, if there are zero forces in the 'x' direction, then the momentum in the 'x' direction is constant.
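As a quick numerical illustration of that last point (the numbers below are arbitrary, just a sketch): if the force has no x-component, then p·s stays fixed for s pointing along x, even though the other momentum components change.

```python
import numpy as np

s = np.array([1.0, 0.0, 0.0])          # constant direction with F·s = 0
F = np.array([0.0, 3.0, -2.0])         # force has no component along s
p = np.array([5.0, 1.0, 4.0])          # initial momentum
dt = 0.01

for _ in range(1000):                  # integrate dp/dt = F over 10 seconds
    p = p + F * dt

print(p, np.dot(p, s))                 # p changes, but p·s is still 5.0
```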
|
So I've been meaning to write something for a while now, but on the other hand, I really hate when people write content for sites just to fill the space. I feel like doing that is, at a level, immoral, so I won't be doing that. An alternative to that…
|
# Study of B Meson Production in p+Pb Collisions at root s(NN)=5.02 TeV Using Exclusive Hadronic Decays
Abstract : The production cross sections of the B+, B0, and B0s mesons, and of their charge conjugates, are measured via exclusive hadronic decays in pPb collisions at the center-of-mass energy sqrt(s_NN) = 5.02 TeV with the CMS detector at the CERN LHC. The dataset used for this analysis corresponds to an integrated luminosity of 34.6 inverse-nanobarns. The production cross sections are measured in the transverse momentum range between 10 and 60 GeV/c. No significant modification is observed compared to proton-proton perturbative QCD calculations scaled by the number of incoherent nucleon-nucleon collisions. These results provide a baseline for the study of in-medium b quark energy loss in PbPb collisions.
Document type:
Journal article
Physical Review Letters, American Physical Society, 2016, 116, pp. 032301 〈10.1103/PhysRevLett.116.032301〉
Domain:
http://hal.in2p3.fr/in2p3-01187903
Contributor: Sylvie Flores <>
Submitted on: Friday, August 28, 2015 - 08:25:40
Last modified on: Thursday, May 10, 2018 - 02:00:50
### Citation
V. Khachatryan, M. Besançon, F. Couderc, M. Dejardin, D. Denegri, et al.. Study of B Meson Production in p+Pb Collisions at root s(NN)=5.02 TeV Using Exclusive Hadronic Decays. Physical Review Letters, American Physical Society, 2016, 116, pp. 032301 〈10.1103/PhysRevLett.116.032301〉. 〈in2p3-01187903〉
### Metrics
Record views
|
News
# Teradyne Announces Fourth-Quarter 2002 Results
January 20, 2003 by Jeff Shepard
Teradyne Inc. (Boston, MA) reported sales of $333.6 million for the fourth quarter of 2002, and a net loss on a generally accepted accounting principles (GAAP) basis of $423.8 million, or $2.31 per share. The pro-forma net loss for the fourth quarter of 2002 was $36.5 million, or $0.20 per share, before a valuation allowance on deferred tax assets, product inventory write downs, restructuring charges, asset impairments, product divestitures, and the impact of accelerated depreciation. The GAAP net loss includes a one-time, non-cash tax charge for the reversal of the opening balance of Teradyne's net deferred tax asset of $280 million.
"The combination of a weak economy, weak demand for technology products and the uncertain world situation overwhelmed the recovery we had begun to see in the first half of 2002," said George Chamillard, Teradyne chairman and CEO. "Unfortunately, none of those negative factors has changed as we enter 2003. Therefore, our guidance is for sales in the first quarter to be between $310 and $340 million, about flat with the last two quarters. We expect to sustain a loss of between $0.25 and $0.33 per share, before any special items, and assuming no tax benefit from the losses."
|
Combine full-length and 3' RNAseq, is it possible?
2
0
Entering edit mode
26 days ago
jgarces ▴ 20
Hi there,
Hi there,
I'm facing a vital dilemma and I need some advice, please. Up to now, I've processed some samples according to a 3'-based RNAseq protocol... but currently I have the option to process the new ones with a full-length protocol (which will, theoretically, give me more information).
I guess, according to the paper I attached, it's not feasible (or correct) to directly compare the final count matrices... so I should realign my BAMs to a custom reference containing only the 3' ends of each gene. Do you know if there's already a way to do this? Or is there any study that has already done this? (I've found nothing.)
Beyond technical aspects, what's your view about using two different (very different) protocols? Maybe it would be better to use the same one for the entire project?
Thanks a lot. Best.
ExperimentalDesign RNAseq • 121 views
0
Entering edit mode
To clarify, it can be done: you could attempt batch-effect correction, assuming you have comparable time points/experimental replicates. Would I recommend it? Absolutely not; as others mention below, the headache involved in deconvoluting the technical effects from the real biological effects would not be worth it at all. A major question is: why do you suddenly want more information? If you are simply repeating the same experiment with full-length transcript information to investigate alternative splicing or alternative promoter usage, then analyzing the two datasets separately (3' vs full-length) and then comparing them is totally OK. Merging them together for analysis like DEG would not be fun.
2
Entering edit mode
26 days ago
ATpoint 50k
Definitely use the same within the same project and make sure all batches you ever produce and plan to analyse together have replicates of all involved experimental groups to avoid confounding. Using different kits for the same running project is one of the worst sins in experimental design I could think of. This is nothing that can be corrected in silico, unless you have like half of the samples with kit A, and the second half with kit B, with the above mentioned replicates of all groups in both "batches". Even then it is suboptimal, don't do it. Kit is a major confounder in any NGS experiment.
1
Entering edit mode
26 days ago
In my humble experience, any difference in the protocols or computational methods would result in bias in the counts. I don't recommend it.
|
# Problem with \setbox
I wrote the following box, which contains my name displayed vertically:
\setbox0\vbox{\hbox{M}\hbox{a}\hbox{t}\hbox{t}\hbox{e}\hbox{o}}
I'd like to display it twice in a line, so I wrote:
\line{\hss \box0 \hss \box0 \hss}
But there is a problem: \box0 appears only once! I see only one copy of my name!
Instead, if I write
\line{\hss \vbox{\hbox{M}\hbox{a}\hbox{t}\hbox{t}\hbox{e}\hbox{o}} \hss \vbox{\hbox{M}\hbox{a}\hbox{t}\hbox{t}\hbox{e}\hbox{o}}\hss}
I get the desired output.
What's wrong with the use of \setbox0 or \box0?
(All is done under plain TeX.)
\box also clears the box register. Use \copy instead.
\line{\hss \copy0 \hss \copy0 \hss}
I would use \hfill instead of \hss. Then TeX will throw an overfull \hbox warning, if the place is not sufficient.
A centered version can be achieved via \halign:
\setbox0\vbox{\halign{\hfil#\hfil\cr M\cr a\cr t\cr t\cr e\cr o\cr}}
\line{\hfill \copy0 \hfill \copy0 \hfill}
\bye
## Smaller space between letters
The following example uses different methods to reduce the space between the letters. The first boxes 0, 2, 4 (even numbered boxes smaller than ten are scratch boxes for local assignments) keep the distance between the baselines constant. Box 0 is the unmodified version. Box 2 shrinks the \baselineskip according to egreg's comment. The extreme is in box 4, where the maximum letter height is measured with the result that the two "t"s are in touch.
The boxes 6 and 8 keep the distance between the letters constant. Because of \baselineskip=0pt, TeX switches to set \lineskip instead. It's default value is 1pt. Box 8 finally does not leave any space between the letters.
\def\test{\halign{\hfil##\hfil\cr M\cr a\cr t\cr t\cr e\cr o\cr}}
\setbox0\vbox{\test}
\setbox2\vbox{%
\test
}
\setbox4\hbox{atteo}
\setbox4\vbox{%
\baselineskip=\ht4
\lineskiplimit=0pt
\test
}
\setbox6\vbox{%
\baselineskip=0pt
\test
}
\setbox8\vbox{%
\baselineskip=0pt
\lineskip=0pt
\test
}
\line{\hfill\copy0 \hfill\copy2 \hfill\copy4 \hfill\copy6 \hfill\copy8 \hfill}
\bye
• Thank you! And if I want to leave less space between letters? What is the equivalent of \raise that can be used in vertical mode? – User Jun 7 '15 at 21:50
• @Matteo \vbox{\advance\baselineskip-2pt\halign... – egreg Jun 7 '15 at 21:51
\box empties the box register; you need \copy0, not \box0.
|
# How do you apply Bayes’ Rule to medical testing?
The probability of colorectal cancer can be given as 0.3%. If a person has colorectal cancer, the probability that the hemoccult test is positive is 50%. If a person does not have colorectal cancer, the probability that he still tests positive is 3%.
What is the probability that a person who tests negative does not have colorectal cancer?
To solve this problem, we’ll draw and label an appropriate tree diagram. Then we’ll apply Bayes’ Rule to the problem. Look at the information given in the problem. If
C is the event “person has colorectal cancer”
+ is the event “the hemoccult test is positive”
– is the event “the hemoccult test is negative”
we know that
P(C) = 0.003
P(+ | C) = 0.5
P(+ | C′) = 0.03
This suggests the following tree diagram:
Knowing that the sum of the probabilities from one point on the tree should add to 1, we can finish the tree diagram as follows:
The probability we are looking for is P(C′ | −). Notice that the tree diagram has P(− | C′), but not the reverse conditional probability that we are looking for. This is a sign that we need to use Bayes' Rule. Let's find the appropriate form of Bayes' Rule, starting from the relationship between the conditional probabilities and then solving for P(C′ | −).
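In symbols (written out here from the events defined above), the relationship is
$P(C' \mid -)\,P(-) = P(- \mid C')\,P(C')$
and solving for the probability we want gives
$P(C' \mid -) = \dfrac{P(- \mid C')\,P(C')}{P(-)}.$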
This is Bayes’ Rule for this problem. Now we are ready to use the tree diagram. P(- | C ′ ) and P(C ′ ) are both labeled on the tree diagram. We can calculate P(-) by following the branches on the tree diagram (multiply) that lead to a negative result, and then summing up the products from these branches.
Putting these values into Bayes’ Rule gives
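Numerically, with the probabilities from the tree diagram,
$P(-) = P(- \mid C)\,P(C) + P(- \mid C')\,P(C') = (0.5)(0.003) + (0.97)(0.997) = 0.0015 + 0.96709 = 0.96859$
and therefore
$P(C' \mid -) = \dfrac{P(- \mid C')\,P(C')}{P(-)} = \dfrac{0.96709}{0.96859} \approx 0.9985.$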
This means that if you test negative, the likelihood that you do not have colorectal cancer is 99.85%. The test is quite good at ruling out the disease.
|
# Q2 Harder question In a chemistry exam the mean mark is 55 and the variance is 24. The marks are normally
###### Question:
Q2 Harder question. In a chemistry exam the mean mark is 55 and the variance is 24. The marks are normally distributed. (a) Find the proportion of candidates who score less than 48. Give your answer correct to 3 decimal places. (b) Candidates who score over 60 are awarded a prize. There are 50 candidates taking the exam. How many get a prize? Give your answer correct to the nearest whole number.
#### Similar Solved Questions
##### Question 4: (7 Marks) A landscape contractor kept track of how long it takes to...
Question 4: (7 Marks) A landscape contractor kept track of how long it takes to cut the lawn in the centre part of the University during the summer. The time varies from week to week. The cutting times are as follows: x = cutting minutes = {56, 90, 88, 58, 72, 65, 58, 65}. The contractor is intereste...
##### Less than a month ago, Jennifer began working as an RN at a regional hospital in...
Less than a month ago, Jennifer began working as an RN at a regional hospital in a small town in a predominantly rural area. Until she moved to the area with her husband, she lived in a large city several hours away, where she was an RN on the staff of a large metropolitan hospital. Although Jennife...
##### Zinc forms the cation, Zn2+. A mystery anion X is represented as X2. Given this information,...
Zinc forms the cation, Zn2+. A mystery anion X is represented as X2. Given this information, what is the molecular formula of the salt it forms with zinc? (Chemical formula use subscript numbers, but for this question any required numbers may be formatted normally. Also, do not insert any spaces bet...
##### 1202 coppar pun ot specilic U4G How muel Tueut Teostha pan{b) the water
1202 coppar pun ot specilic U4G How muel Tueut Teos tha pan {b) the water...
##### PN 200 Fundamentals of Nursing II CASE STUDY: URINARY TRACT INFECTION You are working in an...
PN 200 Fundamentals of Nursing II CASE STUDY: URINARY TRACT INFECTION You are working in an extended care facility when Maria Zippo's daughter brings her mother in for a week stay while she goes on vacation. Mrs. Zippo is a 69 year-old widow with a 4 day history of dysuria, back pain, incontinen...
##### Please explain how this question should be done. Let F be a function from set S to set T, F: S-->T, and let C, D be subsets of T. Prove that F^(-1)(C-D) is a subset of F^(-1)(C) - F^(-1)(D).
Please explain how this question should be done. Let F be a function from set S to set T, F: S-->T, and let C, D be subsets of T. Prove that F^(-1)(C-D) is a subset of F^(-1)(C) - F^(-1)(D)....
##### If a group does not formally vote during a meeting: a-the meeting is a waste of time
If a group does not formally vote during a meeting: a-the meeting is a waste of time. b-the leader should summarize the group's consensus after each point. c- the meeting is most effective.d- the leader should make everyone stay until a formal vote is taken.B is the answer...
##### First order rate constant (1/sec) was experimentally determined as 2.00E-5, 1.80E-5, 9.00E-6 and 2.00E-6 for corresponding temperatures of 66 °C, 55 °C, 44 °C and 26 °C. Calculate the activation energy in kJ/mole. Universal gas constant, R = 8.314 J/mole-K. Include your calculations as attachment
First order rate constant (1/sec) was experimentally determined as 2.00E-5, 1.80E-5, 9.00E-6 and 2.00E-6 for corresponding temperatures of 66 °C, 55 °C, 44 °C and 26°C. Calculate the activation energy in KJ/mole. Universal gas constant, R = 8.314 J/mole-K. Include your calculations as at...
##### If the blood pressure in the unobstructed artery of Exercise 37 is $16 \mathrm{kPa}$ gauge (about $120 \mathrm{mm}$ of mercury, the unit commonly reported by doctors), what will it be at the clot? (Note: Blood's density is $1.06 \mathrm{g} / \mathrm{cm}^{3} .$ )
If the blood pressure in the unobstructed artery of Exercise 37 is $16 \mathrm{kPa}$ gauge (about $120 \mathrm{mm}$ of mercury, the unit commonly reported by doctors), what will it be at the clot? (Note: Blood's density is $1.06 \mathrm{g} / \mathrm{cm}^{3} .$ )...
##### Imagine you are in an open field where two loudspeakers are set up and connected to the same...
What is the shortest distance you need to walk forward to be at a point where you cannot hear the speakers?...
##### QUESTIONFind the 29 th derivative of f(x) =! +r27 ; 30: +27128'21+ (2712 3 27
QUESTION Find the 29 th derivative of f(x) =! +r27 ; 30: +271 28' 21+ (271 2 3 27...
##### Objective Knowledge Check Question / A geophysicist measures the pressure in a borehole. The pressure is...
Objective Knowledge Check Question / A geophysicist measures the pressure in a borehole. The pressure is 7.416 10' Pa. What is the pressure in megapascals? Write your answer as a decimal...
##### Thc following is A list of soven movics and their 'ratings: March %f the Penguins 6126/P National Treasure: Book of Secrets PG Mamma Mia PG-13 Sex and the City There Will Be Blood The 40-Year-Old Virgin R Showgirls NC-17Find the modal film rating __ b: Find the median film rating: Explain why it is inappropriate to calculate a mean film rating_
Thc following is A list of soven movics and their 'ratings: March %f the Penguins 6126/P National Treasure: Book of Secrets PG Mamma Mia PG-13 Sex and the City There Will Be Blood The 40-Year-Old Virgin R Showgirls NC-17 Find the modal film rating __ b: Find the median film rating: Explain why ...
##### A 1-m³ tank containing air at 10°C and 350 kPa is connected through a valve to another tank containing 3 kg of air at 35°C and 150 kPa. Now the valve is opened, and the entire system is allowed to reach thermal equilibrium with the surroundings, which are at 20°C. Determine the volume of the second tank and the final equilibrium pressure of air.
A 1-m³ tank containing air at 10°C and 350 kPa is connected through a valve to another tank containing 3 kg of air at 35°C and 150 kPa. Now the valve is opened, and the entire system is allowed to reach thermal equilibrium with the surroundings, which are at 20°C. Determine the volume of the se...
##### Question 1: Find the remaining trigonometric ratios if sin € = 3,0 < 0 < z (30 points)Question 2: Find the remaining trigonometric ratios if, cos x = (30 points)T < X <
Question 1: Find the remaining trigonometric ratios if sin € = 3,0 < 0 < z (30 points) Question 2: Find the remaining trigonometric ratios if, cos x = (30 points) T < X <...
##### Xk 7. Find the radius of convergence and the interval of convergence for (16 pts.) k...
xk 7. Find the radius of convergence and the interval of convergence for (16 pts.) k +7 k=1 din 2m...
##### A 730 Ω resistor and a 2200 Ω resistor are connected in series with a 24 V battery. Part A: What is the voltage across the 2200 Ω resistor? Express your answer using two significant figures.
A 730 Ω resistor and a 2200 Ω resistor are connected in series with a 24 V battery. Part A: What is the voltage across the 2200 Ω resistor? Express your answer using two significant figures.
find ||v-w||...
##### Evaluate the integral. (Use C for the constant of integration:)2V 49 + e dx
Evaluate the integral. (Use C for the constant of integration:) 2V 49 + e dx...
##### Question 7.10 Which of the graphs in the figure below are planar? Justify your answers.
Question 7.10 Which of the graphs in the figure below are planar? Justify your answers....
##### Find $d y / d x$ $f(x)=x^{3} \sin x \cos x$
Find $d y / d x$ $f(x)=x^{3} \sin x \cos x$...
##### Expand Your Critical Thinking 24-02 a-d Ana Carillo and Associates is a medium-sized company located near...
Expand Your Critical Thinking 24-02 a-d Ana Carillo and Associates is a medium-sized company located near a large metropolitan area in the Midwest. The company manufactures cabinets of mahogany, oak, and other fine woods for use in expensive homes, restaurants, and hotels. Although some of the work ...
##### 2. A 2.00-kg ball was dropped from a height of 15.0 meters. It rebounded to a height of 13.5 meters. a) What is the velocity immediately before hitting the ground? b) What is the velocity immediately after hitting the ground? c) How much energy was lost during the collision? Draw a sketch and show all of your work. You must use E1 and E2 and clearly label the reference point, initial positions and final positions. Refer to the video for the proper procedure. Take a picture or scan your work and upload to...
2. A 2.00-kg ball was dropped from a height of 15.0 meters. It rebounded to a height of 13.5 meters. a) What is the velocity immediately before hitting the ground? b) What is the velocity immediately after hitting the ground? c) How much energy was lost during the collision? Draw a sketch and show all...
##### Potessium matal and chlonne gas read combination reaction DroducIonic compound: What = the correci balanced equabon fr th neeolet2 K(s) + Cl2lg)KCI(s)K2(s) - Ciz(9)KCI(s)K(s) ~ Cllg) _ KCIs)Kls) - Ciz(9) KCI(s)Moving another question mll save this response:
Potessium matal and chlonne gas read combination reaction Droduc Ionic compound: What = the correci balanced equabon fr th neeolet 2 K(s) + Cl2lg) KCI(s) K2(s) - Ciz(9) KCI(s) K(s) ~ Cllg) _ KCIs) Kls) - Ciz(9) KCI(s) Moving another question mll save this response:...
##### Evaluate the given integral and check your answer. $\int\left(4 x^{3}-9 e^{x}+\frac{8}{x}-5\right) d x$
Evaluate the given integral and check your answer. $\int\left(4 x^{3}-9 e^{x}+\frac{8}{x}-5\right) d x$...
##### Six E. coli mutants were isolated. The activity of the enzyme beta-galactosidase produced by the cells was...
Six E. coli mutants were isolated. The activity of the enzyme beta-galactosidase produced by the cells was measured when the cells were grown in medium supplemented with different carbon sources. Put your answer into the right-hand column of the table. Glycerol Lactose Lactose + Glucose ...
##### Describe how Six Sigma first developed and evolved over time. When did it enter the health...
Describe how Six Sigma first developed and evolved over time. When did it enter the health care field and what has the impact been since then? response in 200 words...
##### Consider the linear transformation R : R2 R2 defined byT1 T2_T1 T2i.e., the reflection transformation. Show that the standard basis vectors are eigenvectors of the reflection transformation and state their corresponding eigenvalues.
Consider the linear transformation R : R2 R2 defined by T1 T2_ T1 T2 i.e., the reflection transformation. Show that the standard basis vectors are eigenvectors of the reflection transformation and state their corresponding eigenvalues....
##### How do you simplify \frac { 4} { - 2+ 6}?
How do you simplify \frac { 4} { - 2+ 6}?...
##### Which compound below contains an ester functional group?OHCHz-CH-CH2-CH}CH;CHz-0-CHz-CH;H-C-0-CHz-CH;cH;- ~H2-€ H}CH}-C-OH
Which compound below contains an ester functional group? OH CHz-CH-CH2-CH} CH;CHz-0-CHz-CH; H-C-0-CHz-CH; cH;- ~H2-€ H} CH}-C-OH...
##### Test tre claim that tra Fropontion Decol Wno 0*n cats "gificance Ievellarger90;.atreo.oiThe null and altemative hyFotres} ouldte:Ho: H M*6Hu: p 0.9 Ho:p H: 0.9 Hi:pHu: p 0.9 Ho:p 0.9 Ho:P H: 0.9 H:p / 0.9 H:PThe Fes: i5:Go-tailecefc--ailed rght-tailed6372d oeaample 600 Feople; 959ownecThe zes: statiszicdecimaldedcalMnduuthinle: Peject tra null hypothesi Fal to rejecttnenull hypo-hesi;Aalje
Test tre claim that tra Fropontion Decol Wno 0*n cats "gificance Ievel larger 90;.atreo.oi The null and altemative hyFotres} ouldte: Ho: H M*6 Hu: p 0.9 Ho:p H: 0.9 Hi:p Hu: p 0.9 Ho:p 0.9 Ho:P H: 0.9 H:p / 0.9 H:P The Fes: i5: Go-tailec efc--ailed rght-tailed 6372d oe aample 600 Feople; 9...
|
a good set solve ?
Okay, so I used two ways to do this problem. The first was a custom Fibonacci: instead of f(a) = f(a-1) + f(a-2), I turned it into f(a) = f(a-1) + f(a-2) + 1. This should make sure that it meets the requirements, and it did, albeit only for the first subtask. Next I used a method that should definitely meet the requirement, by having the output look like this: input 1 outputs 1; input 2 outputs 1 11; input 3 outputs 1 11 111; input 4 outputs 1 11 111 1111. If the outputs are like this, there should be no way that the output doesn't meet the requirements, and yet I got WA for both subtasks... weird. Anyway, here is my submission.
asked 13 Jun, 16:26 1★flaze07 153●6 accept rate: 23%
Please use the problem code in the title of the question while asking a question. For example, one possible title could be: "Help in solving the problem GOODSET". Also, it's good to provide the link to the problem too. Thanks for asking the question and accepting the answer :) (14 Jun, 19:58) admin ♦♦0★
I agree that the title is not optimal, but at least the submission was linked, which shows the problem in question, so it wasn't too bad at all. (15 Jun, 01:40) algmyr6★
yeah, I always put the problem link when I ask this kind of question (15 Jun, 07:45) flaze071★
You missed the constraint that all elements should be in the range 1 to 500.
answered 13 Jun, 19:13 6★algmyr 212●7 accept rate: 25%
welp, guess I need to pay more attention to constraint huh (13 Jun, 21:17) flaze071★
Fibonacci grows very quickly; you probably want something that doesn't grow exponentially fast. This would be a problem even if the problem statement didn't state that the numbers should be between 1 and 500, as the 100th Fibonacci number is $354224848179261915075$ and doesn't even fit inside a ulong. A small hint: (hidden spoiler content)
answered 13 Jun, 19:37 272●3 accept rate: 44%
gotcha, then all I need is to start it off with 10, or any number that is bigger than one. edit: nvm, it is not that I can just start with numbers of any kind (13 Jun, 21:18) flaze071★
The important thing is to not start with small numbers. (13 Jun, 21:22)
There is a more beautiful way to do it than that. One hint: parity (14 Jun, 02:27) algmyr6★
ok, nevermind, I decided to just output 400 until 500 and it works, yeah I guess I know why, because 400 + 401 is already 801 (14 Jun, 08:16) flaze071★
Great! Btw about Algmyr's hint, you could just have used odd numbers =p (14 Jun, 19:31)
well... didn't think of that (15 Jun, 07:44) flaze071★
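For reference, here is a minimal sketch (Python) of the two constructions mentioned in the comments: printing all odd numbers, or a run of numbers starting at 400. It assumes the task is, for each of T test cases, to print N distinct integers in the range 1 to 500 such that no element equals the sum of two other elements; the exact I/O format used below is an assumption, not taken from the problem page.

import sys

def good_set(n):
    # Odd numbers: the sum of any two odd numbers is even, so it can never
    # equal another element of this all-odd set. Needs n <= 250 to stay <= 499.
    return [2 * i + 1 for i in range(n)]
    # Alternative from the comments: n consecutive numbers starting at 400 also
    # work, because the smallest pair sum (400 + 401 = 801) already exceeds 500.
    # return list(range(400, 400 + n))

def main():
    data = sys.stdin.read().split()
    t = int(data[0])
    out = []
    for i in range(1, t + 1):
        n = int(data[i])
        out.append(" ".join(map(str, good_set(n))))
    print("\n".join(out))

if __name__ == "__main__":
    main()

Either construction works for the same reason: the sum of any two chosen elements can never land back inside the set.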
|
Q5: What is Tom's displacement if he went from a position of 10 meters to a position of 4 meters?
Multiply(-19/29)(11y)
Please help. Please answer with explanation. Show all work. I will give brainiest. What is m∠BXF and m∠CXE.
Out of the following choices, where is the insignificant digit in 0.09040? ten- thousanths tens thousadnths tenths
The genre of writing that is often referred to as a rich mix of different ideas, techniques, and methods is called? A) fiction B) creative non-fiction C) sketch fiction D) poetry A plot consists of a clear beginning, middle, and end that offers a climax T or F The vindictive narration that the creature in Frankenstein uses an example of: A) drama B) fiction C) style D) alliteration ____ poetry was particularly popular during the 16th, 17th, and 18th centuries in Europe A) descriptive
Which of the following statements is true about work hour regulations for 14 and 15-year-olds? They can work up to 30 hours during a school week They can only work outside school hours They can work up to 3 hours during school days They can start working at 6 a.m. in the morning
An athlete at the gym holds a 1.5 kg steel ball in his hand. His arm is 70 cm long and has a mass of 4.0 kg . a. What is the magnitude of the torque about his shoulder if he holds his arm straight out to his side, parallel to the floor? b. What is the magnitude of the torque about his shoulder if he holds his arm straight, but 60 ∘ below horizontal?
Help me Solve solve this please right answer I’ll give brainlist to.
How are proteins distinguished from each other?
Which is the best example of historical inquiry
What are the safety lifestyle before using computer
What defines whether or not organisms belong to the same species?
What actions did the government take that increased tensions with the Native Americans of Texas?
A given chemical reaction is endergonic. Select the best match for the following statements. a. The free energy of the reactants is less than the free energy of the products. i. True ii. False iii. Not enough information to determine b. An enzyme will make the reaction exergonic. i. True ii. False iii. Not enough information to determine c. The reaction is nonspontaneous. i. True ii. False iii. Not enough information to determine d. The ΔG of the reaction is less than 0. i. True ii. False i
An outfielder throws a ball vertically upward with a velocity of 30 meters per second. Its distance from the ground after t seconds is approximately equal to 30t - 5t² meters. How many seconds will it take the ball to reach its maximum distance from the ground? 15 6 1.5 3
Which of these states was NOT among the half dozen that joined South Carolina in seceding within just six weeks?a. Alabamab. Mississippic. Floridad. Missourie. Texas
What does crude oil do to us humans when we burn it and create stuff out of it? is it unhealthy?bonus question: if the impact on humans are bad. How bad is the effect on a person with health issues for instance: Asthma?
If h = 3.7 then what is 0.8 + h?
Do you know when art was created? answer the question with full reasearched answer
A mole of anything is how many of that thing?
|